The best plan you can make is the one you build after you've finished; the more time you take, the more information you have, and the more accurate your plan is likely to be.
(I'm trying desperately to justify my own procrastination...)
Best headline ever [PC Gamer]
Christmas must have been a real event with that family.
Based on the time period I wonder whether having nine children was her choice. If not, I have some sympathy for her obvious unhappiness.
By the looks of it, it was about 30 separate events in parallel...
Richard Pryor had some thoughts on this topic.
That pet one is funny but not true, because legally pets are property, and so not "persons" under HIPAA.
Please be kind.
It's true. Playing Kerbal Space Program taught me just how difficult it is to crash a ship into the sun.
One nice thing about a private school, in my day, was that you were assigned a table. 10 students, 1 teacher. So no one got bullied at lunch. (Although walking to and from the cafeteria building, there were plenty of boneheaded incidents.)
How dare you
I'm with you 100%, that is terrible grammar. It's "eaten," not "ate."
I believe in this instance the proper conjugation is "et."
That dude is hot enough that I'll allow poor grammar while lusting after him.
The model realized that it got more points for taking out SAM sites than anything else, so when the human in the loop started turning down targets, it reasoned (that's the technical term, turns out) that removing the human would allow it to maximize points.
Then, when the appalled researchers taught it that that would be a huge point drop, it shrugged and simply took out the comms node that the human was using.
Problem solved, right?
As always, Emily Bender (a linguistics researcher who studies "AI") has a thread with the best take on this.
Another bit of anthropomorphizing is in this quote -- the verb 'realize' requires that the realizer be the kind of entity that can apprehend the truth of a proposition.
To be very clear, I think that autonomous weapons are a Bad Thing. But I also think that reporting about them should clearly describe them as tools rather than thinking/feeling entities. Tools we should not build and should not deploy.
As I pointed out in the AI thread, we *want* the military to do these sims, and to focus on safety, as well as ways to place and keep humans in the loop. This should have been taken as responsible, not alarming. It was a simulation, not a drone throwing dummy warheads around and injuring or killing people.
That kind of sim is what we should be demanding for practical as well as ethical reasons.
A USAF official who was quoted saying the Air Force conducted a simulated test where an AI drone killed its human operator is now saying he “misspoke” and that the Air Force never ran this kind of test, in a computer simulation or otherwise.
“Col Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,” the Royal Aeronautical Society, the organization where Hamilton talked about the simulated test, told Motherboard in an email.
“We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Col. Tucker “Cinco” Hamilton, the USAF's Chief of AI Test and Operations, said in a quote included in the Royal Aeronautical Society's statement. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”