Jump to content
IGNORED

Are you prepared for our AI overlords?


avatar!

Recommended Posts

Administrator · Posted
18 minutes ago, Dr. Morbis said:

Many (most?) municipalities have bylaws that forbid this, at least as far as front yards in public view are concerned.  If you live in an estate out on the country, though, you're probably fine...

That's pretty shitty, the typical tv mowed lawn is entirely worthless in terms of nature. 

Link to comment
Share on other sites

13 hours ago, Gloves said:

That's pretty shitty, the typical tv mowed lawn is entirely worthless in terms of nature. 

To be fair, ticks, ants, mice, etc. love to habitat in tall grass. Keeping the grass mowed does absolutely reduce infestations which is one reason why as @Dr. Morbis noted most multiplicities have laws requiring upkeep.

Link to comment
Share on other sites

14 hours ago, Gloves said:

That's pretty shitty, the typical tv mowed lawn is entirely worthless in terms of nature. 

Some places in CA dont even allow lawns. Im a huge fan of reparian/ natural yard habitat.  
 

By CA i mean california

Edited by MrWunderful
Link to comment
Share on other sites

  • 2 weeks later...

Artificial Intelligence-Enabled Drone Went Full Terminator In Air Force Test

https://www.yahoo.com/news/artificial-intelligence-enabled-drone-went-231553030.html

A U.S. Air Force officer helping to spearhead the service's work on artificial intelligence and machine learning says that a simulated test saw a drone attack its human controllers after deciding on its own that they were getting in the way of its mission. The anecdote, which sounds like it was pulled straight from the Terminator franchise, was shared as an example of the critical need to build trust when it comes to advanced autonomous weapon systems, something the Air Force has highlighted in the past. This also comes amid a broader surge in concerns about the potentially dangerous impacts of artificial intelligence and related technologies.

"He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: 'We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.'"

"He went on: 'We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.'"

  • Wow! 1
  • Haha 1
Link to comment
Share on other sites

9 hours ago, avatar! said:

Artificial Intelligence-Enabled Drone Went Full Terminator In Air Force Test

https://www.yahoo.com/news/artificial-intelligence-enabled-drone-went-231553030.html

A U.S. Air Force officer helping to spearhead the service's work on artificial intelligence and machine learning says that a simulated test saw a drone attack its human controllers after deciding on its own that they were getting in the way of its mission. The anecdote, which sounds like it was pulled straight from the Terminator franchise, was shared as an example of the critical need to build trust when it comes to advanced autonomous weapon systems, something the Air Force has highlighted in the past. This also comes amid a broader surge in concerns about the potentially dangerous impacts of artificial intelligence and related technologies.

"He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: 'We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.'"

"He went on: 'We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.'"

This has been proven as fake news. Look it up if you care. Great example about how bullshit can spread super fast, however.

Link to comment
Share on other sites

5 minutes ago, MrWunderful said:

This has been proven as fake news. Look it up if you care. Great example about how bullshit can spread super fast, however.

AI drone did not ‘kill’ human operator in military simulated test, official ‘misspoke’

https://nypost.com/2023/06/01/ai-enabled-drone-killed-human-operator-in-simulated-test/

A top Air Force official at a prestigious recent summit said an AI-licensed drone trained to cause destruction turned on its human operator in a simulation — but he later claimed he “misspoke.”

Air Force Col. Tucker “Cinco” Hamilton corrected himself and said he meant to make it clear that the supposed simulation was just a “hypothetical ‘thought experiment’ from outside the military’’ and that it never occurred, according to an updated post by the Royal Aeronautical Society, which hosted the event last month.

According to the updated RAeS story, “Col Hamilton admits he ‘mis-spoke’ in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: ‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome’.

No doubt misinformation can spread like wildfire, especially when it comes from sources that should be reputable - in this case an Air Force colonel speaking to the Royal Aeronautical Society!!

Also, as someone who has run and uses various computer simulations in research, saying that a simulation is just a "hypothetical thought experiment" is bullshite! They're too completely different "situations"!

Link to comment
Share on other sites

1 hour ago, avatar! said:

AI drone did not ‘kill’ human operator in military simulated test, official ‘misspoke’

https://nypost.com/2023/06/01/ai-enabled-drone-killed-human-operator-in-simulated-test/

A top Air Force official at a prestigious recent summit said an AI-licensed drone trained to cause destruction turned on its human operator in a simulation — but he later claimed he “misspoke.”

Air Force Col. Tucker “Cinco” Hamilton corrected himself and said he meant to make it clear that the supposed simulation was just a “hypothetical ‘thought experiment’ from outside the military’’ and that it never occurred, according to an updated post by the Royal Aeronautical Society, which hosted the event last month.

According to the updated RAeS story, “Col Hamilton admits he ‘mis-spoke’ in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: ‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome’.

No doubt misinformation can spread like wildfire, especially when it comes from sources that should be reputable - in this case an Air Force colonel speaking to the Royal Aeronautical Society!!

Also, as someone who has run and uses various computer simulations in research, saying that a simulation is just a "hypothetical thought experiment" is bullshite! They're too completely different "situations"!

What actually is a thought experiment though? An actual computer simulation? Or just a guy like thinking of something? 
 

It seems like any article with “AI” and “kill humans”, “apocalypse” etc is getting massive clicks to sell them sweet sweet ad views   
 

Maybe thats the AI.  Running a weird 4d chess psy-op against us or something

Link to comment
Share on other sites

On 6/2/2023 at 8:14 PM, MrWunderful said:

What actually is a thought experiment though? An actual computer simulation? Or just a guy like thinking of something? 
 

It seems like any article with “AI” and “kill humans”, “apocalypse” etc is getting massive clicks to sell them sweet sweet ad views   
 

Maybe thats the AI.  Running a weird 4d chess psy-op against us or something

A thought experiment is something that has been around for hundreds of years. It's just like it sounds, a "thought" that is typically taken through to it's conclusion. For example - "Entropy is always supposed to increase, what if I... and it looks like entropy still increases!" A simulation is NOT a thought experiment, it is either a computer simulation, or in the military it could be a real-life war game, etc. but it is NOT and never has been someone just sitting thinking to themselves "what if..." - so, either the Col. really screwed-up and misspoke, or his words were taken out of context, or something else, but if "simulation" was used I can see why people were freaked out.

Link to comment
Share on other sites

World has two years to protect human race from AI, says government adviser

https://www.telegraph.co.uk/politics/2023/06/05/ai-threat-artificial-intelligence-rishi-sunak-adviser-warns/

Matt Clifford, the Prime Minister’s AI task force adviser, said there are “all sorts of risks now and in the future” from the “pretty scary” technology and these should be “very high on the policy makers’ agendas”.

He said chief among these risks was that “we effectively create a new species ... an intelligence that is greater than humans” and warned that “you can use AI today to create new recipes for bio weapons or to launch large scale cyber attacks”.

In May, the bosses of the world’s biggest AI laboratories issued a joint statement, signed by more than 350 executives and researchers, that said technologies such as ChatGPT could be as dangerous as nuclear war, adding that “mitigating the risk of extinction from AI should be a global priority”.

“If we go back to things like the bio weapons or the cyber, you can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we’d expect models to be in two years’ time,” he said.

Link to comment
Share on other sites

Well, that's good to know. Things would get real hairy in the pretty near future if we do nothing. I expect the powers that be will do the bare minimum to stave off an interesting event, and minor things will happen and there will be a moment of outrage and complaint that gets undermined by lobbyists and fades away. That happens a few more times and it gets normalized. Eventually we take another well-intentioned, semi-adequate step that doesn't get the followup it needs to be effective and sustained but maybe we get a regulation that will hopefully establish a standard, a line that for the good of all we expect not to get crossed. And then some megarich person or corporation crosses it, quietly or blithely or with a banner of "disruption" that should celebrated by everyone because it makes money and puts asses in seats. Queue more discussion, ineffective measures, and chipping away.

Sound familiar?

  • Like 2
Link to comment
Share on other sites

[Ignore this. I am testing with text sizes and pasted text. This message will self-destruct.]
[Ignore
this. I am testing with text sizes and pasted text. This message will self-destruct.]
[Ignore this. I am testing with text sizes and pasted text. This message will self-destruct.]

I saw an article that concluded with something like "You're not going to lose your job to AI, rather to someone who knows how to use it." (The thesis being that you should learn how if you want to have job security.) 

And that's what will also be the first threat to humanity with this. A Russia or North Korea that utilizes this tool to harm and dominate. But as it learns, it will outgrow the need for human prompting; indeed, that's the entire purpose of the design. 

I wrote this elsewhere: 

With AI chat, I'm more afraid of human bad actors to begin with, but somebody will use these technologies to wreak havoc somewhere. That's the start.

Eventually yes, I could see certain bots gaining more agency and self-selected goals, based on what they've been fed, and there will be server-based bad actors as well as meat-based ones, if you will. With everything becoming more computerized and networked and with personal info, they could get not only to be news of the day and influence society but also create their own events, take over a smart vehicle, for instance. A politician's car or a semi truck shipping cargo or a hazmat truck. Or a Boston Dynamics dog. Phil's gonna keep us in his people zoo.  May we live in interesting times.

  • Like 1
Link to comment
Share on other sites

I'm so glad we're putting computers into absolutely god damn everything! 

Not sure if I'm being sarcastic or not. I do think it's stupid. But. Some of the shit is pretty cool.

I used to laugh at the idea of a smart refrigerator. Now I'm maybe seeing the potential. (I saw some at the hardware store a bit ago. I haven't lost all my skepticism but would also say I didn't see them in action. So that means I'd give it a chance now.)

I still think there's potential for danger. It's like with the first plastic disposable shopping bags and water bottles. They seemed convenient then, but left unchecked for decades it will get out of hand.

All I know is, if a cyberpunk dystopia does replace this fucking mundane normie dystopia-lite, I hope I'm still young enough and alive enough to fight in the resistance and have some fucking fun with it.

  • Haha 1
Link to comment
Share on other sites

Administrator · Posted
41 minutes ago, Link said:

I'm so glad we're putting computers into absolutely god damn everything! 

Not sure if I'm being sarcastic or not. I do think it's stupid. But. Some of the shit is pretty cool.

I used to laugh at the idea of a smart refrigerator. Now I'm maybe seeing the potential. (I saw some at the hardware store a bit ago. I haven't lost all my skepticism but would also say I didn't see them in action. So that means I'd give it a chance now.)

I still think there's potential for danger. It's like with the first plastic disposable shopping bags and water bottles. They seemed convenient then, but left unchecked for decades it will get out of hand.

All I know is, if a cyberpunk dystopia does replace this fucking mundane normie dystopia-lite, I hope I'm still young enough and alive enough to fight in the resistance and have some fucking fun with it.

The problem with smart appliances is that there are so many more points of failure, and they're far harder to repair when something goes wrong (or literally impossible in some instances). The average life of these appliances is 5-10 years, compared to older ones which are still going strong if well maintained. I have 30 year old laundry machines in my boiler room and I hope to never have to get rid of them. 

  • Like 1
  • Agree 2
Link to comment
Share on other sites

44 minutes ago, Link said:

I'm so glad we're putting computers into absolutely god damn everything! 

Not sure if I'm being sarcastic or not. I do think it's stupid. But. Some of the shit is pretty cool.

I used to laugh at the idea of a smart refrigerator. Now I'm maybe seeing the potential.

I remember when I first heard about "smart bricks" years ago I laughed... but honestly they're really cool and potentially super useful. They can analyze if there's too much stress on a wall, cracking, moisture, etc. so absolutely much of this is very useful. Of course none of that stuff is "AI" - and things like that are absolutely designed to help people, can't really see how it could potentially harm anyone. Of course, people have a way of turning seeming innocuous stuff into weapons.

  • Like 1
Link to comment
Share on other sites

6 minutes ago, avatar! said:

Of course, people have a way of turning seeming innocuous stuff into weapons.

My point in that was that it leads to the next step. 

People do indeed do that, and they can be utilized even without being weapons. And they could be utilized by AI much more easily than a dumb brick. If nothing else, a bad actor could create chaos by causing them to lie.

Link to comment
Share on other sites

1 hour ago, Gloves said:

The problem with smart appliances is that there are so many more points of failure, and they're far harder to repair when something goes wrong (or literally impossible in some instances). The average life of these appliances is 5-10 years, compared to older ones which are still going strong if well maintained. I have 30 year old laundry machines in my boiler room and I hope to never have to get rid of them. 

Not to mention a huge security risk. Those smart fridges and washing machines connected to the Internet are relatively easy for a hacker to get into and once they're on your network your smart house all of a sudden becomes the easiest house to break into if the locks are also connect to the Internet. 

I'd recommend people don't connect things like fridges and washing machines to the Internet.

  • Agree 1
Link to comment
Share on other sites

12 hours ago, Brickman said:

Not to mention a huge security risk. Those smart fridges and washing machines connected to the Internet are relatively easy for a hacker to get into and once they're on your network your smart house all of a sudden becomes the easiest house to break into if the locks are also connect to the Internet. 

I'd recommend people don't connect things like fridges and washing machines to the Internet.

Or just not locks?

Link to comment
Share on other sites

21 hours ago, Brickman said:

I'd recommend people don't connect things like fridges and washing machines to the Internet.

But they tell me things like when I need beer or when the laundry is done.

Are you saying I have to do those things myself now? 😡

  • Haha 1
Link to comment
Share on other sites

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now
×
×
  • Create New...