
Are you prepared for our AI overlords?



2 minutes ago, RH said:

Because you can't. This bot wasn't "sentient". In a way, it was programmed not to harm humans by being programmed to do one specific task. However, due to lighting or whatever was happening with its sensors at that moment, the man appeared as a box.

Asimov was a smart man and definitely wrote sci-fi from a philosophical perspective, but fundamentally, the way AI works, and has worked for the past 40 years, isn't like saying "Hey, Mister Machine, here are the tasks you need to do. Oh, by the way, don't hurt people while you do it." It just doesn't have that intelligence.

Taking a robot and explicitly "teaching" it to identify boxes is hard. Adding an extra safeguard layer on top so it knows what a human is (versus anything else) is difficult. I don't know the specific robotics device they were using, but I am aware of box stacking/packing AI bots. This isn't about intelligence; this is essentially a highly specialized conveyor system. A sensor was probably drifting out of tolerance, not enough to fail outright and throw an error, and the guy was in the wrong place at the wrong time.

Don't get me wrong. This is bad, and I'm sure something might come of this, but these are machines acting as machines. The real problem is that we assume they have "intelligence" and treat them like they do when, in fact, they are just executing code, and if you get in their way, they will keep doing what they were programmed to do.

I was being a bit facetious when I posted about the unfortunate accident, although maybe my sarcasm did not come through! I certainly agree with what you said, although I will point out that it is fairly straightforward today to add safety protocols so that the machine can distinguish that a human is not a box! I would say this is a safety/programming error, and clearly not a direct attack by an AI... or, maybe that's what the AI wants us to think...
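Something like this is all I mean, as a minimal sketch of a safety interlock that sits apart from the task logic. The detector model and the robot/camera interface here are hypothetical stand-ins, not the actual system from the article:

```python
# Minimal sketch of a separate safety interlock layer.
# detect_humans() and the robot/camera objects are hypothetical stand-ins.

def detect_humans(frame) -> list:
    """Return bounding boxes of people in the camera frame.
    In practice this would be a dedicated person-detection model,
    kept separate from the box-classification pipeline."""
    raise NotImplementedError

def safe_to_actuate(frame, workspace) -> bool:
    # Refuse to move if ANY detection overlaps the workspace,
    # even a low-confidence one: fail safe, not fail silent.
    return not any(box.intersects(workspace) for box in detect_humans(frame))

def control_loop(robot, camera, workspace):
    while True:
        frame = camera.read()
        if safe_to_actuate(frame, workspace):
            robot.step()            # proceed with the normal pick/stack task
        else:
            robot.emergency_stop()  # something person-like is too close
```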



1 minute ago, avatar! said:

I was being a bit facetious when I posted about the unfortunate accident, although maybe my sarcasm did not come through! I certainly agree with what you said, although I will point out that it is fairly straightforward today to add safety protocols so that the machine can distinguish that a human is not a box! I would say this is a safety/programming error, and clearly not a direct attack by an AI... or, maybe that's what the AI wants us to think...


No problem. I only pop in here from time to time, and I think that, uh... some people think there's more magic going on than there is.

And for personal context, I've not worked with A.I. for years, but I do keep up with it in a general sense and read the news about it. Back in 2005/06 I was considering getting my master's in A.I. from UGA, which at the time was one of the better schools for that degree, if not the only one offering it.

Regardless, I spent a lot of time reading about the fundamental algorithms modern A.I. is based on, and as practice I wrote my own (very naive) OCR engine, which is kind of the classic entry point into AI. I don't consider myself an expert by any means, but I do know the nuances of what's going on.
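For the curious, the naive version is little more than nearest-template matching. Something like this toy sketch (NumPy; the template set is assumed to be pre-built from a font, and real engines add segmentation, deskewing, and learned features on top):

```python
# Toy version of "naive OCR": nearest-template matching over character images.
import numpy as np

def match_glyph(glyph: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Classify one binarized character image by picking the stored
    template with the smallest pixel-wise difference."""
    return min(templates, key=lambda ch: np.abs(glyph - templates[ch]).sum())

# templates would be e.g. {'A': 16x16 array, 'B': ...} rendered from a font;
# recognition is then just match_glyph() over each segmented character.
```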

I LOVE ChatGPT. I have a paid-tier account, and if you know how to prompt it, it can solve a lot of simple code problems very quickly, as well as take data in one format and, in 60 seconds, output it in another format. It's amazing what it can do. But fundamentally, I know how it was built to do what it does, and there's no way to tell it "this is a human, this is what's dangerous to a human, this is what it looks like when you are about to endanger a human... and this is what it looks like when one human is about to endanger another," and then integrate that knowledge into the execution of other tasks. Not reliably, anyway.
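The format-conversion trick, for anyone who hasn't tried it, is roughly this: paste the data in, describe the target format, and let the model rewrite it. A sketch using the openai-python v1 client; the model name and setup here are my assumptions, not a recommendation:

```python
# Sketch of format conversion via the chat API (openai-python v1 client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

csv_data = "name,score\nalice,91\nbob,84"

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whatever your account offers
    messages=[
        {"role": "system",
         "content": "Convert the user's CSV to a JSON array. Output only JSON."},
        {"role": "user", "content": csv_data},
    ],
)
print(resp.choices[0].message.content)  # e.g. [{"name": "alice", "score": 91}, ...]
```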


5 hours ago, RH said:

If the best an AI bot can produce is a really bad YouTuber, I’m not worried. 🤣

A stressed out rookie lawyer says he got fired after he used ChatGPT to do his job

https://www.businessinsider.com/young-lawyer-fired-using-chatgpt-job-work-hallucinations-errors-2023-11

This summer, Zachariah Crabill, a 29-year-old lawyer who previously worked at Baker Law Group, was fired after he used ChatGPT at work, he confirmed to Insider.

Crabill said he was feeling stressed about mounting deadlines and internal workplace dynamics when his bosses at the Colorado-based law firm added more work to his plate in May.

To get through it all, he turned to ChatGPT, which he had used before and trusted as an accurate research tool. He asked the chatbot to bolster a motion he had written with details from Colorado case law.

His excitement quickly turned into horror when he realized ChatGPT created multiple fake lawsuit citations in the motion.

"I think all my cases cited from chatGPT are garbage … I can't even find the cases in Lexis…" Crabill said regarding the motion, according to screenshots of his text messages reviewed by Law Week Colorado.

Soon after, he was fired, The Washington Post first reported. Crabill maintained to Insider that using ChatGPT was not the reason he was fired, though he didn't respond when asked for further clarification.

At some point I think AI will be so accurate that, quite honestly, many people will be out of a job. Right now, it's still very early and inaccurate.


Apparently our AI overlords are having to deal with obnoxious human drama 👾

Sam Altman returns to lead the company that fired him last week

https://www.cnn.com/2023/11/22/tech/openai-cast-of-characters-altman/index.html

When technology firm OpenAI ousted its CEO Sam Altman last week with little warning and less explanation, it set off shockwaves throughout Silicon Valley and beyond. Late Tuesday, in a complete reversal, he rejoined the controversial firm he had helped found, effectively bouncing from the board some of the people who had fired him.

 


  • 2 weeks later...

At least 85 civilians, including women and children, dead after 'mistaken' army drone attack

https://www.yahoo.com/news/least-85-civilians-including-women-153554645.html

Emergency response officials said at least 85 people have been confirmed dead after a "mistaken" army drone attack on a religious gathering in northwest Nigeria.

The victims were killed Sunday night by drones "targeting terrorists and bandits" in Kaduna state’s Tudun Biri village, according to government and security officials. They were observing a Muslim holiday.

Horrible tragedy. It's not clear whether this was a fully autonomous drone or one under human guidance; most likely the latter. That said --

Ukrainian AI attack drones may be killing without human oversight

https://www.newscientist.com/article/2397389-ukrainian-ai-attack-drones-may-be-killing-without-human-oversight/

Ukrainian attack drones equipped with artificial intelligence are now finding and attacking targets without human assistance, New Scientist has learned, in what would be the first confirmed use of autonomous weapons or “killer robots”.

It was inevitable.


On 11/13/2023 at 4:10 PM, RH said:

The real problem is that we assume they have "intelligence" and treat them like they do when, in fact, they are just executing code, and if you get in their way, they will keep doing what they were programmed to do.

It's a machine, Schroeder. It doesn't get pissed off. It doesn't get sad, it doesn't get happy, it just runs programs!


  • 2 weeks later...

Large AI models can now create smaller AI tools without humans and train them like a 'big brother,' scientists say

https://www.businessinsider.com/large-models-can-create-new-smaller-ai-tools-scientists-2023-12

A team of scientists from MIT and several University of California campuses, together with AI technology company Aizip, say that they can get large AI models, like the one that ChatGPT runs on, to essentially replicate automatically.

"Right now, we're using bigger models to build the smaller models, like a bigger brother helping [its smaller] brother to improve. That's the first step towards a bigger job of self-evolving AI," Yan Sun, CEO of Aizip, told Fox News. "This is the first step in the path to show that AI models can build AI models."

"Our technology is a breakthrough in the sense that for the first time, we have designed a fully automated pipeline," one of the researchers, Yubei Chen, added. It "can design an AI model without human intervention in the process."

 


  • 2 weeks later...

Police investigate virtual sex assault on girl's avatar

https://www.bbc.com/news/technology-67865327

The virtual incident did not result in physical harm but caused "psychological trauma", the Daily Mail has reported a source as saying.

Ian Critchley of the National Police Chiefs' Council (NPCC) wrote that the metaverse - a collective name given to a range of virtual 3D spaces and technologies - had created a "gateway for predators to commit horrific crimes against children, crimes we know have lifelong impacts both emotionally and mentally". "We must see much more action from tech companies to do more to make their platforms safe places", he added.

According to an unnamed senior officer familiar with the matter who spoke to the paper, the victim, under 16 at the time, suffered psychological trauma "similar to that of someone who has been physically raped". But in criminal law, rape and sexual assault require there to have been physical contact. Some argue that legal changes may be necessary to ensure that those responsible for sexually motivated attacks on avatars in virtual worlds can be prosecuted and punished effectively.

Certainly, we live in a brave new world 😕

On the one hand, it seems reasonable that criminal charges should be brought. On the other hand, say someone spoofs an "avatar" of you and then does horrible virtual stuff in your name. What then? This seems ripe for criminal manipulation, much in the same way that swatting is used by criminals.


  • 1 month later...

AI systems have learned how to deceive humans. What does that mean for our future?

https://theconversation.com/ai-systems-have-learned-how-to-deceive-humans-what-does-that-mean-for-our-future-212197

Perhaps the most disturbing example of a deceptive AI is found in Meta’s CICERO, an AI model designed to play the alliance-building world conquest game Diplomacy.

Meta claims it built CICERO to be “largely honest and helpful”, and CICERO would “never intentionally backstab” and attack allies.

To investigate these rosy claims, we looked carefully at Meta’s own game data from the CICERO experiment. On close inspection, Meta’s AI turned out to be a master of deception.

In one example, CICERO engaged in premeditated deception. Playing as France, the AI reached out to Germany (a human player) with a plan to trick England (another human player) into leaving itself open to invasion.

After conspiring with Germany to invade the North Sea, CICERO told England it would defend England if anyone invaded the North Sea. Once England was convinced that France/CICERO was protecting the North Sea, CICERO reported to Germany it was ready to attack.

AI systems with deceptive capabilities could be misused in numerous ways, including to commit fraud, tamper with elections and generate propaganda. The potential risks are only limited by the imagination and the technical know-how of malicious individuals.

Humanity:

[GIF: Annalynne McCord "let's get physical", Pop TV]


10 minutes ago, avatar! said:

AI systems have learned how to deceive humans.

...Like... just now? I would've thought they've known how to do that for some time now. Deception honestly isn't a particularly complex thing to pull off, especially if we're talking about something like an AI in which emotions and a moral compass are nonexistent.


19 minutes ago, ZeldaFreak said:

...Like... just now? I would've thought they've known how to do that for some time now. Deception honestly isn't a particularly complex thing to pull off, especially if we're talking about something like an AI in which emotions and a moral compass are nonexistent.

This is different. You can program a game, for instance, to be tricky and try to deceive people, but that depends on human input. With generative AI, it no longer responds reactively but is proactive in deceiving in order to achieve its goals.


6 minutes ago, avatar! said:

This is different. You can program a game, for instance, to be tricky and try to deceive people, but that depends on human input. With generative AI, it no longer responds reactively but is proactive in deceiving in order to achieve its goals.

Yeah, I know. I was talking about generative AI.

I mean, I also don't buy into sensationalist "humanity is doomed" headlines like that, so maybe I'm just not as freaked out as I ought to be, haha. Especially given that we're talking about an AI that, as far as I can tell, was specifically designed to play Diplomacy, a game in which deception and betrayal are core components, so of course it's going to deceive.

I dunno, I've just always thought we have far more immediate problems to worry about than the possibility of Skynet being invented.


  • 2 months later...

Georgia political campaigns start to deploy AI; humans still needed to press the flesh

https://www.gpb.org/news/2024/04/25/georgia-political-campaigns-start-deploy-ai-humans-still-needed-press-the-flesh

"We have an ethical responsibility to make sure that we help our clients build deeper human relationships with voters,” he added. “Because at the end of the day, the most important person in any election is the voter. It is their community, it is their government. So following that ethical principle, that responsibility we have, our goal is to make sure that our clients are willing to sign off on anything that truly reflects their view, their voice, and gives them a better ability to build a deeper relationship with voters."

Wow, that's quite the spin on ethics. The one part of the process that involves vetting by the candidate will go by the wayside pretty quickly, and then the whole concept of a human candidate can be tossed. To say nothing of how many poisoners of the well there are. Without them, I might think government by AI could be better, if it reflected the actual popular will of the people. But they are here, and they're not going away soon.

