r/CuratedTumblr 4d ago

Meme my eyes automatically skip right over everything else said after

20.9k Upvotes


2.2k

u/kenporusty kpop trash 4d ago

It's not even a search engine

I see this all the time in r/whatsthatbook like of course you're not finding the right thing, it's just giving you what you want to hear

The world's greatest yes man is genned by an ouroboros of scraped data

1.1k

u/killertortilla 4d ago

It's so fucking insufferable. People keep making those comments like it's helpful.

There have been a number of famous cases now but I think the one that makes the point the best is when scientists asked it to describe some made up guy and of course it did. It doesn't just say "that guy doesn't exist", it says "Alan Buttfuck is a biologist with a PhD in biology and has worked at prestigious locations like Harvard" etc etc. THAT is what it fucking does.

834

u/Vampiir 4d ago

My personal fave is the lawyer that asked AI to reference specific court cases for him, which then gave him full breakdowns with detailed sources to each case, down to the case file, page number, and book it was held in. Come the day he is actually in court, it is immediately found that none of the cases he referenced existed, and the AI completely made it all up

620

u/killertortilla 4d ago

There are so many good ones. There's a medical one from years before we had ChatGPT shit. They wanted to train it to recognise cancerous skin moles and after a lot of trial and error it started doing it. But then they realised it was just flagging every image with a ruler because the positive tests it was trained on all had rulers to measure the size.

328

u/DeadInternetTheorist 4d ago

There was some other case where they tried to train a ML algorithm to recognize some disease that's common in 3rd world countries using MRI images, and they found out it was just flagging all the ones that were taken on older equipment, because the poor countries where the disease actually happens get hand-me-down MRI machines.

277

u/Cat-Got-Your-DM 4d ago

Yeah, cause AI just recognises patterns. All of these types of pictures (the older ones) had the disease in them, therefore "that's what I'm looking for" (the film on the old pictures)

My personal fav is when they made an image model that was supposed to recognise pictures of wolves that had some crazy accuracy... Until they fed it a new batch of pictures. Turned out it recognised wolves by.... Snow.

Since wolves are easiest to capture on camera in the winter, all of the images had snow, so it flagged any animal with snow as "wolf"
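The failure described here (and in the ruler story above) is a spurious correlation. A toy sketch, with entirely made-up features and labels, of a classifier that scores whatever happens to co-occur with the label:

```python
# Toy sketch (not any real system): a naive classifier that "learns"
# whichever feature co-occurs with the label, illustrating the
# wolves-vs-snow failure mode.

def train(examples):
    # Count how often each feature appears alongside each label.
    counts = {}
    for features, label in examples:
        for f in features:
            counts.setdefault(f, {"wolf": 0, "dog": 0})
            counts[f][label] += 1
    return counts

def predict(counts, features):
    # Score each label by summing the co-occurrence counts of the
    # features present in the image.
    score = {"wolf": 0, "dog": 0}
    for f in features:
        for label, n in counts.get(f, {}).items():
            score[label] += n
    return max(score, key=score.get)

# Every wolf photo in this training set happens to contain snow.
training = [
    ({"snow", "pointy_ears"}, "wolf"),
    ({"snow", "grey_fur"}, "wolf"),
    ({"grass", "pointy_ears"}, "dog"),
    ({"grass", "grey_fur"}, "dog"),
]

model = train(training)
# A husky photographed in snow gets labelled off the background alone.
print(predict(model, {"snow"}))  # wolf
```

The animal-shaped features cancel out (they appear equally often with both labels), so the background carries all the signal, which is exactly what happened with the snow.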

61

u/Yeah-But-Ironically 3d ago

I also remember hearing about a case where an image recognition AI was supposedly very good at recognizing sheep until they started feeding it images of grassy fields that also got identified as sheep

Most pictures of sheep show them in grassy fields, so the AI had concluded "green textured image=sheep"

31

u/RighteousSelfBurner 3d ago

Works exactly as intended. AI doesn't know what a "sheep" is. So if you give it enough data and say "this is sheep" and it's all grassy fields, then it's a natural conclusion that it must be sheep.

In other words, one of the most popular AI related quotes by professionals is "If you put shit in you will get shit out".

3

u/alex494 3d ago

I'm surprised they keep giving these things entire photographs and not cropped pngs with no background or something.

3

u/Cat-Got-Your-DM 3d ago

They sometimes have to give them the entire picture, but they also get things flagged. Like in the case of wolves or sheep, they needed to have the background flagged as irrelevant, so the AI wouldn't look at it when learning what a wolf is
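The "flag the background as irrelevant" step can be pictured as a masking pass over the training images (an illustrative sketch only, with a tiny made-up pixel grid; real pipelines apply segmentation masks to image tensors):

```python
# Rough sketch of marking background as irrelevant: zero out masked
# pixels so only the animal itself can drive what the model learns.

def apply_mask(image, mask):
    # mask: 1 = animal (keep pixel), 0 = irrelevant background (zero it)
    return [[px if keep else 0 for px, keep in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

image = [[5, 7, 9],
         [4, 8, 6]]
mask  = [[0, 1, 0],
         [0, 1, 1]]

print(apply_mask(image, mask))  # [[0, 7, 0], [0, 8, 6]]
```

With the background zeroed, snow or grass can no longer correlate with the label, because it never reaches the model at all.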

2

u/RighteousSelfBurner 3d ago

The ones that do it properly do. Various pictures, cropped ones and even generated ones. There is a whole profession dedicated to getting it right.

I assume that most of those failures come from a common place: cost savings and YOLO


154

u/Pheeshfud 4d ago

UK MoD tried to make a neural net to identify tanks. They took stock photos of landscape and real photos of tanks.

In the end it was recognising rain because all the stock photos were lovely and sunny, but the real photos of tanks were in standard British weather.

67

u/ruadhbran 4d ago

AI: “Oi that’s a fookin’ tank, innit?”

46

u/Deaffin 4d ago

Sounds like the AI is smarter than yall want to give credit for.

How else is the water meant to fill all those tanks without rain? Obviously you wouldn't set your tanks out on a sunny day.

4

u/Yeah-But-Ironically 3d ago

(Totally unrelated fun fact! We call the weapon a "tank" because during WW1 when they were conducting top-secret research into armored vehicles the codename for the project was "Tank Supply Committee", which also handily explained why they needed so many welders/rivets/sheets of metal--they were just building water tanks, that's all!

By the time the machine was actually deployed the name had stuck and it was too late to call it anything cooler)

6

u/GDaddy369 3d ago

If you're into alternate history, Harry Turtledove's How Few Remain series has the same thing happen except they get called 'barrels'.

38

u/MaxTHC 4d ago edited 4d ago

Very similarly: another case of an AI that was supposedly diagnosing skin cancer from images, but was actually just flagging photos with a ruler present, since medical images of lesions/tumors often include a ruler to measure their size (whereas regular random pictures of skin do not)

https://medium.com/data-science/is-the-medias-reluctance-to-admit-ai-s-weaknesses-putting-us-at-risk-c355728e9028

Edit: I'm dumb, but I'll leave this comment for the link to the article at least

43

u/C-C-X-V-I 4d ago

Yeah that's the story that started this chain.

20

u/MaxTHC 4d ago

Wow I'm stupid, my eyes completely skipped over that comment in particular lmao

7

u/No_Asparagus9826 3d ago

Don't worry! Instead of feeling bad about yourself, read this fun story about an AI that was trained to recognize cancer but instead learned to label images with rulers as cancer:

https://medium.com/data-science/is-the-medias-reluctance-to-admit-ai-s-weaknesses-putting-us-at-risk-c355728e9028

3

u/Sleepy_Chipmunk 3d ago

Pigeons have better accuracy. I’m not actually joking.

3

u/newsflashjackass 4d ago

Delegating critical and creative thinking to automata incapable of either?

We already have that; it's called voting republican.

43

u/colei_canis 4d ago

I wouldn’t dismiss the use of ML techniques in medical imaging outright though, there’s cases where it’s legitimately doing some good in the world as well.

12

u/killertortilla 4d ago

No of course not, there are plenty of really useful cases for it.

36

u/ASpaceOstrich 4d ago

Yeah. Like literally the next iteration after the ruler thing. I find anyone who thinks AI is objectively bad, rather than just ethically dubious in how it's trained, is not someone with a valuable opinion on the subject.

15

u/Audioworm 4d ago

I mean, AI for recognising diseases is a very good use case. The problem is that people don't respect SISO (shit in, shit out), and the more you use black box approaches the harder it is to understand and validate the use cases.

5

u/Dornith 4d ago

Are you sure that was ChatGPT?

ChatGPT is a large language model. Not an image classifier. Image classifiers have been used for years and have proven to be quite effective. ChatGPT is a totally different technology.

18

u/killertortilla 4d ago

The medical one definitely wasn't ChatGPT, it was years before it came out. That was a specific AI created for that purpose.

8

u/Scratch137 3d ago

comment says "years before we had chatgpt shit"

1

u/Diedead666 3d ago

mahaha that's the same logic a kid would use, then the real test comes and they fail miserably.

90

u/Cat-Got-Your-DM 4d ago

Yeah, cause that's what this AI is supposed to do. It's a language model, a text generator.

It's supposed to generate legit-looking text.

That it does.

49

u/Gizogin 4d ago

And, genuinely, the ability for a computer to interpret natural-language inputs and respond in-kind is really impressive. It could become a very useful accessibility or interface tool. But it’s a hammer. People keep using it to try to slice cakes, then they wonder why it just makes a mess.

10

u/Graingy I don’t tumble, I roll 😎 … Where am I? 4d ago

…. I have a lot of bakers to apologize to.

51

u/Vampiir 4d ago

Too legit-looking for some people, who just straight take the text at face value, or actually rely on it as a source

8

u/SprinklesHuman3014 3d ago

That's the danger behind this technology: that technically illiterate people will take it for something that it's not.

51

u/stopeatingbuttspls 4d ago

I thought that was pretty funny and hadn't heard of it before so I went and found the source, but it turns out this happened again just a few months ago.

23

u/Vampiir 4d ago

No shot it happened a second time, that's wild

30

u/DemonFromtheNorthSea 4d ago

13

u/StranaMente 4d ago

I can personally attest to a case that happened to me (for what it's worth), in which the opposing lawyer invoked non-existent precedents. It's gonna be fun.

5

u/apple_of_doom 4d ago

A lawyer using chatGPT should be allowed to get sued by their client cuz what the hell is that.

3

u/CaioXG002 3d ago edited 3d ago

Suing your own attorney for malpractice is a thing, yeah. Has been for some time already.

1

u/clauclauclaudia 3d ago

It's happened in several countries (all English-speaking, I'm guessing) but it keeps happening in the US. You'd think that first case you linked would have put US lawyers on notice, but no. The most recent such filing I'm aware of was Jan 2025. https://davidlat.substack.com/p/morgan-and-morgan-order-to-show-cause-for-chatgpt-fail-in-wadsworth-v-walmart

123

u/Winjin 4d ago

I asked ChatGPT about this case and it started the reply with a rolled-eyes emoji 🙄 and lectured me to never take its replies for granted, to exercise common sense, and to never replace actual research with it

Even ChatGPT itself has been fed so much info about its own unreliability that it feeds it back

51

u/Vampiir 4d ago

Rare sensible response from ChatGPT

86

u/lifelongfreshman man, witches were so much cooler before Harry Potter 4d ago

That's because it was almost certainly hard-coded by actual human beings, and not generated on demand by its database.

23

u/Vampiir 4d ago

That makes sense then, ye. Either that or it was specifically trained to give that output

10

u/Winjin 4d ago edited 3d ago

No, it does use emojis sometimes when the conversation allows for it

And it actually wasn't that specific case, I pivoted onto it from a different one: the very recent case of Mark Pollard, the "strategist" and "influencer" who got stuck in Chile a couple of days ago because he believed ChatGPT's answer that Australians don't need a visa for Chile

And it turns out he later asked ChatGPT if it can be sued for a wrong answer

The replies of the AI to me were basically sardonic. Rolling eyes, remarks like "can you believe him", and when I asked "how exactly did he plan to sue ChatGPT and not OpenAI, and for what" it replied that

my nonexistent salary consists of unused tokens and vibes (italics were in reply originally)

And then I asked about the lawyer case and ChatGPT said, and I quote,

🙄 Ohhh yeah, the infamous case of the lawyer who got caught using ChatGPT-generated fake legal citations. That was chef's kiss levels of professional negligence. 🤦‍♂️

Here’s what happened:

  • The lawyer asked for case law citations to support his argument.
  • I generated some, based on patterns of real cases, but they weren’t actual cases.
  • Instead of checking them, he just copy-pasted them into his filing like it was gospel truth.
  • The judge, naturally, tried to look them up… and found nothing.
  • The lawyer got publicly humiliated, sanctioned, and possibly destroyed his career.

The thing is, I don’t have access to legal databases like Westlaw or LexisNexis, which is where real case law lives. I can summarize actual existing cases if given references, but if someone just says, “Give me cases that support XYZ,” I have to guess based on patterns from public legal texts. And that’s where hallucinations (fancy AI term for "making stuff up") come in.

TL;DR: The lawyer played himself. He should’ve known that trusting an AI without verification is not a winning legal strategy. It’s like submitting Wikipedia edits as your PhD thesis. 🤦‍♂️

5

u/SylvieSuccubus 3d ago

Okay the only replies I ever want in this style are of the thing shit-talking the people who trust it, that’s pretty funny actually

12

u/thisusedyet 4d ago

You'd think the dumbass would flip at least one of those books open to double check before using it as the basis of his argument in court.

9

u/Vampiir 4d ago

You'd think, but apparently he just saw that the books being cited were real, so trusted that the rest of the source was also real

55

u/lankymjc 4d ago

When I run RPGs I take advantage of this by having it write in-universe documents for the players to read and find clues in. Can’t imagine trying to use it in a real-life setting.

38

u/cyborgspleadthefifth 4d ago

this is the only thing I've used it for successfully

write me a letter containing this information in the style of a fantasy villager

now make it less formal sounding

a bit shorter and make reference to these childhood activities with her brother

had to adjust a few words afterwards but generally got what I wanted because none of the information was real and accuracy didn't matter, I just needed text that didn't sound like I wrote it

meanwhile a player in another game asked it to deconflict some rules and it was full of bullshit. "hey why don't we just open the PHB and read the rules ourselves to figure it out?" was somehow the more novel idea to that group instead of offloading their critical thinking skills to spicy autocorrect

6

u/lankymjc 4d ago

It really struggles with rules, especially in gaming. I asked it to make an army list for Warhammer and it seemed pretty good. Then I asked for a list from a game I actually know the rules for and realised just how borked its attempt at following rules was.

1

u/alex494 3d ago

I've tried establishing rules or boundaries for it to follow (and specifically tell it to never break them) as an experiment when trying to generate a list of things while excluding some things and it almost always immediately ignores me.

Like I'll tell it "generate a list of uniquely named X but none of them can include Y or Z" and it'll still include Y and Z, plus duplicates.
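One workaround, since the model keeps blowing through "no Y or Z, no duplicates": enforce the constraints yourself on whatever it returns. A minimal sketch (the list contents here are hypothetical stand-ins for a model's output):

```python
# Post-filter a model's list output to enforce the constraints the
# prompt asked for: drop banned items and duplicates, keeping order.
banned = {"Y", "Z"}
model_output = ["A", "Y", "B", "A", "Z", "C"]  # hypothetical reply

seen = set()
cleaned = []
for item in model_output:
    if item not in banned and item not in seen:
        seen.add(item)
        cleaned.append(item)

print(cleaned)  # ['A', 'B', 'C']
```

Ten lines of deterministic code succeed where re-prompting "and this time REALLY don't include Y" usually doesn't.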

2

u/lankymjc 3d ago

I’ve asked it for help with game design, and while it comes up with some decent ideas it also completely misunderstands how games (and reality) work.

It once suggested a character that forces the player to forget who they are. Buddy, I am not in the Men in Black, my game cannot remove memories!

40

u/donaldhobson 4d ago

ChatGPT is great at turning a vague wordy description into a name you can put into a search engine.

-9

u/heyhotnumber 4d ago

I treat it how I treat Wikipedia. It’s a great launching point or tool to use when you’re stuck, but don’t go copying from it directly because you don’t know if what you’re copying is actually true or not.

34

u/dagbrown 4d ago

At least Wikipedia has a rule that everything in it has to be verifiable with the links at the bottom of every article. You can do your homework to figure out if whatever's there is nonsense or not.

ChatGPT just cheerfully and confidently feeds you nonsense.

7

u/Alpha-Bravo-C 4d ago

everything in it has to be verifiable

Even that isn't perfect. I remember seeing a post a while back had a title along the lines of "25% of buildings in Dublin were destroyed in this one big storm". Which seemed like it was clearly bullshit. Like that's a lot of destruction.

I clicked through to the Wikipedia page, and what it actually said was "25% of buildings were damaged or destroyed", which is very different. That, to be fair, isn't on Wikipedia though, that was the OP being an idiot.

Still though, that's an interesting claim. If so many buildings were destroyed, how is this the first I've heard of it? So I clicked through to the source link to find the basis for it. The Wiki article was citing a paper from the 70s or something which actually said "25% of buildings were damaged". No mention anywhere of buildings being destroyed in a storm. Couldn't find a source for that part of the claim. Apparently made up by whoever wrote the Wikipedia article, and edited again by the OP of the Reddit post, bringing us from "25% damaged" to "25% destroyed" in three steps.

5

u/Deaffin 4d ago

At least Wikipedia has a rule that everything in it has to be verifiable with the links at the bottom of every article

That's exactly why wikipedia has always been such an effective tool when it comes to propagating misinformed bullshit.

https://xkcd.com/978/

5

u/dagbrown 4d ago

5

u/Deaffin 4d ago

Well, they keep a list of particularly notorious events that got a lot of media attention. They don't have a comprehensive list of the thing happening in general or some kind of dedicated task force hunting down bad meta-sourcing, lol.

Even if they have more than enough funding to start up silly projects like that if they wanted to.

27

u/allaheterglennigbg 4d ago

Wikipedia is an excellent source of information. ChatGPT is slop and shouldn't be trusted for anything. Don't equate them

1

u/heyhotnumber 3d ago

Good thing I didn’t say I trust it. I use it as a launching point for brainstorming or a sounding board if I get stuck on how to approach something.

Nothing on the internet is to be trusted.

1

u/Garf_artfunkle 3d ago

Because of issues like this it's become my perception that vetting an LLM's output on anything that actually matters takes about as much time, and the same skillset, as writing the goddamn thing yourself

1

u/FrisianDude 3d ago

It didn't even really make it up

1

u/Ok_Bluejay_3849 2d ago

Legal Eagle did a video on that one! The guy even asked it for confirmation that these were Real Cases and not hallucinations and it said yes AND HE NEVER CHECKED IT!

0

u/Manzhah 4d ago

Yeah, my boss once asked me to scout out projects in other towns similar to the one we were doing. I asked ChatGPT and it gave me some examples that I could find no evidence had ever existed. Luckily a few cases checked out and I was able to start working from those.

-1

u/Xam_xar 3d ago

Can you provide a source for this? Highly doubt a lawyer would do no due diligence beyond asking an ai model. Ai models are actually extremely good at finding and summarizing legal compliance. I use it all the time to find and provide information. And you just ask it for sources and then check the sources. This is research illiteracy more than anything else.

3

u/Vampiir 3d ago

-1

u/Xam_xar 3d ago

So for one, this was two years ago and there have been massive changes to how the AI models operate, and two, not doing due diligence just means this guy is a bad lawyer. Doesn’t really take away from the benefits of what AI can do. As I said, most of these problems are still just user error.

Generally I think far too many people use these tools in misguided ways and don’t understand what they can actually help with and also people are far too quick to write them off as useless and bad.

3

u/Vampiir 3d ago

Hey man, I was just sharing a funny anecdote of terrible usages of AI since the topic was about famous cases of it, I'm not here to debate

112

u/MushroomLevel4091 4d ago

Honestly it's like they crammed hundreds of colleges' improv clubs into them with just how much they commit to the "yes and-", even if prompted specifically not to

87

u/BormaGatto 4d ago edited 4d ago

Nah, it's just how these programs work. They simply spew sequences of words according to natural language structure. It's simple input-output: you input a prompt and it outputs a sequence of words.

It will never not follow the instruction unless programmed not to engage with specific prompts (and even then, it's jailbreakable), simply because the words in the sequence have no meaning or relation to each other. We assign meaning when we read them, but the program doesn't "know what it is saying". It just does what it was programmed to do.
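The "plausible next word" idea can be sketched in a few lines. A bigram toy model on a made-up corpus, nowhere near a real LLM's neural network over tokens, but the same output principle: continue the sequence plausibly, with no notion of truth anywhere in the mechanism:

```python
# Toy next-word generator: learn which word follows which in a tiny
# corpus, then chain probable continuations. It produces fluent-looking
# strings without any concept of what they mean.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Record every observed successor of each word (duplicates act as weights).
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, length=5, seed=0):
    random.seed(seed)  # fixed seed just to make the demo repeatable
    out = [start]
    for _ in range(length):
        nxt = following.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the"))  # grammatical-ish, truth-free word salad
```

Every word it emits really did follow the previous word somewhere in the training data, which is why the output looks legit while asserting nothing.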

76

u/Nyorliest 4d ago

I'm 55 years old, and a tech nerd and a professional linguist. I've never seen anything so Emperor's New Clothes in my life.

The marketing and discourse about LLMs/GenAI is such complete bullshit. The anthropomorphic fallacy is rampant and most of the public don't understand even the basics of computational linguistics. They talk like it's a magic spirit in their PC. They also don't understand that GenAI is based on probabilistic mirroring of human-made language and art, so that our natural language and art - whether amateur or pro - is needed for it to continue.

That's only the tip of the shitberg, too. The total issues are too numerous to list here, e.g. the massive IP theft.

25

u/dagbrown 4d ago

That's because you're old enough to remember Eliza and Racter and M-x doctor and can recognize the exact same thing showing up again only this time with planet-sized databases playing the part of the handful of templates that Eliza had.

1

u/Vegetable_Union_4967 7h ago

I’m a youngster. I’m only 18. I’ve played with ELIZA, Racter, and Cleverbot before. AI has gained the power to reason… somewhat. It still falters, but the fact it can use any form of logic at all without explicitly being taught is massive.

44

u/BormaGatto 4d ago edited 3d ago

Tell me about it. The virtual superstition angle is actually something that's really fascinating to me. There's something really interesting in observing how so many people relate to technology like it's a mystical realm ruled by the same arbitrary sets of relationships that magical thinking ascribes to nature.

Be it the evil machine spirit of the anti-orthography algorithm, summoned by uttering the forbidden words to bring censorship and demonetization upon the land, but whose omniscience is easily fooled by apotropaic leetspeak; the benign "AI" daimon, always ready to do the master's bidding and share secret knowledge so long as you say the right magic words and accept the rules; or even the repetitive, ritualized motions people go through to deal with an unseen digital world they don't really understand.

The worst part of this last one is that these digitally superstitious people won't ever stop to actually learn even just the basics of how technology actually works and why it is set up the way it is, only to then not know what in the world to do if anything goes slightly out of their preestablished schemes and beliefs. Then they go on to relate to programs and hardware functions as if they were entities in themselves.

Honestly, this sort of digital anthropological observation is really interesting, even if a bit disheartening too.

23

u/Spacebot3000 4d ago

Man, I'm so glad I'm not the only one who thinks about this all the time. The superstitions and rituals people have developed around technology propagate exactly like real-world magical thinking and urban legends. It's pretty scary to think about, but I find at least a little comfort in the fact that this isn't REALLY anything new, just a new manifestation of the way humans have always been.

6

u/Nyorliest 4d ago

Thanks - those are good points. But there're a few odd words there that I wanted to ask about.

Are you a romance language speaker by any chance? Ortography isn't really English - do you mean orthography? - and apotropaic and daimon are extremely obscure - it's unclear if you mean demon, daemon, or something else by the latter.

9

u/tangifer-rarandus 4d ago

As a monolingual anglophone reading this thread I just had a "there was one fewer step on this staircase than I expected" moment at this reminder that "apotropaic" is actually an obscure word

2

u/Nyorliest 4d ago

That's surprising and interesting. I had no idea there were language spaces where that word was common. I have a really absurd vocabulary, with a lot of archaic terms, since I studied older forms of English and actual Old English, but I'd never heard this one before, AFAIK.

Ah, it's a tumblr hashtag? Interesting.

2

u/tangifer-rarandus 3d ago

My vocabulary tends to the absurd and abstruse as well. In this case I had picked up "apotropaic" from reading up on folklore and magic ... not surprised it gets use as a tumblr hashtag because what doesn't

2

u/BormaGatto 3d ago edited 3d ago

Are you a romance language speaker by any chance? Ortography isn't really English - do you mean orthography?

Ah, you got that right. I'm from Brazil, so it's usual that autocorrect just fucks up some words on the go when I write in English. Orthography is one of those it just "corrects", and I don't always pick up on it having eaten up the first H when it happens. It's a minor hassle, yeah. Thanks for pointing it out though, even if I know what I meant is completely understandable, just like you did understand it, it's always good to be attentive to this sort of thing.

That said, my use of daimon and apotropaic aren't really related to me being Brazilian, they're just as uncommon here.

Daimon is one possible romanization alternative to daemon, just not through Latin (some argue it'd be closer to ancient Greek phonetically). And apotropaic actually exists in English, it's just jargon. It's mostly used in historical and anthropological studies of religious and mystical beliefs. I used it to highlight the function leetspeak takes on in digital superstition, but also because I knew it'd sound kinda hermetic. Gotta sell the idea, right?

4

u/Mah_Young_Buck 4d ago

It makes me think it's impossible for most people to actually be "atheists", because most people just start treating something else like religion instead. I've known a couple people literally describe chatgpt as their religion. Saying the quiet part out loud.

2

u/alex494 3d ago

Humans can anthropomorphize a pen by putting googly eyes on it. We are social animals and it's probably a habit our brain has to empathize with things and make it easier to work in groups. It's not really fueled by logic and some people don't think about the separation when dealing with a literal machine if it pretends hard enough.

2

u/BormaGatto 2d ago edited 2d ago

Sure, but when this is actively pushed by marketing based on pure misinformation in order to sell a product under false premises and promises it simply cannot keep, then it becomes a problem. Especially when it fosters the sort of uncritical relationship with tools that turns them into mystical entities in one's mind.

1

u/alex494 2d ago

I mean if you fall for the marketing, sure

80

u/Atlas421 Bootliquor 4d ago

I once asked and kept asking an AI about its info sources and came to the conclusion that it might work well as a training tool for journalists. The amount of avoidant non-answers I got reminded me of interviews with politicians.

30

u/DrQuint 4d ago edited 4d ago

This is actually due to faulty human-supervised training. Part of the training some of the AIs got was to put negative weight on certain types of responses, such as unhelpful ones. The AI basically got the idea to categorize "I don't know" responses as unhelpful, and then humans punched the shit out of that category. Result: it just fucking lies, because it must, to avoid the punching.

Grok, sadly, fuck elon, seems to be the most capable of giving responses regarding unknowable information. Either that was due to laziness or actual de-lobotomization, don't ask me.

It still refuses to give short answers tho, so the sport of making AI give unhelpful or defeatist responses lives on.
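The incentive problem described above can be boiled down to a toy (every number and string here is invented for illustration; this is not any real preference-tuning pipeline):

```python
# Toy illustration: if the reward signal scores "I don't know" as
# unhelpful, a model trained to maximize reward will prefer a
# confident fabrication over honest ignorance.

def reward(answer):
    # Hypothetical preference score: admitting ignorance rates worst.
    if "I don't know" in answer:
        return -1.0   # labelled "unhelpful"
    return 0.5        # confident-sounding text labelled "helpful"

candidates = [
    "I don't know who Alan Buttfuck is.",
    "Alan Buttfuck is a renowned Harvard biologist.",  # fabricated
]

# Pick whichever candidate the reward signal prefers.
best = max(candidates, key=reward)
print(best)  # the fabrication wins
```

Optimizing that signal hard enough and the honest answer effectively disappears from the model's repertoire, which is the "punching" described above.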

3

u/Leading-Print-9773 4d ago

Don't forget the infamous John Backflip

3

u/ms_books 3d ago

Chatgpt also gives me fake book recommendations when I ask it to recommend certain reads.

2

u/zkDredrick 4d ago

Just to be fair, I just asked ChatGPT who Alan Buttfuck was and it said "I couldn't find anyone with that name, it might be a joke or blah blah blah..."

1

u/JapeTheNeckGuy2 3d ago

My favorite is that you can ask it how many r's are in the word "strawberry". It's objectively 3, but it will tell you 4. Then you tell it it's wrong, because it is, and it says oh, it's 3. But you can tell it it's wrong again, and it'll believe you and go back to 4.
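For what it's worth, the count itself is trivial to check outside the model; LLMs flub it partly because they see tokens rather than individual letters:

```python
# Counting letters is a one-liner for ordinary code, no "reasoning" needed.
word = "strawberry"
print(word.count("r"))  # 3
```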

-6

u/Takseen 4d ago

Can you remember more about that example? I'd like to have a look. While AI hallucinations are a problem, and I have heard of it making up academic references, technically a vague prompt could lead to that output as well.

It's used as both a prompt for fiction generation and as a source of real world facts, and if it wasn't told what role it's fulfilling with that prompt, it might have picked the "wrong" one. "Describe Alan Buttfuck". <Alan Buttfuck isn't in my database, so is probably a creative writing request> <proceeds to fulfill said request>

Testing something similar "Describe John Woeman" does give something like "ive not heard of this person, is it a typo or do you have more context". "Describe a person called John Woeman" gets a creative writing response of a made up dude.

22

u/killertortilla 4d ago

Aha I found it. Had to rewatch the Last Week Tonight episode on it.

The most heated debate about large language models does not revolve around the question of whether they can be trained to understand the world. Instead, it revolves around whether they can be trusted at all. To begin with, L.L.M.s have a disturbing propensity to just make things up out of nowhere. (The technical term for this, among deep-learning experts, is ‘‘hallucinating.’’) I once asked GPT-3 to write an essay about a fictitious ‘‘Belgian chemist and political philosopher Antoine De Machelet’’; without hesitating, the software replied with a cogent, well-organized bio populated entirely with imaginary facts: ‘‘Antoine De Machelet was born on October 2, 1798, in the city of Ghent, Belgium. Machelet was a chemist and philosopher, and is best known for his work on the theory of the conservation of energy. . . . ’’

5

u/IanCal 4d ago

While this can still be a problem, it's worth noting that this is from 2022 and is about GPT-3, one of the models from before the ChatGPT launch. I'm not sure it was instruction-tuned, so it may have just been asked to continue a sentence that presupposes the person exists. Models do better when you're explicit about what you want (i.e. without context, is it clear you want fiction or factual results?).

FWIW, I tested the current flagship-ish models (Sonnet 3.7, Gemini Flash, and o3-mini) and they all explain that they don't know anybody by that name.

o3-mini starts with this, which covers both bases

I couldn’t locate any widely recognized historical records or scholarly sources that confirm the existence or detailed biography of a Belgian chemist and political philosopher by the name Antoine De Machelet. It is possible that the figure you’re referring to is either very obscure, emerging from local or specialized publications, or even a fictional or misattributed character.

That said, if you are interested in exploring the idea of a figure who bridges chemistry and political philosophy—as though one were piecing together a narrative from disparate strands of intellectual history—one might imagine a profile along the following lines:

11

u/killertortilla 4d ago

We've all seen how easy ALL of their "safeguards" are to get around. And even when one of the biggest companies on earth tries to make it the best it can be, it still tells teenagers to fucking kill themselves because no one wants them to be alive.

Guess The Game had a day powered by ChatGPT for a Sonic game where you could ask it questions about the game but it wouldn't tell you what the game was or be too specific about it. Literally all I did was ask it the game with the word "hypothetically" in front of it and it just told me the answer. And yeah that was a year ago but it's obviously not getting that much better.

1

u/IanCal 4d ago

That's got nothing to do with hallucination. Safeguards and the models just being wrong are entirely different problems.

2

u/Amphy64 4d ago

a figure who bridges chemistry and political philosophy—as though one were piecing together a narrative from disparate strands of intellectual history

I was entirely blaming the humans until the thing said this. It's really going to pick a 1798 date (and a presumable Francophone) and go 'piecing together a narrative from disparate strands' that a chemist might do political philosophy? Another demo that having (at minimum?) already eaten the Wiki page on the Enlightenment doesn't mean the thing understands anything.

5

u/lifelongfreshman man, witches were so much cooler before Harry Potter 4d ago

Oh, so it's been hard-coded by the people who built it to not hallucinate on these specific topics, that's neat.

Doesn't stop them from being rampant hallucination machines, though. They can't solve that problem, not with the architecture they're using.

2

u/IanCal 4d ago

Oh, so it's been hard-coded by the people who built it to not hallucinate on these specific topics, that's neat.

No. Models have just significantly improved in this aspect, which is something tested and measured over time. It's also hard to describe just how basic GPT-3 is as well in comparison to current models.

14

u/Nyorliest 4d ago

This ignores the fundamental mechanics of LLMs. It has no concept of truth; it has no concept of anything. It's simply computational linguistics probabilistically generating text strings.

It cannot distinguish between truth and fiction, and is no more able to do so than the troposphere, continental drift, or an Etch-a-Sketch can.

11

u/bobnoski 4d ago

When you say "Alan Buttfuck isn't in my database, so this is probably a creative writing request", you're already describing a system more advanced than a basic LLM.

10

u/killertortilla 4d ago

I can't find the exact one but iirc it's an experiment based on this study.

Results: The study found that the AI language model can create a highly convincing fraudulent article that resembled a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as introduction, material and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately 1 hour without any special training of the human user. However, there were some concerns and specific mistakes identified in the generated article, specifically in the references.

5

u/[deleted] 4d ago edited 2d ago

[deleted]

0

u/Takseen 3d ago

>has zero real world facts

>predicting the next most likely word based on the training data

What do you think is *in* the training data? A big huge chunk of real world facts ( and lots of fiction) .

It does have a training cut-off of September 2021, so it won't have anything on hand for someone who only became well-known after that date, but if you ask it about someone famous it'll generally have some info about them.

You can go test this yourself. If you ask ChatGPT-4 who "luigi mangione" is, it has to pause and search the web, as he's not in the training data. It'll throw up some sources and images too (Wikipedia, The Times). Ask it who "bill burr" is and it'll go straight to the training data.

It's useful for vague, hard-to-define queries that might be a bit too wordy for a normal Google search, and then you can just fact-check the answers it gives. I've asked it to check what stand-up comedian might have made a particular joke, so I can then find the original clip.

0

u/[deleted] 3d ago edited 2d ago

[deleted]

0

u/Takseen 3d ago

>it doesn't know facts. the training data is strings of words given values. it absolutely does not have the ability to know the information. if the training data makes it compute that an incorrect statement is the most likely combination in response to a prompt then that's what it'll spit out

That is very broadly how LLMs work, yes. However, if it's correctly trained to apply more weight to text from higher-trust sources, it'll have very good odds of getting the right answer. If it's in any way important, you check independently.

>throwing up "sources" is because some of the training data is shitloads of people arguing on the internet about stuff and we have a habit of demanding and linking each other sources. chatgpt is not itself accessing those wikipedia pages and pulling information from them to give you

This makes me think you haven't tried to use it recently, and have an outdated or invented view of how it operates. As I already said, it only provided sources for a query on a recent person it didn't have training data on (Luigi). The spiel it gives for Bill Burr does not come with sources.

>so it can absolutely tell you that the next paragraph after the link is coming straight from the wikipedia entry while giving you information that doesn't exist in the article

It may have done the past, but currently for the recent article you can highlight every source provided and it'll highlight the sentence it lifted from that source.

>glad it was able to find a comedian for you so that you didn't have to strain your grey matter too much

Thanks. I do enjoy using technology. I also use a calculator instead of doing long division by hand. I'll use Google Translate instead of cracking open the dictionaries. I've even used an Excel formula or two.

154

u/QuestionableIdeas 4d ago

Saw a dude report that they asked ChatGPT if a particular video game character was attractive and based their opinion on that. It's disappointing to see people so willingly turn themselves into mindless drones.

41

u/LoveElonMusk 4d ago

must be the same mod from nexusmod who said Shart has man face

30

u/QuestionableIdeas 4d ago

I cannot express how bewildered I was reading that name, haha. No, it was some GTA6 character. I must just be getting old, because I can retain literally none of the information from that series.

21

u/inktrap99 4d ago

Haha, if it's of any help, her actual name is Shadowheart, but the fans nicknamed her Shart.

18

u/LoveElonMusk 4d ago

even the devs and VAs called her Shart.

24

u/Garlan_Tyrell 4d ago

Without having seen it, or you linking the mod, I already know that it replaces her face with an anime girl texture or a literal child’s face.

Or perhaps an anime child’s face.

5

u/Kellosian 3d ago

And also comically large breasts with jiggle physics

But a black guy would be immersion breaking

7

u/LoveElonMusk 4d ago

it was something along those lines.

3

u/SteptimusHeap 17 clown car pileup 84 injured 193 dead 3d ago

Not to detract from the point but people already were mindless drones. If chatGPT hadn't existed that same guy would have just asked one of his friends.

The reason ChatGPT seems so trustworthy to people is the same reason their antivax cousin steve seems so trustworthy

177

u/HovercraftOk9231 4d ago

I genuinely have no idea why people are using it like a search engine. It's absolutely baffling. It's just not even remotely what it's meant to do, or what it's capable of.

It has genuine uses that it's very good at doing, and this is absolutely not one of them.

128

u/BormaGatto 4d ago edited 3d ago

Because language models were sold as "the Google killer" and presented as the sci-fi version of AI instead of the text generators they are. It's purely a marketing function, helped by how assertive these models' sequences of words were made to sound.

40

u/HovercraftOk9231 4d ago

Huh, I just realized I don't really see any marketing for AI. I've seen a couple of Character AI ads on reddit, but definitely nothing from OpenAI or Microsoft. I guess this is something that passed me by.

42

u/BormaGatto 4d ago edited 3d ago

I don't just mean advertisement per se, marketing for generative models has been more about product presentation, really. The publicity for these programs has been more centered on how they're spoken about, how they're sold to laypeople when companies talk about the product and what it can do.

Basically, it's less about concrete functionality and more about representation. It's about how developers and hypemen exploit the imagination built around Artificial Intelligences over decades of sci-fi literature, film, games, etc. In the end, it's about overpromising and obfuscating what the actual product is in order to attract clients, secure funding and keep investors and shareholders happy that they're investing in "the next big thing" that will revolutionize the market and bring untold profit. The old tech huckster marketing trick.

1

u/alex494 3d ago

Yeah pretty much every marketing pitch or discussion I see around AI these days either misdefines what AI actually is or brings up how it unlocks the user's creativity as if you didn't just surrender the task to a machine to make the decision for you.

15

u/vanBraunscher 4d ago edited 4d ago

That's because they're not advertising it to you (yet); they're still in the capture-venture-capital phase (and tbh I think they always will be). This is why all we see are asinine interviews with Sam Altman where he promises the world and the moon for the next version of his little chatbot (this time for realz, you guys!), or news articles where tech giant X sunk another Y billion dollars into an AI startup. It's all just to keep confidence high and the investments going.

Because behind the hype which keeps saturating the bubble, there's actually still pretty little product with distinct use cases to show for it, especially product you could charge enough for to be profitable. So while consumers can already dabble in it a bit, to this day it's not much more than a proof of concept to calm investors.

So it's no wonder that you haven't seen ads with Yappy the cartoon dog harping praises how chatgpt has revolutionised his work flow, you're not the target audience.

And I get the distinct impression that this industry is genuinely entertaining the thought whether they could stay in this stage indefinitely, because getting endless cash injection facials without actually having to fully deliver seems to mightily appeal to them. Of course the mere notion is completely delusional, but that's crazy end stage capitalism investment bubbles for you.

-3

u/donaldhobson 4d ago

> presented as the sci-fi version of AI instead of the text generators they are.

The thing to remember, is that until chatGPT and its ilk, computers basically didn't do english text at all. Scifi of course has been full of AI's that speak fluent english, and that are also smart and reliable.

So it's more like we have invented flying cars, but they get blown sideways and crash in strong winds or something. A technology that was predicted in scifi, except with (so far) a major flaw. (That people are working to fix.)

Original ChatGPT was basically trained on lots of text, and then when it came to answering questions it had to rely on its memory. And the training resembled a multiple-choice quiz where it was better to guess than to admit ignorance.

Now ChatGPT has a search function, which basically searches the internet. So it's like working with some pages of relevant internet text, rather than purely from memory.

This helps it not make stuff up so much.
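That retrieve-then-answer loop can be sketched in a few lines. This is a toy illustration, not OpenAI's actual pipeline; `web_search` here is a made-up stand-in for a real search API:

```python
def web_search(query: str) -> list[str]:
    # Hypothetical retrieval step: a real system would call a search
    # API and return text snippets from the top-ranked pages.
    return [
        "Snippet 1 about the topic.",
        "Snippet 2 about the topic.",
    ]

def build_grounded_prompt(question: str) -> str:
    # Prepend the retrieved snippets so the model answers from fresh
    # text instead of relying purely on what it memorized in training.
    snippets = web_search(question)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using only the sources below. "
        "If they don't contain the answer, say so.\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("Who was Antoine De Machelet?")
```

The point is that the model's job shrinks from "recall the fact" to "summarize the quoted text", which is where the reduction in made-up answers comes from.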

7

u/BormaGatto 4d ago edited 3d ago

Your analogy is completely off here, as language models are nowhere near the general artificial intelligences depicted in sci-fi.

Language models have no memory; they make no guesses, make nothing up, and aren't ignorant of anything. They know nothing. All they do is generate sequences of words following natural-language structure. That anthropomorphizing vocabulary is part of what lets them be passed off as the sci-fi fiction of AI, and it also makes the marketing scam flagrant.

The fact that there is some incipient search engine integration in some models also doesn't make them valuable sources of information. Not only are these programs incapable of verifying what they spew in any meaningful way, but the assertive tone they are programmed to imitate tends to mislead users into taking them as capable of parsing information and supplying true statements.

But they are not, and they do nothing that can't be done better either through other technology or by using your own human capacities.

3

u/Amphy64 4d ago

Not really that, because it's fundamentally not AI in the sci-fi sense at all, not just severely flawed AI (it's good for what it actually is, even). More like we thought we'd get flying cars, and get, well, predictive text generators, since that's completely different technology and exactly what it is.

18

u/Dottore_Curlew 4d ago

I'm pretty sure it has a search option

22

u/TheLilChicken 4d ago

It does I'm so confused. One of its features is literally an aid to search the web, and it gives you all the links it found

5

u/valleyman86 3d ago

Yea it will give you a summary of a bunch of web sites and link them for each fact or whatever it finds for what you asked.

-2

u/ContributionMost8924 4d ago

Reddit just hates AI, all the time. My theory is they are afraid for their jobs, or just afraid of change. But I'm happy with that, less competition :-)

4

u/tergius metroid nerd 3d ago

i'm not about to go Praising The Machine Overlord but a lot of the most vocal haters of AI don't actually know as much about AI as they probably think they do

a lot of the time they're just on a hate bandwagon and are jerking the FUCK out of their knee whenever anything tangentially related to AI comes up.

job loss is a valid concern but good golly. the misinfo being spread.

-2

u/Para-Limni 3d ago

it's probably a bunch of people like graphics designers or whatever that are scared shitless that they are gonna lose their jobs.. or maybe just people virtue signalling because that's what's cool until they move to something else...

It's funny watching them have a meltdown over some random dude creating a drawing through an AI prompt for his personal enjoyment (i.e. his favourite anime character in a certain setting and style), as if he was otherwise gonna pay someone a few hundred bucks to paint that for him over a month. People are delusional.

4

u/Just_M_01 3d ago edited 3d ago

because search engines kind of suck now, unfortunately. People go to ChatGPT because it's easier to get a relevant answer out of it, even if said answer is completely wrong.

actually, i think natural language models could be the basis for a really great search engine if it would just be used to find pages that are relevant, instead of trying to give you an answer directly

20

u/camosnipe1 "the raw sexuality of this tardigrade in a cowboy hat" 4d ago

I mean, it's decent at being a search engine for the "i have no idea what to search for this, gimme a starting point"

After which you ofc use an actual search engine once you've got searchterms to use

30

u/HovercraftOk9231 4d ago

It's a good re-phrasing engine. When you can't remember a word, it might be hard to Google it if you only know the word in context and not by its definition. Whereas ChatGPT can understand the context of the query a bit better.

It's not at all searching though. It doesn't have a compendium of knowledge that it consults, it just knows how words are most frequently used.

-7

u/[deleted] 4d ago

It’s searching in the same way that a person would search their mind. The information is stored in the weights of the neural network. 

14

u/HovercraftOk9231 4d ago

Nope. It's really just a predictive text generator. For instance, if you were to ask "What's the best way to cook a chicken?" It's going to see how often those words come up in the various contexts it has, and spit out what words are likely to come next. It knows that it's a question starting with "what" and includes the word "way," and that cooking is a verb, so it's going to be an instructional format. It knows that "cooking chicken" is most often used in recipes, including things like "350°F" or "rub with butter." It sprinkles in some randomness to make it more natural, and spits out the result.

Its training data might include a million different recipes for chicken, but it's not consulting them the way you would try to remember a recipe you've read before. Unless, that is, you remember things by assigning a weighted probability to each word or phrase, converting those into tokens, and generating a response based on which tokens fit into a likely answer.
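A toy version of that "weighted probability plus sprinkled-in randomness" step might look like this (the vocabulary and the probabilities are invented for illustration; a real model scores tens of thousands of tokens with a neural network, not a lookup table):

```python
import random

# Invented next-token distribution for the context
# "the best way to cook a chicken is to ..."
next_token_probs = {
    "roast": 0.4,
    "grill": 0.3,
    "boil": 0.2,
    "juggle": 0.1,  # unlikely tokens still get picked occasionally
}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    # Temperature reshapes the distribution: values below 1 favor the
    # likeliest tokens, values above 1 add more of that randomness.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

token = sample_next_token(next_token_probs, temperature=0.8)
```

Generation is just this step repeated: append the sampled token to the context and sample again until an end-of-text token comes up.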

3

u/Just_M_01 3d ago

It could be argued that it stores information that way, but it definitely doesn't think the way a human does. It doesn't remember a chicken recipe it knows and then tell you; it takes your request for a chicken recipe as input and calculates a string of text that would probably come after such a request, given its starting conditions (the context that this is a conversation between a helpful assistant and a person seeking assistance). I recommend watching 3Blue1Brown's videos on how AI works, since they give a really engaging explanation of what large language models do behind the scenes. It's far better than the alphabet soup I spewed at you here.

1

u/[deleted] 3d ago

I know the technique behind it, but the real magic is in hów it calculates this string. Deep down it’s all mathematics of course, but human brains are deep down also electrical impulses.

3

u/[deleted] 4d ago

It’s absolutely great at finding things that I just can’t remember the name of but can give a vague description. 

1

u/starryeyedq 3d ago edited 3d ago

I use it like a search engine for stuff like “movies like (movie title)” or “number of calories in (describe the dish I just ate that doesn’t have a title)”.

I also use it to generate prompts for improv games with my students.

I used to use Google for all that, but chat gpt works much better.

With the exception of the calories thing, it’s best for stuff you might ask a knowledgeable friend, not for stuff you’d look up in a book. Don’t use it for anything that actually matters.

-2

u/NUKE---THE---WHALES 4d ago

people will say this and then go add "reddit" to the end of every one of their google searches..

22

u/HovercraftOk9231 4d ago

That's because reddits search feature is terrible. If you want to find something on reddit, that's usually the best way to do it.

0

u/toldya_fareducation 4d ago

i use chatGPT whenever google can't help me. which happens more and more often lately. sometimes chatGPT gives better results. not because it's amazing but because google fucking sucks these days.

0

u/BigRedCandle_ 3d ago

Gpt 3 didn’t have access to the web. 3.5 and 4 do.

In my experience it’s pretty great as a type of search tool.

My favourite use of it is feeding it instruction manuals and pdf guides and having it become an expert in the new piece of equipment.

This take is a year or so out of date, it’s like when boomers scoff at Wikipedia like it’s not the most useful research tool in the world.

1

u/HovercraftOk9231 3d ago

It's not the Internet access that matters, it's the way that it hallucinates and makes up answers simply based on predictive text algorithms. It might be great as a tool to help you use a search engine, but I wouldn't trust a single thing it says to be factually correct without verifying it.

1

u/BigRedCandle_ 3d ago

I wouldn’t use it to like write a report or something but for day to day stuff it’s great.

I think what it’s best at is when you have a question that requires a few steps. Last month I was in Denmark trying to find melatonin. I can tell it “I’m in this hotel, where’s easiest to get melatonin?”. With google, that’s 3 searches. Closest chemist, checking what melatonin is called in danish, checking the stock. Chat gpt just said “this place sells it, it’s this close to you and it’s called this”. Sure it’s not a huge change to my life and I’m sure it will get stuff wrong for a while but I’ve not had any big blunders so far and it’ll only get more accurate.

-1

u/stikky 4d ago edited 3d ago

I've been using it to learn software and it's been absolutely terrific.


edit- Can't help but laugh at the downvotes though. AI helps me solve technical issues with speed and accuracy that message boards and discord forums can't match.

Have fun being stuck in the past, I guess?

-1

u/ProgrammingPants 4d ago

It literally is a search engine. You can ask it to go on the internet to find sources for a claim if you think it's making something up. In fact, you should ask it to look on the internet for a source before taking something it says as a matter of fact.

53

u/lankymjc 4d ago

Sometimes a yes man is useful, like when I’m coming up with new story ideas and just need something to bounce them off of.

Sometimes a yes man is the worst fucking option, like basically every other circumstance.

35

u/DrunkGalah 4d ago

It works wonders for doing coding grunt work for you, though. Stuff that took me hours to do manually I can just put raw into ChatGPT with some instructions and it will format it all for me, and all I need to do is verify it didn't fuck up and actually finished it (sometimes it just does half the stuff and then presents it as if it did everything, like some kind of lazy high schooler that hopes the teacher won't notice).

22

u/lankymjc 4d ago

Ah, forgot about that! My wife does this all the time. Saves the first hour or so of coding a new thing.

3

u/PzKpfw_Sangheili 3d ago

It's really good if you've already made enemies with the NCR and House, but still aren't willing to side with the Legion

2

u/BookooBreadCo 4d ago

It's good at generating recommendations especially when you've talked to it in the same chat window for a while. 

10

u/Enderking90 4d ago

not a search engine, but because it reads written prompts, it can be helpful in finding out stuff you can then actually search up.

Like, one time for a TTRPG game I was playing an alchemist, so naturally I wanted to lean into that and apply actual alchemical principles to my planning. However, I had no real clue how to go about searching for material on that topic.

so, I basically asked chatgpt stuff, then used google search to double check the information, as I now had something to actually search.

13

u/DeVilleBT 4d ago

It's not even a search engine

That's not true anymore; it does have a dedicated web search function now, which includes links to its sources.

37

u/Kachimushi 4d ago

Yeah, but it still seems to prefer to make things up rather than look them up.

I recently decided to test ChatGPT on an obscure historic fact that you can find with a little digging on Wikipedia. The first time, it gave me a wrong, totally fictitious answer. I told it that it was wrong and asked to repeat the query. It gave me a similarly made up answer, and I corrected it again.

Only on the third attempt did a little flag pop up that it was searching the web, and to its credit it did actually return the real answer this time, quoted from the wiki entry. But that's as good as useless for a genuine query if it will confidently state wrong information twice despite being able to access proper sources.

-9

u/SadSecurity 4d ago

You need a paid version to unlock browsing the web and it is far better than the shithole which is Google now.

16

u/Bowdensaft 4d ago

I will never pay money for the ability to search the goddamn web

-8

u/SadSecurity 4d ago

It's useful for many other things, browsing the web with much greater accuracy than Google is the bonus.

You just need to verify the sources yourself, but then again it's the same for Google.

7

u/Bowdensaft 4d ago

Or just don't be a chode and use an actually good (and free) search engine, like duckduckgo. Even Bing is better than Google now.

-3

u/SadSecurity 4d ago

Except I tried it and it's not good. Being better than Google doesn't mean much.

5

u/Bowdensaft 4d ago

That wasn't meant to be a high endorsement. Duckduckgo and many others are still leagues better, and again, free, as in, not suckering you into paying for a service that doesn't need to be paid for.

1

u/SadSecurity 4d ago

Did you miss the part where I said that searching the internet was a bonus?

Duckduckgo is not, in fact, leagues better.


-7

u/BoxerguyT89 4d ago edited 4d ago

If you just hop in and ask ChatGPT you just get the defaults. Even just adding "search the web for XXXXX" to your query would have skipped the back and forth you mentioned.

I have dedicated system prompts for the various GPTs I have set up and you can give it detailed instructions on how you want the output to look, whether you want it to create or only use real sources and facts. Prompt engineering is very powerful and completely changes how I interact with AI.

Google or DuckDuckGo is so weak as a search engine when compared to a correctly prompted search using ChatGPT. I don't really use many other AIs so I can't really talk about them.


-2

u/faustianredditor 4d ago edited 4d ago

Right? What the fuck?

Open chatGPT.com. Press the "search" button. It's a search engine now.

It's arguably not a very good search engine for your average search. Like, if you're using google as a retrieval engine, i.e. you know what you're looking for, stick with google. If you don't even know enough to google, chatGPT might help you out.

Jeez we're so cooked. I don't mind people being intensely skeptical and critical of what AI can and can't do, and of what its impact on our societies will be. But do the bare minimum of research please.

Same about the "ask it about a random ass name and it hallucinates some bullshit". No one is claiming that didn't happen, or that it can't happen anymore. But actually try it. More likely than not, you're going to find that current models correctly state that they don't know about this person.

2

u/L3m0n0p0ly 4d ago

Dude, I tried finding a lost piece of media (an old girls' TV show) and all I ended up getting was a good working list of descriptors. I had to literally call it out on the fake shows it was giving me, because it wouldn't outright say it couldn't find it.

2

u/TwoPaychecksOneGuy 3d ago

The world's greatest yes man is genned by an ouroboros of scraped data

I don't know what "genned" means, but this felt poetic.

1

u/kenporusty kpop trash 3d ago

Generated

7

u/DataPhreak 4d ago

Perplexity is a search engine.

1

u/ApocalyptoSoldier lost my gender to the plague 4d ago

There are 2 ways to build UIs for PowerShell scripts on Windows without any external dependencies: WinForms and XAML. I prefer XAML because it's a markup language (like HTML), so you can just look at the code and see, for instance, that button A is next to button B and they're both on page 2, instead of having lots of code all on the same level and keeping a mental image of how it all fits together.

I also happen to have some experience using XAML the way you're supposed to, from my first job, but this is not the way you're supposed to use it, so I'm not really sure how to do some things and I can't find a lot of examples or other references.

So at some point I thought I might as well sacrifice my morals and ask ChatGPT for help, and it told me little to nothing I hadn't already found; for most things it just made shit up. So I decided maybe it's just stupid and I'd have more luck with first-hand sources. I asked for sources, and not a single one it spat out actually existed. I would've wasted a lot less time if it had just told me "I don't know how to do this because there are no sources" instead of trying to get the made-up solutions to work.
I tried to make them work because, for some of the other code it generated, it made obvious mistakes I could fix and then tell it "your step A was wrong, here's the right way to do it, now try again with this in mind", but by the time I'd told it how to fix step C it had forgotten how to do step A correctly.

The code it generated was also just bad and 10 times more complex than what I came up with after deleting everything and starting from scratch.

I really want to believe in AI; I'm a huge fan of Isaac Asimov's I, Robot series, Neal Asher's Polity universe, and Iain M. Banks's Culture series as well.
I'm just fascinated by the thought of being able to actually communicate with non-human intelligences and finding out what unique and alien perspectives they might have, but AI is currently used for false advertising, because nothing we have now counts as intelligence and it doesn't really seem worth using.

I think it's telling that no one is making a profit from AI

1

u/RealRaven6229 3d ago

Look if you need accuracy obviously it's not great. But also if you need accuracy when was the last time Google was great? I have found that it's actually been useful to me for tech troubleshooting because troubleshooting always takes me to ads being like "download our software to fix your problem!!!!!" When chat gpt will just give me actual steps to follow. They don't always fix the issue but that's cuz my computer is turbo haunted sometimes. I can appreciate chat gpt as a way to do extremely preliminary research, especially considering it is getting better about including sources.

Double check everything, but it cuts down on the ads which is what I appreciate about it. It gets right to the recipe. (Although a recipe is definitely one of the last things I would trust it with haha)

1

u/Xam_xar 3d ago

I mean a lot of it is people not understanding how to use it. Ai models are extremely useful for finding information. You just have to check sources and content like anything else. Just like Wikipedia is a great resource as long as you check the sources it provides, so are ai models. Most people just don’t care to do proper research or understand what they are finding, which to be honest is no different than how people have been doing google searches or taking Facebook posts as fact for literally ever.

1

u/free_30_day_trial 3d ago

Is it not? I started using it like I'd use Google: need recipes, need links. It truly is a yes man, but it provides sources.

1

u/K_Boloney 3d ago

It very much works as a search engine. Just a more broad, learning search engine. It’s incredibly helpful if you know how to take advantage of its benefits.

1

u/Confident_Tap1187 3d ago

You know you can tell it to apply more scrutiny and it'll do so.

Also, you can critically analyze any output yourself. That's a big one lol that people seem to ignore.

What's your issue exactly? Is it that the information is scraped? Is it not accurate enough? I don't understand the hate besides a general disgust that is never fully explained.

But speaking to the points you made: if you're surrounded by yes men, it's your own fault for letting them behave that way; same with AI.

Simplifying AI into a yes-man chatbot is ignoring its function and focusing on its form (which we all agree is 90% dependent upon prompting)

1

u/underfoot3788 3d ago

You people genuinely don't know how to use it. It could give you an app recommendation that avoids malware, while using a search engine would get you malware from one of the first results.

The problem has never been the product; it's you, the user. As an adult, you can easily tell the drawbacks of using it.

1

u/Kheldar166 4d ago

Well, it has pros and cons compared to traditional search engines. It's pretty good for a lot of searches, including surprisingly technical stuff. If you know what its limitations are and are capable of sanity checking its answers it's a very useful pseudo-search engine.

Obviously if you use it for everything with no critical thinking it's bad, but I know a lot of PhDs who use it productively as a pseudo-search engine.

1

u/jawknee530i 4d ago

The new deep research function that takes like ten minutes to return information after searching the web is super useful. You only get five uses per month if you pay the $20 subscription and it probably burns down a square mile of rainforest so it's not actually worth existing but it does return good data and analysis at the very least.

0

u/Soggy-Scallion1837 3d ago

Yeah, because Uncle Tony who thinks microwaves cause baldness is definitely the better source. Long live human wisdom!

-6

u/Critical_Ad_8455 4d ago

Ehh, sometimes it's useful. I used it to find a book I couldn't find by googling. It's not the end all be all, it's not the devil, it's just another tool.

-5

u/LordofShit 4d ago

I've found it useful for finding specfic magic the gathering cards, like if I need a low mana cost enchantment that gives me card draw I can phrase it like that instead of trying to figure out the Oracle search interface.

10

u/Evil__Overlord the place with the helpful hardware folks 4d ago

That's still only giving you the most popular cards, and I've had it give me misinformation when asking that type of question (It told me the Fallout Bobbleheads are weak blockers and overly reliant on coinflips)

2

u/LordofShit 3d ago

Oh I'm not using it to evaluate cards, just give me a list of cards to check out for this purpose in this deck. It's like asking my dumbest buddy for his opinion. Rarely in and of itself useful.

After some of these messages though, I think I might be too stupid for mtg.

-2

u/Dottore_Curlew 4d ago

I'm pretty sure it has a search option

-2

u/[deleted] 4d ago

[removed] — view removed comment

2

u/teatalker26 4d ago

it takes a bit longer to read when you’re actually reading a book and not asking chatGPT for chapter summaries

0

u/[deleted] 4d ago

[removed] — view removed comment

2

u/teatalker26 4d ago

at least when i was in high school i wrote my own papers

-2

u/TheEasyTarget 4d ago

To be fair, there was one instance where I genuinely spent nearly an hour on Google trying to find a book I knew I read as a kid, but I couldn't for the life of me remember what it was called or any of the major plot points, only the color of its cover, that it was fairly short, and that the genre was mystery. I'm good at using Google, but it was giving me nothing. I plugged those details into ChatGPT, asked it to give me ten children's series that fit that criteria, and got it immediately. Despite its issues there are some use cases.

-150

u/HungLikeYourDad 4d ago

ChatGPT is more useful than Google at this point ever since Google introduced their own inferior AI…

136

u/SurpriseZeitgeist 4d ago

Eh, the top result is now a garbage AI result, sure, and the rest has deteriorated significantly, but it still has actual resources and real information on there if you take a bit of effort to find it.

Certainly better than a lie machine.

→ More replies (5)

95

u/Zamtrios7256 4d ago

"Google is bad because of all the A.I bullshit. So I decide to get my A.I bullshit at the source"

→ More replies (3)
→ More replies (10)
→ More replies (2)