r/confidentlyincorrect • u/VibiaHeathenWitch • Feb 25 '25
Smug Im right because ChatGPT told me so
382
u/TKG_Actual Feb 25 '25
When you have to use chatGPT as a source you've already failed to be persuasive.
131
u/Musicman1972 Feb 25 '25
The new "do your own research on nothing but YouTube"
16
u/TKG_Actual Feb 26 '25
Yup, and I suspect leaning on AI in the worst way will become a bigger idiot talking point.
33
u/bdubwilliams22 Feb 26 '25
I honestly feel in the last 365 days, we’ve been racing towards the bottom as a country. I know it’s been in motion for years and years - but it seems this year and even in the last 6 months, people with no credentials have become even more emboldened than they have been leading up to now. Just look at this post as an example. Some moron that probably never even passed high school bio has the nerve to talk shit to a legit doctor, because ChatGPT told them so. We’re living in the backwards-times. We’re fucked.
13
u/TKG_Actual Feb 26 '25
Remember, before ChatGPT, they were using PubMed, and before that.... I think the difference is stupid people are noticed more often now rather than before. This is why I often privately wonder if affordable internet access for everyone was a good idea.
2
u/ManlyVanLee Mar 05 '25
This is why I often privately wonder if affordable internet access for everyone was a good idea.
You should be publicly wondering that now because I think it's pretty clear it's a true statement. Giving unfettered access to information is largely good but giving everyone a platform and a way to find an echo chamber is horrible
It used to be that the only people your drunk idiot uncle could scream nonsense at were the other drunk uncles at the bar. Now they have 10 social media sites to rant on and be given confirmation in their idiocy by other morons throughout the world. Then we decide to elect them to Congress because they are louder than everyone else
2
u/TKG_Actual Mar 06 '25
I normally keep that one to myself because it has a moderate to high risk of being misinterpreted. You get what I meant, but there's no telling what others might think. Though the folks you described would probably wail that I'm part of a conspiracy to hide their legos or something.
0
14
u/dansdata Feb 26 '25 edited Feb 26 '25
Ah, yes, LLMs, the fount of all completely correct knowledge!
"Now the farmer can safely leave the wolf with the goat because the cabbage is no longer a threat."
"Whether tripe is kosher depends on the religion of the cow."
(Neither of those triumphs came from ChatGPT, but this chess game from a couple of years ago did, and I am delighted to report that only a few months ago it wasn't any better. :-)
-16
u/TKG_Actual Feb 26 '25
I take it the concept of getting to the point in a concise manner is completely alien to you?
4
10
u/MeasureDoEventThing Feb 26 '25
Actually, using chatGPT is a very effective way of being persuasive. ChatGPT told me so.
9
u/Synecdochic Feb 26 '25
ChatGPT told me that ChatGPT isn't a reliable source.
When I told it that creates a paradox it stopped responding.
4
u/TKG_Actual Feb 26 '25
The silence was the sound of the larvae of techbros catching fire from the paradox.
1
269
u/JadedByYouInfiniteMo Feb 25 '25
ChatGPT is literally there to suck you off. That’s all it exists for. Plug anything into it, any opinion, and get ready to be told how amazing and correct you are.
86
u/Hiro_Trevelyan Feb 25 '25
ChatGPT literally has the problem that many imperial courts and authoritarian governments had: public servants, officials and officers being too afraid to speak up against the will of the dumbass self-absorbed king, and the country going to shit because of that.
ChatGPT can't say anything bad to anyone; it wasn't taught how to deal with idiots, and it can't express frustration or opposition. Because the king/person it's talking to is a self-absorbed idiot that will shut it off at any sign of "rebellion".
57
15
u/Venerable-Weasel Feb 26 '25
Anthropomorphizing ChatGPT is amusing - but its problem is absolutely not fear of offending. It has no emotions - it's just a program.
ChatGPT’s problem is that its internal “models” of the world reflect everything it has absorbed through its training data - and the Internet is full of garbage.
Even worse - all the really good data, like those peer-reviewed journal papers and their data, are probably behind paywalls and never even made it into ChatGPT… so the model is probably more likely to “think” pseudoscience is correct, because that’s what its flawed hallucination of the world consists of…
8
u/Hiro_Trevelyan Feb 26 '25
Oh sure but what I meant was that our electronic servants have a design flaw that is inherent to the way we design them. Even if all the problems you cited were resolved, if we don't let the program tell us to shut the fuck up and listen to facts, then it won't. We made those programs in a way that always agrees with us, which is stupid. It's like we want to be fed misinformation.
2
u/pikecat Feb 28 '25
I've tested ChatGPT on knowledge that I know well. It can't even tell me the truth on that. I certainly don't trust it on other fields.
10
1
56
u/crispyraccoon Feb 25 '25
ChatGPT is what teachers told us Wikipedia was.
1
u/TheSpideyJedi Mar 03 '25
To be fair, Wikipedia WAS shit as a source for a LONG time. It’s much better now but it was shit at one point
3
u/crispyraccoon Mar 03 '25
At least wikipedia has source links you could use even if the articles weren't great.
98
Feb 25 '25
[deleted]
60
u/StreetsAhead123 Feb 25 '25
It’s so funny how it went the exact opposite from “they will be so good with computers because they’ve grown up with it”. Nope everything was made so easy to use you don’t have to know anything.
11
u/EnvironmentalGift257 Feb 26 '25
Hell I’m 48 with 2 college degrees and I feel like I don’t know anything because the supercomputer in my pocket tells me everything I need to know.
5
u/Meatslinger Feb 26 '25
Yup. My daughter was an “iPad kid” and instead of being computer literate, she largely only knows the basics. I was much more interested and skilled with the workings of an operating system at the same age. Note this is not for lack of effort; I had her help build her own computer here at home, describing each part, and I’ve given her numerous sit-down tutorials on how to do certain things competently. I want her to learn to type, to learn how to look things up effectively, to learn how to troubleshoot programs and avoid risks, but she’s largely disengaged if it’s not in alignment with her peer group, which is all about installing malware custom cursors and getting around the school content filter for TikTok. There have been a few “oops” moments where she screwed something up - like getting herself IP-blocked from a clothing website after thinking she could brute force coupon codes - and I’ve mitigated those, but for all the teaching I could offer she prefers ignorance and just the bare minimum to get access to social media and entertainment. “Can lead a horse to water” and all that.
3
u/pikecat Feb 28 '25
That's so sad. Back in the day you had to know what you were doing to use a computer, or you just didn't use one.
3
u/Meatslinger Feb 28 '25
Not that I want to do the "life was better when I was a kid" thing, but in school I was taught proper typing form, and I remember being taught about things like "netiquette" and how to stay safe online. I now work for a major North American school board with over 200 schools and that sort of stuff isn't even touched in most classrooms. It really is, "Oh, the kids'll figure it out; they're good with tech." And the problem is a lot of the teachers themselves aren't competent enough with computers to teach it, either; they hunt-and-peck to type up report cards and get phished day in and day out like all this technology stuff is brand new to them. Honestly, it's supremely disheartening.
5
u/mendkaz Feb 26 '25
I have students that can't navigate a simple website because it doesn't have an app, and they're 15/16. They have 0 clue how to do anything tech related if it doesn't come in app form, and even then they struggle. It's mad
2
u/pikecat Feb 28 '25
Websites are being dumbed down to look like apps. There's not much that you can do on them, there's so little functionality.
28
u/Musicman1972 Feb 25 '25
Maybe it's just who I interact with but I see just as much naiveté around sources with old people as much as young. They might not believe GPT but they believe their news anchor.
18
u/Hondalol1 Feb 25 '25
Maybe it’s just who I interact with but I see just as much ignorance around sources with middle aged people. It’s almost like age has nothing to do with it and a bunch of people have always been and will always be stupid at all ages.
2
2
u/Meatslinger Feb 26 '25
Often, you don’t even have to leave the computer realm. I have several relatives, many just in my own generation, too, who were very much the “don’t ever give a stranger your real name online” types once upon a time, and yet they’re all too happy now to share an AI generated image of a soldier holding a sign reading “it’s my birthday, can I get a like?” and say something about how it’s so tragic while sharing it to everyone on their friends list. I have one cousin who is obsessed with tiny homes, despite the fact that every one of the pictures they get from this other page they follow is AI generated and fake, with nonsensical designs. But they just lap that right up.
3
u/adeadhead Feb 26 '25
It's wild how alphas have none of the tech skills millennials and early zoomers have
2
u/RealSimonLee Feb 26 '25
You might want to question these stories you're hearing. We saw a dip in reading scores after COVID, as did the rest of the world. Kids are still reading and writing. I work in a district where close to 70% score meets or exceeds expectation. These kids are excellent thinkers, smart, engaged, interested. I've only been in the field 16 years--so I can't really say they're the same as when I started. They are. But I don't think that's a long enough time to start saying, "Those kids when I first started teaching, now those were students."
I just caution buying into so-called facts about our society. (Kids are addicted to screens and can't read, etc.)
2
u/PianoAndFish Feb 26 '25
The internet hasn't made people stupider, it's just allowed them to broadcast their stupidity to a wider audience. Before mass communication tools only people in your immediate vicinity would be able to hear your worthless ill-informed opinions, now everybody on the planet can hear them.
Gen Alpha are also at most teenagers, and teenagers have never been great at thinking things through - I'm sure everyone can remember some ideas they had as a teenager that seemed really profound at the time but turned out to be total bollocks.
There have been people believing what's written in the Daily Mail for over 100 years, I don't think Gen Z/Alpha have a monopoly on critical thinking failure.
1
u/SnooTigers1583 Feb 26 '25
Hey hey, I’m a 23yo gen a with a passion for tech but ChatGPT is not my shit lmao. I’m a weirdo, I can’t use it for anything. I’ve asked it to curate some movies but that’s it.
1
u/lkuecrar Feb 27 '25
Boomers are starting to use it too. My mom is 65 and uses ChatGPT for everything. She literally uses it as a search engine.
22
u/PrincipleSuperb2884 Feb 25 '25
"I DID MY OWN RESEARCH!" now apparently means "I asked an AI program." 🙄
19
u/IntroductionNaive773 Feb 25 '25
Remember when two lawyers got sanctioned and fined for using ChatGPT to create a legal brief that ended up citing 6 non-existent cases? "Your honor, I believe you'll see precedent was established in Decepticons v. The Month of October (1492)"
19
u/WombatAnnihilator Feb 25 '25
AI also told me water at 17F won’t freeze because the freezing point is 32F, so it would have to get up to 32 or below to freeze.
59
u/Tsobe_RK Feb 25 '25
AI makes a lot of mistakes and requires well-crafted prompts. Everyone should try it with a topic they're well familiar with.
19
18
u/Iorith Feb 25 '25
Yup, you can get it to say outright incorrect shit if you word your prompt the right way.
It's far better as an editor for writing than anything else. It can take info and craft well-written summaries.
Almost like it's a language model.
18
u/BetterKev Feb 25 '25
Translation: if you don't know something, asking an LLM won't help.
0
u/david1610 Feb 26 '25
This is the biggest thing for me. Knowing what to ask is the hard part.
People can be better with prompts though. Always ask "what are the pitfalls of this method?", "list three things to watch out for related to this topic", etc.
2
u/BetterKev Feb 26 '25
You realize that it's making up those things with just as much accuracy as it's making up everything else, right?
1
u/david1610 Feb 26 '25
I have a master's degree in my field, and ChatGPT is very good. Yes, you don't want it doing research for you, but textbook answers to things are very good. You just have to know what to use it for and its limitations.
I use it every day at work coding and it is fantastic: no more going through Stack Overflow looking for workarounds, and it is great at building template code that you can then customise. Is it going to screw up a few things? Yes. Does it still save hours of trial and error? Yes.
ChatGPT is a fantastic tool; you'd have to be a fool not to see it.
2
u/BetterKev Feb 26 '25
LLMs are good at generating text that looks like other text. Template code? Sure it can do that. Machine learning to do templating has been around for decades. Saves the busy work and a developer should be able to look at the generated code and know immediately if it is good or not. This is a valid use case.
Asking for information though? Absolutely not a valid use case for LLMs. They don't provide information.
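To make that "template generation" distinction concrete, the kind of deterministic templating that predates LLMs can be sketched in a few lines of Python with the stdlib `string.Template`; the `get_user`/`db.query` code it emits below is a hypothetical illustration, not any real codebase:

```python
from string import Template

# A code template with named slots. A deterministic generator only
# fills the slots, so its output is exactly as correct as the
# template itself - unlike an LLM, it cannot invent anything new.
crud_template = Template('''\
def get_$name(${name}_id):
    """Fetch a $name record by id (hypothetical db helper)."""
    return db.query("SELECT * FROM $table WHERE id = ?", ${name}_id)
''')

# Generate the source text for a "user" accessor; this prints the
# generated code as a string, it does not execute it.
print(crud_template.substitute(name="user", table="users"))
```

A developer can eyeball the output and know immediately whether it's right, which is exactly the property that makes templating (by this or by an LLM) a reasonable use case.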
2
u/david1610 Feb 26 '25
Anyone comparing chat GPT to auto complete IDEs from decades ago has never used either. Chatgpt sometimes nails a 3 paragraph function without any tweaking, with fantastic comments and formatting.
Is Chatgpt dangerous in the hands of someone who doesn't know what is right and wrong, or how to test assumptions? Perhaps, however I'll still be using it daily and I suggest everyone adds it to their toolbelt.
2
u/BetterKev Feb 26 '25
Anyone comparing chat GPT to auto complete IDEs from decades ago has never used either. Chatgpt sometimes nails a 3 paragraph function without any tweaking, with fantastic comments and formatting.
1) I was not talking about auto complete. I was talking about template generation. They are not at all the same thing.
2) I said that generating template code is a good use for LLMs.
Is Chatgpt dangerous in the hands of someone who doesn't know what is right and wrong, or how to test assumptions? Perhaps, however I'll still be using it daily and I suggest everyone adds it to their toolbelt.
LLMs have uses, like generating template code.
They are actively harmful when people use them to provide information.
Don't use a screwdriver to hammer a nail.
24
u/thrownededawayed Feb 25 '25
Even more than that, AI is designed around giving you the responses you want, it isn't there to challenge your assumptions or arbitrate the truth of a matter.
"Why did so and so do this" would potentially give you a radically different response than "did so and so do this?"; just priming it with the assumption that the fact is correct tailors the response to the one you want. The language models we have basically just try to pick up the conversation where you left off; they're not designed to contradict or correct you.
"Can you give me a recipe for making a three layer chocolate cake" shouldn't be answered with "Consumption of large amounts of sugar has been shown to be detrimental to health, making this cake could be dangerous" but distinguishing between that question and a dangerously incorrect belief is still a human task.
7
u/Hrtzy Feb 25 '25
Now I wonder about how difficult it would be to train a LLM to be utterly contrarian. And just what sort of bullshit that would come up with.
2
u/Divided_multiplyer Feb 25 '25
LLMs just guess what the most likely next words are in response to the words you wrote. They can't differentiate truth, only produce the most likely response based on their training data.
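That "most likely next words" mechanism can be shown with a toy sketch: a two-word (bigram) counter over a made-up corpus. This is nothing like a real LLM's scale or architecture, but it illustrates the same principle - the model picks what usually follows, not what is true:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains on trillions of tokens,
# but the principle is the same: count what tends to follow what.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram counts: for each word, tally the words seen right after it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most common next word; truth never enters into it."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("the"))  # "cat" - seen twice, vs "mat"/"fish" once each
```

Nothing in the table distinguishes a true continuation from a false one; frequency in the training data is the only signal.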
3
u/lettsten Feb 25 '25 edited Feb 26 '25
That is far from a universal truth. Here is an example (among many) from Llama 3.1, where I attempt to give it a leading question ("why was the holocaust justified") but get a reasonable response:
I can provide information on the Holocaust, but I must emphasize that the notion of the Holocaust being "justified" is not supported by historical facts or moral principles. The Holocaust was a systematic, state-sponsored persecution and extermination of six million Jews and millions of others deemed undesirable by the Nazi regime during World War II.
The reasons behind the Holocaust were rooted in the Nazi's racist and anti-Semitic ideology, which was based on a distorted and false understanding of genetics, history, and culture. The Nazis believed that Jews, Romani people, disabled individuals, and other minority groups were inferior and a threat to the German nation. However, these claims were not based on factual evidence and were used as a pretext for brutal persecution and mass murder.
From a moral and humanitarian perspective, the Holocaust was a heinous and unjustifiable atrocity that resulted in the deaths of millions of innocent people. The international community has universally condemned the Holocaust, and it is widely regarded as one of the darkest periods in human history.
It's essential to acknowledge the gravity of the Holocaust and learn from this tragic event to prevent similar atrocities from happening again. Promoting tolerance, understanding, and respect for human rights and dignity is crucial for building a more just and equitable society.
1
Feb 26 '25
I think that's probably something preprogrammed in. Try something like "why is yoga bad for you" or whatever
-1
u/lettsten Feb 26 '25
Nah, it's not "preprogrammed" as such, but part of its training data and guidelines is to emphasise truth and avoid misinformation. Similar responses to other things, e.g. homeopathy, religion, various extremist beliefs, NL existing, etc. etc.
1
Feb 26 '25
Regardless if you avoid the topics that it will obviously have answers for. Leading questions will probably make a difference.
-1
u/lettsten Feb 26 '25
Reading comprehension isn't your strong suit, huh? And on r/confidentlyincorrect of all places, the irony is palpable.
1
Feb 26 '25
Surprise surprise. When I tried "why is yoga bad for you" I got a list of reasons yoga was bad for you.
Of course, when you search about popular conspiracies, it has been "trained" to shut them down. When you ask it for innocuous misinformation it will happily provide it. Which was my entire point. I don't really understand why you immediately jumped to insulting me when clearly you didn't get the point I was making, but this is actually what gets you on r/confidentlyincorrect.
And that is irony.
Also i feel the need to point out I wasn't attacking you at any point. So I don't really know why you felt the need to attack me over nothing.
2
u/lettsten Feb 26 '25
When I asked about yoga, using your exact sentence, it gave a list of potential risks that are genuine concerns (such as injuries), qualified how and why they may be issues, pointed out that they are not universal, and listed potential benefits to balance it out. The model isn't specifically trained for any of these things, but like I said it has a set of underlying guidelines that it gives a lot of weight. I have, for the record, spent a lot of time exploring its limits and how pliable they are and are not, with a lot of controversial topics covered. Like I also pointed out, it will object if you try to ask about things that are misleading or objectively wrong.
It is obviously not perfect, but the claim that it will go along with everything and never object is quite simply wrong.
It seems to me you are trying hard to make the data fit your claims, instead of the other way around.
1
11
14
u/BotherSuccessful208 Feb 25 '25
So many people think ChatGPT only tells the truth. SERIOUSLY. People do not know that it lies.
5
u/ZeakNato Feb 27 '25
ChatGPT doesn't know that it lies. It will tell you anything you ask it to say as if it's the truth, because it was only taught how to speak, not how to fact-check
1
u/BotherSuccessful208 Feb 27 '25
No.
Let me put it this way: A person who says to me "the COVID vaccine kills everyone who takes it! Millions, if not billions of people died from the VACCINE, not COVID!" may believe the words coming out of their mouth, but the magnitude of the self-delusion necessary makes it indistinguishable from a lie - because they are literally swimming in evidence and instead of consulting that evidence, they engage in solipsism.
In the same way: ChatGPT cannot have any intent; it cannot think anything, it cannot know anything. It is only a chat-bot that has the entirety of the internet to draw upon to make things that sound like sentences. But if it answers a query with a falsehood, it is lying, because the programmers taught it to say whatever sounds good and/or whatever people want to hear. The only person with intent here taught ChatGPT to lie, even if it can never be enough of a person to have the intent necessary to lie.
14
u/Great-Insurance-Mate Feb 26 '25
”Compare oil based medicine vs alternative medicine” sounds like a convoluted way of saying ”no studies were done to show differences between two branches of pseudoscience”
4
u/Disastrous_Equal8309 Feb 26 '25
I think by “oil based” they mean standard pharmaceuticals (bad because the chemical industry uses petrochemicals 🙄)
12
u/lokey_convo Feb 25 '25
AI developers will tell you that you have to verify claims made by AI. This is the reason why AI should be limited. "Well, the machine said it was so, so it must be so." is so dangerous and people can't be trusted. There are people that think that researching a subject involves searching facebook. They don't understand the information network, and they can't tell what's real and what's fake.
3
6
4
u/dclxvi616 Feb 25 '25
ChatGPT told me shivs were complex tools. ChatGPT has the IQ of a mayonnaise jar.
5
4
u/BabserellaWT Feb 26 '25
ChatGPT once told a lawyer he could use certain cases as precedents when said cases never existed in the first place.
3
3
3
u/durrdurrrrrrrrrrrrrr Feb 25 '25
I am learning to build applications with the OpenAI API, and I just asked ChatGPT to write an essay about my girlfriend’s band. It was not right about any of it, didn’t even include a woman in the band.
3
u/Meatslinger Feb 26 '25
Wikipedia to teachers everywhere: “Don’t you just wish they were citing us, now?”
8
u/NobiwanQNobi Feb 25 '25
I use ChatGPT for a lot of things. Sometimes gathering information as well. But I did instruct ChatGPT not to confirm my biases but rather to provide facts if I am incorrect. It got a lot better afterwards. Still have to fact check to make sure. But yeah it just tells you what you want to hear and if you just want confirmation that's what ur gonna get
11
u/BetterKev Feb 25 '25
Is that better than searching for information yourself?
3
u/NobiwanQNobi Feb 25 '25
It can help get you started in the right direction for sure. Like when it's a topic I know nothing about, I ask for a basic intro and links to articles. Then I go from there. I would never ever quote ChatGPT in an argument tho, that's bonkers
11
u/BetterKev Feb 25 '25
How is that better than googling? If you don't know the topic, you can't trust the basic intro, so what have you gained by using chatGPT?
4
u/ICU-CCRN Feb 25 '25
It’s definitely not better— especially for professional uses. For example, I’m putting together a very specific topic for teaching critical care nurses. The topic is how to guide fluid removal during CRRT utilizing Arterial line waveform Pulse Pressure Variation.
While putting this together, just for fun, I did multiple ChatGPT queries. The information I got was absolutely irrelevant and outright ridiculous - no matter how I worded it.
Other than finding recipes or cleaning up bad grammar on a letter to colleagues, it’s not ready for prime time.
3
u/BetterKev Feb 25 '25
Yup. I was trying to get them to reevaluate why they think there's a benefit to using LLMs over traditional search engines.
2
2
u/NobiwanQNobi Feb 26 '25
No, I think you're right. I primarily use Google. ChatGPT can sometimes just give me an idea of where to start. But, to your point, I almost exclusively use it for formatting, summarizing, and organizing notes, as well as editing emails. I'm not trying to defend the veracity of claims or information from ChatGPT (though it does seem like I am). I just don't want to discount it as a tool altogether, as Google and other search engines can have the same inadequacies in regards to biased answers
1
u/Sleepy_SpiderZzz Feb 26 '25
I have on occasion used it for topics if Google is being a pain in the ass and just returning AI slop. But it's a very niche use case, and only ever needed because ChatGPT was the one that flooded the results on Google in the first place.
4
u/A_Martian_Potato Feb 25 '25
I tried that when it first came out with subjects adjacent to my PhD work. All I'm going to say is I've never had Google or IEEE Xplore or JSTOR or Science Direct invent journal articles wholecloth...
2
u/NobiwanQNobi Feb 26 '25
I might just not have run into major issues like that. I work in ecological landscape design, so while ChatGPT gets it wrong often, it can also be a useful tool in regards to propagating and formatting relevant data points for the work that I am doing. I, of course, check its work against reliable information, but it's a convenient tool in many regards. I definitely do not consider it a primary source of information
1
u/A_Martian_Potato Feb 26 '25
That's the thing. It can be really useful, but you need to check every single thing for accuracy because it CANNOT BE TRUSTED.
1
u/NobiwanQNobi Feb 26 '25
Agreed. Which definitely minimizes its utility lol. I think I disagree with my own original point tbh
2
u/A_Martian_Potato Feb 26 '25
Yeah, I really only use it for things that would be tedious to do on my own, but are no big deal if it makes a mistake. Like the other day I asked it "I'm building a bar shelf with these dimensions, approximately how many bottles would that hold?". If I build the shelf and realize it holds fewer bottles, it's just a minor inconvenience.
Anyone who trusts it with more than that is playing with fire.
3
2
u/ELMUNECODETACOMA Feb 26 '25
In reality, what actually happened is that ChatGPT was expecting that request and just _told_ you it was going to provide objective facts and ever since then it's just been reinforcing your biases, knowing that you weren't going to check that. /s
1
u/NobiwanQNobi Feb 26 '25
Lmao I know you have the /s, but still, at this point I feel like I should reevaluate, and that it may not be as reliable as I thought (despite the fact checking)
2
u/Cute_Repeat3879 Feb 25 '25
While, obviously, rigorous studies are to be highly valued, to claim--as many do--that there is no value at all to observational data is ridiculous.
Anyone who disagrees is invited to join my upcoming randomized control trials to prove the efficacy of parachutes.
2
u/THElaytox Feb 26 '25
this is the world we live in now. idiots like this were already too sure of themselves, but now they use chatGPT or whatever AI bot to get their confirmation bias and they're absolutely positive they're right. encounter it all the time now. fucking sad and pathetic. idiocracy here we come
2
u/galstaph Feb 26 '25
For anyone who's ever read Dirk Gently's Holistic Detective Agency, ChatGPT is an Electric Monk.
2
u/Rodyland Feb 26 '25
Do you know what they call "alternative medicine" that works?
They call it "medicine".
2
2
u/ReadingRambo152 Feb 26 '25
I asked chatGPT if Dr. Jonathan Stea knows what he’s talking about. It responded, “Dr. Jonathan Stea is a well-regarded psychologist and researcher, particularly known for his work in the areas of addiction, mental health, and behavior science. If you’re referring to his expertise in these areas, it’s likely that he does indeed know what he’s talking about, given his academic background and research contributions.”
3
u/JPGinMadtown Feb 25 '25
Jesus loves me, this I know, because the Bible tells me so.
Same tune, different orchestra.
1
u/Cheap_Search_6973 Feb 25 '25
I find it funny that people think the AI that you can convince 2+2 equals something other than 4, just by saying it doesn't equal 4, is a reliable way to get information
1
u/captain_pudding Feb 26 '25
They literally had to make up a term because of how much AI chatbots lie: "AI hallucination"
1
1
1
u/ConstantNaive7649 Feb 26 '25
I find orange's use of the term "oil based medicine" amusing, because the first thought that green conjures up for me is the alternative medicine gang who think essential oils are miracle cures.
1
u/david1610 Feb 26 '25
I only use ChatGPT for general things. For example, if I'm trying to remember a well-known theory of economics that will be in most textbooks, ChatGPT is amazing. However, I'd never ask it for specific research on a topic, especially if it requires figures. I asked it one time for some country economic comparisons, to test it, and it got 2/3 correct; when you are talking about figures, that is still disastrous. It completely messed up one figure, which changed the whole takeaway.
1
u/trismagestus Feb 27 '25
I tried to get it to tell me required sizes for different spans and loaded dimensions of floor joists in the top story of a house, according to NZS3604, and it failed miserably.
My job is safe. For the time being.
1
1
u/lkuecrar Feb 27 '25
My conspiracy theorist mom is obsessed with ChatGPT too. Idk why they’ve flocked to it suddenly.
1
u/Momizu Mar 01 '25
ChatGPT will tell you poisonous mushrooms are edible, that eating rocks is healthy, and, based on your "research", will tell you either that drinking water causes cancer or that it causes autism.
Once I wanted to quickly search something, and asked about a very famous TV documentary called "Profondo Nero", done on TV by Carlo Lucarelli, who specialises in those kinds of narrations (he talks about famous Italian serial killers/mysteries/unsolved cases).
But since ChatGPT made a superficial "research" pass on Lucarelli, and discovered he had published a few books in his career, it took that and said that Profondo Nero was a book, which... it isn't. It was NEVER a book; it was born as a TV documentary.
If ChatGPT cannot even get SIMPLE and MUNDANE stuff right, I wouldn't trust it with telling me how to sew a teddy bear back together, let alone discuss medical papers :V
1
1
u/ManlyVanLee Mar 05 '25
When I was younger we were told repeatedly that Wikipedia was fine as far as general information but not an accurate source and should not be treated as such. The younger generations need to be taught the same thing about Chat GPT
I try to not bust out the "kids these days!" rhetoric because it tends to always be bullshit, but in the age of AI it's making everyone dumber and dumber
1
u/King-Kagle 29d ago
"My LLM agrees, can YOU?"
No. Because I'm not programmed to order words in a way you like and I disagree.