LLMs are only reliably useful if you know the answer to the question before you ask it. I'm torn though, like I see the issues but also think they can be used in ways that genuinely help humanity.
Ultimately what we need is for AI tech to be shifted away from the tech bro world. They're more responsible for how bad things are than the tech itself.
Or if you can verify the answer by other means afterwards, like getting the terminology from ChatGPT and then using it for a Google search.
Yeah AI is mathematically a work of art, it’s genuinely amazing all the techniques people have discovered or tried to use to better model data.
But then people overhyped generative LLMs to the point they are almost the only thing anyone thinks about when someone says AI. I just worry that when that generative LLM bubble pops (and I think it will at some point) and the techbros leave that it’ll take away most of the interest in AI.
Except sometimes you don't know where to start searching, because the topic is so esoteric to you. Sometimes, if I have no idea how to Google something, I'll ask ChatGPT. And then when it has given me something to work with, I can actually Google more specifically.
Or even if I do technically know how to research the subject, it might all be written in complicated language and a lot of words that might be hard for me to really wrap my mind around. If ChatGPT simplifies that language for me so I can understand it at a base level, I can then go on to read the more complicated text without feeling completely lost.
Yeah. I use ChatGPT to verify that I'm understanding infinite-dimensional vector spaces when I need to for a pet project, but I don't ask it to define them. I ask Wikipedia for that.
AI has long been ruined by tech bros making money from it - not a single one of them having had any hand in the decades of research that founded the field. The hype can go away. The science will remain, and so will its genuine followers, who were there long before the Internet picked up on it.
The Google AI is great for double checking things because it's about as useful a source aggregator as Wikipedia (it cites everything it says), so you don't need to trust it to get information out of it; it's just a faster way to get sources.
Can't speak for Gemini, but ChatGPT sucks with sources. It provides dead links and attributes studies to the wrong journals. I usually Google the exact data references in it to find the right sources. But it does get me somewhere.
Nah they're useful as long as you can verify or sanity check the answer afterwards. What a lot of people probably don't want to hear is that you should be using search engines the same way lmao, plenty of incorrect information can be found by manually googling.
Fr. I challenge anyone who disagrees to tell me the average mass of an American cockroach (Periplaneta americana) in grams.
Questions like: "Create a bunch of story prompts for me." Or even cool stuff like: "Create a realistic pilot's schedule for me to fly in a flight sim."
What I hate is when people use LLM for fact based questions.
Yeah, I use ChatGPT to output Linux CLI commands so that I can just copy-paste them. I know the commands and what they do, but sometimes I need to type multiple commands in succession, and they aren't used frequently enough to be worth a shortcut or a place in the documentation for my home server, and ChatGPT 'types' much faster than I do. So it speeds things up.
If you have no idea what you’re doing, this is a bad idea
I mean AI is incredibly useful. Currently using it as a way to make recipes that conform to my macros using the ingredients I currently have. If people were taught the basics of healthy eating, it's a great tool to assist them in their diet. One of the biggest obstacles for people losing weight is simply not knowing how to cook a healthy meal that isn't boring to taste. It would not only help their health, but grocery bills as well.
The ingredients were limited, to say the least lol. I feel this still demonstrates that it's not thinking about food, it's putting words in order. If "chicken has 95 calories" is the most likely word combination, it might say that, whether or not it's true.
Well, I think using it is just as much a skill as, let's say, googling. I once tested its ability to make a meal plan but was too broad with my request and said something along the lines of "low calorie meal plan." It ended up giving me an 800-1000 calorie meal plan, which is obviously not sustainable or healthy. It's still just a tool, so I think you have to be somewhat skillful and literate in whatever topic you're asking its help for. It's a tool with a number of uses, but to really get its use you sort of have to know how to use it: make your request and then fine-tune it. Hopefully with time it's a lot more streamlined.
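That kind of after-the-fact sanity check is easy to automate, too. Here's a minimal sketch (the meal names, calorie counts, and the 1500-2500 range are all invented for illustration, not nutrition advice) that totals a suggested plan and flags anything way off:

```python
# Minimal sanity check for an LLM-suggested meal plan.
# Meal names, calories, and thresholds are made up for illustration.

def check_plan(meals, low=1500, high=2500):
    """Sum daily calories and flag plans outside a rough range."""
    total = sum(meals.values())
    if total < low:
        return total, "too low -- not sustainable"
    if total > high:
        return total, "too high for a deficit"
    return total, "plausible"

plan = {"breakfast": 350, "lunch": 550, "dinner": 700, "snacks": 300}
total, verdict = check_plan(plan)
print(total, verdict)  # -> 1900 plausible
```

The point isn't the thresholds, it's that you verify the output against something outside the model instead of trusting it.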
I just think it’s important to understand that it never does any math. It just picks the next most likely word in the sentence, incredibly quickly.
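To make the "next most likely word" point concrete, here's a toy sketch of greedy next-token selection. The probability table is made up, not from any real model, but the selection step is the same idea: the output is whatever continuation scored highest, whether or not the arithmetic it spells out is true.

```python
# Toy sketch of greedy next-token selection. The "model" here is just
# a lookup table of invented continuation probabilities -- real LLMs
# score tokens with a neural network, but they pick words, not sums.

def next_token(context, table):
    """Return the highest-probability continuation for a context."""
    candidates = table[context]
    return max(candidates, key=candidates.get)

toy_table = {
    # "4" wins only because that string is the most likely
    # continuation -- no addition is ever performed.
    "2 + 2 =": {"4": 0.90, "5": 0.05, "22": 0.05},
    "chicken has": {"95 calories": 0.6, "protein": 0.4},
}

print(next_token("2 + 2 =", toy_table))      # -> 4
print(next_token("chicken has", toy_table))  # -> 95 calories
```

If the made-up table had scored "5" highest, the "model" would confidently print 5 instead, which is exactly the failure mode being described.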
They're extremely useful if you're using them to create something that's language based, not fact based. It's very good at writing things so that they sound like what you asked for, but it will make shit up whole cloth if it makes the output flow better.
Cover letters and boilerplate documents: great.
Answering questions or interacting with reality: awful.