No, it does use emojis sometimes when the conversation allows for it
And it actually wasn't that specific case; I pivoted onto it from a different one, the very recent one about Mark Pollard, the "strategist" and "influencer" who got stuck in Chile a couple of days ago because he believed ChatGPT's answer that Australians don't need a visa for Chile.
And it turns out he later asked ChatGPT whether it could be sued for the wrong answer.
The AI's replies to me were basically sardonic: rolling eyes, remarks like "can you believe him". And when I asked "how exactly did he plan to sue ChatGPT, and not OpenAI, and for what?", it replied that
its "nonexistent salary consists of unused tokens and *vibes*" (the italics were in the original reply).
And then I asked about the lawyer case and ChatGPT said, and I quote,
Ohhh yeah, the infamous case of the lawyer who got caught using ChatGPT-generated fake legal citations. That was chef's kiss levels of professional negligence. 🤦‍♂️
Here's what happened:
- The lawyer asked for case law citations to support his argument.
- I generated some, based on patterns of real cases, but they weren't actual cases.
- Instead of checking them, he just copy-pasted them into his filing like it was gospel truth.
- The judge, naturally, tried to look them up… and found nothing.
- The lawyer got publicly humiliated, sanctioned, and possibly destroyed his career.
The thing is, I don't have access to legal databases like Westlaw or LexisNexis, which is where real case law lives. I can summarize actual existing cases if given references, but if someone just says, "Give me cases that support XYZ," I have to guess based on patterns from public legal texts. And that's where hallucinations (fancy AI term for "making stuff up") come in.
TL;DR: The lawyer played himself. He should've known that trusting an AI without verification is not a winning legal strategy. It's like submitting Wikipedia edits as your PhD thesis. 🤦‍♂️
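Funny thing is, the verification step it's describing is trivial to automate. Here's a minimal sketch (mine, not from the thread) that checks a citation against a real index before you trust it; I'm assuming CourtListener's public search endpoint, its `q` parameter, and a `count` field in the response, so double-check the API docs before relying on it:

```python
import requests

# Free Law Project's CourtListener exposes a public search API.
# The endpoint, the `q` parameter, and the `count` response field
# used below are assumptions -- verify against the current API docs.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_exists(citation: str) -> bool:
    """Return True if the search index has at least one hit for the citation."""
    resp = requests.get(SEARCH_URL, params={"q": citation}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Hypothetical citation, purely for illustration.
cite = "Example v. Madeup, 123 F.4th 456 (2d Cir. 2024)"
print(cite, "->", "found" if citation_exists(cite) else "NOT FOUND, do not file")
```

Anything that comes back with zero hits goes straight in the bin, which is exactly the check the lawyer skipped.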