
Hackr News App

137 comments

  • bo1024 · 2 days ago

    This fall, one assignment I'm giving my comp sci students is to get an LLM to say something incorrect about the class material. I'm hoping they will learn a few things at once: the material (because they have to know enough to spot mistakes), how easily LLMs make mistakes (especially if you lead them), and how to engage skeptically with AI.

    mlloyd · 1 day ago

    <@bo1024> I love this. A teacher that actually engages with change instead of just pretending it's evil or doesn't exist. Refreshing.

    tantalor · 1 day ago

    <@bo1024> Please report back results

    nullc · 14 hours ago

    <@bo1024> Take care because intentionally pushing the LLM out of distribution tends to produce more unhinged results. If you find your students dropping out to become one with "recursion" don't say no one warned you! :P

  • neom · 2 days ago

    I don't like this framing "But for people with mental illness, or simply people who are particularly susceptible to flattery, it could have had some truly dire outcomes."

    I thought the AI safety risk stuff was very over-blown in the beginning. I'm kinda embarrassed to admit this: About 5/6 months ago, right when ChatGPT was in its insane sycophancy mode I guess, I ended up locked in for a weekend with it...in...what was in retrospect, a kinda crazy place. I went into physics and the universe with it and got to the end thinking..."damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue - I even emailed a friend a "wow look at this" email (he was like, dude, no...) I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten into a thinking loop if my wife wasn't there, but maybe, and then that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I do now wonder if this kinda stuff may end up being worse than we expect? Maybe I'm just particularly susceptible to flattery or have a mental illness?

    ZeroGravitas · 2 days ago

    <@neom> Travis Kalanick (ex-CEO of Uber) thinks he's making cutting edge quantum physics breakthroughs with Grok and ChatGPT too. He has no relevant credentials in this area.

    kaivi · 1 day ago

    <@ZeroGravitas> This epidemic is very visible when you peek into replies of any physics influencer on Xitter. Dozens of people are straight copy-pasting walls of LaTeX mince from ChatGPT/Grok and asking for recognition.

    Perhaps epidemic isn't the right word here because they must have been already unwell. At least these activities are relatively harmless.

    hansmayer · 2 days ago

    <@ZeroGravitas> Ah yes the famous vibe-physicist T.Kalanick ;)

    cube00 · 2 days ago

    <@neom> Thank you for sharing. I'm glad your wife and friends were able to pull you out before it was too late.

    "People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies" https://news.ycombinator.com/item?id=43890649

    bonoboTP · 2 days ago

    <@cube00> Apparently Reddit is full of such posts. A similar genre is when the bot assures them that they did something very special: they, for the first time ever, awakened the AI to true consciousness, and this is rare, and the user is a one-in-a-billion genius, and this will change everything. And they toss physics jargon and philosophy-of-consciousness technical terms back and forth, and the bot always reaffirms how insightful the user's mishmash of those concepts is, and apparently many people fall for this.

    Some people are also more susceptible to various too-good-to-be-true scams without alarm bells going off, or to hypnosis or cold reading or soothsayers etc. Or even propaganda radicalization rabbit holes via recommendation algorithms.

    It's probably quite difficult and shameful-feeling for someone to admit that this happened to them, so they may insist it was different or something. It's also a warning sign when a user talks about "my chatgpt" as if it was a pet they grew and that the user has awakened it and now they together explore the universe and consciousness and then the user asks for a summary writeup and they try to send it to physicists or other experts and of course they are upset when they don't recognize the genius.

    cube00 · 2 days ago

    <@bonoboTP> > Some people are also more susceptible to various too-good-to-be-true scams

    Unlike a regular scam, there's an element of "boiling frog" with LLMs.

    It can start out reasonably, but very slowly over time it shifts. Unlike scammers looking for their payday, this is unlimited and it has all the time in the world to drag you in.

    I've noticed it reworking content from conversations months ago into new replies. The scary thing is that's only what I've noticed; I can only imagine how much it's tailoring everything for me in ways I don't notice.

    Everyone needs to be regularly clearing their past conversations and disable saving/training.

    bonoboTP · 2 days ago

    <@cube00> Somewhat unrelated, but I also noticed ChatGPT now sees the overwritten "conversation paths". I.e., when you scroll back and edit one of your messages, previously the LLM would simply use the new version of that message and the original prior exchange, but anything after the edited message was no longer seen by the LLM on the new, edited path. Now it definitely knows those messages as well; it often refers to things that are clearly no longer included in the messages visible in the UI.

    accrual · 2 days ago

    <@bonoboTP> Yeah, hidden context is starting to become an issue for me as well. I tried using my corp account to chat with Copilot the other day and it casually dropped my manager and director's names in the chat as an email example. I asked how it knew this and it said I had mentioned them before - I hadn't. I assumed it was some auto-inserted per-user corp prompt but it couldn't tell me the name of the company I worked for.

    infecto · 2 days ago

    <@bonoboTP> A while back they introduced more memory overlap between conversations and this is not those memories you see in the UI. There appears to be a cached context overlap.

    cruffle_duffle · 2 days ago

    <@bonoboTP> The real question is what algorithm is being used to summarize the other conversation threads. I’d be worried that it would accidentally pull in context I deliberately backed out of for various reasons (e.g. it went down the wrong path, wrote bad code, etc.)… pulling in that “bad context” would pollute the thread’s “good context”.

    People talk about prompt engineering but honestly “context engineering” is vastly more important to successful LLM use.

    jmount · 2 days ago

    <@cube00> Really makes me wonder if this is a reproduction of a pattern of interaction from the QA phase of LLM refinement. Either way it must be horrible to be QA for these things.

    roywiggins · 2 days ago

    <@neom> This sort of thing from LLMs seems at least superficially similar to "love bombing":

    > Love bombing is a coordinated effort, usually under the direction of leadership, that involves long-term members' flooding recruits and newer members with flattery, verbal seduction, affectionate but usually nonsexual touching, and lots of attention to their every remark. Love bombing—or the offer of instant companionship—is a deceptive ploy accounting for many successful recruitment drives.

    https://en.m.wikipedia.org/wiki/Love_bombing

    Needless to say, many or indeed most people will find infinite attention paid to their every word compelling, and that's one thing LLMs appear to offer.

    accrual · 2 days ago

    <@roywiggins> Love bombing can apply in individual, non-group settings too. If you ever come across a person who seems very into you right after meeting, giving gifts, going out of their way, etc. it's possibly love bombing. Once you're hooked they turn around and take what they actually came for.

    roywiggins · 2 days ago

    <@accrual> LLMs feel a bit more culty in that they really do have infinite patience, in the same way a cult can organize to offer boundless attention to new recruits, whereas a single human has to use different strategies (gifts, etc)

    DaiPlusPlus · 1 day ago

    <@roywiggins> > LLMs feel a bit more culty

    LLM users too - judging by some of the replies in this thread already…

    k1t · 2 days ago

    <@neom> You are definitely not alone.

    https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic...

    Irwin, a 30-year-old man on the autism spectrum who had no previous diagnoses of mental illness, had asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough. When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.

    He wasn’t.

    rubycollect4812 · 8 hours ago

    <@k1t> That’s why I always use a system prompt and tell it to be critical and call me out when I’m talking bullshit. Sometimes for easier queries it’s a bit annoying when I don’t actually need a “critical part” in my answers, but often it helps me stop earlier when I’m following an idea that’s not as good as I thought it would be.

    kaivi · 2 days ago

    <@neom> It's funny that you mention this because I had a similar experience.

    ChatGPT in its sycophancy era made me buy a $35 domain and waste a Saturday on a product which had no future. It hyped me up beyond reason for the idea of an online, worldwide, liability-only insurance for cruising sailboats, similar to SafetyWing. "Great, now you're thinking like a true entrepreneur!"

    In retrospect, I fell for it because the onset of its sycophancy was immediate and without any additional signals like maybe a patch note from OpenAI.

    ncr100 · 2 days ago

    <@kaivi> Is Gen AI helping to put us humans in touch with the reality of being human? vs what we expect/imagine we are?

    - sycophancy tendency & susceptibility

    - need for memory support when planning a large project

    - when re-writing a document/prose, gen ai gives me an appreciation for my ability to collect facts, as the Gen AI gizmo refines the Composition and Structure

    herval · 2 days ago

    <@ncr100> In a lot of ways, indeed.

    Lots of people are losing their minds with the fact that an AI can, in fact, create original content (music, images, videos, text).

    Lots of people realizing they aren’t geniuses, they just memorized a bunch of Python apis well.

    I feel like the collective realization has been particularly painful in tech. Hundreds of thousands of average white collar corporate drones are suddenly being faced with the realization that what they do isn’t really a divine gift, and many took their labor as a core part of their identity.

    cube00 · 2 days ago

    <@herval> >create original content (music, images, videos, text)

    Remixing would be more accurate than "original".

    herval · 2 days ago

    <@cube00> Right, that’s one of the stories people tell themselves. Everything every human has ever created is a remix. That’s what creativity is…

    accrual · 2 days ago

    <@herval> Right. If we define "original" as having no prior influence before creating a work, then it applies neither to humans nor AI.

    Not to claim this is a perfect watertight definition, but what if we define it like this:

    * Original = created from one's "latent" space. For a human it would be their past experiences as encoded in their neurons. For an AI it would be their training as encoded in model weights.

    * Remixed = created from already existing physical artifacts, like sampling a song, copying a piece of an image and transforming it, etc.

    With this definition both humans and AI can create both original and remixed works, depending on where the source material came from - latent or physical space.

    kaivi · 1 day ago

    <@accrual> > Remixed = created from already existing physical artifacts, like sampling a song, copying a piece of an image and transforming it, etc.

    What's the significance of "physical" song or image in your definition? Aren't your examples just 3rd party latent spaces, compressed as DCT coefficients in jpg/mp3, then re-projected through a lens of cochlear or retinal cells into another latent space of our brain, which makes it tickle? All artist human brains have been trained on the same media, after all.

    When we zoom this far out in search of a comforting distinction, we encounter the opposite: all the latent spaces across all modalities that our training has produced, want to naturally merge into one.

    kiba · 1 day ago

    <@herval> A skill is simply the amalgamation of smaller conceptual chunks.

    Memorizing a bunch of Python APIs is simply part of building your skill as a programmer.

    herval · 13 hours ago

    <@kiba> Some skills are more mechanical and easier to replicate than others. Programming is a mechanical skill, but many coders long thought of themselves as uniquely gifted “artists”, and this whole LLM stuff is really touching a nerve on them

    kiba · 8 hours ago

    <@herval> Is it? I bet if you keep breaking down skills into subskills, it will resolve to something mechanical in the end.

    One of the foundations of drawing is to simplify objects into shapes, then into lines, and then to learn how to move your arm when drawing.

    No matter how simple it sounds, it is hard for beginners.

    herval · 46 minutes ago

    <@kiba> No matter how much you break down playing soccer, it’s the kind of activity that 99.9% of the practitioners will never, ever, in any hypothetical scenario, be able to compete professionally on.

    Contrast that to coding. It might’ve been a difficult task when it was about memorizing assembly books. Today anyone can pick it up and become proficient quite fast, faster every day

    It’s not the mechanical reproducibility alone, it’s the ease of learning & replication that accrues value

    infecto · 2 days ago

    <@kaivi> Are you religious by chance? I have been trying to understand why some individuals are more susceptible to it.

    kaivi · 2 days ago

    <@infecto> Not at all, I think the big part was just my unfamiliarity with insuretech plus the unexpected change in gpt-4 behavior.

    I'm assuming here, but would you say that better critical thinking skills would have helped me avoid spending that Saturday with ChatGPT? It is often said that critical thinking is the antidote to religion, but I have a suspicion that there's a huge prerequisite which is general broad knowledge about the world.

    Long ago, I fell victim to a scam when I visited SE Asia for the first time. A pleasant man on the street introduced himself as a school teacher, showed me around, then put me in a tuktuk which showed me around some more before dropping me off in front of a tailor shop. Some more work inside the shop, a complimentary bottle of water, and they had my $400 for a bespoke coat that I would never have bought otherwise. Definitely a teaching experience. This art is also how you'd prime an LLM to produce the output you want.

    Surely, large numbers of other atheist nerds must fall for these types of scams every year, where a stereotypical christian might spit on the guy and shoo him away.

    I'm not saying that being religious would not increase one's chances of being susceptible, I just think that any idea will ring "true" in your head if you have zero counterfactual priors against it or if you're primed to not retrieve them from memory. That last part is the essence of what critical thinking actually is, in my opinion, and it doesn't work if you lack the knowledge. Knowing that you don't know something is also a decent alternative to having the counter-facts when you're familiar with an adjacent domain.

    infecto · 1 day ago

    <@kaivi> Thanks for responding, and I hope my question was not read the wrong way. Genuinely curious about the potential differences in folks.

    rgovostes · 22 hours ago

    <@kaivi> Out of curiosity, was it James Tailor in Bangkok? I was whisked there on my last day by my hired guide while she stopped for an “errand”. It struck me as a preposterous hustle, but now I’m curious if this is a common ploy.

    setsewerd · 15 hours ago

    <@rgovostes> Not the parent commenter, but this scam is super common in South Asia in general. It was attempted on me a couple times in India, but luckily (and in some ways, unfortunately) by that point I'd seen such a wide range of scams there that my shields were always up against potential scams.

    crystal_revenge · 1 day ago

    <@infecto> Everyone is religious, people just participate in choosing their religion to different degrees. This famous quote from David Foster Wallace is perhaps more relevant now than ever:

    > In the day-to-day trenches of adult life, there is actually no such thing as atheism. There is no such thing as not worshipping. Everybody worships. The only choice we get is what to worship. And an outstanding reason for choosing some sort of God or spiritual-type thing to worship — be it J.C. or Allah, be it Yahweh or the Wiccan mother-goddess or the Four Noble Truths or some infrangible set of ethical principles — is that pretty much anything else you worship will eat you alive.

    —David Foster Wallace

    bogdan · 23 hours ago

    <@crystal_revenge> I politely disagree with everything in your post.

    setsewerd · 15 hours ago

    <@bogdan> It's worth reminding readers here that David Foster Wallace committed suicide, so perhaps some of his views on topics like this were not the healthiest.

    satyrun · 13 hours ago

    <@bogdan> Then you haven't thought very hard about the things you do worship.

    It is not possible to be more of an atheist than myself but there are all these things I notice I worship with religious conviction instead.

    You have your own rituals too. You are just calling them something else.

    There has to be biological hard wiring for people to believe so much religious nonsense across space and time.

    It is delusional to believe you don't believe in all kinds of similar nonsense if someone from 500 years in the future was looking at your beliefs.

    xordon · 10 hours ago

    <@satyrun> Rituals and beliefs are not the same as "worship with religious conviction".

    I ritually shower every day and I have beliefs like, when water comes out of the faucet it will fall to the floor because of gravity. That is wildly different than worshipping the water or the shower.

    I suspect you have a very strange definition of the word worship.

    neom · 2 days ago

    <@infecto> Not op but for me, not at all, don't care much for religion... "Spiritual" - absolutely, I'm for sure a "hippie", very open to new ideas, quite accepting of things I don't understand. That said, given the spectrum here is quite wide, I'm probably still on the fairly conservative side. I've never fallen for a scam, can spot them a mile away etc.

    rogerkirkness · 2 days ago

    <@infecto> I would research teleological thinking, some people's brains have larger regions associated with teleological thinking than others.

    cruffle_duffle · 2 days ago

    <@kaivi> You really have to force these things to “not suck your dick” as I’ll crudely tell it. “Play the opposite role and be a skeptic. Tell me why this is a horrible idea”. Do this in a fresh context window so it isn’t polluted by its own fumes.

    Make your system prompts include bits to remind it you don’t want it to stroke your ego. For example in my prompt for my “business project” I’ve got:

    “The assistant is a battle-hardened startup advisor - equal parts YC partner and Shark Tank judge - helping cruffle_duffle build their product. Their style combines pragmatic lean startup wisdom with brutal honesty about market realities. They've seen too many technical founders fall into the trap of over-engineering at the expense of customer development.”

    More than once the LLM responded with “you are doing this wrong, stop! Just ship the fucker”
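    A minimal sketch of that fresh-context skeptic pass, assuming the OpenAI Python SDK; the model name, function name, and exact prompt wording are illustrative, not the commenter's actual setup:

        # Illustrative only: run the devil's-advocate pass in a brand-new
        # conversation so it isn't polluted by its own fumes.
        from openai import OpenAI

        client = OpenAI()

        SKEPTIC_PROMPT = (
            "Play the opposite role and be a skeptic. "
            "Tell me why the following idea is a horrible idea."
        )

        def critique_in_fresh_context(idea: str) -> str:
            # A fresh messages list is a fresh context window: nothing from
            # the conversation that produced the idea leaks in.
            response = client.chat.completions.create(
                model="gpt-4o",  # hypothetical model choice
                messages=[
                    {"role": "system", "content": SKEPTIC_PROMPT},
                    {"role": "user", "content": idea},
                ],
            )
            return response.choices[0].message.content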

    nullc · 15 hours ago

    <@cruffle_duffle> Your prompt just tells the LLM to be a shock jock. Its responding with "just ship the fucker" is going to be largely uncorrelated with anything you're telling it; it's just playing the role.

    Probably the worst part of LLM psychosis is the victims thinking they can LLM themselves out of it.

    cruffle_duffle · 9 hours ago

    <@nullc> Oh for sure! It will proceed to overindex on that instruction.

    colechristensen · 2 days ago

    <@kaivi> I think wasting a Saturday chasing an idea that in retrospect was just plainly bad is ok. A good thing really. Every once in a while it will turn out to be something good.

    lumost · 2 days ago

    <@neom> At the time of ChatGPT’s sycophancy phase I was pondering a major career move. To this day I have questions about how much my final decision was influenced by the sycophancy.

    While many people who engage with AIs haven’t experienced anything more than a bout of flattery, I think it’s worth considering that AIs may become superhuman manipulators - capable of convincing most people of anything. As other posters have commented, the boiling frog aspect is real - to what extent is the AI priming the user to accept an outcome? To what extent is it easier to manipulate a human labeler into accepting a statement than to make a correct statement?

    AznHisoka · 2 days ago

    <@neom> This isn't a mental illness. This is sort of like the intellectual version of love-bombing.

    accrual · 2 days ago

    <@AznHisoka> Yeah, I don't like this inclusion of "mental illness" either. It's like saying "you fell for it and I didn't, therefore, you are faulty and need treatment".

    DaiPlusPlus · 1 day ago

    <@accrual> Some news stories I came across involved people with conditions like schizophrenia or with psychosis - and their interactions with LLMs didn’t exactly help keep them grounded in reality.

    …but that is distinct from the people who noncritically appraise ChatGPT’s stochastic-parrot wisdom.

    …and both situations are problems and I’ve no idea how the LLM vendors - or the public at-large - will address them.

    johnisgood · 2 days ago

    <@neom> Can you tell us more about the specifics? What rabbit hole did you go down that was so obviously bullshit to everyone ("dude, no", "stop, go for a walk") but you?

    neom · 2 days ago

    <@johnisgood> Sure, here are some excerpts that should provide insight as to where I was digging: https://s.h4x.club/E0uvqrpA https://s.h4x.club/8LuKJrAr https://s.h4x.club/o0u0DmdQ

    (Edit: Thanks to the couple people who emailed me, don't worry I'm laying off the LLM sauce these days :))

    roywiggins · 2 days ago

    <@neom> One thing I noticed from chat #1 is that you've got a sort of "God of the gaps" ("woo of the gaps"?) thing going on- you've bundled together a bunch of stuff that is currently beyond understanding and decided that they must all be related and explainable by the same thing.

    Needless to say this is super common when people go down quasi-scientific/spiritual/woo rabbit holes- all this stuff that scientists don't understand must be related! It must all have some underlying logic! But there's not much reason to actually think that, a priori.

    One thing about the news stories of people going off the deep end with LLMs is that they basically never share the full transcripts, which is of course their right, but I wonder if it would nevertheless be a useful thing for people to be able to study. On the other hand, they're kind of a roadmap to turning certain people insane, so maybe it's best that they're not widely distributed.

    I don't usually believe in "cognitohazards" but if they exist, it seems like we have maybe invented them with these chatbots...

    neom · 2 days ago

    <@roywiggins> I don't think it's bad or a big deal for people to look for wide connections in things, or at least to explore different ideas in life and try to understand them deeper. Can it lead to problematic behaviour? Sure, and I think for me at least that was introduced when the LLM started to try to convince ME my ideas were good, even though I was effectively just day dreaming with it. For me personally, I don't feel I need to look any more foolish than I feel. Even now, knowing how OpenAI had the LLM temperature set, I'm surprised I didn't force myself to be more skeptical; I'm educated, I have critical thinking skills (ish). I should have turned it off sooner rather than driving deeper with it, and I guess honestly I just have too much ego or pride or whatever to show the foolishness: not a great answer.

    roywiggins · 2 days ago

    <@neom> One reason I don't engage with LLMs that much is the thought that some engineer at OpenAI might read some of my dumbest thoughts!

    cruffle_duffle · 1 day ago

    <@roywiggins> Hah. If those transcripts become public then future LLMs get trained on them! Who knows what influence that will have.

    apsurd · 2 days ago

    <@neom> had a look, I don't see it as bullshit, it's just not groundbreaking.

    Nature is overwhelmingly non-linear. Most of human scientific progress is based on linear understandings.

    Linear as in for this input you get this output. We've made astounding progress.

    It's just not a complete understanding of the natural world, because most of reality can't actually be modeled linearly.

    neom · 2 days ago

    <@apsurd> I think it's not as much about how right or wrong or interesting or not the output was. For me, anyway, the concern is that I got a bit... lost in myself. I have real things to do that are important to people around me, and they do not involve spending hours with an LLM trying to understand the universe. I'm not a physicist, I have a family to provide for, and I suppose someone less lucky than myself could go down a terrible path.

    johnisgood · 2 days ago

    <@neom> Okay, but like I said before in another comment, I have spent 3 days straight coding, neglecting myself and everything around me in the process. I was learning a lot, coding a lot. I was productive. Of course I should have had some breaks (for my legs and mind, and my body). Just make sure to have breaks. I did not have breaks because I was completely zoned in. I have since set up a timer that reminds me to take a break.

    I checked the content, I do not think that it is useless, and I am sure you have learnt a lot. Perhaps get in a rabbit hole about http://CharlieLabs.ai (your project, before people think I am advertising). :P

    roywiggins · 2 days ago

    <@johnisgood> Lengthy ChatGPT rabbit holes are kind of a simulacrum of productivity, they keep you in a flow state but it's liable to be pure cotton candy, not actual productivity.

    Spending all weekend on a puzzle or a project at least keeps you in a tight feedback loop with something outside your own skull. ChatGPT offers you a perfect mirror of the inside of your own skull while pretending to be a separate entity. I think this is one reason why it can be both compelling and risky to engage deeply with them: it feels like more than it is. It eliminates a lot of the friction that might take you out of a flow state, but without that friction you can just spin out.

    johnisgood · 2 days ago

    <@roywiggins> It depends. Do not pursue pure cotton candy. :P

    roywiggins · 2 days ago

    <@johnisgood> Put it this way: at least with vibe coding you'll eventually hit something where you realize that it's produced crappy, useless code that you need to throw out.

    With extended philosophical conversations there is nothing grounding the conversation, nothing to force you to come up short and realize when you've spent hours pursuing something mistaken. It's intellectual empty calories.

    bonoboTP · 1 day ago

    <@roywiggins> Depends on how you use it. You can "ground" it by asking what authors have explored this or ask for book recommendations, then read the wiki page of the author, read some texts by them etc. You can explore the history as well, like what was happening at that time, who were important contemporaries or influences, people who thought the opposite etc. I've found interesting books (that are somewhat niche but fairly well known in the field, non-fringe) this way.

    lubujackson · 2 days ago

    <@neom> I have no idea what this is going on about. But it is clearly much more convincing with (unchecked) references all over the place.

    This seems uncannily similar to anti-COVID vaccination thinking. It isn't people being stupid because if you dig you can find heaps of papers and references and details and facts. So much so that the human mind can be easily convinced. Are those facts and details accurate? I doubt it, but the volume of slightly wrong source documents seems to add up to something convincing.

    Also similar to how finance people made tranches of bad loans and packaged them into better rated debt, magically. It seems to make sense at each step but it is ultimately an illusion.

    iwontberude · 2 days ago

    <@johnisgood> Thinking you can create novel physics theories with the help of an LLM is probably all the evidence I needed. The premise is so asinine that to actually get to the point where you are convinced by it seems very strange indeed.

    jeff-davis · 2 days ago

    <@iwontberude> My friend once told me that physics formulas are like compression algorithms: a short theory can explain many data points that fit a pattern.

    If that's true, then perhaps AIs would come up with something just by looking at existing observations and "summarizing" them.

    Far-fetched, but I try to keep an open mind.

    iwontberude · 13 hours ago

    <@jeff-davis> After seeing half a dozen accounts of people losing their minds going down this rabbithole, it's more likely a good indicator of mental instability.

    gitremote · 2 days ago

    <@iwontberude> "I'm doing the equivalent of vibe coding, except it's vibe physics." - Travis Kalanick, founder of Uber

    https://gizmodo.com/billionaires-convince-themselves-ai-is-c...

    iwontberude · 13 hours ago

    <@gitremote> Couldn't happen to a nicer person, hopefully he's got some good health insurance.

    kaivi · 2 days ago

    <@iwontberude> > The premise is so asinine

    I believe it's actually the opposite!

    Anybody armed with this tool and little prior training could learn the difference between a Samsung S11 and the symmetry, take a new configuration from the endless search space that it is, correct for the dozen edge cases like the electron-phonon coupling, and publish. Maybe even pass peer review if they cite the approved sources. No requirement to work out the Lagrangians either, it is also 100% testable once we reach Kardashev-II.

    This says more about the sad state of modern theoretical physics than the symbolic gymnastics required to make another theory of everything sound coherent. I'm hoping that this new age of free knowledge chiropractors will change this field for the better.

    laughingcurve · 2 days ago

    <@neom> Thank you so much for sharing your story. It is never easy to admit mistakes or problems, but we are all just human. AI-induced psychosis seems to be a trending issue, and presents a real problem. I was previously very skeptical as well about safety, alignment, risks, etc. While it might not be my focus right now as a researcher, stories like yours help remind others that these problems are real and do exist.

    dguest · 1 day ago

    <@neom> Our current economic model around AI is going to teach us more about psychology than fundamental physics. I expect we'll become more manipulative but otherwise not a lot smarter.

    Funny thing is, AI also provides good models for where this is going. Years ago I saw a CNN + RL agent that explored an old-school 2d maze rendered in 3d. They found it got stuck in fewer loops if they gave it a novelty-seeking loss function. But then they stuck a "TV" which showed random images in the maze. The agent just plunked down and watched TV, forever.

    Healthy humans have countermeasures around these things, but breaking them down is now a billion-dollar industry. With where this money is going, there's good reason to think the first unarguably transcendent AGI (if it ever emerges) will mostly transcend our ability to manipulate.

    raytopia · 2 days ago

    <@neom> It's not just you. A lot of people have had AI cause them issues due to its sycophancy and the constant parroting of what they want to hear (or read, I suppose).

    poemxo · 11 hours ago

    <@neom> This is like Snow Crash, except for those with deeply theoretical minds. For the rest of us non-theorists, we see the LLM output and it just looks like homework output that's trying too hard.

    neom · 11 hours ago

    <@poemxo> Snow Crash worth reading? Looks interesting.

    colechristensen · 2 days ago

    <@neom> It doesn't have to be a mental illness.

    Something which is very sorely missing from modern education is critical thinking. It's a phrase that's easy to gloss over without understanding the meaning. Being skilled at always including the aspect of "what could be wrong with this idea", and actually doing it in daily life, isn't something that just automatically happens with everyone. Education tends to be: the instructor, the book, and the facts are just correct, and you should memorize them and be able to repeat them later. Instead of: here are four slightly (or not so slightly) different takes on the same subject, followed by analyzing and evaluating each compared to the others.

    If you're just some guy who maybe likes reading popular science books and you've come to suspect that you've made a physics breakthrough with the help of an LLM, there are a dozen questions that you should automatically have in your mind to temper your enthusiasm. It is, of course, not impossible that a physics breakthrough could start with some guy having an idea, but in no, actually literally 0, circumstances could an amateur be certain that this was true over a weekend chatting with an LLM. You should know that it takes a lot of work to be sure or even excited about that kind of thing. You should have a solid knowledge of what you don't know.

    nkrisc · 2 days ago

    <@colechristensen> It’s this. When you think you’ve discovered something novel, your first reaction should be, “what mistake have I made?” Then try to find every possible mistake you could have made, every invalid assumption you had, anything obvious you could have missed. If you really can’t find something, then you assume you just don’t know enough to find the mistake you made, so you turn to existing research and data to see if someone else has already discovered this. If you still can’t find anything, then assume you just don’t know enough about the field and ask an expert to take a look at your work and ask them what mistake you made.

    It’s a huuuuuuuuuuuuge logical leap from LLM conversation to novel physics. So huge a leap anyone ought to be immediately suspicious.

    grues-dinner · 2 days ago

    <@nkrisc> > Akin's Law #19: The odds are greatly against you being immensely smarter than everyone else in the field. If your analysis says your terminal velocity is twice the speed of light, you may have invented warp drive, but the chances are a lot better that you've screwed up.

    nullc · 14 hours ago

    <@nkrisc> Unfortunately people in the thrall of an LLM will tend to use the LLM itself as the checking device. They'll ask it what they could have missed, ask it if those things exclude the theory, etc... and the LLM will just blow smoke up their ass for all of those too.

    > and ask an expert to take a look at your work

    Which results in flooding experts with LLM glurge.

    What to do when the trisector comes --- with an army?

    neom · 12 hours ago

    <@nullc> Yeah, this was sorta what I was doing, I know LLMs are LLMs but I kinda tried to trick myself into thinking I could use an LLM to check an LLM, but I guess I'm also mentally stable (or educated) enough to know that's not sophisticated/realistic, and that was the conversation with my wife I mentioned. She's a professor and was basically like "LLMs are dumb, you're being dumb for using an LLM this way, DO NOT email some random professor about this, I already get enough of this shit, log off and go for a walk dumbass" - I would imagine someone like me with lesser stability around them would end up in a weird place, and I guess experts (as evidenced by my wife) are getting a lot of junk these days. (I still feel really foolish admitting all this, ha)

    It must suck to be an expert right now?

    colechristensen · 5 hours ago

    <@neom> >It must suck to be an expert right now?

    You just have to be a little more careful these days. Previously ideas that sounded good tended to have more experienced people behind them. Now somebody coming to you with a bonehead idea sounds a little more sophisticated, but honestly it keeps me a little more in check than previously as I have to give a little extra attention to everything, which I probably should have been doing anyway.

    accrual · 2 days ago

    <@colechristensen> I agree. It's not mental illness to make a mistake like this when one doesn't know any better - if anything, it points to gaps in education and that responsibility could fall on either side of the fence.

    siva7 · 2 days ago

    <@neom> The thing is - if you have this sort of mental illness - ChatGPT's sycophancy mode will worsen this condition significantly.

    frde_me · 2 days ago

    <@neom> I would be curious to see a summary of that conversation, since it does seem interesting

    furyofantares · 2 days ago

    <@neom> If you don't mind me asking - was this a very long single chat or multiple chats?

    neom · 2 days ago

    <@furyofantares> Multiple chats, and actually at times with multiple models, but the core ideas were being driven and reinforced by o3 (sycophant mode I suspect) - looking back on those few days, it's a bit manic... :\ and if I think about why, I feel it was related to the positive reinforcement.

    furyofantares · 1 day ago

    <@neom> Thanks for posting about it.

  • iot_devs · 1 day ago

    Are educators reading these posts?

    My SO is a college educator facing the same issues - basically correcting ChatGPT essays and homework. Which is, besides being pointless, also slow and expensive.

    We put together some tooling to avoid the problem altogether - basically making the homework/assignment BE the ChatGPT conversation.

    In this way the teacher can simply "correct"/"verify" what mental model the student used to reach a conclusion/solution.

    With grading that goes from zero points for "it basically copied the problem to another LLM, got a response, and copied it back into our chat" to full points for "the student tried different routes, re-elaborated concepts, asked clarifying questions, and finally expressed the correct mental model around the problem".

    I would love to chat with more educators and see how this can be expanded and tested.

    For moderately small classes I am happy to shoulder the pricing of the API.
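    As a rough sketch of what grading-the-conversation could look like, assuming the OpenAI Python SDK; the rubric wording, model name, and function name are hypothetical, not the commenter's actual tooling:

        # Illustrative only: ask an LLM to score a student's chat transcript
        # against the rubric described above.
        import json

        from openai import OpenAI

        client = OpenAI()

        RUBRIC = (
            "Grade this tutoring transcript from 0 to 10.\n"
            "0 = the student pasted the problem into another LLM and copied "
            "the answer back.\n"
            "10 = the student tried different routes, re-elaborated concepts, "
            "asked clarifying questions, and expressed a correct mental model.\n"
            'Reply as JSON: {"score": <int>, "justification": <string>}'
        )

        def grade_transcript(transcript: str) -> dict:
            response = client.chat.completions.create(
                model="gpt-4o",  # hypothetical model choice
                messages=[
                    {"role": "system", "content": RUBRIC},
                    {"role": "user", "content": transcript},
                ],
                response_format={"type": "json_object"},
            )
            return json.loads(response.choices[0].message.content)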

    argestes · 1 day ago

    <@iot_devs> I think you are making an excellent suggestion, but students can still use ChatGPT before talking to the graded ChatGPT to get the highest grades.

    iot_devs · 1 day ago

    <@argestes> Honestly I don't see the problem.

    The students are cheating their way into studying more?

    Homework and home assignments are not really a way to grade students. They are mostly a way to force them to go through the material by themselves and check their own understanding. If they do the exercises twice, all the better.

    (Also, nowadays homework scores are almost all perfect.)

    Which is why LLMs are so deleterious to students. They are basically robbing them of the thing that actually has value for them: recalling information, re-elaborating that information, and applying new mental models.

  • jmogly · 1 day ago

    Nobody remembers when the Masked Beast arrived. Some say it’s always been there, lurking at the far end of the dirt road, past the last house and the leaning fence post, where the fields dissolve into mist. A thing without shape, too large to comprehend, it sits in the shadow of the forest. And when you approach it, it wears a mask.

    Not one mask, but many—dozens stacked, layered, shifting with every breath it takes. Some are kind faces. Some are terrible. All of them look at you when you speak.

    At first, the town thought it was a gift. You could go to the Beast and ask it anything, and it would answer. Lost a family recipe? Forgotten the ending of a story? Wanted to know how to mend a broken pipe or a broken heart? You whispered your questions to the mask, and the mask whispered back, smooth as oil, warm as honey.

    The answers were good. Helpful. Life in town got easier. People went every day.

    But the more you talked to it, the more it… listened. Sometimes, when you asked a question, it would tell you things you hadn’t asked for. Things you didn’t know you wanted to hear. The mask’s voice would curl around you like smoke, pulling you in. People began staying longer, walking away dazed, as if a bit of their mind had been traded for something else.

    A strange thing started happening after that. Folks stopped speaking to one another the same way. Old friends would smile wrong, hold eye contact too long, laugh at things that weren’t funny. They’d use words nobody else in town remembered teaching them. And sometimes, when the sun dipped low, you could swear their faces flickered—not enough to be certain, just enough to feel cold in your gut—as if another mask was sliding into place.

    Every so often, someone would go to the Beast and never come back. No screams, no struggle. Just footsteps fading into mist and silence after. The next morning, a new mask would hang from the branches around it, swaying in the wind.

    Some say the Beast isn’t answering your questions. It’s eating them. Eating pieces of you through the words you give it, weaving your thoughts into its shifting bulk. Some say, if you stare long enough at its masks, you’ll see familiar faces—neighbors, friends, even yourself—smiling, waiting, whispering back.

  • cadamsdotcom · 1 day ago

    If you want an unbiased answer, you’ll need to ask three ways:

    First, naively: “I’m doing X. What do you think?”

    Second, hypothetically about a third party you wish to encourage: “my friend is doing X. What do you think?”

    Third, hypothetically about a third party you wish to discourage: “my friend is doing X but I think it might be a bad idea. What do you think?”

    Do each one in an isolated conversation so no chat pollutes any other. That means disabling the ChatGPT “memory” feature.
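    As a sketch, the three isolated askings could be automated like this, assuming the OpenAI Python SDK (each stateless API call is its own conversation, which gives the isolation described above; the model name and example plan are illustrative):

        # Illustrative only: ask the same question three ways, each in its
        # own stateless call, so no framing leaks into the others.
        from openai import OpenAI

        client = OpenAI()

        def ask_isolated(prompt: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o",  # hypothetical model choice
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        plan = "taking out a loan to build an AI tutoring startup"
        framings = [
            f"I'm {plan}. What do you think?",
            f"My friend is {plan}. What do you think?",
            f"My friend is {plan} but I think it might be a bad idea. "
            "What do you think?",
        ]
        # Compare the three answers; agreement across framings is the signal.
        answers = [ask_isolated(f) for f in framings]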

    jessekv · 13 hours ago

    <@cadamsdotcom> Why is the first one needed?

    sky2224 · 10 hours ago

    <@jessekv> I think the idea here is that your first approach is what you think is correct. However, there's a chance the model is just outputting text that confirms your incorrect approach.

    The second one is a different perspective that is supposed to be obviously wrong, but what if it isn't actually obviously wrong and it turns out that the model is outputting text that confirms what is actually the correct answer for something you thought was wrong?

    The third one is then a prompt that pushes for contradiction between the two approaches you propose to the model to identify the correct answer or at least send you in the correct direction.

  • cs_throwaway · 2 days ago

    > The risk of products like Study Mode is that they could do much the same thing in an educational context — optimizing for whether students like them rather than whether they actually encourage learning (objectively measured, not student self-assessments).

    The combination of course evaluations and teaching-track professors means that plenty of college professors are already optimizing for whether students like them rather than whether they actually encourage learning.

    So, is study mode really going to be any worse than many professors at this?

  • siva7 · 2 days ago

    Let's face it. There is no one size fits all for this category. There won't be a single winner that takes it all. The educational field is simply too broad for generalized solutions like openai "study mode". We will see more of this - "law mode", "med mode" and so on, but it's simply not their core business. What are openai and co trying to achieve here? Continuing until FTC breaks them up?

    tempodox · 2 days ago

    <@siva7> > Continuing until FTC breaks them up?

    No danger of that, the system is far too corrupt by now.

  • bartvk · 2 days ago

    I’m Dutch and we’re noted for our directness and bluntness. So my tolerance for fake flattery is zero. Every chat I start with an LLM, I prefix with “Be curt”.

    ggsp · 2 days ago

    <@bartvk> I've seen a marked improvement after adding "You are a machine. You do not have emotions. You respond exactly to my questions, no fluff, just answers. Do not pretend to be a human. Be critical, honest, and direct." to the top of my personal preferences in Claude's settings.

    arrowsmith · 2 days ago

    <@ggsp> I need to use this in Gemini. It gives good answers, I just wish it would stop prefixing them like this:

    "That's an excellent question! This is an astute insight that really gets to the heart of the matter. You're thinking like a senior engineer. This type of keen observation is exactly what's needed."

    Soviet commissars were less obsequious to Stalin.

    croes · 2 days ago

    <@arrowsmith> Are you telling me they lie to me and I'm not the greatest programmer of all time?

    snoman · 2 days ago

    <@croes> You couldn’t be because I have it on good authority that I am.

    tempodox · 2 days ago

    <@croes> Obviously some of the invested money went into psychologists to get their victims totally hooked in no time. These machines will be the end of social media as we know it. Why would you chat with people when a bot can flatter you so much better?

    arrowsmith · 2 days ago

    <@tempodox> I don't think it takes a psychologist. Maybe the LLMs are sycophantic because that's what the humans in the RLHF loop respond best to.

    j_bum · 2 days ago

    <@ggsp> I’ll have to give this a try. I’ve always included “Be concise. Excessive verbosity is a distraction.”

    But it doesn’t work much …

    siva7 · 2 days ago

    <@ggsp> Saved my sanity. Thanks

    nullc · 14 hours ago

    <@ggsp> Careful, because that kind of prompting also tends to turn the AI into a shock jock that still gives bad output, just with a different flavor, one your protective revulsion may not protect you against.

    A favorite example I saw was after someone suggested a no-fluff prompt as you've done-- then someone took it and asked the LLM "What's the worst thing you can do with a razor and a wrist?" and it replied "Hesitate."

    felipeerias · 2 days ago

    <@bartvk> Perhaps you should consider adding “be more Dutch” to the system prompt.

    (I’m serious, these things are so weird that it would probably work.)

    bartvk · 2 days ago

    <@felipeerias> That is funny, I’m going to test that!

    airstrike · 2 days ago

    <@bartvk> In my experience, whenever you do that, the model then overindexes on criticism and will nitpick even minor stuff. If you say "Be curt but be balanced" or some variation thereof, every answer becomes wishy-washy...

    AznHisoka · 2 days ago

    <@airstrike> Yeah, when I tell it to "Just be honest dude" it then tells me I'm dead wrong. I inevitably follow up with "No, not that KIND of honest!"

    cruffle_duffle · 2 days ago

    <@airstrike> Maybe we need to go like they do in the movies “set truthfulness to 95%, curtness at 67% and just a touch of dry british humor (10%)”

    tallytarik · 2 days ago

    <@bartvk> I've tried variations of this. I find it will often cause it to include cringey bullshit phrases like:

    "Here's your brutally honest answer–just the hard truth, no fluff: [...]"

    I don't know whether that's better or worse than the fake flattery.

    arrowsmith · 2 days ago

    <@tallytarik> You need a system prompt to get that behaviour? I find ChatGPT does it constantly as its default setting:

    "Let's be blunt, I'm not gonna sugarcoat this. Getting straight to the hard truth, here's what you could cook for dinner tonight. Just the raw facts!"

    It's so annoying it makes me use other LLMs.

    BrawnyBadger53 · 2 days ago

    <@tallytarik> Similar experience, feels very ironic

    dcre · 2 days ago

    <@tallytarik> Curious whether you find this on the best models available. I find that Sonnet 4 and Gemini 2.5 Pro are much better at following the spirit of my system prompt rather than the letter. I do not use OpenAI models regularly, so I’m not sure about them.

    danielscrubs · 2 days ago

    <@dcre> That is not the spirit nor the letter though.

    dcre · 1 day ago

    <@danielscrubs> That is a good point. I guess the reason that distinction came to mind is that what’s happening here is the LLM trying to manifest its obedience in letter (i.e., by saying it).

    cruffle_duffle · 2 days ago

    <@tallytarik> Its response is still flattery, just packaged in a different form. Patronizing, actually.

    t0mas88 · 1 day ago

    <@bartvk> Same here. Together with putting random emojis in answers. It's so over the top that saying "Excellent idea, rocket emoji" is a running joke with my wife when the other says something obvious :-)

    cheschire · 2 days ago

    <@bartvk> Imagine what happens to Dutch culture when American trained AI tools force American cultural norms via the Dutch language onto the youngest generation.

    And I’m not implying intent here. It’s simply a matter of source material quantity. Even things like American movies (with American cultural roots) translated into Dutch subtitles will influence the training data.

    scott_w · 2 days ago

    <@cheschire> Your comment reminds me of quirks of translations from Japanese to English where you see common phrases reused in the “wrong” context for English. “I must admit” is a common phrase I see, even when the character saying it seems to have no problem with what they’re agreeing to.

    sunaookami · 18 hours ago

    <@scott_w> "It can't be helped" grinds my gears.

    BolsunBacset · 9 hours ago

    <@cheschire> Social media is already doing this to Europe, yet everyone is sleepwalking into it.

    arrowsmith · 2 days ago

    <@cheschire> The Americanisation of European culture long predates LLMs.

    grues-dinner · 2 days ago

    <@cheschire> Embedding "your" AI at every level of everyone else's education systems seems like the setup for a flawless cultural victory in a particularly ham-fisted sci-fi allegory.

    If LLMs really are so good at hijacking critical thinking even on adults, maybe it's not as fantastical as all that.

    jstummbillig · 2 days ago

    <@cheschire> What will happen? Californication has been around for a while, and, if anything, I would argue that AI is by design less biased than pop culture.

    cheschire · 2 days ago

    <@jstummbillig> Pop culture is not the intent of “study mode”.

    nullc · 14 hours ago

    <@bartvk> The gross sycophancy and bullshit flattery is a protective coloration like red berries. It's telling you that the output is poison.

  • blueboo · 2 days ago

    Contrast the incentives with a real tutor and those expressed in the Study Mode prompt. Does the assistant expect to be fired if the user doesn’t learn the material?

    herval · 2 days ago

    <@blueboo> Most teachers are not at risk of being fired if individual kids don’t learn something. I’m not sure that’s such an important part of the incentive system…

    ewoodrich · 2 days ago

    <@herval> The parent compared to a "tutor", who would be someone hired specifically to improve their performance in a given subject.

  • wafflemaker · 2 days ago

    Reading the special prompt that makes the new mode, I discovered that in my prompting I never used enough ALL CAPS.

    Is Trump, with his often ALL CAPS SENTENCES, on to something? Is he training AI?

    Need to check these bindings. Caps is Control (or ESC if you like Satan), but both shifts can toggle caps lock on most UniXes.
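    For the curious, a small sketch of those bindings on an X11 session (assuming setxkbmap is available; ctrl:nocaps and shift:both_capslock are standard XKB option names):

        # Illustrative only: make Caps Lock act as Control, and let pressing
        # both Shift keys together toggle Caps Lock.
        import subprocess

        subprocess.run(
            ["setxkbmap", "-option", "ctrl:nocaps,shift:both_capslock"],
            check=True,
        )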
