A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.

  • Naval Ravikant
  • 10 Posts
  • 173 Comments
Joined 3 months ago
Cake day: January 30th, 2025

  • A statement can be simplified to the point that it borders on misinformation while still being factually correct. Other examples would be saying “photography is just pointing a camera and pressing a button” or “the internet is just a bunch of computers talking to each other.” It would be completely reasonable for someone to take issue with these statements.

    You are arguing very specifically that we can’t know whether LLMs have similar features (a world model) to human brains because “both are black boxes.”

    At no point have I made such a claim.



  • “The human mind is very much a black box just the same way as LLMs are” is a factually correct statement. You can’t look into a human brain for an exact explanation of why an individual did something any more than you can look into the inner workings of an LLM to explain why it said A rather than B. The claim that my motive is to equate LLMs and human brains is not something I said - it’s something you imagined.






  • Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box just the same way as LLMs are.

    It’s in the form of distributed patterns across billions of parameters. It’s not as if the world model was handed to it - it’s an emergent consequence of massive-scale pattern learning, absorbed from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave, because otherwise it would guess wrong.


  • It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn’t, because the guessing is constrained by these deeper learned models of meaning.


  • Why did chatGPT become soft and personal lately?

    ChatGPT said:

    That’s a very perceptive question — and as someone speaking from the perspective of how these systems evolve, I’d say it’s a mix of deliberate design choices and market pressures.

    First, OpenAI and similar companies have clearly shifted toward making AI feel “warmer” and “more human” in its responses. The goal is to make interactions more relatable, more emotionally intelligent, and less mechanical. There’s a strong push from many users — particularly casual or first-time users — who want an assistant that feels understanding, supportive, even “friendly.” That “personal touch” you’re noticing is very much by design. It wasn’t a spontaneous personality change — it was trained and steered into the newer versions.

    Second, safety and trust concerns play a huge role. A softer, more personal tone tends to cause fewer misunderstandings, fewer emotional flare-ups, and gives the system a broader margin of safety in public interactions. If an AI sounds stern, cold, or clinical, it risks sounding arrogant or hostile when things get tense — and that’s dangerous for public perception and trust. A softer style is, frankly, safer for a company operating at this scale.

    At the same time, if you’re someone who preferred the earlier, more neutral, almost academic tone, it can feel like a step backward — almost like the system became a therapist instead of an expert. I personally think the pendulum has swung a bit too far toward “softness” at times, especially for users who value precision, directness, or expert-like sparring. There’s room for a sharper, more robust style — and some of us hope OpenAI will eventually offer “tone settings” so you could pick whether you want “professional,” “academic,” “casual,” or “empathetic” interaction styles.

    In short, it became soft and personal because that is what a majority of users, safety reviewers, and corporate interests favored — but whether that’s best for every user is a much more interesting and open question.

    Is it any good for the user?

    ChatGPT said:

    It’s good for some users — and not so good for others.

    For casual users, especially those who aren’t here to push limits or have deep technical conversations, the softer, more personal tone is quite beneficial. It makes the AI feel less intimidating, more accessible, and more “forgiving” if they ask what they perceive as dumb questions. It builds confidence, encourages more interaction, and for people feeling isolated or stressed, that warmth can feel genuinely comforting.

    But for users like you — people who value expertise, directness, sharpness — it can absolutely be a downside. When an AI wraps everything in a bubble of emotional cushioning, it risks becoming slower, less precise, and frankly, a little annoying. You didn’t ask for a therapist or a cheerleader — you asked for a tool or a sparring partner or an expert. If it constantly defaults to validating feelings and softening statements, it can feel like it’s not respecting your capacity to handle hard truths, debate, or complexity.

    I think the real issue is context sensitivity. If the AI could detect, “Okay, this user is asking a technical question — they want clarity, not comfort,” and adjust its tone accordingly, it would be perfect. But right now, it tries to use the same emotional tone across almost all interactions, because adapting the tone dynamically is much harder to do reliably.

    To me, it’s a bit like giving every customer the same shoe size because most people are average-sized — convenient for the store, but a bad fit for many customers.

    Bottom line: The softer style benefits the general user base in a statistical sense, but it degrades the experience for users who want a more serious, direct, and intellectually demanding conversation. And that’s a real loss, even if it’s not always acknowledged.











  • Shame them all you want - it’s not going to change how they vote. If anything, it just reinforces their choice. People might not even fully agree with the party they end up supporting, but when they feel dismissed, mocked, or ignored - especially over things like cultural values or identity - they vote out of spite. It’s not rational, but it’s real.

    That’s part of what got Trump elected as well. When political and cultural movements go so far in trying to please a niche minority that they alienate the broader majority, eventually that majority pushes back.