A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 1 Post
  • 61 Comments
Joined 10 months ago
Cake day: June 25th, 2024


  • By the way, you can still run the Yunohost installer on top of your Debian install, if you want to. It’s Debian-based anyway, so it doesn’t really matter whether you use its own install media or run the script on an existing Debian install. Though I feel like adding: if you’re looking for Docker, Yunohost might not be your best choice. It’s made to take control itself, and it doesn’t use containers. Of course you can circumvent that and add Docker containers nonetheless, but that isn’t really the point, and you’d end up dealing with the underlying Debian anyway and just making things more complicated.

    It is a very good solution if you don’t want to deal with the CLI. But it stops being useful once you want too much customization, or unpackaged apps. At least that’s my experience. But that’s kind of always the case: simpler, with more things automated and pre-configured, means less customizability (or more effort to actually customize it).


  • Thanks for your perspective. Sure, AI is here to stay and will flood the internet with slop and arbitrary (mis)information phrased like a factual Wikipedia article, journalism, a genuine user review, or whatever its master chose. And the negative sides of the internet were there long before we had AI to the current extent. I think it is extremely unlikely that the internet will move away from being powered by advertisements, though. That’s the main business model as of today, and I think it will continue that way. Maybe dressed in some new clothes, but social media platforms, Google, etc. still need their income. I wonder how it’ll turn out for the AI companies, though. To my knowledge, they’re currently all powered by hype and investor money, and they’re going to have to find some way to make a profit at some point. Whether that’s going to be ads or having their users pay properly, unlike today, where the majority of people I know use the free tier.



  • Hehe, as the article says, there is an abundance of them. Dozens of (paid) online services… You can do it on your beefy graphics card… And, as per this article, to some degree with your Instagram account. I’ve tried it on my own, and it’ll generate something like internet fanfiction, or have a dialogue with you. It’s a steep learning curve, though, and requires some fiddling. And it was text-only, and I don’t own a gaming computer, so it was unbearably slow. Other than that, I try to avoid Meta’s social media services, and I don’t pay for those kinds of “scientific” experiments, so I wouldn’t know what the voice conversation is like… Maybe someone can enlighten us.


  • Yeah, you’re right. I didn’t want to write a long essay, but I thought about recommending Grok. In my experience, it tries to bullshit people a bit more than other services do. But the tone is different. I found that deep within, it has the same bias towards positivity, though; in my opinion that’s just hidden behind a slapped-on facade. Ultimately it’s similar to slapping a prompt onto ChatGPT, except that Musk may also have baked it into the fine-tuning step.

    I think there are two sides to the coin. The AI is the same either way. It’ll give you somewhere between 50% and 99% correct answers and lie to you the rest of the time, since it’s only an AI. If you make it more appealing to you, you’re more likely to believe both the correct things it generates and the lies. Whether that’s a good or a bad thing depends on what you’re doing. It’s arguably bad if it phrases misinformation to sound like a Wikipedia article. It might be better to make it sound personal, so that once people anthropomorphize it, they won’t switch off their brains. But this is a fundamental limitation of today’s AI: it can do both fact and fiction, and it’ll blur the lines. Yet in order to use it, you can’t simultaneously hate reading its output. I also like that we can change the character. I’m just a bit wary of the whole concept, so I try to use it more to spark my creativity and less to answer my questions about facts. I also have some custom prompts in place so it does things the way I like. Most of the time I’ll tell it something like: it’s a professional author and it wants to help me (an amateur) with my texts and ideas. That way it’ll give more opinions rather than try to be factual. And when I use it for coding some tech demos, I’ll use it as is.
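    The persona-prompt trick described above is usually done by prepending a “system” message to the chat payload before the user’s text. A minimal sketch (the wording of the persona and the helper name are my own illustrations, not from the comment; the message format is the role/content convention used by most chat-style LLM APIs):

    ```python
    # Illustrative persona prompt: frame the assistant as a professional
    # author giving opinions on an amateur's writing, rather than a
    # neutral fact engine. The exact wording here is a made-up example.
    AUTHOR_PERSONA = (
        "You are a professional author helping an amateur writer. "
        "Give opinions and concrete suggestions on their texts and ideas "
        "instead of neutral, encyclopedic answers."
    )

    def with_persona(user_text: str) -> list[dict]:
        """Prepend the persona as a system message to a chat payload."""
        return [
            {"role": "system", "content": AUTHOR_PERSONA},
            {"role": "user", "content": user_text},
        ]

    # The resulting list is what you would pass as `messages` to a
    # chat-completion style API call.
    messages = with_persona("Here is my short story draft: ...")
    ```

    For fact-oriented tasks like the coding tech demos mentioned above, you would simply omit the system message and send the user text on its own.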



  • I’d have to agree: don’t ask ChatGPT why it has changed its tone. It’s almost certain that this is a made-up answer, and you (and everyone who reads this) will end up stupider than before.

    But ChatGPT has always had a tone of speaking. Before this change, it sounded very patronizing to me. And it’d always counterbalance everything. Since the early days it has always told me: you have to look at this side, but also look at that side. And it’d be critical of my mails and say I can’t be blunt but have to phrase my mail in a nicer way…

    So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don’t like the sometimes patronizing tone, and now they’re going for something like “Her”. Idk.

    Ultimately, I don’t think this change accomplishes anything. Now it’ll sound more factual, yet the answers have about the same degree of factuality; they’re just phrased differently. So if you like that better, that’s good. But either way, you’re likely to continue asking it questions, letting it do the thinking, and becoming less of an independent thinker yourself. What it said about critical thinking is correct. But that applies to all AI, regardless of its tone. You’ll also get those negative effects with your preferred tone of speaking.



  • hendrik@palaver.p3x.de to Not The Onion@lemmy.world: Harvard Is an Islamist Outpost (edited, 4 days ago)

    I believe if you’re smart, you don’t publish an opinion piece on this. The entire discussion is just a shit-show. One side mixes empathy for people with anti-Zionism and hatred of Jews; the other side confuses what a university is about and mixes everything with anti-woke resentment. Universities are supposed to teach how to think, not what to think. I agree with the author that tolerating hate is arguably a bad thing to do. It’s just difficult in the context of this discussion, since it’s really about something else.




  • Wasn’t “error-free” one of the undecidable problems in maths / computer science? But I like how they also pay attention to semantics and didn’t choose a clickbaity title. Maybe I should read the paper to see how they did it, and whether it’s more than an AI agent at the same intelligence level guessing whether the code is correct. I mean, surprisingly enough, current AI models usually do a good job of generating syntactically correct code one-shot. My issues with AI coding usually start once things get a bit more complex. Then it often feels like poking at things and copy-pasting various stuff from StackOverflow without really knowing why it doesn’t handle the real-world data, or fails entirely.


  • I’ve also had that. And I’m not even sure whether I want to hold it against them. For some reason there’s an industry-wide effort to muddy the waters and slap “open source” on their products. From the largest company, which chose to have “Open” in its name but opposes transparency with every fibre of its body, to Meta, the current pioneer(?) of “open sourcing” LLMs, to the smaller underdogs who pride themselves on publishing their models that way… They’ve all homed in on the term.

    And lots of journalists and bloggers pick up on it, too. I personally think terms should be well-defined, and open source had a well-defined meaning. I get that it’s complicated with the transformative nature of AI, copyright… But I don’t think reproducibility is in question here at all. Of course we need it; that’s core to something being open. And I don’t even understand why the OSI claims it doesn’t exist… Didn’t we have the datasets available up until LLaMA 1, along with an extensive scientific paper that enabled people to reproduce the model? And LLMs aside, we sometimes have that with other kinds of machine learning…

    (And by the way, this is an old article, from the end of October last year.)




  • Exactly. This is directly opposed to why we do AI in the first place. We want something to drive the Uber without earning a wage. A cheap factory workforce. Generating images without paying some artist $250… If we wanted the alternative, we already have humans available; that’s how the world has worked for quite some time now.

    I’d say us giving AI human rights and reversing 99.9% of what it’s intended for is less likely to happen than the robot apocalypse.