• 8 Posts
  • 71 Comments
Joined 1 year ago
Cake day: September 29th, 2024

  • This would do two things. One, it would (possibly) prove that AI cannot fully replace human writers. Two (and not mutually exclusive to the previous point), it would give you an alternate-reality version of the first story, and that could be interesting.

    this is just “imagine if chatbots were actually useful” fan-fiction

    who the hell would actually want to read both the real King story and the LLM slop version?

    at best you’d have LLM fanboys ask their chatbot to summarize the differences between the two, and stroke their neckbeards and say “hmm, isn’t that interesting”

    4 emdashes in that paragraph, btw. did you write those yourself?


  • This is an inflammatory way of saying the guy got served papers.

    ehh…yes and no.

    they could have served the subpoena using registered mail.

    or they could have used a civilian process server.

    instead they chose to have a sheriff’s deputy do it.

    from the guy’s twitter thread:

    OpenAI went beyond just subpoenaing Encode about Elon. OpenAI could (and did!) send a subpoena to Encode’s corporate address asking about our funders or communications with Elon (which don’t exist).

    If OpenAI had stopped there, maybe you could argue it was in good faith.

    But they didn’t stop there.

    They also sent a sheriff’s deputy to my home and asked for me to turn over private texts and emails with CA legislators, college students, and former OAI employees.

    This is not normal. OpenAI used an unrelated lawsuit to intimidate advocates of a bill trying to regulate them. While the bill was still being debated.

    in context, the subpoena and the way in which it was served sure smells like an attempt at intimidation.


  • If it had the power to do so it would have killed someone

    right…the problem isn’t the chatbot, it’s the people giving the chatbot power and the ability to affect the real world.

    thought experiment: I’m paranoid about home security, so I set up a booby-trap in my front yard, such that if someone walks through a laser tripwire they get shot with a gun.

    if it shoots a UPS delivery driver, I am obviously the person culpable for that.

    now, I add a camera to the setup, and configure an “AI” to detect people dressed in UPS uniforms and avoid pulling the trigger in that case.

    but my “AI” is buggy, so a UPS driver gets shot anyway.

    if a news article about that claimed “AI attempts to kill UPS driver” it would obviously be bullshit.

    the actual problem is that I took a loaded gun and gave a computer program the ability to pull the trigger. it doesn’t really matter whether that computer program was 100 lines of Python running on a Raspberry Pi or an “AI” running on 100 GPUs in some datacenter somewhere.



  • Why TF do Kindles and the like even need to exist? I read on my iPhone while the audiobook is playing.

    if you prefer to read on your phone, by all means read on your phone.

    but making the jump from that to “e-readers should not exist” is fucking stupid.

    Do Not Disturb and self control are a thing and have never been a problem for me.

    congratulations. would you like a gold star.

    This isn’t rocket science.

    I have ADHD. sometimes regulating my attention is rocket science.

    obviously that’s not the only reason: I have neurotypical friends and family who love their e-readers, and I’m sure there are people with ADHD who prefer reading on their phones.

    remember that there are 8 billion people in the world, and not all of them have the exact same preferences as you do. that isn’t rocket science.



  • “Nurses and medical staff are really overworked, under a lot of pressure, and unfortunately, a lot of times they don’t have capacity to provide engagement and connection to patients,” said Karen Khachikyan, CEO of Expper Technologies, which developed the robot.

    tapping the sign: every “AI”-related medical invention is built around the same assumption: there are too few medical staff, they’re all overworked, and changing that is not feasible. so we have to invest millions of dollars in hospital robots, because investing millions of dollars in actually paying workers would be too hard. (also, robots never unionize)

    Robin is about 30% autonomous, while a team of operators working remotely controls the rest under the watchful eyes of clinical staff.

    30%…according to the company itself. they have a strong incentive to exaggerate, and they’re not publishing any data on how they arrived at that figure, so it can’t be independently verified.

    it sounds like they took one of the telepresence robots that have been around for 10+ years, slapped ChatGPT into it, and are now trying to fundraise on the hype of being an “AI” company. it’s a good grift if you can make it work.


  • Asshole cars for mostly assholes

    from the article:

    Some firms have reportedly already laid off staff, with the Unite union claiming that workers in the JLR supply chain “are being laid off with reduced or zero pay.” Some have been told to “sign up” for government benefits, the union claims.

    JLR, which is owned by India’s Tata Motors, is one of the UK’s biggest employers, with around 32,800 people directly employed in the country. Stats on the company’s website also claim it supports another 104,000 jobs through its UK supply chain and another 62,900 jobs “through wage-induced spending.”

    regardless of your opinion about the cars or the people who drive them…thousands of people getting furloughed or laid off suddenly is bad.



  • “In other words, these conversations with a social robot gave caregivers something that they sorely lack – a space to talk about themselves”

    so they’re doing a job that’s demanding, thankless, often unpaid (in the case of this study, entirely unpaid, because they exclusively recruited “informal” caregivers)

    and…it turns out talking about it improves their mood?

    yeah, that’s groundbreaking. no one could have foreseen it.

    if you did this with actual humans it’d be “lol yeah that’s just therapy and/or having friends” and you wouldn’t get it published in a scientific paper.

    it’s written up as a “robotics” story but I’m not sure how it being a “robot” changes anything compared to a chatbot. it seems like this is yet another “discovery” of “hey you can talk to an LLM chatbot and it kinda sorta looks like therapy, if you squint at it”.

    (tapping the sign about why “AI therapy” is stupid and trying to address the wrong problem)



  • I haven’t. It was omitted from the article in question. I stand corrected.

    keep standing…because here’s the 5th paragraph of the article:

    Political analyst Matthew Dowd was fired from MSNBC on Wednesday after speaking about Kirk’s death on air. During a broadcast on Wednesday following the shooting, anchor Katy Tur asked Dowd about “the environment in which a shooting like this happens,” according to Variety. Dowd answered: “He’s been one of the most divisive, especially divisive younger figures in this, who is constantly sort of pushing this sort of hate speech or sort of aimed at certain groups. And I always go back to, hateful thoughts lead to hateful words, which then lead to hateful actions. And I think that is the environment we are in. You can’t stop with these sort of awful thoughts you have and then saying these awful words and not expect awful actions to take place. And that’s the unfortunate environment we are in.”


  • a contributor who made an unacceptable and insensitive comment about this horrific event

    have you read the actual statement that got him fired?

    from wikipedia:

    On September 10, 2025, commenting on the killing of Charlie Kirk, Dowd said on-air, “He’s been one of the most divisive, especially divisive younger figures in this, who is constantly sort of pushing this sort of hate speech or sort of aimed at certain groups. And I always go back to, hateful thoughts lead to hateful words, which then lead to hateful actions. And I think that is the environment we are in. You can’t stop with these sort of awful thoughts you have and then saying these awful words and not expect awful actions to take place. And that’s the unfortunate environment we are in.” Dowd also speculated that the shooter may have been a supporter.

    you can agree or disagree with the decision to fire him (I’m not shedding any tears; Dowd was the chief strategist for the 2004 Bush re-election campaign, and it’s ludicrous that he was working for a supposedly “progressive” network like MSNBC in the first place)

    but characterizing that statement as “celebrating murder” is just bullshit.



  • My best guess is that you were going for “hypothetical.”

    no, if I meant hypothetical I would have said hypothetical. notice that I gave two hypotheticals - Brinnon-Redmond and Tacoma-Redmond. only the Brinnon one was pathological.

    let’s go back to 9th grade Advanced English and diagram out my comment. that sentence is in a paragraph, the topic of which is “some shit about Seattle’s geography that people who’ve never lived here probably don’t know”. notice I’m talking about geography. I wasn’t saying anything about Brinnon’s population, or the likelihood of its residents working at Microsoft. that was entirely words you put into my mouth and then decided you disagreed with.

    if you think pathological is the wrong word choice there, then no I don’t think you actually understand what it means, at least not in the context I was using it. from wikipedia:

    In computer science, pathological has a slightly different sense with regard to the study of algorithms. Here, an input (or set of inputs) is said to be pathological if it causes atypical behavior from the algorithm, such as a violation of its average case complexity, or even its correctness.

    there’s crow-flies distance and there’s driving distance, and obviously driving distance is always longer, but usually not that much longer. playing around with Google Maps again, Seattle-Tacoma is 25 miles crow-flies but 37 miles driving, for a ratio of 1.5. that seems likely to be about average. the Brinnon-Redmond distance, without the ferry, gives you a ~3.7 ratio. that’s an input that causes significantly worse performance than the average case. it’s pathological.

    the closest synonym to pathological in this context would be “worst-case”, but that would be subtly incorrect, because then I would be claiming that Brinnon has the longest driving distance of all possible commutes to Redmond within a 50-mile crow-flies bubble. you’d need some fancy GIS software to find that, not just me poking around for a few minutes in Google Maps.

    (and this is the technology sub-lemmy, in a thread about something that will mostly affect software engineers, and planning out a driving commute is a classic example of a pathfinding algorithm…using “pathological” from the computer science context here is actually an extremely cromulent word choice)

    there seems to be a recurring pattern of you responding to me, making up shit I didn’t actually say, and then nitpicking about it. recently you accused me of “trying to both-sides Nazis”. please stop doing that.