I assume that’s what was being referred to.
Oh man Garak is one of the best characters in Trek. And that’s a competitive list.
A similar phenomenon: knowing you’re going to need to go back and update some older section of code, and then, when you finally get around to it, discovering you wrote it that way to begin with. It’s like… I didn’t think I knew about this approach before…
Yeah but the first season of most shows, especially sitcoms, is usually rough. If it feels like it has any potential at all, I think you should give it at least a second season.
“I am gonna get you so many lizards!” Whenever my wife has already done a chore/task I was intending to do.
I’ve never been able to watch past Rita dying. Colin Hanks as a villain sounds amazing, but whenever I get to her death it just feels too shitty and I can’t bring myself to start that season.
99; I’d call that closest without going over.
Softly. With their words.
Ah yes, the year of “There Will Be Old Men”. But seriously, I agree, but probably in reverse order. No Country may actually be my all-time favorite now that I think about it.
Oh yes. The characters are so great. John C. Reilly’s character especially. And Tom Cruise was never more appropriately cast.
Man if a movie was ever prescient…
Yeah that’s how I feel about ads targeting children (even when the products are intended for children): they are not yet equipped to look at the ads critically and recognize when they’re being manipulated.
Do you have any theories as to why this is the case? I haven’t gone anywhere near it, so I have no idea. I imagine it’s tied up with the way it processes things from a language-first perspective, which I gather is why it’s bad at math. I really don’t understand enough to wrap my head around why we can’t seem to combine LLMs with traditional computational logic.
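For what it’s worth, the usual workaround for this is “tool use”: instead of letting the model predict an arithmetic answer token by token, you route anything that looks like math to an ordinary deterministic evaluator and only send the rest to the model. Here’s a minimal sketch of that idea; the `answer` routing function and the LLM stand-in are my own invention, not any particular library’s API:

```python
import ast
import operator

# A toy "calculator tool": parse an arithmetic expression with Python's AST
# and evaluate it deterministically, rather than having a language model
# guess at the digits.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calc(expr: str) -> float:
    """Safely evaluate expressions built from numbers and + - * /."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    # Route math to the calculator; anything else would go to the model.
    # (The model call is elided here -- this is just the routing idea.)
    try:
        return str(calc(prompt))
    except (ValueError, SyntaxError):
        return "(would be handled by the LLM)"
```

So `answer("2*3+4")` comes back exact every time, while `answer("what is Garak's best episode")` falls through to the language side. Real systems do something much fancier (the model itself decides when to call the tool), but the division of labor is the same.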
Katamari Damacy is the first one.
Oh man that’s… Well done, well done!
Points for “sassy robot.” But you could have described it worse. This was the first one I could identify.
I’m pretty sure I encountered a similar issue on Connect, but I’ve also noticed it on Sync. Might be the result of some Lemmy quirk.
My sense in reading the article was not that the author thinks artificial general intelligence is impossible, but that we’re a lot farther away from it than recent events might lead you to believe. The whole article is about the human tendency to conflate language ability and intelligence, and the author is making the argument both that natural language does not imply understanding of meaning and that those financially invested in current “AI” benefit from the popular assumption that it does. The appearance or perception of intelligence increases the market value of AIs, even if what they’re doing is more analogous to the actions of a very sophisticated parrot.
Edit: all of which is to say, I don’t think the article is asserting that true AI is impossible, just that there’s a lot more to it than smooth language usage. I don’t think she’d say never, but probably that there’s a lot more to figure out—a good deal more than some seem to think—before we get Skynet.
I was raised Catholic, but I’ve been an atheist for—oh fuck I’m old—more than half my life. But… Monastic life seems pretty dope. Why can’t there be a secular order that’s just devoted to knowledge/contemplation for its own sake (or the betterment of humanity)? I know it kind of sounds like I’m describing a university, but I mean with the personal discipline, strong communal bond, and simple lifestyle.
I feel compelled to point out that “back door man” was already a common expression in blues lyrics.