Cake day: June 11th, 2023

  • and your source measured the effects in one single area that cathartic theory is supposed to apply to, not all of them.

    your source in no way supports the claim that the observed effects apply to anything other than aggressive behavior.

    i understand that the theory supposedly applies to other areas as well, but as you so helpfully pointed out: the theory doesn’t seem to hold up.

    so either A: the theory is wrong, and so the association between aggression and sexuality needs to be called into question also;

    or B: the theory isn’t wrong after all.

    you are now claiming that the theory is wrong, but at the same time, the theory is totally correct! (when it’s convenient to you, that is)

    so which is it now? is the theory correct? then your source must be wrong or irrelevant.

    or is the theory wrong? then the claim of a link between sexuality and aggression is also without support, until you provide a source for that claim.

    you can’t have it both ways, but you’re sure trying to.







  • not necessarily, but it can be a good idea to have a distributed, tamper-proof ledger of transactions.

    that way anyone can provide proof for basically anything to do with the service: payment, drive, location, etc.

    it might also have advantages from a security perspective for riders and drivers.

    there are advantages. they're not strictly necessary, but they may well be the best option for a distributed network (i.e. no central server infrastructure, at least not beyond a simple software repository for downloads/updates)
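    to make the "tamper-proof ledger" idea concrete, here's a toy sketch (my own illustration, not anything an actual ride-sharing app uses): each entry stores the hash of the previous one, so altering any past record breaks the chain and is immediately detectable. a real distributed ledger would additionally need replication and consensus.

```python
# toy tamper-evident ledger: each entry carries the hash of the previous entry,
# so editing any old record invalidates every entry that follows it.
import hashlib
import json

def entry_hash(prev: str, record: dict) -> str:
    # hash a canonical JSON encoding so the digest is deterministic
    payload = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"prev": prev, "record": record,
                   "hash": entry_hash(prev, record)})

def verify(ledger: list) -> bool:
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev:
            return False  # chain link broken
        if entry["hash"] != entry_hash(entry["prev"], entry["record"]):
            return False  # record was altered after the fact
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"type": "payment", "amount": 12.5})
append(ledger, {"type": "drive", "from": "A", "to": "B"})
assert verify(ledger)

ledger[0]["record"]["amount"] = 99.9  # tamper with an old record
assert not verify(ledger)
```

    the point is just that any party holding a copy of the chain can prove whether a payment or drive record was altered, without trusting a central server.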






  • i looked it over and … holy mother of strawman.

    that’s so NOT related to what I’ve been saying at all.

    i never said anything about the advances in AI, or how it’s not really AI because it’s just a computer program, or anything of the sort.

    my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.

    my argument isn’t even related to algorithms, programs, or machines.

    what these tools do is not intelligence: it’s mimicry.

    that’s the correct word for what these systems are capable of. mimicry.

    intelligence has properties that are simply not exhibited by these systems, THAT’S why it’s not AI.

    call it what it is, not what it could become, might become, will become. because that’s what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.

    the wiki talks about people shifting goalposts in order to “dismiss the advances in AI development”, but that’s not what this is. i haven’t changed what intelligence means; you did! you moved the goalposts!

    I’m not denying progress, I’m denying the claim that the goal has been reached!

    that’s an entirely different argument!

    all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.

    calling what we have currently AI is wrong, by definition; it’s like saying a single neuron is a brain, or that a drop of water is an ocean!

    just because two things share some characteristics, some traits, or because one is a subset of the other, doesn’t mean that they are the exact same thing! that’s ridiculous!

    the definition of AI hasn’t changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that’s not ME moving goal posts, it’s you.

    you said a definition from 70 years ago is “old” and therefore irrelevant, but that’s a laughably weak argument in general, and an even weaker one in a scientific context.

    is the Pythagorean Theorem suddenly wrong because it’s ~2500 years old?

    ridiculous.
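    to illustrate the mimicry point above with a deliberately dumb example (mine, not from any particular system): a character-level Markov chain reproduces the surface statistics of its training text with zero model of meaning. it's obviously not what an LLM is internally, but it shows how convincing-looking output can come from pure statistics.

```python
# toy "mimicry" demo: a character-level Markov chain learns which character
# tends to follow each 3-character context, then emits statistically
# plausible text with no understanding whatsoever.
import random
from collections import defaultdict

ORDER = 3  # context length in characters

def train(text: str) -> dict:
    model = defaultdict(list)
    for i in range(len(text) - ORDER):
        model[text[i:i + ORDER]].append(text[i + ORDER])
    return model

def generate(model: dict, seed: str, length: int = 60) -> str:
    out = seed
    for _ in range(length):
        followers = model.get(out[-ORDER:])
        if not followers:
            break  # unseen context: nothing to mimic
        out += random.choice(followers)
    return out

corpus = "the cat sat on the mat. the cat ate the rat. the rat ran."
model = train(corpus)
print(generate(model, "the"))  # surface-plausible text, no comprehension
```

    everything it "says" is stitched together from patterns in the input; scale that idea up by many orders of magnitude and you get impressive mimicry, not intelligence.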


  • just because the marketing idiots keep calling it AI, doesn’t mean it IS AI.

    words have meaning; i hope we agree on that.

    what’s around nowadays cannot be called AI, because it’s not intelligence by any definition.

    imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:

    “this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!”

    would you go:

    “oh, wow, i guess i need to reconsider what a wheel is, because that’s what the salesperson said is the future!”

    or would you go:

    “that’s idiotic. this obviously isn’t a wheel and this guy’s a scammer.”

    if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven’t invented intelligence, you’re just lying to people. that’s all it is.

    the current mess of calling every fancy spreadsheet an “AI” is purely idiots in fancy suits buying shit they don’t understand from other fancy suits exploiting that ignorance.

    there is no conspiracy here, because it doesn’t require a conspiracy; only idiocy.

    p.s.: you’re not the only one here with university credentials…i don’t really want to bring those up, because it feels like devolving into a dick measuring contest. let’s just say I’ve done programming on industrial ML systems during my bachelor’s, and leave it at that.


  • perceptual learning, memory organization and critical reasoning

    i mean…by that definition nothing currently in existence deserves to be called “AI”.

    none of the current systems do anything remotely approaching “perceptual learning, memory organization, and critical reasoning”.

    they all require pre-processed and/or externally supplied inputs for training/learning (so the opposite of perceptual), none of them really does memory organization, and none is capable of critical reasoning.

    so OP’s original question remains:

    why is it called “AI”, when it plainly is not?

    (my bet is on the faceless suits deciding it makes them money to call everything “AI”, even though it’s a straight up lie)


  • actually, the law leaves remarkably little room for interpretation in this case.

    here’s the law in full, emphasis mine:

    Strafgesetzbuch (StGB) § 202a Ausspähen von Daten [data espionage] (1) Whoever, without authorization and BY OVERCOMING THE ACCESS PROTECTION, obtains access for themselves or another to data that is not intended for them and that is SPECIALLY SECURED against unauthorized access, shall be punished with imprisonment of up to three years or a fine. (2) Data within the meaning of subsection (1) means only data that is stored or transmitted electronically, magnetically, or in any other way not directly perceptible.

    the text is crystal clear that security measures need to be “overcome” in order for a crime to have been committed.

    it is also obvious that cleartext passwords are NOT a “security measure” in any sense of the word, but especially not in this case, where the law specifically says that the data in question has to have been “specially secured”. that was not the case here, as evidenced by the fact that the defendant had easy access to the data in question.

    this is blatant misuse of the law.

    quite unlike with physical theft, the data law makes no attempt to take the intent of the person into account, which is immediately and obviously ridiculous.

    you mentioned snooping around in a stranger’s car, and that’s a good comparison!

    you know what you definitely couldn’t be charged with in the example you gave? breaking and entering!

    because breaking and entering requires (in germany at least) that you gained access through illegal means (i.e.: literally broke in, as opposed to finding the key already in the lock).

    but treating it as though he “broke in” is essentially what happened in this case, and that is what’s wrong with it!

    most people agree he shouldn’t have tried to enter the PW.

    what has large parts of the professional IT world up in arms is the way the law was applied, not whether there was a violation of the law. (though most people in IT, myself included, think this sort of “hacking” shouldn’t be punishable at all when it’s done solely to find and report vulnerabilities, which makes a lot of sense)
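    for contrast, here's a sketch (my own, using only the python standard library) of what an actual "security measure" for stored passwords looks like: a salted key-derivation hash that would have to be brute-forced, i.e. "overcome", as opposed to a cleartext file that merely has to be read.

```python
# salted password hashing vs. plaintext storage: only the salt and the
# derived digest are ever written to disk, never the password itself.
import hashlib
import hmac
import os

def store(password: str) -> tuple:
    salt = os.urandom(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(candidate, digest)

salt, digest = store("hunter2")
assert check("hunter2", salt, digest)
assert not check("wrong", salt, digest)
```

    reading a file of hashes like these gets an attacker nothing without an expensive brute-force attack; reading a file of cleartext passwords gets them everything. that's the difference between "specially secured" and not.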


  • actually, that’s not what the law says.

    the law says that “overcoming” security measures is a crime. nothing was “overcome”.

    plaintext is simply not a “security measure” and the law was applied wrong.

    there may have been some form of infringement in regards to privacy or sensitive data or whatever, but it definitely wasn’t “hacking” of any kind.

    just like it isn’t “hacking” to browse someone’s computer files when they leave a device unlocked and accessible to anyone. invasion of privacy? sure. but not hacking.

    and the law as written (§202a StGB) definitely requires that security measures be circumvented in order for it to apply.

    that’s the problem with the case!

    not that the guy overstepped his bounds, but that the law was applied blatantly wrong and no due diligence was used in determining the outcome of the case.