Wow you just shined a ton of light on a problem my company had. We wanted to implement a medical imaging system from one of their subsidiaries, and it took an average of 3 months for the salesperson to respond to EACH of our emails
Proms were around for ~50 years before we started seeing “promposals”, where guys would ask girls out with three-minute-long choreographed dances in the middle of the quad for the whole school to see & record for social media. I’m not saying it’s stupid to put effort into asking someone, it can definitely be cute, but it can also be ultra cringe if you take it too far
Dude I experience this PLUS my mind just has to have a song constantly playing internally, so even if I don’t actually play the song, my mind will beat it to death if it’s catchy enough
I know Gnome is in your less important list, but Wayland is in your important list, so I’ll recommend KDE Neon. It’s Ubuntu without snaps and moronic auto updates, so it really just feels like a more desktop-ready Debian
the concentration of assholes is always going up
True, but this isn’t a natural phenomenon, it’s a result of engagement-based ranking algorithms. Assholes attract engagement by starting flame wars and the like, so front page algorithms push them to the top.
Before social media, forums were popular and their sorting was simply by most recently updated. I think this is part of what made the internet more fun: instead of websites trying to guess what you would like most, you were given a practically random, diverse view of everything.
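To make the contrast concrete, here’s a toy Python sketch (thread titles, timestamps, and engagement numbers are all made up): classic forums bump whatever was replied to most recently, while engagement-ranked feeds surface whatever draws the most reactions.

```python
from dataclasses import dataclass

@dataclass
class Thread:
    title: str
    last_post_ts: float   # unix timestamp of the most recent reply
    engagement: int       # replies + reactions, the signal modern feeds optimize

threads = [
    Thread("Trip report: hiking the PCT", last_post_ts=1_700_000_500, engagement=4),
    Thread("FLAME WAR: tabs vs spaces", last_post_ts=1_700_000_100, engagement=250),
    Thread("Help with a 3D printer jam", last_post_ts=1_700_000_900, engagement=12),
]

# Classic forum "bump" ordering: whatever was replied to last goes on top.
forum_order = sorted(threads, key=lambda t: t.last_post_ts, reverse=True)

# Engagement-based ranking: the flame war wins regardless of recency.
feed_order = sorted(threads, key=lambda t: t.engagement, reverse=True)

print(forum_order[0].title)  # Help with a 3D printer jam
print(feed_order[0].title)   # FLAME WAR: tabs vs spaces
```

Same three threads, two completely different front pages, and the loudest thread only wins under the second sort.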
Why I oughta
Contrary to what CNN says, 4chan is not an alt-right community. In fact, most of those idiots have left since 2016.
If you give it a chance and check out a range of boards, you will find it’s fairly diverse. Like any other online community, it comes down to finding which boards suit you best
Idk if this has been confirmed, but I strongly suspect the current desktop versions of Office apps are just Electron-style wrappers around the web versions. I switched from Windows to Linux about a year ago and have found the web apps to be perfectly sufficient
Audiobooks from your local library 🤓
The government has already stepped in several times. If you’re in the mood to get mad, read up on the results of these interventions. Basically, Boeing was almost forced to deal with actual oversight, but was able to convince the government at the last minute that they could handle the oversight themselves internally (thanks to the wonderful process of lobbying of course)
Work.
Early in my career, I made the mistake of revealing to my employers that I’m competent at my job. More and more work flowed onto my plate and before long, I was assigned tasks that were supposed to go to seniors. So, the seniors received almost double my salary while they enjoyed more open schedules since I was doing my work + some of theirs.
It’s simply not worth it to go above and beyond at work, unless it’s your own business.
Ok but before you go, just want to make sure you know that this statement of yours is incorrect:
“In the strictest technical terms AI, ML and Deep Learning are district, and they have specific applications”
Actually, they are not the distinct, mutually exclusive fields you claim they are. ML is a subset of AI, and Deep Learning is a subset of ML. AI is a very broad term for programs that emulate human perception and learning. As you can see in the last intro paragraph of the AI Wikipedia page (whoa, another source! aren’t these cool?), some examples of AI tools are listed:
“including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics”
Some of these - mathematical optimization, formal logic, statistics, and artificial neural networks - comprise the field known as machine learning. If you’ll remember from my earlier citation about artificial neural networks, “deep learning” is when artificial neural networks have more than one hidden layer. Thus, DL is a subset of ML is a subset of AI (wow, sources are even cooler when there’s multiple of them that you can logically chain together! knowledge is fun).
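If it helps, the whole argument fits in a few lines of Python. The technique names below are an illustrative sample, not an exhaustive taxonomy:

```python
# Illustrative sample of techniques, not an exhaustive taxonomy.
AI = {"search", "optimization", "formal logic", "neural networks", "statistics"}
ML = {"optimization", "formal logic", "neural networks", "statistics"}
DL = {"neural networks"}  # specifically, networks with 2+ hidden layers

# Chained subset check: DL ⊆ ML ⊆ AI
print(DL <= ML <= AI)  # True
```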
Anyways, good day :)
When you want to cite sources like me instead of making personal attacks, I’ll be here 🙂
https://en.m.wikipedia.org/wiki/Large_language_model
“LLMs are artificial neural networks”
https://en.m.wikipedia.org/wiki/Neural_network_(machine_learning)
“A network is typically called a deep neural network if it has at least 2 hidden layers”
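For anyone following along, here’s a minimal numpy sketch of what that definition means in practice (layer sizes and weights are arbitrary, made up for illustration): two hidden layers of plain matrix math, which is all it takes to qualify as “deep”.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

x  = rng.normal(size=(1, 4))   # input
W1 = rng.normal(size=(4, 8))   # hidden layer 1
W2 = rng.normal(size=(8, 8))   # hidden layer 2 -> this second one makes it "deep"
W3 = rng.normal(size=(8, 2))   # output layer

h1  = relu(x @ W1)
h2  = relu(h1 @ W2)
out = h2 @ W3                  # linear algebra end to end, nothing mystical

print(out.shape)  # (1, 2)
```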
Sorry, it’s just that I work in a field where distinctions are grounded in math and/or logic, while you’re distinguishing AI-based from non-AI-based image interpolation based on opinion and subjective observation
Interesting example, because tickets issued by automated cameras aren’t enforced in most places in the US. You can safely ignore those tickets and the police won’t do anything about it because they know how faulty these systems are and most of the cameras are owned by private companies anyway.
“Readable” is a matter of subjective interpretation, so again, I’m confused about how exactly you’re distinguishing good & pure fictional pixels from bad & evil fictional pixels
Normie, layman… as you’ve pointed out, it’s difficult to use these words without sounding condescending (which I didn’t mean to be). The media using words like “hallucinate” to describe linear algebra is necessary because most people just don’t know enough math to understand the fundamentals of deep learning - which is completely fine, people can’t know everything and everyone has their own specialties. But any time you simplify science so that it can be digestible by the masses, you lose critical information in the process, which can sometimes be harmfully misleading.
Both insert pixels that didn’t exist before, so where do we draw the line of how much of that is acceptable?
Everyone uses the word “hallucinate” when describing visual AI because it’s normie-friendly and cool sounding, but the results are a product of math. Very complex math, yes, but computers aren’t taking drugs and randomly pooping out images because computers can’t do anything truly random.
You know what else uses math? Basically every image modification algorithm, including resizing. I wonder how this judge would feel about viewing a 720p video on a 4k courtroom TV because “hallucination” takes place in that case too.
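Case in point, here’s a toy sketch of bilinear interpolation, the bread and butter of ordinary image resizing (the pixel values are made up for illustration): the center value it produces was never captured by any sensor.

```python
def bilerp(p00, p10, p01, p11, fx, fy):
    """Interpolate inside a 2x2 pixel neighborhood at fractional position (fx, fy)."""
    top = p00 * (1 - fx) + p10 * fx
    bot = p01 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bot * fy

# Four real sensor samples...
corners = (0, 100, 50, 200)

# ...and a brand-new value at the center that no camera ever recorded:
center = bilerp(*corners, fx=0.5, fy=0.5)
print(center)  # 87.5
```

Every upscale from 720p to 4K fabricates millions of values exactly like that one; it’s just math that everyone has quietly agreed is fine.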
Considering how much Google has entrenched itself in the Internet (see the Manifest V3 fiasco), I would argue that creating a new browser amounts to forking the web