I’m fairly sure it’s deficiencies in StatCounter’s measurement that’s accounting for it. Statistical noise, basically.
Yeah, that’s what I mean - it’s not that the file size reduction is minimal; it’s that the benefits it brings are fine, but not earth-shattering.
Oh! Note that in Settings under Network, there’s also a VPN setting that allows you to manually configure a VPN. It has an “Import from file…” option, so presumably, there’s a way to obtain a config file that should make it work. If not, knowing which options to set might work as well.
A VPN is definitely an example of software you should use rpm-ostree to install.
I think it’s fine if you use rpm-ostree for it, but it’s not necessarily required. I recently found out that the Mozilla VPN developers are experimenting (!) with building a Flatpak, and having tried it myself, it works very well.
What is it with this obsession with JPEG-XL? I keep seeing it mentioned on lots of threads, but as a user, the benefits seem marginal? Like: would be nice, but I’d expect more significant benefits from something that’s brought up this often - so which benefits am I missing?
It’s also clearly still in development and doesn’t really work well yet, so while fun, probably not something you’ll want to use yet. It’s not even at the point where reporting bugs makes sense.
Yes, every browser caches resources that multiple pages of the same site use, unless the site instructs it not to.
That said, almost every modern browser no longer shares those caches between different websites, to avoid giving sites a mechanism for sharing data with each other. This means it is no longer beneficial for websites to use CDNs for caching, if it ever was - in practice, very few CDN resources were actually shared between different websites anyway, since they all depended on different versions or different CDNs.
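That cache partitioning can be sketched roughly like this (a hypothetical model for illustration, not actual browser code): the cache key pairs the top-level site with the resource URL, so the same CDN URL fetched from two different sites produces two separate cache entries.

```python
# Hypothetical sketch of a partitioned HTTP cache (not real browser code).
# The cache key pairs the top-level site with the resource URL, so the
# same CDN resource cached for site A is NOT reused when site B loads it.

cache = {}

def fetch(top_level_site, resource_url):
    key = (top_level_site, resource_url)  # partitioned cache key
    if key in cache:
        return cache[key], "cache hit"
    body = f"<contents of {resource_url}>"  # pretend network fetch
    cache[key] = body
    return body, "network fetch"

# Same CDN script, loaded from two different top-level sites:
_, first = fetch("news.example", "https://cdn.example/jquery.js")
_, second = fetch("shop.example", "https://cdn.example/jquery.js")
print(first, "/", second)  # both are network fetches - no cross-site reuse
```

With an unpartitioned cache, the key would be just the URL and the second load would be a cache hit - which is exactly the cross-site signal partitioning is designed to remove.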
The best thing you can do is to not mess with the settings and leave them at their defaults; otherwise, the mere fact that some data isn’t available already makes you stand out, in addition to breaking some websites.
And as for taking another crack at it, this time hopefully in a way that won’t confuse non-users, here’s some interesting followup looking for input: https://connect.mozilla.org/t5/discussions/how-can-firefox-create-the-best-support-for-web-apps-on-the/m-p/60561
The AMA was June 13th, the acquisition news was posted June 16th.
It’s on the roadmap, though I imagine doing it properly is going to take a while - the test build was very rough, just to verify whether it was even realistic.
As @denschub@schub.social always emphasises: make sure to file a report at https://webcompat.com!
We ask everyone to file their reports, because all reports are really useful. Even if we don’t respond to every single thing you report, it’s a signal that we’re processing in many different ways. (…) please, keep reporting all issues you see, because every single blip counts!
https://www.reddit.com/r/firefox/comments/1de7bu1/comment/l8ghtr2/
Yeah, and the main difference to me is that that’s not going to sway elections or disclose a journalist’s sources or expose a human rights activist or something like that.
As I understand it, the way it works is that the aggregate categories are defined beforehand, e.g. “these sites are part of the ‘animals’ category”. Then, if you visit any of those sites, your local install matches them against that list and shares only the aggregation outcome (i.e. “you visited an ‘animals’ site”), without having to share the specific site you viewed - which Mozilla thus can’t even know.
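A minimal sketch of how that local matching might look - the category list, site names, and function are made up for illustration; this is not Mozilla’s actual implementation:

```python
# Hypothetical sketch of client-side category matching (not Mozilla's code).
# The full site-to-category list ships to the client; only the matched
# category label - never the visited URL - would leave the device.

CATEGORIES = {  # predefined mapping, known in advance
    "cute-cats.example": "animals",
    "dog-pics.example": "animals",
    "stocks.example": "finance",
}

def report_for_visit(visited_site):
    """Return the aggregate signal to share, or None if the site is unlisted."""
    category = CATEGORIES.get(visited_site)
    if category is None:
        return None  # unlisted sites produce no report at all
    return {"category": category}  # note: no URL or site name included

print(report_for_visit("cute-cats.example"))   # {'category': 'animals'}
print(report_for_visit("secret-blog.example")) # None
```

The key property is that the sensitive input (which site you visited) only ever exists on the device; the server sees at most a coarse category label.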
Was also asked about and answered in the recent AMA on reddit:
It’s a bit of a stretch to turn “may also” into “main purpose is”, but you’re right - that shows that indeed it’s not a big leap to use it for advertising.
But no, as I understand it, this isn’t extracting sensitive data from users and then only keeping it in anonymised aggregate form - the sensitive data is handled on your device and never reaches Mozilla, and the anonymised aggregate form (i.e. the high-level category derived from that data) is the only thing that’s actually sent.
And again, it’s always been an ad platform, it’s still the only proven way to fund development.
I won’t comment on this acquisition, cause I have no idea what this company does.
Oh wow, am I dreaming? Is this someone on the internet saying they’re wrong? You’re a rare breed! ❤️
Ah right, we’re talking different definitions of “Firefox users”. I meant that they’re not collecting data on specific users, i.e. there’s nothing on Mozilla servers that says anything about me specifically. The post is talking about Firefox users as a collective, i.e. “this many Firefox users are searching for animals”. Which is something it’s done for ages, albeit not for what websites people are loading. (But it is known, for example, which menu items are most used.)
I’ll also note that that post is not about advertising but about what features to develop, but I’ll grant that it’s not a big leap to use it to serve more granular advertisements as well.
How to free the rest of the web from advertising is not Mozilla’s problem.
It kind of is, though - the reason Mozilla exists is to ensure the internet is a healthy global public resource.
Some of the many hundreds of millions of dollars they’re paid annually in excess of what it costs to maintain a web browser
AFAIK Mozilla nets about $500 million a year from Google being the default search engine, which is roughly the entire budget, and is lower than what Google and Apple spend to maintain their web browsers. So your numbers seem optimistic to me.
Trying to collect data about Firefox users in order to better target ads at them
I haven’t seen that happening, or at least, not “collect” in the sense of “Mozilla has data about Firefox users in order to better target ads at them”. Possibly that the user’s own local device has that data.
Again, Mozilla has always been an ad-funded operation. But also always without doing surveillance.
(I do 100% agree that it is a risky business to be in and that I’d hate to see it cross the line, but I’m withholding judgment until I actually see that happening.)
It’s hard to tell, as there are so many things that influence it. A huge factor is selection bias: only a small number of websites embed StatCounter, and that’s very unlikely to be a representative sample. I’d bet that the influence of that is orders of magnitude larger than that of user agent spoofing.