Didn’t pursue codification into law in his first hundred days.
As (again) a non-American, doesn’t that require both chambers to support the legislation?
I’m not even American, so I’m not sure what you are on about right now. All I asked was how Roe v. Wade being repealed was Biden’s fault, and the answer apparently is that he did not pack the court.
How genocide fits into Roe v. Wade, or how calling me names somehow helps, I’m still unsure of.
Never let it be forgotten that Roe v. Wade was struck down during a Democrat administration
Ok, but what does that have to do with said Democrat administration? What say did they have in the matter? What could they have done to change the outcome?
They can, and are being made. E.g. the state of accessibility on GNOME.
Yes, and what I’m saying is that it would be expensive compared to not having to do it.
Doing OCR in a very specific format, in a small specific area, using a set of only 9 characters, and having a list of all possible results, is not really the same problem at all.
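To illustrate why this constrained case is so much easier than general OCR, here is a hypothetical sketch (the alphabet and whitelist are made up): with only a handful of possible characters and a complete list of valid results, recognition reduces to snapping the raw read-out to the nearest entry in the whitelist.

```python
# Hypothetical sketch: "OCR" over a tiny fixed alphabet where every valid
# result is known in advance. Instead of general character recognition,
# we just pick the whitelist entry closest to the (possibly noisy) read-out.

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(raw: str, whitelist: list[str]) -> str:
    """Snap a noisy read-out to the nearest known-valid result."""
    return min(whitelist, key=lambda w: edit_distance(raw, w))

# Example: results can only be one of these fixed strings.
VALID = ["123-456", "123-457", "223-456"]
print(correct("128-456", VALID))  # noisy '8' snaps back to "123-456"
```

Because every output must come from the whitelist, even a fairly bad raw read usually resolves to the correct answer, which is nothing like the open-ended recognition problem.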
How many billion times do you generally do that, and how is battery life after?
Cryptographically signed documents and Matrix?
I think you are replying to the wrong person?
I did not say it helps with accuracy. I did not say LLMs will get better. I did not even say we should use LLMs.
But even if I did, none of your points are relevant for the Firefox use case.
Wikipedia is no less reliable than other content. There’s even academic research about it (no, I will not dig for sources now, so feel free to not believe it). But factual correctness only matters for models that deal with facts: for e.g. a translation model it does not matter.
Reddit has a massive amount of user-generated content it owns, e.g. comments. Again, the factual correctness only matters in some contexts, not all.
I’m not sure why you keep mentioning LLMs since that is not what is being discussed. Firefox has no plans to use some LLM to generate content where facts play an important role.
At horrendous expense, yes. Using it for OCR makes little sense. And compared to just sending the text directly, even OCR is expensive.
The issue is not sending, it is receiving. With a fax you need to do some OCR to extract the text, which you then can feed into e.g. an AI.
What do you mean “full set if data”?
Obviously you can not train on 100% of material ever created, so you pick a subset. There is a lot of permissively licensed content (e.g. Wikipedia) and content you can license (e.g. Reddit). While not sufficient for an advanced LLM, it certainly is for smaller models that do not need wide knowledge.
I’d say the main differences are at least:
Feel free to assume that, but don’t claim an assumption as a fact.
You recommended using native package managers. How many of them have been audited?
You know what else we shouldn’t assume? That it doesn’t have a security feature. And we additionally then shouldn’t go around posting that incorrect assumption as if it were a fact. You know, like you did.
There is no general copyright issue with AIs. It completely depends on the training material (if it is an issue even then), so it’s not possible to make blanket statements like that. Banning a technology because a particular implementation is problematic makes no sense.
I’m confused why you think it would be anything else, and why you are so dead set on this. Repos include a signing key. There is an option to skip signature checking. And you think that signature checking is not used during downloads, despite this?
Ok, here are a few issues related to signatures being checked by default, when downloading:
https://github.com/flatpak/flatpak/issues/4836
https://github.com/flatpak/flatpak/issues/5657
https://github.com/flatpak/flatpak/issues/3769
https://github.com/flatpak/flatpak/issues/5246
https://askubuntu.com/questions/1433512/flatpak-cant-check-signature-public-key-not-found
https://stackoverflow.com/questions/70839691/flatpak-not-working-apparently-gpg-issue
Flatpak repos are signed and the signature is checked when downloading.
It’s OK to be wrong. Dying on this hill seems pretty weird to me.
From the page:
It is recommended that OSTree repositories are verified using GPG whenever they are used. However, if you want to disable GPG verification, the --no-gpg-verify option can be used when a remote is added.
That is talking about downloading as well. Yes, you can turn it off, but you can usually do the same with native package managers too, e.g. pacman: https://wiki.archlinux.org/title/Pacman/Package_signing
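For comparison, here is a sketch of where those defaults live on both sides (the remote name and URL below are placeholders, and the pacman values shown are the common Arch defaults, which may differ per distro):

```shell
# Flatpak: GPG verification is on by default when adding a remote;
# per the OSTree docs quoted above, it can be explicitly disabled per remote:
flatpak remote-add --no-gpg-verify example-remote https://example.com/repo

# pacman: signature checking is configured in /etc/pacman.conf.
# The usual default requires valid signatures for packages:
#   SigLevel = Required DatabaseOptional
# Setting it to "Never" disables checking entirely, analogous to
# --no-gpg-verify above:
#   SigLevel = Never
```

In both cases verification is the default and turning it off is an explicit, deliberate opt-out.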
But isn’t it obvious that if a presidential candidate promises some legislation, that it is contingent on the legislative branch?