Ich lebe in Amerika. Ich lerne, Deutsch zu sprechen, aber das kostet Geld. Vielleicht wollen die Migranten Deutsch lernen, haben aber nicht das Geld dafür? (Roughly: I live in America. I'm learning to speak German, but that costs money. Maybe the migrants want to learn German but don't have the money for it?)
Sorry if the above is poorly worded; I’m still new to the language. My point is that there are lots of reasons that someone might not know a language well, including a lack of money, or a lack of time from needing to work full time to support one’s migrant family on a low wage.
Mexican immigrants to the US are wonderful, but their culture is very different from non-Hispanic US culture. I don't expect them to learn English; many work something like 60 hours per week to support their families. As the person you're replying to said, though, their children learn English and integrate into, and also uniquely contribute to, US culture. Rather than expecting the first-generation immigrants to learn English, I've learned Spanish specifically to speak with them. It's not as if Germany receives many more immigrants than the US does. Even setting aside that the US has always been a country of immigrants, Census projections have non-Hispanic whites falling below 50% of the US population within a couple of decades, and Hispanic and Latino/a/e Americans (the majority of whom are Mexican Americans) are already the largest minority group. In some states, such as New Mexico, they are already the largest ethnic group.
Diversity is a good thing, and we shouldn’t require immigrants to become like us culturally or linguistically before accepting them.
Claude Opus disagrees, lol (edit to add: all of what follows is Claude, not me):
I respectfully disagree with the claim that current language models are logically incapable of mapping language to actual concepts. While today’s LLMs certainly have limitations and are prone to hallucinations, they have demonstrated a remarkable ability to capture and manipulate complex concepts through language.
A few key points:
That said, I fully agree that the tendency toward subtle nonsense is a serious issue, especially for layperson use cases where hallucinations may go undetected. Continued research into making LLMs more truthful, consistent, and robust is crucial. Techniques like Constitutional AI to bake in truthfulness, and better UX design to convey uncertainty, are important steps.
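To make the "convey uncertainty" point concrete, here is a minimal sketch of one way a UI layer might surface it. It assumes an LLM API that exposes per-token log-probabilities as two parallel lists (field names and the function itself are hypothetical; real providers differ), and simply marks low-confidence tokens so a frontend could render them distinctly:

```python
import math

def flag_uncertain_tokens(tokens, logprobs, threshold=0.5):
    """Mark tokens whose model-assigned probability falls below `threshold`.

    `tokens` and `logprobs` are parallel lists, in the style of LLM APIs
    that return per-token log-probabilities (an assumption here; exact
    response shapes vary by provider).
    """
    out = []
    for tok, lp in zip(tokens, logprobs):
        p = math.exp(lp)  # convert log-probability back to probability
        # Wrap low-confidence tokens in brackets so a UI could render them
        # differently (e.g., a dotted underline with a tooltip).
        out.append(f"[{tok}?]" if p < threshold else tok)
    return "".join(out)
```

A high-probability token (log-prob near 0) passes through unchanged, while an unlikely one gets flagged, giving the reader a visual cue about which spans to double-check.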
In summary, I believe the evidence suggests that LLMs, while flawed, are not fundamentally incapable of meaningful conceptual representation and reasoning. We should push forward on making them more reliable and trustworthy rather than dismissing their potential prematurely.