IngeniousRocks (They/She)

Don’t DM me without permission please

  • 0 Posts
  • 295 Comments
Joined 11 months ago
Cake day: December 7th, 2024


  • It’s a call to be present.

    There is nothing inherently wrong with wearing headphones on the train, but ask yourself why you’re doing it.

    If you put on headphones to keep people from talking to you, you’re making the choice to opt out of the human experience. Make that choice every day on a 45-minute commute and after only a week that’s 7.5 hours where you’ve opted out of chance encounters, conversation, possibly meeting a new friend or partner. It might not be a bad idea to make the choice NOT to disconnect; actively choosing to engage with the world around us makes a huge difference in how we perceive it, and how it perceives us.

    An experiment I’d suggest, if you’re the type to default to using your phone as an idle activity:

    Next time you’re idle and get the urge to pull out your phone, look around instead and find the most interesting thing you can see. Why is it interesting? Is there anything abnormal about it? Is its place significant? Note it in your mind and have a conversation with a coworker about it later. Then take note: how did that pointless conversation make me feel?

    Being present by choice, especially if done often, will create chances to engage with the world and its inhabitants.

    The other day someone told me life was boring. Put the phone down, look beyond the two-meter cone you can see around your phone, and you’ll find the world has a lot of engagement to offer.



  • When/If you do, an RTX 3070 LHR (about $300 new) is just about the BARE MINIMUM for GPU inferencing. It’s what I use; it gets the job done, but I often find context limits too small to be usable with larger models.

    If you wanna go team red, Vulkan should still work for inferencing, and you have access to options with significantly more VRAM, letting you use larger models more effectively. I’m not sure about speed though; I haven’t personally used AMD’s GPUs since around 2015.
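
    To put numbers on the VRAM squeeze: you need room for the quantized weights plus a KV cache that grows with context length. Here’s a back-of-the-envelope sketch (assuming 4-bit weights, an fp16 KV cache, full multi-head attention, and ignoring runtime overhead; real figures vary by runtime, quantization, and attention scheme):

    ```python
    # Rough VRAM estimate for GPU inferencing: quantized weights + KV cache.
    # All constants are illustrative assumptions, not figures from any runtime.

    def estimate_vram_gib(n_params_b, n_layers, hidden_dim, n_ctx,
                          bytes_per_weight=0.5, kv_bytes=2):
        """Very rough VRAM estimate (GiB) for a dense transformer."""
        weights = n_params_b * 1e9 * bytes_per_weight             # ~4-bit weights
        kv_cache = 2 * n_layers * hidden_dim * n_ctx * kv_bytes   # K and V, fp16
        return (weights + kv_cache) / 1024**3

    # Illustrative ~7B model (32 layers, 4096 hidden dim) at 8k context:
    print(f"{estimate_vram_gib(7, 32, 4096, 8192):.1f} GiB")  # ~7.3 GiB
    ```

    That lands around 7.3 GiB before compute buffers and other overhead, which is why an 8 GiB card like the 3070 runs out of headroom so quickly once you raise the context length or the model size.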



  • No no no that’s not what I meant. I ABSOLUTELY see what you mean though.

    I was speaking more to the idea that people with similar ideas tend to congregate, but when you’ve got places as different as Florida and California, for example, there is bound to be significant infighting. That’s like the UK sharing a tent with Iraq. They’re separated by like 2500 miles; these are peoples with different priorities, different lifestyles, different cultures. It’s not to say they can’t get along, but it should be under a federation more like the EU, a coalition of independent smaller nations, NOT directly under the same national flag and forced to live under the same code of laws.

    Like, I get that this is skating the line of being xenophobic, but my intention isn’t to prevent people from coming into a space, it is to prevent the space from expanding to contain people who don’t want to be in it.

    Edit: my distance was wrong, it’s still the same from place to place though.


  • If you’re planning on using LLMs for coding advice, may I recommend self-hosting a model and adding the documentation and repositories as context?

    I use a 1.5B Qwen model (mega dumb), but with no context limit I can attach the documentation for the language I’m using and the files from the repo I’m working in (always a local repo in my case). I can usually explain what I’m doing, what I’m trying to accomplish, and what I’ve tried, and the LLM will generate snippets that at the very least point me in the right direction, but more often than not solve the problem (after minor tweaks, because a dumb model isn’t so good at coding).
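
    A minimal sketch of that workflow, assuming the model is served behind a local OpenAI-compatible chat endpoint (llama.cpp server, Ollama, etc.); the URL, model name, and file paths below are placeholders, not anything specific to my setup:

    ```python
    # Sketch: stuff local documentation and repo files into the prompt of a
    # self-hosted model. Assumes an OpenAI-compatible chat endpoint; the URL,
    # model name, and paths are illustrative placeholders.
    import json
    import urllib.request
    from pathlib import Path

    def gather_context(paths):
        """Concatenate documentation and source files into one context blob."""
        parts = []
        for p in paths:
            p = Path(p)
            parts.append(f"### {p.name}\n{p.read_text(errors='ignore')}")
        return "\n\n".join(parts)

    def ask(question, context,
            url="http://localhost:8080/v1/chat/completions"):
        payload = {
            "model": "qwen2.5-coder-1.5b",  # placeholder: whatever you serve locally
            "messages": [
                {"role": "system",
                 "content": "Use the attached documentation and repo files:\n" + context},
                {"role": "user", "content": question},
            ],
        }
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        ctx = gather_context(["docs/language_reference.md", "src/main.py"])
        print(ask("Why does my parser loop forever on empty input?", ctx))
    ```

    The point is that the “context” is nothing magic: it’s just the documentation and repo files pasted in ahead of your question, so even a small model has the relevant material in front of it.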