Interests: programming, video games, anime, music composition

I used to be on kbin as e0qdk@kbin.social before it broke down.

  • 3 Posts
  • 397 Comments
Joined 2 years ago
Cake day: November 27th, 2023


  • Better out-of-the-box text-to-speech voices would be very welcome. The defaults are pretty painfully robotic… (Try spd-say "Hello world" if you don’t know what I mean.)

    Trivial-to-use on-device dictation software could also be useful.

    The capabilities of models like qwen3.6 to do things like on-device image analysis are pretty incredible if you have hardware capable of running it – I’ve run it on a Framework Desktop – but I have no desire to expose my systems directly to AI agents. That’s just asking for trouble… If an AI agent can fuck up, it will fuck up eventually, and I’d rather it not have the ability to delete my files when it does.



  • Based on what I see on my Steam Deck (not sure on Bazzite specifics), it looks like the WINE prefixes are stored in ~/.local/share/Steam/steamapps/compatdata

    I think the number in each folder name is the game’s steam ID, which makes it a bit cryptic. You can probably find that ID by running grep -i "name of game" *.acf in the steamapps folder. The files are named like appmanifest_STEAMID.acf, if I understand how this works correctly. You might also just be able to search for the file you’re looking for directly though if you know what the name is inside the prefix and it’s distinct enough.
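    The appmanifest lookup described above could also be scripted. A minimal sketch – assuming the standard `steamapps` layout and the `"name" "..."` key/value format that appmanifest files use; verify against your own files:

    ```python
    import glob
    import os
    import re

    def find_steam_appids(steamapps_dir, name_fragment):
        """Scan appmanifest_*.acf files for games whose "name" field
        contains name_fragment (case-insensitive); return {appid: name}."""
        matches = {}
        for path in glob.glob(os.path.join(steamapps_dir, "appmanifest_*.acf")):
            appid = re.search(r"appmanifest_(\d+)\.acf$", path).group(1)
            with open(path, encoding="utf-8", errors="replace") as f:
                text = f.read()
            m = re.search(r'"name"\s+"([^"]*)"', text)
            if m and name_fragment.lower() in m.group(1).lower():
                matches[appid] = m.group(1)
        return matches
    ```

    The returned appid is the folder name to look for under `compatdata`.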







  • e0qdk@reddthat.com to Games@lemmy.world · Driving game poll
    8 days ago

    I think I’d get bored pretty quickly just driving around with no goal – although, I did have a fair amount of fun driving around in Streets of SimCity as a kid and probably spent more time messing around with my own SC2K maps than I did with the actual game missions, so I could be wrong about how bored I’d get of a pure driving simulator…

    Regardless of whether I’d enjoy it, I have family members who I’m pretty sure would find it right up their alley – especially the classic car aspect.



  • I wrote some VBA for a job a long time ago. It was pretty good for making quick and dirty UI in Excel – like, click a cell and have it pop up a form you can interact with that will let you do data entry with fewer clicks/less typing than whatever Excel would’ve made you do if you had to do it naively.

    I used it for showing a list that could be filtered down by partially typing in text in one project. (A really basic autocomplete sort of thing, essentially.) For another project, I integrated IE and showed some complex data in the embedded browser with buttons (or maybe it was checkboxes? been too many years) in the form to quickly classify it and move on to the next entry without having to flip back and forth between multiple programs and manually open files. (Each entry corresponded to a row and widgets on the form updated values in various columns so I could go through all the data and fill out the full spreadsheet super fast.)

    Alternatively, write a script that checks your spreadsheet for errors. e.g. add a reference to a regex library and use it to verify that all the entries in a column match the data format you expect (like serial number patterns with hyphens in specific places).

    You can also ask your coworkers for something tedious they have to do a lot in Excel and see if you can find a way to make it less tedious.
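    As a rough illustration of the regex-validation idea above – sketched in Python rather than VBA for brevity, and with a made-up serial number format, since the original doesn’t specify one – the check boils down to running every cell value in a column against one pattern and reporting the rows that don’t match:

    ```python
    import re

    # Hypothetical serial format: two letters, four digits, two letters,
    # hyphen-separated, e.g. "AB-1234-XZ". Adjust to your real data.
    SERIAL = re.compile(r"^[A-Z]{2}-\d{4}-[A-Z]{2}$")

    def check_column(values):
        """Return (row_number, value) pairs for entries that don't match."""
        return [(i, v) for i, v in enumerate(values, start=1)
                if not SERIAL.fullmatch(v.strip())]
    ```

    In VBA the equivalent is adding a reference to "Microsoft VBScript Regular Expressions" and looping over the column with a `RegExp` object; the logic is the same.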



  • I tried throwing this image at qwen3.6 locally with the prompt “What is in this image?” and it spat out:

    Based on the visual evidence, this is a micrograph (a photograph taken through a microscope) of bacteria.

    Here is a breakdown of what you are seeing:

    • Organism Type: These are bacteria. You can tell by their small size relative to the scale bar in the bottom right corner (which represents 10 micrometers).
    • Stain: They are stained purple/blue, which indicates they are Gram-positive bacteria. In a Gram stain, Gram-positive bacteria retain the crystal violet dye and appear purple.
    • Shape: The individual bacterial cells are spherical or round, which classifies them as cocci (singular: coccus).
    • Arrangement: The most distinct feature is that the cocci are arranged in chains.

    Conclusion: This image shows Gram-positive cocci in chains, which is the characteristic appearance of bacteria from the genus Streptococcus (or possibly Enterococcus). Common examples include Streptococcus pyogenes (which causes strep throat).

    So you might be able to get further than you expect. You might try collecting some of your own sample data for things you understand (like particular bacteria you’re interested in) and providing those images along with a prompt that includes labels for your sample imagery. (Like “Image 1 is an example of (…whatever…). Image 2 is an example of (…something else…). Which (if any) of these are in Image 3?”)

    I don’t know how well it would work, but worth a try.

    The dream, for me, is to get it to understand the protein structure files and DNA sequence files

    Even then, much of what I’d want help with looking at proteins is how to write analyses of points in 3D space

    I tried throwing this image at qwen3.6 as well, and it spat this out (but took a REALLY long time to do it on my local hardware):

    Based on the visual evidence, this image is a 3D molecular visualization of a protein structure, likely created using software such as PyMOL, UCSF Chimera, or VESTA.

    Here is a breakdown of the specific components visible in the image:

    1. The Protein (Blue Ribbon):

      • The large, coiling blue structure is the protein backbone.
      • It is rendered as a “ribbon” or “cartoon” diagram, which is a standard way to visualize the secondary structure of proteins.
      • The coiled sections represent alpha-helices, which make up the majority of this protein’s structure. This “helix bundle” fold is characteristic of proteins in the globin family, such as myoglobin or hemoglobin, which are responsible for oxygen transport and storage.
    2. The Central Ligand/Co-factor (Middle Right):

      • Nestled within the protein fold is a complex molecule shown in a “stick” and “sphere” representation.
      • The Flat Ring: You can see a flat, ring-like structure with blue sticks (representing nitrogen atoms). This is characteristic of a porphyrin ring.
      • The Red Sphere: There is a large red sphere in the center of the ring. In this context, this almost certainly represents an Iron (Fe) atom.
      • Conclusion: This complex is a heme group (or heme prosthetic group). Heme is the molecule responsible for binding oxygen in hemoglobin and myoglobin, or for electron transfer in cytochromes. The long chains extending from the ring are likely the side chains of the heme molecule (propionate groups) or a bound drug/inhibitor interacting with the iron.
    3. The Ion (Bottom Right):

      • There is a small molecule floating near the bottom right of the protein.
      • It has a yellow center with four red arms arranged in a cross/tetrahedral shape.
      • In chemical coloring (CPK coloring), yellow represents Sulfur and red represents Oxygen. This indicates a Sulfate ion ($SO_4^{2-}$). Sulfate ions are common in crystal structures of proteins and are often used to stabilize the structure or mimic phosphate groups.

    Summary: You are looking at a structural biology model of a protein (likely a globin like myoglobin) containing a heme cofactor (the molecule with the iron) and a sulfate ion. This type of image is used by scientists to understand how the protein works at the atomic level.

    You might be able to get it to do something useful with analysis of protein renderings, but I think you’ll need some creativity with the prompts to stop it from overthinking things… It was able to recognize this as myoglobin in the “thinking” stage quickly enough, but then kept second guessing itself about various aspects for a long time. I also tried throwing it at gemma4 but it didn’t really give me much besides recognizing it as some sort of protein structure.

    hook it up to some lab robotics and automate experiments

    You might find this project and associated HN post interesting – it’s basically someone doing that with electronics analysis instead of biology, IIUC. No idea how well it works, but interesting to look at and think about.

    Maybe there’s a coding community on lemmy that’s a decent place to talk shop on how to build these scripts up and what local models are good at what?

    There’s !localllama@sh.itjust.works and probably some communities on lemmy.dbzer0.com as well – they’re not super-active though (at least from what’s made it to my attention browsing by all).


  • Been there. Got so bad that at one point I was seriously concerned I’d get fired over lack of productivity due to anxiety and depression. I was already in therapy for the anxiety (and some traumatic events that were then recent in my life) but had been resisting pharmaceuticals.

    Two things helped me. One, I started keeping a weekly list of the hours I’d been mentally “on the clock” at least trying to work – whether or not I actually got anything useful done – along with a note of what I was trying to work on. When I got to 40 hours, I was done for the week. Off the clock. Unless I was doing something that I really wanted to get done or some emergency that really needed my attention came up, I was just done. Could do whatever the fuck I wanted with my time with no guilt. (If I did get called for a work emergency or was pursuing something particularly interesting, I logged those hours and deducted them from the next week’s 40 hours.) I used a text file for this and wrote a little python script to tally my hours automatically – do what works for you. This alone helped me a lot.
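    For what it’s worth, the tally script really can be tiny. A sketch with a hypothetical log format (one entry per line: date, hours, note – the original doesn’t show its actual format, so adapt to taste):

    ```python
    import sys

    def tally_hours(lines):
        """Sum the hours column from a simple work log where each entry
        looks like: DATE HOURS NOTE (e.g. "2024-05-13 3.5 fixed parser").
        Blank lines and lines starting with '#' are skipped."""
        total = 0.0
        for line in lines:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 2)
            total += float(parts[1])
        return total

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            print(f"{tally_hours(f):.1f} / 40 hours")
    ```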

    The second thing I did was I got on an SSRI. That reduced the amount of chest pain and severity of anxiety attacks I was having. After they took effect, I’d say I had more of “Oh fuck it, I’ll just do it” attitude to things I’d put off from anxiety before. Didn’t solve everything, mind you, and I still have a hard time getting myself to go outside regularly, but it’s not the ordeal it was every time before, at least. It lowered the bar for doing things rather than removing it completely, basically.

    If your problems persist, seek a therapist. A licensed, human therapist – not an LLM! – and tell them what’s going on. My therapist strongly encouraged me to try medication after we’d been talking for a while, and eventually I took his advice and got a psychiatrist who prescribed the SSRIs.

    Best of luck.


  • non-text based problems

    I don’t know enough about the specifics of what you’re doing to really give detailed suggestions, but I have been surprised by models like qwen3.5/3.6 giving reasonable results to questions about images – like answering questions based on comparing a few images. For example:

    • Which of these images were taken from the same place?
    • Which of these images were taken in the AM and which in the PM?
    • Is the third image more like the first or the second? Answer “1” or “2” only indicating which of the first two images it is most similar to.
    • What sort of tags would you attach to these images? Provide a list of tags as JSON as the only response.

    Those prompts actually worked for imagery from camera systems I have to deal with for my job – which, frankly, shocked me. The AM/PM one in particular read timestamps in the image (i.e. did OCR automatically) and converted from a 24 hour clock to answer the question. I’ve implemented that in scripts with older computer vision tools (like tesseract) that needed a lot of hand holding; the fact that current models can just do it is sort of mind blowing to me…
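    If you want to script prompts like these against a locally hosted model, one common setup is a local Ollama-style server, whose `/api/generate` endpoint accepts base64-encoded images alongside the prompt. A sketch that just builds the request payload (endpoint shape assumed from Ollama’s API; check your own runner’s docs before relying on it):

    ```python
    import base64
    import json

    def build_vision_request(model, prompt, image_paths):
        """Build a JSON payload in the style of a local Ollama server's
        /api/generate endpoint, which takes base64-encoded images.
        POST it to http://localhost:11434/api/generate (default port)."""
        images = []
        for path in image_paths:
            with open(path, "rb") as f:
                images.append(base64.b64encode(f.read()).decode("ascii"))
        return json.dumps({
            "model": model,
            "prompt": prompt,
            "images": images,
            "stream": False,
        })
    ```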



  • My limited understanding of “agentic AI” is that it’s LLMs prompted by a script repeatedly in a loop. e.g. you give it an initial prompt, and the LLM spits out a response indicating what it thinks should happen next (like running a program). The script looks at the LLM’s output, does what was asked if it can (or tells the LLM that it can’t/won’t do that, if so programmed, I assume), then feeds the result back in as the next prompt. That repeats until the LLM indicates in one of its responses that the complex task is done – or indefinitely, if the prompt is the sort that will never terminate.

    You can limit what the overall system is capable of by careful design of the control script – just using LLMs to bridge the gaps for tasks that are hard/impossible to program traditionally. For controlling a robot, I’d guess that you’d want to have actions for things like taking a picture with a camera (or reading other sensors) that the LLM can analyze, repositioning arms, controlling motors, etc?
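    The loop-plus-whitelist idea above fits in a few lines. A sketch where the action names are hypothetical and `llm` is any callable standing in for a real model call:

    ```python
    ALLOWED = {"read_sensor", "move_arm"}  # hypothetical robot actions

    def execute(action, arg):
        # Stand-in for real hardware/tool calls.
        return f"ok({arg})"

    def run_agent(llm, max_steps=20):
        """Minimal agent loop: feed the last result back as the next
        prompt, refuse actions not on the whitelist, and stop when the
        model replies "done ..." (or when the step cap is hit).
        The model is expected to reply with lines like "ACTION arg"."""
        prompt = "Begin the task."
        for _ in range(max_steps):
            reply = llm(prompt)
            action, _, arg = reply.partition(" ")
            if action == "done":
                return arg
            if action not in ALLOWED:
                prompt = f"Refused: {action} is not an allowed action."
                continue
            prompt = f"Result of {action}: {execute(action, arg)}"
        return None
    ```

    The safety property lives entirely in the control script: the model can ask for anything, but only whitelisted actions ever reach `execute`.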

    If I knew how to set up a robotic arm that could safely turn the pages of a traditional book, then combining it with a camera or flatbed scanner, a couple different vision models + traditional OCR (to sanity check each other), and wiring the whole thing up with a custom script to make really high-quality ebooks out of the books I own that no one has bothered to digitize would be neat… Maybe some day.

    You might get more informed responses on db0 or another site; most of Lemmy is straight up AI-hostile. I have mixed feelings on it personally – I really don’t like the AI companies DDOSing the internet for profit while driving up the cost of electronics (etc), but writing off local AI tech entirely also seems like throwing out the baby with the bathwater to me…



  • We’re basically out of IPv4 addresses. There are only 32 bits dedicated to IP addresses in IPv4 – which corresponds to about 4 billion possible IP addresses – and we’ve more or less assigned them all. You can’t add more computers to the internet without weird hacks (like NAT, which shares one public address among many machines).

    IPv6 quadruples the number of bits used for addresses (i.e. it’s got 128-bit IP addresses) – every person on Earth could have an IPv4 internet’s worth of IP addresses and you still wouldn’t be anywhere close to exhausting the address space.
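    The arithmetic behind that claim, if you want to see the numbers:

    ```python
    # Address-space arithmetic: 32-bit IPv4 vs 128-bit IPv6.
    ipv4 = 2**32          # ~4.3 billion addresses total
    ipv6 = 2**128
    people = 8 * 10**9    # rough world population

    per_person = ipv6 // people
    print(f"IPv4 total: {ipv4:,}")
    print(f"IPv6 addresses per person: {per_person:.3e}")
    print(f"...which is about {per_person // ipv4:,} entire IPv4 internets each")
    ```

    Quadrupling the bits doesn’t quadruple the addresses – every extra bit doubles them, so 96 extra bits is a factor of 2^96.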

    Unfortunately, IPv6 was not designed to be directly backwards compatible with IPv4 and it has a bunch of other changes besides just making the addresses bigger. So, even though it was introduced ~30 years ago, it’s still not supported everywhere…