Fully retired now and one of the things I’d like to do is get back into hobby programming through the exploration of new and new-to-me programming languages. Who knows, I might even write something useful someday!

  • 3 Posts
  • 74 Comments
Joined 1 year ago
Cake day: July 4th, 2023


  • That’s what I worked through this morning. I learned elsewhere in these comments that users have both names and IDs and that docker references IDs.

    I’ve changed ownership of the files and folders a few times. First to match the default setting in docker-compose.yaml, then as I tried different user IDs. Always the same message.

    I did additional research and found references to something known as “mounting volumes”, but have not yet had a chance to explore that angle further. It’s not mentioned in the GTS documentation that I can see, so I just assumed (I know…) that the .yaml file was taking care of it.

    At this point, I suspect that there is something else going on, possibly with ports. I had to do a bit of fiddling with ports to kill a bind error resulting from the fact that there is another service hooked up to ports 80 and 443. I’m only guessing, but maybe it’s unable to create the database because it needs to do so via those ports. That doesn’t sound quite right to me, but it’s not like I have any real clue!

    One thing I noticed is that GTS recommends docker-compose, so I installed it. That really blew up in my face, so I went back to docker compose, which I’ve used elsewhere.

    Research continues…
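
    For what it’s worth, here is the general shape of what “mounting volumes” means in a docker-compose.yaml, as a hedged sketch only — the service name, image tag, paths, port numbers, and UID 1000 are my own illustrative assumptions, not GTS’s actual defaults:

    ```yaml
    # Sketch of a compose file with a bind mount; names, paths, and IDs are illustrative.
    services:
      gotosocial:
        image: superseriousbusiness/gotosocial:latest
        user: "1000:1000"              # must match the owner of the host directory below
        ports:
          - "8080:8080"                # host:container; sidesteps the clash on 80/443
        volumes:
          - ./data:/gotosocial/storage # "mounting a volume": host path -> container path
    ```

    The key interaction is between `user:` and `volumes:`: the container process runs as that numeric UID, so the host directory has to be owned by the same UID (e.g. `chown -R 1000:1000 ./data`) or the service can’t create its database file there.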







  • That is actually my point. I may not have made it clear in this thread, but my claim is not that our brains behave like LLMs, but that they are LLMs.

    That is, our LLM research is not just emulating our mental processes, but showing us how they actually work.

    Most people think there is something magic in our thinking, that mind is separate from brain, that thinking is, in effect, supernatural. I’m making the claim that LLMs are actual demonstrations that thinking is nothing more than the statistical rearrangement of that which has been ingested through our senses, our interactions with the world, and our experience of what has and has not worked.

    Searle proposed a thought experiment called the “Chinese Room” in an attempt to discredit the idea that a machine could either think or understand. My contention is that our brains, being machines, are in fact just suitably sophisticated “Chinese Rooms”.


  • Thanks! I’ve been working on this idea for quite a while. I post summaries and random thoughts occasionally hoping to refine my thinking to the point at which I’ll feel comfortable writing a proper essay.

    I like the name you’ve given the overarching system. That’s been a bit of a struggle for me, so you’ve given me a better concept to work with. “Large Sensory Input Model” captures my thoughts better than my own “the brain is just a kind of LLM.” That its abbreviation “LSIM” also conjures connections to “simulation” is a bonus for me, because that also addresses my thoughts on how we understand some things and other people.

    There is a fairly old hypothesis that something called “Theory of Mind” is basically our brain modelling and simulating other brains as a way to understand and predict the behaviour of others. That has explanatory power: empathy, stereotypes, in/out groups, better accuracy with closer relationships, “living on” through powerful simulations of those closest to us who have died, etc.

    Thanks for the feedback!


  • Soon kids will start talking like LLMs.

    Always have, always will.

    My pet hypothesis is that our brains are, in effect, LLMs that are trained via input from our senses and by the output of the other LLMs (brains) in our environment.

    It explains why we so often get stuck in unproductive loops like flat Earth theories.

    It explains why new theories are treated as “hallucinations” regardless of their veracity (cf. Copernicus, Galileo, Bruno). It explains why certain “prompts” cause mass “hallucination” (Wakefield and anti-vaxxers). It explains why the vast majority of people spend the vast majority of their time just coasting on “local inputs” to “common sense” (personal models of the world that, in their simplicity, often have substantial overlap with others).

    It explains why we spend so much time on “prompt engineering” (propaganda, sound bites, just-so stories, PR “spin”, etc) and so little on “model development” (education and training). (And why so much “education” is more like prompt engineering than model development.)

    Finally, it explains why “scientific” methods of thinking are so rare, even among those who are actually good at it. To think scientifically requires not just the right training, but an actual change in the underlying model. One of the most egregious examples is Linus Pauling, winner of the Nobel Prize in chemistry and vitamin C wackadoodle.



  • I didn’t suggest otherwise. I was merely pointing at a couple of examples where some pretty smart, pretty experienced people used Go to successfully implement entire collections of algorithms in some very performance-sensitive systems. It’s just by coincidence that I chose those examples because that is where my study is right now. Ask me in a year and I might point to your project as an example when the next person is asking for similar advice.

    If Go isn’t going to be fast enough to perform your task, then you’re probably going to be sorely disappointed when you finally get the performance you’re after and then have to stick it at the end of a wire with all kinds of stuff between you and your end users:

    Operating systems, databases, hardware, virtual machines, containers, webservers, firewalls, routers, HTML/CSS/whatever, DNS, certificate authorities, more routers and firewalls, ISPs, modems, more routers and firewalls, WiFi connected machines of all kinds, and random browsers implementing any of several different rendering engines.

    Quite frankly I can’t imagine a language that won’t offer enough performance to meet your needs in that environment.


  • CSS also came along with the idea that HTML should focus on textual content while CSS handled the visual design.

    My biggest beef with CSS is that it’s on the wrong end of the wire. What ever happened to the idea that the client is in charge of rendering?

    Or maybe it’s that the clients have abdicated their responsibility: the browser included with OS/2 Warp had a settings page that let me set the display characteristics of every tag in the spec. Thus, every site looked approximately the same: my font, my sizes, my indents, my spacing, whether images displayed (or even downloaded, I think) and whether text split at an image or wrapped around it. And it’s not like I had to customize everything for each site: if you used a tag my browser recognized, my browser took over.
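
    That per-tag control can still be approximated today with a user stylesheet, which most browsers honour in some form. A minimal sketch — the selectors and values are my own illustration, not the OS/2 browser’s actual settings:

    ```css
    /* User stylesheet sketch: the client, not the site, decides rendering. */
    * { font-family: serif !important; }                             /* my font */
    p  { font-size: 16px !important; line-height: 1.5 !important; }  /* my sizes, my spacing */
    blockquote { margin-left: 2em !important; }                      /* my indents */
    img { display: none !important; }                                /* whether images display */
    ```

    The `!important` flags are what let the client’s rules win over the site’s author styles.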




  • That IT subject matter like cybersecurity and admin work is exactly the same as coding,

    I think this is the root cause of the absolute mess that is produced when the wrong people are in charge. I call it the “nerd equivalency” problem, the idea that you can just hire what are effectively random people with “IT” or “computer” in their background and get good results.

    From car software to government websites to IoT, there are too many people with often very good ideas, but with only money and authority, not the awareness that it takes a collection of specialists working in collaboration to actually do things right. They are further hampered by their own background in that “doing it right” is measurable only by some combination of quarterly financial results and the money flowing into their own pockets.


  • I’ve always thought the best way to kill a hobby was to turn it into a job.

    100%

    I tried turning my hobby of programming into my job. On the surface, I was reasonably successful, but the most enjoyable aspects of my hobby had to be set aside in favour of actual productivity.

    Worse, the fact that I actually got pleasure from my work left me open to exploitation. When I finally woke up to that, I ditched programming in favour of “just a job” that paid the bills and was about a million times happier as a result. It’s only recently, 15 years after leaving the field, that I find myself once again drawn back to programming.


  • In the spirit of “-10x is dragging everyone else down” I offer my take on +10x:

    It’s not about personal productivity. It’s about the collective productivity that comes from developing and implementing processes that take advantage of all levels of skill, from neophyte to master, in ways that foster the growth of others, both in skill and in their ability to mentor and guide in turn. The ultimate goal is the “creation” of more masters and “multipliers” while making room for those whose aptitudes, desires, and ambitions differ from your own.



  • But typically when a field becomes more affordable, it goes up in demand, not down, because the target audience that can afford the service grows exponentially.

    I’ve always been very up front with the fact that I could not have made a career out of programming without tools like Delphi and Visual Basic. I’m simply not productive enough to build useful UIs if I also have to transcribe my mental images into text.

    All of my employers and the vast majority of my clients were small businesses with fewer than 150 employees and most had fewer than a dozen employees. Not a one of them could afford a programmer who had to type everything out.

    If that’s what happens with AI tooling, then I’m all for it. There are still far too many small businesses, village administrators, and the like being left using general purpose office “productivity” software instead of something tailored to their actual needs.