"Buy Me A Coffee"

  • 3 Posts
  • 40 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Yes it would. In my case though I know all of the users that should have remote access and I’m more concerned about unauthorized access than ease of use.

    If I wanted to host a website for the general public to use though, I’d buy a VPS and host it there. Then use SSH with private key authentication for remote management. This way, again, if someone hacks that server they can’t get access to my home LAN.
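
    For what it’s worth, a minimal sketch of the relevant sshd_config settings on such a VPS (illustrative only, not a complete hardening guide):

        # /etc/ssh/sshd_config on the VPS: keys only, no passwords
        PubkeyAuthentication yes
        PasswordAuthentication no
        PermitRootLogin prohibit-password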


  • Their setup sounds similar to mine. But no, only a single service is exposed to the internet: WireGuard.

    The idea is that you can have any number of servers running on your LAN, etc… but in order to access them remotely you first need to VPN into your home network. This way the only thing you need to worry about security-wise is WireGuard. If there’s a security hole / vulnerability in one of the services you’re running on your network or in nginx, etc… attackers would still need to get past WireGuard first before they could access your network.

    But here is exactly what I’ve done:

    1. Bought a domain so that I don’t have to remember my IP address.
    2. Set up DDNS so that the A record for my domain always points to my home IP.
    3. Ran a WireGuard server on my LAN.
    4. Port-forwarded the WireGuard port to the WireGuard server.
    5. Created client configs for all remote devices that should have access to my LAN.

    Now I can just turn on my phone’s VPN whenever I need to access any one of the services that would normally only be accessible from home. (A rough sketch of what a client config looks like is at the end of this comment.)

    P.s. there are additional steps I took to ensure that the VPN’s masquerade was disabled, that all VPN clients use my pihole, and that I can still get decent internet speeds while on the VPN. But that’s slightly beyond the original ask here.
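
    To make the client side a bit more concrete, here’s a rough sketch of what one of those client configs can look like. The keys, addresses, domain, and subnets are placeholders; AllowedIPs and DNS depend on your LAN layout and whether you want to route everything through the tunnel:

        [Interface]
        PrivateKey = <client private key>
        Address = 10.8.0.2/32            # this device's address inside the VPN subnet
        DNS = 10.8.0.1                   # e.g. point clients at the pihole

        [Peer]
        PublicKey = <server public key>
        Endpoint = vpn.example.com:51820           # the DDNS domain + forwarded port
        AllowedIPs = 192.168.1.0/24, 10.8.0.0/24   # only route home LAN + VPN traffic
        PersistentKeepalive = 25

    Keeping AllowedIPs limited to the LAN/VPN subnets is one way to keep regular internet traffic off the tunnel, which is part of how you keep decent speeds while the VPN is on.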



  • Correct. I can only provide links to posts that are on your selected home instance. Eventually I’ll change this, but for now you’ll get a 404 page for links that aren’t on your home instance. But see my P.s. below.

    P.s. there have been changes to the Lemmy API that have prevented me from getting updates for about a month now. So most of the results you’re seeing are from old posts only. Until I can rebuild the crawler or find a new API there won’t be any new content.


  • This is the same reason I had to turn off my search engine’s crawler.

    There were changes made to the API to ignore any page > 99. So if you ask for page 100 or page 1_000_000_000 you get the first page again. This would cause my crawler to never finish fetching “new” posts.

    lemm.ee on the other hand made a similar change, but anything over 99 returns an empty response. lemm.ee also flat-out ignores sort=Old, always returning an empty array.

    Both of these servers did it for, I assume, the same reason: using a high page number significantly increases the response time. It used to be (before they blocked pages over 99) that responses could take 8-10 seconds or more! But asking for a low page number would return in 300ms or less. Since it’s a lot harder to optimize the existing queries, and maybe not possible, the problematic APIs were just disabled for now.
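
    If anyone wants to work around this in their own crawler, the general pattern is to stop paging as soon as the server starts repeating itself or comes back empty, instead of trusting the page counter. A rough Python sketch (the endpoint, params, and field names here are illustrative, not necessarily what my crawler uses):

        import requests

        def crawl_posts(base_url):
            """Fetch pages of posts until the server repeats itself or returns nothing."""
            seen_first_ids = set()
            page = 1
            while True:
                resp = requests.get(
                    f"{base_url}/api/v3/post/list",
                    params={"page": page, "limit": 50, "sort": "New"},
                    timeout=30,
                )
                posts = resp.json().get("posts", [])
                if not posts:
                    break  # lemm.ee-style behaviour: empty response past the cap
                first_id = posts[0]["post"]["id"]
                if first_id in seen_first_ids:
                    break  # other behaviour: pages past 99 wrap back to page 1
                seen_first_ids.add(first_id)
                yield from posts
                page += 1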



  • Yep that’s the new idea. The sad part is that with this method there’s no way to get historical data, only new posts. So if a server goes down, gets DDoSed, etc… I’ll lose posts forever.

    Also building an ActivityPub implementation from scratch isn’t trivial either. So that’ll take some time.

    I’ve got a few other ideas I’m playing with as well. Like just assuming that internal post IDs are all sequential and literally fetching them one by one. Or maybe some combination of both?
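
    A very rough sketch of that sequential-ID idea, assuming a /api/v3/post?id= style endpoint and reasonably dense IDs (both of which are assumptions at this point):

        import requests

        def fetch_by_id(base_url, start_id, max_misses=100):
            """Walk post IDs one at a time, stopping after a long run of misses."""
            post_id, misses = start_id, 0
            while misses < max_misses:
                resp = requests.get(f"{base_url}/api/v3/post", params={"id": post_id}, timeout=30)
                if resp.ok:
                    misses = 0
                    yield resp.json()
                else:
                    misses += 1  # deleted, removed, or never existed
                post_id += 1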



  • I’m also running Ubuntu as my main machine at home. (I have a Mac and do Android development for my day job).

    But at home, I do a lot of website and backend dev.

    1. Code in VSCode
    2. Build using docker buildx
    3. Test using a local container on my machine
    4. Upload the tested code to a feature branch on git (self-hosted server)
    5. Download that same feature branch on a RaspberryPi for QA testing.
    6. Merge that same code to develop.
       6a. That kicks off a CI build that deploys a set of docker images to DockerHub.
    7. Merge that to main/master.
    8. That kicks off another CI build.
    9. SSH into my prod machine and run docker compose up -d (rough sketch of steps 2, 3 and 9 below)
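
    Roughly what steps 2, 3 and 9 look like on the command line (image names, tags and ports are placeholders):

        # 2. build the image locally (buildx can also cross-build for the Pi with --platform)
        docker buildx build -t myuser/myapp:dev --load .

        # 3. run the freshly built image for a quick local test
        docker run --rm -p 8080:8080 myuser/myapp:dev

        # 9. on the prod box, pull whatever CI pushed to DockerHub and restart
        docker compose pull
        docker compose up -d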

  • That looks like 8.8.8.8 actually responded. The ::1 is IPv6’s localhost, which seems odd. As for the wrong IPv4 I’m not sure.

    I normally see something like “requested 8.8.8.8 but 1.2.3.4 responded” if the router was forcing traffic to its own DNS servers.

    You can also specify the DNS server to use with nslookup, like: nslookup www.google.com 1.1.1.1. Then you can see if you get any different answers from there. But what you posted doesn’t seem out of the ordinary other than the ::1.

    Edit: just for shits and giggles, also try nslookup xx.xx.xx.xx where xx.xx… is the wrong IP from the other side of the world, and see what domain it returns.
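
    Concretely, something like this (substitute your own addresses):

        # query a specific resolver directly instead of whatever the router hands out
        nslookup www.google.com 1.1.1.1
        nslookup www.google.com 8.8.8.8

        # reverse lookup: swap in the unexpected address you got back
        nslookup xx.xx.xx.xx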


  • Another thing that can be happening is that the router or firewall is redirecting all port 53 traffic to its internal DNS servers. (I do the same thing at home to prevent certain devices from ignoring my router’s DNS settings cough Android cough)

    One way you can check for this is to run “nslookup some.domain” from a terminal and see where the response comes from.
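
    On a Linux-based router that kind of redirect is typically a NAT rule along these lines (the interface name and resolver address are placeholders; nftables or a vendor UI would express the same thing differently):

        # redirect all outbound DNS from the LAN to the local resolver (e.g. a pihole)
        iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j DNAT --to-destination 192.168.1.53
        iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j DNAT --to-destination 192.168.1.53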



  • There is a public API now. While I won’t support sorting, you can process and do what you will with the results as-is. Currently I only support Posts and Communities.

    When you search for posts you’re just matching against the title or body. For communities it’s searching the posts within that community.

    There are also more filters now: instance/community/author/since/until, plus a safe-search option.

    So I’m not sure how close this comes to your idea but I thought I’d share.
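
    As a rough illustration of how those filters might combine in a request (the endpoint path and exact parameter names here are made up for the example; check the API docs for the real ones):

        # hypothetical request: posts mentioning "wireguard" in one community on one instance,
        # within a date range, with safe-search on
        curl "https://<search-host>/api/posts?q=wireguard&instance=lemmy.world&community=selfhosted&since=2023-06-01&until=2023-12-31&safe_search=true"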