Short version of the situation: I have an old site I frequent for user-written stories. The site is ancient (think early 2000s), and it has terrible tools for sorting and searching the stories. Half the time, stories disappear from author profiles. There are thousands of stories, and you can only sort by top, new, and 30-day top.

I’m in the process of programming a scraper tool so I can archive the stories and give myself a library to better find forgotten stories on the site. I’ll be storing tags, dates, authors, etc, as well as the full body of the text.

Concerning the data: there are a few thousand stories (ASCII only) with various data points for each, and the bodies of many stories run several pages long.

Currently, I’m using Python to compile the data and would like to know what storage solution is ideal for my situation. I have a little familiarity with SQL, JSON, and YAML, but not enough to know which might be best. I’m also open to any other solutions that work well with Python.
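
For context, here’s the rough shape of the record I’m compiling for each story (the field names below are just placeholders for illustration):

```python
# Rough shape of one scraped story record; field names are placeholders.
story = {
    "title": "Example Story",
    "author": "some_author",
    "date": "2004-06-15",              # date as shown on the site
    "tags": ["fantasy", "adventure"],  # often missing on the site
    "body": "Full story text...",      # ASCII, often several pages long
}
```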

  • towerful@programming.dev · 6 days ago

    I see no reason you can’t use YAML.
    YAML and JSON are essentially identical for basic purposes.
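
    For instance, here’s a minimal sketch using PyYAML, assuming one dict per story (swap in whatever fields your scraper actually collects):

    ```python
    import yaml  # PyYAML

    story = {
        "title": "Example Story",
        "author": "some_author",
        "tags": ["fantasy"],
        "body": "Full story text...",
    }

    # One human-readable file per story keeps things simple.
    with open("story_0001.yaml", "w") as f:
        yaml.safe_dump(story, f, default_flow_style=False)

    # Reading it back is symmetric.
    with open("story_0001.yaml") as f:
        loaded = yaml.safe_load(f)
    ```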

    Once the scraper has been confirmed working, are you going to be doing a lot of reading/editing of the raw data? If not, the format hardly matters; it might as well be a binary blob (which would still be a bad idea, since it couples the raw data to your specific implementation).

    • Bubs@lemm.ee (OP) · 6 days ago

      I’m not entirely sure yet, but probably yes to both. The story text itself will stay unchanged, but I’ll likely experiment with various ways to analyze the stories.

      The main idea I want to try is assigning stories “likely tags” based on the frequency of keywords. So “castle” and “sword” could indicate fantasy, while “robot” and “ship” could indicate sci-fi. A lot of stories are missing tags, so something like this would be helpful.
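
      Roughly, I’m imagining something like this (the keyword lists are made up and would need to be much larger in practice):

      ```python
      from collections import Counter
      import re

      # Made-up keyword lists; a real vocabulary would be much bigger.
      TAG_KEYWORDS = {
          "fantasy": {"castle", "sword", "dragon", "kingdom"},
          "sci-fi": {"robot", "ship", "laser", "planet"},
      }

      def likely_tags(body, min_hits=3):
          """Guess 'likely tags' from keyword frequency in the story body."""
          # Count every word in the story once, then score each tag by how
          # often its keywords appear.
          words = Counter(re.findall(r"[a-z']+", body.lower()))
          scores = {
              tag: sum(words[w] for w in keywords)
              for tag, keywords in TAG_KEYWORDS.items()
          }
          return [tag for tag, score in scores.items() if score >= min_hits]
      ```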