• 2 Posts
  • 37 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • Luckily, our e2e tests are pretty stable. And unfortunately, we are not given the time to write integration tests as you describe. The good thing would be that with these mocks we would then also be able to load test single services instead of the whole product (see the rough sketch at the end of this comment).

    We merge multiple times a day and run only those e2e tests we think are relevant. Of course, this is not optimal, and it is not too rare that one of the teams merges a regression; some teams are more talented at that than others.

    You see, we have issues and we realize we have them. Our management just thinks these are not important enough to spend time on writing integration tests. I think money and developer time are two of the reasons, but the lack of feature documentation, the lack of experts for parts of the codebase (some have already left for other employers), and the amount of spaghetti code and infrastructure we have are other important reasons.
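
    To illustrate what I mean by load testing a single service: below is a minimal sketch in Python of a canned-response stub that stands in for a downstream dependency. This is not our actual setup; the port, payload, and names are invented for the example.

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Hypothetical payload the real dependency would normally return.
        CANNED_INVENTORY = {"sku": "demo-123", "in_stock": True}

        class StubHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Always answer with the same fixed response, no real backend involved.
                body = json.dumps(CANNED_INVENTORY).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

            def log_message(self, *args):
                pass  # keep load-test output quiet

        if __name__ == "__main__":
            # Point the service under test at http://localhost:8081 instead of the
            # real dependency, then drive load against that one service alone.
            HTTPServer(("localhost", 8081), StubHandler).serve_forever()

    With the dependency stubbed out like this, a load generator only exercises the one service and its own code paths instead of the whole product.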

  • I think I was 11 or 12 when I started playing Tibia (a very early MMORPG). I really enjoyed it. At some point I found out that somebody had leaked the source code. You could host your own Tibia server. You could create new map segments or introduce new quests via Lua scripting. There was a huge community around “Open Tibia”: hundreds of servers with thousands of players. First I got into mapping, then I got into scripting, and I loved it.

  • Following up on the other comment.

    The issue is that widely available speech models do not yet offer the quality that is technically possible. That is probably why you think we’re not there yet. But we are.

    Oh, I’m looking forward to just translating a whole audiobook into my native language, in any speaking style I like.

    Okay, perhaps we would still have difficulties with made-up fantasy words or words from foreign languages with little training data.

    Mind you, this is already possible. It’s just that I don’t have access to this technology. I sincerely hope that there will be no gatekeeping of the training data, so that we can train such models ourselves.