• 0 Posts
  • 943 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • We’re talking about fingerprinting stuff coming in via HDMI, not stuff being played by the “smart” part of the TV itself from some source.

    You would probably not need to actually sample images if it’s the TV’s own processor playing something from a source, because there are probably smarter approaches for most sources. For a TV channel, for example, you probably just need to know the tuner setting, the location and the local time, and can then get the data from the available Program Guide info (called the EPG, if I remember correctly).

    The problem is that anything might be coming over HDMI, and it arrives uncompressed, so if they want to figure out what it is, it’s a much bigger problem.

    Your approach does sound like it would work if the Smart TV was playing some compressed video file, though.

    Mind you, I too am just “thinking out loud” rather than actually knowing what they do (or what I’m talking about ;))


  • Well, that makes sense, but it might even be more processor-intensive unless they’re using an SoC that includes an NPU (neural processing unit) or similar.

    I doubt it’s a straightforward hash, because a hash database for video which includes all manner of small clips, and which somehow has to match content whilst missing over 90% of the frames (if the thing really is sampling at 2 fps, it only sees 2 frames out of every 25), would be huge.

    A rough calculation: a system storing hashes for groups of 13 consecutive frames (so that at least one frame is hit when sampling at 2 fps on a 25 fps stream), keeping just one such 13-frame block per minute with each frame hash in a 5-byte value (large enough for over a trillion distinct values), would fit the hashes for about 136k 2-hour movies in 1GB. So it would maybe be feasible if the system had 2GB+ of main memory, though even then I’m not so sure the CPU would be fast enough to search it every 500ms (although if the hashes are kept ordered by value in one long array with a matching array of clip IDs, it might be doable, since there are some pretty good algorithms for that).
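
    To give an idea of the lookup side of that, here’s a minimal sketch - purely my own illustration, assuming the sorted-array-plus-clip-ID layout described above, with the 5-byte hashes widened to 64-bit slots for convenience:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical fingerprint database: frame hashes sorted ascending,
       with a parallel array mapping each hash back to a clip ID. */
    typedef struct {
        const uint64_t *hashes;   /* 5-byte hashes stored in 64-bit slots, sorted */
        const uint32_t *clip_ids; /* clip_ids[i] is the clip for hashes[i] */
        size_t count;
    } hash_db;

    /* Binary search: returns the clip ID matching a sampled frame's hash,
       or -1 if there is no match. O(log n), so even hundreds of millions
       of entries take under ~30 comparisons, easy to do every 500ms. */
    int64_t lookup_clip(const hash_db *db, uint64_t frame_hash)
    {
        size_t lo = 0, hi = db->count;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (db->hashes[mid] < frame_hash)
                lo = mid + 1;
            else
                hi = mid;
        }
        if (lo < db->count && db->hashes[lo] == frame_hash)
            return (int64_t)db->clip_ids[lo];
        return -1;
    }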


  • Aceticon@lemmy.world to Funny@sh.itjust.works - Clever guy

    First a fair warning: I learned this stuff 3 decades ago and I’ve actually been working as a programmer since then. I do believe the example I’ll provide still applies up to a point, though CPUs often implement strategies to make this less of a problem.

    =====

    CPUs are internally like an assembly line or a processing pipeline, where the processing of an assembly instruction is broken down into a number of steps. A rough example (representative, but not exact for any specific CPU architecture) would be:

    • Step 1: fetch assembly instruction from memory
    • Step 2: fetch from memory the data that the instruction requires (if applicable).
    • Step 3: execute the arithmetic or logic operation (if applicable).
    • Step 4: evaluate conditions (if applicable)
    • Step 5: write results to memory (if applicable)

    Now, if the CPU waited for all the steps of one assembly opcode to be over before starting to process the next, that would be quite a waste, since most of the time the functionality in there would be available but sitting unused (in my example, the Arithmetic Logic Unit, which is what’s used in step #3, would sit idle while the other steps were being done).

    So what they did was have CPUs process multiple opcodes in parallel: in my example pipeline you would have one opcode at stage #1, another that has already done stage #1 and is at stage #2, and so on. Hence why I also called it an assembly line: at each step a “worker” does some work on the “product” and then passes it to the next “worker”, which does something else to it, and they’re all working at the same time, each doing their bit for a different assembly instruction.

    The problem with that technique is: what happens if you have an opcode which is a conditional jump (i.e. start processing from another point in memory if a condition holds), which is necessary to implement things like “for” or “while” loops, or to jump over a block of code when an “if” condition fails?

    Remember, in my example pipeline the point at which the CPU finally figures out whether it should jump or not is almost at the end of the pipeline (step #4). So everything before that in the pipeline might be the wrong assembly instructions being processed because, say, the CPU assumed “no jump” and kept picking up instructions from the memory positions right after that conditional jump, when it turns out it does have to jump and was supposed to be processing instructions from somewhere else in memory.

    The original, naive way to handle this problem was to not process any assembly instructions after a conditional jump opcode had been loaded in step #1, and to take that conditional jump through each step of the pipeline until the CPU figured out whether the jump should occur or not, at which point it would start loading opcodes from the correct memory position. This of course meant the CPU got a lot slower every time a conditional jump appeared.

    Later, the solution was speculative processing: the CPU tried to guess whether the condition would be true (i.e. jump) or false (no jump), and loaded and started processing the instructions from the memory position matching that assumption. If the guess turned out to be wrong, all the contents of the pipeline behind that conditional jump instruction were thrown out. This is part of the reason the pipeline is organised so that results only ever get written to memory at the last step - if it turns out the CPU was working on the wrong instructions, it just never does that last step for them. This is on average twice as fast as the naive solution (and better guessing makes it faster still), but it still slowed the CPU down every time a conditional jump appeared.
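
    To make the guessing part concrete, here’s a tiny illustration of my own (not taken from any specific CPU manual): the classic case where the quality of the guess matters is a data-dependent branch inside a hot loop.

    #include <stddef.h>
    #include <stdint.h>

    /* Sums only the "large" elements. The "if" below compiles to a conditional
       jump the CPU has to guess at: with sorted data the guess is almost always
       right and the pipeline stays full; with random data roughly half the
       guesses are wrong and the pipeline keeps getting flushed, which is
       exactly the slowdown described above. */
    uint64_t sum_large(const int32_t *data, size_t n)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (data[i] >= 128)   /* conditional jump, speculatively predicted */
                sum += (uint64_t)data[i];
        }
        return sum;
    }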

    Even later, the solution was to process both branches (i.e. “jump” and “no jump”) in parallel and then, once the condition had been evaluated, throw out the processing for the wrong branch and keep going with the other one. This solved the speed problem, but at the cost of having two of everything, plus it had other implications for things such as memory caching (which I’m not going to go into here, as that’s a whole other Rabbit Hole).

    Whilst I believe modern CPUs of the kind used in PCs don’t have this problem (and probably at least ARM7 and above don’t either), I’ve actually been doing some shader programming of late (both Compute and Graphics Shaders), and if I interpreted what I read correctly, a version of this kind of problem still affected GPUs not that long ago (probably because GPUs work by having massive numbers of processing units running in parallel, so by necessity each one is simple). Though I believe nowadays it’s not as inadvisable to use “if” when programming shaders as it used to be a few years ago.

    Anyways, from a programming point of view, this is the reason why C compilers have an optimization option for something called “loop unrolling”. If you have a “for” loop with a fixed number of iterations known at compile time - for example for(int i = 0; i < 5; i++){ /* do stuff */ } - then instead of generating a single block of assembly with the contents of the loop and a conditional jump at the end, the compiler will “unroll the loop” by generating the assembly for the body of the loop as many times as the loop would run. So in my example, the contents of that “for” loop would end up as 5 blocks of assembly, one after the other, the first for i=0, the next for i=1 and so on.
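
    Roughly speaking - and hand-written by me rather than actual compiler output - the transformation (e.g. what GCC’s -funroll-loops enables) turns the first function below into something like the second:

    /* do_stuff() is a hypothetical stand-in for the loop body. */
    void do_stuff(int i);

    void rolled(void)
    {
        /* As written: a conditional jump back to the top on every iteration. */
        for (int i = 0; i < 5; i++) {
            do_stuff(i);
        }
    }

    void unrolled(void)
    {
        /* What unrolling effectively produces: the body repeated 5 times,
           with no conditional jumps left for the CPU to guess at. */
        do_stuff(0);
        do_stuff(1);
        do_stuff(2);
        do_stuff(3);
        do_stuff(4);
    }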

    As I said, it’s been a long time since I learned this and I believe nowadays general CPUs implement strategies to make this a non-problem, but if you’re programming microcontrollers or doing stuff like Compute Shaders to run on GPUs, I believe it’s still the kind of thing you have to take into account if you want max performance.

    Edit: Just remembered that even the solution of executing both branches in parallel doesn’t solve everything. For example, what if you have TWO conditional jump instructions one after the other? Theoretically you’d need almost 4 of everything to handle parallel execution for that. How about 3 conditional jumps? “Is your nested for-loop screwing your performance? More news at 11!”. As I said, this kind of stuff is a bit of a Rabbit Hole.


  • Aceticon@lemmy.world to Funny@sh.itjust.works - Clever guy

    Well, I have an EE Degree specialized in Digital Systems - pretty much the opposite side of Electronic Engineering from the High Power side - and I would be almost as clueless as that guy when it comes to testing a 10,000V fence for power.

    On the other hand I do know a lot of interesting things about CPU design ;)


  • If I’m not mistaken, the buzz is because it’s AC, hence the buzz frequency is the same as the AC’s.

    Certainly it makes sense that the high voltage would be generated from mains power using a big fat transformer since that’s probably the simplest way to do it.



  • Aceticon@lemmy.world to Funny@sh.itjust.works - Clever guy

    When it comes to an engineer doing a dangerous job in a domain other than his or her own, I would say that all the engineer really knows is how badly things can get fucked up when someone tries to do expert stuff outside their own domain, because they’ve been in the position where they were the expert and some non-expert was saying things and trying stuff in their expert domain.

    After seeing others do it in one’s own expert domain one generally realizes that “maybe, just maybe, that’s exactly how I look outside my domain of expertise to the experts of that domain when I open my big fat mouth”.


  • Aceticon@lemmy.world to Funny@sh.itjust.works - Clever guy

    I’m genuinely curious where you got that from.

    I actually went and checked the minimum air gap needed to avoid arcing at 10,000V at standard sea-level air pressure, and it’s measured in millimeters.

    Further, is the voltage differential there between parallel conducting lines or is it between the lines and the ground?

    I’m really having trouble seeing how a dry stick would cause arcing between two of those lines short of bringing them closer than 4 mm in the first case, much less between one of the lines and the ground in the second case if it’s being held at chest level.
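
    (If I remember the rule of thumb correctly, dry air at sea level breaks down at very roughly 3 kV per millimeter, so a 10,000V differential would only arc across a gap on the order of 10 kV ÷ 3 kV/mm ≈ 3 mm - which fits the millimeters figure above.)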

    PS: Mind you, it does make sense with a stick which is not dry - since the water in it makes it conductive - but then the guy himself would be part of the conductive circuit, which kinda defeats the point of using a stick.


  • Well, as the guy falling from the top of the Empire State Building was overheard saying on his way down: “well, so far so good”.

    Or as the common caveat given to retail investors goes: past performance is no predictor of future results.

    “So far” proves nothing, because it may only be “so far” since the conditions for something different haven’t yet happened, or it simply hasn’t been in their best interest yet to act differently.

    If their intentions were really the purest, most honest and genuine of all, they could have placed themselves under a contractual obligation to follow through and put money aside for an “end of life plan” in a way that they can’t legally use it for other things, or even done like GOG and provided offline installers to those people who want them.

    Steam has chosen to keep its ability to claw back games in your library when it could have done otherwise, as demonstrated by GOG, which lets you download offline installers. No matter what they say, their choice to keep that option open says the very opposite.


  • To add to your point, it’s amazing that so many people are still mindless fanboys, even of Steam.

    Steam has restrictions on installing the games its customers supposedly own, even if it’s nothing more than “you can’t install it from a local copy of the installer and have to install it from the Steam servers” - it’s not full ownership if you can’t do what you want with it, when you want it, without the say-so of a 3rd party.

    That’s just how it is.

    Now, it’s perfectly fair if one says “yeah, but I totally trust them”, which IMHO is kinda naive in this day and age (personally, almost 4 decades of being a Techie and a gamer have taught me to distrust companies until there’s no way for them to dodge their promises, but that’s just me). It’s also fair to know the risks but still think it’s worth it to purchase from Steam for many games, and that the mere existence of Steam has allowed many games to exist that wouldn’t have existed otherwise (mainly Indie ones) - which is my own position, at least up to a point. But a whole different thing is the whole “I LoVe STeaM And tHeY CaN DO NotHInG wrONg” fanboyism.

    Sorry, but they have in place restrictions on game installation, and often on game playing, which from the point of view of Customers are not needed and serve no purpose (they’re not optional, not a choice for the customer, but imposed on customers) - hence they serve somebody other than the customer. It being a valid business model, and far too common in this day and age (hence people are used to it), doesn’t make those things “in the interest of Customers”, and their being (so far) less enshittified than other similar artificial restrictions on Customers out there doesn’t make them a good thing - only, so far, not as bad as the others.

    I mean, for fuck’s sake, this isn’t the lobby of an EA multiplayer game and we’re supposed to be mostly adults here on Lemmy: let’s think a bit like frigging adults rather than having knee-jerk pro-Steam reactions based on fucking brand loyalty like mindless pimply-faced teen fanboys. (Apologies to the handful of wise-beyond-their-years pimply-faced teens that might read this.)


  • It’s even more basic than that: if there’s no escrow with money for that “end of life” “plan” and no contractual way to claw back money for it from those getting dividends from Valve, then what the “Valve representatives” said is a completely empty promise - or, in other words, a shameless lie.

    Genuine intentions have reliable funding attached to them, not just talkie-talkie from people who will never suffer in even the tiniest of ways from not fulfilling what they promised.

    In this day and age, we’ve been swamped with examples showing that we can’t simply trust people to have a genuine sense of ethical and moral duty to do what they say they will, when there are no hard consequences for non-compliance and none of their own money is on the line.

    PS: And by “we can’t trust people” I really mean “we can’t trust people who are making statements and promises as nameless representatives of a company”. Individuals speaking personally, for themselves, about something they control are still generally, even in this day and age, much better than people playing the role of anonymous corporate drone.




  • I was curious enough to check, and with 2KB of SRAM that thing doesn’t have anywhere near enough memory to process a 320x200 RGB image, much less 1080p or 4K.

    Further, you definitely don’t want to send 2 images per second down to a server in uncompressed format (even 1080p RGB with an encoding that loses a bit of color fidelity to use just two bytes per pixel adds up to about 4MB per uncompressed image), so it’s either using something with hardware compression or it’s burning processing cycles on that.

    My expectation is that it’s not the snapshotting itself that would eat CPU cycles, it’s the compression.

    That said, I think you make a good point, just with the wrong example. I would’ve gone with: a thing capable of handling video decoding at 50 fps - i.e. one frame per 20ms - (even if it’s actually using hardware video decoding) can probably handle compressing and sending two frames per second over the network, though performance might suffer if they’re using a chip without hardware compression support and a complex compression method like JPEG instead of something simpler like LZW or similar.
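
    As a back-of-the-envelope check on those numbers (my own throwaway calculation, nothing more):

    #include <stdint.h>
    #include <stdio.h>

    /* Uncompressed frame sizes at 2 bytes per pixel (a 16-bit RGB-style
       encoding, as discussed above). 1080p works out to ~4.1MB per frame,
       i.e. ~8.3MB/s if two uncompressed frames were sent every second. */
    int main(void)
    {
        const struct { const char *name; uint32_t w, h; } modes[] = {
            { "320x200", 320,  200  },
            { "1080p",   1920, 1080 },
            { "4K UHD",  3840, 2160 },
        };
        const uint64_t bytes_per_pixel = 2;

        for (size_t i = 0; i < sizeof modes / sizeof modes[0]; i++) {
            uint64_t bytes = (uint64_t)modes[i].w * modes[i].h * bytes_per_pixel;
            printf("%-8s -> %llu bytes per frame\n",
                   modes[i].name, (unsigned long long)bytes);
        }
        return 0;
    }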


  • Liberals are just pro-Oligarchy - they think Money should be above the one power which is led by elected leaders: the State - which is against Democracy just like the Fascists, just with a different and more subtle mechanism determining those whose power is above the power of the vote.

    They’re just a different kind of Far-Right from the Fascists, which is why it is so easy for them to support Zionists - which are ethno-Fascists, the same sub-type of Fascism as the Nazis - even while they commit a Genocide.

    People with even the slightest shred of Equalitarian values wouldn’t ever support those committing ethnic cleansing.


  • It seems to me they’re a country built on 19th century white colonialist values (Jewish white colonialism is no better than the once much more common Christian kind) and which has never evolved from those values but rather kept going until reaching the natural conclusion: Genocide.

    (It’s not by chance that Israelis keep claiming that they have “Western Values” - it’s really just a politically correct way of saying “white values”)

    Israel is similar to apartheid-era South Africa, except that they were never forced to stop and just kept doubling down on the racism and violent oppression of the ethnicity they victimize.

    I blame mainly the US and Germany for the continued support of Israel’s white colonialism and its natural outcome of Genocide.


  • I’ll hazard a guess that your circle is one mainly of highly educated city folk.

    Quite independently of Religion, Education and one’s level of exposure to all sorts of people and complex social environments (which normally comes with big city life) seem to be the biggest deciding factors in whether or not people hold “traditional values” (read: conservative) and the kind of excessive, blind tribalism that makes them more likely to find excuses to support Genocide along ethnic lines “when our side does it”.


  • Server-side checks cost processing power and memory hence they need to spend more on servers.

    Client-side kernel-level anti-cheat only ever consumes resources and causes problems for the actual gamers, not directly for Rockstar’s bottom line (and if it makes the game comms slightly slower on the client side, it might even reduce server resource consumption).

    If Rockstar’s management theory is that gamers will endure just about any level of shit and keep giving them money (a posture which, so far, has proven correct for just about every large game maker doing this kind of shit), then they will logically conclude that their bottom line won’t even suffer indirectly from making life harder for their existing clients, whilst it most definitely will suffer if they have higher server costs from implementing server-side checks for cheating.


  • I played WoW right when it came out, on a PvP server.

    There was already a subset of the crowd just like that back then - some people rushed game progression to reach higher levels as soon as possible, only to then hang out in beginner areas and “pwn” significantly lower-level players.

    That’s around the time when the term “griefer” was coined.

    In these things the real difference is how the servers are structured rather than the human beings: if the architecture is designed so that there is some way to filter players (smaller servers with moderation, or some kind of kick-voting system that bans repeat offenders), griefers end up in their own griefer instances griefing each other and the rest can actually play the game. Otherwise you get an environment that is deeply unfriendly to beginners (and to people with less time, such as working adults).

    As somebody else pointed out, environments where people run their own servers tend to create those conditions at least in some cases (basically, if there’s some kind of moderation), whilst massive centralized-world server environments tend to give free rein to people whose pleasure in a multiplayer game derives mostly from making it unpleasant for others (in game-making, griefing is actually recognized as one of the 4 core types of enjoyment - along with achieving, exploring and socializing - that people can derive from multiplayer games).


  • By a curious turn of life, I have enough technical expertise in the right areas to be able to design the software and most of the hardware to turn a lot of my home smart like that, in a safe way where I’m fully in control of it all (no 3rd party involved)… and I can’t be arsed, for very much those reasons.

    I mean, at one point when I was playing around with microcontrollers I was looking for ideas of things to do with some neat ones that are cheap and have built-in WiFi support, and I just couldn’t find anything worth the trouble, for pretty much the kind of reasons you list.

    Sure, lots of things can be done that are “cool ideas”, just not stuff where the whole “remote controlled from my tablet” part actually significantly reduces the effort of doing something without introducing new problems. (For example, it would be a whole lot of work to get my apartment door to automatically open when my face is detected outside, and then the thing has a non-zero failure rate even if I train the AI really well, so when it fails I’d be stuck outside - hence I would still need to carry a key around, so in the end it’s just less hassle not to do it and to keep opening the door with my key.) Plus, often the problem is that once you add “remote control” to a device’s design you make it consume a lot more power, so now it has to run from mains power rather than from some batteries that will last a year or so.

    The maximum home automation I ended up doing is automated plant watering, and that stuff was designed without remote access exactly because it can run from 3xAAA batteries for a year, even though it has to power a water pump which consumes a fair bit of power when running (but it only runs when the soil in the pot isn’t humid enough, which is so seldom that it averages out to very little power). Sure, it would be “cool” to read the humidity sensor from my tablet and activate watering remotely, but that doesn’t actually serve the point of automated plant watering - making sure my plants don’t die of thirst because I forgot to water them - whilst making the overall design worse, because now it needs a lot more power and I no longer have a design where I can just replace the batteries once a year or so.
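
    For what it’s worth, the control loop for that kind of offline waterer is basically just this - a rough sketch of my own rather than my actual firmware, where read_soil_moisture(), pump_set() and sleep_seconds() are hypothetical stand-ins for whatever the particular microcontroller’s hardware layer provides, and the threshold/timing values are made up:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical hardware helpers: ADC read, pump driver pin, low-power sleep. */
    uint16_t read_soil_moisture(void);   /* higher reading = wetter soil */
    void     pump_set(bool on);
    void     sleep_seconds(uint32_t s);  /* deep sleep, negligible power draw */

    #define DRY_THRESHOLD  300           /* below this the soil counts as dry */
    #define PUMP_SECONDS   10            /* short burst, then re-check later  */
    #define CHECK_SECONDS  3600          /* wake up once an hour              */

    void watering_loop(void)
    {
        for (;;) {
            if (read_soil_moisture() < DRY_THRESHOLD) {
                /* The pump is the only thing that draws real power, and it
                   only runs when the soil is actually dry. */
                pump_set(true);
                sleep_seconds(PUMP_SECONDS);
                pump_set(false);
            }
            /* No radio, no remote access: just sleep until the next check,
               which is what makes a year on 3xAAA batteries possible. */
            sleep_seconds(CHECK_SECONDS);
        }
    }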