Hot off the back of its recent leadership rejig, Mozilla has announced users of Firefox will soon be subject to a ‘Terms of Use’ policy — a first for the iconic open source web browser.

This official Terms of Use will, Mozilla argues, offer users ‘more transparency’ over their ‘rights and permissions’ as they use Firefox to browse the information superhighway — as well as Mozilla’s “rights” to help them do it, as this excerpt makes clear:

You give Mozilla all rights necessary to operate Firefox, including processing data as we describe in the Firefox Privacy Notice, as well as acting on your behalf to help you navigate the internet.

When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

Also about to go into effect is an updated privacy notice (aka privacy policy). This adds a crop of cushy caveats to cover the company’s planned AI chatbot integrations, cloud-based service features, and more ads and sponsored content on the Firefox New Tab page.

  • ArchRecord@lemm.ee · 12 hours ago
    Citation needed. How did you calculate that statistical probability, my friend?

    I don’t, because I don’t spend all my time calculating the exact probability of every technology in existence harming or not harming people. You also didn’t provide any direct mathematical evidence when arguing the contrary: that these things actually do cause more harm than benefit even when they’re created to do good things. We’re arguing on concepts here.

    That said, if you really think that things made to be bad, with only a chance of doing something good later, have the same or larger chance of doing bad things as something created to be good, with only a chance of doing something bad later on, then I don’t see how it’s even possible to continue this conversation. You’re presupposing that any technology you view as harmful has automatically done more harm than good, without any reason whatsoever for doing so. My reasoning is simply that harm is more likely to come from something created to do harm from the start than from something that only has a chance of becoming bad.

    Generally speaking, something with a near-100% chance of doing harm, because it was made for that purpose, won’t do less harm than something whose chance of causing harm is far lower from the start, because for the latter any harm is only a possibility rather than a near-certainty.

    So you are open to the possibility that nukes are less dangerous than spears, but more dangerous than AI? Huh.

    I’m open to the idea that they’ve caused more deaths, historically, since that’s the measure you seemed to be going with when you referenced the death toll of nukes and then used other things explicitly created as weapons (guns, spears, swords) as additional arguments.

    I don’t, however, see any reason why AI is more likely to cause significant harm, death or otherwise, compared to, say, the death toll of spears. And I don’t think nukes are outright less harmful than spears, because they’re highly likely to cause drastically larger amounts of future death and environmental devastation. I back that up with the fact that countries continue to expand their stockpiles, increasingly threatening nuclear attacks as a “deterrent,” while organizations such as the Bulletin of the Atomic Scientists continue to state that the risk of future nuclear war is only growing. If we talk about current death tolls, sure, they’ve probably done less, but today is not the only point in time by which we can judge possible risk.

    According to whom? How are you defining harm and benefit? You’re attempting to quantify the unquantifiable.

    Yes, you’ve discovered moral subjectivity. Good job. I define harm and benefit based on what enables or prevents humans’ ability to experience the largest amount of happiness and overall well-being, as I’m a Utilitarian.

    Ah of course, because human beings famously never use or do anything that makes them less happy. Human societies have famously never implemented anything that makes people less happy. Do we live on the same planet?

    Your argument was based on things that are entirely personal, self-driven positions, such as finding AI to be a better partner. If people didn’t enjoy that more, they wouldn’t be seeking out AI partners when specifically trying to find someone who will provide them with the most overall happiness. Of course people can do things that make them less happy; all I’m saying is that you’re not providing any evidence for why people would do so in the scenarios you’re describing. You’re simply assuming not only that AI will develop into something that can harm humans, but also that humans will choose to use those harmful things, without explaining why.

    Again, apologies if my wording was unclear, but I’m not saying humans are never self-destructive, just that you’ve provided no evidence as to why they would choose to be that way, given the circumstances you provided.

    I’m utilizing my intelligence and my knowledge about human nature and human history to make an educated guess about future possible outcomes.

    I would expect you to intuitively understand the reasons why I might believe these things, because I believe they should be fairly obvious to most people who are well educated and intelligent.

    No, I don’t intuitively understand, because your own internal intuitive understanding of the world is not the same as mine. We are different people. This answer is not based on anything other than “I feel like it will turn out bad, because humans have used technology badly before.” You haven’t even shown that it’s possible for AI to become that capable in the first place, let alone shown the likelihood of it being developed to do those bad things and then actually being implemented.

    This is like arguing that our current weapons will necessarily lead to the development of the Death Star, because we know what a Death Star could be, weapons are improving, and humans sometimes use technology in bad ways. I don’t just want your “intelligence and knowledge about human nature and human history” to back up why our weapons will necessarily create the Death Star; I want you to show that it’s even possible, and demonstrate why you think it’s likely we’d choose to develop it to that specific point. I hope that analogy makes sense.

    Hence why I suspected you of using AI, because you repeatedly post walls of text that are based on incredibly faulty and idiotic premises.

    Sorry for trying to explain myself with more nuance than most people on the internet. Sometimes I type a lot, too bad I guess.

    Cheers mate, have a good one.

    You as well.