• artaxadepressedhorse@lemmyngs.social · 11 months ago

    I am sort of curious, bc I don’t know: of all the types of sexual abuse that happen to children (being molested by family or acquaintances, being kidnapped by the creep in the van, being trafficked for prostitution, abuse in church, etc. etc.), how many cases deal exclusively with producing imagery?

    Next thing I’m curious about: if the internet becomes flooded with AI-generated CP images, could that potentially reduce the demand for RL imagery? Wouldn’t the demand side be met? Is the concern normalization and inducing demand? Do we know there’s any significant correlation between more people looking and more people actually abusing kids?

    Which leads to the next part: I’ve played violent video games and listened to violent, aggressive music for many years now, and I enjoy them a lot, but I’ve never done violence to anybody, nor would I want to. Is persecuting someone for imagining/mentally roleplaying something that’s cruel actually a form of social abuse in itself?

    Props to anybody who asks hard questions btw, bc guaranteed there will be a lot of bullying on this topic. I’m not saying “I’m right and they’re wrong”, but there’s a lot of nuance here, and people here seem pretty quick to hand govt and police incredible powers for… I dunno… how much gain, really? You’ll never get back rights that you throw away. Never. They don’t make 'em anymore these days.

      • artaxadepressedhorse@lemmyngs.social · 11 months ago

        How often does tracking child abuse imagery lead to preventing actual child abuse? Out of all the children who are abused each year, what percentage of their abusers are tracked via online imagery? Aren’t a lot of these cases IRL/situationally based? That’s what I’m trying to determine here. Is this even a good use of public resources and/or focus?

        As for how you personally feel about the imagery: I believe that a lot of things humans do are gross, but I don’t believe we should be arbitrarily creating laws to restrict things others do that I find appalling… unless there’s a very good reason to. It’s extremely dangerous to go flying too fast down that road; especially with anything related to “terror/security” or “for the children”, we need to be careful. We don’t need another case of “Well, in hindsight, that [war on whatever] was a terrible idea and hurt lots and lots of people.”

        And let’s be absolutely clear here: I 100% believe that people abusing children is fucked up, and the fact that I even need to add this disclaimer here should be a red flag about the dangers of how this issue is structured.

          • CorruptBuddha@lemmy.dbzer0.com · 11 months ago

            Okay… So correct me if I’m wrong, but being abused as a child is like… one of the biggest predictors of becoming a pedophile. So like… Should we preemptively go after these people? You know… To protect the kids?

            How about single parents who expose their kids to strangers while dating? That’s a massive vector for exposing kids to abuse.

          • artaxadepressedhorse@lemmyngs.social · 11 months ago

            I appreciate you posting the link to my question, but that’s an article written from the perspective of law enforcement. They’re an authority, so they’re incentivized to manipulate facts and deceive to gain more authority. Sorry if I don’t trust law enforcement, but they’ve proven themselves untrustworthy at this point.

          • PelicanPersuader@beehaw.org · 11 months ago

            It already is outlawed in the US; the US bans all depictions precisely because of this. The courts anticipated that there would come a time when people could create images indistinguishable from reality, so allowing any such content to be produced wasn’t permissible.

      • Nollij@sopuli.xyz · 11 months ago

        Of all the problems and challenges with this idea, this is probably the easiest to solve technologically. If we assume that AI-generated material is given the OK to be produced, the AI generators would need to (and easily could, and arguably already should) embed a watermark (visible or not) or a digital signature. This would prevent actual photos from being presented as AI. It may be possible to remove these markers, but the reasons to do so are very limited in this scenario.
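
        To make that concrete, here’s a minimal sketch of the digital-signature variant, assuming the generator holds a signing key and verifiers hold the matching public key. It uses the real `cryptography` and `Pillow` libraries, but the `ai-provenance-sig` metadata field name and the key handling are made up for illustration, not any actual standard:

        ```python
        # Sketch: mark generator output with a cryptographic signature stored
        # in a PNG text chunk, so anything unsigned is presumed a real photo.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import (
            Ed25519PrivateKey,
            Ed25519PublicKey,
        )
        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        def save_signed(img: Image.Image, key: Ed25519PrivateKey, path: str) -> None:
            """Sign the raw pixel data and store the signature in PNG metadata."""
            signature = key.sign(img.tobytes())  # any later pixel edit breaks this
            meta = PngInfo()
            meta.add_text("ai-provenance-sig", signature.hex())  # hypothetical field
            img.save(path, pnginfo=meta)

        def is_marked_ai(path: str, pub: Ed25519PublicKey) -> bool:
            """True only if the file carries a valid signature over its pixels."""
            img = Image.open(path)
            sig_hex = img.text.get("ai-provenance-sig") if hasattr(img, "text") else None
            if sig_hex is None:
                return False  # unmarked: treated as a real photo by default
            try:
                pub.verify(bytes.fromhex(sig_hex), img.tobytes())
                return True
            except (InvalidSignature, ValueError):
                return False

        # Example: the generator signs its output; a verifier checks it later.
        key = Ed25519PrivateKey.generate()
        generated = Image.new("RGB", (64, 64))  # stand-in for a generated image
        save_signed(generated, key, "out.png")
        print(is_marked_ai("out.png", key.public_key()))  # True
        ```

        Of course, as the replies below point out, a metadata mark like this only keeps honest people honest: stripping the text chunk removes the mark entirely, and the whole scheme depends on who holds the keys and what the law presumes about unmarked content.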

          • Nollij@sopuli.xyz · 11 months ago

            I was actually specifically avoiding all of those concerns in my reply. They’re valid, and others are discussing them in this thread; they’re just not what my reply was about.

            I was exclusively talking about how to identify whether an image was generated by AI or is a real photo.

            • abhibeckert@beehaw.org · 11 months ago

              I was exclusively talking about how to identify whether an image was generated by AI or is a real photo.

              These images are being created with open-source/free models. Whatever watermark feature the open-source code has will simply be removed by the criminal.

              Watermarking is like a lock on a door. It keeps honest people honest… which is useful, but it’s not going to stop any real criminals.

              • evranch@lemmy.ca · 11 months ago

                In this specific scenario, you wouldn’t want to remove the watermark.

                The watermark would be the only thing that defines the content as “harmless” AI-generated content, which for the sake of discussion is being presented as legal. Remove the watermark, and as far as the law knows, you’re in possession of real CSAM and on your way to prison.

                The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if properly implemented, but I would assume the goal in marketing the real thing could only be to sell it to the worst of the worst, people who get off on the fact that children were abused to create it. And in that case, if AI is indistinguishable from the real thing, how do you sell criminal content when everyone thinks it’s fake?

                Anyways, I agree with other commenters that this entire can of worms should be left tightly shut. We don’t need to encourage pedophilia in any way. “Regular” porn has experienced selection pressure to the point where taboo is now mainstream. We don’t need to create a new market for bored porn viewers looking for something shocking.

                • abhibeckert@beehaw.org · 11 months ago

                  The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if properly implemented,

                  It wouldn’t be expensive; you could do it on a laptop in a few seconds.

                  Unless, of course, we decide that only large corporations should be allowed to generate images and completely outlaw all open-source/free image generation software, and that’s not going to happen.

                  Most images are created with a “diffusion” model, where you take an image and run an algorithm that slightly modifies it, over and over, until you get what you want. You don’t have to start with a blank image (and for the best results, you commonly don’t). And you can run just a single pass, with the output being almost indistinguishable from the input.

                  This is a hard problem to solve, and I think catching abuse after it happens is only going to get more difficult. Better to focus on stopping the abuse from happening in the first place, e.g. by flagging and investigating questionable behaviour by kids in schools. That approach is proven and works well.

    • ConsciousCode@beehaw.org · 11 months ago

      I respect your boldness in asking these questions, but I don’t feel like I can adequately answer them. I wrote a six-paragraph essay, but after using GPT-4 as a sensitivity reader, I don’t think I can post it without some kind of miscommunication or unintentional hurt. Instead, I’ll answer the questions directly by presenting non-authoritative alternate viewpoints.

      1. No idea; maybe someone else knows.
      2. That makes sense to me; I would think there would be strong pressure to present fake content as real to avoid getting caught, but they’re already in deep legal trouble anyway, and I’m sure they get off on it too. It’s hard to know for sure, because it’s so stigmatized that the data are both biased and sparse. Good luck getting anyone to volunteer that information.
      3. I consider pedophilia (i.e. the attraction) to be amoral but acting on it to be “evil”, à la noncon, gore, necrophilia, etc. That’s just consistent application of my principles, though, as I haven’t humanized them enough to care that pedophilia itself is illegal. I don’t think violent video games are quite comparable, because humans normally abhor violence, so there’s a degree of separation, whereas CP is inherently attractive to them. More research is needed, if we as a society care enough to research it.
      4. I don’t quite agree; rights are hard-won and easily lost, but we seem to gain them over time. Take trans rights to healthcare, for example: first it wasn’t available to anyone, then it was available to everyone (trans or not), now we have reactionary denials of those rights, and soon we’ll get those rights for real, like what happened with gay rights. Also, I don’t see what rights are lost in arguing for the status quo that pedophilia remain criminalized. If MAPs are any indication, I’m not sure we’re ready for that tightrope, and there are at least a dozen marginalized groups I’d rather see get rights first. Unlike with gay people, for instance, being “in the closet” is a net societal good here, because there’s no valid way to present that publicly without harming children or eroding their protections.
    • jivandabeast@lemmy.browntown.dev · 11 months ago

      Points about real stuff hiding in a sea of fake stuff aside: because these AIs would likely have been trained on images of real children, and potentially on real abuse material, each new generated image could be considered a re-exploitation of those children.

      Of course, I don’t think that’s true in a legal sense, but it definitely is in an emotional and moral sense. I mean, look at the damage deepfakes have done to the mental health of so many celebrities and other victims; then imagine a literal minor trying to move past one of the most traumatic things that could have happened to them.

      • Krauerking@lemy.lol · 11 months ago

        I really don’t think it would actually need to be trained on that specific data to be able to create it. If it can figure out a blueberry dog, “naked child” seems pretty boring.

  • Draedron@lemmy.dbzer0.com · 11 months ago

    Isn’t it better that they are AI-generated than real? Pedophiles exist and won’t go away, and no one can control that. So better they watch AI images than real ones, or worse.

        • barsoap@lemm.ee · 11 months ago

          Images, yes, but mixing concepts is a mixed bag. Just because the model can draw, say, human faces and dog faces doesn’t mean it has the understanding necessary to blend those concepts. Without employing specialised models (and yes, of course the furries have been busy), the best you’ll get is facepaint. The pope at a beach bar doesn’t even come close to exercising that kind of capability: the pope is still the pope and the beach bar is still the beach bar, and a person is still sitting there slurping a caipirinha.

          • Amju Wolf@pawb.social · 11 months ago

            I mean, if you train a model on porn with adult actors and on regular photos of children, it shouldn’t be hard to generate the combination.

            You probably wouldn’t even need any fancy training data, but if you really wanted to, you could pick adult actors who look young or are in other ways similar to children to help the process.

            • barsoap@lemm.ee · 11 months ago

              Knowing what a nude adult looks like doesn’t mean the model knows what a nude child looks like. I’m quite sure it’s easy to generate disturbing images like that, but actual paedophiles, I think, won’t be satisfied with child faces on small adult bodies.

              Ordinary deepfakes actually have a very similar problem: sure, you can take a picture of a celebrity and tell the AI to undress them, but it won’t be their actual body. The AI will be able to approximate their overall build, but it’s going to be a generic adult body, not the celebrity’s body. Or, differently put, AI models aren’t any better at undressing people with their eyes than teenagers are.

              • Amju Wolf@pawb.social · 11 months ago

                I see where you’re coming from, but that’s a technical issue that will probably be solved in time.

                It’s also really not black and white; sure, maybe you can see it isn’t perfect, but you’d still prefer it over content where you know someone was actually harmed.

                Despite the reputation people like that have (due to the simple fact of how reporting works), most are harmless, like you and me, and don’t actually want to see innocent people suffer and would never act on their desires. So having a safe and harmless outlet might help.

                • barsoap@lemm.ee · 11 months ago

                  I see where you’re coming from, but that’s a technical issue that will probably be solved in time.

                  You cannot create information from nothing.

                  So having a safe and harmless outlet might help.

                  Psychologists/psychiatrists are still on the fence on that one; I wouldn’t be surprised if it depends on the person. And yes, the external harm produced by AI images is definitely lower than that produced by actual CSAM, doubly so for newly produced CSAM, but that doesn’t mean that therapy, even in its current early stages, couldn’t do even better.

                  Differently put: we may again be falling into the trap of trying to find technological solutions to societal problems (well, this is /c/technology…). Which isn’t to say that we shouldn’t care at all about models trained on CSAM, but that’s addressing symptoms, not causes. Ultimately, addressing root causes is more important: the vast majority of paedophiles are not exclusive paedophiles; often they’re not even really attracted to kids at all beyond having developed a fetish. They’re rapists focussing on the most vulnerable, often due to having been victims of sexual abuse themselves.

              • artaxadepressedhorse@lemmyngs.social · 11 months ago

                I dunno, you seen the stats on the popularity of shemale porn? Pretty sure the human brain isn’t that picky. It goes: “Boobs? Check. Cock insertion? Check.”

            • barsoap@lemm.ee · 11 months ago

              That’s not concept mixing. Also, it’s not proper origami (paper doesn’t fold like that). The AI knows “realistic swan” and “origami swan”, meaning it has a gradient from “realistic” to “origami”; crucially, it’s not changing the subject, only the style. It also knows “realistic human”, so now follow the gradient down to “origami human” and there you are. It’s the same capability that lets it draw a realistic Mickey Mouse.

              Its having an understanding of two different subjects, say “swan” and “human”, however, doesn’t mean that it has a gradient between the two, much less a usable one. It might be able to match up the legs and blend them a bit because the anatomy somehow matches, and a beak is a protrusion, so it might try to match it with the nose. Wings and arms? Well, it has probably seen pictures of angels, and now we’re nowhere close to a proper chimera. There’s a model specialised in chimeras (gods, is that ponycat cute), but when you flick through the examples you’ll see that it’s quite limited if you don’t happen to get lucky: you often get properties of both chimera ingredients, but they’re not connected in any reasonable way. Which is different from the behaviour of base SDXL, which is way more prone to bail out and put the ingredients next to each other. If you want it to blend things reliably, you’ll have to train a specialised model using appropriate input data, like e.g. this one.

    • Coffee Junky ❤️@beehaw.org · 11 months ago

      Yeah, exactly. I don’t want to see it, but the same goes for a lot of weird fetishes.

      As long as no one is getting hurt I don’t really see the problem.

      • Barry Zuckerkorn@beehaw.org · 11 months ago

        As long as no one is getting hurt I don’t really see the problem.

        It’d be hard to actually meet that premise, though. People are getting hurt.

        Child abuse imagery is used as a currency within those circles to incentivize additional distribution, which means there is a demand for ongoing and new abuse of actual victims. Extending that financial/economic analogy, seeding that economy with liquidity might or might not incentivize the creation of new authentic child abuse imagery (which requires a child victim to create). That’s not as clear, but what is clear is that it would reduce the transaction costs of distributing existing child abuse imagery, which is a form of re-victimizing those who have already been abused.

        Child abuse imagery is also used as a grooming technique. Normalization of child sexual activity is how a lot of abusers persuade children to engage in sexual acts. Providing victimless “seed” material might still result in actual abuse happening down the line.

        If the creation of AI-generated child abuse imagery begins to give actual abusers and users of real child abuse imagery cover, to where it becomes more difficult to investigate the crime or secure convictions against child rapists, then the proliferation of this technology would make it easier to victimize additional children without consequences.

        I’m not sure what the latest research says about the extent to which viewing and consuming child porn leads to harmful behavior down the line (on the one hand, maybe it’s a less harmful outlet for unhealthy urges; on the other, it may feed an addictive cycle that results in net additional harm to society).

        I’m sure there are a lot of other considerations and social forces at play, too.

        • Amju Wolf@pawb.social · 11 months ago

          I mean, you could also go with a saner model that still represses the idea while allowing some controlled environment for the people whom it can really help.

          You could start by not prosecuting possession, only distribution. So it would still be effectively “blocked” everywhere, like it’s (attempted to be) now, but distributing models for generation would be fine.

          Or you could create “known safe”, AI-generated datasets to distribute to people, knowing they were ethically created.

          is used as a currency within those circles to incentivize additional distribution, which means there is a demand for ongoing and new abuse of actual victims

          A huge part of the idea is that if you create a surplus of supply, it cannot work as a currency, and actual abuse material will be drowned out and not worth creating for the vast majority of people: too risky and irrelevant if you have a good enough alternative.

          You’re definitely right though that there would have to be more considerations.

          • ParsnipWitch@feddit.de · 11 months ago

            You seem to think it’s some kind of human right and people are entitled to have fapping material provided for them. No one is hurt if people don’t have fapping material.

            • Amju Wolf@pawb.social · 11 months ago

              There is an argument to be made that if you allow people with unhealthy desires a safe and harmless outlet, they will be less compelled to go with the harmful option.

              And, actually, I kinda want to disagree with the premise too. Even if it were provably true that no one gets hurt when there’s no porn, you can flip the question: why should it be banned if it doesn’t hurt anyone? Do you want to live in a world where anything that’s perceived as bad is just outright banned without much thought?

              • ParsnipWitch@feddit.de · 11 months ago

                You are already making assumptions about whether or not producing artificial CP is harmful, but in truth nobody knows. And studies have shown that media does indeed influence us; it’s quite naive to assume that porn alone somehow doesn’t.

                • Amju Wolf@pawb.social · 11 months ago

                  Artificial or not, this isn’t really a new idea. A similar argument can be made for existing CSAM and providing it under controlled conditions.

                  And yeah, “nobody knows”, in huge part because doing such a study would be highly illegal under current CSAM laws in most parts of the world. So, paradoxically, you can’t even legally study how to help those people, even if they actively want to be helped and want to help you do research on it.

                  Edit: Also, I’m not really making any assumptions; I literally said “there is an argument to be made”. I’m not making that argument because I don’t actually know enough. Just saying that it’s an option that should be explored.

        • fox_the_apprentice@lemmynsfw.com · 11 months ago

          A lot of the comments in here seem a little bit too sympathetic

          It is a mental illness. If fake images result in less real-world abuse, then that’s a good thing.

            • Coffee Junky ❤️@beehaw.org · 11 months ago

              Do you think people who are gay are mentally ill? Do you think those people specifically choose to be attracted to people of the same sex? A lot of the same things can be said about people who are attracted to kids.

              I’m not trying to say we should tolerate child abuse in any way, shape, or form. But it’s important that we recognize that there are people like this and that they didn’t choose to be that way. People have no problem talking about punishment, but they don’t like to accept that these people are also victims in a way.

            • CorruptBuddha@lemmy.dbzer0.com · 11 months ago

              This took me 2 seconds to google.

              Perhaps the most serious accusation against pornography is that it incites sexual aggression. But not only do rape statistics suggest otherwise, some experts believe the consumption of pornography may actually reduce the desire to rape by offering a safe, private outlet for deviant sexual desires.

              “Rates of rapes and sexual assault in the U.S. are at their lowest levels since the 1960s,” says Christopher J. Ferguson, a professor of psychology and criminal justice at Texas A&M International University. The same goes for other countries: as access to pornography grew in once restrictive Japan, China and Denmark in the past 40 years, rape statistics plummeted. Within the U.S., the states with the least Internet access between 1980 and 2000—and therefore the least access to Internet pornography—experienced a 53 percent increase in rape incidence, whereas the states with the most access experienced a 27 percent drop in the number of reported rapes, according to a paper published in 2006 by Anthony D’Amato, a law professor at Northwestern University.

              https://www.scientificamerican.com/article/the-sunny-side-of-smut/#:~:text=Perhaps the most serious accusation,outlet for deviant sexual desires.

                • CorruptBuddha@lemmy.dbzer0.com · 11 months ago

                  Not saying it is impossible, but the simple fact that the internet exists or not is absolutely not indicative of porn having a positive or negative effect. It is a pretty weak article to use as evidence against what I am saying when it states clearly that these are only associations and correlations, and essentially guesswork.

                  It’s more than you’ve provided.

        • Amju Wolf@pawb.social · 11 months ago

          A “weird fetish” is, quite literally, a paraphilia, just like pedophilia. We only care about the latter because it has the potential to hurt people if acted upon. There’s no difference, medically speaking.

          A lot of the comments in here seem a little bit too sympathetic.

          When you want to solve an issue, you need to understand the people having it and have some compassion, which tends to include things like defending people who didn’t actually do anything harmful from being grouped with the kind who do act on their urges.

          • artaxadepressedhorse@lemmyngs.social · 11 months ago

            Humans also tend to possess an abusive streak: once they can justify labeling somebody as “bad”, they can justify being cruel to them. I see people doing it all the time.

    • hh93@lemm.ee · 11 months ago

      I guess it depends on what pedophilia is, in the end, and how it develops.

      If it’s more like a sexual preference, then it’s probably already there when someone is born and isn’t changeable; but if it’s more like a fetish, then those are (afaik) related to experiences and exposure while growing up, and actually can change and develop over time. In that case it could be really dangerous to have that kind of material floating around freely.

  • 4dpuzzle@beehaw.org · 11 months ago

    Now that CSAM content is generated by bigcos with deep pockets, politicians don’t want to scan their servers or take any other action. These are the same demagogues who wanted to kill end-to-end encryption and scan ordinary people’s devices in the name of fighting CSAM. Greedy and hypocritical vermin.

    • SmoochyPit@beehaw.org · 11 months ago

      I can’t believe how hard it is to avoid drawn or generated CP on there, and you can only ignore one tag without premium, so it’s not viable to manually make a blocklist :(

  • AutoTL;DR@lemmings.world (bot) · 11 months ago

    🤖 I’m a bot that provides automatic summaries for articles:


    NEW YORK (AP) — The already-alarming proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.

    In a written report, the U.K.-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.

    In a first-of-its-kind case in South Korea, a man was sentenced in September to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.

    What IWF analysts found were abusers sharing tips and marveling about how easy it was to turn their home computers into factories for generating sexually explicit images of children of all ages.

    While the IWF’s report is meant to flag a growing problem more than offer prescriptions, it urges governments to strengthen laws to make it easier to combat AI-generated abuse.

    Users can still access unfiltered older versions of Stable Diffusion, however, which are “overwhelmingly the software of choice … for people creating explicit content involving children,” said David Thiel, chief technologist of the Stanford Internet Observatory, another watchdog group studying the problem.


    Saved 78% of original text.

  • jsdz@lemmy.ml · 11 months ago

    vastly expands the pool of potential victims

    I’m not brave enough at the moment to say it isn’t some kind of crime, but creating such images (as opposed to spamming them everywhere, using them for blackmail, or whatever) doesn’t seem to be a crime that involves any victims.

    • FaceDeer@kbin.social · 11 months ago

      I’m brave enough to say what I am sure some people are thinking.

      If a pedophile can have access to a machine that generates endless child porn for them, completely cutting off the market for the “real thing”, then maybe that’s a step in a positive direction. Very far from perfect but better than the status quo.

      The ideal ultimate solution is to develop a treatment that pedophiles can use to just stop being pedophiles entirely. I bet most pedophiles would jump on such a thing. But until that magical day maybe let’s explore options that reduce the harm done to actually real children in the immediate term.

      • Scrubbles@poptalk.scrubbles.tech · 11 months ago

        Some psychologists agree with you. Others say it would only make the problem worse, making them want to escalate. Definitely one debate I’m leaving to the professionals, and I’ll go with their opinion.

    • SmoochyPit@beehaw.org · 11 months ago

      My bigger concern is the normalization of and exposure to those ideas and concepts (sexualization of children). That’s also why I dislike loli/shota media, despite it being fictional.

      That said, I still think it’s a much better alternative to CSAM and especially to actually harming a child for those who have those desires due to trauma or mental illness. Though I’m not sure if easy, open access is entirely safe, either.

      • ono@lemmy.ca · 11 months ago

        My bigger concern is the normalization of and exposure to those ideas and concepts

        The same concern has been behind attempts to restrict/ban violent video games, and films before that, and books before that. Despite generations of trying, I don’t think a causal link has ever been established.