AI might be able to solve racism

  • Brain waves can be decoded, and AI can learn to detect what kinds of thoughts people are having. If it detects people having racist thoughts, it would restrict their access to certain things, the same way unvaccinated people were restricted, such as being unable to travel. They would have to get educated and change their thoughts until their thinking was no longer racist.


    https://www.weforum.org/events/world-economic-forum-annual-meeting-2023/sessions/ready-for-brain-transparency/

  • Big if true. Maybe AI should also decode the brain waves of toxic K-pop fans and restrict their access to the Internet and K-pop events. These people would have to get educated and change their way of thinking before they're allowed to participate in real K-pop fan activity.

  • why are we policing thoughts again?


    thinking something =/= actions


    and like superyeah mentioned with toxic K-pop fans - and I would add, why limit it to racism? There are lots of bad thoughts people have


    we as a society do not and should not concern ourselves with what one thinks

  • You reminded me of this anime series called Psycho-Pass. In the anime, citizens of a dystopian society are monitored via sensors for their crime potential. The system measures biometrics of citizens' brains and mentalities. Once the measurement exceeds a certain threshold, law enforcement officers attempt to either arrest or use lethal force on the individual. The system is shown to be flawed because some of the individuals do not have intent to commit any crime despite the activity in their minds.

    (Embedded video from www.youtube.com)

  • We are being fed all those thoughts.

    Those who can rise above and actually ask the question: WHY!

    They are the people worth talking to.

  • Precog laws and the technology behind them are a bad idea and extremely dangerous. That's an existential-crisis-level event, and anyone endorsing such a thing should be heavily scrutinized and, dare I say, even seen as an enemy, or dang near the same level as a terrorist. This type of stuff is beyond dangerous to implement; the consequences of such things are immeasurable, and it should never ever be allowed to happen.

  • Wait...... so humans became so lazy we have to rely on AI now to do homework??

    WTF???


    This is the beginning of the end for us.

    The existence of Terminators is just a matter of time.

    :pepe-firing::pepe-firing:

    "You, me, or nobody is gonna hit as hard as life. But it ain't bout how hard you hit. It's about how hard you can get hit, and keep moving forward. How much you can take and keep moving forward." ~ Rocky Balboa

  • You reminded me of this anime series called Psycho-Pass. In the anime, citizens of a dystopian society are monitored via sensors for their crime potential. The system measures biometrics of citizens' brains and mentalities. Once the measurement exceeds a certain threshold, law enforcement officers attempt to either arrest or use lethal force on the individual. The system is shown to be flawed because some of the individuals do not have intent to commit any crime despite the activity in their minds.

    (Embedded video from www.youtube.com)

    Predictive policing has been used in the Netherlands, where teenagers were profiled as likely to commit crimes as a supposed preventive measure and were subsequently monitored by the police. AI is among the technologies used, and it unsurprisingly targeted minorities, which led to many human rights concerns.

    The result seems to have been a self-fulfilling prophecy: this type of profiling led those on the "Top600 most likely to commit crimes" list, who were aware of their status, to commit crimes because they were already stereotyped.

    Here is some information if you are interested:

    https://www.reuters.com/article/world/pushback-against-ai-policing-in-europe-heats-up-over-racism-fears-idUSKBN2HA1G1/


    Innocent Until AI Says Otherwise. How Predictive Policing in the Netherlands Raises Concerns for Human Rights - Issuu
    issuu.com


  • Precog laws and the technology behind them are a bad idea and extremely dangerous. That's an existential-crisis-level event, and anyone endorsing such a thing should be heavily scrutinized and, dare I say, even seen as an enemy, or dang near the same level as a terrorist. This type of stuff is beyond dangerous to implement; the consequences of such things are immeasurable, and it should never ever be allowed to happen.

    Google can see everything you do online (even in "private" mode). Airport body scanners can see everything on your body. Police have devices that can see through walls and, if they wanted, could record everything you do in private. Every electronic communication is monitored. AI algorithms already know a lot about your personality and tastes; that's what the modern internet is built on. So you are already living in the dystopian techno world. We could actually use this technology for good rather than to make corporations richer, and it would only involve spotting hateful thought patterns. The entire world agrees that racism is bad, unless you think otherwise?

  • Google can see everything you do online (even in "private" mode). Airport body scanners can see everything on your body. Police have devices that can see through walls and, if they wanted, could record everything you do in private. Every electronic communication is monitored. AI algorithms already know a lot about your personality and tastes; that's what the modern internet is built on. So you are already living in the dystopian techno world. We could actually use this technology for good rather than to make corporations richer, and it would only involve spotting hateful thought patterns. The entire world agrees that racism is bad, unless you think otherwise?

    All of the things that you've mentioned are things that you have done or are doing:

    - Google can see what you're doing on private mode (if you use Chrome or Google search, which you shouldn't if you care about privacy or results that aren't totally cooked by SEO)

    - Metal detectors detect things that you already have on you

    - Not every electronic communication is monitored (if you know what you're doing)


    What you're recommending is using AI to police thought crimes (racism isn't even a crime in many countries, and what constitutes racism varies). Specifically, it's profiling. We don't need AI to profile. We can already profile, and it's already looked down on as something we shouldn't do (in America, anyway) when it comes to the state and its populace (and in many cases institutions and the populace).


    This is one of those things that sounds great if it's in the hands of people you agree with. The second it gets into the hands of someone you don't agree with, you'll realize it was a bad idea the entire time.

  • So you are already living in the dystopian techno world

    That's the fallacy of appealing to futility.
    Just because we're already in this dystopian techno world doesn't mean we must accept it and give up.

    We could actually use this technology for good rather than make corporations richer, and it only involved spotting hateful thought patterns, and the entire world agrees that racism is bad, unless you think otherwise?

    That is quite the series of ambitious assumptions. It assumes that: 1. this will be completely protected from corporations, preventing them from using it either openly or secretly; 2. the disjunction between thoughts and actions can be neglected; 3. the entire world considers racism a bad thing, or even agrees on what racism is (normalizing for differences in language, too), when the probability of that is near zero. Even one significant disagreement would invalidate the assumed homogeneity of agreement that racism is bad.

    I believe in the fundamental right to privacy, just like the right to protect private information (like passwords, for example) and the right to informational and intellectual property. Because this proposal necessarily requires monitoring all information in the brain in order to detect the subset of racist thoughts, I would be laid bare: all of my sensitive identification information, thoughts that never made it out of my head, as well as, indeed, potentially racist thoughts. This would be a blade that savages the right to personal thought. Your freedom of speech, religion, and other fundamental rights (hopefully enshrined by your country) would be put in jeopardy, because they all might lead to racist thoughts, given the heavy interconnectivity of the brain.

    In fact I consider the modern day to be *too* transparent and myself entirely unprepared to protect my privacy of thought as it is already.
