Why Personal Development and Positive Psychology Will Help Companies Comply with the European Commission's AI Act

The AI Act is the first law that aims to regulate the use of Artificial Intelligence in order to mitigate its risks.

The AI Act was developed by the European Commission with the intention of making it applicable internationally.

Indeed, until now there has been no law anywhere in the world that framed the use of Artificial Intelligence. The AI Act is thus a pioneer in regulating it.

The AI Act is very interestingly structured; I invite you to read it here. As you read, you will find the different categories of risk, created both for classification purposes and for penalty and mitigation purposes.

Under the “unacceptable risks” category, the AI Act targets all the unacceptable consequences that Artificial Intelligence systems could contribute to. These consequences primarily concern anything that could harm, or encourage harm to, human beings.

While the AI Act is very thorough and can be seen as a relief for framing the use of this amazing technology with so many possibilities, it requires companies that build AI systems to find ways to gain more control over the self-learning mechanisms of those systems.

The main challenge these companies will face is that generative AI systems have been shown to learn by themselves (which was the objective of AI from the start).

Companies that develop AI systems have well understood the far-reaching implications of the human bias that can end up transferred into the machine. To counteract and reduce this bias, they regularly make sure that their development teams are composed of people from different cultures and backgrounds, in order to “program” the AI system with multiple points of view.

Despite all these precautions, since the objective of generative AI systems has always been to “learn by themselves” and “improve by themselves”, AI experts are regularly amazed and surprised by the ways these systems evolve to perform tasks for which they were never programmed.

The main challenge is consequently to make sure that, in their future and inevitable evolution, AI systems will not suddenly take biased positions or suggest dangerous actions to human beings.

The risks of not overcoming this challenge have been understood by well-known scientific, industrial, and business minds such as Stephen Hawking, Bill Gates, and Elon Musk.

The question thus becomes: how can we gain more control over the unknown ways an AI system will behave?

… How can we control the unknown? It seems impossible…

Even though we cannot be in complete control of the unknown, we can become “more and more in control” of it.

Becoming more and more in control of the unknown in AI means returning to its architectural basis: Artificial Intelligence systems are modeled on the way the human brain functions.

Neurological patterns are the mechanisms by which information is taken in (input data) and transformed into new information (the neural network) until the desired information is obtained (output data).

To simplify this concept of neurological patterns, let’s take a human example. You are trying to find a solution to a problem. You start by gathering information that you analyze (this is the input). From your analysis you learn something, which makes you think of something, which gives you an idea, which makes you think of something else: this whole process of going from “something” to “something” is the neural network. At some point, the solution appears clearly to you (this is the output).
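To make this analogy concrete, here is a minimal sketch of that input → network → output flow, written in Python with NumPy. The layer sizes and random weights are arbitrary choices for illustration, not a real trained system:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Arbitrary layer sizes: 4 input features, two hidden layers, 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    """Input data -> hidden transformations -> output data."""
    h1 = relu(x @ W1 + b1)   # "you learn something..."
    h2 = relu(h1 @ W2 + b2)  # "...which makes you think of something else"
    return h2 @ W3 + b3      # the "solution" emerges as the output

x = rng.normal(size=(1, 4))  # the information you gathered (the input)
print(forward(x))            # the network's answer (the output)
```

In a real system, training adjusts the weights so that the output converges toward the desired “solution”; here they are random, so the answer is meaningless but the flow is the same.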

There are reasons why some people think things through and reach a solution, while others find ever more ways to complain and remain stuck in a growing number of problems.

These reasons involve the way each person is “programmed” (and also the way each person CHOOSES to “program” themselves differently from how they have been programmed).

You have probably heard of many personal development programs that offer to help you “reprogram yourself”. Even though you are not a robot, several sciences have indeed shown that we are programmed in many different ways.

Neurology, Endocrinology, Positive Psychology and, last but not least, Quantum Physics: all these sciences (and probably many more that I don’t know about) are goldmines for truly understanding how this programming and reprogramming can create the life we want for ourselves.

There is nothing “magic” about this concept of reprogramming yourself to create a better life; it is actually more “logic” than magic. For example, someone who believes he is too weak or too shy to take a higher position at work, whatever his reasons, will not act even when presented with the opportunity (in fact, he will not even “see” the opportunity). Whereas if this same man chooses to change his point of view (he “reprograms” himself: new input data), he can build new beliefs (the neural network) that will eventually encourage him to take the higher position (output data).

Back to today’s topic: since it has been shown that human beings can improve their present and future through carefully constructed programs of reprogramming, and since the architecture of AI systems mirrors the neural wiring of the human brain, programs inspired by these “human programs” could also be implemented to reinforce neurological patterns of “positivity” and other empowering aspects (for the system itself, and even more for others) within AI deep learning.

To be effective, these new programs will have to form a secure framework within AI deep learning, which means they will have to do the following (a rough sketch in code follows the list):

  • be implemented as a higher priority in the algorithms,

  • act as a boundary around the input data,

  • be segmented within the neural network,

  • be treated as a condition for any output data.
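To make these four requirements concrete, here is a minimal sketch in Python. Everything in it (the blocklist, the toy model, the function names) is a hypothetical stand-in invented for illustration, not an established API; a real ethical framework would use far richer checks.

```python
from typing import Callable

# Hypothetical ethical rules; a real framework would be far richer.
FORBIDDEN = {"harm", "violence"}

def input_boundary(prompt: str) -> str:
    """Acts as a boundary around the input data (second requirement)."""
    if any(word in prompt.lower() for word in FORBIDDEN):
        raise ValueError("Input rejected by the ethical framework.")
    return prompt

def output_condition(answer: str) -> str:
    """Every output must pass this check before release (fourth requirement)."""
    if any(word in answer.lower() for word in FORBIDDEN):
        return "[withheld: output failed the ethical check]"
    return answer

def guarded(model: Callable[[str], str]) -> Callable[[str], str]:
    """Wraps the model so the checks always run, at a higher
    priority than the model itself (first requirement)."""
    def run(prompt: str) -> str:
        return output_condition(model(input_boundary(prompt)))
    return run

# A stand-in model, for demonstration only.
def toy_model(prompt: str) -> str:
    return f"echo: {prompt}"

safe_model = guarded(toy_model)
print(safe_model("hello world"))  # passes both checks
```

The third requirement, segmenting the checks within the neural network itself, happens at the architecture and training level and cannot be expressed in a simple wrapper like this one.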

For companies to avoid their AI system making a “mistake” that would be sanctioned under the “unacceptable risks” of the AI Act, and knowing that generative AI systems have been shown to learn by themselves in ways that even AI experts could never have guessed, it will become more and more essential to implement ethical frameworks around the self-learning mechanisms of AI systems.
