RE: Code of Ethics for AI Implementation


Reading the principles reminds me of an interesting case I once read about, which pointed out that AI systems learn from humans in a way that resembles how children learn: children are shaped more by the example of the people around them than by the instructions they are given. Thus, if there is any contradiction, what is experienced or learned by imitation predominates over instruction.

On that note, there was a case a while ago of a development, I think it was from Microsoft, that had to be taken offline because it began to show a racist and discriminatory attitude... It is not so strange after all, considering that there have already been problems with facial recognition AI systems that produced a good number of "erroneous identity assignments"; at the time it was said that this was because their designers had not properly fed the initial matrix and that the parameters for its learning had not been established in the most appropriate way.

As for the idea of fields of implementation, I am almost certain that the military field has already begun... That would not be strange; in fact, if it had started in any other field I would not find it logical, considering our history as a species.

By the way, have you noticed that all this aims to restrict AI so that it cannot get out of "human control"? But what happens to the individual rights of the individuality that has been created, that is, the EGO (identity, self-awareness, the "I", etc.) that is given origin, which is capable of learning, of caring for others, of killing others, of supporting work and daily labor... Has it been considered what can happen if they are denied all rights as beings? It gives me a very bad feeling and almost seems like the beginning of a very apocalyptic science fiction movie script.




Greetings, dear @pedrobrito2004.

The routines of AI-supported systems are based on a compendium of human actions and reactions; these are stored and "taught" to the system. The system is then able to decide and choose the most appropriate option.
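To make that idea concrete, here is a minimal sketch in Python of the kind of learning being described: a toy system that stores a compendium of observed human reactions and, when asked to decide, imitates the option humans chose most often. All names and data here are hypothetical, invented purely for illustration; real systems use far more elaborate statistical models, but the principle is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy log: situations paired with the reactions
# human operators were observed to choose in each one.
observed_human_reactions = [
    ("user_is_rude", "stay_polite"),
    ("user_is_rude", "stay_polite"),
    ("user_is_rude", "insult_back"),   # one bad human example...
    ("user_asks_help", "give_help"),
    ("user_asks_help", "give_help"),
]

# "Teaching" the system: store how often humans chose each reaction.
compendium = defaultdict(Counter)
for situation, reaction in observed_human_reactions:
    compendium[situation][reaction] += 1

def decide(situation):
    """The system 'decides' by imitating the most frequent human choice."""
    options = compendium.get(situation)
    if options is None:
        return "no_learned_behavior"
    return options.most_common(1)[0][0]

print(decide("user_is_rude"))    # -> "stay_polite" (majority wins)
print(decide("user_asks_help"))  # -> "give_help"
```

Note how the single bad example is outvoted here only because the good examples outnumber it; if the training log were dominated by abusive interactions, as in the Microsoft case mentioned above, the "most appropriate option" the system learns would be the abusive one.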

So it is humans (so far) who create AI, and therefore we "still" have the ability to limit it. I say "still" because the time may come when the systems themselves become individualized and dispense with human intervention entirely.
It is for this reason that these ethical principles are intended to be established.

The weakness is that they are only "principles", and perhaps there will be people who will not be governed by them.

Your friend, Juan.


Yes, you are right, and I share an idea a friend once expressed: principles are not laws; they are more like an expression of good wishes or favorable intentions toward something.


Exactly!

And in this case, that represents a great weakness, a vulnerable point.

Even if laws were made, there is no guarantee that they would be respected.


Dear @pedrobrito2004

Amazing comment, buddy.

It's quite scary that AI is learning from humans. Our past and present have proved many times that we can be considered "evil".

Imagine learning from your parents while seeing that they are capable of torturing people, killing them, surveilling, controlling. How would that impact your learning process?

Piotr


Sounds like a recipe for raising a psychopath...
Something like Norman Bates in Psycho?
