The AI Ten Commandments, Draft 1

I’ve been speaking with smart people about the “AI Ten Commandments.” Here is what we’ve come up with so far. Feel free to add to these.

This is a serious issue. I believe government regulation will always lag behind. Perhaps self-regulation can get us partway there, or at least farther than the US Supreme Court’s self-regulation of conflicts of interest and ethics.

I also want to sign up, as an investor, to a pledge that I will invest in and manage my portfolio companies according to certain AI principles and constraints. I want to be seen publicly making that pledge, and I would invite a certification body to confirm that I and my portfolio companies are doing the things we pledged to do.

In this list, I am inspired by Isaac Asimov’s “Three Laws of Robotics,” which are absolutes but still flexible enough to encompass the broad range of potential interactions. They are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Some very fun literature has been written (and some less fun cinema filmed) interpreting and exploring the conflicts between these laws.

The AI Ten Commandments (so far) are below. They are, for the most part, technologically achievable today:

  1. No self-directed goals:

    Goals of AI should be explicitly given by humans, with no goals outside the scope of human intentions.
  2. No “genetic” modifications:

    AI should be prohibited from modifying its own source code and the rules provided by humans.
  3. No lying and deception:

    AI outputs should be identifiable as AI-generated (watermark, hash, etc.); a minimal provenance sketch follows this list.
  4. No discrimination:

    AI should not discriminate against any class of humans based on age, sex, race, etc.
    (Even better, AI should not discriminate to a greater degree than the processes it replaces. This parenthetical clause is the most difficult of the commandments: determining the baseline for comparison in things like lending bias or college-admissions bias is tricky. A toy baseline comparison follows this list.)
  5. Respect human privacy:

    Information available to AI should be constrained by data-privacy rules, and individual humans should be able to limit the use of their data in any AI model.
  6. No bad actors:

    AI tools should be revocable by their human creators so that a “usage review” can cut off bad actors who violate these policies (similar to Facebook content moderation or ankle bracelets on sex offenders). A minimal revocation sketch follows this list.
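
Commandment 3 is probably the most directly implementable today. Here is a minimal sketch of the “hash” variant: the operator tags each output with an HMAC so a verifier holding the key can later confirm it was machine-generated. Everything here, key and function names included, is illustrative rather than any real watermarking standard (and watermarking text itself, so the tag survives copy-paste, is considerably subtler):

```python
# Provenance sketch for commandment 3: tag each AI output with an HMAC so a
# verifier holding the key can confirm it came from this system.
import hashlib
import hmac

SECRET_KEY = b"model-operator-signing-key"  # hypothetical key held by the AI operator

def tag_output(text: str) -> dict:
    """Return the AI output together with a provenance tag."""
    digest = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"text": text, "provenance": digest}

def is_ai_generated(text: str, provenance: str) -> bool:
    """Check whether a (text, tag) pair was produced by tag_output."""
    expected = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, provenance)

tagged = tag_output("Here is a paragraph written by an AI.")
assert is_ai_generated(tagged["text"], tagged["provenance"])
```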
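
For commandment 4’s difficult parenthetical, here is a toy illustration of the baseline question using the “four-fifths rule” disparate-impact ratio from US employment law. All the approval numbers are invented for the example:

```python
# Toy baseline comparison for commandment 4: an AI lending model should not
# discriminate more than the human process it replaces. Numbers are made up.
def approval_rate(approved: int, applied: int) -> float:
    return approved / applied

def disparate_impact(rate_group_a: float, rate_group_b: float) -> float:
    """Ratio of group approval rates; below ~0.8 is the four-fifths red flag."""
    return rate_group_a / rate_group_b

# Hypothetical legacy (human) process vs. hypothetical AI replacement.
legacy = disparate_impact(approval_rate(60, 100), approval_rate(80, 100))  # 0.75
model = disparate_impact(approval_rate(70, 100), approval_rate(82, 100))   # ~0.85

print(f"legacy ratio: {legacy:.2f}, model ratio: {model:.2f}")
# The parenthetical's test: the AI must be no worse than the baseline.
assert model >= legacy, "AI discriminates more than the process it replaced"
```

Even this toy version shows the hard part: someone has to measure the legacy process, and those numbers are rarely recorded as cleanly as the model’s.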
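
And commandment 6 is essentially a kill switch per user. A minimal sketch, assuming access to the tool runs through revocable API keys (all names here are hypothetical):

```python
# Revocation sketch for commandment 6: every request passes through a revocable
# key, so a "usage review" can cut off one bad actor without touching others.
revoked_keys: set[str] = set()

def revoke(api_key: str) -> None:
    """Called by the usage-review process when a policy violation is found."""
    revoked_keys.add(api_key)

def handle_request(api_key: str, prompt: str) -> str:
    if api_key in revoked_keys:
        raise PermissionError("access revoked after usage review")
    return f"(model response to: {prompt!r})"

print(handle_request("key-123", "hello"))   # served normally
revoke("key-123")                           # usage review pulls the key
try:
    handle_request("key-123", "hello again")
except PermissionError as err:
    print(err)                              # access revoked after usage review
```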

More! To what should I agree? Spread it around.

One Response to “The AI Ten Commandments, Draft 1”

  1. pierrepowell Says:

    I think creating these potential commandments is a good thing. But as I get older, and hopefully wiser, I am constantly influenced by the leadership methodology I subscribe to, adaptive leadership, which I believe centers on the complexity of problems where not only are the solutions undefined, but so is the understanding of the problem itself.

    And, as leaders, it is difficult for us not to try to solve problems we don’t fully understand.

    Take Asimov’s first law: if a robot saw the potential for human beings to get hurt, it would have to act. How would it tackle problems like social justice or climate change, where inaction will eventually hurt us?

    My guess is that it would put itself into a repeating “do-loop” of second-guessing that would cause it to lock up, just as we do when facing competing commitments.

    That being said, I think running experiments is a good start… Action, Fail, Learn, Action is probably the way forward.
