09 August 2018 by Louise Crouch

AI: Where Are The Boundaries?

It’s undeniable: we are balancing on an AI tipping point, a precarious edge see-sawing somewhere between an AI dreamland and an AI nightmare. Are we headed for a world that is enhanced by Artificial Intelligence or destroyed by it? This powerful technology raises equally powerful questions: do we have a responsibility to shape AI’s development to be beneficial, not detrimental, to humanity? Cue a very serious call to action: we need a code of ethics.

Take the story of Dr Frankenstein and his monster: an experimental doctor who creates a grotesque creature over which he has no control. This is exactly the fear that many of us have, that leaders in AI will create something with a mind of its own, at the expense of us as human beings. But Frankenstein is a misleading example, because it suggests there will be a precise moment in time when ‘the monster’ comes alive and wreaks havoc whilst its creators watch helplessly, albeit guiltily. AI is continually being constructed and developed by humans, which surely means that we have sole responsibility for it and the part it plays in our lives.

As leaders in AI, Google have identified this responsibility and published ‘AI at Google: our principles’, a detailed rundown of the principles that will govern their research and development of AI technology. These are:

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.

Within this declaration, Google also outline the applications that they will not pursue:

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

And whilst I believe Google’s acceptance of responsibility is to be applauded, history has shown us that ethics do not usually lead from the front but tend to hang on the tailcoats of the latest technology. Ensuring fairness, justice and integrity in machine decision-making is paramount, and we can only hope that this is at the forefront of AI development. In the words of TechUK’s chief executive Julian David, “Accountability is key. The people behind the AI must be accountable.”

For now at least, let us worry more about the good Dr Frankenstein than the monster he created.

