Can Regulators deal with Artificial Intelligence?
15 November 2017 | Blog Post
Despite the proliferation of reports, articles and books on the risks and opportunities of Artificial Intelligence (A.I.), many people still associate A.I. with Skynet, the fictional artificial general intelligence system from The Terminator movie franchise.
This association fails to comprehend the growing everyday use of A.I. It is better to approach A.I. as former President Barack Obama did in an interview with Wired last year. Obama argued that A.I. could be divided into generalised A.I. (the Terminator scenario) and the more specialised variety (robotic surgeons, driverless cars and the like).
While we are a long way off generalised A.I. (despite the numerous Terminator reboots), specialised A.I. is becoming more embedded in the everyday lives of consumers, organisations and even governments - a process that, according to estimates, will add £630bn to the UK economy by 2035.
So the future seems bright for A.I. Or does it? There is a growing chorus of voices warning that A.I. will precipitate a regulatory crisis. According to a new series of articles on A.I. in The Guardian, A.I. experts believe regulators need to take the threat of A.I. seriously and ensure that safeguards are built into this potentially disruptive technology.
Currently, the safety and ethical dilemmas posed by A.I. lie exclusively within the compass of the Silicon Valley giants, which are locked in an arms race to produce the technology and do not share the same interests or concerns as national governments. This hands-off approach is reflected in the lack of standardised testing and the absence of a sector watchdog akin to the FCA (Financial Conduct Authority).
Furthermore, according to the article, there are issues not only with regulation but with the ethical and human biases embedded within A.I. systems. A.I. develops its knowledge through extensive datasets, which can carry the racial and gender biases of the people who assemble them. The most notable example was Tay, the Microsoft A.I. chatbot that was taken offline after it began posting offensive remarks.
So, what can regulators - and in turn insurers and risk managers - do? Regulators need an independent body tasked not only with funding research into A.I. but with creating a regulatory (RegTech) framework: one that encourages innovation while discouraging malpractice.