Playing games online? More than likely, your opponent is a bot rather than a human. Bots invaded entertainment many years ago and are now well embedded in business and industry, with problem-solving abilities akin to human thinking. Beneficiaries include emergency services, healthcare, defence, banking and finance, and public safety – even my online shopping is filled with chatbots suggesting I need a new handbag to match the dress I just dropped into the shopping cart.
Artificial Intelligence (AI) systems and tools use complex algorithms and machine learning to influence our daily decision making, even if we are not aware of it. Programmed well, they reduce human error and contribute to society in ways that include increasing the detection of crime, matching identities, enabling predictive maintenance, and processing vast quantities of data very quickly. AI is also pushing new boundaries with its capacity for intelligence gathering. AI systems can move, manipulate and identify patterns in big data, make decisions on our behalf and perform minor tasks, all of which contribute to customer satisfaction (and sometimes frustration), better decisions, business competitiveness and disruption.
But with these many benefits also come challenges. As AI systems become pervasive, the underlying data and coding that drive these tools are not always clearly visible or communicated – and because the tools reflect their developers' assumptions and worldview, they are never free from bias. The lack of transparency and accountability in AI systems is far less likely to be reported than their functionality (interested in knowing more about the trends in AI? Check out this article for an introduction).
Who is thinking about acceptable behavioural parameters for this type of intelligence, the ethical issues it raises, the privacy concerns, and the resulting societal impacts of AI?
There is clearly a strong need for new approaches to governance. Governance is more than a set of roles and responsibilities – it includes decision rights, controls and accountabilities, together with the processes to enable their execution. But in an environment of AI, the governance continuum has been disrupted. As with other technology-based disruptions, governance is not keeping pace with the AI industry.
Just as there is a need for continued research and innovation within the boundaries of ethical use, there is a need for conscious decisions about how to govern the use of data and machines by AI systems. Just because you cannot see a system does not mean it should operate outside a framework of decision rights, controls, regulation, accountabilities and a social conscience.
AI will continue to evolve, and society will adapt and respond. It is not unrealistic that children born today will never need a driver's licence because their cars will be driverless, or that, through the use of data, AI systems will find cures for diseases we currently believe to be terminal. But we must think about the governance we expect and how it will be implemented – otherwise we will end up with poorly designed, biased systems that fail to properly balance risk and reward.
Don’t miss any of the latest insights into the world of information and data – join our newsletter