As Malta's drive to become a hub for digital companies continues unabated, the development of AI, still in its infancy on the island, is a key area in which the Government is investing.
Growing at an expected annual rate of 33 per cent over the coming years, the global AI market is projected to reach $267 billion by 2027, although whether Malta is poised to take a slice of the pie remains to be seen.
Gege Gatt, CEO of the AI studio EBO.ai, recently acquired by leading IT services provider BMIT Technologies plc, leads a team spread over five countries focusing on AI in healthcare, financial services and iGaming. He has previously expressed his hopes that AI will allow humans to "focus on what really matters: work which thrives on emotional and human intelligence".
In a publication bringing together thought leaders who share their insight on the upcoming year through the perspective of their respective industries, he addresses the technology's impact on labour markets, democracy and economic justice.
Will robots take our jobs?
The impact of AI on employment has been much discussed and debated, but Mr Gatt is optimistic that countries can leverage its development to improve both job quality and overall wellbeing.
In the short term, the impacts are unlikely to be significant. Mr Gatt expects that most of the AI deployments this year will rely on human judgement when complex cognitive tasks are at play.
However, in the medium term, automation is likely to displace around 10 to 12 per cent of low-skill, linear and predictable jobs, according to the OECD.
Mr Gatt explains that when businesses adopt AI to automate production, employment is affected through three main thrusts.
First, new technologies lead to a direct substitution of jobs and tasks currently performed by employees (known as the ‘displacement effect’).
Second, there is a complementary increase in jobs and tasks necessary to use, run and supervise AI technology (known as the ‘skill-complementarity effect’).
Third, there is a demand effect both from lower prices and a general increase in disposable income in the economy due to higher productivity (known as ‘the productivity effect’).
“Because these thrusts are not simultaneous but linear and progressive, it is likely that those countries who adapt their education and economy to AI realities will not register unmanageable unemployment surges through job displacement since the automation of tasks will occur before the automation of jobs, and even the latter will occur over a period of time.”
He points to multiple studies showing that AI will create roughly the same number of jobs as it displaces, although he notes that this will place increased stress on the educational system to ensure it provides the right formative environment for the new roles to take shape.
“This opens a major public policy issue as it is imperative that Governments consider policies aimed at providing the necessary social security measures for affected workers while investing heavily in the rapid development of the necessary skills to take advantage of the new jobs created.”
Fake news and how to stop it
Turning to the impact of AI on information, Mr Gatt expresses concern that the role of choosing and filtering stories has moved “from the hands of an editor to the algorithmic muscle of the channel we most commonly use”.
“False news is a significant problem in a democratic society which is polarised through the aggregation of people and ideas with analogous interests. As Cass Sunstein, who served in the Obama administration, put it: ‘it is precisely the people most likely to filter out opposing views who most need to hear them’.”
He acknowledges that present online revenue models don’t help. “The Internet has commoditised most services which we were previously ready to pay for,” he says, adding that “paying for a newspaper is largely unheard of” nowadays.
“Instead, content is prioritised based on how many times it is clicked, and revenue subsequently flows where eyeballs go. Truth is often the casualty.”
He is concerned that democracy can be eroded by a manipulative mechanism which rewards falsehoods and deceptions that spread fast through link bait.
“Misinformation therefore poses a two-fold threat to democracy,” he says. “It leaves citizens ill-informed, and it undermines trust and engagement with accurate content.”
He believes the solution lies in the development of fact-checking services dedicated to examining the facts and claims behind published content. These, however, must be transparent and responsible.
“Transparent AI makes our underlying values explicit and encourages companies to take responsibility for AI-based decisions. Consequently, responsible AI is AI that has all the ethical considerations in place and is aligned with the core principles of the technology provider, the requirements of the law, societal norms and user expectations.”
This does not mean that indecipherable core algorithms need to be published, but rather that a clear and easily understood explanation of how a decision is made by an AI model should be provided.
However, since AI, like science, is “a human endeavour guided by values”, Mr Gatt believes that it can never be neutral and calling it so is “dangerous, as it offers a convenient route to escape from responsibility”.
It is for this reason that he calls for frameworks which provide human accountability (and agency) for the outcomes of the technology delivered or adopted, implying that human beings will develop, deploy and use AI systems, and that they ought to do so in a responsible, ethical and lawful manner.
Mr Gatt says that Saudi Arabia’s stunt of granting the robot Sophia legal personality seems “absurd”.
“AI stakeholders should not be able to evade responsibility for legal or ethical faults by ascribing pseudo legal-personality to their creations, irrespective of their degree of AI autonomy. Accountability should be construed to always keep humans in a position to modify, monitor or control AI systems, and subsequently be accountable for them and their actions.”
Finally, Mr Gatt turns to trust and its fundamental place in the relationships we establish in everyday life.
“We trust our friends, the scientific community and perhaps even politicians. ‘Trusting’ an inanimate object like AI is something of a paradox, as it has the effect of anthropomorphising it, ascribing to it human and moral sentiment. Yet ‘trusting’ the AI tools which are part of our world is a precursor to their use and acceptance.”
In the case of AI, “The features, functions and outcomes of [an AI] system make it trustworthy for achieving a goal, and thus are objective reasons to trust that system.”
“But beyond that, citizens must have the certainty that compliance with fundamental rights is ensured and that AI systems will be used only for specifically defined tasks and that individuals remain in control of their data. In turn this allows users to create subjective reasons to trust an AI tool as it doesn’t merely surpass the benchmark but provides inherent benefits such as speed or improved quality.”
A new world order
Mr Gatt also highlights problems that can arise when AI is in the hands of companies alone.
“The revenues from AI might not be redistributed equitably, which in turn opens a new debate about social divide and the marginalisation of the poor.”
Recalling JFK’s iconic speech, he says that “a new world order is already emerging: those countries who are able to leverage AI technology to accelerate societal and economic growth; and those who are unable to do so.”
“The latter revert to outdated generalised political systems with stunted economic outcomes, whilst the rest race ahead. The countries who are left behind will be those who experience higher levels of inequality, laggard healthcare systems and economic dismay.”
“However,” he concludes, “our discussion on AI trust, reliability and ethics should be grounded in the here and now rather than on imaginary future technology. It should be grounded on the technical and social developments which we can pragmatically control and influence.”
“In turn this also creates a perspective for a moral economy which wields the power of AI for good, fairness and justice.”
Mr Gatt’s insights were featured in Next12, a publication produced by Seed, a research-driven consultancy firm, which brings together 19 industry leaders who share their vision for the next 12 months for Malta’s economy.