Although artificial intelligence has been the subject of academic research since the 1950s and has been used commercially in some industries for decades, its widespread use is still in its infancy across most sectors.

The rapid adoption of this technology, coupled with the unique privacy, security and accountability issues it raises, has created an opening for efforts to ensure that its use is ethical and legal.
On the specialised website ABA Journal, authors Brenda Leong and Patrick Hall outline five things you should know about artificial intelligence:
1. Artificial intelligence is probabilistic, complex and dynamic. Machine learning algorithms are highly complex, deriving billions of rules from datasets and applying those rules to arrive at an output recommendation; that output is a probability rather than a certainty (see the first sketch after this list).
2. Make transparency an actionable priority. The complexity of AI systems makes transparency difficult to achieve, but organisations implementing AI can be held accountable if they are unable to provide certain information about their decision-making process (the second sketch after this list shows one simple form such information can take).
3. Bias is a significant problem, but not the only one. AI systems learn by analysing billions of data points collected from the real world. This data can be numeric, categorical – such as gender and education level – or image-based, such as photos or videos. Because most systems are trained on data generated by existing human systems, the biases that permeate our culture also permeate the data. Thus, there can be no such thing as an entirely unbiased AI system.
Data privacy, information security, product liability and third-party sharing, as well as performance and transparency issues, are equally critical.
4. AI system performance is not limited to accuracy. While the quality and value of an AI system are largely judged by its accuracy, accuracy alone cannot capture the wide range of risks associated with the technology, and focusing too narrowly on it risks overlooking a system's transparency, fairness, privacy and security (see the third sketch after this list).
Data scientists and lawyers, for example, should work together to create more robust ways of verifying AI systems that cover the full spectrum of real-world performance and potential harms, whether from security threats or privacy shortfalls.
5. The hard work has just begun. Most organisations using AI technology still need to adopt policies governing its development and use, along with guidance to ensure that their systems comply with regulations.
Some researchers, practitioners, journalists, activists and lawyers have already begun this work to mitigate the risks and liabilities posed by current AI systems. Companies are beginning to define and implement AI principles and to make serious efforts towards diversity and inclusion in their technology teams.
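To ground the first point, here is a minimal sketch, not drawn from the article, using scikit-learn on synthetic data (the features and labels are invented). It shows what "probabilistic" means in practice: a trained model returns a probability for each possible outcome rather than a fixed yes/no rule.

```python
# A minimal sketch of point 1: a trained model yields probabilities,
# not certainties. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Invented tabular features and labels standing in for real training data
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The output is a probability per class, not a deterministic rule:
# the same system can be confident on one case and uncertain on the next.
print(model.predict_proba(X[:3]))
```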
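The second point can be grounded the same way. The sketch below, again an invented illustration (the feature names are hypothetical, and a simple linear model is assumed), shows one concrete form that "information about the decision-making process" can take: a per-feature breakdown of a single decision.

```python
# A sketch of point 2: for a linear model, each decision can be reported
# feature by feature. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
feature_names = ["income", "tenure", "age", "score"]  # illustrative only

X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, 0.5, 0.0, -0.5]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# In a logistic regression, each feature contributes coefficient * value
# to the log-odds of the decision; listing these contributions is one
# simple, disclosable account of how a single decision was reached.
case = X[0]
for name, coef, value in zip(feature_names, model.coef_[0], case):
    print(f"{name:>7}: contribution {coef * value:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```

More complex model classes need dedicated explanation tooling, which is part of why the authors call transparency an actionable priority rather than a given.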
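Finally, points 3 and 4 can be illustrated together. In the sketch below the predictions are simulated (the group labels and error rates are entirely made up) so that overall accuracy looks respectable while one group suffers a much higher false positive rate: exactly the kind of harm an accuracy-only evaluation misses.

```python
# A sketch of points 3 and 4: aggregate accuracy can hide unequal error
# rates across groups. Groups, outcomes and error rates are invented.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000

group = rng.integers(0, 2, size=n)    # hypothetical protected attribute
y_true = rng.integers(0, 2, size=n)   # hypothetical true outcomes

# Simulate a model that errs 25% of the time for group 1 and 10% for
# group 0, mimicking the effect of training on biased data.
flip = rng.random(n) < np.where(group == 1, 0.25, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"overall accuracy: {(y_pred == y_true).mean():.2%}")

for g in (0, 1):
    mask = (group == g) & (y_true == 0)
    fpr = (y_pred[mask] == 1).mean()  # false positive rate within the group
    print(f"group {g} false positive rate: {fpr:.2%}")
```

Reporting such per-group metrics alongside accuracy is one example of the more robust verification the authors call for.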