The AI and Data Protection Risk Assessment Toolkit, available in beta, draws upon the regulator’s previously published guidance on AI, as well as other publications provided by the Alan Turing Institute.
The toolkit contains risk statements designed to help organisations that process personal data understand the implications this can have for the rights of individuals. It also provides suggested best practices that companies can put in place to manage and mitigate those risks and ensure they comply with data protection law.
According to the ICO, the toolkit is based on an auditing framework developed by its internal assurance and investigation teams, following calls from industry leaders back in 2019.
The framework provides a clear methodology for auditing AI applications and ensuring they process personal data in compliance with the law. The ICO said that organisations using AI to process personal data can gain a high level of assurance that they are complying with data protection legislation by using its toolkit.
“We are presenting this toolkit as a beta version and it follows on from the successful launch of the alpha version in March 2021,” said Alister Pearson, the ICO’s Senior Policy Officer for Technology and Innovation Service. “We are grateful for the feedback we received on the alpha version. We are now looking to start the next stage of the development of this toolkit.
“We will continue to engage with stakeholders to help us achieve our goal of producing a product that delivers real-world value for people working in the AI space. We plan to release the final version of the toolkit in December 2021.”
The ICO has urged anyone interested in testing the toolkit on a live AI application to get in contact with the regulator via email (AI@ico.org.uk).