Responsible AI Governance

The development and widespread use of AI bring both risks and new business opportunities in the private and public sectors alike. Proactive preparation for risks and threats helps organisations implement their strategies more effectively and adapt flexibly to changing operating environments.

Successful adoption of AI can be supported by creating an efficient, organisation-wide governance model that focuses on identifying and minimising risks through various controls. The most significant challenges in using AI relate to privacy and transparency, bias in automated decision-making, copyright issues, manipulation and security threats. The aim of these controls is to ensure that AI is aligned with the organisation's goals and is developed and used responsibly, ethically and legally.

Legislation applicable to artificial intelligence

The EU has been preparing AI-specific regulation for quite some time. In December 2023, the Council of the European Union, the European Parliament and the Commission reached an agreement on the content of the AI Regulation. The purpose of the regulation is to ensure the safe use of AI in the EU and compliance with copyright legislation. In addition to the AI Regulation, the provisions of the EU General Data Protection Regulation (GDPR) apply to AI, particularly the principles of data minimisation and transparency and the rights of data subjects. The GDPR's requirement to carry out an impact assessment in certain situations is also emphasised when adopting AI applications. Beyond EU-level regulation, national legislation, such as the Digital Services and Equality Acts, also applies to the use of AI. Good governance and public service ethics must be taken into account as well.

Our approach to responsible AI governance

PwC's approach provides a comprehensive and practical solution for responsible AI governance, enabling organisations to harness the potential of AI safely and ethically. It is based on a comprehensive management framework, summarised in the responsible AI governance model below.

Responsible AI governance model

What does artificial intelligence mean for people working in different functions?

A functioning governance model requires commitment from management, collaboration among different experts and resources from various areas of expertise. Below are some considerations for different organisational functions when developing AI or implementing AI applications. It is important to adapt these considerations to each organisation's specific context and requirements.

Organisation's management

The role of the organisation's management is to ensure that the development and implementation of artificial intelligence (AI) are aligned with the organisation's strategy and supported by sufficient resources, managed risks, stakeholder engagement, effective communication and training, and due consideration of ethical and legal aspects.

Business

It is important to understand how artificial intelligence can support business needs, such as automation, prediction, and improving customer experience. Businesses should ensure the ethical use of AI and manage the risks and impacts associated with its implementation. They should also take care of change management and ensure that employees are aware of the benefits of AI and can utilise it safely and effectively.

Chief Information Security Officer (CISO)

The CISO should ensure that the development and use of AI occur safely and responsibly. They are responsible for the organisation's level of information security, vulnerability management, data protection and privacy, staff training, and compliance with relevant rules and standards.

Chief Information Officer (CIO)

The role of the CIO is to ensure that the development and implementation of AI are aligned with strategic objectives and supported by sufficient resources, adequate levels of information security and data protection, close collaboration, and continuous monitoring of progress.

Data Protection Officer (DPO)

The DPO should ensure that the development and implementation of AI comply with data protection legislation. They are responsible for ensuring that the organisation's activities align with the General Data Protection Regulation (GDPR) requirements.

Compliance and legal functions

The compliance function should ensure that the development and implementation of AI comply with applicable legislation and the organisation's agreed-upon rules. They are responsible for compliance with laws, creating internal guidelines, contributing to risk management, reporting and documentation, as well as audits and inspections.

Internal Auditor

The internal auditor should develop new audit methods to verify the compliance of AI and AI-driven processes. This requires training and familiarity with new tools.

Chief Financial Officer (CFO) and Controllers

The CFO should consider the following aspects in AI development and implementation: costs and financial benefits, business strategy, risk management, financial forecasting and analytics, as well as regulations and compliance.

Users of AI applications

All employees who use AI applications should follow the guidelines for their use, understand the limitations of AI and recognise that content generated by an AI application may be inaccurate. Users should report any problems and bugs they encounter.

How we can help you with responsible AI governance

Examples of our services:

  • Assessing the current state and conducting gap analyses
  • Defining, developing, and documenting governance models
  • Determining responsibilities and roles, as well as planning governance processes
  • Implementing the governance model throughout your organisation
  • Defining monitoring and tracking mechanisms
  • Conducting risk analyses and impact assessments
  • Drafting and negotiating contracts
  • Training and supporting management and staff
  • Testing the quality and reliability of data used by artificial intelligence
  • Data-driven evaluation of the reliability of AI models
  • Providing continuous expert support

Why PwC?

We help organisations grow, operate efficiently and responsibly, and report reliably in a constantly changing environment – whether the organisation is a listed company, a family-owned business, a startup, a public sector actor or a non-profit organisation.

By investing in good governance of AI development and use, an organisation can utilise the full potential of AI without compromising the ethics and safety of its activities. We offer comprehensive and tailored solutions that meet the specific needs of each organisation. As our client, you are supported by our extensive expertise covering legal, risk management, data protection and information security services – globally, when needed.

Contact us

Timo Takalo

Partner, Cybersecurity & Privacy Assurance Services, PwC Finland

Tel: +358 (0)20 787 7489

Sanna Oinonen

Risk Assurance Services, PwC Finland

Tel: +358 (0)20 787 8894

Anna Tuominen

Risk Assurance Services, PwC Finland

Tel: +358 (0)20 787 7836

Elina Kumpulainen

Partner, Legal, PwC Finland

Tel: +358 (0)20 787 7907

Seija Vartiainen

Senior Manager, Legal, PwC Finland

Tel: +358 (0)20 787 7483
