Algorithms in the justice system - the European Ethical Charter

The adoption of artificial intelligence (AI), in combination with other disruptive technologies such as big data and cloud computing, offers huge potential for the justice system; however, introducing AI into the judicial system also raises a number of issues and challenges. These include automation bias (the tendency of humans to defer to a decision made by a machine); a lack of transparency (owing to the ‘black box’ nature of much AI); inherent biases within data sets; and a concern that human rights are being gradually eroded and accountability lost, with privately developed code becoming equivalent to binding law and regulation.

Ethical Charter

With this in mind, the Council of Europe has adopted its first European Ethical Charter on the use of AI in judicial systems[1].

The Charter was developed by the European Commission for the Efficiency of Justice (CEPEJ).

It provides a framework of principles to guide policymakers, legislators and justice professionals in the implementation and operation of AI in national judicial processes.

The CEPEJ takes the view that whilst the application of AI in the field of justice can help to improve the efficiency and quality of justice, it must be implemented in a responsible manner. In particular, the implementation of AI must comply with the fundamental rights guaranteed in the European Convention on Human Rights (ECHR) and the Council of Europe Convention on the Protection of Personal Data (the Conventions). Accordingly, AI must operate “in the service of the general interest” and its use must respect individual rights.

Core Principles

To achieve this, the Charter sets out five core principles to be respected in the field of AI and justice:

  • Respect of fundamental rights: the design and implementation of AI tools and services should be compatible with fundamental rights;
  • Non-discrimination: discrimination between individuals or groups of individuals should be prevented;
  • Quality and security: models should be developed with short design cycles, based on existing and enhanced ethical safeguards, with development teams drawn widely from justice system professionals (e.g. judges, prosecutors, lawyers) and academia; data sources should be certified, and the models and algorithms executed in a secure technological environment;
  • Transparency, impartiality and fairness: data processing methods should be accessible and understandable, and should enable external auditing;
  • “Under user control”: instead of a prescriptive approach, users should be ‘informed actors’ who are in control of their choices.

The CEPEJ believes that compliance with these principles is fundamental to the processing of judicial decisions and data by algorithms, and the use made of them. Appendix I to the Charter sets out an in-depth study of current use cases for AI in judicial systems, notably AI applications processing judicial decisions and data.

Speaking recently[2], Stéphane Leyenberger, Secretary to the CEPEJ, stated that the next step is for the CEPEJ to engage with private companies on a process whereby AI developers and technology companies could self-certify their products and services as compliant with the Charter’s principles.


The principles in the Charter will assist those involved in the development and testing of AI in judicial systems, notably AI applications processing judicial decisions and data (machine learning or any other methods deriving from data science). Principle 1 will be particularly relevant, with a strong preference to be given to ethical-by-design approaches: from the design and learning phases onward, rules prohibiting direct or indirect violations of the fundamental values protected by the Conventions should be fully integrated into the AI itself. Appendix IV of the Charter sets out a checklist for evaluating proposed processing methods to ensure compatibility with the Charter.


[2] The Law Society, Technology and Law Policy Commission evidence session
