
Innovation Law and Regulation 101: Silicon Justice

Foreword


Artificial Intelligence systems are set to be the next revolution, forever changing human lives. This new phenomenon and its many effects will bring great changes to our society, which is why regulation is the first step toward ethical development. Unregulated use of these technologies could give rise to negative consequences such as discriminatory practices and disregard for privacy rights. The challenges posed by Artificial Intelligence urge legislators and experts to protect citizens and consumers: regulation becomes a priority if humans wish to shield themselves from unethical and abusive conduct. This series explores new technologies such as Artificial Intelligence systems and their possible regulation through legal tools. To do so, it begins with an explanation of the rise of new technologies and delves into the complicated question of whether machines can be considered intelligent. Subsequently, the interplay between Artificial Intelligence and different branches of law is analyzed. The first chapter of this series explored the possibility of granting AI systems legal personality and the main legislative steps taken in the EU in that direction. Moving into the realm of civil law, the second chapter considered the current debate on the liability regime governing the use and production of AI. The third chapter discussed the influence of AI on contract law and the formation of smart contracts. The present, fourth chapter examines the use of AI in criminal law and the administration of justice, focusing on both the positive and negative implications of its use. The fifth chapter will be dedicated to the use of Artificial Intelligence by public sector bodies. Finally, the complicated relationship between data protection and AI will be discussed in light of the EU General Data Protection Regulation.

  1. Innovation Law and Regulation 101: Recognizing Silicon Minds

  2. Innovation Law and Regulation 101: AI on Trial, Blaming the Byte

  3. Innovation Law and Regulation 101: Navigating Smart Contracts

  4. Innovation Law and Regulation 101: Silicon Justice

  5. Innovation Law and Regulation 101: AI as Civil Servants

  6. Innovation Law and Regulation 101: Defending Data from Silicon Eyes


Innovation Law and Regulation 101: Silicon Justice


Artificial Intelligence is considered the revolution of our time, with far-reaching consequences expected to change how people live and carry out their tasks. Richard Susskind, a British professor of law who specializes in the relationship between computers and the law, anticipated years ago the changes that these systems would bring for legal professionals (Briscoe & Gardner, 2017). It therefore comes as no surprise that AI is finding its way into courtrooms around the world and, in many instances, has already found its place. Criminal justice is in fact one of the many fields in which it is expected to establish itself, with great but worrying consequences. This leads to the current debate on whether Artificial Intelligence can have a place in court, either alongside the judge or replacing the judge altogether.


This article will consider the above-mentioned debate with the aim of illustrating the many effects that the use of Artificial Intelligence might have on the criminal justice system. The first part considers the use of machine learning systems in retributive criminal justice, illustrating the main problems as well as the main advantages linked to their use. The COMPAS system is then examined as a practical example of the implications arising from the intersection of criminal justice and advanced technology. Subsequently, the essay focuses on the possible employment of Artificial Intelligence in preventive criminal justice, where its use is expected to receive a more positive reception and raise fewer issues. The following part sheds light on the possibility of replacing the judge with robotic machines and the effect this might have on fundamental human rights. The final part presents conclusions on the matter.



Figure 1: Artificial Intelligence Definition (Fishel, 2023).

Retributive Justice and Artificial Intelligence

Criminal justice is about punishing those who have committed actions that harm others and are thus considered crimes. Punishing is not only about inflicting a penalty on the person guilty of the crime; it is also about protecting citizens from harmful individuals, striving to re-educate the offender, and deterring future crimes (Custers, 2022). The right conviction is a fair punishment that does not amount to degrading treatment but rather teaches the person why his or her action was wrong. The Italian Constitution clearly states that punishments cannot be degrading and should aim to re-educate the convicted (Italian Constitution, 1947). However, deciding which punishment to impose on guilty individuals is not an easy task. Judges have to take into account multiple factors such as past convictions, behavior, intent, and dangerousness (Ostrom, Ostrom & Kleiman, 2004). Because of the delicacy and importance of the matter, one might wonder whether judges can really be impartial before offenders and convict them in an objective fashion. The doubts about the impartiality, or lack thereof, of judges open the door for Artificial Intelligence systems to enter the courtroom. If deciding on a conviction really amounts to reasoning over a list of factors, an AI-run system could offer interesting solutions.


How Decisions are Taken

Artificial Intelligence systems can analyze multiple factors and propose a solution for a given scenario. To do so, the system must be fed large amounts of data: not only information about the offender, such as his or her behavior, age, or the specific crime committed, but also data on past convictions in similar cases. By weighing all this information, the system is expected to suggest the most suitable conviction for the individual (Custers, 2022). The output would offer insight into which punishment is most appropriate, e.g., a pecuniary sanction, imprisonment, or community service. Furthermore, such a system could estimate the likelihood of recidivism in groups of people who share the crime committed, personality traits, or personal characteristics. By showing the probability of recidivism, the system could help prevent new crimes by past offenders and tackle the most problematic aspects by suggesting targeted measures, such as educational programs or training aimed at specific cognitive skills or at reducing negative tendencies (Custers, 2022). Professor Neil Hutton summarized the ideal sentencing algorithm as a program with “a set of rules describing the criteria which should be taken into account and the method through which account is to be taken”, and “an unambiguous, formally specified aim or set of aims for punishment, and a rational set of rules determining how appropriate punishments are to be allocated to particular cases” (Hutton, 1995, p. 558).
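
To make this concrete, the sketch below shows in the simplest possible terms how such a data-driven risk estimate could be produced. It is a purely illustrative toy: the features, training data, and model choice are all invented for demonstration and do not represent any system discussed in this article.

```python
# Illustrative sketch only: a toy recidivism-risk model of the kind the
# literature describes. Features, data, and labels are synthetic and
# bear no relation to any deployed system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per offender: [age, prior_convictions, severity_score]
X_train = np.array([
    [19, 3, 7],
    [45, 0, 2],
    [30, 1, 4],
    [22, 5, 8],
    [52, 2, 3],
    [27, 0, 5],
])
# 1 = re-offended within two years, 0 = did not (invented labels)
y_train = np.array([1, 0, 0, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# The output for a new defendant is a probability, not a verdict.
defendant = np.array([[24, 2, 6]])
risk = model.predict_proba(defendant)[0, 1]
print(f"Estimated recidivism probability: {risk:.2f}")
```

Even this toy makes Hutton's two requirements visible: the feature list plays the role of the “criteria which should be taken into account”, while the trained model supplies the “rules determining how appropriate punishments are to be allocated”.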



Figure 2: Robot Judges and Human Judges (Unknown, 2021).

Impartiality: Dream or Reality?

Despite being able to emulate human thinking, Artificial Intelligence systems are not human, and this is both an advantage and a disadvantage. As humans, judges are subject to what can only be called human nature. They ought to be impartial, but their impartiality can never be fully verified. Whenever a judge passes a criminal sentence, the law is the guiding light, but the judge's interpretation is the determining factor. The extensive work of Professor Charles Gardner Geyh, who studies civil procedure law at Indiana University, illustrates that impartiality in criminal procedure is largely illusory and that perfect impartiality is impossible to attain (Geyh, 2014). In the United States, more than 30% of incarcerated people are African American men, despite African Americans making up only 13% of the country's population (Kovera, 2019). These statistics shed light on a worrying scenario in which bias likely enters the courtroom alongside the judges. The question, therefore, is whether we should be satisfied with a permanently flawed notion of judicial impartiality. Supporters of Artificial Intelligence argue that these new technologies could bring more impartiality into the courtroom and, as a consequence, more fairness in criminal sentencing (Maas et al., 2020). The fact that AI systems analyze information objectively offers an important step toward ensuring a fair and impartial trial before the law. Recent studies demonstrate the high accuracy of Artificial Intelligence algorithms and offer a positive outlook on the use of these new technologies (Bagaric et al., 2020).


Closely correlated with impartiality is legal certainty. The more impartial the judge, the more likely it is that the principle of legal certainty is respected. Legal certainty can be understood as the possibility for the average citizen to predict the legal consequences of certain actions. According to the principle, the law should be known, clear, precise, stable, certain, and predictable (Van Meerbeeck, 2016). In criminal procedure law, this entails that, prior to the commission of a crime, one should be able to understand what punishment a judge might impose. This is of the utmost importance given that most legal frameworks provide that, for a person to be convicted of a crime, the law must have stated in advance that the action qualifies as a crime and what the related punishment is (Article 7, ECHR). Legal certainty can therefore be seen as a shield protecting individuals against arbitrary judicial decisions. Unlike judges, Artificial Intelligence systems do not have personal views: when proposing a criminal sentence, they do so based on a set of data and rules. Judges, by contrast, decide on the conviction of an offender based on their interpretation of the law and the personal background they bring with them. Even the most objective judge is subject to unconscious bias rooted in personal experience (Spencer et al., 2016). As a result, it is unlikely that the same situations are always treated the same way, seriously endangering the principle of legal certainty. Algorithms, however, would treat identical situations identically, offering a transparent and predictable outcome for similar criminal proceedings (Hutton, 1995).



Figure 3: Principle of Fair Trial (Council of Europe, 2020).

Efficiency of Artificial Intelligence Systems and Efficiency in Courtrooms

Another important aspect of Artificial Intelligence systems is their efficiency. They can analyze vast amounts of data in a matter of seconds, with extraordinary accuracy (Ruffolo, 2020). This was shown in recent competitions in which AI systems outperformed highly skilled lawyers in analyzing a large number of non-disclosure agreements (Höppner & Streatfeild, 2023). The same results could be achieved in criminal courtrooms, where judges are expected to review past judicial reasoning and decisions dating back hundreds of years and to carefully examine the many provisions laid down by legislators. A recent study illustrated that algorithms could predict the outcomes of criminal cases with an accuracy of 97% (Collenette et al., 2023). Judges, moreover, are human and as such have limited capabilities; their judgment may be clouded by fatigue or personal issues that negatively affect their decisions. A machine, on the other hand, never feels tired or stressed, so its performance remains consistent throughout the entire criminal proceedings. Improved efficiency could also mean a faster conclusion of criminal trials: Artificial Intelligence systems do not read as humans do but scan data much more quickly and effectively. A faster pace of criminal proceedings could benefit victims and their families by ensuring that justice is rendered as soon as technically possible, and it is in the interest of all citizens to be protected in a timely fashion from possible offenders still facing trial. Article 6 of the European Convention on Human Rights provides that a trial should be concluded within a reasonable time (Article 6, ECHR). A reasonable duration also protects the accused, who should not stand trial for longer than necessary (Trechsel, 2006). The lack of resources and the demanding tasks placed on the justice system cause unreasonably long proceedings: in Italy, for instance, the expected time to conclude a criminal trial is approximately three and a half years (Esposito et al., 2014). The need for change is thus as urgent as ever to ensure fair protection of all the interests involved.


Despite the promising results of Artificial Intelligence systems, a number of problems remain in the debate surrounding their use. Since algorithms need access to data in order to make decisions, the reliability and quality of the data used to train them must be ensured and monitored. If past judicial decisions turn out to be biased or discriminatory, that data should not be used to feed the system; otherwise, the algorithm would treat the bias as a rule. The concern is that software, though objective in nature, may be trained on incorrect or biased data, spreading bias even more widely through criminal proceedings. Some scholars argue that bias in such systems is inevitable, given that they need access to real judicial decisions, but that this bias can be more easily tracked and eliminated through careful training (Barabas, 2020). However, this is possible only with heightened attention to the development and use of Artificial Intelligence systems, which is not yet a shared pillar of legal frameworks around the world. Until that moment comes, algorithms might cause more harm than good.
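
The mechanism is easy to demonstrate. In the toy sketch below, built entirely on synthetic data invented for illustration, past decisions were systematically harsher toward one group; a model trained on those decisions learns the bias as if it were a rule.

```python
# Illustrative sketch only: how biased training labels become a learned
# "rule". Suppose past decisions (label 1 = harsh outcome) penalized
# members of group B regardless of the severity of their conduct.
from sklearn.tree import DecisionTreeClassifier

# Features: [group (0 = A, 1 = B), severity_of_conduct (1-10)]
X_train = [
    [0, 8], [0, 9], [0, 2], [0, 3],  # group A: outcome tracks conduct
    [1, 2], [1, 3], [1, 8], [1, 9],  # group B: harsh outcome regardless
]
y_train = [1, 1, 0, 0, 1, 1, 1, 1]   # biased historical decisions

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Two defendants with identical minor conduct, different group membership:
print(model.predict([[0, 2], [1, 2]]))  # expected: [0 1] - the bias is learned
```

This same traceability, however, is what the cited scholars point to: because the learned rule is inspectable in a simple model like this one, the bias can in principle be detected and corrected during training.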



Figure 4: Algorithmic Decision-Making Process (Pohrebniak, 2022).

Algorithms are often described as black boxes because of their lack of transparency. Given the complexity of how these systems function, even skilled professionals have difficulty understanding how their decisions are made. This means that courtroom decisions based on them would also lack transparency, compromising the principle of legal certainty, which requires that the law, including judicial decisions, be clear and understandable (Van Meerbeeck, 2016). If Artificial Intelligence systems are really to have a seat in the courtroom, their transparency must be enhanced so that everyone involved in the criminal proceedings can understand how they function. Only in this way can the principle of legal certainty truly be upheld. An algorithm that is neither accessible nor understandable could be perceived as the most dangerous of arbitrary decision-makers and put a permanent stop to the development of these systems (Ryberg & Roberts, 2022).


A Practical Example: The COMPAS System

COMPAS is an algorithmic system designed to predict the probability of recidivism in individuals based on information about their age, ethnicity, personal background, education, tendencies, behavior, past convictions, and much more (Freeman, 2016). The system is particularly popular in the United States, where it is said to help judges make decisions on the conviction of offenders (Beriain, 2018). However, a decision made by a judge with the help of COMPAS was challenged by Eric Loomis. Mr. Loomis was sentenced to six years of imprisonment and argued that the harsh sentence he received was based on the results of the COMPAS assessment (Beriain, 2018). The answers he had given produced a high recidivism risk score which, Mr. Loomis argued, influenced the judge's decision (Beriain, 2018). Mr. Loomis appealed, arguing that his right to due process had been compromised: he had been refused access to the algorithmic decision and denied what he argued was his right to an explanation. His lawyers also based the appeal on the fact that they had been unable to build an effective defense because of the use of an undisclosed algorithm. Furthermore, Mr. Loomis complained that the COMPAS algorithm makes decisions based on group characteristics rather than on accurate, individual circumstances. The Supreme Court of Wisconsin nevertheless confirmed the first-instance decision, holding that the defendant had had the chance to know how the algorithm worked and that his right to an explanation had not been violated, because he was the one who had provided the data in the first place (Freeman, 2016). Consequently, the Supreme Court of Wisconsin rejected the claim that the use of the COMPAS algorithm had violated the defendant's right to a fair trial and due process. The Court accepted that the algorithm based its outcomes on group statistics rather than individual scenarios, but held that this was not enough to endanger Mr. Loomis' right to fair and individualized sentencing, as the score was not the sole determining factor in his conviction (Freeman, 2016). The appeal was thus rejected on the ground that the COMPAS result was only one of multiple factors taken into consideration by the human judge.



Figure 5: COMPAS Algorithm Results (ProPublica, 2018).

Despite the clear advantages that Artificial Intelligence systems would bring to the courtroom, the fact that full transparency and predictability cannot be ensured reduces the likelihood of these new technologies actually being used during criminal proceedings. The rule of law provides that a person can be convicted only when guilty beyond any reasonable doubt (Kitai, 2003). Criminal proceedings, where people's lives are at stake, cannot be governed by probability and merely near-perfect accuracy. One could argue that human judges can never be completely sure of their sentences and convictions either, making a strong case for also accepting fallibility in Artificial Intelligence algorithms. The aim of scholars and legislators, however, should be to enhance the tools used in criminal procedure and ensure better and fairer trials.


Preventive Justice: A Fertile Soil for Artificial Intelligence?

Preventive justice is an important field of criminal law, given that preventing crimes should be the pillar of criminal law policies. In preventive justice, unlike in punitive justice, the judge is not asked to convict a person for his or her actions but rather to take the necessary measures before actions are committed, so as to avoid crimes and punishments altogether (Cole, 2015). In this realm, probabilities are accepted as intrinsic to preventive measures, as it is never certain whether a future action will take place. The law does not require that such measures be proven necessary beyond any reasonable doubt, giving preventive policies a much wider scope of application and far more wiggle room. Artificial Intelligence algorithms might therefore be of great use here, bringing their benefits while leaving the concern for certainty behind.



Figure 6: Wisconsin Supreme Court (Bauer, n.d.).

An individual standing trial may be subject to preventive measures applied to prevent the person from committing other crimes, fleeing, or destroying evidence vital to the case. While in retributive justice the ubi ("place"), quomodo ("way"), and quando ("time") of the criminal action are fundamental elements that must be verified before conviction, placing the burden of proof on the prosecution, the same does not apply to preventive measures (Caianiello, 2021). All that is needed for preventive measures to be imposed is a risk of harmful behavior. The judge has more freedom in deciding which factors to use as criteria in his or her decision-making. The already mentioned COMPAS system would be beneficial in this context, because the risk assessment of the individual would not affect the final conviction but only the preventive measures, which would be lifted as soon as the criminal proceedings come to an end. Here, the same concerns regarding the role of judges arise: their lack of impartiality could lead to biased decisions that ultimately limit the freedom of yet-to-be-convicted people, and their human nature makes it impossible for them to analyze carefully and promptly the vast amount of information concerning every single defendant. For these reasons, fast and efficient algorithms could make decisions based on the same factors a judge usually relies on, but without mistakes or partiality.


The need to prevent crimes could also lead to different and new uses of Artificial Intelligence in the realm of preventive policies. For instance, Artificial Intelligence systems could be used to detect areas where crimes are more likely to be committed. Statistics show that when a burglary occurs, more crimes are likely to follow within a short span of time (Blount, 2022). Algorithms could detect those areas and indicate possible measures to be taken. The ShotSpotter system is already used to determine where crimes are committed and to notify police patrols to take action (Blount, 2022). Another Artificial Intelligence algorithm, reporting 97% accuracy, was employed in Chicago to detect where crimes could be committed based on past data, behavior, and personal information about the inhabitants (He & Zheng, 2021). These systems pursue two important objectives: protecting innocent individuals from harmful conduct and preventing people from committing crimes.
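
The underlying idea can be reduced to a very simple sketch. The toy example below, with synthetic data invented purely for illustration (real predictive-policing systems are vastly more sophisticated), ranks city grid cells by recent incident counts: the crudest form of the intuition that recent crimes predict nearby future ones.

```python
# Illustrative sketch only: a toy "hotspot" ranking over grid cells of a
# city map. All incident data is synthetic.
from collections import Counter

# Hypothetical past incidents, each recorded as a (grid_x, grid_y) cell
past_incidents = [
    (2, 3), (2, 3), (2, 4), (5, 1), (2, 3),
    (7, 7), (5, 1), (2, 4), (2, 3), (5, 1),
]

# Count incidents per cell and rank the most affected areas
counts = Counter(past_incidents)
for cell, n in counts.most_common(3):
    print(f"Cell {cell}: {n} recent incidents -> candidate patrol area")
```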



Figure 7: ShotSpotter System (ShotSpotter, n.d.).

The promising characteristics of Artificial Intelligence algorithms in preventive policies and preventive justice cannot completely silence the voices urging improvement and caution. The problems already mentioned in relation to punitive justice appear again in this field of criminal law. Specifically, biases present in the training data of algorithms could be perpetuated, leading to discrimination among individuals (Contissa & Lasagni, 2017). Moreover, a lack of transparency might lead to unfair or discriminatory decisions with no clear way of understanding how to fix the issues or how the algorithms fully work. These issues also breed mistrust toward new technologies, which cannot be deployed until all stakeholders agree that their advantages outweigh their disadvantages.


A further issue arises that is specific to the application of Artificial Intelligence in preventive justice. If algorithms are to predict the areas or individuals likely to be involved in crime, or to assess the risk of recidivism or harmful behavior in defendants, they must have access to personal information. This raises concerns about the profiling methods that might be used. The problem does not arise when information is voluntarily given by the individual, as happens with the COMPAS system, but it does when other means of gathering information are put into place. Profiling is defined by the European General Data Protection Regulation as “any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the data subject” (Recital 71, GDPR). Not only is profiling potentially discriminatory, it is also heavily regulated by the GDPR. Profiling may produce statistics that do not honestly and accurately describe the individual, but rather the group he or she belongs to: an individual might be considered more likely to commit a crime merely for belonging to a specific ethnic group or economic background (Simmons, 2016). Moreover, the GDPR provides that decision-making based solely on profiling is, as a rule, prohibited when it produces legal effects (Recital 71, GDPR). The chances of future legitimate uses of Artificial Intelligence therefore become even narrower if they are based on profiling methods.
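
A toy calculation, with all figures invented purely for illustration, shows why a score driven by group membership misdescribes the individual:

```python
# Illustrative sketch only: applying a group base rate to individuals.
# All figures are invented.

# Hypothetical recorded reoffense rate for some demographic group
group_reoffense_rate = 0.35

# Two very different individuals from the same group receive the same
# "risk" the moment the score is driven by group membership alone.
individuals = {
    "first-time offender, stable employment": group_reoffense_rate,
    "repeat offender, no support network": group_reoffense_rate,
}

for profile, risk in individuals.items():
    print(f"{profile}: assigned risk {risk:.0%}")
# Both print 35%: the statistic describes the group, not the person.
```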



Figure 8: EU General Data Protection Regulation (Doofy, 2018).

Imagining a New World

Change is coming and, while delaying it may be possible, stopping it is not. New technologies will forever alter how criminal procedure is administered, and it is essential that legislators find a way to regulate the phenomenon before it happens. Scholars are already discussing what a world in which Artificial Intelligence systems are fully legitimized in criminal law would look like. The most radical way forward, which finds a handful of supporters in the current debate, is to replace human judges with Artificial Intelligence altogether. Human intervention during criminal proceedings is not perceived as essential by those who describe the legal process as a fully computational one in which human emotions have no place (Tegmark, 2018). However, this would not actually solve the existing problems: placing all responsibility, and thus power, in robotic hands could solve some problems but not all. That is why the preferred solution seems to be a world in which human judges and robotic ones cooperate, bringing the best of both worlds into the same courtroom. The question then becomes: who has the final say, the judge or the robot judge? Scholars worry that judges may feel pressured to blindly follow the solutions offered by algorithms because they feel unable to contradict them (Philipsen & Themeli, 2019).


The most compelling solution is offered by scholars who compare Artificial Intelligence systems to an Advocate General (Buocz, 2018). The Advocate General is a member of the Court of Justice of the European Union who, acting with impartiality, gives an opinion on the matter brought before the Court (Article 49, Statute of the Court of Justice of the European Union). The Advocate General's reasoning and opinion are not binding but can be used and referred to by the Court when making the final, binding decision. Similarly, a judge in a criminal courtroom might use the solutions and outcomes of Artificial Intelligence systems as assistive tools in the decision-making process (Kerr & Mathen, 2013). This would ensure that judges do not blindly adhere to the conclusions reached by algorithms but use them if and when necessary. In this scenario, the judge would remain human, but his or her flaws would be compensated for by the helpful presence of new technologies.



Figure 9: Example of Algorithmic Sentencing (Chen, 2022).

Conclusions

In conclusion, it is fair to say that the debate on the use of Artificial Intelligence in criminal proceedings has reached an important stage, where possible scenarios are taking concrete shape. The many advantages of Artificial Intelligence algorithms could finally balance the many flaws that judges, as human beings, have: their limits in impartiality and efficiency would be compensated for by objective and highly efficient systems able to predict judicial outcomes and propose criminal sentences. However, because these systems would be trained on existing data, the concern that they will repeat the very biases already perpetuated by judges is as keenly felt as ever. Moreover, the fact that algorithms operate on probabilities endangers the principle of legal certainty and the presumption of innocence. This leads scholars to argue that Artificial Intelligence may be better suited to predicting crimes than to punishing them; it is in this realm that it might be employed to its fullest potential, though without forgetting the same underlying issues. Profiling methods also remain a problematic aspect of its employment. The issues mentioned above might suggest that the use of these systems is impossible, but this is not true in a scenario in which robot judges and human ones interact and cooperate. It is important that research continues so that enhanced technologies become available as soon as possible and criminal law is finally freed from the issues clouding its true aim: protecting the innocent and re-educating the guilty.


Bibliographical References

Bagaric, M., Svilar, J.D., Bull, M., Hunter, D., Stobbs, N. (2020). The Solution To The Pervasive Bias And Discrimination In The Criminal Justice: Transparent Artificial Intelligence. American Criminal Law Review, 59(1).


Barabas, C. (2020). Beyond Bias: Re-Imagining the Terms of Ethical AI. Criminal Law, 19. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=337792


Beriain, I. (2018). Does the use of risk assessments in sentences respect the right to due process? A critical analysis of the Wisconsin v. Loomis ruling. Law, Probability and Risk, 17(1). DOI: https://doi.org/10.1093/lpr/mgy001


Blount, K. (2022). Using artificial intelligence to prevent crime: implications for due process and criminal justice. AI & Soc. DOI: https://doi.org/10.1007/s00146-022-01513-z


Briscoe, F., and Gardner, H. (2017). Review of The Future of the Professions: How Technology Will Transform the Work of Human Experts, by R. Susskind & D. Susskind. Administrative Science Quarterly, 62(4). https://www.jstor.org/stable/48561371


Buocz, T. (2018). Legitimacy Problems of AI Assistance in the Judiciary. Artificial Intelligence in Court, 2(1).

https://static1.squarespace.com/static/59db92336f4ca35190c650a5/t/5ad9da5f70a6adf9d3ee842c/1524226655876/Artificial+Intelligence+in+Court.pdf


Caianiello, M. (2021). Dangerous Liaisons. Potentialities and Risks Deriving from the Interaction between Artificial Intelligence and Preventive Justice. European Journal of Crime, Criminal Law and Criminal Justice, 29(1), 1-23. DOI: https://doi.org/10.1163/15718174-29010001


Cole, D. (2015). The Difference Prevention Makes: Regulating Preventive Justice. Criminal Law, Philosophy, 9, 501-519. DOI: https://doi.org/10.1007/s11572-013-9289-7


Collenette, J., Atkinson, K., Bench-Capon, T. (2023). Explainable AI tools for legal reasoning about cases: A study on the European Court of Human Rights. Artificial Intelligence 317. DOI: https://doi.org/10.1016/j.artint.2023.103861.


Contissa, G., and Lasagni, G. (2017). When it is (also) Algorithms. Washington University Law Review, 1109-1189.


Esposito, G., Lanau, S. and Pompe, S. (2014). Judicial System Reform in Italy - A Key to Growth. International Monetary Fund.


Freeman, K. (2016). Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis.18 N.C. J.L. & Tech. 75.

https://scholarship.law.unc.edu/ncjolt/vol18/iss5/3


Geyh, C. G. (2014). The Dimensions of Judicial Impartiality. 65 Fla. L. Rev. 493. http://scholarship.law.ufl.edu/flr/vol65/iss2/4


He, J., Zheng, H. (2021). Prediction of crime rate in urban neighborhoods based on machine learning. Engineering Applications of Artificial Intelligence, 106.


Höppner, T., and Streatfeild, L. (2023). ChatGPT, Bard & Co.: An Introduction to AI for Competition and Regulatory Lawyers. Hausfeld Competition Bulletin, 1. DOI: http://dx.doi.org/10.2139/ssrn.4371681


Hutton, N. (1995). Sentencing, Rationality, and Computer Technology. J.L. & SOC’Y 22, 549-558.


Kerr, I., & Mathen, C. (2013). Chief Justice John Roberts is a Robot. University of Ottawa Working Paper. http://robots.law.miami.edu/2014/wpcontent/uploads/2013/06/Chief-Justice-John-Roberts-is-a-Robot-March-13-.pdf


Kitai, R. (2003). Protecting the Guilty. Buffalo Criminal Law Review, 6(2), 1163–1187. DOI: https://doi.org/10.1525/nclr.2003.6.2.1163


Kovera, M. B. (2019). Racial Disparities in the Criminal Justice System: Prevalence, Causes, and a Search for Solutions. J. SOC. ISSUES 75(4), 1139. https://spssi.onlinelibrary.wiley.com/doi/abs/10.1111/josi.12355


Maas, M., Legters, E., & Fazel, S. (2020). Professional en risicotaxatie-instrument hand in hand: hoe de reclassering risico’s inschat [Professional and risk assessment instrument hand in hand: how the probation service assesses risks]. NJB, afl. 28, 2055–2059.


Ostrom, C. W., Ostrom, B. J., & Kleiman, M. (2004). Judges and Discrimination: Assessing the Theory and Practice of Criminal Sentencing. Report No. 204024. U.S. Department of Justice.

https://www.ojp.gov/pdffiles1/nij/grants/204024.pdf


Philipsen, S., & Themeli, E. (2019) Artificial intelligence in courts: a (Legal) introduction to the Robot Judge. Montaigne Center. http://blog.montaignecentre.com/index.php/1940/artificial-intelligence-in-courts-a-legal-introduction-to-the-robot-judge-2/


Protocol 3 of the Treaty on the Functioning of the European Union. Statute of the Court of Justice of the European Union. https://curia.europa.eu/jcms/upload/docs/application/pdf/2016-08/tra-doc-en-div-c-0000-2016-201606984-05_00.pdf


Ruffolo, U. (2020). XXVI Lezioni di Diritto dell’Intelligenza Artificiale. Giappichelli.


Ryberg, J., & Roberts, J. V. (Eds.). (2022). Sentencing and Artificial Intelligence. Oxford University Press.


Simmons, R. (2016). Quantifying criminal procedure: how to unlock the potential of big data in our criminal justice system. Michigan State Law Review. DOI: https://doi.org/10.2139/ssrn.2816006


Spencer, K. B., Charbonneau, A. K., & Glaser, J. (2016). Implicit Bias and Policing. Social and Personality Psychology Compass, 10(1), 50. https://gspp.berkeley.edu/assets/uploads/research/pdf/SpencerCharbonneauGlaser.Compass.2016.pdf


Tegmark M. (2018). Life 3.0: Being human in the age of artificial intelligence. Vintage Books.


Trechsel, S. (2006). The Right to be Tried Within a Reasonable Time. In Human Rights in Criminal Proceedings. Oxford University Press.


Van Meerbeeck, J. (2016). The Principle of Legal Certainty in the Case Law of the European Court of Justice: From Certainty to Trust. European Law Review, 2.

https://dial.uclouvain.be/pr/boreal/object/boreal%3A177694/datastream/PDF_01/view


Sofia Grossi
