Professor Tony Hoare, a titan of early computer programming, recently passed away at 92. An article highlighting some of his views on programming prompted these reflections on AI, which have been discussed internally at Cowell Clarke.
This piece is largely spurred by Professor Hoare’s statement in his 1980 Turing Award lecture, “The Emperor’s Old Clothes” (full copy of the speech: https://bit.ly/4upc8gB):
“I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.”
The application of this observation to AI and the law follows, but first an illustration closer to Professor Hoare’s quip.
A recent and practical example
The screenshot below shows the start of a “Kris Kringle” (KK) app developed using Google’s AI Studio last year. It was an attempt to replicate the functionality of a KK app used each year, which is now marooned on an old device and no longer supported in the app store.
On its face, this early attempt appears to replicate all of the desired functionality.
Does it operate as intended? Will it actually work when deployed on a phone?
That is unclear.
For a non-programmer, the AI-generated code cannot be meaningfully interrogated. It may as well be powered by Santa-costume-wearing unicorns for all that can be discerned from the underlying code.
What is known is that studies have suggested that up to 45% of AI-generated code contains security flaws, and Amazon has recently adopted a “human in the middle” approach following a number of incidents involving AI. There are clear echoes of Professor Hoare’s warning that unreliable programs constitute a far greater risk to society than unsafe cars or accidents at nuclear power stations.
More importantly, even to a programmer, AI can produce “bad code that looks good”.
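The app’s actual code is not reproduced here, but the failure mode is easy to sketch. The hypothetical Python draw function below (an illustration, not the app’s real code) looks complete and plausible, yet hides a latent flaw of exactly the kind described above:

```python
import random

def assign_kris_kringle(names):
    """Pair each giver with a gift recipient -- reads as obviously correct."""
    recipients = names[:]                # copy the participant list
    random.shuffle(recipients)           # shuffle the copy
    return dict(zip(names, recipients))  # zip givers to shuffled recipients

# The latent flaw: nothing stops a shuffle from mapping a person to
# themselves, so some draws quietly produce a self-gift.
random.seed(0)
self_draws = sum(
    1
    for _ in range(1000)
    if any(giver == recipient
           for giver, recipient in
           assign_kris_kringle(["Ann", "Ben", "Cal", "Dee"]).items())
)
# self_draws is well above zero: a majority of four-person draws
# contain at least one self-gift.

def assign_kris_kringle_safely(names):
    """Shuffle once, then have each person give to the next in the
    shuffled cycle -- self-gifts are structurally impossible."""
    order = names[:]
    random.shuffle(order)
    return dict(zip(order, order[1:] + order[:1]))
```

The flawed version passes a casual read and most spot checks; only a reviewer who knows to ask “can a shuffle map an element to itself?” will catch it. That is “bad code that looks good” in miniature.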
As a result, this app is unlikely to appear on a phone any time soon.
The legal parallel
Now to the Emperor’s (legal) clothes.
Many lawyers will by now have had clients provide them with “memorandums of advice” generated by ChatGPT, often accompanied by a confident assurance of success (sometimes even expressed as a percentage).
The issue with AI advice is no longer limited to hallucinated case citations or invented chronologies.
The more significant issue is AI now produces “bad law that looks good”.
Quoted cases may exist but not support the propositions advanced. Arguments may appear persuasive, even to those with legal training, but fail when even lightly interrogated.
Why does this matter?
Those involved in litigation are accustomed to interrogating every footnote in an opponent’s submissions. However, as AI evolves, errors are becoming more latent and less obvious, and may be apparent only to those able to see through the complexity.
Key takeaways
AI can produce outputs that appear highly polished and authoritative, but that surface quality may mask underlying flaws: inaccuracies, unsupported assumptions or interpretations, and other risks.
Both in coding and legal analysis, AI often generates “bad work that looks good,” which makes human expertise and critical review essential.
The increasing sophistication of AI means errors are becoming less obvious and more difficult to detect without domain knowledge.
A “human in the loop” is not optional. Responsible use of AI requires verification, interrogation, and professional judgment before relying on its outputs.
This publication has been prepared for general guidance on matters of interest only and does not constitute professional legal advice. You should not act upon the information contained in this publication without obtaining specific professional legal advice. No representation or warranty (express or implied) is given as to the accuracy or completeness of the information contained in this publication and, to the extent permitted by law, Cowell Clarke does not accept or assume any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining from acting, in reliance on the information contained in this publication or for any decision based on it.