In recent years, one of the most widely discussed topics has been artificial intelligence (AI). Increasingly, people are becoming interested in and using applications such as ChatGPT, Gemini, and others. AI assistants are being integrated into mobile operating systems and a wide range of applications, and they are rapidly gaining popularity.
In the public sphere, claims range from predictions that AI will bring about the end of humanity to arguments that AI is practically useless. One of the most common forecasts is that many professions will disappear or be replaced by AI. Among them are the legal profession in particular and the provision of legal advice more broadly.
In practice, we already encounter clients who arrive “prepared” by ChatGPT, as well as those who want to verify the bot’s answers—likely with the idea that they might save on attorney’s fees. A few months ago, the media circulated a story about a man who won a case against the National Social Security Institute (NSSI) without a lawyer, relying solely on AI.
The purpose of this article is not to prove that lawyers are irreplaceable and that AI cannot be used for legal or consulting work. But first, some basic points must be clarified. AI applications are tools—technological, software-based, but still tools. Like any tool, they have a scope of application, limitations, and—last but not least—require knowledge and experience in order to be used effectively.
For example: you buy a powerful, high-quality drill from a reputable brand. Can you drill through stainless steel with it? Yes—but only if you select the right drill bit, set the correct speed, and have the necessary skills. And yet, drilling through stainless steel was never the primary purpose of the tool.
Another example: suppose you still intend to use a lawyer but want to “make their job easier” and save money. Consider this: can you buy a cheap pair of trousers three sizes too big, take them to a tailor, and claim that by asking them to alter the trousers you are actually making their work easier? Wouldn’t it be simpler for the specialist to measure, cut, and sew a pair of trousers tailored specifically for you?
What Are the Problems of Using Artificial Intelligence in Legal Work?
- Hallucinations – AI sometimes invents information—even entire laws, regulations, or articles. This is a weakness of the technology. It is no coincidence that every AI application prominently displays a disclaimer that the information must be verified. In other words, anyone using AI must have the necessary knowledge to check the bot’s output.
- ChatGPT is not a human being – it has no consciousness, no matter how convincingly it may appear to act like a living person. This should never be forgotten—GPT and similar systems are what are known as large language models. They are loaded with vast amounts of data and continue to evolve, but they do not possess the human and moral qualities required to resolve legal cases. Furthermore, they lack awareness of certain practical elements of a lawyer’s work in Bulgaria—such as communication with courts, administrative bodies, and other institutions.
- Lack of responsibility – An attorney stands before the client with their name, reputation, and bears responsibility, including financial liability. This is why a lawyer strives to deliver the best possible service and outcome for the client. AI, even in paid versions, bears no responsibility for potential legal consequences arising from its advice or from documents generated by it.
This article reflects the opinion of Attorney Yasen Kraychev. It is intended for informational purposes only and does not constitute legal advice. Kraychev Partners Law Firm has extensive experience in the fields of civil and administrative law. If you wish to engage our services, do not hesitate to request an offer.