In-Brief:

Artificial intelligence is rapidly reshaping how businesses approach legal work. From drafting internal summaries to structuring legal questions, AI tools are increasingly being deployed by executives and in-house teams to scope projects and prepare instructions for external lawyers. The appeal is clear: AI can quickly organise complex issues, identify potential legal angles and produce a structured briefing note in seconds. For time-pressed business leaders, this appears to offer an efficient way to initiate a legal conversation. However, businesses that use AI wholesale to define the scope of legal work, rather than treating it merely as a preliminary tool, risk receiving legal advice that is misdirected, incomplete and ultimately poor value for money.

What may feel practical in the moment can prove costly in the outcome.

We caution that while AI can serve as a useful preliminary tool, using it exclusively to define the scope of legal work, without meaningful human oversight or refinement, carries substantial risks that directly undermine the quality and value of the legal advice that follows.

The most fundamental concern is accuracy. AI systems generate responses based on patterns in data rather than verified legal reasoning. This means they routinely produce outputs that sound authoritative but are partially or entirely incorrect. Lawyers refer to this as “hallucination” – when a system confidently presents information that has no real legal basis. When businesses rely too heavily on AI to frame instructions, they risk directing lawyers to analyse problems based on assumptions that are fundamentally flawed. The result is wasted time, misdirected effort and legal advice that fails to address the actual risks the business faces.

In practice, this means businesses end up paying their lawyers to correct the scope of work rather than to solve the underlying legal problem – a direct erosion of value that could have been avoided with proper human oversight at the outset.

Confidentiality considerations add another layer of risk. Information entered into publicly available AI tools may be stored or processed outside the organisation’s control. When sensitive or commercially confidential information is fed into Large Language Models (LLMs) and Generative AI systems developed by companies such as OpenAI, Google or Anthropic, businesses need to understand how that information may be handled. Careless sharing of details could weaken legal professional privilege (where recognised under common law, such as in the DIFC) or breach internal confidentiality obligations – risks that can have serious legal and commercial consequences far beyond the immediate legal project.

Perhaps most damaging is the risk of flawed framing. The way a legal issue is described fundamentally influences how it is analysed. AI-generated summaries routinely mischaracterise matters, framing what is in substance a regulatory exposure as a contractual dispute, or overlooking employment law, operational feasibility or reputational dimensions entirely. When businesses use AI extensively to set the scope, this flawed framing becomes embedded in the instructions. Lawyers then analyse the wrong problem, and the business receives advice that is technically competent but may be strategically irrelevant. The result is not just wasted fees but missed risk, unaddressed exposure and advice that delivers no real commercial value.

In 2025, the UAE Cabinet approved a code of conduct for legal professionals that emphasises integrity and the professional, ethical use of AI. Professional bodies including the American Bar Association and the Law Society of England and Wales have issued guidance in a similar vein, reminding lawyers that AI outputs must always be independently verified and should never replace professional judgement. The same principle applies to businesses: AI outputs should never be used wholesale to define the scope of legal work without meaningful human review and refinement.

There is also a reputational dimension that businesses may not immediately consider.

AI-generated instructions used indiscriminately are often immediately recognisable to experienced lawyers.

Many lawyers report that requests for advice drafted largely by AI follow a distinctive and problematic pattern: highly formalised language, generic issue lists and an analytical style that lacks the commercial nuance or factual prioritisation that experienced business leaders typically bring to a legal discussion. The result is instructions that feel detached from the real commercial context and that signal to lawyers that the business may not fully understand the problem it is asking them to solve.

Whilst there is nothing inherently wrong with using technology to help organise a request, relying on AI can give lawyers an unintended impression about the sophistication of the business or the depth of legal understanding behind the instruction. That impression may not be accurate – many highly experienced executives simply use AI tools to save time – but it can affect the way lawyers approach the work. More importantly, it can result in advice that is technically correct but commercially shallow, because the lawyer has not been given sufficient context to understand what the business truly needs.

A rigid or overly technical AI-generated brief can also obscure the most critical element of any legal instruction: the commercial objective. As lawyers, we are ultimately engaged not just to analyse law but to help businesses achieve practical outcomes. A briefing that focuses heavily on theoretical legal issues while providing little insight into the commercial context makes that task significantly more difficult, and often impossible. The result is legal work that may be entirely sound but strategically useless. Businesses then pay fees for advice that does not serve their actual needs – a fundamental failure of value.

None of this means that AI should be avoided when preparing legal instructions. On the contrary, used thoughtfully as a starting point, not as a replacement for human judgement, it can be a valuable tool for structuring a problem, identifying initial questions and gathering relevant facts before approaching counsel.

The critical distinction is this: an AI-generated brief must be treated as a preliminary draft requiring substantial human review and distillation, rather than a finished product.

The most effective and beneficial legal relationships are collaborative. A good lawyer will rarely accept an initial scope of work at face value, particularly one that appears to have been generated by AI. Instead, experienced counsel will test the assumptions behind the brief, ask questions about the factual background and explore the broader commercial context in which the issue has arisen. This dialogue is not a formality. It is essential to ensuring that the legal work addresses the right problem and delivers genuine commercial value.

Through this dialogue, the lawyer and the business can refine the scope of work together. Sometimes that process reveals that certain issues do not require detailed analysis after all. In other cases, it may uncover risks or opportunities that were not initially apparent, particularly those that an AI system, lacking commercial judgement, would never have identified.

The result is more focused, more relevant and ultimately far more useful legal work, work that justifies the fees paid and genuinely serves the business's interests.

For businesses, the lesson is clear. AI can help organise thinking and accelerate the early stages of a legal project. Using AI to define the scope of legal work without meaningful human oversight, refinement or dialogue with legal advisers does not deliver efficiency. It delivers misdirected effort, wasted fees and advice that fails to address the business's real needs. Defining the right scope of work, one that properly reflects the legal risks and the commercial goals of the business, remains a task that requires human judgement and collaborative discussion with trusted legal advisers.

Conclusion

Technology may help start the conversation. But businesses that use it wholesale, rather than as a preliminary tool, risk paying for legal advice that, while competent, is commercially irrelevant to them and represents poor value for money. The real value still comes from the human judgement and the collaboration that follows.

If you require any further information in this regard, please reach out to Victoria Woods, Partner, Head of Commercial at v.woods@hadefpartners.com.

This article is intended for general informational purposes only and does not constitute legal advice. Readers should seek independent legal counsel in relation to their specific circumstances.
