About Counsel Stack

Empowering the next generation of legal professionals


Built by lawyers, for lawyers

Counsel Stack is the unified platform designed specifically for new and growing law firms, improving team productivity, work product quality, and billing efficiency.

OUR GOAL

Build the World's Most Reliable Legal AI

Counsel Stack is powered by tens of millions of authoritative sources.

Publicly available AI models produce unreliable legal information at high rates: at least 58% of the time for GPT-4 and 88% for Llama 2. Even current purpose-built legal AI hallucinates between 17% and 33% of the time. This is a real problem, and courts have already sanctioned lawyers for citing inaccurate AI-generated content.

Counsel Stack's goal is to create the world's most reliable legal AI assistants by integrating LLMs with our proprietary suite of tools and curated legal data. Each assistant is purpose-built to exhibit sound legal reasoning and to cite authoritative sources.

Counsel Stack provides guardrails to ensure your legal team uses AI responsibly.

Our platform is powered by tens of millions of primary sources, updated in real time.

This example shows one of our research assistants accessing information directly from the eCFR, building a research plan, and displaying its thought process and raw data for attorney review.

The same approach applies across our datasets, which include federal, state, and local case law; federal and state legislation, bills, and statutes; the Code of Federal Regulations and the Federal Register; the U.S. Code; executive policy documents; economic reports; and congressional hearings, reports, meeting notes, bill sponsors, and voting records, among others.

Since each practice is unique, we can also work with your team to build custom datasets like agency enforcement actions, oral arguments, or local court rules.

Legal AI

Frequently asked questions

Common questions about language models in legal practice
Why should lawyers care about language models?
Lawyers should be interested in language models because they offer significant advantages in legal research, align with ABA guidance on maintaining competence, and enhance cost efficiency. These models can automate and streamline many routine tasks, freeing lawyers to focus on more complex aspects of their cases.
What is the ABA's stance on AI and legal practice?
The ABA has adopted resolutions (604, 608, 609, 610) emphasizing responsible AI development and use, promoting ethical, transparent, and accountable deployment of AI in the legal sector. These resolutions also focus on enhanced cybersecurity, guidelines for organizations engaging in AI, and integrating cybersecurity education into law school curricula.
How do large language models work?
Language models are next-word predictors. Large language models like GPT-4, Llama 2, and Mistral are transformer networks trained on large text corpora; their attention mechanisms capture nuanced semantic relationships between words and sentences, enabling the model to generate coherent text.
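
As a rough sketch of what "next-word prediction" means, the toy Python example below repeatedly picks the most likely next word given the word before it. The vocabulary and probabilities are invented for illustration; real large language models learn distributions over tens of thousands of tokens from their training data and condition on the full context with transformer attention layers, not a lookup table.

```python
# Toy next-word predictor (illustration only). Real LLMs learn these
# probabilities from training data and condition on the entire context,
# not just the previous word.
TOY_MODEL = {
    "the": {"court": 0.6, "statute": 0.3, "parties": 0.1},
    "court": {"held": 0.7, "denied": 0.2, "granted": 0.1},
    "held": {"that": 0.9, "a": 0.1},
    "that": {"the": 0.8, "plaintiff": 0.2},
}

def generate(prompt: str, max_new_words: int = 5) -> str:
    words = prompt.lower().split()
    for _ in range(max_new_words):
        candidates = TOY_MODEL.get(words[-1])
        if not candidates:
            break
        # Greedy decoding: always append the highest-probability next word.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("The"))  # -> "the court held that the court"
```
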
How can litigators benefit from using language models?
Litigators can benefit significantly from language models by leveraging them for efficient legal research, case strategy development, jury analysis, and overall litigation planning. These tools streamline many aspects of legal practice, making processes faster and more data-driven.
How do language models increase capital efficiency in legal practice?
Properly developed language models enhance capital efficiency in legal practice by accelerating tasks and making sophisticated legal analysis more accessible. This is particularly beneficial for less experienced practitioners, leveling the playing field in terms of resource availability and expertise.
How can response quality be improved when using language models in legal contexts?
Enhancing response quality with language models in legal contexts involves prompt engineering and grounding techniques, which supply the language model with the contextual information it needs. This approach produces clearer, more context-aware, and more accurate responses by applying the model's reasoning capabilities to the provided knowledge.
What is prompt engineering in the context of language models and law?
Prompt engineering involves designing specific queries or instructions to guide AI models towards generating more accurate and contextually appropriate responses. It's a critical skill for legal professionals using AI, ensuring that the technology aligns with the specific needs and nuances of legal cases.
What are some basic prompting techniques to improve response quality in language models?
A few basic techniques to improve response quality in language models include (1) specifying clear task formats and tone, (2) encouraging step-by-step thinking through a chain of thought approach, and (3) persona prompting. These techniques help in eliciting more precise and relevant responses.
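
As a hypothetical illustration (not Counsel Stack's actual prompts), the snippet below combines all three techniques in a single prompt: a persona, an explicit task format and tone, and an instruction to reason step by step. The task and facts are invented.

```python
# Hypothetical prompt combining the three techniques above; the task and
# facts are invented for illustration.
prompt = (
    # (3) Persona prompting: assign the model a role.
    "You are an experienced appellate attorney.\n"
    # (1) Specify the task, format, and tone.
    "In a formal tone, list the elements of negligence raised by the facts "
    "below as a numbered list, one sentence per element.\n"
    # (2) Chain of thought: ask for step-by-step reasoning first.
    "Reason through each element step by step before writing the list.\n\n"
    "Facts: A delivery driver ran a red light and struck a cyclist."
)
print(prompt)
```
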
What is "grounding" a language model?
Grounding a language model means connecting it to a reliable data source. Grounding with Retrieval-Augmented Generation (RAG) provides the model with specific, relevant context to base its responses on, which helps mitigate hallucinations and leads to more accurate, reliable outputs.
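
A minimal sketch of grounding with RAG is shown below. The `search_sources` and `call_llm` functions are hypothetical stand-ins for a real retrieval index (for example, over the eCFR) and a real model API; the point is that retrieved passages are placed in the prompt so the model answers from the provided material rather than from memory.

```python
def search_sources(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the k most relevant passages.

    A real system would query a search or vector index over primary sources
    (case law, the CFR, the U.S. Code, etc.). Stubbed here for illustration.
    """
    return ["[regulation text retrieved from the eCFR would appear here]"]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model API."""
    return "[model answer citing the provided sources]"

def answer_with_grounding(question: str) -> str:
    passages = search_sources(question)
    # Retrieved text goes into the prompt, so the model reasons over the
    # supplied sources instead of relying on what it memorized in training.
    prompt = (
        "Answer the question using only the sources below, citing each one.\n\n"
        "Sources:\n" + "\n\n".join(passages) + "\n\n"
        "Question: " + question
    )
    return call_llm(prompt)

print(answer_with_grounding("When must a Loan Estimate be delivered?"))
```
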
What are hyperparameters in language models, and how do they change outputs?
Hyperparameters in language models, such as temperature, token window, and penalties, significantly influence the model's outputs. The temperature setting controls the creativity or randomness of the response, the token window limits how much text the model can consider and generate, and penalties discourage repetitive or redundant phrasing. Adjusting these settings allows legal professionals to tailor the language model's responses to the specific requirements of brainstorming, legal research, drafting, or analysis.
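
As a concrete sketch, the request below adjusts these settings, assuming the OpenAI Python client (v1.x); other providers expose similar parameters under similar names, and the model name and values here are illustrative rather than recommendations.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Draft a short indemnification clause."}],
    temperature=0.2,        # low temperature: focused, less "creative" output
    max_tokens=400,         # caps how many tokens the model may generate
    frequency_penalty=0.3,  # discourages repeating the same phrases
)
print(response.choices[0].message.content)
```

Lower temperatures generally suit research and drafting that must track sources closely, while higher temperatures produce more varied output for brainstorming.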