Document review for litigation: One size does not fit all
In the Wild West of e-Discovery, practitioners and vendors frequently speak of “document review” as if it were a monolithic proposition. Implicit in their debates over cost and efficacy is a notion that one size fits every review.
In reality, every litigation or investigation has different objectives, budgets, and risk tolerances. Within any particular matter, moreover, there are different tasks to be accomplished. At the most basic level, litigants typically need to respond to demands for production. They need to collect data, remove content that is out of scope or otherwise patently non-responsive, and vet the remaining documents for responsiveness, privilege, and (sometimes) confidentiality.
But even within this production activity, choices abound. If the adversary’s claims are well-founded, and key evidence likely resides in the data, document review will need to be meticulous and exhaustive. If the demand is mostly a compliance burden, as in a third-party subpoena addressing topics that are not sensitive to the client, the review can probably be more cursory (if not bypassed altogether). And there are many gradations between these two poles.
In cases where the documents can really affect the merits, it is worth considering whether counsel should marry the review for production with a rigorous check for relevance. Relevance, as distinguished from “responsiveness,” speaks to the significance of a document to the issues of the case. One should expect the relevant subset to constitute a fraction of the collection subject to production. Ultimately, trial lawyers need only a handful of the relevant documents to frame a line of questioning in deposition or to bolster their arguments in motion practice.
But how does a review process effectively bring relevant documents to the surface for analysis and case preparation? Under traditional approaches, lawyers manually check each document in a collection. Once that check occurs, it is time-consuming and expensive to go back and “re-review” a document. So if the trial team wants to flag relevance along with responsiveness, it needs to load up the reviewers with multiple assignments, asking them to answer more nuanced questions even as they are trying to vet a document for responsiveness. Commonly known as “issue coding,” this practice has many shortcomings, not the least of which is the suspect quality of the work product. After all, how reasonable is it to expect that a reviewer, laboring for hours on end at a workstation, will reliably and consistently assess each document for five, ten, or even twenty substantive issues?
Technology-assisted review, if executed properly, can change this paradigm. Because of its economy and scalability, technology-assisted case preparation research can accompany technology-assisted review for production, allowing litigants to get the best of both worlds without compromising the efficacy of either. It may be that the trial team still wants to categorize the collection by issues. Perhaps they need a flexible, organized repository for complex trial preparation activities that they cannot define at the time of document production. In many, if not most, cases, however, technology-assisted review can obviate the need for issue coding. It can generate results that are more targeted to the specific depositions or trial activities at hand, so that lawyers obtain just the documents they need at just the right time.
In any event, litigants no longer need to force their document discovery needs into a single preconceived workflow. Every case is different, and with the right expertise, clients can tailor their approach to make the most of their discovery budgets.