Keywords Before TAR? What to Ask First.


This article has been reprinted with permission from the August 29, 2017 edition of Legaltech News. © 2017 ALM Media Properties, LLC. All rights reserved. Further duplication without permission is prohibited. For information, contact 877-257-3382 or [email protected] #010-08-17-04.
By Amanda Jones and Bruce Hedin, H5

The application of keywords before using TAR should not be based on an “always” or “never” rule. These seven categories of questions can help make your determination.

A recent ruling in FCA US v. Cummins has caused a stir in the e-discovery community. The ruling resolved a dispute between two parties: one wanted to use keywords to cull collected data before employing a TAR process; the other wanted to apply TAR to all collected data without any preliminary keyword culling. The court ruled in favor of the latter.

Given that keyword culling is quite commonly used to reduce the volume of data submitted for downstream review—manual or technology-assisted—it is not surprising that this ruling drew the attention of courts and practitioners alike. But, in our view, there is no universal rule governing the appropriateness or reasonableness of keyword culling.

The validity of keyword culling depends on the circumstances of the matter at hand and on the methodology adopted for developing, applying and testing the keywords. To come to a sound assessment of whether keyword culling is appropriate in a given instance, it is essential to ask questions about the situational factors surrounding the case and the specific keyword proposal being considered.

There Is No Universally Valid Rule

Most e-discovery attorneys and practitioners would concede that applying keywords before TAR could be justified in some matters as a means of controlling volumes and costs; proportionality and reasonableness are legitimate concerns in e-discovery. Sometimes, for example, parties are forced to collect large volumes of documents from suboptimal data sources, knowing that the collected data will be rife with off-topic material. When this occurs, especially in cases where responsive topics are concretely defined with clear-cut boundaries, using a robust set of broadly inclusive keywords to limit the volume of completely off-topic material might well be justified and would not necessarily negate the possibility of using TAR productively downstream.

Likewise, in virtually all matters, there are several classes of patently nonresponsive documents (e.g., spam emails, auto-generated system emails, etc.) that can readily be targeted for removal with well-crafted keywords. Eliminating this type of irrelevant noise from the population using keywords poses little, if any, risk to the overall comprehensiveness of the review and should not preclude the subsequent use of TAR.

It is a mistake to view keywords as a single, undifferentiated tool, intrinsically inferior to TAR. As with most tools, keywords are excellent in some situations and abysmal in others; their efficacy depends largely on the use case and on the care and competency that goes into crafting and implementing them.

Thus, decisions regarding the application of keywords before using TAR should not be based on the rote application of an immutable “always” or “never” rule. They should be based on a thoughtful examination of the specific circumstances that hold for a given matter.

Ask Questions

The questions that should be asked to enable a sound assessment of the appropriateness of pre-TAR keyword culling cover a range of topics: goals for the culling exercise, the manner in which keywords for culling will be developed, protocols for validating the effectiveness of the keywords, and so on. We would begin here:

1. Goals: What are the goals of the culling exercise? To identify and exclude from further review patently nonresponsive documents or to identify and pass to downstream review potentially responsive documents? To what extent is precision a goal of the exercise, in addition to recall?

2. Data set: Has the data set to which keywords will be applied been analyzed to determine its amenability to keyword culling? Have file types not amenable to keyword culling (e.g., multimedia files) or file types requiring specialized keyword treatment (e.g., non-English documents) been identified and put on a separate track appropriate to their distinct characteristics?

3. Topics: Are the topics targeted by the requests for production reasonably well understood and well defined? Do the individuals who will be developing the keywords have a reasonable sense of how those topics will manifest in the document population to be culled?

4. Sampling: To what extent will the development of the keywords be informed by samples of documents drawn from the data set to which they will be applied? If samples will be used, will they be drawn via random sampling, judgmental sampling, or some combination of the two? Will the samples be drawn from the entire universe of data that is to be culled or from some subset of that universe? If a subset, what measures will be taken to ensure that the subset is reasonably representative of the entire universe (e.g., representative of the full range of functional roles of custodians represented in the universe of data to be culled)?

5. Development process: What are the procedures by which the keywords will be developed? Will the process be iterative? What criteria will be used to determine when keyword development is to be concluded? What provisions will be made to ensure that responsive concepts and language discovered later in the TAR process are adequately captured by the culling keywords?

6. Expertise: What are the qualifications of the individuals developing the keywords? Do they have the linguistic expertise and subject-matter knowledge that would enable them to account for the many ways a given topic will be expressed in the document population to be culled? Do they have experience developing keywords for purposes of reducing the volume of data to be passed on for downstream review?

7. Validation: Once developed, what testing will be done to demonstrate that the keywords are, in fact, achieving their intended purpose? What metrics will be used as quantitative gauges of effectiveness? To the extent that sampling will be used to generate those metrics, what will the sampling protocol entail?
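To make the validation question concrete, here is a minimal, hypothetical sketch (not the authors' method, and the function and variable names are our own) of how recall and precision might be estimated from a simple random sample of reviewed documents, where "hit" means a document matched at least one culling keyword and "responsive" is the reviewer's judgment:

```python
# Hypothetical sketch: estimating recall and precision of a keyword cull
# from a simple random sample of documents that have been manually reviewed.

def keyword_metrics(sample):
    """sample: list of (hit, responsive) boolean pairs for randomly drawn docs.

    Returns (recall, precision), or None for a metric whose denominator is zero.
    """
    true_pos = sum(1 for hit, resp in sample if hit and resp)
    false_neg = sum(1 for hit, resp in sample if not hit and resp)
    false_pos = sum(1 for hit, resp in sample if hit and not resp)
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else None
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else None
    return recall, precision

# Toy illustration: six randomly drawn, reviewed documents.
sample = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]
recall, precision = keyword_metrics(sample)
print(f"recall={recall:.2f}, precision={precision:.2f}")
```

In practice the sample would need to be large enough to support confidence intervals around these point estimates, and the sampling protocol (simple random, stratified, etc.) would itself be part of what the parties scrutinize.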

It is important to recognize that there are no “correct” answers to the above questions—i.e., no answers that will guarantee approval of the use of keywords in advance of TAR in every matter. But the information gathered through these questions, when viewed in the larger context of the discovery effort and weighed against proportionality considerations, can certainly help parties arrive at reasonable, well-informed decisions.

Note, as well, that there is nothing here that is specific to the use of TAR. The considerations raised above are relevant for any use of keyword culling, regardless of the particular review method adopted following the culling step. We believe that the standard to which keyword culling should be held is the same whether the downstream process involves manual review or some variety of TAR.

So, while the FCA US ruling may well have been appropriate for that matter, we do not believe it would be appropriate for all matters. In our view, keyword culling is a reasonable and appropriate step if, but only if, the specific plans for developing and testing the culling terms meet a standard of rigor proportionate to the matter at hand.




Amanda Jones is associate director of professional services at H5. Jones designs and supervises development of innovative linguistic and statistical techniques to support document classification and review at H5 and writes frequently on related topics.


Bruce Hedin, Ph.D., is principal scientist at H5. Hedin designs and oversees the sampling and measurement protocols used by H5. He has wide experience in using sampling to quantify the effectiveness of AI tools and methods and is a frequent writer and speaker on the role of sampling and measurement in e-discovery.


