A recent discovery order in a Southern District of New York public housing lottery discrimination case supported the use of technology-assisted review (TAR) but required additional transparency, providing another view into how judges will consider the use of advanced analytics in litigation. In Winfield v. City of New York, Magistrate Judge Katharine H. Parker ordered limited transparency (and encouraged even greater transparency) regarding the defendant’s use of TAR after examining its process in camera. Ultimately, Judge Parker found that the defendant’s process was essentially accurate and reliable, although not without its flaws.

Prior to her review and order, Judge Parker had ordered that the City of New York (the “City” for those readers outside New York) use TAR after the plaintiffs repeatedly complained about the glacial pace of the City’s linear review process. Unsatisfied with a general instruction to use TAR, the plaintiffs then found fault specifically with the TAR process the City employed. Their complaints centered on the City’s use of restrictive search terms, and they argued that the City employed an overly narrow understanding of relevance that led to poor document reviewer training, inaccurate responsiveness calls, and a subsequent use of incorrectly coded documents as the basis for TAR training. The plaintiffs asserted that these practices, taken together, resulted in an improperly trained TAR system that returned too few responsive documents. Because of these perceived deficiencies, the plaintiffs asked to validate the City’s process. They requested that Judge Parker order the City to produce sample nonresponsive documents, to disclose details about the TAR process, and to hand over documents the plaintiffs believed were incorrectly withheld as nonresponsive or privileged.

Noting current disagreement as to the degree of transparency required of a producing party using TAR, Judge Parker aimed to strike a balance between the parties. Citing The Sedona Conference’s Principle 6, Judge Parker gave initial deference to the City, as the responding party, as the entity best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing its own electronically stored information. After a review of the City’s in camera submissions explaining its TAR process, Judge Parker found that the City “appropriately trained and utilized its TAR system.” She did not grant the plaintiffs’ demands for additional insight into the City’s process, although she did encourage the City to share this information “in the interests of transparency and cooperation in the discovery process.”

Judge Parker did, however, grant the plaintiffs’ requests for information that would allow them to validate the results of the City’s TAR process. This ruling was predicated on an earlier discovery issue in the matter, in which the City produced a handful of nonresponsive documents in “slipsheet” form but also (and erroneously) included their text files, giving the plaintiffs an opportunity to review their contents. The plaintiffs argued that these documents were both responsive and clear examples of the City’s inaccurate responsiveness calls. When the plaintiffs pointed out these minor errors and inconsistencies in the City’s productions, Judge Parker responded that “the Federal Rules of Civil Procedure do not require perfection”; rather, the proper inquiry is whether the “search results are reasonable and proportional.” Nonetheless, she determined that the plaintiffs had justified their request for additional information, finding that

[T]he [requested] sample sets will increase transparency, a request that is not unreasonable in light of the volume of documents collected from the custodians, the low responsiveness rate of documents pulled for review by the TAR software, and the examples that Plaintiffs have presented, which suggest there may have been some human error in the categorization that may have led to gaps in the City’s production.
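The validation that such sample sets enable is, at bottom, a statistical exercise: draw a random sample from the documents coded nonresponsive, review it, and estimate how many responsive documents slipped through (sometimes called an “elusion” check). The sketch below illustrates that arithmetic in Python; the sample size and hit count are entirely hypothetical, as the order does not disclose the City’s actual figures.

```python
import math

def elusion_estimate(sample_size, responsive_found, z=1.96):
    """Estimate the elusion rate -- the fraction of responsive documents
    hiding in the 'nonresponsive' pile -- from a random sample, with a
    normal-approximation 95% confidence interval."""
    p = responsive_found / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical numbers: reviewers check 400 randomly sampled documents
# from the null set and find 6 that should have been produced.
rate, lo, hi = elusion_estimate(400, 6)
print(f"elusion ~ {rate:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

A low point estimate with a tight interval supports a “reasonable and proportional” argument; a high one suggests the sort of gaps the plaintiffs alleged here.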

In addition to requiring some transparency in TAR use, this order offers additional lessons for discovery practitioners. First, it appears the City was using TAR 1.0, which requires the upfront seed sets, assessments, and validation that the Court describes. While TAR 1.0 can be an appropriate option for many document reviews, choosing a flexible continuous active learning, or “CAL,” TAR platform in this instance might have avoided the plaintiffs’ main complaint – that the system was incorrectly trained – because a CAL system is designed to keep retraining as responsiveness calls are updated. Second, the plaintiffs were able to assess the handful of documents produced as nonresponsive slipsheets only because the City failed to suppress the underlying text – an accidental production error noted in the order. Here, this error gave the plaintiffs additional fodder to question the reliability of the City’s understanding of relevance and responsiveness. Presumably this was a happy accident for the plaintiffs rather than a procedure other judges might consider ordering in a similar instance.
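To make the TAR 1.0 versus CAL distinction concrete: a CAL workflow is a loop in which each batch of human responsiveness calls feeds back into the ranking before the next batch is surfaced, rather than a one-time training on a seed set. The toy sketch below illustrates that loop only; it is not the City’s system or any vendor’s platform, and the `score` and `label` functions are hypothetical stand-ins for a trained model and a human reviewer.

```python
def cal_review(documents, score, label, batch_size=5, rounds=3):
    """Minimal continuous-active-learning loop: repeatedly surface the
    highest-scoring unreviewed documents, collect reviewer calls, and
    carry those calls forward into the next batch."""
    reviewed = {}
    for _ in range(rounds):
        # Rank the unreviewed documents by the current model score.
        pending = [d for d in documents if d not in reviewed]
        pending.sort(key=score, reverse=True)
        for doc in pending[:batch_size]:
            reviewed[doc] = label(doc)  # the human responsiveness call
        # In a real CAL system the model would retrain on `reviewed`
        # here, so later batches reflect corrected coding decisions.
    return reviewed
```

The practical point is the feedback arrow: if early coding was too narrow, later corrected calls pull the ranking back on course, which is precisely the failure mode the plaintiffs attributed to the City’s one-shot training.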