LegalTech New York is right around the corner. And, much like the recent Georgetown Advanced Ediscovery Institute, there will be no shortage of discussion about predictive coding/technology-assisted review (“TAR”). The agenda includes fourteen program descriptions that directly reference TAR and another eight that clearly suggest TAR will be a major topic.
As a user, student and proponent of TAR in the right cases, I hope we can sharpen the discussion of TAR at LegalTech 2013. Along those lines, permit me a few (hopefully) constructive suggestions:
(1) Please don’t describe five cases, including orders that range from detailed to handwritten, as an unassailable judicial endorsement of TAR. Da Silva Moore, Kleen Products, Global Aerospace, Actos and EORHB are all interesting, but they have just begun the judicial discussion on the use of TAR. Far more interesting uses of TAR are occurring in cases that have not required judicial attention (or in which the parties have sought to avoid the sideshow of public dissection of their protocols).
And while Global Aerospace is being held out as the first judicial approval of the results of TAR, I hardly feel emboldened to walk into another court holding a half-handwritten, one-page order approving the use of the technology, along with a statement that the court later approved the results after the requesting party raised no objection.
(2) Please don’t base return on investment discussions on an outdated baseline that does not account for iterative and focused identification of custodians and other sources of electronically stored information (“ESI”). In discovery involving large amounts of ESI, there seems to be a growing appreciation that we should focus less on finding every document that meets the extremely broad definition of relevance (an impossibility) and, instead, focus on the search for ESI that is not merely relevant, but relevant to resolving matters that are actually in dispute. Savvy practitioners who, through interviews, negotiation and advocacy, develop a focused, iterative approach to the selection of custodians and sources for review, and the identification of relevant documents, would benefit from an ROI analysis that begins with more nuanced assumptions as to the starting data set.
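To make the baseline point concrete, here is a minimal back-of-the-envelope sketch. All of the numbers (collection sizes, per-document cost, the share of documents TAR routes to human eyes) are hypothetical assumptions for illustration, not figures from any actual matter: the point is only that TAR's apparent savings shrink, and the comparison becomes more honest, when the starting data set has already been narrowed by focused custodian and source selection.

```python
# Illustrative ROI comparison. Every number below is a hypothetical
# assumption chosen for illustration, not data from a real case.

def review_cost(num_docs, fraction_reviewed, cost_per_doc):
    """Cost of the documents that are actually put in front of reviewers."""
    return num_docs * fraction_reviewed * cost_per_doc

COST_PER_DOC = 1.50  # assumed blended review cost per document

# "Outdated baseline": savings measured against an unfocused collection.
broad_collection = 2_000_000
# "Nuanced baseline": iterative custodian/source selection narrows the set first.
focused_collection = 500_000

linear_broad = review_cost(broad_collection, 1.00, COST_PER_DOC)   # eyes on everything
tar_broad = review_cost(broad_collection, 0.10, COST_PER_DOC)      # assume TAR reviews ~10%
tar_focused = review_cost(focused_collection, 0.10, COST_PER_DOC)  # same 10%, smaller start

print(f"Linear review, broad set: ${linear_broad:,.0f}")
print(f"TAR, broad set:           ${tar_broad:,.0f}")
print(f"TAR, focused set:         ${tar_focused:,.0f}")
```

Against the broad set, TAR looks like a $2.7 million savings; against the focused set, the realistic savings attributable to TAR itself is far smaller, because much of the work was done by narrowing the collection.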
(3) Please adapt your analysis and discussion of recall and precision to focus on ESI that really matters. This is a variation of the last point, but practitioners would benefit from an analysis of recall and precision – and the efficiencies of achieving desired levels of recall and precision – that is focused on important documents rather than merely relevant documents. It is understandable that the meaningful discussion of TAR began with a focus on exhaustive manual review and on comparing humans and machines at identifying all relevant documents (recall) and only relevant documents (precision). Now we need a deeper understanding of recall and precision focused, instead, on meaningfully relevant documents (those that matter to resolving issues that are actually in dispute, or that are likely to lead to such documents). At the end of the day, only a few binders of documents will be used to support even the most complex motions, and even fewer are likely to be admitted as exhibits at trial. In the TAR v. human debate, is there a clear winner in finding those documents?
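For readers less familiar with the two metrics, here is a minimal sketch of how recall and precision are computed. The counts are hypothetical; the argument above is simply that these same formulas should be run over the population of *important* documents, not every document meeting a broad relevance definition:

```python
# Recall and precision from standard retrieval counts.
# The example counts below are hypothetical, for illustration only.

def recall_precision(true_positives, false_negatives, false_positives):
    # recall: share of all relevant documents that the review found
    recall = true_positives / (true_positives + false_negatives)
    # precision: share of the documents the review found that are relevant
    precision = true_positives / (true_positives + false_positives)
    return recall, precision

# Hypothetical review: 80 of 100 relevant documents retrieved,
# along with 20 irrelevant documents swept in with them.
r, p = recall_precision(true_positives=80, false_negatives=20, false_positives=20)
print(f"recall={r:.2f}, precision={p:.2f}")  # recall=0.80, precision=0.80
```

The same review could score very differently if the denominators were limited to the handful of documents that actually matter to the disputed issues, which is the comparison the debate has yet to make.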
I look forward to attending all 22 sessions discussing TAR next week!