Part Two of a three-part series examining the evolving challenges of second requests and the promise of AI to help meet them. Our last post addressed the logistical challenges facing antitrust counsel in a second request review. This post explores the competing demands of growing scrutiny and growing risk.
In a previous blog post, my colleague Weichen Weng said that “scale is a feature.” Antitrust counsel have experienced this for themselves. One of the defining characteristics of second request compliance is the overwhelming quantity of documents that need to be turned over. If you have received a second request, you need to submit detailed business data, analyses of competitive effects, and a log of privileged material. The number of documents can easily stretch into the millions.
This scale is compounded by timeframes that can be nearly impossible to meet. Hundreds of thousands of new documents may be discovered only days before the production deadline. Any one of these documents could be a sensitive needle in a haystack, capable of creating a disaster if it slips into the wrong hands.
Brute force human review is no match for these demands. Attorneys are seeking the most advanced technology they can leverage to manage two competing risks: meeting government demands on one hand, and guarding against their company’s latent risk on the other.
Growing Volume Versus Growing Demands
As corporate documents proliferate in number and form, the scope of second request documentation defined by the FTC and DOJ is proliferating as well. Antitrust agencies have stepped up efforts to aggressively seek out corporate documents and data, particularly email. In addition, mobile and cloud data, along with documents in many different languages, are adding complexity – and opacity – to the document data.
The heightened demand for ever larger quantities of electronic documents is reflected in the FTC’s revised guidance in its Model for Second Requests (August 2015), which addresses the growing importance of ediscovery in processing the mounting sets of data that corporations must produce. Parties subject to second requests thus risk drowning in documentation if they fail to overhaul currently inefficient approaches to ediscovery.
Privilege Is A Top Priority
Due to the sheer volume of documentation that must be handed over to the government, the potential for exposing privileged information or a “hot document” is a real danger.
The DOJ’s revised Model Second Request (November 2016) acknowledges the elevated risk of producing Personally Identifiable Information (PII) and Protected Health Information (PHI) contained in electronic and cloud-based data, as this sensitive information is often stored in databases with other responsive documents for a Second Request.
While human review, at times combined with keyword searches, remains standard practice, this time-consuming and costly approach is becoming outdated. It also diverts corporate counsel’s energy from primary business tasks and necessitates hiring an army of contract attorneys, making it impossible to ensure consistency across a large team of reviewers.
But the most concerning failing of human review is that it’s risky. Even the most meticulous attorneys can miss instances of privilege. In my experience as the Director of Customer Success and Strategy at Text IQ, I’ve found that the average privilege-finding accuracy of the traditional approach is only 90%. After deploying our solution, our customers see that figure rise to 99.9%.
It’s no wonder that our customers are beginning to look past traditional methods to help them manage all these growing risks.
The Hard Limit of TAR
While Technology Assisted Review (TAR), most notably predictive coding, allows corporations to leverage technology to work faster and more accurately, its limitations reveal the unfulfilled promise of this approach at the privilege review stage of Second Requests.
To accommodate large data volumes, tight deadlines for compliance, and a corporate focus on reducing review costs, TAR has become the ‘new normal’ for Second Request reviews.
Both the FTC and DOJ have also become increasingly comfortable with the use of TAR for Second Request e-discovery. Senior Counsel for the DOJ’s Antitrust Division Tracy Greer stated, “Given the size of its investigations, the Division has been aggressive in promoting the use of technology to reduce the burden and expense of complying with its compulsory process.”
In keeping with this emphasis on e-discovery, the DOJ added instructions to its Model Second Request in 2012 that require organizations to disclose any ediscovery search techniques employed.
Predictive coding requires subject matter experts to hand-code ‘seed sets,’ meaning that the accuracy of this intervention remains dependent on human input. The DOJ Antitrust Division advises that a broad view is therefore critical when implementing a TAR protocol, noting that if the initial reviewer does not code a certain category of information as relevant, documents related to that entire category will not be produced to the Division, resulting in non-compliance.
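The seed-set dependency can be sketched with a toy classifier. This is a hypothetical TF-IDF plus logistic-regression setup, not any vendor’s actual TAR implementation, and all document texts and labels below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set hand-coded by a subject matter expert.
# Note: no document about rebates was coded responsive, so the
# model never learns that this category matters.
seed_docs = [
    "pricing strategy for the merger",       # coded responsive
    "competitive analysis of market share",  # coded responsive
    "lunch schedule for the office party",   # coded not responsive
    "IT ticket about printer issues",        # coded not responsive
]
seed_labels = [1, 1, 0, 0]

# Fit the vocabulary and the classifier on the seed set only.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression()
model.fit(X_seed, seed_labels)

# Score two previously unseen documents from the full corpus.
corpus = [
    "updated pricing strategy memo",    # overlaps the seed vocabulary
    "customer rebate program details",  # category absent from the seed set
]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for doc, score in zip(corpus, scores):
    print(f"{score:.2f}  {doc}")
```

Because “rebate” never appears in the seed set, the second document contributes no learned signal at all, and the model falls back on its prior. In a real review, this is exactly how an entire category of responsive material can silently fall out of the production.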
Another limitation of TAR, particularly with respect to the goal of mitigating latent risk in a Second Request, is its inability to look beyond the four corners of a document. Subject to this constraint, predictive coding cannot delineate connections across the entire ecosystem of documents, where the most vital context, and often privileged information, lies.
Despite promises to help corporations clear the seemingly insurmountable hurdles of a Second Request, TAR lacks the full capacity to perform in this context. So what is the answer?
The AI Solution
Artificial intelligence can address the inherent challenges of second requests, allowing attorneys to catch the right documents with a new level of accuracy, on timeframes and at costs that were impossible before.
In Part III of this blog series, we will explore the promise of AI.