Better Walls, Better eDiscovery

Source: IG Initiative

Meet Aaron Crews, Head of eDiscovery at Walmart. In his tenure at the behemoth and beyond, he’s seen the evolution of eDiscovery operations as both in-house and outside counsel. Crews understands firsthand the gravity of building strategies for a massive litigation profile that are repeatable, defensible, transparent, and cost-efficient. At Text IQ, we’re building solutions to address the most critical problems of eDiscovery. We sat down with Crews to learn more about the challenges he faces and what excites him about the future of eDiscovery.

Can you share some thoughts on the growth of the Head of eDiscovery role?

My professional evolution mirrors the evolution of the Head of eDiscovery role in many different arenas. This kind of role often starts with the realization that someone needs to be the strategic mind behind the discovery process deployed in individual cases. Applying best practices within the ambit of each specific case is essentially the entire game at this early stage. However, once that is rolled out on a regular basis, people realize very quickly that a case-by-case strategy is like trying to hold back the ocean with the walls of a sandcastle. The better idea is to figure out proactively “how do we build better walls” and deploy those best practices in an iterative way across litigation and compliance. The Head of eDiscovery at any organization of meaningful size generally starts out on the micro level, on a case-by-case basis, and eventually expands out to play a more programmatic role. For me personally, and with some exceptions, I tend to be less involved these days on the micro level – i.e., in the day-to-day machinations of individual cases – than I used to be. Most of my time is spent on the macro view, creating solutions that apply to all cases – better walls, if you will.

What do you think of when you hear ‘high-stakes data’ or ‘high-stakes legal disasters’?

High-stakes data tends to be both highly organization-specific and highly situation-specific. At the heart of it, high-stakes is usually data that is highly sensitive, nearly always confidential, and if leaked could be disastrous for the party. The organizationally- and temporally-dependent nature of high-stakes data can make it tricky to identify, and so this tends to be a space where good processes, good people, and good technology all working together is paramount.

What is the biggest challenge for Heads of eDiscovery today?

I think there are a few. The first is the need for resources – and the struggle to secure them – both internally and externally. eDiscovery is primarily a cost center inside organizations where “legal” isn’t the main business, and so there is the constant question of how to reduce cost and risk in the face of finite budgets.

A second challenge is the set of cultural barriers that can be present inside organizations and inhibit the progression of an eDiscovery program. There are a lot of organizations where roadblocks prevent eDiscovery programs from happening – such as a culture of outsourcing to law firms, or some institutional memory of trying to insource something and getting burned by it. Working the “change management” necessary to convert hearts and minds can be a full-time job in and of itself.

The challenge of identifying and bringing in the talent that is necessary to conceptualize and move an eDiscovery program forward is also an issue that companies commonly run into.

What about predictive coding – what are the pros and cons?

I think of predictive coding as one of several types of technology tools to assist review. At a super high level, the benefit of predictive coding is that, when used correctly, it can dramatically speed up the process of getting through voluminous data in a variety of contexts. This makes it cost-effective as well.

But there are several problems with predictive coding, the first being that how it’s trained has a significant effect on what comes out of it – you put garbage in, you get garbage out. In the hands of non-savvy users, this can be problematic. However, the real problem with predictive coding is the legal landscape around it. The case law is not very helpful. Some courts have ruled that if you use predictive coding, you have to show the other side all of the documents used to train the algorithm, including the non-relevant/non-responsive docs from the seed set. In litigation, discovery is bounded by relevance, and the production of non-relevant documents can be problematic – you may be giving a law firm that is in the business of suing organizations like yours the fuel for their next lawsuit. And these sorts of problems are starting to percolate into other areas like FTC and DOJ investigations. It can make using these tools very difficult.
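To make the training dynamic Crews describes concrete, here is a minimal sketch of the idea behind predictive coding: a reviewer codes a small seed set, a classifier learns from those labels, and the rest of the corpus is ranked by predicted relevance. This uses a generic scikit-learn workflow; the documents, labels, and model choice are illustrative assumptions, not any particular review platform’s implementation.

```python
# Minimal sketch of predictive coding / technology-assisted review:
# a human-coded "seed set" trains a classifier, which then ranks
# unreviewed documents by predicted relevance. All data is placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents an attorney has already coded (1 = relevant, 0 = not).
seed_docs = [
    "merger negotiations with acme corp q3 pricing terms",
    "lunch schedule for the marketing offsite",
    "draft settlement terms for the acme dispute",
    "office fantasy football standings",
]
seed_labels = [1, 0, 1, 0]

# Unreviewed corpus to be prioritized for human review.
unreviewed_docs = [
    "follow-up on acme settlement draft and pricing exhibit",
    "reminder: parking garage closed friday",
]

# Learn a simple text representation and relevance model from the seed set.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression()
model.fit(X_seed, seed_labels)

# Score the unreviewed documents; higher-scoring documents get reviewed first.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

The sketch also shows why garbage in means garbage out: if the seed labels are wrong or unrepresentative, the ranking the model produces inherits those mistakes.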

How does a tool like Text IQ address eDiscovery problems?  

Well, document review is a problem. At various points in my career I have thrown lots of technologies and tools at the review issue. Some of these have worked well and some have been horrible disappointments. The ones that have been disappointing often suffer from similar flaws: bad inputs or (sometimes and) a myopic view of the content in a document that should be analyzed. But the way Text IQ’s tool has been designed, it’s able to tease out content based on a variety of factors. By weighing those factors, it can separate high-stakes data from irrelevant material. It allows someone savvy to solve a number of high-stakes data problems and avoid the risks that come with them, which helps alleviate a lot of the headache rightfully associated with document review – e.g., I can identify privileged docs, but I can also identify other sensitive information with consistency and speed.
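As a purely hypothetical illustration of what “weighing a variety of factors” can look like – this is not Text IQ’s actual method, and every signal name and weight below is invented – a tool might combine several weak signals into a single risk score instead of relying on any one keyword match:

```python
# Toy illustration (not Text IQ's actual method) of combining weighted
# signals to flag potentially high-stakes documents. Signal names,
# weights, and patterns are invented for this sketch.
import re

# Hypothetical signals with hand-picked weights.
SIGNALS = {
    "attorney_sender": 0.4,   # message involves known counsel
    "legal_terms": 0.3,       # privilege-adjacent vocabulary
    "personal_data": 0.3,     # SSN-like patterns, etc.
}

ATTORNEYS = {"jane.doe@lawfirm.example"}
LEGAL_TERMS = re.compile(r"\b(privileged|attorney[- ]client|work product)\b", re.I)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def score_document(sender: str, body: str) -> float:
    """Return a rough 0..1 risk score by summing whichever signals fire."""
    score = 0.0
    if sender.lower() in ATTORNEYS:
        score += SIGNALS["attorney_sender"]
    if LEGAL_TERMS.search(body):
        score += SIGNALS["legal_terms"]
    if SSN_PATTERN.search(body):
        score += SIGNALS["personal_data"]
    return score

risk = score_document("jane.doe@lawfirm.example",
                      "This is attorney-client privileged advice.")
print(f"{risk:.2f}")  # prints 0.70
```

The point of the toy example is the design choice Crews highlights: looking at multiple facets of a document together, rather than any single clue in isolation, is what lets a tool distinguish genuinely sensitive material from noise.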