Judge Peck on the IoT, TAR, and Avoiding 'Discovery About Discovery'

No one has ever accused Judge Andrew J. Peck of being too timid when it comes to pushing the bar forward, whether it’s over boilerplate objections or the benefits of certain discovery technologies. In 2011, for example, Judge Peck wrote that too few attorneys had taken up computer-assisted review, waiting, perhaps, for a court's seal of approval. Just a few months later, he gave them just that in Da Silva Moore v. Publicis Groupe, making Judge Peck the first judge to embrace TAR from the bench.

We sat down with Judge Peck to discuss technology, his distaste for “discovery about discovery,” and the future challenges that the next generation of eDiscovery judges will face. This is the second part of our two-part interview. Part I, covering the decline of trials, the role of proportionality, and where lawyers need to “wake up,” can be found here.

Logikcull: Turning now to more technological issues: When it comes to eDiscovery, one of the big challenges that we think about a lot here -- and I'm sure you have come across it when making decisions on proportionality or any other aspect of the discovery process -- is the limited visibility that courts and parties have into the actual costs of discovery and how the various eDiscovery technologies work. For example, you have situations where judges or parties are faced with a claim that a party can review and produce a huge amount of data by a specific deadline, but they're going to use some sort of proprietary “black box” algorithm to accomplish that. In these instances, how do courts ensure that the information on which they're basing their discovery decisions is accurate?

Hon. Andrew J. Peck: OK. Well, first of all, it is not that the discovery decisions produce accurate information as much as that the process -- and by process, I really do mean the combination of technology, process or procedure, and legal talent that goes with it -- will do a reasonable job in getting the information responsive to the discovery requests. The concept is not perfection; it has always been reasonableness.

As I said in Rio Tinto -- among my eDiscovery predictive coding trilogy of cases, Da Silva Moore v. Publicis, Rio Tinto v. Vale, and Hyles v. City of New York -- TAR and predictive coding, using those two terms interchangeably, should not be held to a higher standard just because one can get certain metrics from the predictive coding system that one could not get in the old days.

"[I]t is not that the discovery decisions produce accurate information as much as that the process... will do a reasonable job in getting the responsive information the discovery requests."

I think it is clear from all the research that has gone on, including the TREC Legal Track series and other research done by Maura Grossman and Professor Gordon Cormack of the University of Waterloo and others who have studied this area, that old-fashioned, ‘eyes-on-document’ review is not the gold standard. It's what those of us of a certain age all grew up on, because that is all we had.

Way back when there was no technology, it was all paper and you’d pull files from various people’s offices at the client. You reviewed them in the random order in which they were pulled out. You didn't do it by subject matter. You didn’t do it by date. None of that was really possible. Indeed, I liked to joke that the big technological advancement when I was an associate was in the mid-80s when 3M came out with Post-it Note stickers that we could put on a document.

So, we went from that to the use of keywords. That’s better than not having any technological advancement at all, but there are also well-documented problems with keywords. They are over-inclusive, in that they will find documents that are not relevant or responsive to the discovery requests but just happen to have the same keyword in them. They are also under-inclusive, because of synonyms and the many different ways the same thing can be said. If you put in the word ‘dog’ but you don't put in ‘pitbull,’ you may not get a document talking about being bitten by a pitbull that doesn’t happen to use the word dog somewhere in it.
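
To make this concrete, here is a minimal, purely illustrative Python sketch of keyword over- and under-inclusion. The three documents and the keyword are invented for this example, not drawn from any case.

```python
# Invented documents for illustration only.
documents = {
    "doc1": "The neighbor's dog barked all night.",           # matches 'dog', but irrelevant to a bite claim
    "doc2": "Plaintiff was bitten by the pitbull on May 3.",  # the responsive document, but it never says 'dog'
    "doc3": "Hot dog stand permit renewal attached.",         # matches 'dog', clearly off-topic
}

keyword = "dog"
hits = {doc_id for doc_id, text in documents.items() if keyword in text.lower()}

print(sorted(hits))  # ['doc1', 'doc3'] -- doc2, the truly responsive document, is missed
                     # (under-inclusive), while doc1 and doc3 are swept in (over-inclusive)
```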

In my William A. Gross case, leaning on the work that had been done by Judges Grimm and Facciola in their leading decisions on keywords, I gave my first wake-up call to the bar: if lawyers are going to use keywords, they need to do it smartly and interactively, with testing and quality control.

Then we got to the point where predictive coding was available. While there is a lot of talk about “black box” and all of that, I suspect most of us very freely get on an airplane without having a faint clue as to the aerodynamics that allow such a heavy object to not only take off but, most of the time, then land where it's supposed to land. It seems to me there is now enough information out there, both in the judicial opinions and in the computer research that is mentioned in many of those opinions, for judges to be able to make reasoned decisions without needing Daubert hearings for expert testimony per se.

"While there is a lot of talk about 'black box' and all of that, I suspect most of us very freely get on an airplane without having a faint clue as to the aerodynamics that allow such a heavy object to not only take off but, most of the time, then land where it's supposed to land."

As far as I'm concerned, and as I said in Rio Tinto, looking at the caselaw, to me it is now black-letter law that if a responding party wants to use predictive coding, the courts will allow it.

That doesn't mean that all predictive coding systems are the same, or that because one vendor's system was approved in case one, that system will work perfectly when used in case two in the future. But lawyers should be much less reluctant to go down the predictive coding route now than they were back before Da Silva Moore in 2012 or even before Rio Tinto in 2015. For those who are interested, the Sedona Conference website has the Sedona TAR Case Law Primer, which discusses the important TAR case law both in the United States and in the jurisdictions abroad, Ireland, England, and Australia, that have also approved the use of predictive coding.

Logikcull: In terms of TAR, reliability, it seems, can be influenced by a variety of factors, as you mentioned. Anything from user training, to people knowing how to use this technology correctly, to the specific features of a vendor's product, to what documents are included in the document corpus that you're basing your predictive coding on -- all sorts of variables are at play. So, in making decisions around TAR or dealing with parties’ motions around TAR, what duty, if any, does a court have to ensure that the producing party manages these variables properly and that claims made about the abilities of TAR are, if not independently validated, at least reliable?

Peck: I would say that probably the best way to deal with this is whether the requesting party can show gaps in the production. In other words, I am not a big fan of discovery about discovery absent there being some indication that there is a problem. It's no different from the old days, when how one trained the reviewers would make a big difference in what was produced: one reviewer might decide a gray-area document was not responsive, while another reviewer, perhaps being more ethical, would find that it is responsive. We’ve had these problems forever. The fact that one can do a deeper dive into predictive coding technology doesn't mean one should.

"I am not a big fan of discovery about discovery absent there being some indication that there is a problem... The fact that one can do a deeper dive to predictive coding technology doesn't mean one should."

I would also note that the technology constantly changes. In Da Silva Moore, the technology was what the vendor community and blogs now call TAR 1.0, a system where you had to have a training set and repeated rounds of training, at which point the system was stabilized and would never learn anything new. TAR 2.0, again using the vendor advertising way of putting it, refers to systems that use continuous active learning; these raise fewer concerns about the training set because every document reviewed continues to train the system. It’s not a case where the system is stabilized and then forgotten. That also means that if you start off with material from custodians A and B and additional material is later gathered from custodians C and D, it can be inserted into the TAR protocol without having to retrain the system from scratch, because what the system already knows will be applied. If something new is showing up from the new custodians, it will retrain the system as those documents are bubbled up to the reviewer. So, there is less to fight about with the training set.

In addition, as some of the Grossman and Cormack research shows, one can deal with some of those concerns by letting the requesting party provide documents to use in the initial “training” of a CAL system. So, if they know of certain documents, they can give those and say, “OK, these ten or these hundred, whatever it may be, should be marked as relevant,” and use that to train the system. Indeed, one can take the good old-fashioned Rule 34 request for production, strip out some of the extra verbiage, and use a Word version of it as a training document as well.
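
As an illustration of the continuous active learning workflow described above, here is a minimal Python sketch. It is an assumption-laden toy, not any vendor's product: scikit-learn stands in for the proprietary model, the six-document corpus is invented, and the human_review function is a placeholder for attorney review. The requesting party's known-relevant documents (and, as suggested above, a stripped-down text version of the Rule 34 requests) supply the initial seeds.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy document population; a real matter would have thousands or millions of documents.
corpus = [
    "pricing agreement between the parties for widget sales",   # 0: seed, relevant
    "holiday party catering menu and RSVP list",                # 1: seed, not relevant
    "email discussing the widget pricing agreement terms",      # 2
    "facilities request for new office chairs",                 # 3
    "draft amendment to the widget sales agreement",            # 4
    "IT notice about scheduled server maintenance",             # 5
]
X = TfidfVectorizer().fit_transform(corpus)

# Seed labels: documents the requesting party says are relevant (doc 0) plus a
# clearly non-relevant example. A text version of the Rule 34 requests could be
# appended to the corpus and labeled relevant here as well.
labels = {0: 1, 1: 0}

def human_review(doc_id):
    """Stand-in for attorney review; in practice a person supplies this label."""
    return 1 if "widget" in corpus[doc_id] else 0

for round_num in range(4):
    reviewed = list(labels)
    clf = LogisticRegression().fit(X[reviewed], [labels[i] for i in reviewed])
    unreviewed = [i for i in range(len(corpus)) if i not in labels]
    if not unreviewed:
        break
    scores = clf.predict_proba(X[unreviewed])[:, 1]   # estimated probability of relevance
    next_doc = unreviewed[int(scores.argmax())]       # highest-ranked document goes to review next
    labels[next_doc] = human_review(next_doc)         # the new label keeps training the model
    print(f"round {round_num}: reviewed doc {next_doc}, labeled {labels[next_doc]}")
```

Because every reviewed document feeds back into the model, documents later collected from new custodians can simply be added to the pool and scored; there is no separate, frozen training phase that has to be redone.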

So, there are many ways to make sure the system is running appropriately. Then, at the end, the requesting party can use the various analytical tools available to see whether Holmes was talking to Watson and all of a sudden there’s a two-month gap, which might well be shown to have occurred right after an email saying, “Let’s take this discussion offline.” That would be an indication either of gaps in the production or gaps in there being anything to produce, because the parties went back to the good old-fashioned face-to-face or telephone conversation.
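
The kind of gap check described here can be as simple as sorting message dates between two custodians and flagging unusually long silences. A minimal sketch, with invented dates and an arbitrary 30-day threshold:

```python
from datetime import date, timedelta

# Hypothetical sent dates of produced emails between two custodians.
sent_dates = sorted([
    date(2016, 1, 4), date(2016, 1, 11), date(2016, 1, 19),
    date(2016, 1, 27), date(2016, 4, 2), date(2016, 4, 9),
])

threshold = timedelta(days=30)  # what counts as a suspicious silence is a judgment call
for earlier, later in zip(sent_dates, sent_dates[1:]):
    if later - earlier > threshold:
        print(f"{(later - earlier).days}-day gap between {earlier} and {later}")
        # prints: 66-day gap between 2016-01-27 and 2016-04-02
```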

I would try to avoid any sort of deep dive into the so-called “black box” of a particular vendor -- both because the vendor is not likely to share that very openly and because, frankly, I don't think I could understand it. I probably know more than many judges about predictive coding, but I'm not a computer major; I’m not in that field. If you start saying, “in order for judges to approve that, they've got to understand it and have experts explain it,” no one is going to use it. The cost of doing all of that will overwhelm the savings from using TAR instead of using keywords.

"If you start saying, 'in order for judges to approve that, they've got to understand it and have experts explain it,' no one is going to use it. The cost of doing all of that will overwhelm the savings from using TAR instead of using keywords."

Logikcull: You were the first judge to embrace TAR, but you recently ruled in Hyles v. New York that a requesting party can't force the responding party to use TAR. What should the courts’ role be in accommodating versus promoting TAR? Where do you see that balance or the borders?

Peck: In terms of accommodating, as I say, if the responding party wants to use TAR, courts should allow it, and courts do allow it. But I'm a firm believer in Sedona Principle Number Six, which says that the responding party is in the best position to decide what methodology it will use to produce the material that has to be produced. That was my reasoning in Hyles. Indeed, as you can see in the opinion, and as was even clearer in the transcripts of the pre-motion conference, I kept saying, “TAR is much better. Why on earth don't you want to use it?” Part of their answer was that they thought the plaintiff’s counsel would raise too many issues with it. That’s fine. Now, the place where this will change is, first of all, we may get to a point five years from now where pretty much everyone is using TAR, and that will be the state of the industry. At that point, for someone not to use it, they would need a good reason.

The other factor is cost and proportionality. It seems to me that while I will not, currently, force someone to use TAR, if they say, “Judge, the request is disproportionate because it's going to take us 500 hours of associates’ time billing at $500 an hour and the case isn’t worth it,” and I see that they could do it with ten reviewers or fewer in ten hours using technological assistance, they're not going to win the proportionality or cost-shifting argument. They can be as inefficient as their client will allow them to be, but that inefficiency is not going to be used against the requesting party.
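
For a rough sense of the arithmetic behind that argument, using only the figures floated in the answer and assuming, for illustration, the same hourly rate on both sides:

```python
# Hypothetical figures from the example above, not from any actual case.
linear_review_cost = 500 * 500       # 500 associate hours at $500/hour = $250,000
tar_assisted_cost = 10 * 10 * 500    # 10 reviewers x 10 hours at the same assumed rate = $50,000
print(linear_review_cost, tar_assisted_cost)   # 250000 50000
```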

Logikcull: We spoke to your colleague Judge Francis earlier, and one of the concerns he raised was a growing divide in technological savvy among practitioners. Some people know eDiscovery systems and technology, and how to get things done with them, inside and out. Then you have people who view that as something they don't need to know, something someone else can handle, or just something that hasn't come up in their practice very often. Do you think this disparity exists? Is it growing? Are people becoming more technologically savvy, or are we seeing a split?

Peck: I do think that split exists. On the one hand, the gap is closing to a certain extent, though not as fast as one might wish. It certainly is closing with respect to email. But for all of the progress we're making on email, when we then get to newer or other technologies, the gap goes back to being just as wide as it probably was back in 2006.

Now, younger people are starting to think email is for sixty-plus-year-olds. They're using text messages or various application messaging systems to get both their personal and business communication across. I think lawyers are not nearly as proficient with those sorts of things.

"[F]or all of the progress we're making on email, we then get to newer technologies or other technologies, the gap goes back to being just as wide as it probably was back in 2006."

Then, of course, moving forward, we're going to have more of the “Internet of Things” and issues about: Is there relevant information in those devices? Is the information actually in the device or is it in the company headquarters’ servers? How do you get it? What is kept? How long is it kept? How do you preserve it if you want to preserve it? All the issues we’ve been through with email, which are now largely resolved by judicial decisions or industry practice even if some lawyers still don’t get it, will come up again. We’re in the infancy of the IoT and other technologies, so this is going to be a problem for years and years to come.

Logikcull: Pretty much every judge that we've spoken to recently has brought up the “Internet of Things” as a major concern regarding discovery. It definitely seems to be on everyone's mind. I think Cisco has estimated that by 2020 the IoT will be generating 600 zettabytes of data per year, which is 600 trillion gigabytes. When you think about that sort of massive amount of data, it creates huge issues around collection, preservation, and production.

To what extent do you think data management issues are going to impact practice going forward? Not just in cases where large amounts of data are already at issue, but in new data and discovery issues in everything from personal injury to divorce. There was even a criminal case out of Ohio where a man's pacemaker data was used to charge him with arson. So, it seems like these issues are spreading farther and farther. Is the legal system, is the bar, prepared for this growth?

Peck: Some members of the bar are prepared for it and many others are not.

It’s going to be up to businesses to figure out how to deal with this first. It may be that while some of the IoT data exists, it’s not kept for very long. I don't know how long Nest, for example, keeps a history of your thermostat settings beyond your current setting. Presumably they know, and they're not necessarily going to change their practices in order to accommodate eDiscovery. It's going to be interesting, is about all I can say.

Probably the next generation or two of “eDiscovery judges” are going to have to deal with it. As those of us who were earliest in this sphere -- Judge Facciola, Judge Francis, Judge Maas, Judge Scheindlin, me, etc. -- have retired or will be retiring in the next few years, the younger generation will have to deal with all of that.

Just as we all struggled with email and the first wave of eDiscovery, they will struggle. The next generation of Facciolas will come out with a brilliant decision about some IoT issue that other judges will then follow.

This post was authored by Casey C. Sullivan, Esq., who leads education and awareness efforts at Logikcull. You can reach him at casey.sullivan@logikcull.com or on Twitter at @caseycsull.

Want to see Logikcull in action? Let's chat.

Our team of product specialists will show you how to make Logikcull work for your specific needs and help you save thousands in records requests, subpoenas, and general discovery.