eDiscovery Leaders Live - Presented by ACEDS, hosted by George Socha

eDiscovery Leaders Live: Chris Chapman

Written by George Socha | Nov 23, 2021 8:34:08 PM


Chris Chapman, Director of Litigation Support at Dickinson Wright PLLC, joins George Socha, Senior Vice President of Brand Awareness at Reveal, for ACEDS #eDiscoveryLeadersLive.

With a background as a practicing lawyer, eDiscovery attorney, and vice-president and shareholder of an eDiscovery service provider, Chris joined Dickinson Wright in June 2021. As the firm’s director of litigation support, Chris counsels clients on how to structure ESI document review processes to best leverage people and technology, maximize efficiencies, and streamline the review process.

In this eDiscovery Leaders Live episode, Chris discusses TAR validation processes and talks about the importance of people as part of the eDiscovery process.

Key Highlights

  • [1:47] Data validation questions raised by a recent In Re Diisocyanates Antitrust Litigation Report and Recommendation.
  • [2:51] How to know when you are done training your TAR model.
  • [5:37] Using TAR 1.5 to get good insight into what is in your dataset.
  • [6:33] Evaluating progress within prioritized reviews.
  • [7:49] Richness estimates and hard targets, and the challenges they pose.
  • [10:13] A more effective ESI protocol.
  • [12:14] How transparent transparency should be.
  • [13:32] How to know when you are done training your model: looking at rank distribution (see the sketch after this list).
  • [15:49] Why to sample from the whole dataset, and frequently.
  • [18:26] In Re Diisocyanates Antitrust Litigation and how you validate your process and your recall.
  • [22:22] The importance of people in eDiscovery generally and with TAR specifically.
  • [25:24] Structuring reviews to maximize reviewer effectiveness.
  • [27:32] The importance of facilitating effective reviewer feedback.
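
One quick way to picture the rank-distribution check Chris describes at 13:32 is to bucket the model's relevance scores and see where documents pile up; as training converges, documents tend to separate toward the high and low ends rather than clustering in the ambiguous middle. The sketch below is only an illustration of that idea, not anything from the episode; the scores and bucket width are made up.

```python
from collections import Counter

# Hypothetical relevance scores (0-100), one per document in the collection.
scores = [87, 91, 12, 5, 78, 33, 95, 8, 60, 14, 22, 99, 41, 3, 70, 93, 6, 88]

# Bucket scores into deciles and print a crude text histogram. A model whose
# training has stabilized typically shows mass at the extremes, with few
# documents stuck in the middle ranks.
buckets = Counter(score // 10 * 10 for score in scores)
for low in range(0, 100, 10):
    print(f"{low:3d}-{low + 9:3d}: {'#' * buckets.get(low, 0)}")
```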

Key Quotes by Chris Chapman

    • “There are multiple approaches you can take [to determining when you are done training your TAR model] and I like to layer them when I am working on a complex matter like this. Rather than just a straight-up richness sample, you can make a control set and conduct a review of that, and that allows you to get more insight into your progress. Rather than just looking for a number – 2,100 documents that you found – you can monitor your recall, your precision, and see how the model is performing as you work through training it.”
    • “I think that anytime you set hard targets like that, whether it’s a hard recall target, whether you say you are going to hit a certain elusion target, or whether you say you are going to find less than five percent responsive documents in your last two batches of prioritized review, you are just predicting and guessing as to what your dataset is going to look like. You can really push yourself into a corner if you are trying to hit a target like that and you have a dataset that just does not lend itself to that.”
    • “I think agreeing on what calculations you are going to use to assess the effectiveness of your training, and what metrics you are going to share at the end of the process – I think that’s what you put into your protocol, and make sure both sides agree: this is what we’re going to do, and this is what we’re going to get out of the process at the end.”
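
The metrics running through these quotes – recall, precision, and elusion – are all simple ratios over reviewed samples. As a rough sketch (not code from the episode; the counts below are hypothetical), they can be computed like this:

```python
def precision(true_positives: int, predicted_positives: int) -> float:
    """Of the documents the model flagged responsive, the share that actually are."""
    return true_positives / predicted_positives

def recall(true_positives: int, actual_positives: int) -> float:
    """Of all truly responsive documents, the share the model found."""
    return true_positives / actual_positives

def elusion(responsive_in_sample: int, sample_size: int) -> float:
    """Estimated rate of responsive documents remaining in the un-reviewed
    (null) set, based on a random sample drawn from that set."""
    return responsive_in_sample / sample_size

# Hypothetical control set: 500 human-coded documents, 100 truly responsive.
# The model flags 120 of them as responsive; 90 of those flags are correct.
print(f"precision: {precision(90, 120):.2f}")  # 0.75
print(f"recall:    {recall(90, 100):.2f}")     # 0.90

# Hypothetical elusion test: 300 documents sampled from the discard pile,
# 6 of which turn out to be responsive.
print(f"elusion:   {elusion(6, 300):.2f}")     # 0.02
```

A control set reviewed up front gives you stable denominators for these ratios throughout training, which is why, as Chris notes, it offers more insight than chasing a single hard number that may not suit the dataset's actual richness.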

Connect with Chris

Chris Chapman on LinkedIn