June 15th

Introducing the Reveal AI Pledge

Cat Casey & George Socha

 

"Our organization pledges to promote the responsible use of data when developing AI models - employing trustworthy practices, knowledgeable practitioners, and secure methodologies."

Why the AI Pledge?

Artificial Intelligence (AI) is fundamentally entwined with how we live, work, and communicate in the modern world. This increasingly ubiquitous technology has also hit the legal stage in a major way. Legal professionals now employ a rapidly growing array of AI capabilities to reduce time to insight, aid in decision-making, and improve access to key content. These capabilities include machine learning, natural language processing, computer vision - and AI models.

AI models offer great power. “The AI model library has been an important differentiator in Reveal's technology offering,” noted Dr. Irina Matveeva, Chief of Data Science & AI at Reveal and a member of the team that developed the AI Pledge. “I am excited about innovative ways in which our clients use Reveal's AI models and apply their own bespoke models.”

“Balfour Beatty leverages custom-built AI models, specific to our industry, providing strategic advantages to our compliance, investigatory, and litigation practices,” said Aaron B. Bath, CEDS, RP, National Director of Litigation Management at Balfour Beatty. “By combining our comprehensive expertise in the construction/infrastructure space with Reveal’s leading AI/machine learning technology, we’ve taken ownership of our discovery process to build portable bespoke AI models that exponentially improve our command of what our documents/ESI have to tell us.”

“Building and deploying AI models with our clients is part of the UnitedLex DNA,” remarked Cory Osher, Director, Analytics & AI, UnitedLex. “As early adopters of the full Reveal ecosystem, our team of Data Scientists and Analytics Consultants is able to stay laser-focused on utilizing cutting-edge technologies and workflows to solve important challenges facing our clients today.”

With the expanding adoption of AI models for litigation and legal work generally, a critically important emerging question is how the legal community can responsibly leverage this technology.

“When describing our breakthrough AI Model Library to law firms and corporations, a fair and common question from this audience is, ‘Will my data or models be used without my knowledge?’” said Jay Leib, Reveal’s Chief Strategy & Innovation Officer and another member of the pledge team. “The AI Pledge is designed to preemptively answer that question. Any organization that joins Reveal in this pledge will convey a sense of trust when clients’ data and models are under its care.”

 

Responsible Use of Data

As the question above demonstrates, a key non-technical component of the appropriate creation of AI models is the responsible use of data. Simply put, those who develop AI models should use data responsibly.

“As lawyers, we long have been expected to safeguard client information,” remarked George Socha, SVP of Brand Awareness at Reveal and a third member of the team. “An increasing number of privacy laws and regulations, such as the European Commission’s General Data Protection Regulation (GDPR), as well as the rise of data breaches and other cybersecurity concerns, only accentuate the importance of handling client data responsibly.”

Organizations wanting to make responsible use of data when developing AI models should strive to employ trustworthy practices, knowledgeable practitioners, and secure methodologies.

 

Trustworthy

Never use client data for AI models without explicit permission from the client.

By default, entities building AI models should not use client data. If they intend to use client data, they need to obtain explicit permission from the client.

To obtain explicit permission, the entity building AI models must first provide an authorized client representative with information sufficient for the client to make a well-considered decision. In return, the representative must provide explicit and unambiguous written consent to the model builder.

Avoiding the use of client data, or using that data only after obtaining permission, matters. It matters because client data often contains protected personal information, high-risk content, and other sensitive material. As a result, it is important to assure clients that: (1) by default, such information will not be used when building AI models, and (2) if there appears to be reason to use such information when building AI models, that will happen only after clients have provided explicit and unambiguous written consent.

Requiring explicit permission matters as well. It reduces uncertainty and ambiguity, and hence the chances of misunderstandings, even innocent ones, that could be problematic later on.

Requiring explicit permission also helps address the black box problem, where clients understandably can be concerned that their sensitive data may become available to others by dint of having been used to create a model. No permission, no use; and no use, no risk.

 

Knowledgeable

Build AI models using knowledgeable and trained professionals.

AI models should be built by professionals who are knowledgeable about and trained in the construction of AI models.

Professionals can obtain the requisite knowledge and training in various ways. Examples include platform-specific certification programs, academic programs, and on-the-job experience.

Likewise, professionals can demonstrate that they have the required expertise in various ways, such as by presenting certifications, providing portfolios of models, or submitting test results.

Professionals can develop AI-model-building expertise through on-the-job, hands-on experience. For many of us in eDiscovery, this long was the only path open to us.

Some who build AI models have an academic grounding they can draw upon. At Reveal, for example, all the members of our data science team, which builds AI models, hold MS and PhD degrees in computer science.

Professionals seeking to build AI models responsibly but lacking the requisite expertise can build that knowledge through training and certification programs such as the ones from Reveal. Reveal currently offers nine online certification courses, including one focused specifically on building AI models. Reveal’s AI Modeling Framework Certification course teaches Reveal’s standard approach for building reusable AI models. To build up to that course, you should first take the Reveal Review Certification course, which teaches the fundamentals of document review in Reveal, and then the Reveal AI Certification course, which teaches the fundamentals of Reveal’s AI technology.

 

Secure

Use security measures tailored to the privacy needs of those whose data was used in creating models.

Responsible use of data when developing AI models encompasses constructing and securing AI models in ways that respect and protect the privacy needs of data owners.

While specific privacy needs will vary, entities building models should ascertain what data owner privacy needs apply to a specific model. They then should determine and apply reasonable security measures to protect those needs.

 

Interested in Taking the Pledge?

You too can take the AI Pledge. Contact us for more information.
