AI for Good

Applying AI to society’s biggest challenges

Putting our code where our mouth is

As a leading artificial intelligence company, we believe we have a responsibility to ensure the ethical use of AI. We’re skipping the generic PR statements and instead are deploying AI applications to protect privacy and combat unconscious bias.

Through research, development, and partnerships, we’re working to ethically use AI to address the most pressing challenges today.

Applying AI to Uncover Bias

It’s easier to fight an opponent you can see. But unconscious bias is easy to miss and far more pervasive in the workplace than blatant discrimination. Unconscious bias has been linked to lower wages, fewer opportunities for advancement, and higher turnover.

As part of our mission to use AI for good, we invested months in research and development, working with leading ethicists and business leaders, to build a solution that efficiently and accurately detects unconscious bias. The result: our Unconscious Bias Detector.

How the Unconscious Bias Detector Works

To get started with the Unconscious Bias Detector, only two inputs are needed: an employee list including demographic information and a set of performance reviews. Text IQ’s socio-linguistic hypergraph technology understands context and patterns within the documents at scale, allowing the AI to efficiently and securely analyze the documents and detect patterns of potential bias.
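Text IQ’s socio-linguistic hypergraph is proprietary, but the shape of the two inputs described above can be illustrated with a short sketch. Everything here, including the field names and sample records, is hypothetical; a real deployment would operate on an organization’s own HR exports.

```python
from collections import defaultdict

# Hypothetical inputs: an employee list with demographic fields,
# and performance reviews keyed by employee id.
employees = [
    {"id": 1, "gender": "F", "department": "Engineering"},
    {"id": 2, "gender": "M", "department": "Engineering"},
    {"id": 3, "gender": "F", "department": "Sales"},
]
reviews = [
    {"employee_id": 1, "text": "She is helpful and friendly to everyone."},
    {"employee_id": 2, "text": "He shipped the billing migration ahead of schedule."},
    {"employee_id": 3, "text": "She exceeded her sales quota two quarters running."},
]

def join_reviews(employees, reviews):
    """Attach demographic context to each review so downstream
    analysis can compare language patterns across groups."""
    by_id = {e["id"]: e for e in employees}
    joined = []
    for r in reviews:
        emp = by_id.get(r["employee_id"])
        if emp is not None:
            joined.append({**r, "gender": emp["gender"], "department": emp["department"]})
    return joined

def reviews_by_group(joined, key="gender"):
    """Bucket review texts by a demographic attribute for comparison."""
    groups = defaultdict(list)
    for row in joined:
        groups[row[key]].append(row["text"])
    return dict(groups)

joined = join_reviews(employees, reviews)
groups = reviews_by_group(joined)
```

Once reviews are joined to demographics like this, any language-pattern metric can be compared across groups, which is the basic move behind detecting potential bias.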


High-level Snapshot

The easy-to-read report dashboard provides a clear view of what appears in the performance reviews. This includes a diversity breakdown, a phrasing analysis (e.g., whether manager feedback is more work-focused or personality-focused), and a manager comment sentiment audit with positivity scores.
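The two dashboard metrics above, phrasing focus and a positivity score, can be roughly approximated with simple lexicon counts. The word lists below are invented for illustration only; the actual product presumably relies on far richer contextual models than keyword matching.

```python
# Invented mini-lexicons for illustration only.
PERSONALITY_WORDS = {"friendly", "helpful", "abrasive", "warm", "aggressive"}
WORK_WORDS = {"shipped", "delivered", "quota", "deadline", "migration", "results"}
POSITIVE_WORDS = {"excellent", "exceeded", "helpful", "friendly", "ahead"}
NEGATIVE_WORDS = {"missed", "abrasive", "late", "poor"}

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

def phrasing_focus(text):
    """Return (work_count, personality_count) for one review."""
    tokens = tokenize(text)
    work = sum(t in WORK_WORDS for t in tokens)
    personality = sum(t in PERSONALITY_WORDS for t in tokens)
    return work, personality

def positivity_score(text):
    """Crude sentiment: share of positive words among all sentiment words."""
    tokens = tokenize(text)
    pos = sum(t in POSITIVE_WORDS for t in tokens)
    neg = sum(t in NEGATIVE_WORDS for t in tokens)
    total = pos + neg
    return pos / total if total else 0.5  # neutral when no signal

print(phrasing_focus("She is helpful and friendly to everyone."))            # (0, 2)
print(positivity_score("He shipped the billing migration ahead of schedule."))  # 1.0
```

Aggregating these per-review numbers by demographic group is what turns raw text into the kind of comparative dashboard described above.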


Analysis Drill-down

This granular view shows commonly-used phrases across the organization and a reviewer-centric report.


Objective Approach

The Unconscious Bias Detector applies the strengths of sophisticated AI, namely anomaly detection at scale, within the context of each unique organization to surface potential occurrences of unconscious bias. That detection empowers management to take action against a now visible opponent.
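"Anomaly detection at scale" can take many forms; in its simplest form it might mean flagging reviewers whose scoring gap between demographic groups is a statistical outlier relative to their peers. A minimal z-score sketch, with invented data and a hypothetical threshold:

```python
import statistics

# Hypothetical per-reviewer gaps: mean positivity for one group
# minus mean positivity for another, per reviewer.
reviewer_gaps = {
    "mgr_a": 0.02, "mgr_b": -0.01, "mgr_c": 0.03,
    "mgr_d": 0.01, "mgr_e": 0.31,  # mgr_e rates one group far more positively
}

def flag_outliers(gaps, threshold=1.5):
    """Flag reviewers whose gap lies more than `threshold` standard
    deviations from the mean gap across all reviewers."""
    values = list(gaps.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population stdev over all reviewers
    if stdev == 0:
        return []
    return [name for name, g in gaps.items() if abs(g - mean) / stdev > threshold]

print(flag_outliers(reviewer_gaps))  # ['mgr_e']
```

The key property, which the product description also emphasizes, is that "anomalous" is defined relative to each organization's own baseline rather than an external standard.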

Artificial intelligence tool helps detect unconscious racial, cultural bias in the workplace

ABC7, by David Louie

SAN FRANCISCO (KGO) -- Race and social justice is a key pillar of Building A Better Bay Area. Many companies are looking closely at the diversity of their employees. Now, technology is helping them to detect unconscious racial and cultural bias by managers.

Read the full story


A priceless tool with no price tag

To make as big an impact as possible against unconscious bias, our Unconscious Bias Detector needs to be at work in as many organizations as possible. To remove price as a barrier and to align with our mission of promoting the ethical use of AI, we’re offering the Unconscious Bias Detector at no charge.

Please request more information on how you can join our fight against unconscious bias. This tool is suitable for organizations with at least 1,000 employees.  

Request More Info

Applying AI to Protect Privacy

We are currently working alongside investigators, journalists, and historians to ensure public information remains public, while at the same time protecting individuals’ right to privacy. In a pro bono project, our AI to Protect Privacy tool is assisting partners at Columbia University and the History Lab to identify and redact sensitive and personal information (PI) from hundreds of thousands of documents obtained through the Freedom of Information Act (FOIA) by the Brown Institute for Media Innovation.

FOIA is a critical tool for ensuring public information remains public, an important aspect of democracy, and it has been imperative for journalists reporting in the wake of the COVID-19 public health crisis. However, personal information is easily released in requested documents, either through oversight or because it is not covered under privacy laws. Publishing these documents may not be illegal, but it is unethical, and responsible journalists are working to avoid trampling individual privacy rights while upholding the right to public information.


How AI to Protect Privacy Works


Upload

Millions of documents and unstructured data, like declassified government files and emails, are uploaded into the AI to Protect Privacy dashboard.

Analyze & Redact

The AI then analyzes the data, using context and NLP to identify which information is sensitive, automatically redacts it, and returns the redacted documents.


Publish

Once all sensitive information has been redacted, the documents can be published without infringing individual citizens’ right to privacy.
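The real system identifies sensitive information with contextual NLP; a regex pass over obviously sensitive strings (email addresses, SSN-shaped numbers, US phone numbers) gives a rough sense of the redaction step. The patterns below are illustrative, not exhaustive, and no production redaction should rely on regexes alone.

```python
import re

# Illustrative patterns for obviously sensitive strings; a production
# system would use contextual NLP, not regexes alone.
PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),        # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # SSN-shaped numbers
    re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),  # US phone numbers
]

def redact(text, mask="[REDACTED]"):
    """Replace every match of every pattern with the mask string."""
    for pattern in PATTERNS:
        text = pattern.sub(mask, text)
    return text

doc = "Contact Jane at jane.doe@example.gov or (415) 555-0123. SSN on file: 123-45-6789."
print(redact(doc))
# Contact Jane at [REDACTED] or [REDACTED]. SSN on file: [REDACTED].
```

The hard part the AI handles, and that this sketch does not, is sensitive information with no fixed surface form, such as names, addresses, and health details that can only be recognized from context.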

Too much data, too few resources 

Additionally, our AI will assist Matthew Connelly, professor of history at Columbia University and principal investigator at the History Lab, and his team in developing tools that turn documents into data, exploring history, preserving the fabric of the past, and providing lessons for the future.

Funded in part by a grant from the Mellon Foundation, this History Lab project aims to publish declassified federal documents obtained under FOIA. Connelly’s work may also help transform how the government responds to FOIA requests, a process ripe for innovation: the federal government spends around $500M per year responding to FOIA requests and about $100M per year declassifying documents, with questionable consistency and accuracy and long lead times.

Interested in exploring options to use Text IQ AI for good in your organization?

Please share your information, and one of our Text IQ specialists will be in touch.