Understanding Recent Changes to Protected Health Information Governance and How Artificial Intelligence Can Help

Regulations surrounding healthcare data privacy are constantly changing. With the emergence of new technologies, the COVID-19 pandemic, and other privacy complications, it’s more important than ever that companies handling protected health information (PHI) do their due diligence and ensure they comply with these ever-changing regulations.

When it comes to PHI and data privacy, the most referenced compliance regulation is the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Though HIPAA was enacted 25 years ago, it has evolved and expanded to address advancements in technology and the way data is collected and stored.

Recently, the Office for Civil Rights (OCR) of the U.S. Department of Health and Human Services issued new HIPAA guidance addressing necessary public health disclosures in light of the COVID-19 pandemic. Similarly, discussions about the privacy exposure created when private companies collect and use data from wearable technology have resulted in a Senate bill proposal.

In addition, in 2020, the “Stop Hacks and Improve Electronic Data Security Act” (SHIELD Act) became effective in the state of New York. The act is intended to protect New York residents whenever their private information is collected.

The ways data can be collected, stored, and utilized are growing at a rapid pace. Companies that deal in this information should understand these new regulations, how they will affect their data security programs, and how artificial intelligence (AI) can be used to prevent and respond to data breaches.

Defining New Types of Data

PHI is formally defined as individually identifiable health information, held or maintained by a covered entity or by business associates acting on its behalf, that is transmitted or maintained in any form. Although this definition is broad, and HIPAA is the most far-reaching health privacy law in the United States, it covers only information created, received, or maintained by or on behalf of covered entities such as healthcare providers. Patient- or user-generated data shared with private companies via apps or other means is not subject to HIPAA rules.

This can promote confusion among consumers and private technology businesses alike. The rapid growth of wearable trackers, like the Fitbit or Apple Watch, allows people to track fitness goals and health activities. While this may motivate consumers and improve their health, a lax approach to managing the resulting data can be a threat to privacy. As user activity grows, so does the amount of data these trackers generate. According to a Pew Research Center survey conducted in 2019, roughly 21% of American adults (about one in five) say they regularly wear a smartwatch or wearable fitness tracker, and those users will likely produce millions of health-related data points in their lifetimes.

So the main question that arises with wearable trackers is: whose responsibility is it to safeguard the data? As a rule, HIPAA requirements apply only to covered entities and business associates, and businesses like Apple don’t fall into either category. Should additional regulation require them to protect PHI just like a doctor’s office?

The popularity of wearable technology has prompted some members of Congress, as well as data privacy advocates, to consider new regulations. Two senators recently proposed a bill called the Smartwatch Data Act, which would prevent businesses that amass data from wearable tech from selling, sharing, or otherwise using the information without consumer (patient) consent. This means that even though HIPAA does not currently regulate these trackers, companies are not free of all regulatory burdens. In addition to failing to establish processes for handling health information, many technology companies fail even to provide users with clear terms surrounding standard data usage. One study found that 81 percent of apps for depression and smoking cessation share data for marketing and advertising purposes, while others use health data to develop new products or sell it for use in clinical trials.

Consumers’ lack of understanding can lead them to believe the data they share with apps and related devices is more secure than it really is. Lawmakers largely agree that private companies need to adopt health privacy programs that provide accountability for the handling of the types of health data that fall outside the bounds of HIPAA.

Thankfully, businesses can get ahead of these coming regulatory changes through the use of AI. AI software can assist with data encryption, as well as with the evaluation and control of device access. Additionally, if a breach of users’ health information does occur, AI solutions can speed up the assessment and user notification process required by state regulations.

COVID-19, Public Health & Employers

Adding to the confusion over the best privacy approach for wearable tech and ambiguous health data types, a recurring question among both small and large organizations when COVID-19 first hit the United States was whether employers would become subject to HIPAA by taking employee temperatures or collecting COVID-related medical information from employees. As with wearables, the initial answer was that HIPAA does not apply to employers; however, the situation turned out to be more complicated than expected.

While an employer would generally not qualify as a covered entity, and thus would not be subject to the HIPAA Privacy, Security, and Breach Notification Rules, an employer-run health plan or COVID testing site would fall under these guidelines. Ensuring compliance often falls on the employer, as a sponsor of the plan, or on a third-party administrator. In practice, this can mean not requesting or accepting unauthorized employee test results, as well as not disclosing an employee’s COVID status to others.

What is most important is that employers carefully analyze which activities likely fall outside of HIPAA and which activities may impose new HIPAA obligations. Using advanced technology, like AI, to categorize and properly label PHI can provide peace of mind in data protection for employees and employers alike.

Along with employers, health organizations and government agencies all over the world are using technology to communicate about, track, monitor, and predict the spread of COVID-19. Data has proven to be a valuable resource to help control the spread of infection during the global pandemic. To aid in this usage, several countries have passed emergency legislation to permit the use and disclosure of personal data to combat the spread of the virus.

However, these temporary accommodations should not be construed to mean the removal of privacy rights. While we are seeing unprecedented flexibility to make research and planning easier for front-line workers and epidemiologists, the pandemic has not resulted in any permanent changes to HIPAA. Businesses should err on the side of caution and know that HIPAA rules remain in effect and the requirements of the HIPAA Privacy Rule, Security Rule, and Breach Notification Rule remain unchanged.

Data security professionals should treat the disclosure, use, and storage of COVID-related PHI in the same manner as any other PHI.

New Developments in HIPAA & State Regulations

HIPAA

In December 2020, OCR announced proposed changes to the HIPAA Privacy Rule stating that the aim of the changes is “to support individuals’ engagement in their care, remove barriers to coordinated care, and reduce regulatory burdens on the healthcare industry.”

The new HIPAA regulations include providing greater data access for patients and creating pathways for secured sharing and disclosure of information, as well as clarifying term definitions and storage requirements.

Issues like patient access to electronically stored information, enhanced flexibility for disclosures in emergency circumstances such as public health crises, and reduced administrative burdens on data managers are all addressed and may require a review and update of your existing security procedures.

New York SHIELD Act

Alongside these HIPAA developments, in July 2019 the state of New York adopted the “SHIELD Act” to expand its existing data breach and cybersecurity laws. New York has stated that it intends to have the SHIELD Act and HIPAA work in concert to ensure that data breaches are reported and that patient information is kept secure.

The act encompasses private information, which is defined as personal information (name, phone number, other contact information) plus “data elements,” which include more sensitive private data like Social Security numbers and financial information.

It also includes two newer types of data elements: (1) biometric information, meaning data generated by electronic measurements of physical characteristics, such as fingerprints; and (2) online account credentials, such as usernames, passwords, and security questions. The act also expands the definition of a breach from unauthorized acquisition to include unauthorized access.

The SHIELD Act, at its core, also requires that entities implement and maintain “reasonable administrative, technical, and physical safeguards.” Some examples are included below:

Administrative Safeguards
- Designation of privacy professionals to manage your company’s security program.
- Workforce training.
- Regular risk identification and corrective measures.
- Consistent security audits.

Technical Safeguards
- Updates to networks and software programs, as well as information transmission and storage procedures.
- Breach detection and response.
- Anti-virus software, two-factor authentication, encryption, firewalls, and remote wipe capabilities.

Physical Safeguards
- Hardware access and password security.
- Protection against unauthorized access to storage locations and disposal sites.

Safeguards should be reasonable in light of your business’s size and complexity, the nature and scope of its activities, and the sensitivity of the personal information it handles.

The SHIELD Act’s provisions apply to any person or entity — whether conducting business in New York or not — that owns or licenses electronic data that includes private information of a New York resident. This means it can apply to any business from healthcare providers to tech companies to e-commerce sites.

The OCR regularly issues information on common compliance violations it sees across various sectors. A good starting point for a business looking to ensure compliance is to be aware of common missteps. Poor risk management practices, lack of HIPAA procedures, failure to standardize business associate agreements, unauthorized PHI disclosures, and an absence of safeguards all caused businesses to incur HIPAA fines in 2020, even with that year’s more forgiving breach accommodations.

How to Limit Risk by Integrating AI

HIPAA compliance is not synonymous with data security. Meeting HIPAA requirements is often just the minimum needed to keep healthcare data secure across multiple networks.

Businesses that handle PHI should not only have a professional on staff who understands the management of patient data, but should also utilize effective technology to build the most secure data protection program possible.

Since managing this amount of data can be overwhelming and prone to human error, the use of artificial intelligence, machine learning, and other trusted technologies has been shown to greatly reduce the time and cost of ensuring compliance. Additionally, maintaining secured systems can prevent a breach in most cases, but where a breach is unavoidable, AI can help with an immediate and appropriate response.

Areas where AI can streamline your security methods include automated data recognition and de-identification, consistent system controls and monitoring, staff training and the identification of bad actors, and rapid process updates for changing regulations.

De-identification

To alleviate breach concerns, cybersecurity experts and tech companies have introduced the process of de-identifying sensitive patient data with the use of AI. The de-identification process removes any identifying information such as names, contact information, geographical data, Social Security/account numbers, biometric identifiers, and any other information that keeps PHI from being truly anonymous.

AI allows businesses to quickly and consistently de-identify or redact healthcare data, enabling complete and proper transmission and storage.
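As a rough illustration (a minimal sketch, not any particular vendor’s pipeline), a rule-based redaction pass in Python might look like the following; the pattern set and the deidentify helper are assumptions made for this example, and production systems typically layer trained named-entity models on top of rules like these to catch names and other free-text identifiers:

import re

# Hypothetical pattern set, for illustration only; real de-identification
# combines rules like these with trained named-entity recognition models.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def deidentify(text):
    # Replace each matched identifier with a bracketed placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub("[" + label + "]", text)
    return text

print(deidentify("Patient reachable at 555-867-5309, MRN: 84921733."))
# -> Patient reachable at [PHONE], [MRN].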

System Controls

AI can also be used to automatically develop system rules and algorithms governing user rights and privileges, audit controls, data tamper-proofing, and secure transmission.
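As a minimal sketch of the idea (the roles, permissions, and authorize helper here are hypothetical, not any specific product’s API), user rights and audit logging can be expressed as a policy table that is checked, and recorded, on every access:

import logging

# Hypothetical role-to-permission map; a real system would load this from
# a governed policy store and enforce it at every data-access boundary.
ROLE_PERMISSIONS = {
    "physician": {"read_phi", "write_phi"},
    "billing": {"read_phi"},
    "analyst": {"read_deidentified"},
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_audit")

def authorize(user, role, action):
    # Check the role's permissions and write an audit-trail entry either way.
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("user=%s role=%s action=%s allowed=%s",
                   user, role, action, allowed)
    return allowed

authorize("jsmith", "billing", "write_phi")  # denied, and logged for audit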

Workforce Training

Security measures such as standards for login credentials, proper storage of hardware like laptops, and spotting phishing emails and malware attempts are as much a matter of software implementation as of staff training.

Using AI incident reports and statistics to identify bad actors and develop your training policies is perhaps the best front-line way to protect access to your data. A good AI tool and service provider can also guide security professionals with a framework for developing well-defined, well-documented process flows.
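As one concrete illustration (a toy sketch that assumes scikit-learn is installed and uses made-up session features), an off-the-shelf anomaly detector can surface unusual access sessions for human review:

from sklearn.ensemble import IsolationForest

# Each row is one session: [records accessed, hour of day, failed logins].
# The numbers are illustrative only.
sessions = [
    [12, 10, 0], [8, 9, 0], [15, 11, 1], [10, 14, 0],
    [9, 13, 0], [11, 10, 0],
    [400, 3, 6],  # bulk access at 3 a.m. after several failed logins
]

model = IsolationForest(contamination=0.15, random_state=0).fit(sessions)
for row, flag in zip(sessions, model.predict(sessions)):
    if flag == -1:  # -1 marks an outlier
        print("Flag for review:", row)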

Regulatory Compliance

The privacy legislation landscape continues to evolve, which makes it challenging to stay up to date on which regulations your business must comply with and when. Sophisticated AI software can remove the guesswork and provide reports of whose data is affected by a breach and what compliance factors must be met.
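A simplified sketch of that idea: group the people affected by a breach by jurisdiction and attach each jurisdiction’s notification rule. The rule table below holds placeholder values, not actual regulators’ deadlines:

# Placeholder rule table; actual regulators and deadlines vary by statute
# and must be confirmed for each jurisdiction involved.
NOTIFICATION_RULES = {
    "NY": {"regulator": "NY Attorney General", "deadline_days": 30},
    "CA": {"regulator": "CA Attorney General", "deadline_days": 45},
}

def notification_plan(affected_people):
    # Count affected residents per state and pair each count with that
    # state's (placeholder) notification rule.
    plan = {}
    for person in affected_people:
        state = person["state"]
        entry = plan.setdefault(state, {"count": 0, **NOTIFICATION_RULES.get(state, {})})
        entry["count"] += 1
    return plan

print(notification_plan([{"state": "NY"}, {"state": "NY"}, {"state": "CA"}]))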

Data Breach Assessment and Notification

Often a data security incident means businesses must use valuable resources to verify what data was compromised and who needs to be notified. The amount of data that needs to be reviewed can cause a bottleneck in the breach assessment and notification process. AI can assist by streamlining the process and providing decision-makers the information and insights they need to ensure timely notifications.
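For instance, a first-pass triage might scan compromised records for the sensitive data elements that trigger notification. This is a minimal sketch, and the field names are assumptions made for the example:

# Hypothetical field names; a real assessment maps record fields against
# the data-element definitions in each applicable breach statute.
SENSITIVE_FIELDS = {"ssn", "account_number", "biometric_id", "password"}

def triage(records):
    # Keep only the records that contain a sensitive data element,
    # i.e., the records whose subjects must be notified.
    return [r for r in records if SENSITIVE_FIELDS & set(r)]

compromised = [
    {"name": "A. Patient", "ssn": "***-**-****"},
    {"name": "B. Patient", "newsletter_optin": True},
]
print(triage(compromised))  # only the first record triggers notification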

Preventative security measures are important not only from a liability perspective but also for due diligence. In January 2021, bill HR 7898 amended the Health Information Technology for Economic and Clinical Health Act (HITECH) to introduce a “safe harbor” for HIPAA-covered entities and business associates that adopt recognized security best practices but, despite best efforts, still experience a data breach. The bill requires OCR to consider the reasonable security practices a business had in place before a breach when determining financial penalties and sanctions. This is a strong incentive for businesses to adopt robust data security measures: evidence of a reasonable security program that implements best practices can keep severe penalties at bay even in the event of a breach.

Artificial intelligence can enhance your security program by putting into practice many of the security best practices and reasonable standards mentioned above. In fact, according to a report by Ponemon, though only 26 percent of organizations currently use AI and automation as a primary part of their security programs, 41 percent are investing more time and money in AI defense measures.

So whether you control PHI as a tech company developing new types of wearable trackers, as a third-party administrator of a health plan, or even as a business associate, you should strongly consider how AI can work for you.

To learn more about AI for Privacy, visit here.