An online retailer always gets users’ explicit consent before sharing customer data with its partners. A navigation app anonymizes activity data before analyzing it for travel trends. A school asks parents to verify their identities before giving out student information.
These are just some examples of how organizations support data privacy, the principle that people should have control of their personal data, including who can see it, who can collect it, and how it can be used.
One cannot overstate the importance of data privacy for businesses today. Far-reaching regulations like Europe’s GDPR levy steep fines on organizations that fail to safeguard sensitive information. Privacy breaches, whether caused by malicious hackers or employee negligence, can destroy a company’s reputation and revenues. Meanwhile, businesses that prioritize information privacy can build trust with consumers and gain an edge over less privacy-conscious competitors.
Yet many organizations struggle with privacy protections despite the best intentions. Data privacy is more of an art than a science, a matter of balancing legal obligations, user rights, and cybersecurity requirements without stymying the business’s ability to get value from the data it collects.
Consider a budgeting app that people use to track spending and other sensitive financial information. When a user signs up, the app displays a privacy notice that clearly explains the data it collects and how it uses that data. The user can accept or reject each use of their data individually.
For example, they can decline to have their data shared with third parties while allowing the app to generate personalized offers.
The app heavily encrypts all user financial data. Only administrators can access customer data on the backend. Even then, the admins can only use the data to help customers troubleshoot account issues, and only with the user’s explicit permission.
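The per-purpose consent model the budgeting app uses can be sketched in a few lines. This is a hypothetical illustration, not any real app's implementation; the `ConsentRecord` class and purpose names are invented, and the key design choice is that any purpose the user never decided on defaults to denied.

```python
from dataclasses import dataclass, field

# Hypothetical per-purpose consent record: each use of the user's data
# is accepted or rejected individually, and anything the user was never
# asked about is treated as denied.
@dataclass
class ConsentRecord:
    choices: dict = field(default_factory=dict)  # purpose -> bool

    def grant(self, purpose: str) -> None:
        self.choices[purpose] = True

    def deny(self, purpose: str) -> None:
        self.choices[purpose] = False

    def is_allowed(self, purpose: str) -> bool:
        # Undecided purposes default to "denied", the privacy-safe choice.
        return self.choices.get(purpose, False)

consent = ConsentRecord()
consent.grant("personalized_offers")
consent.deny("third_party_sharing")

print(consent.is_allowed("personalized_offers"))  # True
print(consent.is_allowed("third_party_sharing"))  # False
print(consent.is_allowed("analytics"))            # False: never asked
```

Keeping consent as an explicit per-purpose record, rather than a single yes/no flag, is what lets the user decline third-party sharing while still allowing personalized offers.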
This example illustrates three core components of common data privacy frameworks:
Compliance with relevant regulations is the foundation of many data privacy efforts. While data protection laws vary, they generally define the responsibilities of organizations that collect personal data and the rights of the data subjects who own that data.
The GDPR is a European Union privacy regulation that governs how organizations in and outside of Europe handle the personal data of EU residents. In addition to being perhaps the most comprehensive privacy law, it is among the strictest. Penalties for noncompliance can reach up to EUR 20,000,000 or 4% of the organization’s worldwide revenue in the previous year, whichever is higher.
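The "whichever is higher" rule means the fine ceiling scales with the organization's size. A minimal sketch of the arithmetic (the function name is ours, not part of the regulation):

```python
def max_gdpr_fine(annual_worldwide_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine: EUR 20 million or 4% of the
    organization's previous-year worldwide revenue, whichever is higher."""
    return max(20_000_000, 0.04 * annual_worldwide_revenue_eur)

# For a company with EUR 2 billion in annual revenue, 4% of revenue
# (EUR 80 million) exceeds the flat EUR 20 million floor.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

For any organization with less than EUR 500 million in annual revenue, the EUR 20 million floor is the binding ceiling.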
The Data Protection Act 2018 is, essentially, the UK’s version of the GDPR. It replaces an earlier data protection law and implements many of the same rights, requirements, and penalties as its EU counterpart.
Canada’s PIPEDA governs how private-sector businesses collect and use consumer data. PIPEDA grants data subjects a significant amount of control over their data, but it applies only to data used for commercial purposes. Data used for other purposes, like journalism or research, is exempt.
Many individual US states have their own data privacy laws. The most prominent of these is the California Consumer Privacy Act (CCPA), which applies to virtually any organization with a website because of the way it defines the act of “doing business in California.”
The CCPA empowers Californians to prevent the sale of their data and have it deleted at their request, among other rights. Organizations face fines of up to USD 7,500 per violation. The price tag can add up quickly. If a business were to sell user data without consent, each record it sells would count as one violation.
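To see how quickly per-violation fines compound, consider this back-of-the-envelope calculation (a hypothetical worst case, assuming every record sold without consent draws the maximum penalty):

```python
CCPA_MAX_FINE_PER_VIOLATION = 7_500  # USD

def ccpa_exposure(records_sold_without_consent: int) -> int:
    # Each record sold without consent counts as one violation.
    return records_sold_without_consent * CCPA_MAX_FINE_PER_VIOLATION

# Selling just 10,000 user records without consent could expose a
# business to USD 75 million in fines.
print(ccpa_exposure(10_000))  # 75000000
```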
The US has no broad data privacy regulations at a national level, but it does have some more targeted laws.
Under the Children’s Online Privacy Protection Act (COPPA), organizations must obtain a parent’s permission before collecting and processing data from anyone under 13. Rules for handling children’s data might become even stricter if the Kids Online Safety Act (KOSA), currently under consideration in the US Senate, becomes law. KOSA would require online services to default to the highest privacy settings for users under 18.
The Health Insurance Portability and Accountability Act (HIPAA) is a federal law that deals with how healthcare providers, insurance companies, and other businesses safeguard personal health information.
The Payment Card Industry Data Security Standard (PCI DSS) is not a law, but a set of standards developed by a consortium of credit card companies, including Visa and American Express. These standards outline how businesses must protect customers’ payment card data.
While the PCI DSS isn’t a legal requirement, credit card companies and financial institutions can fine businesses that fail to comply or even prohibit them from processing payment cards.
Privacy compliance is only the beginning. While following the law can help avoid penalties, it may not be enough to fully protect personally identifiable information (PII) and other sensitive data from hackers, misuse, and other privacy threats.
Some common principles and practices organizations use to bolster data privacy include:
For effective data governance, an organization needs to know the types of data it has, where the data resides, and how it is used.
Some kinds of data, like biometrics and social security numbers, require stronger protections than others. Knowing how data moves through the network helps track usage, detect suspicious activity, and put security measures in the right places.
Finally, full data visibility makes it easier to comply with data subjects’ requests to access, update, or delete their information. If the organization doesn’t have a complete inventory of data, it might unintentionally leave some user records behind after a deletion request.
A digital retailer catalogs all the different kinds of customer data it holds, like names, email addresses, and saved payment information. It maps how each type of data moves between systems and devices, who has access to it (including employees and third parties), and how it is used. Finally, the retailer classifies data based on sensitivity levels and applies appropriate controls to each type. The company conducts regular audits to keep the data inventory up to date.
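The retailer's approach above can be sketched as a small data catalog, where each data type is registered with its location and a sensitivity level that drives which controls apply. The sensitivity tiers, control names, and catalog entries here are all hypothetical:

```python
# Hypothetical mapping from sensitivity level to required controls.
CONTROLS_BY_SENSITIVITY = {
    "public":       [],
    "internal":     ["access_logging"],
    "confidential": ["access_logging", "encryption_at_rest"],
    "restricted":   ["access_logging", "encryption_at_rest", "mfa_required"],
}

# Mini data inventory: what the data is, where it lives, how sensitive it is.
catalog = [
    {"type": "email_address", "system": "crm",       "sensitivity": "internal"},
    {"type": "payment_card",  "system": "billing",   "sensitivity": "restricted"},
    {"type": "order_history", "system": "warehouse", "sensitivity": "confidential"},
]

def required_controls(entry: dict) -> list:
    return CONTROLS_BY_SENSITIVITY[entry["sensitivity"]]

for entry in catalog:
    print(entry["type"], "->", required_controls(entry))
```

Because controls are derived from the classification rather than assigned ad hoc, a regular audit only needs to verify that each entry's sensitivity label is still accurate.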
Organizations can limit privacy risks by granting users as much control over data collection and processing as possible. If a business always gets a user’s consent before doing anything with their data, it’s hard for the company to violate anyone’s privacy.
That said, organizations must sometimes process someone’s data without their consent. In those instances, the company should make sure that it has a valid legal reason to do so, like a newspaper reporting on crimes that perpetrators would rather conceal.
A social media site creates a self-service data management portal. Users can download all the data they share with the site, update or delete their data, and decide how the site can process their information.
It can be tempting to cast a wide net, but the more personal data a company collects, the more exposed it is to privacy risks. Instead, organizations can adopt the principle of data minimization: identify a specific purpose for data collection and collect the minimum amount of data needed to fulfill that purpose.
Retention policies should also be limited. The organization should dispose of data as soon as its specific purpose is fulfilled.
A public health agency is investigating the spread of an illness in a particular neighborhood. The agency does not collect any PII from the households it surveys. It records only whether anyone is sick. When the survey is complete and infection rates determined, the agency deletes the data.
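The agency's survey could follow a pattern like this sketch: collect only the non-identifying fields needed for the purpose, and purge records once the retention window closes. The field names and 30-day window are illustrative assumptions:

```python
from datetime import date, timedelta

# Minimal survey record: no PII, only whether anyone in the household
# is sick and when the response was captured.
def collect_response(anyone_sick: bool, today: date) -> dict:
    return {"anyone_sick": anyone_sick, "collected_on": today}

def purge_expired(records: list, today: date, retention_days: int) -> list:
    # Dispose of data as soon as its purpose is fulfilled: anything
    # older than the retention window is dropped.
    cutoff = today - timedelta(days=retention_days)
    return [r for r in records if r["collected_on"] >= cutoff]

records = [
    collect_response(True,  date(2024, 1, 2)),
    collect_response(False, date(2024, 3, 1)),
]

# With a 30-day retention window, only the March record survives.
remaining = purge_expired(records, today=date(2024, 3, 10), retention_days=30)
print(remaining)
```

Building the purge step into the pipeline, rather than relying on manual cleanup, keeps retention limits from silently slipping.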
Organizations should keep users updated about everything they do with their data, including anything their third-party partners do.
A bank sends annual privacy notices to all of its customers. These notices outline all the data that the bank collects from account holders, how it uses that data for things like regulatory compliance and credit decisions, and how long it retains the data. The bank also alerts account holders to any changes to its privacy policy as soon as they are made.
Strict access control measures can help prevent unauthorized access and use. Only people who need the data for legitimate reasons should have access to it. Organizations should use multi-factor authentication (MFA) or other strong measures to verify users’ identities before granting access to data. Identity and access management (IAM) solutions can help enforce granular access control policies across the organization.
A technology company uses role-based access control policies to assign access privileges based on employees’ roles. People can access only the data that they need to carry out core job responsibilities, and they can only use it in approved ways. For example, the head of HR can see employee records, but they can’t see customer records. Customer service representatives can see customer accounts, but they can’t see customers’ saved payment data.
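The role-based policy described above boils down to a lookup table: each role maps to the data categories it may read, and anything not listed is denied. The roles and category names below are hypothetical:

```python
# Hypothetical role-based access control table. Access is deny-by-default:
# a role can read only the categories explicitly granted to it.
ROLE_PERMISSIONS = {
    "hr_lead":       {"employee_records"},
    "support_agent": {"customer_accounts"},  # but not saved payment data
    "billing_admin": {"customer_accounts", "payment_data"},
}

def can_read(role: str, data_category: str) -> bool:
    # Unknown roles get an empty permission set, so everything is denied.
    return data_category in ROLE_PERMISSIONS.get(role, set())

print(can_read("hr_lead", "employee_records"))    # True
print(can_read("hr_lead", "customer_accounts"))   # False
print(can_read("support_agent", "payment_data"))  # False
```

In production, an IAM platform would enforce a policy like this at every access point; the point of the sketch is the deny-by-default structure.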
Organizations must use a combination of tools and tactics to protect data at rest, in transit, and in use.
A healthcare provider encrypts patient data storage and uses an intrusion detection system to monitor all traffic to the database. It uses a data loss prevention (DLP) tool to track how data moves and how it is used. If it detects illicit activity, like an employee account moving patient data to an unknown device, the DLP raises an alarm and cuts the connection.
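At its core, the DLP rule in this example is a policy check on every data transfer. A simplified sketch, with invented device names and labels (a real DLP tool would also raise an alert and terminate the connection):

```python
# Hypothetical DLP-style policy: patient data may only move to devices
# on an approved list; anything else is blocked.
APPROVED_DEVICES = {"workstation-12", "workstation-enc-07"}

def inspect_transfer(data_label: str, destination: str) -> str:
    if data_label == "patient_data" and destination not in APPROVED_DEVICES:
        # A real DLP tool would alert security staff and cut the connection.
        return "blocked"
    return "allowed"

print(inspect_transfer("patient_data", "usb-unknown"))     # blocked
print(inspect_transfer("patient_data", "workstation-12"))  # allowed
```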
Privacy impact assessments (PIAs) determine how much risk a particular activity poses to user privacy. PIAs identify how data processing might harm user privacy and how to prevent or mitigate those privacy concerns.
A marketing firm conducts a PIA before every new market research project. The firm uses this opportunity to clearly define processing activities and close any data security gaps. This way, the data is only used for a specific purpose and protected at every step. If the firm identifies serious risks it can't reasonably mitigate, it retools or cancels the research project.
Data privacy by design and by default is the philosophy that privacy should be a core component of everything the organization does—every product it builds and every process it follows. The default setting for any system should be the most privacy-friendly one.
When users sign up for a fitness app, the app’s privacy settings automatically default to “don’t share my data with third parties.” Users must change their settings manually to allow the organization to sell their data.
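Privacy by default can be expressed directly in how user settings are initialized: every option starts in its most privacy-friendly state, and only a deliberate user action loosens it. The setting names in this sketch are hypothetical:

```python
# Privacy by default: every setting starts in its most protective state.
DEFAULT_SETTINGS = {
    "share_with_third_parties": False,
    "sell_my_data": False,
    "public_profile": False,
}

def new_user_settings(overrides=None):
    settings = dict(DEFAULT_SETTINGS)
    # Only explicit, user-initiated choices loosen the defaults.
    settings.update(overrides or {})
    return settings

print(new_user_settings())
print(new_user_settings({"share_with_third_parties": True}))
```

Because the defaults live in one place, a privacy review only has to confirm that every entry in `DEFAULT_SETTINGS` is the most protective option.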
Complying with data protection laws and adopting privacy practices can help organizations avoid many of the biggest privacy risks. Still, it is worth surveying some of the most common causes and contributing factors of privacy violations so that companies know what to look out for.
When organizations don’t have complete visibility of their networks, privacy violations can flourish in the gaps. Employees might move sensitive data to unprotected shadow IT assets. They might regularly use personal data without the subject’s permission because supervisors lack the oversight to spot and correct the behavior. Cybercriminals can sneak around the network undetected.
As corporate networks grow more complex—mixing on-premises assets, remote workers, and cloud services—it becomes harder to track data throughout the IT ecosystem. Organizations can use tools like attack surface management solutions and data protection platforms to help streamline the process and secure data wherever it resides.
Some regulations set special rules for automated processing. For example, the GDPR gives people the right to contest decisions made through automated data processing.
The rise of generative artificial intelligence can pose even thornier privacy problems. Organizations cannot necessarily control what these platforms do with the data fed into them. Feeding customer data to a platform like ChatGPT might help garner audience insights, but the AI may incorporate that data into its training models. If data subjects didn't consent to have their PII used to train an AI, this constitutes a privacy violation.
Organizations should clearly explain to users how they process their data, including any AI processing, and obtain subjects’ consent. However, even the organization may not know everything the AI does with its data. For that reason, businesses should consider working with AI apps that let them retain the most control over their data.
Stolen accounts are a prime vector for data breaches, according to the IBM Cost of a Data Breach report. Organizations tempt fate when they give users more privileges than they need. The more access permissions that a user has, the more damage a hacker can do by hijacking their account.
Organizations should follow the principle of least privilege. Users should have only the minimum amount of privilege they need to do their jobs.
Employees can accidentally violate user privacy if they are unaware of the organization’s policies and compliance requirements. They can also put the company at risk by failing to practice good privacy habits in their personal lives.
For example, if employees overshare on their personal social media accounts, cybercriminals can use this information to craft convincing spear phishing and business email compromise attacks.
Sharing user data with third parties isn’t automatically a privacy violation, but it can increase the risk. The more people who have access to data, the more avenues there are for hackers, insider threats, or even employee negligence to cause problems.
Moreover, unscrupulous third parties might use a company’s data for their own unauthorized purposes, processing data without subject consent.
Organizations should ensure that all data-sharing arrangements are governed by legally binding contracts that hold all parties responsible for the proper protection and use of customer data.
PII is a major target for cybercriminals, who can use it to commit identity theft, steal money, or sell it on the black market. Data security measures like encryption and DLP tools are as much about safeguarding user privacy as they are about protecting the company’s network.
Privacy regulations are tightening worldwide, the average organization’s attack surface is expanding, and rapid advancements in AI are changing the way data is consumed and shared. In this environment, an organization’s data privacy strategy can be a preeminent differentiator that strengthens its security posture and sets it apart from the competition.
Take, for instance, technology like encryption and identity and access management (IAM) tools. These solutions can help lessen the financial blow of a successful data breach, saving organizations upwards of USD 572,000 according to the Cost of a Data Breach report. Beyond that, sound data privacy practices can foster trust with consumers and even build brand loyalty.
As data protection becomes ever more vital to business security and success, organizations must count data privacy principles, regulations, and risk mitigation among their top priorities.