MIT Study Shows How Big Data is Dangerous To Privacy
Big data is undoubtedly helping researchers and companies connect disparate events, allowing better insight into the data. However, once the data is collected, big data can also mean far less privacy. The latest example of this is a new study on credit cards conducted by the Massachusetts Institute of Technology (MIT), which reveals that credit card data isn’t quite as anonymous as promised.
Companies routinely remove personal identifiers from credit card data before sharing it, saying the data is now safe because it has been “anonymized” and the personal details wiped away. But the MIT researchers showed that anonymized isn’t quite the same as anonymous. The MIT report, published in the journal Science, examined three months of credit card records for 1.1 million people.
MIT researcher Yves-Alexandre de Montjoye, lead author of the report, offers an example. The researchers looked at data from September 23 and 24 for someone who went to a bakery one day and a restaurant the other. Searching through the data set, they found only one person who fit the bill; they called him Scott. “And we now know all of his other transactions, such as the fact that he went shopping for shoes and groceries on 23 September, and how much he spent,” says the report.
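The re-identification step described above amounts to a simple query over “anonymized” transaction metadata: find every card holder consistent with a handful of outside observations. The following Python sketch illustrates the idea with entirely hypothetical data; the IDs, dates, and merchants are made up for illustration and are not from the MIT study.

```python
# Toy illustration of re-identification from "anonymized" transactions.
# All data below is hypothetical, not from the MIT study.
from collections import defaultdict

# (anonymous_id, date, merchant): no names, yet still identifying.
transactions = [
    ("user_001", "2013-09-23", "bakery"),
    ("user_001", "2013-09-24", "restaurant"),
    ("user_001", "2013-09-23", "shoe shop"),
    ("user_002", "2013-09-23", "bakery"),
    ("user_002", "2013-09-24", "cinema"),
    ("user_003", "2013-09-24", "restaurant"),
]

def match(known_points):
    """Return the set of anonymous IDs consistent with every known
    (date, merchant) observation an attacker holds."""
    visits = defaultdict(set)
    for uid, date, merchant in transactions:
        visits[uid].add((date, merchant))
    return {uid for uid, seen in visits.items()
            if all(p in seen for p in known_points)}

# Two outside observations are enough to single out one card holder...
ids = match([("2013-09-23", "bakery"), ("2013-09-24", "restaurant")])
print(ids)  # {'user_001'}
# ...and every other "anonymous" transaction of theirs is then exposed.
```

The point of the sketch is that uniqueness, not names, is what breaks anonymity: once a few external data points narrow the candidate set to one ID, the entire transaction history attached to that ID is de-anonymized.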
While this data may not be confidential, it shows how easy it is to identify a card holder and retrieve his or her details. Such metadata also raise questions for individual privacy if used in insurance actuarial calculations, insurance claims and adjustments, loan and mortgage applications, or divorce proceedings, the researchers believe. The same holds true for bank and financial transactions. As de Montjoye notes, “We need to be aware [of] and account for the risks of re-identification.”
People with higher incomes were also easier to identify, perhaps because they “have distinctive patterns in how they divide their time between the shops they visit,” the report adds.
The study shows that when we think we have privacy when our data is collected, it’s really just an “illusion,” said Eugene Spafford, director of Purdue University’s Center for Education and Research in Information Assurance and Security. Spafford, who wasn’t part of the study, said it makes “one wonder what our expectation of privacy should be anymore.”
The researchers suggest that large data sets should not be publicly released but instead kept by a custodian, who could allow researchers to submit queries and programs to analyze the data. On the whole, the research calls for protection technologies more advanced than simple anonymization.