On the ethics of selling data

Varun Iyer
8 min read · Mar 1, 2023

An Introduction

Of course. Data collection.

The one supposed ethical obstacle that stopped our morally perfect selves from becoming like Mark Zuckerberg.

But what of it?

In this day and age, with the usage of new, apparently free technologies like ChatGPT at an all-time high, what are the ethical implications of data mining? And from a philosophical perspective, is it really as bad as most people swear it is?

This article aims to guide you through this very topic, hoping to uncover a few secrets, enjoy a few laughs and attain some media-ethical enlightenment, so that the next time the issue pops up, you can show off the fancy new consequentialist vocabulary you have acquired.

A brief overview of data mining

In case you are unaware of what data mining is and what it entails (believe me, that’s perfectly fine), it is the process of analyzing large volumes of data to find patterns, discover trends, and gain insight into how that data can be used.

This is indeed what Google told me.

To put it in layman’s terms, it is essentially when companies such as Facebook acquire information about you as you browse their websites. This data may then be sold to large directed-marketing firms, which analyse it so that advertisers can optimize their product placement.

This is why I am bombarded with ads for foot massages when I Google Bali.
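To make this concrete, here is a minimal, purely illustrative Python sketch of the first step in that pipeline: tallying which interest categories dominate a user’s browsing history so an ad platform can pick a segment to offer advertisers. The pages, categories and history below are all invented, and a real platform would work with far richer signals.

```python
from collections import Counter

# Hypothetical mapping from visited pages to interest categories.
# A real ad platform would infer this from far richer signals.
PAGE_CATEGORIES = {
    "bali-travel-guide": "travel",
    "cheap-flights": "travel",
    "spa-deals": "wellness",
    "foot-massage-reviews": "wellness",
}

def build_interest_profile(browsing_history):
    """Count how often each interest category shows up in one user's history."""
    categories = [
        PAGE_CATEGORIES[page]
        for page in browsing_history
        if page in PAGE_CATEGORIES
    ]
    return Counter(categories)

def top_ad_segment(profile):
    """Pick the dominant category, i.e. the segment that gets sold to advertisers."""
    if not profile:
        return None
    return profile.most_common(1)[0][0]

# A made-up browsing session, heavy on travel pages.
history = ["bali-travel-guide", "spa-deals", "cheap-flights", "bali-travel-guide"]
profile = build_interest_profile(history)
print(profile)                  # Counter({'travel': 3, 'wellness': 1})
print(top_ad_segment(profile))  # travel
```

Everything downstream, from the ads you see to the price of the data itself, is built on profiles like this one, which is why the Bali search so reliably produces the foot-massage ads.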

The Philosophical Perspective

Let’s cut to why most of you are here — to deliberate on the various philosophical perspectives at play concerning data mining.

I just acquired a British accent. God save the Queen.

Our first position on the issue is the consequentialist approach.

Essentially, it is a moral theory that evaluates the rightness of an action based on its consequences.

Here, consequentialists would try to evaluate the positive and negative consequences of data mining, to come to a conclusion.

Common viewpoints supporting data mining tend to take the capitalistic approach, advancing the argument that data mining ultimately gives us positive results, such as better marketing, more feedback-driven products and overall higher customer satisfaction.

In short, it helps businesses analyze information better, thus leading to a happier society as a whole.

Of course, like most things in life, every action has an equal and opposite reaction — a clear demonstration of Newton’s laws in sociology.

Quite often, however, consequentialists will argue that the negative consequences of selling data far outweigh the positive ones.

For example, data selling may create power imbalances and entrench monopolies: large organizations with a huge share of resources gain unfair advantages over smaller, less-equipped competitors, who cannot afford the same level of targeted advertising and analysis.

On a more social note, data mining and the eventual sale of data may quietly enable discrimination and unfair pricing strategies.

The effects of this are far-reaching: not only does it make for a more unjust society now, but the data and resulting analyses may also shape the algorithms and neural networks of the future, ingraining unwanted social biases into computing for years to come.

Another important issue to address here is privacy, which is arguably most important for our resident civil rights activists — the Americans.

Now privacy, in and of itself, is an extremely controversial topic for philosophers around the globe. Its definition is not concrete, which is concerning given how often we invoke it.

Here, to avoid straying from the topic at hand, a loss of privacy mainly refers to the increased stress, anxiety and reduced trust in institutions that the general public may feel after finding out that their data has been sold or used against their will.

Moreover, data selling can compromise individuals’ privacy and autonomy, as their data is used without explicit and informed consent.

For consequentialists to arrive at a verdict, we must understand that consequentialism itself is an extremely divided philosophical ideology, as everything depends on what you count as a favourable consequence.

For example, if you hold that the primary moral goal is to spread happiness and relieve suffering, your perspective may differ from that of someone who prioritises creating as much freedom as possible, or generally promoting the survival of our species.

Hence, it is quite difficult to characterize what exactly consequentialism says about data mining, but one thing is certain: all consequentialist viewpoints agree that consequences are all that matter.

Consequentialism, though a popular perspective championed by pioneering philosophers like John Stuart Mill, faces its fair share of flak.

Most non-consequentialists argue that consequences are not all that matter. Morality, they often say, is about doing one’s duty, obeying a certain moral law of the universe, whether by respecting rights, obeying your heart, actualizing your potential or simply allowing others to be, no matter the consequences.

One popular non-consequentialist tradition is deontological ethics, whose name stems from the Greek deon, meaning “duty” or “obligation”.

Essentially, it emphasizes the very point we just argued: that one must follow their moral duty regardless of consequence.

Thus, deontology takes a less consequence-driven, more principle-based view of the issue at hand, aiming to evaluate whether the action itself is morally righteous.

Generally, deontology tends to take a more rights-centred view, advancing arguments against the authorization of data selling, perhaps claiming that privacy is an inherent human right that should never be compromised, regardless of social gain or impact.

Deontologists may also argue that privacy rests on a certain trust established between individuals and our institutions, and that it is our moral duty, regardless of consequence, to uphold and protect this trust.

On this view, the breach of privacy that data selling entails is clearly immoral and should be barred.

This decisive verdict is an inherent strength of the deontological way of life, for it provides clear guidelines and a framework for us to live by.

However, not all is bliss.

As we have seen, deontological perspectives are quite reminiscent of religious doctrines, imposing rigid and inflexible rules that may not take full account of the situation at hand.

For those who disagree with both approaches, we present a third philosophical theory: virtue ethics.

Essentially, it emphasizes the character of an individual above all else, placing the cultivation of virtuous traits and habits as the driving force for all mankind.

For example, if a certain action can inculcate beneficial moral values in an individual or in society as a whole, it is deemed successful and beneficial to humankind.

Here, you may find a virtue ethicist arguing that data mining and selling is desirable to society only when the data miners, the individuals concerned and the other affected parties can attain a higher moral self by participating in the practice.

Quite often, a virtue ethicist might stress the importance of empathy and compassion in the industry when determining whether data selling is beneficial.

In a vein similar to the deontological perspective, virtue ethicists may also argue that individuals and organizations should cultivate qualities such as honesty, transparency and respect for the privacy of all parties involved.

Thus, one may view the collection and sale of personal data, especially without informed consent, as a failure to embody these virtues and a reflection of a flawed character.

Most virtue ethicists justify their philosophy by pointing out that it aims to cultivate ethical behaviour in society as a whole, making for a more favourable world to live in.

However, virtue ethics is often seen as an idealistic and impractical perspective; critics argue that it is too vague and subjective, making the theory difficult to apply in most real situations.

But why is it important?

Well, delving into this field of technological ethics matters once we grasp the nature of the issue: the disruption of an individual’s privacy is not only uninformed or misrepresented, but practically unavoidable in our daily lives.

Take, for example, the argument that individuals can simply refuse to use certain services, perhaps even the internet as a whole. The data, after all, is handed over with their consent. How can it be unethical when the data is voluntarily given and appears to be there for the taking?

Although the argument appears sensible and factually founded, it breaks down when we examine the price of privacy on the internet. Individuals who cannot use the most basic online services face a great disadvantage in society, and are thus forced to give up what is a supposed human right in most modern democracies simply to stand on somewhat level ground. Moreover, a large percentage of internet users are like my grandparents: completely unaware of the dangers and consequences of their actions.

Most often, supporters of data analytics, or those who remain neutral, will turn to the law as their lord and saviour.

This argument falls short on several grounds, for the law is not, and never will be, sufficient for any problem whose nature is continuously changing. Take, for example, the fact that current privacy laws only protect against the misuse of identifiable personal data and say nothing about anonymised data being used as if it were personal.

Besides, actions can be unethical independent of their legal status. For instance, lying to someone in a way that ultimately leads to their death can be considered unethical on several grounds while not necessarily being against the law.

Lastly, others point to the personalisation aspect of data mining and argue that, since it is ultimately for the benefit of the end consumer, there can be no wrong in doing so.

While personalisation can certainly be to the customer’s gain, when it is done by creating non-distributive group profiles (analysing data about the group as a whole rather than about the individual), it can lead to de-individualisation and discrimination. There is a fine line between personalisation and personal intrusion.
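To make that distinction concrete, here is a small, purely hypothetical Python sketch of a non-distributive group profile: every customer in a postcode is priced on the postcode’s average risk score rather than on their own record, which is exactly the de-individualisation the argument warns about. The names, postcodes and scores are invented for illustration.

```python
from statistics import mean

# Invented records: (name, postcode, risk score based on that person's own history).
CUSTOMERS = [
    ("alice",   "12345", 0.10),
    ("bob",     "12345", 0.90),
    ("charlie", "67890", 0.20),
]

def group_profile(customers):
    """Non-distributive group profile: a single average risk score per postcode."""
    by_postcode = {}
    for _, postcode, score in customers:
        by_postcode.setdefault(postcode, []).append(score)
    return {postcode: mean(scores) for postcode, scores in by_postcode.items()}

def quoted_premium(name, customers, profiles, base=100.0):
    """Price an individual using their postcode's average, not their own record."""
    for customer_name, postcode, _own_score in customers:
        if customer_name == name:
            return base * (1 + profiles[postcode])
    raise KeyError(name)

profiles = group_profile(CUSTOMERS)
# Alice's own risk is 0.10, yet she is priced on her postcode's average of 0.50.
print(quoted_premium("alice", CUSTOMERS, profiles))    # 150.0
print(quoted_premium("charlie", CUSTOMERS, profiles))  # 120.0
```

Alice’s own history suggests low risk, yet the group profile prices her as if she carried her neighbours’ risk; that gap is where personalisation tips into discrimination.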

So what are the solutions?

Well at present, not much.

There doesn’t seem to be a great deal we can do to stop corporations once they have acquired data, since legislation is too slow and, frankly, ineffective here. The one path remaining is on the user end.

Here, many propose mandating a privacy-education tool that warns unaware users about the dangers and risks of visiting a website.

Others suggest that a central, unbiased organization be formed to monitor the collection and handling of data, although this is largely impractical.

All in all, consumer education seems to be the only viable option.

A final statement

Throughout this piece, we have attempted to dissect an otherwise controversial and highly debated topic, especially at a time when Zuckerberg is put on trial for laughing at your search history.

Various philosophical perspectives have been considered, and we have tried to present you with a complete grasp of the ethics behind the actions of these large corporations.

Will this article change the industry? No.

All we can hope for is to change the minds of a few individuals who may be unknowingly affected, and perhaps to change them for the better.
