Mission Grey - Geopolitical Risk Management

11/3/2024

What Should We Know About AI Data – And Why Do We Need Domain-Specific AI Tools?

​Juha Särestöniemi has over 30 years of experience in various roles in the ICT sector. He currently focuses on data management and Enterprise Resource Planning systems.

Artificial Intelligence (AI) has gained widespread attention due to its expanding applications. Many of those are highly useful and contribute to its growing popularity. The more information AI gets, the better it becomes.

However, AI is not only useful but, in some cases, may also be dangerous. This text will focus on real, everyday data concerns AI presents for users, not on extreme scenarios.

What should users be concerned about when using AI tools? There are at least seven risks to take into account.

 
1. Misinformation and accuracy

AI can sometimes provide inaccurate or misleading information, especially if it doesn’t fully understand the context or if the information it’s trained on is incomplete or outdated. Over-reliance on AI for critical decisions without verification can lead to mistakes.

Applications like ChatGPT are extremely good at summarizing and simplifying many topics. The user feeds in data, from which the application creates a summary. The accuracy of the summarized information depends mostly on the data fed to ChatGPT.

However, when one asks questions for which the AI application searches for data on its own, it relies solely on the data it finds and summarizes that. It treats the data as accurate and too often gives no indication of potential inaccuracies. This means the responsibility for verification rests with the user – who should be aware of it.

Therefore, my recommendation is not to use general AI tools for topics in which the user has zero expertise. That is why we need domain-specific AI tools.

 
2. Bias and fairness

AI systems are trained on large datasets. If those datasets contain biased information (regarding race, gender, culture, etc.), the AI can perpetuate or even amplify those biases. This can result in unfair treatment or discrimination in areas like hiring, law enforcement, or healthcare.

Most of the widely used, publicly available AI systems are owned by large US corporations. Some governments most likely have systems of their own, but those are not widely available. How are the rules of the publicly available systems created, and what limits are built in for data usage or output? This is mostly unknown.

Thus, AI should not be relied on for oversimplified answers to complex questions, and the answers it gives should be considered carefully.

 
3. Privacy and data security

AI systems often require a large amount of data to function effectively. There is a risk that sensitive personal information could be misused, mishandled, or exposed through vulnerabilities. Users need to be cautious about what data they share with AI services.

Companies providing AI applications are not doing so for the common good; they are trying to make a profit and find business models for AI usage. When using these systems for free (and perhaps even when paying for them), users give away their most valuable asset for improving the AI: their data. This should be considered before every query to any AI tool. Since it is often impossible to know how the data is processed and stored by the AI application provider, one should be extremely cautious about entering any sensitive data into the system.

Once the data is collected, the lines around who owns it are often blurred. It is processed by AI in data lakes, and users often don't fully understand the extent of the data they're providing. This raises ethical questions about their rights and consent.
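As a practical illustration of this caution, one might strip obviously sensitive fields from a prompt before it ever leaves the user's machine. The sketch below is a minimal, hypothetical example in Python; the patterns and placeholder labels are my own assumptions, and a real deployment would need far more thorough detection (names, addresses, IDs, and so on).

```python
import re

# Hypothetical patterns for a few common sensitive fields.
# A real redaction tool would need a much broader set of detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive-data pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

query = "Contact Juha at juha@example.com or +358 40 123 4567 about the deal."
print(redact(query))
# → Contact Juha at [EMAIL] or [PHONE] about the deal.
```

Redacting locally, before the query is submitted, keeps the sensitive values out of the provider's logs and training data entirely, which is the point of the advice above.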
 

4. Unawareness of AI

Sometimes users interact with AI without their knowledge or consent, as in the following cases:

1. Massive Data Aggregation: Tech companies like Microsoft, Google, and Facebook have extensive “data lakes” where they gather enormous volumes of user data, often in ways not entirely transparent. This data can include browsing habits, search queries, location history, and social interactions. AI algorithms then analyze the data to improve services or create targeted ads, often without the user realizing how their data is being used.

2. Predictive Profiling: Without clear user consent, AI algorithms may create detailed profiles of user preferences, habits, and even psychological characteristics. These profiles may be used to anticipate behavior, which can feel invasive and potentially manipulative.

3. Influencing User Behavior: AI can track and predict user behavior to create a highly personalized experience, such as tailoring content feeds, ads, and recommendations. However, it can also lead to “nudging”, where AI subtly influences decision-making and behavior. For instance, AI-curated news feeds can amplify particular topics, reinforcing specific viewpoints or behaviors that align with the company’s goals.

 
5. Legal and ethical concerns

AI can be used in ways that raise ethical issues, such as in surveillance, deep fakes, or even autonomous weapons. The lack of clear regulation or ethical guidelines in some areas means AI could be misused in ways that harm society or individuals.

AI algorithms have been instrumental in distributing political content, often without users knowing the selection criteria. This can intensify political polarization by showing users content that aligns with their existing beliefs, often called the “echo chamber” effect.

Many jurisdictions don’t address AI’s unique challenges, meaning there may not be a clear legal path to hold anyone accountable for AI’s actions. Without specific legislation, holding developers or companies responsible is complicated.

Even if developers attempt to make their AI secure, it’s nearly impossible to account for all potential attacks. This means that, in practice, accountability is often not clear-cut, and harm can occur with no party being fully accountable.
 

6. Lack of accountability

When AI systems make decisions (especially in high-stakes fields like law, healthcare, or finance), it can be difficult to hold anyone accountable if things go wrong. Determining who is responsible for an AI’s actions (the developers, the users, or the AI itself) is a complex issue that hasn’t been fully addressed.

AI systems, despite their capabilities, are not moral agents. They don’t make decisions based on ethics or intentions but rather on patterns and algorithms. This complicates accountability because AI can’t be held morally responsible in the way humans can.

There is an ongoing debate about whether AI should have any degree of legal “personhood,” but without that, accountability remains with the people and entities that create and deploy AI.
 

7. Loss of human connection could lead to decreased critical thinking

As AI becomes more integrated into everyday tasks, from customer service to social interactions, there is a risk of reducing human-to-human connection. Over-reliance on AI in these areas could lead to a loss of empathy, social skills, and personal touch in communication.

AI can also create bubbles by curating information to match users’ preferences. While this keeps users engaged, it may also limit exposure to diverse viewpoints, discouraging critical thinking and nuanced understanding of complex issues.

AI is increasingly integrated into services that make or heavily influence decisions, such as financial recommendations, credit evaluations, and hiring algorithms. Users may not be aware that AI has influenced or even made certain decisions on their behalf, which reduces their control over these outcomes.
 

Conclusions

Understanding the risks mentioned above allows us to use AI more responsibly.

Developers and users need to approach AI with caution, ensuring transparency, fairness, and ethics in its use. Regulations and guidelines are also evolving to help manage these risks. When using AI systems, it should be kept in mind that the most important asset the users have is their data.

Over a longer period, different data systems can gather information to create a profile of a person or a company. It may include all photos, texts, searches, etc. stored in cloud platforms, even sensitive and confidential ones.

AI applications are extremely useful and powerful tools with huge potential, especially in handling exponentially growing amounts of data. By understanding the risks of AI, we are better equipped to use it safely and to avoid potential problems arising from its use. Despite the ever-growing amount of data users and companies hold, it's good to remember its value. Don’t reveal everything.

Although one can mitigate many of these risks, in the end there is no AI tool without at least some potential problems. But life, business, and entrepreneurship are about taking risks one can bear. By the way: can AI define a universally tolerable risk level? No. That is something you have to decide yourself.
 

The author is solely responsible for the views expressed in this guest article and they do not necessarily reflect the views of Mission Grey Inc.

Mission Grey Inc. is committed to mitigating the problems mentioned in the text by relying on its team with strong interdisciplinary expertise in data science, programming, economics, international relations, and social sciences.


(Image: David S. Soriano - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=125089281)
