
Artificial Intelligence and Stocks: Have We Crossed the Ethical Line?

  • Writer: Erkan Öztürk
  • Oct 7, 2025
  • 2 min read

As artificial intelligence (AI) permeates every aspect of our lives, financial markets sit at the forefront of this revolution. Stock trading bots in particular leverage AI's speed and data-processing power to make millions of trading decisions in seconds. This raises a significant ethical question: how much responsibility can an AI in a decision-making position actually bear?


The Gap Between Responsible AI Narrative and Reality


At first glance, we face a contradictory picture. On the one hand, many AI tools, especially the major language models (ChatGPT, Gemini, Llama, Titan, Claude, Grok, DeepSeek), refrain from explicitly providing investment advice, in accordance with the principles of "responsible AI"; their systems are constrained to refuse such requests. This can be seen as both an ethical safeguard and a legal shield.

But on the other side of the coin, there are advanced stock bots.

While stock bots may be marketed as "our own AI," they are most likely running OpenAI's GPT, Google's Gemini, Meta's Llama, or another major model in the background, tuned to process financial data. These systems don't use the AI to issue "buy" or "sell" commands directly. Instead, the AI analyzes news, market data, and social-media sentiment and distills them, through complex algorithms and functions, into a "trend," "probability," or "signal." The programmers' code then treats that output as a strict instruction for the final order, as the sketch below illustrates. But doesn't this make the AI an instigator rather than a perpetrator?
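
To make this division of labor concrete, below is a minimal sketch in Python. Everything in it is hypothetical: the fixed confidence value stands in for a call to a hosted model, place_order stands in for a broker API, and the thresholds are invented for illustration; no vendor's actual interface is shown.

    from dataclasses import dataclass

    @dataclass
    class Signal:
        ticker: str
        confidence: float  # model-reported probability that the stock will rise

    def analyze(news: str, ticker: str) -> Signal:
        # A real bot would call a hosted model (GPT, Gemini, Llama, ...) here
        # and parse its answer. The model never says "buy" or "sell"; it only
        # returns a sentiment score. Stubbed with a fixed value to stay runnable.
        confidence = 0.87
        return Signal(ticker, confidence)

    BUY_THRESHOLD = 0.80   # chosen by the programmer, not by the model
    SELL_THRESHOLD = 0.20

    def place_order(ticker: str, side: str, quantity: int) -> None:
        # Stand-in for a broker API call; a real system fires this automatically.
        print(f"ORDER: {side} {quantity} x {ticker}")

    def act_on(signal: Signal) -> None:
        # This is where the "analyst" becomes the "instigator": a hand-written
        # rule mechanically converts the model's probability into an order.
        if signal.confidence >= BUY_THRESHOLD:
            place_order(signal.ticker, "buy", 100)
        elif signal.confidence <= SELL_THRESHOLD:
            place_order(signal.ticker, "sell", 100)

    act_on(analyze("Company X beats earnings expectations", "XMPL"))

Note that the only line anyone could call a "decision" is the if-statement, and it was written by a programmer long before the model read any news.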


The Instigator: A Legal and Ethical Conundrum


In this scenario, the AI hides behind the defense of "I'm just a data analyst; the final decision belongs to the human." But no human operator can manually manage thousands of data points that change within seconds, so in practice the AI's "recommendations" are carried out almost entirely automatically. In other words, the door we close in the name of "responsible AI" is reopened under the guise of an "instigating algorithm."
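
The same point shows up in the shape of the "human confirmation" step itself. In the hypothetical sketch below (every name is invented), a review window exists on paper, but at machine speed it times out and approves by default:

    def wait_for_trader_input(timeout_ms: int):
        # Stand-in for a review queue. With thousands of signals per second,
        # no trader can respond within 50 ms, so this always times out here.
        return None  # None = no human answered before the deadline

    def confirm(signal_id: int) -> bool:
        # A confirmation step exists on paper, but the default branch
        # (approve) is taken whenever the market moves faster than a human.
        response = wait_for_trader_input(timeout_ms=50)
        return True if response is None else response

    print(confirm(signal_id=1))  # prints True: a "human decision" by timeout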

This creates significant uncertainty about who will be held accountable for any issues that may arise. Who will be at fault if an AI model triggers an unforeseen market crash or unwittingly reinforces a manipulative trading pattern? The person who wrote the algorithm, the person operating the system, or the AI itself, which we believe is "making the decisions"?


Conclusion: Towards a Crisis of Confidence


This gray area could erode trust in AI over the long run. Financial markets are built on trust, and a system in which AI plays such a central role while its responsibilities remain undefined will inevitably face a major crisis of confidence the moment markets come under stress.


This is a topic that urgently requires discussion not only among regulators and software developers, but also among investors and society at large. As technology advances, we must simultaneously build its ethical and legal framework. Otherwise, we will only end up using machines as instigators. Is the source of the problem a handful of giant models, or the thousands of applications trying to use them in every field? It's worth discussing!

