Julien Haye

Effective Strategies for Managing Risks in AI-Driven Business Decision-Making

[Illustration: AI algorithms managing business decision-making processes, with risk management elements]

My LinkedIn feed has been inundated of late with analysis of the pros and, to be fair, mostly the cons of Artificial Intelligence (AI). I am a big fan of language model-based AI built on the GPT (Generative Pre-trained Transformer) architecture, such as ChatGPT. These models can effectively serve as assistants, never get tired, and have access to a broad range of information, among other advantages.


However, I am also very conscious of its impact, particularly when it comes to business decisions. In the finance space, algo-trading platforms come to mind, which have been in use as far back as the 1970s with the Portfolio System of Interacting Traders (POSIT)[1].


The use of artificial intelligence (AI) in business decision-making offers numerous benefits, but it also comes with a set of risks. Understanding and addressing these risks is crucial for organisations to effectively leverage AI technology while minimising potential negative outcomes. Discover the position of the UK Financial Conduct Authority on AI.


Algo-Trading and Decision-Making


Algo-trading involves the use of computer algorithms to automate trading decisions and execute trades in financial markets. These algorithms analyse market data, such as price movements and volume, and make trading decisions based on predefined rules and strategies. AI techniques are commonly used to develop and optimise trading algorithms; machine learning algorithms, for example, can be employed to learn patterns from historical market data and identify profitable trading strategies. AI algorithms can also adapt and improve over time based on market conditions, allowing for dynamic and adaptive trading decisions.
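
To make this concrete, here is a minimal Python sketch of the kind of rule-based logic such an algorithm might encode: a simple moving-average crossover strategy. It is illustrative only; the window sizes, the signal rule, and the price series are assumptions, and a real algo-trading platform would add execution logic, market-data handling, and risk limits.

    # Minimal sketch of a rule-based trading algorithm: a moving-average
    # crossover strategy. Illustrative only; windows and rules are
    # assumptions, not a production trading system.

    def moving_average(prices, window):
        """Average of the most recent `window` prices."""
        return sum(prices[-window:]) / window

    def crossover_signal(prices, short_window=5, long_window=20):
        """Return 'buy', 'sell', or 'hold' from a moving-average crossover."""
        if len(prices) < long_window:
            return "hold"  # not enough history yet
        short_ma = moving_average(prices, short_window)
        long_ma = moving_average(prices, long_window)
        if short_ma > long_ma:
            return "buy"   # short-term momentum above the long-term trend
        if short_ma < long_ma:
            return "sell"  # short-term momentum below the long-term trend
        return "hold"

    # Example: a steadily rising price series produces a "buy" signal.
    history = [100 + 0.5 * i for i in range(30)]
    print(crossover_signal(history))  # -> buy

A machine learning variant would replace the fixed windows and rules with parameters learned from historical data, which is precisely where the adaptivity, and the risks discussed below, come in.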


Algo-trading has been implicated in several high-profile so-called “flash crashes”: rapid and severe price declines followed by a quick recovery. They can be caused by a combination of factors such as market volatility, liquidity imbalances, technical glitches, or even deliberate manipulation. Algo-trading algorithms, if not properly designed or implemented, can contribute to these events by amplifying market movements or triggering a cascade of automated trades. Although the majority of these incidents were the result of complex interactions between various market factors rather than being solely attributable to algo-trading itself, the algorithms acted as an uncontrolled amplifier. It is believed that the 2016 flash crash of the British Pound (a 6% loss) was caused by an algo-trading platform. Another example is the 2010 flash crash that saw the Dow Jones index lose around 1,000 points in minutes.


Regulators and financial institutions have taken steps to mitigate the risks associated with algo-trading and enhance market stability. Risk controls, circuit breakers, and other mechanisms have been implemented to prevent excessive volatility and market disruptions.
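
As an illustration of what such a control can look like in code, here is a minimal Python sketch of a circuit breaker that halts automated trading when prices move too far within a time window. The 10% threshold and five-minute window are assumptions for illustration, not any exchange's actual rules.

    from collections import deque

    # Minimal sketch of a pre-trade risk control: halt automated trading
    # when the price moves more than `max_move` within `window_seconds`.
    # Threshold and window are illustrative assumptions.

    class CircuitBreaker:
        def __init__(self, max_move=0.10, window_seconds=300):
            self.max_move = max_move          # e.g. a 10% move trips the breaker
            self.window_seconds = window_seconds
            self.prices = deque()             # (timestamp, price) pairs
            self.halted = False

        def record(self, timestamp, price):
            self.prices.append((timestamp, price))
            # Drop observations that have aged out of the window.
            while self.prices and timestamp - self.prices[0][0] > self.window_seconds:
                self.prices.popleft()
            oldest_price = self.prices[0][1]
            if abs(price - oldest_price) / oldest_price > self.max_move:
                self.halted = True            # stop sending orders until reviewed

        def can_trade(self):
            return not self.halted

    # Example: a 12% drop within one minute trips the breaker.
    breaker = CircuitBreaker()
    breaker.record(0, 100.0)
    breaker.record(60, 88.0)
    print(breaker.can_trade())  # -> False

Requiring a human review to reset the breaker, rather than an automatic restart, keeps the final decision with people, a theme that recurs throughout the risks below.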


AI-Driven Business Decision-Making


AI-driven business decision-making refers to the use of artificial intelligence technologies and algorithms to inform or automate decision-making processes within an organisation. It involves leveraging AI capabilities to analyse data, derive insights, and generate recommendations or actions that can influence various aspects of business operations and strategies.


In the context of AI-driven business decisions, AI systems are trained on vast amounts of data to recognise patterns, correlations, and trends. These systems can then apply their learned knowledge to make predictions, optimise processes, and provide recommendations. The decision-making process can range from simple tasks, such as automated email sorting, to complex strategic decisions, such as resource allocation, pricing strategies, supply chain optimisation, or trading decisions, as we have just explored.
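
To ground this, here is a minimal Python sketch of the train-then-recommend pattern using scikit-learn. The scenario (deciding whether to offer a retention discount based on predicted renewal odds), the features, and the data are all hypothetical, chosen purely for illustration.

    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical data: [discount_pct, tenure_years] per customer,
    # with 1 meaning the customer renewed and 0 meaning they did not.
    X = [[0, 1], [5, 2], [10, 1], [0, 4], [15, 3], [5, 5], [10, 4], [0, 2]]
    y = [0, 0, 1, 1, 1, 1, 1, 0]

    model = LogisticRegression().fit(X, y)

    def recommend(discount_pct, tenure_years, threshold=0.5):
        """Turn a predicted renewal probability into a business action."""
        p_renew = model.predict_proba([[discount_pct, tenure_years]])[0][1]
        return "offer retention discount" if p_renew < threshold else "no action"

    # Example: score a new customer and read off the recommended action.
    print(recommend(discount_pct=0, tenure_years=1))

The model, the decision threshold, and the mapping from prediction to action are exactly the kind of design choices that carry the risks discussed in the next section.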


AI-driven business decisions aim to enhance decision-making by augmenting (and potentially substituting) human capabilities, improving efficiency, and enabling data-driven insights. By leveraging AI technologies, organisations can gain a competitive edge, make more informed decisions, and potentially uncover hidden opportunities or risks.


In my opinion, there is a clear distinction between augmenting human capabilities and substituting human capabilities. This differentiation has a significant impact on the risks associated with the use of AI. Read more about the impact of digitalisation on risk management.


Risks Associated with Artificial Intelligence


Whether you are a proponent or critic of artificial intelligence, it is important to acknowledge that the risks associated with its use are real and must be carefully considered. While AI offers numerous benefits and opportunities for businesses, it also poses potential challenges and negative outcomes that cannot be overlooked. Understanding and addressing these risks is crucial to ensure responsible and ethical deployment of AI technologies.


Here is an initial view of the risks associated with the use of AI for business decisions, and some potential mitigants:


  • Data Bias and Discrimination: AI systems heavily rely on data for training and decision-making. If the training data used to develop AI models is biased or discriminatory, the resulting decisions can perpetuate and amplify existing biases, leading to unfair outcomes. It is important to ensure that the training data is diverse, representative, and free from inherent biases to mitigate this risk; a minimal sketch of such a check follows the risk lists below.

  • Lack of Transparency: Some AI models, such as deep learning neural networks, can be highly complex and difficult to interpret. This lack of transparency poses challenges in understanding how AI systems arrive at their decisions. It can make it difficult for businesses to explain decisions to stakeholders, regulators, or customers, leading to potential legal and ethical concerns. To reduce this risk, consider employing interpretable AI models, utilising explainable AI techniques, documenting the model's architecture and decision-making process, conducting external audits, ensuring regulatory compliance, and establishing internal ethical guidelines.

  • Security and Privacy Concerns: AI systems often deal with sensitive and confidential data. If proper security measures are not in place, there is a risk of unauthorised access, data breaches, or malicious attacks targeting AI models. Moreover, AI models can inadvertently reveal sensitive information or violate privacy regulations if not appropriately designed and implemented. To stay on top of this pitfall, implement robust security measures such as encryption, access controls, and regular security audits to protect sensitive data. Additionally, users should follow privacy-by-design principles, adhere to privacy regulations, and conduct thorough testing and validation to ensure AI models do not inadvertently disclose sensitive information.

  • Adversarial Attacks: AI models can be susceptible to adversarial attacks, where malicious actors intentionally manipulate input data to deceive the AI system. These attacks can lead to incorrect decisions, compromise system integrity, or enable unauthorised access to sensitive information. Robust security measures (such as data validation and verification, anomaly detection techniques, and encryption to protect against malicious manipulation of input data), as well as ongoing monitoring, are essential to mitigate this risk.

  • Dependency and Technical Issues: Over-reliance on AI systems can create a dependency that leaves businesses vulnerable to technical failures or malfunctions. If an AI system encounters errors or experiences downtime, it can disrupt critical business operations, leading to financial losses and customer dissatisfaction. In that context, it is not dissimilar to outsourcing part of a business value chain to another organisation. This requires considering backup systems and redundancy, and stringent business continuity plans, including reverting to “old-fashioned” decision-making.

  • Regulatory and Legal Compliance: The use of AI may be subject to various legal and regulatory frameworks, including data protection, privacy, and anti-discrimination laws. Failure to comply with these regulations can result in penalties, legal challenges, or reputational damage. Organisations must ensure that their AI systems and decision-making processes align with relevant laws and regulations.

  • Workforce Displacement and Skills Gap: The automation potential of AI can lead to concerns about job displacement and a widening skills gap. Organisations must carefully manage the impact on the workforce, ensuring that employees are reskilled and redeployed to roles that complement AI systems, rather than being replaced entirely.

  • Lack of Human Oversight: Over-reliance on AI systems without sufficient human oversight can lead to unintended consequences. AI models are trained on historical data, and they may not always account for rare or unprecedented scenarios. Human intervention is necessary to ensure that decisions align with business objectives and ethical considerations. This issue also raises the question of the substitution of “human capabilities” in business decisions. Out of curiosity, I asked ChatGPT to provide a view on this particular point; here is its answer:


“While AI can automate certain tasks and augment human capabilities, complete substitution of human oversight is not recommended. The mitigation strategy emphasises the importance of human involvement to ensure decisions align with business objectives and ethical considerations. Human judgment, critical thinking, and domain expertise play a crucial role in addressing unique scenarios, making complex judgments, and maintaining accountability in AI-driven decision-making processes.”


Connected to human supervision, there are a number of additional risks to consider:

  • Limited Accountability: Determining liability or responsibility for decisions made by AI systems can be challenging. When an AI system makes a mistake or causes harm, it can be difficult to assign accountability, especially if the decision-making process is opaque. This can create legal, reputational, and regulatory risks for organisations. To prevent this situation, organisations should establish clear mechanisms for tracking and documenting AI decisions, ensure transparency in the decision-making process, and establish protocols for assigning responsibility and accountability when errors or harm occur. This also comes back to the previous risk and the need for human supervision.

  • Lack of Ethical Decision-Making: AI decisions may not always align with ethical considerations, as they prioritise optimisation or predefined objectives without taking into account broader societal impacts. This can result in decisions that are legal but unethical, leading to reputational damage and loss of customer trust.
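
Returning to the data bias point flagged in the first list, here is a minimal Python sketch of the kind of pre-training check it calls for: comparing outcome rates across groups in the training data. The records and group labels are hypothetical, and a rate gap is a prompt for investigation rather than proof of discrimination.

    from collections import defaultdict

    # Hypothetical training records for a lending model: each has a
    # protected-group label and a historical approval outcome.
    records = [
        {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
        {"group": "A", "approved": 0}, {"group": "B", "approved": 0},
        {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
    ]

    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        approvals[record["group"]] += record["approved"]

    # Approval rate per group; a large gap flags data that could teach a
    # model to reproduce historical bias.
    rates = {group: approvals[group] / totals[group] for group in totals}
    print(rates)  # -> roughly {'A': 0.67, 'B': 0.33}

Checks like this sit naturally alongside the human oversight discussed above: the code can surface a disparity, but deciding what to do about it remains a human judgment.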


As discussions about Artificial Intelligence flood social media feeds, it's important to recognise both its benefits and risks, particularly when it comes to business decisions. While AI can offer significant advantages, including increased efficiency and access to vast information, organisations must also address potential pitfalls to ensure responsible AI deployment. Risks such as data bias, lack of transparency, security concerns, adversarial attacks, dependency on AI systems, regulatory compliance, workforce displacement, and limited human oversight need to be mitigated through diverse and representative training data, interpretability, robust security measures, contingency plans, adherence to regulations, reskilling programs, and the involvement of human judgment and ethical considerations.


There is little doubt in my mind that businesses, states, and others are embracing, or will embrace, the use of AI. Recognising and addressing the risks associated with AI-driven decision-making is an opportunity for them to demonstrate their commitment to ethical and responsible AI practices, earning the trust and loyalty of stakeholders in an increasingly AI-driven world.

 

[1] POSIT (Portfolio System of Interacting Traders) was a pioneering electronic trading system, dating back to the late 1970s, that employed algorithms to facilitate the execution of trades. POSIT aimed to match buy and sell orders directly between institutional investors, bypassing the traditional exchange floor. By utilising algorithms to identify compatible trades and execute them automatically, POSIT brought efficiency and speed to the trading process.
