TradePoint.io

Opening the Black Box on AI Explainability

May 20, 2025
in AI & Technology
Reading Time: 5 mins read

Artificial Intelligence (AI) has become intertwined with almost every facet of our daily lives, from personalized recommendations to critical decision-making. It is a given that AI will continue to advance, and as it does, the threats associated with it will grow more sophisticated. As businesses deploy AI-enabled defenses in response to this growing complexity, the next step toward an organization-wide culture of security is improving AI’s explainability.

While these systems offer impressive capabilities, they often function as “black boxes,” producing results without clear insight into how the model arrived at its conclusions. When AI systems make false statements or take false actions, the result can be significant problems and business disruption. When companies make mistakes because of AI, their customers and consumers demand an explanation, and soon after, a solution.

But what is to blame? Often, bad training data. Most public GenAI technologies, for example, are trained on data scraped from the Internet, which is often unverified and inaccurate. AI can generate fast responses, but the accuracy of those responses depends on the quality of the data it was trained on.

AI mistakes can take many forms, including generated scripts with incorrect commands, false security decisions, or an employee locked out of their business systems because of a false accusation made by the AI system. All of these have the potential to cause significant business outages. This is just one of the many reasons why transparency is key to building trust in AI systems.

Building in Trust

We live in a culture that places trust in all kinds of sources and information, yet increasingly demands proof, constantly validating news, information, and claims. When it comes to AI, we are putting trust in a system that has the potential to be inaccurate. More importantly, without transparency into the basis on which decisions are made, it is impossible to know whether the actions an AI system takes are accurate. What if your cyber AI system shuts down machines, but it misread the signs? Without insight into what information led the system to that decision, there is no way to know whether it made the right one.

While disruption to business is frustrating, one of the more significant concerns with AI use is data privacy. AI systems like ChatGPT are machine-learning models that source answers from the data they receive. Therefore, if users or developers accidentally provide sensitive information, the model may use that data to generate responses for other users that reveal confidential information. These mistakes have the potential to severely damage a company’s efficiency, profitability, and, most importantly, customer trust. AI systems are meant to increase efficiency and ease processes, but when constant validation is necessary because outputs cannot be trusted, organizations not only waste time but also open the door to potential vulnerabilities.
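As one illustration of how teams mitigate this leakage risk, here is a minimal sketch of redacting sensitive strings before a prompt leaves the organization. The patterns and labels are hypothetical examples for illustration, not a production PII detector:

```python
import re

# Illustrative patterns only; real deployments use dedicated PII-detection tools.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt is sent to a model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com about SSN 123-45-6789"))
# Contact [EMAIL] about SSN [SSN]
```

A scrub step like this is cheap to run on every outbound prompt, and it fails safe: a false positive costs a little context, while a false negative is what leaks data.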

Training Teams for Responsible AI Use

To protect organizations from the potential risks of AI use, IT professionals bear the important responsibility of adequately training their colleagues to use AI responsibly. In doing so, they help keep their organizations safe from cyberattacks that threaten their viability and profitability.

However, before training teams, IT leaders need to align internally on which AI systems will be a fit for their organization. Rushing into AI will only backfire later, so start small, focusing on the organization’s needs. Ensure that the standards and systems you select align with your current tech stack and company goals, and that the AI systems meet the same security standards as any other vendor you select.

Once a system has been selected, IT professionals can begin exposing their teams to it. Start by using AI for small tasks, observing where it performs well and where it does not, and learning which dangers to watch for and which validations need to be applied. Then introduce AI to augment work, enabling faster self-service resolution, starting with simple “how to” questions. From there, teams can be taught how to put validations in place. This is valuable because more jobs will come to center on defining boundary conditions and validations, as is already happening with AI-assisted software development.
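The small-task validation described above can be sketched as a simple allowlist check on an AI-generated shell command before anything is executed. The command set and flags here are illustrative assumptions, not a recommended policy:

```python
# Hypothetical guard for AI-generated shell commands: a suggestion is only
# eligible for execution if it passes an allowlist and flag check.
ALLOWED_COMMANDS = {"ls", "df", "uptime", "systemctl"}
FORBIDDEN_FLAGS = {"--force", "-rf"}

def validate_command(command: str) -> bool:
    """Return True only if the command starts with an allowed binary
    and contains no forbidden flags."""
    parts = command.split()
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return False
    return not any(part in FORBIDDEN_FLAGS for part in parts)

# The AI suggestion is checked, never trusted:
print(validate_command("df -h"))         # True
print(validate_command("rm -rf /data"))  # False
```

The point of the exercise is less the specific rules than the habit it builds: model output is treated as a proposal that must clear an explicit boundary condition before it acts on anything.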

In addition to these actionable training steps, initiating and encouraging discussion is imperative. Encourage open, data-driven dialogue on how AI is serving user needs: Is it solving problems accurately and faster? Is it driving productivity for both the company and the end user? Is our customer NPS score increasing because of these AI-driven tools? Be clear on the return on investment (ROI) and keep it front and center. Clear communication builds awareness of responsible use, and as team members better grasp how the AI systems work, they are more likely to use them responsibly.

How to Achieve Transparency in AI

Although training teams and raising awareness are important, achieving transparency in AI requires more context around the data used to train the models, ensuring that only quality data is used. Hopefully, there will eventually be a way to see how a system reasons so that we can fully trust it. Until then, we need systems that can work with validations and guardrails and prove that they adhere to them.
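One way such a guardrail might look in practice, as a minimal sketch: an AI-proposed action is only allowed above a confidence threshold, and every decision is logged so adherence can be audited later. The threshold, field names, and action names are hypothetical:

```python
import datetime
import json

AUDIT_LOG = []  # in practice, an append-only store

def guarded_action(action: str, confidence: float, threshold: float = 0.9) -> bool:
    """Allow an AI-proposed action only above a confidence threshold,
    recording every decision so adherence to the guardrail can be proven."""
    allowed = confidence >= threshold
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "confidence": confidence,
        "allowed": allowed,
    })
    return allowed

guarded_action("quarantine_host", 0.95)   # permitted
guarded_action("shutdown_machine", 0.40)  # blocked, but still logged
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

The audit trail is what makes the guardrail more than a promise: it gives reviewers the record needed to verify, after the fact, that the system actually stayed within its boundaries.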

While full transparency will inevitably take time to achieve, the rapid growth of AI and its usage makes it necessary to work quickly. As AI models grow in complexity, so does their power to benefit humanity, and so do the consequences of their errors. Understanding how these systems arrive at their decisions is therefore essential to keeping them effective and trustworthy. By focusing on transparent AI systems, we can ensure that the technology is as useful as intended while remaining unbiased, ethical, efficient, and accurate.
