Computer vision, natural language processing (NLP), and other domains have seen remarkable success with machine learning (ML) approaches based on deep neural networks (NNs). However, the long-standing tension between interpretability and efficiency remains a formidable obstacle. Interpretability, often described as the degree to which a person can grasp the source of a conclusion, is what allows us to question, comprehend, and trust deep ML approaches.
Bayesian networks, Boltzmann machines, and other probabilistic ML models are considered "white boxes" because they are inherently interpretable: they use probabilistic reasoning to uncover hidden causal relationships, which aligns with how humans reason statistically. Regrettably, state-of-the-art deep NNs outperform these probabilistic models by a considerable margin. It appears that current ML models cannot achieve both high efficiency and interpretability at the same time.
Thanks to the rapid growth of quantum and classical computing, a new tool for resolving the efficiency vs. interpretability conundrum has emerged: the tensor network (TN). A TN is the contraction of two or more tensors, and its network structure defines how those tensors are contracted.
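To make that definition concrete, here is a minimal NumPy sketch (not from the paper) of contracting a three-tensor network, a tiny matrix product state; the tensor names, shapes, and bond dimension `chi` are illustrative assumptions:

```python
import numpy as np

# A minimal TN: a 3-site matrix product state (MPS). Names, shapes, and
# the bond dimension chi are illustrative assumptions, not the paper's
# notation. Physical dimension d = 2, bond dimension chi = 4.
d, chi = 2, 4
A1 = np.random.rand(d, chi)        # left boundary tensor
A2 = np.random.rand(chi, d, chi)   # bulk tensor
A3 = np.random.rand(chi, d)        # right boundary tensor

# The network structure is the contraction pattern: shared "bond" indices
# a and b are summed over, leaving the physical indices i, j, k.
psi = np.einsum("ia,ajb,bk->ijk", A1, A2, A3)
print(psi.shape)  # (2, 2, 2): the full tensor recovered from the network
```

Storing the three small tensors instead of the full contracted tensor is what makes TNs efficient: for n sites the full tensor has d^n entries, while the MPS form needs only on the order of n * d * chi^2.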
A new paper from Capital Normal University and the University of Chinese Academy of Sciences surveys encouraging developments in TNs for efficient and interpretable quantum-inspired ML. The authors' "TN ML butterfly" enumerates the benefits of TNs for ML, which fall into two main areas: interpretability through quantum theories and efficiency through quantum methods. With quantum theories such as entanglement theory and quantum statistics, TNs can support a probabilistic framework for interpretability that may go beyond what classical information theory or statistical approaches can describe.
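As one hedged illustration of such a quantum-style probabilistic framework, quantum-inspired TN models are often read as "Born machines," which assign probabilities via the Born rule p(x) = |psi(x)|^2 / Z; the toy amplitude tensor below is an assumption for illustration, not a model from the survey:

```python
import numpy as np

# A toy "Born machine" over 3 binary variables: a tensor of amplitudes
# assigns each configuration a probability via the Born rule.
# (Illustrative assumption; the survey's own models may differ.)
rng = np.random.default_rng(0)
psi = rng.normal(size=(2, 2, 2))   # amplitudes for 3 binary variables
Z = np.sum(psi ** 2)               # normalization over all 2**3 outcomes

def prob(bits):
    """Born-rule probability of one bitstring, e.g. (0, 1, 0)."""
    return psi[bits] ** 2 / Z

print(prob((0, 1, 0)))                            # one configuration
print(sum(prob(b) for b in np.ndindex(2, 2, 2)))  # ~1.0 by construction
```

In practice the amplitude tensor psi would itself be parameterized as a TN (such as the MPS above) rather than stored in full, which is what keeps the model tractable at scale.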
Conversely, quantum-inspired TN ML approaches can run efficiently on both classical and quantum computing platforms, thanks to robust quantum-mechanical TN algorithms and substantially improved quantum computing technology. In particular, generative pretrained transformers have recently advanced rapidly, driving unprecedented surges in computational power and model complexity; this presents both opportunities and challenges for TN ML. In the face of this new generation of artificial intelligence (AI), the capacity to interpret results will matter more than ever, enabling more effective investigation, safer control, and better utilization.
The researchers believe that, in the present noisy intermediate-scale quantum (NISQ) era and as we enter the period of true quantum computing, the TN is quickly becoming a leading mathematical tool for investigating quantum AI from various angles, including theories, models, algorithms, software, hardware, and applications.
Dhanshree Shenwai is a computer science engineer with solid experience at FinTech companies spanning the financial, cards and payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone's life easier in today's evolving world.