Artificial intelligence (AI) has been gaining significant attention in various fields as a means to reduce costs, increase revenue, and improve customer satisfaction. AI can be particularly beneficial in enhancing decision-making for complex and ill-structured problems that lack transparency and have unclear goals. Most AI algorithms require labeled datasets to learn problem characteristics, draw decision boundaries, and generalize. However, most datasets collected to solve complex and ill-structured problems are unlabeled. Additionally, most AI algorithms are opaque and not easily interpretable, making it hard for decision-makers to obtain model insights for developing effective solution strategies. To this end, we examine existing AI paradigms, mainly symbolic AI (SAI), guided by human domain knowledge, and data-driven AI (DAI), guided by data. We propose an approach called informed AI (IAI) that integrates human domain knowledge into AI to develop effective and reliable data labeling and model explainability processes. We demonstrate and validate IAI by applying it to a social media dataset comprising conversations between customers and customer support agents to construct a solution, the IAI defect explorer (I-AIDE). I-AIDE is used to identify product defects and extract the voice of the customer, helping managers make decisions that improve quality and enhance customer satisfaction.
- Artificial intelligence
- Data labeling
- Dynamic decision-making environments
- Explainable artificial intelligence (XAI)