Abstract
In recent years, we have witnessed spectacular achievements of Artificial Intelligence (AI) and, in particular, Machine Learning (ML). Highly visible accomplishments in natural language processing and computer vision have impacted numerous areas of human endeavour. Driven by advances in learning methods and architectural developments, ML constructs are highly impactful and carry far-reaching consequences in areas such as autonomous vehicles, health-care imaging, and decision-making in critical domains. The quality of ML architectures and the credibility of the results they generate depend on the nature, quality, and amount of available data. The credibility of ML models and the confidence quantifying their results are also of paramount concern in any critical application. In this study, we advocate that the credibility (confidence) of results produced by ML constructs is inherently expressed in the form of information granules. Several development scenarios are carefully revisited, including those involving constructs in statistics (confidence and prediction intervals), probability (Gaussian process models), and granular parameters (fuzzy sets and interval techniques). We also address federated learning, a commonly encountered and challenging category of ML applications in which the quality of the model and its results calls for a thorough assessment.
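As a minimal illustration of a result expressed as an information granule rather than a single number, the sketch below computes a classical 95% prediction interval for simple linear regression; the interval is the granule accompanying the point prediction. The data, the `prediction_interval` helper, and the hard-coded Student-t critical value are hypothetical, introduced here only for illustration and not taken from the study itself.

```python
import math
import statistics


def prediction_interval(xs, ys, x_new, t_crit=2.776):
    """95% prediction interval for simple linear regression.

    t_crit is the two-sided Student-t critical value for n - 2 degrees
    of freedom (2.776 corresponds to n = 6, alpha = 0.05).
    Returns the interval (granule) around the point prediction at x_new.
    """
    n = len(xs)
    x_bar = statistics.fmean(xs)
    y_bar = statistics.fmean(ys)
    sxx = sum((x - x_bar) ** 2 for x in xs)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sxx
    intercept = y_bar - slope * x_bar
    # residual standard error with n - 2 degrees of freedom
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(sse / (n - 2))
    # the half-width grows away from x_bar: the granule widens where
    # the model is less credible
    half = t_crit * s * math.sqrt(1 + 1 / n + (x_new - x_bar) ** 2 / sxx)
    y_hat = intercept + slope * x_new
    return (y_hat - half, y_hat + half)


# Hypothetical data: a nearly linear trend with small noise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1, 6.0]
lo, hi = prediction_interval(xs, ys, 3.5)
```

The interval `(lo, hi)` is the granular counterpart of the numeric prediction at `x = 3.5`; its width directly quantifies the confidence one can attach to that prediction.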