Computing Reviews

TSLAM: A Trust-enabled Self-Learning Agent Model for Service Matching in the Cloud Market
Li W., Cao J., Qian S., Buyya R. ACM Transactions on Autonomous and Adaptive Systems 13(4): 1-41, 2019. Type: Article
Date Reviewed: 02/15/22

Cloud services and cloud computing in general have experienced explosive growth. Undoubtedly it is easier for users, even sophisticated ones, to leave the task of running information and communications technology (ICT) infrastructure to specialized professionals, and to instead concentrate on the data and data flows essential to their businesses. These days, however, more often than not, it’s a little bit like the Old West out there: users are often not fully aware of their own needs, while some service providers are not completely trustworthy in what they offer; the net result is much time and money wasted in search of the right combination of resources. Systems already exist that match users and the resources they need; most of them are centralized, meaning they are prone to single point of failure (SPOF) problems.

This paper proposes a different model in which users, service providers, and brokers all collaborate so that everyone benefits. The trust-enabled self-learning agent model (TSLAM) for service matching relies on several concepts that work together. First, trust, defined as an evaluation of the “reputation and reliability” of users, providers, and services, acts as a powerful barrier against fraud, slowly pushing untrustworthy providers out of the loop. Next is self-learning: the system evolves over time, and its recommendations grow increasingly reliable. Finally, agents: in the real world, real people search for, offer, or use ICT services; in this model they are represented by software agents that roam the cloud on their behalf and try to fulfill their needs.

The paper starts by describing existing centralized models in which brokers match users, service providers, and resources. It then proposes its own model, where different kinds of agents cooperate to define resource ecosystems. Provider agents start by proposing their services to the largest possible selection of broker agents; as time goes by, broker agents cull untrustworthy agents and their services, following the concept of trust as defined in the model. Similarly, user agents first address the largest possible selection of brokers and then learn to trust the best brokers and drop the others. Trust is formally defined, with formulae that increase or decrease its level; the higher the trust between agents, the more they interact. Agents interact via different kinds of messages, which are also formally defined. In addition to the formal definitions, diagrams, flowcharts, and pseudocode help readers understand the model in a more intuitive way.
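To make the idea of trust as a learned, adjustable quantity concrete, here is a minimal sketch of the kind of update rule the model describes; it is not the authors' formula. The learning rate, outcome scores, and culling threshold below are illustrative assumptions.

```python
# Illustrative sketch only: a simple trust-update rule in the spirit of TSLAM.
# The actual formulae in the paper differ; alpha, the outcome scores, and the
# selection threshold here are hypothetical values chosen for demonstration.

class Agent:
    def __init__(self, name, alpha=0.1, initial_trust=0.5):
        self.name = name
        self.alpha = alpha            # how quickly trust adapts to new evidence
        self.initial_trust = initial_trust
        self.trust = {}               # peer name -> trust level in [0, 1]

    def update_trust(self, peer, outcome):
        """Raise trust after a successful interaction (outcome=1.0),
        lower it after a failed or fraudulent one (outcome=0.0)."""
        current = self.trust.get(peer, self.initial_trust)
        self.trust[peer] = (1 - self.alpha) * current + self.alpha * outcome

    def trusted_peers(self, threshold=0.6):
        """Peers above the threshold attract more interactions; the rest are culled."""
        return [peer for peer, level in self.trust.items() if level >= threshold]


# A user agent gradually learns which broker to keep interacting with.
user = Agent("user-agent")
for _ in range(10):
    user.update_trust("broker-A", outcome=1.0)   # consistently good matches
    user.update_trust("broker-B", outcome=0.0)   # repeated failures
print(user.trusted_peers())                      # ['broker-A']
```

Under such a rule, untrustworthy brokers and providers are dropped not by a central authority but by each agent's own accumulating experience, which is what removes the single point of failure of centralized matching.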

The presented experiments show improved efficiency and effectiveness under different learning strategies, and the authors indicate directions for future work. Since such indications can also serve as research suggestions for other teams, herein lies the most useful lesson of this paper: the cooperation described among software user agents could also be extended to real people, greatly benefiting their everyday tasks.

Reviewer: Andrea Paramithiotti   Review #: CR147408
