95 results for Multi-classifier systems
Abstract:
The current ubiquitous network access and increase in network bandwidth are driving the sales of mobile location-aware user devices and, consequently, the development of context-aware applications, namely location-based services. The goal of this project is to provide consumers of location-based services with a richer end-user experience by means of service composition, personalization, device adaptation and continuity of service. Our approach relies on a multi-agent system composed of proxy agents that act as mediators and providers of personalization meta-services, device adaptation and continuity of service for consumers of pre-existing location-based services. These proxy agents, which have Web services interfaces to ensure a high level of interoperability, perform service composition and take into consideration the preferences of the users and the limitations of the user devices, making the use of different types of devices seamless for the end-user. To validate and evaluate the performance of this approach, use cases were defined, tests were conducted, and the gathered results demonstrated that the initial goals were successfully fulfilled.
Abstract:
The environmental management domain is vast and encompasses many identifiable activities: impact assessment, planning, project evaluation, etc. In particular, this paper focuses on the modelling of the project evaluation activity. The environmental decision support system under development aims to assist project developers in selecting adequate locations, guaranteeing compliance with the applicable regulations and the existing development plans as well as satisfying the specified project requirements. The inherently multidisciplinary nature of this activity leads to the adoption of the Multi-Agent paradigm and, in particular, to the modelling of the involved agencies as a community of cooperative autonomous agents, where each agency contributes its share of problem solving to the system's final recommendation. To achieve this behaviour, the many conclusions of the individual agencies have to be justifiably accommodated: not only may they differ, but they can also be interdependent, complementary, irreconcilable or simply independent. We propose different solutions (involving both local and global consistency) to support the adequate merging of the distinct perspectives that inevitably arise during this type of decision making.
Abstract:
In a real-world multiagent system, where the agents are faced with partial, incomplete and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents have goals or beliefs that cannot hold simultaneously. Conflict resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques as the support for conflict resolution in the analysis of the validity of the candidate beams to be produced in the CERN particle accelerators. This CERN multiagent system contains a higher-level agent in the hierarchy, the Specialist agent, which makes use of meta-knowledge (on how the conflicting beliefs have been produced by the other agents) in order to detect which beliefs should be abandoned. Upon solving a conflict, the Specialist instructs the involved agents to revise their beliefs accordingly. Conflicts in the problem domain are mapped into conflicting beliefs of the distributed belief revision system, where they can be handled by proven formal methods. This technique builds on well-established concepts and combines them in a new way to solve important problems. We find this approach generally applicable in several domains.
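The abstract does not spell out the Specialist's decision rule, so the following Python sketch only illustrates the general idea: a higher-level agent inspects meta-knowledge about how each conflicting belief was derived and asks the losing agents to retract theirs. All class and attribute names (Belief, SpecialistAgent, evidence_credibility, ...) are hypothetical, and the simple credibility-based rule stands in for the formal distributed belief revision machinery used in the actual system.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    proposition: str          # e.g. "beam_42_valid"
    value: bool
    source_agent: str
    # meta-knowledge: how the belief was derived and how reliable its inputs are
    derivation_depth: int = 0
    evidence_credibility: float = 1.0

class SpecialistAgent:
    """Toy 'Specialist' that resolves conflicts between agents' beliefs."""

    def resolve(self, conflicting: list[Belief]) -> Belief:
        # Keep the belief whose derivation rests on the most credible,
        # most direct evidence; instruct the other agents to retract theirs.
        winner = max(conflicting,
                     key=lambda b: (b.evidence_credibility, -b.derivation_depth))
        for belief in conflicting:
            if belief is not winner:
                self.instruct_revision(belief)
        return winner

    def instruct_revision(self, belief: Belief) -> None:
        # In the real system this would be a message asking the owning agent
        # to revise (retract) the belief and its consequences.
        print(f"ask {belief.source_agent} to retract '{belief.proposition}'")

# Example: two agents disagree about the validity of a candidate beam.
a = Belief("beam_42_valid", True,  "AgentA", derivation_depth=1, evidence_credibility=0.9)
b = Belief("beam_42_valid", False, "AgentB", derivation_depth=3, evidence_credibility=0.6)
print(SpecialistAgent().resolve([a, b]))
```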
Abstract:
Electricity markets are complex environments with very particular characteristics. A critical issue concerns the constant changes they are subject to, a result of the restructuring of electricity markets, undertaken to increase competitiveness but with significant implications for the complexity and unpredictability of these markets. The constant growth in market unpredictability has amplified the need of market participants to foresee market behavior. The need to understand market mechanisms and how the interaction of the involved players affects market outcomes has contributed to the growing use of simulation tools. Multi-agent based software is particularly well suited to analyzing dynamic and adaptive systems with complex interactions among their constituents, such as electricity markets. This paper presents the Multi-Agent System for Competitive Electricity Markets (MASCEM), a simulator based on multi-agent technology that provides a realistic platform to simulate electricity markets, their numerous negotiation opportunities and the participating entities.
Abstract:
Electricity markets worldwide have undergone profound transformations. The privatization of previously nationally owned systems, the deregulation of privately owned systems that were previously regulated, and the strong interconnection of national systems are some examples of such transformations [1, 2]. In general, competitive environments, as is the case of electricity markets, require good decision-support tools to assist players in their decisions. Relevant research is being undertaken in this field, namely concerning player modeling and simulation, strategic bidding and decision support.
Abstract:
Multi-agent approaches have been widely used to model complex systems of a distributed nature with a large number of interactions between the involved entities. Power systems are a reference case, mainly due to the increasing use of distributed energy sources, largely based on renewable sources, which have driven huge changes in the power systems sector. Dealing with such a large-scale integration of intermittent generation sources has led to the emergence of several new players, as well as to the development of new paradigms, such as the microgrid concept, and the evolution of demand response programs, which promote the active participation of consumers. This paper presents a multi-agent based simulation platform that models a microgrid environment, considering several different types of simulated players. These players interact with real physical installations, creating a realistic simulation environment whose results can be observed directly in reality. A case study is presented considering the players' responses to a demand response event, resulting in an intelligent increase in consumption in order to accommodate a wind generation surplus.
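Purely as an illustration of the kind of rule a simulated consumer agent might apply during such an event (the abstract does not give the actual control logic), the toy Python function below raises a consumer's setpoint by whatever part of the allocated wind surplus its flexible load can absorb; all names and figures are made up.

```python
def respond_to_surplus(baseline_kw: float,
                       flexible_capacity_kw: float,
                       allocated_surplus_kw: float) -> float:
    """New consumption setpoint: absorb as much of the surplus allocated to
    this consumer as its flexible (shiftable) load allows."""
    increase = min(flexible_capacity_kw, max(0.0, allocated_surplus_kw))
    return baseline_kw + increase

# A consumer with a 3 kW baseline and 2 kW of flexible load, asked by the
# aggregator to absorb 1.5 kW of the wind generation surplus.
print(respond_to_surplus(baseline_kw=3.0, flexible_capacity_kw=2.0,
                         allocated_surplus_kw=1.5))   # -> 4.5
```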
Abstract:
An ever increasing need for extra functionality in a single embedded system demands extra Input/Output (I/O) devices, which are usually connected externally and are expensive in terms of energy consumption. To reduce their energy consumption, these devices are equipped with power saving mechanisms. While I/O device scheduling for real-time (RT) systems with such power saving features has been studied in the past, the use of energy resources by these scheduling algorithms can be improved. Technology enhancements in the semiconductor industry have allowed hardware vendors to reduce device transition and energy overheads. The decrease in the overhead of sleep transitions has opened new opportunities to further reduce device energy consumption. In this research effort, we propose an intra-task device scheduling algorithm for real-time systems that wakes up a device on demand and reduces its active time while ensuring system schedulability. This intra-task device scheduling algorithm is extended for devices with multiple sleep states to further minimise the overall device energy consumption of the system. The proposed algorithms have lower complexity when compared to the conservative inter-task device scheduling algorithms. The system model used relaxes some of the assumptions commonly made in the state of the art that restrict their practical relevance. Apart from the aforementioned advantages, the proposed algorithms are shown to achieve substantial energy savings.
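The abstract does not detail the algorithm, but scheduling for devices with multiple sleep states builds on the usual break-even reasoning; the sketch below (Python, with illustrative parameters and hypothetical names, not the paper's algorithm) shows that underlying idea: for a predicted idle interval, pick the deepest sleep state whose transition overhead still pays off and whose transition time still fits the interval.

```python
from dataclasses import dataclass

@dataclass
class SleepState:
    name: str
    power_w: float             # power drawn while in this state
    transition_time_s: float   # time to enter and leave the state
    transition_energy_j: float # energy spent entering and leaving the state

def break_even_time(state: SleepState, active_idle_power_w: float) -> float:
    """Minimum idle length for which sleeping in `state` saves energy."""
    saved_per_second = active_idle_power_w - state.power_w
    return state.transition_energy_j / saved_per_second if saved_per_second > 0 else float("inf")

def choose_sleep_state(states: list[SleepState],
                       idle_interval_s: float,
                       active_idle_power_w: float) -> SleepState | None:
    """Deepest (lowest-power) state that is worthwhile and wakes up in time."""
    candidates = [s for s in states
                  if s.transition_time_s <= idle_interval_s
                  and break_even_time(s, active_idle_power_w) <= idle_interval_s]
    return min(candidates, key=lambda s: s.power_w) if candidates else None

# Illustrative device with two sleep states and a 50 ms predicted idle interval.
states = [
    SleepState("light", power_w=0.5, transition_time_s=0.001, transition_energy_j=0.01),
    SleepState("deep",  power_w=0.1, transition_time_s=0.010, transition_energy_j=0.10),
]
print(choose_sleep_state(states, idle_interval_s=0.05, active_idle_power_w=1.0))
```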
Abstract:
Over the past decades several approaches to schedulability analysis have been proposed for both uni-processor and multi-processor real-time systems. Although different techniques are employed, very little has been put forward in using formal specifications, with the consequent possibility of misinterpretations or ambiguities in the problem statement. Using a logic-based approach to schedulability analysis in the design of hard real-time systems eases the synthesis of correct-by-construction procedures for both static and dynamic verification processes. In this paper we propose a novel approach to schedulability analysis based on a timed temporal logic with time durations. Our approach subsumes classical methods for uni-processor scheduling analysis over compositional resource models by providing the developer with counter-examples, and by ruling out schedules that cause unsafe violations in the system. We also provide an example showing the effectiveness of our proposal.
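For context, the best-known classical uni-processor method that such a logic-based approach subsumes is fixed-priority response-time analysis; a minimal Python sketch of that classical recurrence is given below (task parameters are illustrative, and this is not the paper's temporal-logic formulation).

```python
import math

def response_time(task_index: int, tasks: list[tuple[float, float]]) -> float | None:
    """Worst-case response time of tasks[task_index].

    Each task is (C, T): worst-case execution time and period (deadline = period).
    Tasks are listed in decreasing priority order. Returns None if unschedulable.
    Classical recurrence: R = C_i + sum over higher-priority j of ceil(R / T_j) * C_j.
    """
    c_i, t_i = tasks[task_index]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j
                           for c_j, t_j in tasks[:task_index])
        r_next = c_i + interference
        if r_next > t_i:
            return None          # no fixed point within the deadline: deadline miss
        if r_next == r:
            return r
        r = r_next

tasks = [(1.0, 4.0), (2.0, 6.0), (3.0, 12.0)]   # (C, T), highest priority first
print([response_time(i, tasks) for i in range(len(tasks))])   # [1.0, 3.0, 10.0]
```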
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To efficiently use multi-core platforms for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to the contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
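As a concrete illustration of an arbiter-dependent stage, the sketch below (Python, illustrative parameters, not the paper's actual model) bounds the latency of a single memory request under a simple TDM arbiter with one slot per core per round: in the worst case the request just misses its own slot and is only served in the next round.

```python
def worst_case_tdm_latency(num_cores: int, slot_cycles: int, request_cycles: int) -> int:
    """Worst-case latency (arrival to completion, in bus cycles) of one request
    under a simple TDM arbiter with one slot per core per round, assuming a
    request is only served if it fits entirely inside the owning core's slot.

    Worst case: the request arrives when only request_cycles - 1 cycles of its
    own slot remain, so it must wait out the rest of its slot, then all the
    other cores' slots, before being served in the next round.
    """
    assert request_cycles <= slot_cycles, "a request must fit in one slot"
    wait_own_slot = request_cycles - 1
    wait_other_slots = (num_cores - 1) * slot_cycles
    return wait_own_slot + wait_other_slots + request_cycles

# Four cores, 10-cycle slots, 4-cycle memory requests.
print(worst_case_tdm_latency(num_cores=4, slot_cycles=10, request_cycles=4))   # 37
```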
Abstract:
The 30th ACM/SIGAPP Symposium on Applied Computing (SAC 2015), Embedded Systems track, 13-17 April 2015, Salamanca, Spain.
Abstract:
While fractional calculus (FC) is as old as integer calculus, its application has been mainly restricted to mathematics. However, many real systems are better described by FC equations than by integer-order models. FC is a suitable tool for describing systems characterised by their fractal nature, long-term memory and chaotic behaviour. It is a promising methodology for failure analysis and modelling, since the behaviour of a failing system depends on factors that increase the model's complexity. This paper explores the proficiency of FC in modelling complex behaviour by tuning only a few parameters. This work proposes a novel two-step strategy for diagnosis: first, modelling common failure conditions and, second, comparing these models with real machine signals and using the difference to feed a computational classifier. Our proposal is validated using an electrical motor coupled with a mechanical gear reducer.
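A hedged sketch of the two-step idea follows (Python with NumPy, synthetic signals): a library of pre-computed failure-condition model signals is compared against a measured machine signal, and the resulting differences feed a classifier. Here a trivial nearest-model rule stands in for the computational classifier, and the FC-based failure models are replaced by made-up waveforms.

```python
import numpy as np

def diagnosis_features(signal: np.ndarray, failure_models: dict[str, np.ndarray]) -> dict[str, float]:
    """Step 2a: difference between the measured signal and each failure-condition model."""
    return {name: float(np.linalg.norm(signal - model))
            for name, model in failure_models.items()}

def classify(features: dict[str, float]) -> str:
    """Step 2b: toy classifier -- pick the failure model the signal is closest to."""
    return min(features, key=features.get)

# Step 1 (offline): illustrative 'models' of a healthy state and two failure conditions.
t = np.linspace(0, 1, 500)
failure_models = {
    "healthy":       np.sin(2 * np.pi * 50 * t),
    "gear_fault":    np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 200 * t),
    "bearing_fault": np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.default_rng(0).standard_normal(t.size),
}

# A measured signal resembling the gear-fault condition.
measured = np.sin(2 * np.pi * 50 * t) + 0.28 * np.sin(2 * np.pi * 200 * t)
print(classify(diagnosis_features(measured, failure_models)))   # expected: gear_fault
```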
Abstract:
Distributed real-time systems such as automotive applications are becoming larger and more complex, thus requiring the use of more powerful hardware and software architectures. Furthermore, these distributed applications commonly have stringent real-time constraints, which implies that such applications would gain in flexibility if they were parallelized and distributed over the system. In this paper, we consider the problem of allocating fixed-priority fork-join Parallel/Distributed real-time tasks onto distributed multi-core nodes connected through a Flexible Time Triggered Switched Ethernet network. We analyze the system requirements and present a set of formulations based on a constraint programming approach. Constraint programming allows us to express the relations between variables in the form of constraints. Our approach is guaranteed to find a feasible solution, if one exists, in contrast to other approaches based on heuristics. Furthermore, approaches based on constraint programming have been shown to obtain solutions for this type of formulation in reasonable time.
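As a toy illustration only (not the paper's formulation, and using exhaustive search instead of a real CP solver), the Python sketch below shows the shape of such a constraint-based allocation: decision variables map each subtask to a core, and a capacity constraint must hold on every core. Like CP, the exhaustive search either returns a feasible assignment or proves that none exists. All task parameters are invented.

```python
from itertools import product

# (name, utilisation) of the parallel subtasks to be placed, and core capacities.
subtasks = [("t1", 0.4), ("t2", 0.3), ("t3", 0.5), ("t4", 0.2)]
cores = {"core0": 1.0, "core1": 1.0}

def feasible(assignment: dict[str, str]) -> bool:
    """Constraint: total utilisation assigned to each core stays within its capacity."""
    load = {c: 0.0 for c in cores}
    for name, util in subtasks:
        load[assignment[name]] += util
    return all(load[c] <= cap for c, cap in cores.items())

def solve() -> dict[str, str] | None:
    """Exhaustive search over the assignment space (stand-in for a CP solver)."""
    names = [name for name, _ in subtasks]
    for choice in product(cores, repeat=len(names)):
        assignment = dict(zip(names, choice))
        if feasible(assignment):
            return assignment
    return None   # no assignment satisfies the constraints: proven infeasible

print(solve())
```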
Abstract:
In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic, which finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system and makes use of the Optimal Priority Assignment (OPA) algorithm, known as Audsley's algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases, on average, the number of schedulable tasks and messages in a distributed system when compared to the use of Deadline Monotonic (DM), usually favoured in other works. Afterwards, we extend these results to the assignment of Parallel/Distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
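Since DOPA hinges on Audsley's Optimal Priority Assignment, a generic Python sketch of OPA is given below: priorities are assigned from lowest to highest, and at each level any task that passes an OPA-compatible schedulability test at that level may take it. The schedulability test used here is a placeholder, not the analysis used in the paper.

```python
def audsley_opa(tasks: list[str], schedulable_at_lowest) -> list[str] | None:
    """Audsley's Optimal Priority Assignment.

    `schedulable_at_lowest(task, higher)` must return True iff `task` meets its
    deadline when it has the lowest priority among `higher + [task]`.
    Returns the tasks ordered from lowest to highest priority, or None if no
    feasible priority assignment exists.
    """
    unassigned = list(tasks)
    lowest_to_highest = []
    while unassigned:
        for task in unassigned:
            others = [t for t in unassigned if t is not task]
            if schedulable_at_lowest(task, others):
                lowest_to_highest.append(task)   # assign the current lowest level
                unassigned.remove(task)
                break
        else:
            return None   # no task can take this priority level: unschedulable
    return lowest_to_highest

# Placeholder test that accepts any task at any level; a real test would be,
# e.g., a response-time analysis with the remaining tasks at higher priority.
print(audsley_opa(["tau1", "tau2", "tau3"], lambda task, higher: True))
```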
Abstract:
This paper studies periodic gaits of multi-legged locomotion systems based on dynamic models. The purpose is to determine the system performance during walking and the best set of locomotion variables. For that purpose, the prescribed motion of the robot is completely characterized in terms of several locomotion variables, such as gait, duty factor, body height, step length, stroke pitch, foot clearance, leg link lengths, foot-hip offset, body and leg masses and cycle time. In this perspective, we formulate three performance measures of the walking robot, namely the mean absolute energy, the mean power dispersion and the mean power lost in the joint actuators, per walking distance. A set of model-based experiments reveals the influence of the locomotion variables on the proposed indices.
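As an illustration only (the abstract does not give the exact definitions, so the following is an assumption about their form), a per-distance energy index of this kind is typically written with joint torques $\tau_{ij}(t)$, joint velocities $\dot{\theta}_{ij}(t)$, cycle duration $T$ and distance $d$ travelled per cycle:

```latex
E_{av} \;=\; \frac{1}{d}\sum_{i=1}^{N}\sum_{j=1}^{k}\int_{0}^{T}\bigl|\tau_{ij}(t)\,\dot{\theta}_{ij}(t)\bigr|\,dt
```

The power-dispersion and actuator power-loss measures would follow the same normalisation per distance travelled.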