128 results for Embedded systems
Abstract:
Developing an efficient server-based real-time scheduling solution that supports dynamic task-level parallelism is now relevant even to the desktop and embedded domains, and no longer only to the high-performance computing market niche. This paper proposes a novel approach that combines the constant-bandwidth server abstraction with a work-stealing load balancing scheme which, while ensuring isolation among tasks, enables a task to be executed on more than one processor at a given time instant.
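The abstract names work-stealing as the load-balancing scheme; a minimal sketch of that idea, with illustrative names (the paper's actual server-based scheduler is not shown in the abstract), might look like this:

```python
import collections
import random

class WorkStealingServer:
    """One server per processor; each keeps a deque of ready jobs."""
    def __init__(self, ident):
        self.ident = ident
        self.deque = collections.deque()

    def push(self, job):
        self.deque.append(job)

    def pop(self):
        # The owner takes from the bottom (LIFO) to preserve locality.
        return self.deque.pop() if self.deque else None

    def steal(self):
        # Thieves take from the top (FIFO), minimizing contention with the owner.
        return self.deque.popleft() if self.deque else None

def next_job(servers, me):
    """Run a local job if available; otherwise steal from a random victim,
    which is how a task's parallel jobs end up on more than one processor."""
    job = servers[me].pop()
    if job is not None:
        return job
    victims = [s for i, s in enumerate(servers) if i != me and s.deque]
    return victims and random.choice(victims).steal() or None
```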
Abstract:
Learning management systems are routinely used for presenting, solving and grading exercises with large classes. However, teachers are constrained to use questions with pre-defined answers, such as multiple-choice, to automatically correct the exercises of their students. Complex exercises cannot be evaluated automatically by the LMS and require the coordination of a set of heterogeneous systems. For instance, programming exercises require a specialized exercise resolution environment and automatic evaluation features, each provided by a different type of system. In this paper, the authors discuss an approach for the coordination of a network of eLearning systems supporting the resolution of exercises. The proposed approach is based on a pivot component embedded in the LMS that has two main roles: 1) providing an exercise resolution environment, and 2) coordinating communication between the LMS and other systems, exposing their functions as web services. The integration of the pivot component in the LMS relies on Learning Tools Interoperability (LTI). This paper presents an architecture to coordinate a network of eLearning systems and validates the proposed approach by creating such a network integrated with LMSs from two different vendors.
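A minimal sketch of the pivot's coordination role, with hypothetical service names (the paper's actual LTI integration and web-service interfaces are not detailed in the abstract):

```python
class Pivot:
    """Embedded in the LMS: offers a resolution environment and routes calls
    to external systems exposed as web services (names are illustrative)."""
    def __init__(self):
        self.services = {}  # role -> callable standing in for a web-service client

    def register(self, role, client):
        self.services[role] = client

    def submit(self, exercise_id, attempt):
        evaluator = self.services["evaluator"]    # e.g. an automatic judge
        report = evaluator(exercise_id, attempt)  # remote evaluation call
        gradebook = self.services["gradebook"]    # grade passback to the LMS
        gradebook(exercise_id, report["grade"])
        return report

# Usage with stub services standing in for remote systems:
pivot = Pivot()
pivot.register("evaluator", lambda ex, code: {"grade": 0.8, "feedback": "ok"})
pivot.register("gradebook", lambda ex, grade: None)
print(pivot.submit("fib-1", "def fib(n): ..."))
```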
Abstract:
The recent technological advancements and market trends are causing an interesting phenomenon towards the convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being required by markets needing huge amounts of information to be processed within a bounded amount of time. On the other side, EC systems are increasingly concerned with providing higher performance in real-time, challenging the performance capabilities of current architectures. The advent of next-generation many-core embedded platforms has the chance of intercepting this converging need for predictable high performance, allowing HPC and EC applications to be executed on efficient and powerful heterogeneous architectures integrating general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this important challenge by merging leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed by proposing an integrated framework for executing workload-intensive applications with real-time requirements on top of next-generation commercial-off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements. The main sources of indeterminism will be identified, and efficient mapping and scheduling algorithms will be proposed, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.
Abstract:
Learning Management Systems (LMS) are used all over Higher Education Institutions (HEI), and the need to know and understand their adoption and usage arises. However, there is a lack of information about how LMSs are being used, which are the most widely adopted, whether there is a country-level adoption standard, and which countries use LMSs the most. A research team is developing a project that tries to fill this gap and provide the needed answers. With this purpose, in a first phase, a survey was carried out. The results of this survey are presented in this paper. Another purpose of this paper is to disseminate the ongoing project.
Abstract:
Business Intelligence (BI) is an emergent area of the Decision Support Systems (DSS) discipline. Over the last years, the evolution in this area has been considerable. Similarly, the Data Mining (DM) field has seen huge growth and consolidation in recent years. DM is being used successfully in BI systems, but a true integration of DM with BI is still lacking; as a result, some BI systems fail to use DM effectively. An architecture intended to lead to an effective usage of DM in BI is presented.
Abstract:
7th Mediterranean Conference on Information Systems, MCIS 2012, Guimaraes, Portugal, September 8-10, 2012, Proceedings Series: Lecture Notes in Business Information Processing, Vol. 129
Abstract:
We are working on the confluence of knowledge management, organizational memory and emergent knowledge through the lens of complex adaptive systems. To be fundamentally sustainable, organizations need to adaptively manage the ambidexterity of day-to-day work and innovation. An organization is an entity of a systemic nature, composed of groups of people who interact to achieve common objectives, making it necessary to capture, store and share the knowledge of these interactions within the organization; this knowledge can be generated at the intra-organizational or inter-organizational level. Organizations maintain an organizational memory of knowledge supported by information technology and systems. Each organization, especially in times of uncertainty and radical change, needs timely and appropriately sized knowledge, both tacit and explicit, to meet the demands of its environment. This sizing is a learning process that emerges from the interaction between tacit and explicit knowledge, and we frame it within a Complex Adaptive Systems approach. Using complex adaptive systems to build this emerging interdependent relationship will produce emergent knowledge that improves the organization's unique development.
Abstract:
Effective legislation and standards for the coordination procedures between consumers, producers and the system operator support the advances in the technologies that lead to smart distribution systems. In the short-term (ST) maintenance scheduling procedure, the energy producers in a distribution system have access to the long-term (LT) outage plan that is released by the distribution system operator (DSO). The impact of this additional information on the decision-making procedure of producers in ST maintenance scheduling is studied in this paper. The final ST maintenance plan requires the approval of the DSO, which has the responsibility to secure network reliability and quality, and the other players have to follow the finalized schedule. Maintenance scheduling in the producers' layer and the coordination procedure between them and the DSO are modelled in this paper. The proposed method is applied to a 33-bus distribution system.
Abstract:
Scheduling is a critical function that is present throughout many industries and applications. A great need exists for developing scheduling approaches that can be applied to a number of different scheduling problems, with significant impact on the performance of business organizations. A challenge is emerging in the design of scheduling support systems for manufacturing environments, where dynamic adaptation and optimization become increasingly important. In this paper, we describe a Self-Optimizing Mechanism for Scheduling Systems through Nature-Inspired Optimization Techniques (NIT).
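The abstract does not say which nature-inspired technique is used; as a purely illustrative stand-in, a simulated-annealing step over job sequences (all numbers made up) could look like this:

```python
import math
import random

def total_flow_time(order, durations):
    """Sum of completion times on a single machine: a classic scheduling objective."""
    t = total = 0
    for job in order:
        t += durations[job]
        total += t
    return total

def self_optimize(durations, iters=5000, temp=10.0, cooling=0.999):
    """Simulated annealing over job permutations, one of several techniques
    that a nature-inspired self-optimizing mechanism might employ."""
    order = list(range(len(durations)))
    random.shuffle(order)
    cost = total_flow_time(order, durations)
    best, best_cost = order[:], cost
    for _ in range(iters):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]        # neighbour: swap two jobs
        new_cost = total_flow_time(order, durations)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]    # revert the swap
        temp *= cooling
    return best, best_cost

print(self_optimize([4, 2, 7, 1, 3]))  # optimum orders shortest jobs first
```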
Abstract:
This abstract presents an energy management system included in a SCADA system of an intelligent home. The system controls the home energy resources according to the players' definitions (electricity consumption and comfort levels), the real-time variation of electricity prices, and the demand response (DR) events proposed by the aggregators.
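A toy decision rule of the kind such a system might apply (all names and thresholds are illustrative; the paper's SCADA logic is not shown in the abstract):

```python
def control_loads(loads, price, min_comfort, dr_event, price_cap=0.20):
    """Shed flexible loads when the real-time price exceeds the player's cap
    or a DR event is active, keeping loads whose comfort weight is at or
    above the player's minimum comfort level."""
    schedule = {}
    for name, load in loads.items():
        protected = load["comfort_weight"] >= min_comfort
        shed = (price > price_cap or dr_event) and not protected
        schedule[name] = 0.0 if shed else load["power_kw"]
    return schedule

loads = {"hvac":   {"power_kw": 2.0, "comfort_weight": 0.9},
         "washer": {"power_kw": 0.5, "comfort_weight": 0.2}}
print(control_loads(loads, price=0.25, min_comfort=0.5, dr_event=False))
# -> {'hvac': 2.0, 'washer': 0.0}: the high price sheds the low-comfort load
```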
Abstract:
Distribution systems are the first to experience the benefits of smart grids. The smart grid concept impacts the internal legislation and standards in grid-connected and isolated distribution systems. Demand-side management, the main feature of smart grids, acquires a clear meaning in low-voltage distribution systems. In these networks, various coordination procedures are required between domestic, commercial and industrial consumers, producers and the system operator. Obviously, a technical basis for bidirectional communication is the prerequisite for developing such a coordination procedure. The main coordination is required when the operator tries to dispatch the producers according to their own preferences without neglecting its inherent responsibility. Maintenance decisions are first determined by generating companies, and then the operator has to check and possibly modify them for final approval. In this paper, generation scheduling from the viewpoint of a distribution system operator (DSO) is formulated. The traditional task of the DSO is securing network reliability and quality. The effectiveness of the proposed method is assessed by applying it to 6-bus and 9-bus distribution systems.
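The propose-check-modify loop between producers and the DSO can be sketched as follows (a toy under simplifying assumptions; the paper formulates this as an optimization model, which is not reproduced here):

```python
class Producer:
    """Illustrative producer that proposes its earliest preferred outage week."""
    def __init__(self, name, preferred_weeks):
        self.name, self.preferred = name, preferred_weeks

    def propose(self, avoid=frozenset()):
        return next(w for w in self.preferred if w not in avoid)

def dso_check(schedules):
    """Toy reliability rule: at most one unit on maintenance per week."""
    rejected = {name: set() for name in schedules}
    taken = {}
    for name, week in sorted(schedules.items()):
        if week in taken:
            rejected[name].add(week)             # the later proposer must move
        else:
            taken[week] = name
    return rejected

def coordinate(producers, check, max_rounds=10):
    """Producers propose; the DSO refuses conflicting weeks; repeat until approval."""
    schedules = {p.name: p.propose() for p in producers}
    for _ in range(max_rounds):
        rejected = check(schedules)
        if not any(rejected.values()):
            return schedules                     # final DSO-approved plan
        for p in producers:
            if rejected[p.name]:
                schedules[p.name] = p.propose(avoid=rejected[p.name])
    return schedules

producers = [Producer("G1", [3, 4, 5]), Producer("G2", [3, 5, 6])]
print(coordinate(producers, dso_check))          # -> {'G1': 3, 'G2': 5}
```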
Abstract:
This paper proposes a new methodology to reduce the probability of occurrence of states that cause load curtailment, while minimizing the costs involved in achieving that reduction. The methodology is supported by a hybrid method based on Fuzzy Sets and Monte Carlo Simulation to capture both the randomness and the fuzziness of component outage parameters of the transmission power system. The novelty of this research work consists in proposing two fundamental approaches: 1) a global steady approach, which builds the model of a faulted transmission power system aiming at minimizing the unavailability corresponding to each faulted component; this results in the minimal global investment cost for the faulted components in a sample of system states of the transmission network; 2) a dynamic iterative approach, which checks the effect of each individual investment on the transmission network. A case study using the IEEE 24-bus Reliability Test System (RTS-1996) is presented to illustrate in detail the application of the proposed methodology.
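One simple way to combine fuzziness with Monte Carlo sampling, sketched here with triangular fuzzy outage rates and made-up numbers (the paper's hybrid method is more elaborate):

```python
import random

def sample_fuzzy_rate(low, mode, high):
    """Draw an outage rate from a triangular fuzzy number via a random
    alpha-cut, so each trial also samples the fuzziness of the parameter."""
    alpha = random.random()
    lo = low + alpha * (mode - low)
    hi = high - alpha * (high - mode)
    return random.uniform(lo, hi)

def curtailment_probability(components, trials=100_000):
    """Estimate the probability of a system state in which some component
    whose outage curtails load (flagged per component) is actually out."""
    hits = 0
    for _ in range(trials):
        for low, mode, high, causes_curtailment in components:
            if causes_curtailment and random.random() < sample_fuzzy_rate(low, mode, high):
                hits += 1
                break
    return hits / trials

# (low, mode, high) fuzzy unavailability and whether the outage curtails load:
components = [(0.01, 0.02, 0.04, True), (0.02, 0.03, 0.05, False)]
print(curtailment_probability(components))
```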
Abstract:
In this paper we present VERITAS, a tool that focuses on knowledge maintenance, one of the most important processes during the development of Knowledge-Based Systems (KBS). The verification and validation (V&V) process is part of this wider process, denominated knowledge maintenance, in which an enterprise systematically gathers, organizes, shares, and analyzes knowledge to accomplish its goals and mission. The V&V process states whether the software requirements specifications have been correctly and completely fulfilled. The methodologies proposed in software engineering have shown to be inadequate for KBS validation and verification, since KBS present some particular characteristics. VERITAS is an automatic tool developed for KBS verification which is able to detect a large number of knowledge anomalies. It addresses many relevant aspects considered in real applications, such as the use of rule-triggering selection mechanisms and temporal reasoning.
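To make "knowledge anomalies" concrete, here is a minimal detector for one classic anomaly, circular rule chains; this is a generic illustration, not VERITAS's actual algorithm:

```python
def find_circular_rules(rules):
    """Detect circular rule chains: rules are (premises, conclusion) pairs,
    and a cycle means a fact can be used, directly or indirectly, to derive
    itself, which makes the rule base anomalous."""
    graph = {}                                   # premise -> set of conclusions
    for premises, conclusion in rules:
        for p in premises:
            graph.setdefault(p, set()).add(conclusion)

    def in_cycle(fact):
        stack, seen = [fact], set()
        while stack:
            for nxt in graph.get(stack.pop(), ()):
                if nxt == fact:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return [rule for rule in rules if in_cycle(rule[1])]

rules = [({"a"}, "b"), ({"b"}, "c"), ({"c"}, "a")]   # a -> b -> c -> a
print(find_circular_rules(rules))                    # flags all three rules
```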
Abstract:
Electricity markets are complex environments, involving a large number of different entities playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator designed to model market players and simulate their operation in the market. Market players are entities with specific characteristics and objectives, making their decisions and interacting with other players. MASCEM provides several dynamic strategies for agents' behaviour. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains out of the market. This method uses an auxiliary forecasting tool, e.g. an Artificial Neural Network, to predict electricity market prices, and analyses its forecasting error patterns. By recognizing the occurrence of such patterns, the method predicts the expected error for the next forecast and uses it to adapt the actual forecast. The goal is to bring the forecast closer to the real value, reducing the forecasting error.
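The correction step is simple to state; a minimal sketch, in which a moving average of recent errors stands in for the paper's recognition of error patterns (all numbers illustrative):

```python
def adapted_forecast(raw_forecast, past_errors, window=3):
    """Predict the next forecasting error from recent errors and add it back
    to correct the raw forecast, approximating it to the real value."""
    if len(past_errors) < window:
        return raw_forecast
    expected_error = sum(past_errors[-window:]) / window
    return raw_forecast + expected_error

# errors = real price minus the forecast of, e.g., an artificial neural network
errors = [1.2, 0.8, 1.1, 0.9]
print(adapted_forecast(50.0, errors))  # 50 plus the mean of the last 3 errors
```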
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for being solved by means of a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
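To show the shape of the decision problem, here is an exhaustive toy version: choose one rating (possibly none) per candidate node to maximize loss savings minus capacitor cost. The paper instead solves a linearized mixed-integer linear program; all numbers below are made up:

```python
from itertools import product

def best_compensation(nodes, ratings, savings, cost_per_kvar):
    """Enumerate every assignment of a capacitor rating (or 0 = none) to each
    candidate node and keep the one with the highest net annual savings."""
    best_value, best_choice = float("-inf"), None
    for choice in product([0] + ratings, repeat=len(nodes)):
        value = sum(savings[n][q] - cost_per_kvar * q
                    for n, q in zip(nodes, choice) if q)
        if value > best_value:
            best_value, best_choice = value, dict(zip(nodes, choice))
    return best_value, best_choice

nodes = ["bus7", "bus12"]
ratings = [300, 600]                         # candidate sizes in kvar
savings = {"bus7":  {300: 900, 600: 1300},   # $/yr from reduced losses (made up)
           "bus12": {300: 400, 600: 500}}
print(best_compensation(nodes, ratings, savings, cost_per_kvar=1.0))
# -> (800.0, {'bus7': 600, 'bus12': 300})
```

Exhaustive search is exponential in the number of nodes, which is exactly why the paper's linearized MILP formulation matters for large networks.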