935 results for Shared Service Center
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs through a shared band-limited digital communication network. However, the use of a shared communication network, in contrast to using several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network to be used in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good tradeoff between accuracy in the measurement of the input signals and the delay to actuation, both important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms and prove that it performs better because it takes into account the changes of the input signal over time while the approximate interpolation is being computed.
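The accuracy/delay tradeoff mentioned above can be illustrated generically. The sketch below is a minimal, assumed 1-D piecewise-linear scheme in Python (not the interpolation algorithm investigated in the paper): it keeps only enough sensor samples to stay within an error tolerance, so a looser tolerance means fewer values to gather before actuation, at the cost of accuracy.

# Generic illustration of approximating sensed data within a tolerance
# (assumed 1-D piecewise-linear scheme; not the algorithm studied in the paper).

def approximate_samples(positions, values, tol):
    """Greedily keep a subset of samples so that linear interpolation
    between kept samples deviates from every dropped sample by at most tol."""
    kept = [0]
    for i in range(1, len(positions)):
        a = kept[-1]
        within_tol = True
        for j in range(a + 1, i):
            # Value predicted at positions[j] by the segment from a to i.
            frac = (positions[j] - positions[a]) / (positions[i] - positions[a])
            predicted = values[a] + frac * (values[i] - values[a])
            if abs(predicted - values[j]) > tol:
                within_tol = False
                break
        if not within_tol:
            kept.append(i - 1)
    if kept[-1] != len(positions) - 1:
        kept.append(len(positions) - 1)
    return kept  # indices of the samples worth transmitting

# Hypothetical readings along a line of sensors: a looser tolerance keeps
# fewer points, trading accuracy for a shorter gathering delay.
positions = [0, 1, 2, 3, 4, 5]
values = [0.0, 0.9, 2.1, 2.9, 4.2, 5.0]
print(approximate_samples(positions, values, tol=0.3))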
Abstract:
Master's degree in Management and Entrepreneurship
Abstract:
We present a 12(1 + 3R/(4m))-competitive algorithm for scheduling implicit-deadline sporadic tasks on a platform comprising m processors, where a task may request one of R shared resources.
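A quick way to read the bound is to plug in concrete values; the Python sketch below simply evaluates the expression 12(1 + 3R/(4m)) from the abstract for hypothetical values of m and R.

# Evaluate the competitive-ratio bound quoted above for hypothetical m and R.
def competitive_ratio_bound(m: int, r: int) -> float:
    """Bound 12(1 + 3R/(4m)) for m processors and R shared resources."""
    return 12 * (1 + 3 * r / (4 * m))

for m, r in [(4, 1), (8, 2), (16, 4)]:   # example platforms (assumed values)
    print(f"m={m}, R={r}: bound = {competitive_ratio_bound(m, r):.2f}")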
Abstract:
This paper focuses on the scheduling of tasks with hard and soft real-time constraints in open and dynamic real-time systems. It starts by presenting a capacity sharing and stealing (CSS) strategy that supports the coexistence of guaranteed and non-guaranteed bandwidth servers to efficiently handle soft tasks’ overloads by making additional capacity available from two sources: (i) reclaiming unused reserved capacity when jobs complete in less than their budgeted execution time, and (ii) stealing reserved capacity from inactive non-isolated servers used to schedule best-effort jobs. CSS is then combined with the concept of bandwidth inheritance to efficiently exchange reserved bandwidth among sets of inter-dependent tasks which share resources and exhibit precedence constraints, assuming no previous information on critical sections and computation times is available. The proposed Capacity Exchange Protocol (CXP) achieves better performance and lower overhead than other available solutions and introduces a novel approach to integrating precedence constraints among tasks of open real-time systems.
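The two sources of additional capacity can be sketched as follows; the Server fields and the selection order in the Python code are simplified assumptions for illustration, not the actual CSS accounting rules.

# Minimal sketch of the two capacity sources described above (simplified
# assumptions; the real CSS rules for budgets and deadlines are richer).
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    budget: float    # reserved capacity still available in the current period
    isolated: bool   # isolated servers may not be stolen from
    active: bool     # whether the server currently has pending jobs

def reclaim(server: Server, budgeted: float, actual_execution: float) -> float:
    """Source (i): reclaim the capacity left over when a job finishes early."""
    leftover = max(0.0, budgeted - actual_execution)
    server.budget += leftover
    return leftover

def steal(servers: list[Server], needed: float) -> float:
    """Source (ii): steal capacity from inactive, non-isolated servers."""
    stolen = 0.0
    for s in servers:
        if stolen >= needed:
            break
        if not s.active and not s.isolated and s.budget > 0:
            take = min(s.budget, needed - stolen)
            s.budget -= take
            stolen += take
    return stolen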
Abstract:
Consider a communication medium shared among a set of computer nodes; these nodes issue messages that must finish their transmission before their respective deadlines. TDMA/SS is a protocol that solves this problem; it is a specific type of Time Division Multiple Access (TDMA) in which a computer node is allowed to skip its time slot, and the skipped slot can then be used by another computer node. We present an algorithm that computes exact queuing times for TDMA/SS in conjunction with Rate-Monotonic (RM) or Earliest-Deadline-First (EDF).
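As a rough illustration of slot skipping (not the paper's exact queuing-time analysis under RM or EDF), the Python sketch below simulates one TDMA cycle in which a node with an empty queue yields its slot to the next node that has a pending message; the node names and queue contents are hypothetical.

# Rough illustration of TDMA/SS slot skipping (hypothetical nodes and queues;
# not the exact queuing-time computation presented in the paper).
from collections import deque

def run_cycle(queues: dict[str, deque], slot_order: list[str]) -> list[str]:
    """Simulate one TDMA cycle: a node with no pending message skips its slot,
    and the slot is reused by the next node that does have a message."""
    log = []
    for owner in slot_order:
        # The slot owner transmits if it has a pending message...
        sender = owner if queues[owner] else None
        if sender is None:
            # ...otherwise the slot is offered to the other nodes in order.
            sender = next((n for n in slot_order if queues[n]), None)
        if sender is not None:
            log.append(f"slot of {owner}: {queues[sender].popleft()} from {sender}")
        else:
            log.append(f"slot of {owner}: idle")
    return log

queues = {"A": deque(["a1", "a2"]), "B": deque(), "C": deque(["c1"])}
for line in run_cycle(queues, ["A", "B", "C"]):
    print(line)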
Abstract:
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming no precise information on critical sections and computation times is available. The concept of bandwidth inheritance is combined with a capacity sharing and stealing mechanism to efficiently exchange bandwidth among tasks, minimising the degree of deviation from the ideal system’s behaviour caused by inter-application blocking. The proposed Capacity Exchange Protocol (CXP) is simpler than other solutions for sharing resources in open real-time systems, since it does not attempt to return the inherited capacity to blocked servers in exactly the same amount. This loss of optimality is worth the reduced complexity: the protocol’s behaviour nevertheless tends to be fair, and it outperforms previous solutions in highly dynamic scenarios, as demonstrated by extensive simulations. A formal analysis of CXP is presented, and the conditions under which it is possible to guarantee hard real-time tasks are discussed.
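The exchange on blocking can be sketched as follows; the data structures and the donation policy shown are simplified assumptions for illustration and do not reproduce the formal CXP rules analysed in the paper.

# Simplified illustration of exchanging reserved capacity on blocking (assumed
# data structures; the formal CXP rules and their analysis are in the paper).
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    budget: float   # currently available reserved capacity

def donate_on_blocking(blocked: Server, lock_holder: Server, amount: float) -> float:
    """When `blocked` waits on a resource held by `lock_holder`, part of the
    blocked server's capacity is handed over so the lock holder can progress.
    In the spirit of CXP, no attempt is made to later return exactly the same
    amount, which keeps the bookkeeping simple."""
    donated = min(amount, blocked.budget)
    blocked.budget -= donated
    lock_holder.budget += donated
    return donated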
Abstract:
Master's degree in Education and Social Intervention - Community Development and Adult Education
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications, which has motivated many initiatives to develop scientific workflow tools. However, existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the specification of workflow tasks, decentralising the control of workflow activities, and allowing tasks to run autonomously in distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iterations, in which activities execute for millions of iterations over long periods of time, and without support for dynamic workflow reconfiguration after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, in which the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g. on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we report experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and to the Amazon Elastic Compute Cloud (EC2).
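In AWARD the Tasks are Java classes, but the data-driven coordination through a shared tuple space can be illustrated with a short language-agnostic sketch; the Python tuple space below, its put/take operations, and the tuple format are assumed simplifications, not the AWARD implementation.

# Minimal illustration of coordinating autonomous activities through a shared
# tuple space (a Python simplification; AWARD tasks are Java classes and its
# tuple space also supports reconfiguration and monitoring).
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def put(self, tup):
        """An activity publishes a result tuple for its successors."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, match):
        """Block until a tuple satisfying `match` is available, then remove it."""
        with self._cond:
            while True:
                for t in self._tuples:
                    if match(t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

# Hypothetical usage: activity "A" produces data that activity "B" consumes.
space = TupleSpace()
space.put(("A", 0, {"value": 42}))                     # (activity, iteration, data)
tup = space.take(lambda t: t[0] == "A" and t[1] == 0)  # B waits for A's iteration 0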
Abstract:
OBJECTIVE To analyze the dynamics of operation of the Bipartite Committees in health care in the Brazilian states. METHODS The research included visits to 24 states, direct observation, document analysis, and semi-structured interviews with state and local leaders. The characterization of each committee was performed between 2007 and 2010, and four dimensions were considered: (i) level of institutionality, classified as advanced, intermediate, or incipient; (ii) agenda of intergovernmental negotiations, classified as diversified/restricted, adapted/not adapted to the reality of each state, and shared/unshared between the state and municipalities; (iii) political processes, considering the character and scope of intergovernmental relations; and (iv) capacity of operation, assessed as high, moderate, or low. RESULTS Ten committees had an advanced level of institutionality. The agenda of the negotiations was diversified in all states, and most agendas were adapted to the state reality. However, one-third of the committees showed power inequalities between the government levels. Cooperative and interactive intergovernmental relations predominated in 54.0% of the states. The level of institutionality, scope of negotiations, and political processes influenced the Bipartite Committees’ ability to formulate policies and coordinate health care at the federal level. Bipartite Committees with a high capacity of operation predominated in the South and Southeast regions, while those with a low capacity of operation predominated in the North and Northeast. CONCLUSIONS The regional differences in operation among Bipartite Interagency Committees suggest the influence of historical-structural variables (socioeconomic development, geographic barriers, characteristics of the health care system) on their capacity for intergovernmental health care management. However, structural problems can be overcome in some states through institutional and political changes. Federal investments, differentiated by regions and states, are critical to overcoming the structural inequalities that affect political institutions. The operation of Bipartite Committees is a step forward; however, strengthening their ability to coordinate health care is crucial for the regional organization of the health care system in the Brazilian states.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Electrical Engineering
Abstract:
OBJECTIVE To analyze whether the level of institutional and matrix support is associated with better certification of primary healthcare teams. METHODS In this cross-sectional study, we evaluated two kinds of primary healthcare support: 14,489 teams received institutional support and 14,306 teams received matrix support. Logistic regression models were applied. In the institutional support model, the independent variable was the level of support (calculated as the sum of supporting activities for both modalities). In the matrix support model, in turn, the independent variables were the supporting activities themselves. The multivariate analysis considered variables with p < 0.20. The model was adjusted by the Hosmer-Lemeshow test. RESULTS The teams had institutional and matrix supporting activities (84.0% and 85.0%, respectively), with 55.0% of them performing between six and eight activities. For institutional support, we observed chances of 1.96 and 3.77 for teams with medium and high levels of support, respectively, of having very good or good certification. For matrix support, the chances of very good or good certification were 1.79 and 3.29, respectively. Regarding the association between institutional support activities and certification, a very good or good certification was positively associated with self-assessment (OR = 1.95), permanent education (OR = 1.43), shared evaluation (OR = 1.40), and supervision and evaluation of indicators (OR = 1.37). As for matrix support, a very good or good certification was positively associated with permanent education (OR = 1.50), interventions in the territory (OR = 1.30), and discussion of the work processes (OR = 1.23). CONCLUSIONS In Brazil, supporting activities are being incorporated into primary healthcare, and there is an association between the level of support, both matrix and institutional, and the certification result.
Abstract:
Weblabs are spreading their influence in Science and Engineering (S&E) courses, providing a way to remotely conduct real experiments. Typically, they are implemented through different architectures and infrastructures supported by Instruments and Modules (I&Ms) that can be remotely controlled and observed. Besides the lack of a standard solution for implementing weblabs, their reconfiguration is limited to a setup procedure that interconnects a set of preselected I&Ms into an Experiment Under Test (EUT). Moreover, those I&Ms cannot be replicated or shared by different weblab infrastructures, since they are usually based on hardware platforms. To overcome these limitations, this paper proposes a standard solution that uses I&Ms embedded into Field-Programmable Gate Array (FPGA) devices. An architecture based on the IEEE1451.0 Std. is presented, supported by an FPGA-based weblab infrastructure that can be remotely reconfigured with I&Ms, described through standard Hardware Description Language (HDL) files, using a Reconfiguration Tool (RecTool).
Abstract:
The integration of practical sessions into engineering curricula is of crucial importance owing to their significant role in understanding engineering concepts and scientific phenomena. However, the lack of practical sessions, due to the high costs of the equipment and the unavailability of instructors, has caused a significant decline in experimentation in engineering education. Remote laboratories have tackled these issues by providing online reusable and shared workbenches unconstrained by geographical or time considerations. They have therefore proliferated among universities and been integrated into engineering curricula over the last decade. This contribution compiles diverse experiences based on the deployment of the remote laboratory Virtual Instrument Systems in Reality (VISIR) in the practical sessions of undergraduate engineering degrees at various universities within the VISIR community. It aims to show the impact of its usage on engineering education with regard to the assessments of both students and teachers. In addition, the paper addresses the next challenges and future work carried out at several universities within the VISIR community.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
PLoS ONE - www.plosone.org, V.9, e886