915 results for Service system
Abstract:
Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most prominent Service Oriented Computing technology is that of Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing the same issue from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL, which captures the essentials of Service Oriented Computing. From the definition of SOCK we are able to define a general model for dealing with Service Oriented Computing where services and systems of services are related to finite state automata and process algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. Such a language is equipped with a formal semantics and forms, together with a subset of the SOCK calculus, the bipolar framework. Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus and part of the bipolar framework we intend to promote.
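As a rough illustration of the conformance idea that links the two poles, the following Python sketch projects a toy choreography (a global sequence of interactions) onto a single role and checks an orchestrator's local trace against that projection. This is an invented simplification for intuition only, not the SOCK/JOLIE formalism, whose conformance relation is defined formally in the thesis.

```python
# Toy sketch of choreography projection and trace conformance.
# An illustrative simplification, not the SOCK/JOLIE semantics.

# A choreography: global sequence of interactions (sender, receiver, operation).
choreography = [
    ("Client", "Store", "order"),
    ("Store", "Bank", "charge"),
    ("Bank", "Store", "receipt"),
    ("Store", "Client", "confirm"),
]

def project(chor, role):
    """Project the global view onto one role's local send/receive trace."""
    local = []
    for sender, receiver, op in chor:
        if sender == role:
            local.append(("send", op, receiver))
        elif receiver == role:
            local.append(("recv", op, sender))
    return local

def conforms(orchestration, chor, role):
    """A local orchestration conforms if it matches the projected behaviour."""
    return orchestration == project(chor, role)

store_orchestration = [
    ("recv", "order", "Client"),
    ("send", "charge", "Bank"),
    ("recv", "receipt", "Bank"),
    ("send", "confirm", "Client"),
]
print(conforms(store_orchestration, choreography, "Store"))  # True
```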
Abstract:
In this report, an innovative satellite-based monitoring approach was designed and applied to the Iraqi Marshlands to survey the extent and distribution of marshland re-flooding and to assess the development of wetland vegetation cover. The study, conducted in collaboration with MEEO Srl, makes use of images collected by the (A)ATSR sensor onboard the ESA ENVISAT satellite to acquire data at multiple temporal scales, and a multi-temporal analysis was adopted to observe the evolution of marshland re-flooding. The methodology uses a multi-temporal pixel-based approach built on classification maps produced by the classification tool SOIL MAPPER®. The catalogue of classification maps is available as a web service through the Service Support Environment Portal (SSE, supported by ESA). The inundation of the Iraqi marshlands, which has been continuous since April 2003, is characterized by a high degree of variability, ad-hoc interventions and uncertainty. Given the security constraints and vastness of the Iraqi marshlands, as well as cost-effectiveness considerations, satellite remote sensing was the only viable tool to observe the changes taking place on a continuous basis. The proposed system (ALCS, the AATSR Land Classification System) avoids the direct use of the (A)ATSR images and foresees the application of land use/land cover change (LULCC) evolution models directly to the 'stock' of classified maps. This approach is made possible by the availability of a 13-year classified image database, conceived and implemented in the CARD project (http://earth.esa.int/rtd/Projects/#CARD). The approach presented here evolves toward an innovative, efficient and fast method to exploit the potential of multi-temporal LULCC analysis of (A)ATSR images. The two main objectives of this work are both assessments: the first is to assess the modelling ability of the ALCS web application using AATSR images classified with SOIL MAPPER®, and the second is to evaluate the magnitude, character and extent of wetland rehabilitation.
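The multi-temporal pixel-based idea can be illustrated with a small Python/NumPy sketch: given two classification maps of the same area at different dates, per-pixel class transitions (e.g., bare soil to water, indicating re-flooding) are counted directly on the 'stock' of classified maps. The class codes and maps below are invented; the actual system works on SOIL MAPPER® classifications of (A)ATSR imagery.

```python
# Toy sketch of pixel-based change detection between two classification maps.
# Class codes and map values are invented for illustration.
import numpy as np

BARE_SOIL, VEGETATION, WATER = 1, 2, 3

map_2003 = np.array([[1, 1, 2],
                     [1, 2, 2],
                     [1, 1, 1]])
map_2005 = np.array([[3, 3, 2],
                     [3, 2, 2],
                     [1, 3, 1]])

# Pixels that changed from bare soil to water indicate re-flooding.
reflooded = (map_2003 == BARE_SOIL) & (map_2005 == WATER)
print("re-flooded pixels:", int(reflooded.sum()))
print("re-flooded fraction: %.2f" % reflooded.mean())

# Full class-transition matrix: entry [i, j] counts pixels moving
# from class i+1 at the first date to class j+1 at the second.
n_classes = 3
transitions = np.zeros((n_classes, n_classes), dtype=int)
for i in range(n_classes):
    for j in range(n_classes):
        transitions[i, j] = np.sum((map_2003 == i + 1) & (map_2005 == j + 1))
print(transitions)
```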
Abstract:
The European External Action Service (EEAS or Service) is one of the most significant and most debated innovations introduced by the Lisbon Treaty. This analysis intends to explain the anomalous design of the EEAS in light of its function, which consists in the promotion of external action coherence. Coherence is a principle of the EU legal system, which requires synergy in the external actions of the Union and its Members. It can be enforced only through the coordination of European policy-makers' initiatives, by bridging the gap between the 'Communitarian' and intergovernmental approaches. This is the 'Union method' envisaged by A. Merkel: "coordinated action in a spirit of solidarity - each of us in the area for which we are responsible but all working towards the same goal". The EEAS embodies the 'Union method', since it is institutionally linked to both Union organs and Member States. It is also capable of enhancing synergy in policy management and promoting unity in international representation, since its field of action is delimited not by an abstract concern for institutional balance but by a pragmatic assessment of the need for coordination in each sector. The challenge is now to make sure that this pragmatic approach is applied with respect to all the activities of the Service, in order to reinforce its effectiveness. The coordination brought by the EEAS is in fact the only means through which a European foreign policy can come into being: the choice is not between the Community method and the intergovernmental method, but between a coordinated position and nothing at all.
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
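To give an intuition of the weaker-guarantee idea behind partial fault tolerance, the following Python sketch replicates only the stream operators whose loss would most degrade result quality, subject to a resource budget. All weights, costs and the failure model are invented for illustration; this is not the LAAR algorithm itself.

```python
# Toy sketch of partial fault tolerance in a stream topology: replicate only
# the operators whose loss would hurt result quality the most, trading
# guaranteed quality against resource cost. Values are invented.
FAILURE_PROB = 0.05  # assumed probability that a replica's host fails

# (operator name, quality weight, cost of one extra replica)
operators = [("parse", 0.1, 1.0), ("aggregate", 0.6, 2.0), ("alert", 0.3, 1.5)]

def expected_quality_loss(replicas):
    # An operator's output is lost only if all of its replicas fail.
    return sum(w * FAILURE_PROB ** replicas[name] for name, w, _ in operators)

def plan_replication(budget):
    replicas = {name: 1 for name, _, _ in operators}
    while True:
        # Pick the replica addition with the best loss reduction per unit cost.
        best = max(
            ((w * (FAILURE_PROB ** replicas[n] - FAILURE_PROB ** (replicas[n] + 1)) / c, n, c)
             for n, w, c in operators if c <= budget),
            default=None,
        )
        if best is None:
            return replicas
        _, name, cost = best
        replicas[name] += 1
        budget -= cost

plan = plan_replication(budget=4.0)
print(plan, "expected loss: %.4f" % expected_quality_loss(plan))
```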
Abstract:
In this thesis we focus on optimization and simulation techniques applied to solve strategic, tactical and operational problems arising in the healthcare sector. First, we present three applications to the Emilia-Romagna Public Health System (SSR) developed in collaboration with the Agenzia Sanitaria e Sociale dell'Emilia-Romagna (ASSR), a regional center for innovation and improvement in health. The Agenzia launched a strategic campaign aimed at introducing Operations Research techniques as decision-making tools to support technological and organizational innovations. The three applications focus on the forecasting and funding allocation of medical specialty positions, the extension of the breast screening program, and operating theater planning. The case studies exploit the potential of combinatorial optimization, discrete event simulation and system dynamics techniques to solve resource-constrained problems arising within the Emilia-Romagna territory. We then present an application in collaboration with the Dipartimento di Epidemiologia del Lazio that focuses on allocating the population's demand for services to regional emergency departments. Finally, a simulation-optimization approach, developed in collaboration with the INESC TEC center of Porto, to evaluate matching policies for the kidney exchange problem is discussed.
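As a minimal illustration of the last application, the Python sketch below solves a toy kidney exchange instance restricted to two-way swaps by exhaustive search over disjoint swap sets. The compatibility data are invented; the actual study evaluates richer matching policies (including longer cycles and chains) via simulation-optimization.

```python
# Toy kidney exchange restricted to 2-way swaps: pair i can swap with pair j
# only if each donor is compatible with the other patient. On a small
# instance we enumerate disjoint swap sets exhaustively.
from itertools import combinations

# compatibility[i][j] == True: donor of pair i can give to patient of pair j
compatibility = {
    0: {1: True, 2: False, 3: True},
    1: {0: True, 2: True, 3: False},
    2: {0: False, 1: True, 3: True},
    3: {0: True, 1: False, 2: True},
}

swaps = [(i, j) for i, j in combinations(range(4), 2)
         if compatibility[i][j] and compatibility[j][i]]

def best_plan(swaps, used=frozenset()):
    """Return the largest set of pairwise-disjoint swaps."""
    best = []
    for k, (i, j) in enumerate(swaps):
        if i not in used and j not in used:
            plan = [(i, j)] + best_plan(swaps[k + 1:], used | {i, j})
            if len(plan) > len(best):
                best = plan
    return best

print("feasible swaps:", swaps)
print("chosen plan:", best_plan(swaps))  # maximizes transplanted pairs
```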
Abstract:
Resource management is of paramount importance in network scenarios and is a long-standing and still open issue. Unfortunately, while technology and innovation continue to evolve, our network infrastructure has been maintained in almost the same shape for decades, a phenomenon known as “Internet ossification”. Software-Defined Networking (SDN) is an emerging paradigm in computer networking that allows a logically centralized software program to control the behavior of an entire network. This is done by decoupling the network control logic from the underlying physical routers and switches that forward traffic to the selected destination. One mechanism that allows the control plane to communicate with the data plane is OpenFlow. Network operators can write high-level control programs that specify the behavior of an entire network. Moreover, the centralized control makes it possible to define more specific and complex tasks that involve many network functionalities, e.g., security, resource management and control, within a single framework. Nowadays, the explosive growth of real-time applications that require stringent Quality of Service (QoS) guarantees leads network programmers to design network protocols that deliver certain performance guarantees. This thesis exploits SDN in conjunction with OpenFlow to manage differentiated network services with high QoS. Initially, we define a QoS Management and Orchestration architecture that allows us to manage the network in a modular way. Then, we provide a seamless integration between the architecture and the standard SDN paradigm, following the separation between the control and data planes. This work is a first step towards the deployment of our proposal in the University of California, Los Angeles (UCLA) campus network with differentiated services and stringent QoS requirements. We also plan to exploit our solution to manage the handoff between different network technologies, e.g., Wi-Fi and WiMAX. Indeed, the model can be run with different parameters, depending on the communication protocol, and can provide optimal results to be implemented on the campus network.
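The control/data-plane separation underlying such an architecture can be sketched in a few lines of Python: a logically centralized controller classifies flows into QoS queues and pushes the corresponding rules to every switch. Names, match fields and queue assignments below are invented; a real deployment would install such rules via OpenFlow.

```python
# Toy model of the control/data-plane split: the controller decides,
# the switches only enqueue and forward according to installed rules.
class Switch:
    def __init__(self, name):
        self.name, self.flow_table = name, []

    def install(self, match, queue_id):
        # Data plane: store a rule mapping matched traffic to a QoS queue.
        self.flow_table.append({"match": match, "enqueue": queue_id})

class Controller:
    """Logically centralized control plane: classify flows, program switches."""
    QUEUES = {"best-effort": 0, "video": 1, "real-time": 2}  # id = priority

    def __init__(self, switches):
        self.switches = switches

    def admit(self, match, traffic_class):
        queue_id = self.QUEUES[traffic_class]
        for sw in self.switches:  # one policy, enforced network-wide
            sw.install(match, queue_id)

s1, s2 = Switch("s1"), Switch("s2")
ctl = Controller([s1, s2])
ctl.admit({"ip_proto": "udp", "udp_dst": 5004}, "real-time")
print(s1.flow_table)  # [{'match': {...}, 'enqueue': 2}]
```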
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid’s reasoning and Galileo’s experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of “replication by other scientists” in reference to computations is more commonly known as “reproducible research”. In this context, the journal “EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems” had the exciting and original idea of enabling scientists to submit, alongside the article, the computational materials (software, data, etc.) used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform independently of the chosen OS, to confirm or invalidate it, and especially to allow its reuse to produce new results. This procedure is of little help, however, without minimal methodological support. In fact, the raw data sets and the software are difficult to exploit without the logic that guided their use or production. This led us to conclude that, in addition to the data sets and the software, one further element must be provided: the workflow that relates all of them.
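A minimal Python sketch of such a workflow, with invented step names and commands: each step declares its dependencies, and an executor runs the steps in dependency order, so a third party can reproduce the chain from raw data to reported result.

```python
# Minimal sketch of an explicit, machine-readable workflow tying data and
# software together. Step names and computations are hypothetical.
workflow = {
    "clean":   {"needs": [],          "run": lambda data: [x for x in data if x is not None]},
    "analyze": {"needs": ["clean"],   "run": lambda data: sum(data) / len(data)},
    "report":  {"needs": ["analyze"], "run": lambda mean: f"mean = {mean:.2f}"},
}

def execute(workflow, raw_input):
    """Run steps in dependency order, passing each result downstream."""
    done, results = set(), {None: raw_input}
    while len(done) < len(workflow):
        for name, step in workflow.items():
            if name not in done and all(d in done for d in step["needs"]):
                upstream = step["needs"][0] if step["needs"] else None
                results[name] = step["run"](results[upstream])
                done.add(name)
    return results

print(execute(workflow, [1, 2, None, 4])["report"])  # mean = 2.33
```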
Night Vision Imaging System (NVIS) certification requirements analysis of an Airbus Helicopters H135
Abstract:
The safe operation of nighttime flight missions can be enhanced by using Night Vision Imaging System (NVIS) equipment. This has been clear to the military since the 1970s and to civil helicopter operators since the 1990s. In recent months, Italian Emergency Medical Service (EMS) operators have also begun to require Night Vision Goggles (NVG), devices that amplify the ambient light. In order to fly with this technology, helicopters have to be NVIS-approved. The author has supported a company, through a feasibility study, in quantifying the potential of undertaking the certification activity. First, the NVG and their working principles are described; then the specifications governing the process of making a helicopter NVIS-approved are analysed. The noteworthy difference between the military specifications and the civilian ones highlights non-negligible gaps in the latter. The activity of NVIS certification could be a good investment because the following targets have been achieved: reductions in the certification cost, in the operating time and in the number of non-compliances.
Abstract:
The purpose of the present manuscript is to present the advances achieved in medicine through the use of Personalized Decision Support Systems (PDSS). The models used in Decision Support Systems (DSS) are examined in combination with genome information and biomarkers to produce personalized results for each individual. The concept of personalized medicine is described in depth, and applications of PDSS to Cardiovascular Diseases (CVD) and Type-1 Diabetes Mellitus (T1DM) are analyzed. Parameters extracted from genes, biomarkers, nutrition habits, lifestyle and biological measurements feed DSSs incorporating Artificial Intelligence Modules (AIM) to provide personalized advice, medication and treatment.
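As a purely illustrative sketch (with invented thresholds and weights, in no way clinical guidance), the following Python fragment shows the general shape of such a system: heterogeneous inputs are combined into a risk score that drives a personalized recommendation.

```python
# Illustrative rule-based fragment of a personalized decision support flow.
# All thresholds, weights and advice strings are invented.
def cvd_risk_score(patient):
    score = 0.0
    score += 2.0 if patient["risk_allele"] else 0.0        # genome information
    score += 1.5 if patient["ldl_mg_dl"] > 160 else 0.0    # biomarker
    score += 1.0 if patient["smoker"] else 0.0             # lifestyle
    score += 1.0 if patient["systolic_bp"] > 140 else 0.0  # measurement
    return score

def advise(patient):
    score = cvd_risk_score(patient)
    if score >= 3.0:
        return score, "refer to specialist; review medication"
    if score >= 1.5:
        return score, "personalized diet and exercise plan"
    return score, "routine monitoring"

print(advise({"risk_allele": True, "ldl_mg_dl": 175,
              "smoker": False, "systolic_bp": 130}))
```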
Abstract:
In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serials Solutions Summon, EBSCO Discovery Service, Google Scholar and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students’ usage of these tools. Regardless of the search system, students exhibited a marked inability to effectively evaluate sources and a heavy reliance on default search settings. On the quantitative benchmarks measured by this study, the EBSCO Discovery Service tool outperformed the other search systems in almost every category. This article describes these results and makes recommendations for libraries considering these tools.
Abstract:
The new Swiss implant system SPI became available three years ago and is used in combination with fixed and removable prosthetic reconstructions. In a pilot study, the clinical procedures were evaluated and data on prosthetic complications and maintenance service were collected. 25 patients with a total of 79 SPI implants participated in the study during the period 2003-2004. 37 implants were located in the maxilla and 42 in the mandible. Two implants failed during the healing period, but no loaded implant was lost; thus, the survival rate was 97.5% (77/79). 44 implants supported a fixed prosthesis, including nine single crowns, and 33 implants were used in combination with removable partial dentures. Four implants were used with ball anchor retention, 29 with bar support. The ELEMENT implant with its low implant shoulder allows very good esthetics. Prosthetic complications and maintenance service during the first year of function were comparable with other implant systems. Since the design of the abutment screws, healing caps and screwdriver was changed, the system has become easier to use.
Abstract:
Data from 59 farms with complaints of udder health problems and insufficient quality of delivered milk, assessed by the Swiss Bovine Health Service (BHS) between 1999 and 2004, were retrospectively analysed. The data evaluated included farm characteristics such as farm size, herd size, average milk yield, milking system and housing system, deficits in the milking equipment and the milking practices, and bacteriological results of milk samples from all cows in lactation. The average size of the farms assessed by the BHS was larger than the Swiss average. Of the milking installations that were evaluated, 42 showed obvious failures which the farm managers could have noticed. Only 5 of the 57 milkers carried out their work according to the generally valid guidelines of the National Mastitis Council. More than two basic mistakes were observed in the milking practices of 36 milkers. In 51 farms, mixed infections with several problem bacteria (those present in at least 20% of the tested cows on a farm) were found. Staphylococcus aureus proved to be the most common problem germ. As the bacterium responsible for the herd problem (the sole problem bacterium detectable on a particular farm), Staphylococcus aureus was detected in 4 farms. The current study revealed that the education of farmers in milking techniques and milking practices should be improved in order to reduce the incidence of udder health problems at herd level. Staphylococcus aureus is the most important problem bacterium involved in herds with udder health problems in Switzerland, and it might be used in practice as an indicator germ for the early recognition of management problems on dairy farms.
Abstract:
The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption poses a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of both reducing the energy consumption of server systems and reducing the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC’s power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30% of the total power consumption. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC’s cost management, helping OSPs to conserve energy, manage their own electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy consumption in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services pose great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
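The price-aware dispatching idea can be illustrated with a short Python sketch that greedily routes each period's load to the cheapest sites first, subject to capacity. Sites, prices and the per-request energy figure are invented, and the dissertation's strategy additionally accounts for carbon limits and transition overheads.

```python
# Toy sketch of price-aware request dispatching across data centers.
# All numbers are invented for illustration.
ENERGY_PER_REQUEST_KWH = 0.002  # assumed server energy per request

datacenters = [  # (site, capacity in requests/period, $ per kWh)
    ("virginia", 50_000, 0.09),
    ("oregon",   40_000, 0.06),
    ("dublin",   30_000, 0.12),
]

def dispatch(total_requests):
    """Greedy price-aware split of the incoming load across sites."""
    plan, cost = {}, 0.0
    for site, capacity, price in sorted(datacenters, key=lambda d: d[2]):
        share = min(capacity, total_requests)
        plan[site] = share
        cost += share * ENERGY_PER_REQUEST_KWH * price
        total_requests -= share
    if total_requests > 0:
        raise RuntimeError("demand exceeds total capacity")
    return plan, cost

plan, cost = dispatch(70_000)
print(plan, "electricity cost: $%.2f" % cost)
```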
Abstract:
In this paper, an Insulin Infusion Advisory System (IIAS) for Type 1 diabetes patients who use insulin pumps for Continuous Subcutaneous Insulin Infusion (CSII) is presented. The purpose of the system is to estimate the appropriate insulin infusion rates. The system is based on a Non-Linear Model Predictive Controller (NMPC) which uses a hybrid model. The model comprises a Compartmental Model (CM), which simulates the absorption of glucose into the blood due to meal intakes, and a Neural Network (NN), which simulates the glucose-insulin kinetics. The NN is a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The output of the model consists of short-term glucose predictions and provides input to the NMPC, in order for the latter to estimate the optimum insulin infusion rates. For the development and the evaluation of the IIAS, data generated from a Mathematical Model (MM) of a Type 1 diabetes patient have been used. The proposed control strategy is evaluated under multiple meal disturbances, various noise levels and additional time delays. The results indicate that the implemented IIAS is capable of handling multiple meals corresponding to realistic meal profiles, large noise levels and time delays.
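The predictive-control selection step can be sketched in Python: candidate infusion rates are scored against short-horizon glucose predictions and the rate whose trajectory stays closest to target is chosen. Here a deliberately crude linear model with invented coefficients stands in for the CM + RNN hybrid model; the sketch conveys only the shape of the NMPC loop, not the actual controller.

```python
# Toy sketch of a model-predictive selection of insulin infusion rates.
# The linear glucose model and all coefficients are invented placeholders
# for the compartmental-model + RNN predictor described above.
TARGET = 100.0        # mg/dl
MEAL_RISE = 4.0       # assumed glucose rise per step from a recent meal
INSULIN_DROP = 30.0   # assumed glucose drop per unit insulin per step

def predict(glucose, rate, horizon=6):
    """Roll the toy model forward under a constant infusion rate."""
    trajectory = []
    for _ in range(horizon):
        glucose = glucose + MEAL_RISE - INSULIN_DROP * rate
        trajectory.append(glucose)
    return trajectory

def choose_rate(glucose, candidates=(0.0, 0.2, 0.5, 1.0)):
    """Pick the rate whose predicted trajectory stays closest to target."""
    def cost(rate):
        return sum((g - TARGET) ** 2 for g in predict(glucose, rate))
    return min(candidates, key=cost)

# The highest rate is not necessarily chosen: overshooting below target
# is penalized just like staying above it.
print("suggested rate:", choose_rate(glucose=180.0), "U/step")  # 0.5
```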