913 results for Monitoring, SLA, JBoss, Middleware, J2EE, Java, Service Level Agreements
Abstract:
Studies of street-level bureaucracy have introduced a variety of conceptualizations, research approaches, and causal inferences. While this research has produced several insights, the impact of variety in the institutional context has not been adequately explored. We present the construct of a public service gap as a way to incorporate contextual factors and facilitate comparison. This construct addresses the differences between what is asked of and what is offered to public servants working at the street level. The heuristic enables the systematic capture of macro- and meso-contextual influences, thus enhancing comparative research on street-level bureaucracy.
Abstract:
Digital holography microscopy (DHM) is an optical microscopy technique that allows non-invasive recording of the phase shift induced by living cells with nanometric sensitivity. Here, we exploit the phase signal as an indicator of dry mass (related to the protein concentration). This parameter allows monitoring of the protein production rate and its evolution during the cell cycle.
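The abstract does not state the phase-to-dry-mass relation explicitly; a commonly used form, assuming a constant specific refractive increment (denoted α here, with the usual literature range for proteins rather than a value from this paper), is sketched below:

    \[
      \sigma(x,y) = \frac{\lambda}{2\pi\,\alpha}\,\Delta\varphi(x,y),
      \qquad
      M = \iint_{S} \sigma(x,y)\,\mathrm{d}x\,\mathrm{d}y,
    \]

where \(\Delta\varphi\) is the measured phase shift, \(\lambda\) the illumination wavelength, \(\sigma\) the dry-mass surface density integrated over the cell area \(S\), and \(\alpha \approx 0.18\text{--}0.21\ \mathrm{mL\,g^{-1}}\) for most proteins.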
Abstract:
Introduction: Streptomycin, like other aminoglycosides, exhibits concentration-dependent bacterial killing but has a narrow therapeutic window. It is primarily eliminated unchanged by the kidneys. Data and dosing information to achieve a safe regimen in patients with chronic renal failure undergoing hemodialysis (HD) are scarce. Although the main adverse reactions are related to prolonged, elevated serum concentrations, the literature recommendation is to administer streptomycin after each HD. Patients (or Materials) and Methods: We report the case of a patient with end-stage renal failure, undergoing HD, who was successfully treated with streptomycin for gentamicin-resistant Enterococcus faecalis bacteremia with prosthetic arteriovenous fistula infection. Streptomycin was administered intravenously at 7.5 mg/kg, 3 hours before each dialysis session (3 times a week), for 6 weeks in combination with amoxicillin. Streptomycin plasma levels were monitored with repeated blood sampling before, after, and between HD sessions. A 2-compartment model was used to reconstruct the concentration-time profile over days on and off HD. Results: The streptomycin trough plasma concentration was 2.8 mg/L. It peaked at 21.4 mg/L 30 minutes after intravenous administration, decreased to 18.2 mg/L immediately before HD, and dropped to 4.5 mg/L at the end of a 4-hour HD session. The plasma level increased again to 5.7 mg/L 2 hours after the end of HD and was 2.8 mg/L 48 hours later, before the next administration and HD. The pharmacokinetics of streptomycin were best described with a 2-compartment model. The computer simulation fitted the observed concentrations during and between HD sessions fairly well. Redistribution between the 2 compartments after the end of HD reproduced the rebound of plasma concentrations after HD. No significant toxicity was observed during treatment. The outcome of the infection was favorable, and no sign of relapse was observed after a follow-up of 3 months. Conclusion: Streptomycin administration of 7.5 mg/kg 3 hours before HD sessions in a patient with end-stage renal failure resulted in an effective and safe dosing regimen. Monitoring of plasma levels, together with pharmacokinetic simulation, documents the suitability of this dosing scheme, which should replace current dosage recommendations for streptomycin in HD.
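The abstract does not state the model equations; a generic two-compartment formulation with first-order transfer, written in drug amounts A1 (central) and A2 (peripheral), with an infusion rate R(t) and an additional elimination term k_HD(t) that is non-zero only during dialysis, would be a reasonable sketch (the rate constants here are placeholders, not values fitted in the case report):

    \[
      \frac{dA_1}{dt} = R(t) - \bigl(k_{10} + k_{HD}(t) + k_{12}\bigr)\,A_1 + k_{21}\,A_2,
      \qquad
      \frac{dA_2}{dt} = k_{12}\,A_1 - k_{21}\,A_2,
      \qquad
      C_1 = \frac{A_1}{V_1}.
    \]

The post-dialysis rebound described in the results corresponds to the redistribution term \(k_{21} A_2\) refilling the central compartment once \(k_{HD}(t)\) returns to zero.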
Abstract:
This thesis presents the components of application integration and its role in electronic business, and implements a service that delivers enterprise resource planning (ERP) data to a company's customer. Companies connect their systems with those of customers and business partners through integration. The theoretical part defines e-business, the components of an integration solution, and the significance of integration for a company, and presents the basic technologies used for integration. In the applied part, a Java-based system is implemented that transmits the order and delivery data of a paper industry company using XML. The thesis examines the significance of integration for basic-technology companies and the practical implementation of integration. It concludes that the integration of business systems is essential for companies' profitability and efficiency. The integration process is complex, so its implementation requires careful planning, management, and time.
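The abstract only names the technologies (Java and XML); as a rough illustration of the kind of component it describes, the following minimal sketch builds a simple XML order message and posts it to a partner endpoint. The class name, element names, and URL are hypothetical, not taken from the thesis.

    // Hedged sketch: assembling an XML order document and sending it to a
    // (hypothetical) customer integration endpoint over HTTP.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class OrderExporter {
        public static void main(String[] args) throws Exception {
            // Assemble a minimal order document (a real system would map ERP fields).
            String xml =
                "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
                "<order id=\"12345\">" +
                "  <customer>Example Customer Oy</customer>" +
                "  <line product=\"PAPER-A4-80G\" quantity=\"2000\"/>" +
                "  <deliveryDate>2024-06-01</deliveryDate>" +
                "</order>";

            // Post the document to the customer's integration endpoint (illustrative URL).
            URL endpoint = new URL("https://partner.example.com/orders");
            HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/xml; charset=UTF-8");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(xml.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Partner responded with HTTP " + conn.getResponseCode());
        }
    }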
Abstract:
Chromium(III) at the ng L-1 level was extracted using partially silylated MCM-41 modified with a tetraazamacrocyclic compound (TAMC) and determined by inductively coupled plasma optical emission spectrometry (ICP OES). The extraction time and efficiency, pH and flow rate, type and minimum amount of stripping acid, and breakthrough volume were investigated. The method's enrichment factor and detection limit are 300 and 45.5 pg mL-1, respectively. The maximum capacity of 10 mg of the modified silylated MCM-41 was found to be 400.5±4.7 µg of Cr(III). The method was applied to the determination of Cr(III) and Cr(VI) in wastewater from the chromium electroplating industry and in environmental and biological samples (black tea, hot and black pepper).
Abstract:
In this thesis, a Peer-to-Peer communication middleware for the mobile environment is developed using the Qt framework and the Qt Mobility extension. The Peer-to-Peer middleware, called PeerHood, is intended for service sharing in the network neighborhood. In addition, PeerHood provides service connectivity and device monitoring functionalities. The PeerHood concept is already available as a native C++ implementation on the Linux platform, using services from the platform. In this work, the PeerHood concept is reimplemented on top of the Qt framework. The objective of the new solution is to increase PeerHood quality by using functionalities from the Qt framework and the Qt Mobility extension. Furthermore, by using the Qt framework, the PeerHood middleware can be implemented as portable, cross-platform middleware. The quality of the new PeerHood implementation is evaluated with defined quality factors and compared with the existing PeerHood. Reliability, CPU usage, memory usage, and static code analysis metrics are used in the evaluation. The new PeerHood is shown to be more reliable and flexible than the existing one.
Abstract:
Wireless sensor networks and their applications have been widely researched and implemented in both commercial and non-commercial areas. The use of wireless sensor networks has expanded from military applications to everyday life. From a monitoring perspective, wireless sensor network applications range from home, farm field, and habitat monitoring to structural monitoring of buildings. As the usage boundaries of wireless sensor networks and their applications keep expanding, research is ongoing on topics such as the lifetime of wireless sensor networks, the security of sensor nodes, and extending applications to modern scenarios such as web services. The main focus of this thesis is to study and implement a monitoring application for an infrastructure-based sensor network and to expand its usability as a web service for mobile clients. The developed application collects and monitors information from wireless sensor nodes, enabling remote monitoring of a home or office environment for the user.
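The abstract does not describe the service interface itself; as a minimal sketch of the general idea (collected sensor readings exposed over HTTP so mobile clients can poll them remotely), the following example uses only the JDK's built-in HTTP server. The endpoint path, payload format, and class names are illustrative assumptions, not the thesis's actual design.

    // Hedged sketch: a tiny gateway that publishes the latest sensor readings
    // as JSON over HTTP for remote (e.g. mobile) clients.
    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SensorGateway {
        // Latest reading per node id, updated by whatever collects data from the WSN.
        static final Map<String, Double> latestTemperature = new ConcurrentHashMap<>();

        public static void main(String[] args) throws Exception {
            latestTemperature.put("node-01", 21.5);  // stand-in for real sensor input

            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/readings", exchange -> {
                // Serialize the current readings as a small JSON object.
                StringBuilder json = new StringBuilder("{");
                latestTemperature.forEach((node, temp) ->
                    json.append("\"").append(node).append("\":").append(temp).append(","));
                if (json.charAt(json.length() - 1) == ',') json.setLength(json.length() - 1);
                json.append("}");

                byte[] body = json.toString().getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            server.start();  // mobile clients can now GET http://host:8080/readings
        }
    }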
Abstract:
In Mark Weiser's vision of ubiquitous computing, computers disappear from the focus of the users and interact seamlessly with other computers and users in order to provide information and services. This shift away from direct computer interaction requires applications to interact in a way that does not bother the user. Context is the information that can be used to characterize the situation of persons, locations, or other objects relevant to the applications. Context-aware applications are capable of monitoring and exploiting knowledge about external operating conditions. These applications can adapt their behaviour based on the retrieved information and thus replace, at least to a certain extent, the missing user interactions. Context awareness can be assumed to be an important ingredient for applications in ubiquitous computing environments. However, context management in ubiquitous computing environments must reflect the specific characteristics of these environments, for example distribution, mobility, resource-constrained devices, and heterogeneity of context sources. Modern mobile devices are equipped with fast processors, sufficient memory, and several sensors, such as a Global Positioning System (GPS) sensor, a light sensor, or an accelerometer. Since many applications in ubiquitous computing environments can exploit context information to enhance their service to the user, these devices are highly useful for context-aware applications in such environments. Additionally, context reasoners and external context providers can be incorporated. Several context sensors, reasoners, and context providers may offer the same type of information. However, the providers can differ in the quality level (e.g. accuracy), the representation (e.g. a position given as coordinates or as an address), and the cost (such as battery consumption) of the offered information. To simplify the development of context-aware applications, developers should be able to access context information transparently, without having to deal with the underlying context-access techniques and distribution aspects. Rather, they should be able to express which kind of information they require, which quality criteria this information should fulfil, and how much its provision may cost (not only monetary cost but also energy or performance usage). For this purpose, application developers as well as developers of context providers need a common language and vocabulary to specify which information they require or provide, respectively. These descriptions and criteria have to be matched, and such a matching may require a transformation of the provided information so that it fulfils the criteria of the context-aware application. As more than one provider may fulfil the criteria, a selection process is required, in which the system trades off the provided quality of context and the cost of the context provider against the quality of context requested by the context consumer. This selection allows context sources to be turned on only when required. Explicitly selecting context services, and thereby dynamically activating and deactivating local context providers, also reduces resource consumption, since unused context sensors in particular are deactivated.
One promising solution is a middleware that provides appropriate support in line with the principles of service-oriented computing, such as loose coupling, abstraction, reusability, and discoverability of context providers. This allows context sensors, context reasoners, and also external context providers to be abstracted as context services. In this thesis we present our solution, consisting of a context model and ontology, a context offer and query language, a comprehensive matching and mediation process, and a selection service. The matching and mediation process and the selection service in particular differ from existing work. The matching and mediation process allows mediation processes to be established autonomously in order to transfer information from an offered representation into a requested representation. In contrast to other approaches, the selection service does not select a single service per request; rather, it selects a set of services that together fulfil all requests, which also facilitates the sharing of services. The approach is extensively reviewed against the different requirements, and a set of demonstrators shows its usability in real-world scenarios.
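The abstract describes the offer/query matching and selection idea without giving its concrete language or API; the following minimal sketch illustrates matching context offers against a query by information type, minimum quality, and maximum cost, then selecting the cheapest matching offer. All types, fields, and values are hypothetical illustrations, not the thesis's actual interfaces.

    // Hedged sketch: offer/query matching plus a simple cost-based selection.
    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class ContextSelector {
        // Hypothetical descriptions of what a provider offers and a consumer requests.
        record ContextOffer(String infoType, double accuracy, double cost, String representation) {}
        record ContextQuery(String infoType, double minAccuracy, double maxCost, String representation) {}

        /** Pick the cheapest offer that matches the query; a mediation step (e.g.
         *  converting coordinates to an address) would be inserted where the
         *  offered and requested representations differ. */
        static Optional<ContextOffer> select(List<ContextOffer> offers, ContextQuery query) {
            return offers.stream()
                    .filter(o -> o.infoType().equals(query.infoType()))
                    .filter(o -> o.accuracy() >= query.minAccuracy())
                    .filter(o -> o.cost() <= query.maxCost())
                    .min(Comparator.comparingDouble(ContextOffer::cost));
        }

        public static void main(String[] args) {
            List<ContextOffer> offers = List.of(
                    new ContextOffer("position", 0.95, 5.0, "coordinates"),  // GPS: accurate but costly
                    new ContextOffer("position", 0.60, 1.0, "address"));     // cell-based: cheap
            ContextQuery query = new ContextQuery("position", 0.5, 2.0, "address");
            select(offers, query).ifPresent(o ->
                    System.out.println("Selected provider with cost " + o.cost()));
        }
    }

Selecting per query in this way is also what makes it possible to activate only the providers that are actually needed, which is the resource-saving argument made at the end of the abstract.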
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al. 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al. 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al. 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine; this script is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
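The abstract notes that the REST design lets any basic HTTP client drive a remote run; the following sketch shows what such an interaction could look like from Java (start a run, then poll its output while the job progresses). The host, resource paths, and response handling are hypothetical, since G-Rex's actual resource names are not given in the abstract.

    // Hedged sketch: a minimal REST-style client for a remote model run.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RemoteRunClient {
        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();
            String base = "http://grid.example.org/grex";  // hypothetical server

            // POST to create a new run of a deployed service (e.g. an ocean model);
            // assume the server replies with the URL of the new run resource.
            HttpRequest start = HttpRequest.newBuilder(URI.create(base + "/services/nemo/runs"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            String runUrl = http.send(start, HttpResponse.BodyHandlers.ofString()).body().trim();

            // Poll the run's output resource while the job is in progress, so results
            // stream back instead of accumulating on the remote cluster.
            for (int i = 0; i < 10; i++) {
                HttpRequest poll = HttpRequest.newBuilder(URI.create(runUrl + "/output")).GET().build();
                HttpResponse<String> response = http.send(poll, HttpResponse.BodyHandlers.ofString());
                System.out.println("Output so far: " + response.body().length() + " bytes");
                Thread.sleep(5_000);
            }
        }
    }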
Abstract:
In any wide-area distributed system there is a need to communicate and interact with a range of networked devices and services, ranging from computer-based resources (CPU, memory and disk) to network components (hubs, routers, gateways) and specialised data sources (embedded devices, sensors, data feeds). In order for this ensemble of underlying technologies to provide an environment in which virtual organisations can flourish, the resources that comprise the fabric of the Grid must be monitored in a seamless manner that abstracts away the underlying complexity. Furthermore, as various competing Grid middleware offerings are released and evolve, an independent overarching monitoring service should act as a cornerstone that ties these systems together. GridRM is a standards-based approach that is independent of any given middleware and can utilise legacy and emerging resource-monitoring technologies. The main objective of the project is to produce a standardised and extensible architecture that provides seamless mechanisms to interact with native monitoring agents across heterogeneous resources.
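One way to picture the "uniform layer over heterogeneous native monitoring agents" idea is a simple adapter (driver) interface, sketched below. The interface, class names, and returned metrics are illustrative assumptions; GridRM's real driver API is not described in the abstract.

    // Hedged sketch: a uniform driver interface over native monitoring agents.
    import java.util.Map;

    /** Uniform view of a native monitoring agent (e.g. Ganglia, SNMP, /proc). */
    interface MonitoringDriver {
        /** Human-readable name of the underlying agent or protocol. */
        String agentName();

        /** Query the native agent and return metrics in a normalised key/value form. */
        Map<String, Object> query(String resourceId) throws Exception;
    }

    /** Example driver that would translate normalised queries into SNMP requests. */
    class SnmpDriver implements MonitoringDriver {
        @Override public String agentName() { return "SNMP"; }

        @Override public Map<String, Object> query(String resourceId) {
            // A real driver would issue SNMP GETs here; static values keep the sketch runnable.
            return Map.of("cpu.load", 0.42, "mem.free.mb", 1024);
        }
    }

    class GridRmDemo {
        public static void main(String[] args) throws Exception {
            MonitoringDriver driver = new SnmpDriver();
            System.out.println(driver.agentName() + " -> " + driver.query("node-01"));
        }
    }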
Abstract:
Among the most pervasive classes of services needed to support e-Science applications are those responsible for the discovery of resources. We have developed a solution to the problem of service discovery in a Semantic Web/Grid setting. We do this in the context of bioinformatics, the use of computational and mathematical techniques to store, manage, and analyse data from molecular biology in order to answer questions about biological phenomena. Our specific application is myGrid (www.mygrid.org.uk), which is developing open source, service-based middleware upon which bioinformatics applications can be built. myGrid is specifically targeted at developing open source, high-level service Grid middleware for bioinformatics.