864 results for Monitoring, SLA, JBoss, Middleware, J2EE, Java, Service Level Agreements


Relevance:

40.00%

Publisher:

Abstract:

Objectives: To explore the views of eye health professionals and service users on shared community and hospital care for wet or neovascular age-related macular degeneration (nAMD).

Method: Using maximum variation sampling, 5 focus groups and 10 interviews were conducted with 23 service users and 24 eye health professionals from across the UK (consisting of 8 optometrists, 6 ophthalmologists, 6 commissioners, 2 public health representatives and 2 clinical eye care advisors to local Clinical Commissioning Groups). Data were transcribed verbatim and analysed thematically using constant comparative techniques derived from grounded theory methodology.

Results: The needs and preferences of those with nAMD appear to be at odds with the current service being provided. There was enthusiasm among health professionals and service users about the possibility of shared care for nAMD as it was felt to have the potential to relieve hospital eye service burden and represent a more patient-centred option, but there were a number of perceived barriers to implementation. Some service users and ophthalmologists voiced concerns about optometrist competency and the potential for delays with referrals to secondary care if stable nAMD became active again. The health professionals were divided as to whether shared care was financially more efficient than the current model of care. Specialist training for optometrists, under the supervision of ophthalmologists, was deemed to be the most effective method of training and was perceived to have the potential to improve the communication and trust that shared care would require.

Conclusions: While shared care is perceived to represent a promising model of nAMD care, voiced concerns suggest that there would need to be greater collaboration between ophthalmology and optometry, in terms of interprofessional trust and communication.

Relevance:

40.00%

Publisher:

Abstract:

Highway structures such as bridges are subject to continuous degradation primarily due to ageing, loading and environmental factors. A rational transport policy must monitor and provide adequate maintenance to this infrastructure to guarantee the required levels of transport service and safety. Increasingly in recent years, bridges are being instrumented and monitored on an ongoing basis due to the implementation of Bridge Management Systems. This is very effective and provides a high level of protection to the public and early warning if the bridge becomes unsafe. However, the process can be expensive and time consuming, requiring the installation of sensors and data acquisition electronics on the bridge. This paper investigates the use of an instrumented 2-axle vehicle fitted with accelerometers to monitor the dynamic behaviour of a bridge network in a simple and cost-effective manner. A simplified half car-beam interaction model is used to simulate the passage of a vehicle over a bridge. This investigation involves the frequency domain analysis of the axle accelerations as the vehicle crosses the bridge. The spectrum of the acceleration record contains noise, vehicle, bridge and road frequency components. Therefore, the bridge dynamic behaviour is monitored in simulations for both smooth and rough road surfaces. The vehicle mass and axle spacing are varied in simulations along with bridge structural damping in order to analyse the sensitivity of the vehicle accelerations to a change in bridge properties. These vehicle accelerations can be obtained for different periods of time and serve as a useful tool to monitor the variation of bridge frequency and damping with time.
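As an editorial illustration of the frequency-domain analysis described above (not code from the paper; the names and the synthetic signal are purely illustrative), the sketch below picks the dominant spectral peak of a sampled axle-acceleration record with a plain DFT. In practice the record would come from the instrumented vehicle's accelerometers.

```java
// Illustrative sketch (not from the paper): estimate the dominant bridge
// frequency from a sampled axle-acceleration record via a plain DFT.
public class BridgeFrequencyEstimator {

    /** Returns the frequency (Hz) of the largest spectral peak between fMin and fMax. */
    static double dominantFrequency(double[] accel, double sampleRateHz,
                                    double fMin, double fMax) {
        int n = accel.length;
        double best = 0.0, bestMag = -1.0;
        for (int k = 1; k < n / 2; k++) {              // one-sided spectrum
            double f = k * sampleRateHz / n;
            if (f < fMin || f > fMax) continue;        // restrict to a plausible bridge band
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; t++) {              // DFT bin k
                double ang = 2.0 * Math.PI * k * t / n;
                re += accel[t] * Math.cos(ang);
                im -= accel[t] * Math.sin(ang);
            }
            double mag = Math.hypot(re, im);
            if (mag > bestMag) { bestMag = mag; best = f; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Synthetic record: a 4.5 Hz "bridge" component buried in road/vehicle noise.
        double fs = 100.0;                             // sampling rate (Hz)
        double[] a = new double[1024];
        java.util.Random rnd = new java.util.Random(1);
        for (int t = 0; t < a.length; t++) {
            a[t] = Math.sin(2 * Math.PI * 4.5 * t / fs) + 0.5 * rnd.nextGaussian();
        }
        System.out.printf("Estimated bridge frequency: %.2f Hz%n",
                dominantFrequency(a, fs, 1.0, 15.0));
    }
}
```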

Relevance:

40.00%

Publisher:

Abstract:

Information and communication technologies in healthcare are not merely a tool for good information management, but rather a strategic factor for more efficient and safer care delivery. Information technologies are a pillar for health systems to evolve towards a citizen-centred model, in which a comprehensive set of patient information should be automatically available to the teams providing care, regardless of where it was generated (geographic location or system). This kind of safe, aggregated use of clinical information is undermined by the widespread fragmentation of health information system implementations. Several approaches have been proposed to overcome the limitations arising from the so-called "islands of information" in healthcare, ranging from full centralization (a single system) to the use of decentralized networks for exchanging clinical messages. In this work, we propose the use of a service-based unification layer built on the federation of heterogeneous information sources. This clinical information aggregator provides the foundation needed to develop applications with a regional scope, which we demonstrated with the implementation of a virtual electronic health record system. Unlike the point-to-point clinical messaging methods that are popular in health systems integration, we developed a middleware following J2EE architecture patterns, in which the federated information is expressed as an object model accessible through programming interfaces. The proposed architecture was instantiated in the Rede Telemática de Saúde, a platform deployed in the Aveiro region that links eight partner institutions (two hospitals and six primary care centres), covers ~350,000 citizens, is used by ~350 registered professionals, and provides access to more than 19,000,000 episodes. Beyond the regional collaborative health platform (RTSys), we introduced a second line of research, seeking to bridge networks for care delivery and networks for scientific computing. In this second scenario, we propose the use of Grid computing models to enable the large-scale use and integration of biomedical information. The proposed architecture (not implemented) allows access to existing e-Science infrastructures to create clinical information repositories for health applications.
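To illustrate the idea of expressing federated clinical information as an object model behind programming interfaces, here is a minimal hypothetical sketch; all type names are invented and this is not the RTSys/J2EE API.

```java
// Hypothetical sketch (names are not the RTSys API): a federation layer that
// exposes heterogeneous clinical sources behind one object model.
import java.util.ArrayList;
import java.util.List;

record Episode(String patientId, String institution, String summary) {}

interface PatientRecordSource {                        // one adapter per institution/system
    List<Episode> episodesFor(String patientId);
}

class FederatedRecordService {
    private final List<PatientRecordSource> sources = new ArrayList<>();

    void register(PatientRecordSource source) { sources.add(source); }

    /** Aggregates a patient's episodes across every federated source. */
    List<Episode> virtualRecord(String patientId) {
        List<Episode> all = new ArrayList<>();
        for (PatientRecordSource s : sources) {
            all.addAll(s.episodesFor(patientId));      // in practice: async calls, caching, consent checks
        }
        return all;
    }

    public static void main(String[] args) {
        FederatedRecordService svc = new FederatedRecordService();
        svc.register(id -> List.of(new Episode(id, "Hospital A", "2019 cardiology visit")));
        svc.register(id -> List.of(new Episode(id, "Health Centre B", "2021 vaccination")));
        svc.virtualRecord("patient-42").forEach(System.out::println);
    }
}
```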

Relevance:

40.00%

Publisher:

Abstract:

In Mark Weiser's vision of ubiquitous computing, computers disappear from the focus of the users and interact seamlessly with other computers and users in order to provide information and services. This shift away from direct computer interaction requires another way for applications to interact that does not bother the user. Context is the information which can be used to characterize the situation of persons, locations, or other objects relevant to the applications. Context-aware applications are capable of monitoring and exploiting knowledge about external operating conditions. These applications can adapt their behaviour based on the retrieved information and thus replace, at least to a certain extent, the missing user interaction. Context awareness can therefore be assumed to be an important ingredient for applications in ubiquitous computing environments. However, context management in such environments must reflect their specific characteristics, for example distribution, mobility, resource-constrained devices, and heterogeneity of context sources. Modern mobile devices are equipped with fast processors, sufficient memory, and several sensors, such as a Global Positioning System (GPS) receiver, a light sensor, or an accelerometer. Since many applications in ubiquitous computing environments can exploit context information to enhance their service to the user, these devices are highly useful for context-aware applications. Additionally, context reasoners and external context providers can be incorporated. Several context sensors, reasoners and context providers may offer the same type of information; however, these providers can differ in the quality levels (e.g. accuracy), representations (e.g. a position given as coordinates or as an address), and costs (such as battery consumption) of the offered information. In order to simplify the development of context-aware applications, developers should be able to access context information transparently, without dealing with the underlying context-access techniques and distribution aspects. They should instead be able to express which kind of information they require, which quality criteria this information should fulfil, and how much its provision may cost (not only monetary cost but also energy or performance usage). For this purpose, application developers as well as developers of context providers need a common language and vocabulary to specify which information they require or provide, respectively. These descriptions and criteria then have to be matched, and matching them is likely to require transforming the provided information so that it fulfils the criteria of the context-aware application. As more than one provider may fulfil the criteria, a selection process is required, in which the system trades off the quality of context and cost offered by each context provider against the quality of context requested by the context consumer. This selection allows context sources to be turned on only when required. Explicitly selecting context services, and thereby dynamically activating and deactivating local context providers, also reduces resource consumption, since unused context sensors in particular are deactivated.
One promising solution is a middleware that provides appropriate support based on the principles of service-oriented computing, such as loose coupling, abstraction, reusability, and discoverability of context providers. This allows us to abstract context sensors, context reasoners and also external context providers as context services. In this thesis we present our solution, consisting of a context model and ontology, a context offer and query language, a comprehensive matching and mediation process, and a selection service. The matching and mediation process and the selection service in particular differ from existing work. The matching and mediation process allows mediation processes to be established autonomously in order to transform information from an offered representation into a requested representation. In contrast to other approaches, the selection service does not select a single service for a single request; rather, it selects a set of services in order to fulfil all requests, which also facilitates the sharing of services. The approach is extensively evaluated against the different requirements, and a set of demonstrators shows its usability in real-world scenarios.
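A minimal sketch of the offer/query matching and cost-aware selection idea described above, simplified to a single request (the thesis selects a set of services covering all requests); the types and fields are illustrative, not the thesis's actual offer and query language.

```java
// Illustrative sketch (not the thesis implementation): select the cheapest
// context provider whose offered quality satisfies the consumer's query.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

record ContextOffer(String providerId, String entityType,
                    double accuracyMeters, double costPerReading) {}

record ContextQuery(String entityType, double maxAccuracyMeters, double maxCost) {}

class ContextSelector {
    /** Matching step: type must agree, accuracy and cost must meet the query's bounds. */
    static boolean matches(ContextOffer o, ContextQuery q) {
        return o.entityType().equals(q.entityType())
                && o.accuracyMeters() <= q.maxAccuracyMeters()
                && o.costPerReading() <= q.maxCost();
    }

    /** Selection step: among matching offers, prefer the lowest cost (unused providers stay off). */
    static Optional<ContextOffer> select(List<ContextOffer> offers, ContextQuery q) {
        return offers.stream()
                .filter(o -> matches(o, q))
                .min(Comparator.comparingDouble(ContextOffer::costPerReading));
    }

    public static void main(String[] args) {
        List<ContextOffer> offers = List.of(
                new ContextOffer("gps", "position", 5.0, 0.8),
                new ContextOffer("wifi", "position", 30.0, 0.1));
        System.out.println(select(offers, new ContextQuery("position", 50.0, 1.0)));
        // -> the Wi-Fi provider wins: good enough accuracy at lower cost; the GPS can stay off.
    }
}
```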

Relevance:

40.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
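To give a flavour of the REST-style interaction described above, the following hypothetical client polls a remote run's status and streams its output back while the run progresses; the endpoint paths and status strings are invented placeholders, not G-Rex's actual API.

```java
// Hypothetical sketch (endpoint paths are invented, not G-Rex's actual API):
// the kind of lightweight REST polling a G-Rex-style client performs while a
// remote model run is in progress.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RemoteRunMonitor {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String base = "http://grid-server.example.org/service/jobs/1234";  // placeholder URL

        while (true) {
            HttpRequest statusReq = HttpRequest.newBuilder(URI.create(base + "/status")).GET().build();
            String status = http.send(statusReq, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("Run status: " + status);
            if (status.contains("FINISHED")) break;     // assumption: the server reports a terminal state

            // New output is fetched as it appears, so it never accumulates remotely.
            HttpRequest outReq = HttpRequest.newBuilder(URI.create(base + "/output")).GET().build();
            System.out.println(http.send(outReq, HttpResponse.BodyHandlers.ofString()).body());

            Thread.sleep(30_000);                        // poll every 30 s
        }
    }
}
```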

Relevance:

40.00%

Publisher:

Abstract:

In any wide-area distributed system there is a need to communicate and interact with a range of networked devices and services, from computer-based resources (CPU, memory and disk) to network components (hubs, routers, gateways) and specialised data sources (embedded devices, sensors, data feeds). In order for the ensemble of underlying technologies to provide an environment in which virtual organisations can flourish, the resources that comprise the fabric of the Grid must be monitored in a seamless manner that abstracts away the underlying complexity. Furthermore, as various competing Grid middleware offerings are released and evolve, an independent overarching monitoring service should act as a cornerstone that ties these systems together. GridRM is a standards-based approach that is independent of any given middleware and can utilise legacy and emerging resource-monitoring technologies. The main objective of the project is to produce a standardised and extensible architecture that provides seamless mechanisms for interacting with native monitoring agents across heterogeneous resources.
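A rough sketch of the kind of driver abstraction such an architecture implies (illustrative only, not the GridRM codebase): each heterogeneous native agent is wrapped behind one uniform sampling contract, so the gateway can query any resource the same way.

```java
// Illustrative sketch (not the GridRM codebase): a uniform driver interface that
// hides heterogeneous native monitoring agents behind one query contract.
import java.util.Map;

interface MonitoringDriver {                            // one implementation per legacy/native agent
    String resourceId();
    Map<String, Double> sample();                       // e.g. {"cpu.load": 0.42, "disk.free.gb": 118.0}
}

class GatewayMonitor {
    /** Polls any registered driver the same way, regardless of the underlying technology. */
    void report(MonitoringDriver driver) {
        driver.sample().forEach((metric, value) ->
                System.out.printf("%s %s=%.2f%n", driver.resourceId(), metric, value));
    }

    public static void main(String[] args) {
        MonitoringDriver fake = new MonitoringDriver() {  // stand-in for a native agent adapter
            public String resourceId() { return "cluster-node-07"; }
            public Map<String, Double> sample() { return Map.of("cpu.load", 0.42, "disk.free.gb", 118.0); }
        };
        new GatewayMonitor().report(fake);
    }
}
```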

Relevance:

40.00%

Publisher:

Abstract:

One of the most pervasive classes of services needed to support e-Science applications is that responsible for the discovery of resources. We have developed a solution to the problem of service discovery in a Semantic Web/Grid setting. We do this in the context of bioinformatics, which is the use of computational and mathematical techniques to store, manage, and analyse the data from molecular biology in order to answer questions about biological phenomena. Our specific application is myGrid (www.mygrid.org.uk), which is developing open source, service-based middleware upon which bioinformatics applications can be built. myGrid is specifically targeted at developing open source, high-level service Grid middleware for bioinformatics.

Relevance:

40.00%

Publisher:

Abstract:

One of the most pervasive classes of services needed to support e-Science applications is that responsible for the discovery of resources. We have developed a solution to the problem of service discovery in a Semantic Web/Grid setting. We do this in the context of bioinformatics, which is the use of computational and mathematical techniques to store, manage, and analyse the data from molecular biology in order to answer questions about biological phenomena. Our specific application is myGrid (http://www.mygrid.org.uk), which is developing open source, service-based middleware upon which bioinformatics applications can be built. myGrid is specifically targeted at developing open source, high-level service Grid middleware for bioinformatics.

Relevance:

40.00%

Publisher:

Abstract:

Context-aware applications are typically dynamic and use services provided by several sources, with different quality levels. Context information quality is expressed in terms of Quality of Context (QoC) metadata, such as precision, correctness, refreshment, and resolution. Service quality, in turn, is expressed via Quality of Service (QoS) metadata such as response time, availability and error rate. In order to ensure that an application is using services and context information that meet its requirements, it is essential to continuously monitor this metadata. For this purpose, a QoS and QoC monitoring mechanism is needed that meets the following requirements: (i) it supports measurement and monitoring of QoS and QoC metadata; (ii) it supports synchronous and asynchronous operation, enabling the application both to periodically gather the monitored metadata and to be asynchronously notified whenever a given metadata item becomes available; (iii) it uses ontologies to represent information in order to avoid ambiguous interpretation. This work presents QoMonitor, a module for QoS and QoC metadata monitoring that meets the above requirements. The architecture and implementation of QoMonitor are discussed. To support asynchronous communication, QoMonitor uses two protocols: JMS and Light-PubSubHubbub. In order to illustrate QoMonitor in the development of ubiquitous applications, it was integrated with OpenCOPI (Open COntext Platform Integration), a middleware platform that integrates several context provision middleware systems. To validate QoMonitor we used two applications as proof of concept: an oil and gas monitoring application and a healthcare application. This work also presents a validation of QoMonitor in terms of performance for both synchronous and asynchronous requests.
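A minimal sketch of the two interaction styles named in requirement (ii), synchronous polling and asynchronous notification; the interfaces below are invented for illustration and are not QoMonitor's API.

```java
// Hypothetical sketch (interfaces are invented, not QoMonitor's API): the two
// interaction styles described in the text, synchronous polling and
// asynchronous notification of QoS/QoC metadata.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface MetadataListener {
    void onMetadata(String serviceId, String name, double value);   // push-style callback
}

class MetadataMonitor {
    private final Map<String, Map<String, Double>> latest = new ConcurrentHashMap<>();
    private final List<MetadataListener> listeners = new ArrayList<>();

    void subscribe(MetadataListener l) { listeners.add(l); }

    /** Called by probes; stores the sample and notifies every subscriber. */
    void record(String serviceId, String name, double value) {
        latest.computeIfAbsent(serviceId, k -> new ConcurrentHashMap<>()).put(name, value);
        listeners.forEach(l -> l.onMetadata(serviceId, name, value));
    }

    /** Synchronous access: the application polls the most recent value on demand. */
    Double get(String serviceId, String name) {
        return latest.getOrDefault(serviceId, Map.of()).get(name);
    }

    public static void main(String[] args) {
        MetadataMonitor m = new MetadataMonitor();
        m.subscribe((svc, name, value) ->                            // asynchronous-style notification
                System.out.println("notified: " + svc + " " + name + "=" + value));
        m.record("weatherService", "responseTimeMs", 180.0);
        System.out.println("polled: " + m.get("weatherService", "responseTimeMs"));  // synchronous access
    }
}
```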

Relevance:

40.00%

Publisher:

Abstract:

Semantic Web technologies are strategic for fulfilling the openness requirement of Self-Aware Pervasive Service Ecosystems. They provide agents with the ability to cope with distributed data, using RDF to represent information, ontologies to describe relations between concepts from any domain (e.g. equivalence, specialization/extension, and so on), and reasoners to extract implicit knowledge. The aim of this thesis is to study these technologies and to design an extension of a pervasive service ecosystems middleware capable of exploiting this semantic power, while investigating the performance implications in depth.
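As a small illustration of the "implicit knowledge" point (plain Java, no RDF library; the concept names are illustrative), the sketch below infers transitive subclass relations that were never stated directly.

```java
// Illustrative sketch (plain Java, no triple store): the flavour of implicit
// knowledge a reasoner extracts, here transitive rdfs:subClassOf inference.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class TinyReasoner {
    private final Map<String, Set<String>> superClasses = new HashMap<>();

    void subClassOf(String sub, String sup) {
        superClasses.computeIfAbsent(sub, k -> new HashSet<>()).add(sup);
    }

    /** Returns all (direct and inferred) superclasses of a concept. */
    Set<String> allSuperClasses(String concept) {
        Set<String> result = new HashSet<>();
        collect(concept, result);
        return result;
    }

    private void collect(String concept, Set<String> acc) {
        for (String sup : superClasses.getOrDefault(concept, Set.of())) {
            if (acc.add(sup)) collect(sup, acc);        // follow the hierarchy transitively
        }
    }

    public static void main(String[] args) {
        TinyReasoner r = new TinyReasoner();
        r.subClassOf("ex:TemperatureSensor", "ex:Sensor");
        r.subClassOf("ex:Sensor", "ex:Device");
        // Inferred: a TemperatureSensor is also a Device, although that was never stated directly.
        System.out.println(r.allSuperClasses("ex:TemperatureSensor"));
    }
}
```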

Relevance:

40.00%

Publisher:

Abstract:

The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services that have the potential to improve people's quality of life in a variety of crosscutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, which offers a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
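A toy sketch of the quality/cost trade-off at the heart of this idea (not Quasit or LAAR): only tuples of flows that declare a strong guarantee pay the cost of replicated execution, while best-effort flows are processed once.

```java
// Illustrative sketch (not Quasit or LAAR): differentiated processing where only
// flows that declare a strong guarantee pay the cost of replicated execution.
import java.util.List;
import java.util.function.Consumer;

enum QosClass { BEST_EFFORT, GUARANTEED }

record StreamTuple(String flowId, QosClass qos, String payload) {}

class DifferentiatedProcessor {
    private final List<Consumer<StreamTuple>> replicas;

    DifferentiatedProcessor(List<Consumer<StreamTuple>> replicas) { this.replicas = replicas; }

    void process(StreamTuple t) {
        if (t.qos() == QosClass.GUARANTEED) {
            replicas.forEach(r -> r.accept(t));         // redundant execution: higher cost, stronger guarantee
        } else {
            replicas.get(0).accept(t);                  // single execution: cheaper, tolerates loss on failure
        }
    }

    public static void main(String[] args) {
        DifferentiatedProcessor p = new DifferentiatedProcessor(List.of(
                t -> System.out.println("replica-1 handled " + t.payload()),
                t -> System.out.println("replica-2 handled " + t.payload())));
        p.process(new StreamTuple("health-flow", QosClass.GUARANTEED, "heart-rate=81"));
        p.process(new StreamTuple("game-flow", QosClass.BEST_EFFORT, "score=1200"));
    }
}
```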