991 results for Cluster monitoring
Abstract:
Drawing on extensive academic research and theory on clusters and their analysis, the methodology employed in this pilot study (sponsored by the Welsh Assembly Government's Economic Research Grants Assessment Board) seeks to create a framework for reviewing and monitoring clusters in Wales on an ongoing basis, and to generate the information necessary for successful cluster development policy. The multi-method framework developed and tested in the pilot study is designed to map existing Welsh sectors with cluster characteristics, uncover existing linkages, and better understand areas of strength and weakness. The approach relies on synthesising quantitative and qualitative evidence. Statistical measures, including the size of potential clusters, are combined with input-output evidence on inter-linkages within clusters and with other sectors in Wales and the UK, as well as the export and import intensity of the cluster. Multi-Sector Qualitative Analysis is then applied to competencies/capacity, risk factors, market types and, crucially, the perceived strengths of cluster structures and relationships. With the refinements recommended through the review process, the approach outlined above can provide policy-makers with a valuable tool for reviewing and monitoring individual sectors and for ameliorating problems in sectors likely to decline further.
Abstract:
BACKGROUND: Poor long-term adherence is an important cause of uncontrolled hypertension. We examined whether monitoring drug adherence with an electronic system improves long-term blood pressure (BP) control in hypertensive patients followed by general practitioners (GPs). METHODS: A pragmatic cluster randomised controlled study was conducted over one year in community pharmacist/GP networks randomly assigned either to usual care (UC), where drugs were dispensed as usual, or to an intervention (INT) group, where drug adherence could be monitored with an electronic system (Medication Event Monitoring System). No therapy change was allowed during the first 2 months in either group. Thereafter, GPs could modify therapy and use electronic monitors freely in the INT group. The primary outcome was a target office BP<140/90 mmHg. RESULTS: Sixty-eight treated uncontrolled hypertensive patients (UC: 34; INT: 34) were enrolled. Over the 12-month period, the likelihood of reaching the target BP was higher in the INT group than in the UC group (p<0.05). At 4 months, 38% in the INT group reached the target BP vs. 12% in the UC group (p<0.05), and 21% vs. 9% at 12 months (p: ns). Multivariate analyses, taking account of baseline characteristics, therapy modification during follow-up, and clustering effects by network, indicate that allocation to the INT group was associated with greater odds of reaching the target BP at 4 months (p<0.01) and at 12 months (p=0.051). CONCLUSION: GPs monitoring drug adherence in collaboration with pharmacists achieved better BP control in hypertensive patients, although the impact of monitoring decreased with time.
Abstract:
Our efforts are directed towards understanding the coscheduling mechanism in a NOW system when a parallel job is executed jointly with local workloads, balancing parallel performance against local interactive response. Explicit and implicit coscheduling techniques have been implemented in a PVM-Linux NOW (or cluster). Furthermore, dynamic coscheduling remains an open question when parallel jobs are executed in a non-dedicated cluster. A basic model for dynamic coscheduling in cluster systems is presented in this paper, and one dynamic coscheduling algorithm for this model is proposed. The applicability of this algorithm has been proved and its performance analyzed by simulation. Finally, a new tool (named Monito) for monitoring the different message queues in such environments is presented. The main aim of implementing this facility is to provide a means of capturing the bottlenecks and overheads of the communication system in a PVM-Linux cluster.
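A minimal sketch of the kind of queue monitoring the abstract describes; this is not the Monito tool itself, and the `read_queue_length` probe is a hypothetical stand-in for querying the PVM daemon or kernel socket buffers on each node:

```python
import random
import time

NODES = ["node01", "node02", "node03"]

def read_queue_length(node):
    # Hypothetical probe: a real implementation would query the PVM
    # daemon or kernel socket buffers on the node; simulated here.
    return random.randint(0, 100)

def monitor(interval_s=1.0, samples=10):
    # Periodically sample per-node message-queue lengths and log them,
    # so communication bottlenecks and overheads can be analyzed offline.
    for _ in range(samples):
        lengths = {node: read_queue_length(node) for node in NODES}
        print(time.strftime("%H:%M:%S"), lengths)
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```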
Abstract:
Optical monitoring systems are necessary to manufacture multilayer thin-film optical filters with tight tolerances on the spectrum specification. Furthermore, to achieve better accuracy in the measurement of film thickness, direct monitoring is a must. Direct monitoring means acquiring spectrum data, in real time, from the optical component undergoing the film deposition itself. For film deposition on the surfaces of optical components, the high-vacuum evaporator chamber is the most popular equipment. Inside the evaporator, at the top of the chamber, there is a metallic support with several holes where the optical components are mounted. This metallic support rotates to promote film homogenization. To measure the spectrum of the film during deposition, a light beam must be passed through a witness glass undergoing the film deposition process, and a sample of the light beam collected with a spectrometer. As both the light beam and the light collector are stationary, a synchronization system is required to identify the moment at which the optical component passes through the light beam.
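A hedged sketch of the synchronization logic described above; the `read_sensor` and `spectrometer` objects are hypothetical device handles, not an API from the paper:

```python
import time

def wait_for_beam_alignment(read_sensor, timeout_s=5.0):
    """Block until the rotary-position sensor reports that the witness
    glass is aligned with the stationary light beam."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_sensor():  # True while the witness glass is in the beam
            return True
    return False

def acquire_spectrum(spectrometer, read_sensor):
    # Gate acquisition on the synchronization signal so the spectrum is
    # sampled only while the witness glass crosses the beam.
    if wait_for_beam_alignment(read_sensor):
        return spectrometer.read()
    raise TimeoutError("witness glass did not cross the beam in time")
```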
Abstract:
In this work, chemometric methods are reported as potential tools for monitoring the authenticity of Brazilian ultra-high temperature (UHT) milk processed in industrial plants located in different regions of the country. A total of 100 samples were submitted to qualitative analysis for adulterants such as starch, chlorine, formaldehyde, hydrogen peroxide and urine. With the exception of starch, all samples showed the presence of at least one adulterant. Chemometric methodologies such as Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) enabled the occurrence of certain adulterations to be linked to specific regions. The proposed multivariate approaches may allow sanitary agency authorities to optimise material, human and financial resources, as they associate the occurrence of adulterations with the geographical location of the industrial plants.
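A minimal sketch of the PCA/HCA workflow named above, using standard scikit-learn and SciPy routines; the data matrix is randomly generated for illustration and is not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data matrix: rows = milk samples, columns = measured variables.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 5))

X_std = StandardScaler().fit_transform(X)          # autoscale variables

scores = PCA(n_components=2).fit_transform(X_std)  # PCA scores to inspect groupings
Z = linkage(X_std, method="ward")                  # agglomerative (Ward) HCA
regions = fcluster(Z, t=4, criterion="maxclust")   # cut dendrogram into 4 groups

print(scores[:3])
print(regions[:10])
```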
Abstract:
The epiphytic lichenized fungus Canoparmelia texana was used to monitor atmospheric pollution in the São Paulo metropolitan region, SP, Brazil. Cluster analysis applied to the element concentration values confirmed the grouping of sites by level of pollution from industrial and vehicular emissions. In the distribution maps of element concentrations, higher concentrations of Ba and Mn were observed in the vicinity of industries and of a petrochemical complex. The highest concentration of Co, found in lichens from the São Miguel Paulista site, is due to emissions from a metallurgical processing plant that produces this element. For Br and Zn, the highest concentrations could be associated with both vehicular and industrial emissions. Exploratory analyses revealed that the accumulation of toxic elements in C. texana may be of use in evaluating the human risk of cardiopulmonary mortality due to prolonged exposure to ambient levels of air pollution.
Abstract:
While cluster-tree network topologies look promising for WSN applications with timeliness and energy-efficiency requirements, their adoption in commercial and academic solutions has yet to be witnessed. One of the arguments that hinder the use of these topologies concerns their lack of flexibility in adapting to changes in the network, such as changes in traffic flows. This paper presents a solution that enables these networks to self-adapt their clusters' duty cycle and scheduling, providing increased quality of service to multiple traffic flows. Importantly, our approach enables a network to change its cluster scheduling without requiring long inaccessibility times or the re-association of nodes. We show how to apply our methodology to IEEE 802.15.4/ZigBee cluster-tree WSNs without significant changes to the protocol. Finally, we analyze and demonstrate the validity of our methodology through comprehensive simulation and experimental validation using commercially available technology in a Structural Health Monitoring application scenario.
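For background on the duty cycle being adapted here, a short sketch of the standard IEEE 802.15.4 beacon-enabled relations (this is the standard's arithmetic, not the paper's scheduling algorithm):

```python
A_BASE_SUPERFRAME_DURATION = 960  # symbols, per IEEE 802.15.4

def duty_cycle(bo: int, so: int) -> float:
    # Active (superframe) duration over beacon interval; equals 2**(so - bo).
    assert 0 <= so <= bo <= 14, "IEEE 802.15.4 requires 0 <= SO <= BO <= 14"
    sd = A_BASE_SUPERFRAME_DURATION * 2 ** so  # superframe duration
    bi = A_BASE_SUPERFRAME_DURATION * 2 ** bo  # beacon interval
    return sd / bi

# Raising SO from 4 to 6 at BO = 8 lifts a cluster's duty cycle from
# 1/16 to 1/4, e.g. to serve a heavier traffic flow.
print(duty_cycle(8, 4), duty_cycle(8, 6))
```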
Abstract:
This is the third edition of the compendium. It documents the status of important projects on nanomaterial toxicity and exposure monitoring, integrated risk management, research infrastructure, and coordination and support activities. The compendium is not intended to be a guidance document for human health and environmental safety management of nanotechnologies, as such guidance documents already exist and are widely available. Neither is the compendium intended to be a medium for the publication of scientific papers and research results, as this task is covered by scientific conferences and the peer-reviewed press. The compendium aims to bring researchers closer together and show them the potential for synergy in their work. It is a means to establish links and communication between them during the actual research phase and well before the publication of their results. It thus focuses on the communication of projects' strategic aims, extensively covers specific work objectives and the methods used in research, and documents human capacities and available laboratory infrastructure. As such, the compendium supports collaboration on common goals and the joint elaboration of future plans, whilst compromising neither the potential for scientific publication nor intellectual property rights.
Abstract:
This is the second edition of the compendium. Since the first edition, a number of important initiatives have been launched in the shape of large projects targeting the integration of research infrastructure and new technology for toxicity studies and exposure monitoring. The demand for research in the area of human health and environmental safety management of nanotechnologies has been present for a decade and has been identified by several landmark reports and studies. Several guidance documents have been published; it is not the intention of this compendium to report on these, as they are widely available. Nor is it the intention to publish scientific papers and research results, as this task is covered by scientific conferences and the peer-reviewed press. The intention of the compendium is to bring researchers together, create synergy in their work, and establish links and communication between them, mainly during the actual research phase before publication of results. Towards this purpose, we find it useful to emphasise the communication of projects' strategic aims, extensive coverage of specific work objectives and of the methods used in research, the strengthening of human capacities and laboratory infrastructure, and support for collaboration on common goals and the joint elaboration of future plans, without compromising scientific publication potential or IP rights. These targets are far from being achieved with the publication in its present shape. We shall continue working, though, and hope, with the assistance of the research community, to make significant progress. The publication will take the shape of a dynamic, frequently updated, web-based document available free of charge to all interested parties. Researchers in this domain are invited to join the effort by communicating the work being done.
Abstract:
This is the fourth edition of the Nanosafety Cluster compendium. It documents the status of important projects on nanomaterial toxicity and exposure monitoring, integrated risk management, research infrastructure, and coordination and support activities. The compendium is not intended to be a guidance document for human health and environmental safety management of nanotechnologies, as such guidance documents already exist and are widely available. Neither is the compendium intended to be a medium for the publication of scientific papers and research results, as this task is covered by scientific conferences and the peer-reviewed press. The compendium aims to bring researchers closer together and show them the potential for synergy in their work. It is a means to establish links and communication between them during the actual research phase and well before the publication of their results. It thus focuses on the communication of projects' strategic aims, extensively covers specific work objectives and the methods used in research, and documents human capacities and available laboratory infrastructure. As such, the compendium supports collaboration on common goals and the joint elaboration of future plans, whilst compromising neither the potential for scientific publication nor intellectual property rights.
Abstract:
Among the tools proposed to assess the athlete's "fatigue," the analysis of heart rate variability (HRV) provides an indirect evaluation of the settings of autonomic control of heart activity. HRV analysis is commonly performed through time-domain indices such as the square root of the mean of the squared differences between adjacent normal R-R intervals (RMSSD), measured during short (5 min) recordings in the supine position upon awakening in the morning; the logarithm of RMSSD (LnRMSSD) in particular has been proposed as the most useful resting HRV indicator. However, while RMSSD can help the practitioner identify a global "fatigue" level, it does not allow discrimination between different types of fatigue. Recent results using spectral HRV analysis highlighted, firstly, that HRV profiles assessed in supine and standing positions are independent and complementary; and, secondly, that using these postural profiles allows the clustering of distinct sub-categories of "fatigue." Since cardiovascular control settings differ between standing and lying postures, using the HRV figures of both postures to cluster fatigue states embeds information on the dynamics of control responses. As such, spectral HRV analysis appears more sensitive and enlightening than time-domain HRV indices. The richer information provided by this spectral analysis should improve the monitoring of the adaptive training-recovery process in athletes.
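A minimal sketch computing the time-domain index defined above; the R-R series is illustrative, not data from the study:

```python
import numpy as np

def rmssd(rr_ms):
    """Square root of the mean of the squared differences between
    adjacent normal R-R intervals (milliseconds)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

# Illustrative R-R intervals from a short supine recording (ms).
rr = [812, 830, 845, 821, 808, 839, 851, 826]
print(f"RMSSD   = {rmssd(rr):.1f} ms")
print(f"LnRMSSD = {np.log(rmssd(rr)):.2f}")
```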
Abstract:
Crystallization is a purification method used to obtain crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry due to its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid-phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
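For clarity, the supersaturation driving force defined in the abstract can be written in standard notation (a clarifying addition, not taken from the thesis):

```latex
% Supersaturation: absolute driving force and the commonly used ratio form
\Delta c = c - c^{*}(T), \qquad S = \frac{c}{c^{*}(T)},
% where c is the actual liquid-phase concentration and c^{*}(T) the
% solubility at temperature T; nucleation and growth require
% \Delta c > 0 (equivalently S > 1).
```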
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, which is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
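A sketch of the kind of lightweight REST interaction the abstract attributes to G-Rex; the endpoint paths, server URL and JSON fields below are invented for illustration and are not the actual G-Rex API:

```python
import requests

BASE = "http://grex-server.example.org/grex"  # hypothetical service URL

# Submit a model run (the real client wraps this kind of request in the
# GRexRun command that replaces "mpirun" in the workflow script).
job = requests.post(f"{BASE}/jobs", json={"command": "nemo.exe"}).json()

# Poll the job and retrieve output while the run is in progress, so
# files do not accumulate on the remote system.
status = requests.get(f"{BASE}/jobs/{job['id']}").json()
print(status)
```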