399 results for error monitoring
Abstract:
The liberalization of international trade and foreign direct investment through multilateral, regional and bilateral agreements has had profound implications for the structure and nature of food systems, and therefore, for the availability, nutritional quality, accessibility, price and promotion of foods in different locations. Public health attention has only relatively recently turned to the links between trade and investment agreements, diets and health, and there is currently no systematic monitoring of this area. This paper reviews the available evidence on the links between trade agreements, food environments and diets from an obesity and non-communicable disease (NCD) perspective. Based on the key issues identified through the review, the paper outlines an approach for monitoring the potential impact of trade agreements on food environments and obesity/NCD risks. The proposed monitoring approach encompasses a set of guiding principles, recommended procedures for data collection and analysis, and quantifiable ‘minimal’, ‘expanded’ and ‘optimal’ measurement indicators to be tailored to national priorities, capacity and resources. Formal risk assessment processes for existing and evolving trade and investment agreements, focused on their impacts on food environments, will help inform the development of healthy trade policy, strengthen domestic nutrition and health policy space and ultimately protect population nutrition.
Abstract:
INFORMAS (International Network for Food and Obesity/non-communicable diseases Research, Monitoring and Action Support) aims to monitor and benchmark the healthiness of food environments globally. In order to assess the impact of food environments on population diets, it is necessary to monitor population diet quality between countries and over time. This paper reviews existing data sources suitable for monitoring population diet quality, and assesses their strengths and limitations. A step-wise framework is then proposed for monitoring population diet quality. Food balance sheets (FBaS), household budget and expenditure surveys (HBES) and food intake surveys are all suitable methods for assessing population diet quality. In the proposed ‘minimal’ approach, national trends of food and energy availability can be explored using FBaS. In the ‘expanded’ and ‘optimal’ approaches, the dietary share of ultra-processed products is measured as an indicator of energy-dense, nutrient-poor diets using HBES and food intake surveys, respectively. In addition, it is proposed that pre-defined diet quality indices are used to score diets, and some of those have been designed for application within all three monitoring approaches. However, in order to enhance the value of global efforts to monitor diet quality, data collection methods and diet quality indicators need further development work.
Abstract:
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms, configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of, and need for, an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
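A minimal sketch of how the pass/fail thresholds quoted above could be checked against replicate injections of an SSP peptide; the function names, the example numbers and the max-minus-min approximation of RT drift are illustrative assumptions, not the protocol's prescribed implementation.

```python
import statistics

# Pass/fail thresholds quoted in the abstract (RT values in minutes).
PEAK_AREA_CV_MAX = 0.15
PEAK_WIDTH_CV_MAX = 0.15
RT_SD_MAX = 0.15    # 9 s
RT_DRIFT_MAX = 0.5  # 30 s

def cv(values):
    """Coefficient of variation: standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

def system_suitability(peak_areas, peak_widths, retention_times):
    """Check replicate measurements of one SSP peptide against the pass/fail criteria.
    RT drift is approximated here as the max-min spread across replicates (an assumption)."""
    checks = {
        "peak_area_cv": cv(peak_areas) < PEAK_AREA_CV_MAX,
        "peak_width_cv": cv(peak_widths) < PEAK_WIDTH_CV_MAX,
        "rt_sd": statistics.stdev(retention_times) < RT_SD_MAX,
        "rt_drift": max(retention_times) - min(retention_times) < RT_DRIFT_MAX,
    }
    return all(checks.values()), checks

# Example: five replicate injections of a single peptide (illustrative numbers).
passed, detail = system_suitability(
    peak_areas=[1.02e6, 0.98e6, 1.05e6, 1.01e6, 0.97e6],
    peak_widths=[0.21, 0.22, 0.20, 0.21, 0.22],            # minutes
    retention_times=[18.42, 18.45, 18.40, 18.44, 18.43],   # minutes
)
print("PASS" if passed else "FAIL", detail)
```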
Abstract:
Novel computer vision techniques have been developed for automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated can range from 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown and reference zircons to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
Abstract:
Persistent monitoring of the ocean is not optimally accomplished by repeatedly executing a fixed path in a fixed location. The ocean is dynamic, and so should be the paths executed to monitor and observe it. An open question merging autonomy and optimal sampling is how and when to alter a path or decision while still achieving the desired science objectives. Additionally, many marine robotic deployments can last multiple weeks to months, making it very difficult for individuals to continuously monitor and retask them as needed. This problem becomes increasingly complex when multiple platforms are operating simultaneously. There is a need for monitoring and adaptation of the robotic fleet via teams of scientists working in shifts; crowds are ideal for this task. In this paper, we present a novel application of crowd-sourcing to extend the autonomy of persistent-monitoring vehicles to enable nonrepetitious sampling over long periods of time. We present a framework that enables the control of a marine robot by anybody with an internet-enabled device. Voters are provided with the current vehicle location, gathered science data and predicted ocean features through the associated decision support system. Results are included from a simulated implementation of our system on a Wave Glider operating in Monterey Bay, with the science objective of maximizing the sum of observed nitrate values collected.
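As an illustration of the crowd-in-the-loop idea (not the authors' decision support system), the sketch below tallies votes from internet users over a small set of candidate waypoints and returns the winner; the candidate names and the tie-breaking rule are assumptions.

```python
from collections import Counter

def select_next_waypoint(votes, candidates):
    """Tally crowd votes over candidate waypoints and return the most popular one.
    Invalid votes are ignored; ties are broken by candidate order (an assumption)."""
    tally = Counter(v for v in votes if v in candidates)
    if not tally:
        return candidates[0]  # default waypoint if no valid votes arrive
    return max(candidates, key=lambda c: tally[c])

# Hypothetical candidates derived from predicted ocean features and science goals.
candidates = ["north_transect", "upwelling_front", "nitrate_max_patch"]
votes = ["upwelling_front", "nitrate_max_patch", "upwelling_front", "north_transect"]
print(select_next_waypoint(votes, candidates))  # -> upwelling_front
```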
Abstract:
The design of hydraulic turbines often has to deal with hydraulic instability. It is well known that Francis and Kaplan types present hydraulic instability in their design power range. Even if modern CFD tools may help to define these dangerous operating conditions and optimize runner design, hydraulic instabilities may fortuitously arise during the turbine life and should be detected in a timely manner in order to assure a long-lasting operating life. In a previous paper, the authors considered the phenomenon of the helical vortex rope, which occurs at low flow rates when a swirling flow in the draft tube conical inlet occupies a large portion of the inlet. In this condition, a strong helical vortex rope appears. The vortex rope causes mechanical effects on the runner, on the whole turbine and on the draft tube, which may eventually produce severe damage to the turbine unit and whose most evident symptoms are vibrations. The authors have already shown that vibration analysis is suitable for detecting vortex rope onset, thanks to an experimental test campaign performed during the commissioning of a 23 MW Kaplan hydraulic turbine unit. In this paper, the authors propose a sophisticated data-driven approach to detect vortex rope onset at different power loads, based on the analysis of vibration signals in the order domain and introducing the so-called "residual order spectrogram", i.e. an order-rotation representation of the vibration signal. Some experimental test runs are presented and the possibility of detecting instability onset, especially in real time, is discussed.
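A generic sketch of order-domain analysis of a vibration signal (angle-domain resampling followed by a spectrogram whose frequency axis is in shaft orders, plus a simple residual step); the function names, the use of SciPy, and the median-based residual are assumptions and do not reproduce the authors' residual order spectrogram.

```python
import numpy as np
from scipy import interpolate, signal

def order_spectrogram(vib, shaft_angle, samples_per_rev=64):
    """Resample a time-domain vibration signal onto a uniform shaft-angle grid using the
    instantaneous shaft angle (radians), then compute a spectrogram whose frequency axis
    is expressed in shaft orders (multiples of the rotation frequency)."""
    n_rev = (shaft_angle[-1] - shaft_angle[0]) / (2 * np.pi)
    angle_grid = np.linspace(shaft_angle[0], shaft_angle[-1], int(n_rev * samples_per_rev))
    vib_angle = interpolate.interp1d(shaft_angle, vib)(angle_grid)
    # With samples_per_rev samples per revolution, the spectrogram frequency axis is in
    # orders and its time axis is in revolutions.
    orders, revolutions, S = signal.spectrogram(vib_angle, fs=samples_per_rev,
                                                nperseg=8 * samples_per_rev)
    return orders, revolutions, np.abs(S)

def residual_spectrogram(S):
    """Subtract the per-order median across revolutions, removing components that are
    steady over the run so that emerging content (e.g. a vortex-rope signature) stands out."""
    return S - np.median(S, axis=1, keepdims=True)
```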
Abstract:
This workshop was supported by the Australian Centre for Ecological Analysis and Synthesis (ACEAS, http://www.aceas.org.au/), a facility of the Australian Government-funded Terrestrial Ecosystem Research Network (http://www.tern.org.au/), a research infrastructure facility established under the National Collaborative Research Infrastructure Strategy and Education Infrastructure Fund - Super Science Initiative, through the Department of Industry, Innovation, Science, Research and Tertiary Education. Hosted by: Queensland University of Technology, Brisbane, Queensland. (QUT, http://www.qut.edu.au/) Dates: 8-11 May 2012 Report Editors: Prof Stuart Parsons (Uni. Auckland, NZ) and Dr Michael Towsey (QUT). This report is a compilation of notes and discussion summaries contributed by those attending the Workshop. They have been assembled into a logical order by the editors. Another report (with photographs) can be obtained at: http://www.aceas.org.au/index.php?option=com_content&view=article&id=94&Itemid=96
Abstract:
Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose locations where measurements are taken so as to maximise information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm is used to search for optimal sampling designs. In particular, we focus on the problem of finding an optimal design from a set of fixed designs and finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but, given the utility function, designs are relatively robust to the type of response variable.
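A minimal sketch of the design search described above: a design's expected utility is estimated by Monte Carlo averaging over prior draws, and a point-exchange loop swaps design points for candidate points whenever the estimate improves. The utility function and prior sampler are left as user-supplied callables, and nothing here encodes the paper's stream-distance correlation models.

```python
import numpy as np

def expected_utility(design, utility, sample_prior, n_mc=200):
    """Monte Carlo estimate of the mean utility of a design, averaged over prior draws."""
    return float(np.mean([utility(design, theta) for theta in sample_prior(n_mc)]))

def exchange_search(candidates, k, utility, sample_prior, n_pass=5, seed=0):
    """Greedy point-exchange search: start from a random k-point subset of the candidate
    sampling locations and swap points whenever the estimated expected utility improves."""
    rng = np.random.default_rng(seed)
    design = list(rng.choice(len(candidates), size=k, replace=False))
    best = expected_utility([candidates[i] for i in design], utility, sample_prior)
    for _ in range(n_pass):
        improved = False
        for pos in range(k):
            for j in range(len(candidates)):
                if j in design:
                    continue
                trial = design[:pos] + [j] + design[pos + 1:]
                u = expected_utility([candidates[i] for i in trial], utility, sample_prior)
                if u > best:
                    design, best, improved = trial, u, True
        if not improved:
            break
    return [candidates[i] for i in design], best
```

In practice the utility callable might, for instance, return the negative average prediction variance over the network under a correlation model drawn from the prior, which is one of the design goals the abstract mentions.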
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research examining the effects of uncertainties in generic WSN platforms and verifying the capability of SHM-oriented WSNs, particularly in demanding SHM applications like modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. Then, the effects of the most significant inherent WSN uncertainty on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are intensively investigated. Experimental accelerations collected by a wired sensor system on a benchmark civil structure are initially used as clean data before being contaminated with different levels of data pollutants to simulate practical uncertainties in both WSN platforms. Statistical analyses are comprehensively employed in order to uncover the distribution pattern of the uncertainty influence on the OMDI approach. The result of this research shows that uncertainties of generic WSNs can have a serious impact on level 1 OMDI methods utilizing mode shapes. It also demonstrates that SHM-oriented WSNs can substantially lessen the impact and obtain true structural information without resorting to costly computational solutions.
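One standard way to quantify how much WSN-induced data pollution distorts identified mode shapes, before any damage identification is attempted, is the Modal Assurance Criterion (MAC); the sketch below is an illustrative assumption, not necessarily the statistical analysis used in the article, and the example mode shape is made up.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors: 1.0 means identical
    shapes (up to scale); lower values indicate the kind of distortion that data loss
    or synchronization error can introduce."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.real(np.vdot(phi_a, phi_a)) * np.real(np.vdot(phi_b, phi_b))
    return float(num / den)

# Clean (wired) first mode shape vs the same shape after simulated data pollution.
phi_clean = np.array([0.00, 0.31, 0.59, 0.81, 0.95, 1.00])
phi_noisy = phi_clean + np.random.default_rng(1).normal(scale=0.05, size=phi_clean.size)
print(round(mac(phi_clean, phi_noisy), 3))
```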
Abstract:
Emerging infectious diseases present a complex challenge to public health officials and governments; these challenges have been compounded by rapidly shifting patterns of human behaviour and globalisation. The increase in emerging infectious diseases has led to calls for new technologies and approaches for detection, tracking, reporting, and response. Internet-based surveillance systems offer a novel and developing means of monitoring conditions of public health concern, including emerging infectious diseases. We review studies that have exploited internet use and search trends to monitor two such diseases: influenza and dengue. Internet-based surveillance systems have good congruence with traditional surveillance approaches. Additionally, internet-based approaches are logistically and economically appealing. However, they do not have the capacity to replace traditional surveillance systems; they should not be viewed as an alternative, but rather an extension. Future research should focus on using data generated through internet-based surveillance and response systems to bolster the capacity of traditional surveillance systems for emerging infectious diseases.
Abstract:
Considering the wide spectrum of situations that it may encounter, a robot navigating autonomously in outdoor environments needs to be endowed with several operating modes, for robustness and efficiency reasons. Indeed, the terrain it has to traverse may be composed of flat or rough areas, low-cohesion soils such as sand dunes, concrete roads, etc. Traversing these various kinds of environment calls for different navigation and/or locomotion functionalities, especially if the robot is endowed with different locomotion abilities, such as the robots WorkPartner, Hylos [4], Nomad or the Marsokhod rovers. Numerous rover navigation techniques have been proposed, each of them being suited to a particular environment context (e.g. path following, obstacle avoidance in more or less cluttered environments, rough terrain traverses). However, few contributions in the literature tackle the problem of autonomously selecting the most suitable mode [3]. Most of the existing work is indeed devoted to the passive analysis of a single navigation mode, as in [2]. Fault detection is of course essential: one can imagine that proper monitoring of the Mars Exploration Rover Opportunity could have prevented the rover from being stuck for several weeks in a dune, by detecting the non-nominal behavior of some parameters. But the ability to recover from the anticipated problem by switching to a better suited navigation mode would bring higher autonomy and, therefore, better overall efficiency. We propose here a probabilistic framework to achieve this, which fuses environment-related and robot-related information in order to actively control the rover's operations.
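A toy sketch of the kind of probabilistic fusion the abstract describes: a prior over navigation modes is combined with likelihoods derived from environment-related and robot-related observations, and the mode with the highest posterior is selected. The mode names and all numbers are illustrative assumptions, not the authors' model.

```python
import numpy as np

MODES = ["path_following", "obstacle_avoidance", "rough_terrain_traverse"]

def select_mode(prior, env_likelihood, robot_likelihood):
    """Posterior over modes is proportional to prior * P(environment obs | mode) *
    P(robot obs | mode); the mode with the highest posterior probability is selected."""
    post = (np.asarray(prior, float)
            * np.asarray(env_likelihood, float)
            * np.asarray(robot_likelihood, float))
    post /= post.sum()
    return MODES[int(np.argmax(post))], post

# Illustrative evidence: terrain classification and monitoring of the rover's own
# behaviour (slip, vibration, motor currents) both point towards the rough-terrain mode.
mode, posterior = select_mode(
    prior=[0.5, 0.3, 0.2],
    env_likelihood=[0.2, 0.3, 0.5],
    robot_likelihood=[0.1, 0.3, 0.6],
)
print(mode, posterior.round(3))
```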
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data.
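A minimal sketch of the sensor-frame-to-navigation-frame mapping the abstract analyzes: a range point is carried through the extrinsic calibration (sensor to body) and the vehicle pose (body to navigation). The roll-pitch-yaw convention, function names and numbers are assumptions for illustration, not the paper's error model.

```python
import numpy as np

def rot_rpy(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles in radians (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_point_to_nav(p_sensor, R_body_sensor, t_body_sensor, R_nav_body, t_nav_body):
    """Map a range measurement from the sensor frame into the fixed navigation frame:
    first through the extrinsic calibration (sensor -> body), then through the vehicle
    pose (body -> navigation). Errors in either transform, or in their timing, map
    directly into the error of the resulting map point."""
    p_body = R_body_sensor @ p_sensor + t_body_sensor
    return R_nav_body @ p_body + t_nav_body

# Example: a LIDAR return 10 m ahead of the sensor, with the sensor mounted 1.5 m
# above the body origin and a small calibrated mounting pitch.
p_nav = sensor_point_to_nav(
    p_sensor=np.array([10.0, 0.0, 0.0]),
    R_body_sensor=rot_rpy(0.0, np.deg2rad(5.0), 0.0),
    t_body_sensor=np.array([0.0, 0.0, 1.5]),
    R_nav_body=rot_rpy(0.0, 0.0, np.deg2rad(30.0)),   # vehicle heading 30 degrees
    t_nav_body=np.array([100.0, 50.0, 0.0]),          # vehicle position in the nav frame
)
print(p_nav.round(2))
```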
Abstract:
Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It has been concluded that the errors decrease after the PLS data reduction step, and that the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
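A generic sketch of the plain-predictor-versus-hybrid comparison on synthetic data, using scikit-learn; it does not reproduce the Tehran CO dataset, the paper's exact configuration, or its error metrics beyond RMSE and the coefficient of determination.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the predictor matrix (e.g. meteorological and pollutant
# variables) and the CO concentration target; the real study used Tehran data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
y = X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] ** 2 + rng.normal(scale=0.3, size=2000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Plain SVM predictor.
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))

# Hybrid predictor: PLS first compresses the predictors to a few latent components
# (the data selection / reduction step), then the SVM regresses on those components.
pls_svm = make_pipeline(StandardScaler(), PLSRegression(n_components=4),
                        SVR(kernel="rbf", C=10.0))

for name, model in [("SVM", svm), ("PLS-SVM", pls_svm)]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = float(np.sqrt(mean_squared_error(y_test, pred)))
    print(f"{name}: RMSE={rmse:.3f}, R2={r2_score(y_test, pred):.3f}")
```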