854 results for based inspection and conditional monitoring
Abstract:
Risk Based Inspection (RBI) is a risk methodology used as the basis for prioritizing and managing the efforts of an inspection program, allowing resources to be allocated so that physical assets with higher risk receive a higher level of coverage. The main goal of RBI is to increase equipment availability while improving or maintaining the accepted level of risk. This paper presents the concept of risk, risk analysis and the RBI methodology, and shows an approach to determine the optimal inspection frequency for physical assets based on the potential risk and, mainly, on the quantification of the probability of failure. It makes use of some assumptions in a structured decision-making process. The proposed methodology allows the optimization of inspection intervals, deciding when the first inspection must be performed as well as the subsequent inspection intervals. A demonstrative example is also presented to illustrate the application of the proposed methodology.
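For illustration only (this is not the paper's method), a minimal sketch of one common way to turn a quantified probability of failure into inspection timing: assume a Weibull failure model and schedule each inspection at the time when the conditional probability of failure since the previous inspection reaches an acceptable risk threshold. The Weibull assumption and all parameter values below are placeholders.

```python
import math

def next_inspection_time(t_last, beta, eta, p_max):
    """Return the time at which the conditional probability of failure,
    given survival to t_last, reaches p_max under a Weibull failure model:
    P(fail by t | survived to t_last) = 1 - exp((t_last/eta)**beta - (t/eta)**beta).
    Solving for t gives the closed form below."""
    h_last = (t_last / eta) ** beta          # cumulative hazard accrued so far
    h_allowed = -math.log(1.0 - p_max)       # extra hazard allowed before risk exceeds p_max
    return eta * (h_last + h_allowed) ** (1.0 / beta)

# Illustrative (assumed) parameters: beta > 1 models wear-out, eta in years.
beta, eta, p_max = 2.5, 12.0, 0.05
t, schedule = 0.0, []
for _ in range(5):
    t = next_inspection_time(t, beta, eta, p_max)
    schedule.append(round(t, 2))
print(schedule)  # intervals shrink as the asset ages (increasing hazard)
```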
Abstract:
Neurocritical care depends, in part, on careful patient monitoring, but as yet there are few data on which processes are the most important to monitor, how these should be monitored, and whether monitoring these processes is cost-effective and impacts outcome. At the same time, bioinformatics is a rapidly emerging field in critical care, but as yet there is little agreement or standardization on what information is important and how it should be displayed and analyzed. The Neurocritical Care Society, in collaboration with the European Society of Intensive Care Medicine, the Society of Critical Care Medicine, and the Latin America Brain Injury Consortium, organized an international, multidisciplinary consensus conference to begin to address these needs. International experts from neurosurgery, neurocritical care, neurology, critical care, neuroanesthesiology, nursing, pharmacy, and informatics were recruited on the basis of their research, publication record, and expertise. They undertook a systematic literature review to develop recommendations about specific topics on physiologic processes important to the care of patients with disorders that require neurocritical care. This review does not make recommendations about treatment, imaging, and intraoperative monitoring. A multidisciplinary jury, selected for their expertise in clinical investigation and development of practice guidelines, guided this process. The GRADE system was used to develop recommendations based on literature review, discussion, integration of the literature with the participants' collective experience, and critical review by an impartial jury. Emphasis was placed on the principle that recommendations should be based both on data quality and on trade-offs and translation into clinical practice. Strong consideration was given to providing pragmatic guidance and recommendations for bedside neuromonitoring, even in the absence of high-quality data.
Abstract:
The generation of an antigen-specific T-lymphocyte response is a complex multi-step process. Upon T-cell receptor-mediated recognition of antigen presented by activated dendritic cells, naive T-lymphocytes enter a program of proliferation and differentiation, during the course of which they acquire effector functions and may ultimately become memory T-cells. A major goal of modern immunology is to precisely identify and characterize effector and memory T-cell subpopulations that may be most efficient in disease protection. Sensitive methods are required to address these questions in exceedingly low numbers of antigen-specific lymphocytes recovered from clinical samples, and not manipulated in vitro. We have developed new techniques to dissect immune responses against viral or tumor antigens. These allow the isolation of various subsets of antigen-specific T-cells (with major histocompatibility complex [MHC]-peptide multimers and five-color FACS sorting) and the monitoring of gene expression in individual cells (by five-cell reverse transcription-polymerase chain reaction [RT-PCR]). We can also follow their proliferative life history by flow-fluorescence in situ hybridization (FISH) analysis of average telomere length. Recently, using these tools, we have identified subpopulations of CD8+ T-lymphocytes with distinct proliferative history and partial effector-like properties. Our data suggest that these subsets descend from recently activated T-cells and are committed to become differentiated effector T-lymphocytes.
Abstract:
PURPOSE: Few studies compare the variabilities that characterize environmental (EM) and biological monitoring (BM) data. Indeed, comparing their respective variabilities can help to identify the best strategy for evaluating occupational exposure. The objective of this study is to quantify the biological variability associated with 18 bio-indicators currently used in work environments. METHOD: Intra-individual (BV(intra)), inter-individual (BV(inter)), and total biological variability (BV(total)) were quantified using validated physiologically based toxicokinetic (PBTK) models coupled with Monte Carlo simulations. Two environmental exposure profiles with different levels of variability were considered (GSD of 1.5 and 2.0). RESULTS: PBTK models coupled with Monte Carlo simulations were successfully used to predict the biological variability of biological exposure indicators. The predicted values follow a lognormal distribution, characterized by GSD ranging from 1.1 to 2.3. Our results show that there is a link between biological variability and the half-life of bio-indicators, since BV(intra) and BV(total) both decrease as the biological indicator half-lives increase. BV(intra) is always lower than the variability in the air concentrations. On an individual basis, this means that the variability associated with the measurement of biological indicators is always lower than the variability characterizing airborne levels of contaminants. For a group of workers, BM is less variable than EM for bio-indicators with half-lives longer than 10-15 h. CONCLUSION: The variability data obtained in the present study can be useful in the development of BM strategies for exposure assessment and can be used to calculate the number of samples required for guiding industrial hygienists or medical doctors in decision-making.
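A heavily simplified stand-in (not the validated PBTK models used in the study) for how Monte Carlo simulation can translate variability in airborne exposure and in individual kinetics into the geometric standard deviation (GSD) of a bio-indicator: a one-compartment model with first-order elimination, daily 8 h exposures and end-of-shift sampling. All parameter values are assumptions; the qualitative point is that longer bio-indicator half-lives damp the day-to-day variability of air concentrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_biomarker_gsd(n_workers=500, n_days=50, air_gsd=2.0,
                           half_life_h=10.0, interindiv_gsd=1.3):
    """Return the total GSD of a simulated end-of-shift biomarker across
    workers and days (simplified one-compartment surrogate for a PBTK model)."""
    k = np.log(2) / half_life_h                                    # elimination rate (1/h)
    # inter-individual variability in elimination (lognormal around k)
    k_i = k * rng.lognormal(0.0, np.log(interindiv_gsd), n_workers)
    levels = np.zeros(n_workers)
    samples = []
    for _ in range(n_days):
        # day-to-day (intra-individual) variability of the air concentration
        c_air = rng.lognormal(0.0, np.log(air_gsd), n_workers)
        # overnight/weekend decay over 16 h, then uptake during an 8 h shift
        levels = levels * np.exp(-k_i * 16.0) + c_air * 8.0
        samples.append(levels.copy())
    samples = np.array(samples)[n_days // 2:]                      # discard burn-in days
    return np.exp(np.std(np.log(samples)))

print(simulate_biomarker_gsd(half_life_h=10.0))
print(simulate_biomarker_gsd(half_life_h=40.0))   # longer half-life -> lower GSD
```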
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that just as it makes sense to apply several performance measures ex-post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or, equivalently, the sequence of expected shortfalls for a range of quantiles. Because the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those obtained with virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. This counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
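As an illustrative sketch (not the thesis code) of the Chapter 2 dominance checks described above: a Kolmogorov-Smirnov test for distributional difference, a first-order dominance check on empirical CDFs, and a second-order check via pointwise comparison of absolute Lorenz curves (cumulative expected shortfalls over a range of quantiles). The return series are simulated placeholders.

```python
import numpy as np
from scipy import stats

def first_order_dominates(x, y):
    """x FOSD y if F_x(t) <= F_y(t) for all t (empirical CDFs)."""
    grid = np.sort(np.concatenate([x, y]))
    f_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    f_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return bool(np.all(f_x <= f_y))

def absolute_lorenz(x, quantiles):
    """Absolute Lorenz curve: q times the mean of the lowest q-fraction of returns."""
    xs = np.sort(x)
    n = len(xs)
    return np.array([xs[: max(1, int(np.ceil(q * n)))].mean() * q for q in quantiles])

def second_order_dominates(x, y, n_points=99):
    """x SOSD y if the absolute Lorenz curve of x lies (weakly) above that of y."""
    qs = np.linspace(0.01, 0.99, n_points)
    return bool(np.all(absolute_lorenz(x, qs) >= absolute_lorenz(y, qs)))

# Hypothetical daily returns of two strategies (for illustration only).
rng = np.random.default_rng(1)
r_aggregated = rng.normal(0.0006, 0.009, 1000)
r_single     = rng.normal(0.0004, 0.012, 1000)

print(stats.ks_2samp(r_aggregated, r_single))        # are the distributions different?
print(first_order_dominates(r_aggregated, r_single))
print(second_order_dominates(r_aggregated, r_single))
```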
Abstract:
A comprehensive field detection method is proposed that is aimed at developing advanced capability for reliable monitoring, inspection and life estimation of bridge infrastructure. The goal is to use Motion-Sensing Radio Transponders (RFIDs) for fully adaptive bridge monitoring, minimizing the problems inherent in human inspections of bridges. We developed a novel integrated condition-based maintenance (CBM) framework that combines transformative research in RFID sensors and sensing architecture for in-situ scour monitoring with state-of-the-art, computationally efficient multiscale modeling for scour assessment.
Abstract:
The Internet is becoming more and more popular among drug users. The use of websites and forums to obtain illicit drugs and relevant information about the means of consumption is a growing phenomenon, mainly for new synthetic drugs. Gamma-butyrolactone (GBL), a chemical precursor of gamma-hydroxybutyric acid (GHB), is used as a "club drug" and also in drug-facilitated sexual assaults. Its market takes place mainly on the Internet through online websites, but the structure of the market remains unknown. This research aims to combine digital, physical and chemical information to help understand the distribution routes and the structure of the GBL market. Based on an Internet monitoring process, thirty-nine websites selling GBL, mainly in the Netherlands, were detected between January 2010 and December 2011. Seventeen websites were categorized into six groups based on digital traces (e.g. IP addresses and contact information). In parallel, twenty-five bulk GBL specimens were purchased from sixteen websites for packaging comparisons and carbon isotopic measurements. Packaging information showed a high correlation with digital data, confirming the links previously established, whereas chemical information revealed undetected links and provided complementary information. Indeed, while digital and packaging data give relevant information about the retailers, the supply routes and the distribution close to the consumer, the carbon isotopic data provide upstream information about the production level, in particular the synthesis pathways and the chemical precursors. A three-level structured market has thereby been identified, with a production level located mainly in China and Germany, an online distribution level hosted mainly in the Netherlands, and the customers who order on the Internet.
Abstract:
The signalling function of melanin-based colouration is debated. Sexual selection theory states that ornaments should be costly to produce, maintain, wear or display to signal quality honestly to potential mates or competitors. An increasing number of studies supports the hypothesis that the degree of melanism covaries with aspects of body condition (e.g. body mass or immunity), which has contributed to changing the initial perception that melanin-based colour ornaments entail no costs. Indeed, the expression of many (but not all) melanin-based colour traits is weakly sensitive to the environment but strongly heritable, suggesting that these colour traits are relatively cheap to produce and maintain, thus raising the question of how such colour traits could signal quality honestly. Here I review the production, maintenance and wearing/displaying costs that can generate a correlation between melanin-based colouration and body condition, and consider other evolutionary mechanisms that can also lead to covariation between colour and body condition. Because genes controlling melanic traits can affect numerous phenotypic traits, pleiotropy could also explain a linkage between body condition and colouration. Pleiotropy may result in differently coloured individuals signalling different aspects of quality that are maintained by frequency-dependent selection or local adaptation. Colouration may therefore not signal absolute quality to potential mates or competitors (e.g. dark males may not achieve a higher fitness than pale males); otherwise genetic variation would be rapidly depleted by directional selection. As a consequence, selection on heritable melanin-based colouration may not always be directional, but mate choice may be conditional on environmental conditions (i.e. context-dependent sexual selection). Despite the interest of evolutionary biologists in the adaptive value of melanin-based colouration, its actual role in sexual selection is still poorly understood.
Abstract:
Condition monitoring systems for physical assets are becoming more and more common in the industrial sector. At the same time, an increasing portion of asset monitoring systems are being supported remotely. As global competitors are actively developing solutions for condition monitoring and the condition-based maintenance it enables, Wärtsilä too feels the pressure to provide customers with more sophisticated condition-based maintenance solutions. The main aim of this thesis is to consider Wärtsilä remote condition monitoring solutions and how they relate to similar solutions from other suppliers and to end customers’ needs, in the context of offshore assets. A theoretical study is also included in the thesis, in which the concepts of condition monitoring, condition-based maintenance, maintenance management and physical asset management are introduced.
Abstract:
Fluid handling systems such as pump and fan systems are found to have significant potential for energy efficiency improvements. To realize this energy-saving potential, easily implementable methods are needed to monitor the system output, because this information is required both to identify inefficient operation of the fluid handling system and to control the output of the pumping system according to process needs. Model-based pump or fan monitoring methods implemented in variable-speed drives have proven able to give information on the system output without additional metering; however, the current model-based methods may not be usable or sufficiently accurate over the whole operating range of the fluid handling device. To apply model-based system monitoring to a wider selection of systems and to improve the accuracy of the monitoring, this paper proposes a new method for pump and fan output monitoring with variable-speed drives. The method uses a combination of already known operating point estimation methods. Laboratory measurements are used to verify the benefits and applicability of the improved estimation method, and the new method is compared with five previously introduced model-based estimation methods. According to the laboratory measurements, the new estimation method is the most accurate and reliable of the model-based estimation methods.
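The abstract does not specify the new method, but the following sketch illustrates one of the previously known model-based approaches it combines: QP-curve operating point estimation in a variable-speed drive, where the nominal pump curves are scaled to the measured speed with the affinity laws and the power curve is inverted to estimate flow and head. The curve data and measurements below are invented for illustration.

```python
import numpy as np

# Nominal pump curves at rated speed n0 (assumed example data, not from the paper):
n0 = 1450.0                                    # rated speed (rpm)
q0 = np.array([0, 10, 20, 30, 40, 50])         # flow rate (l/s)
h0 = np.array([32, 31, 29, 26, 21, 14])        # head (m)
p0 = np.array([4.0, 5.5, 7.0, 8.3, 9.2, 9.8])  # shaft power (kW)

def estimate_operating_point(n_meas, p_meas):
    """QP-curve estimation: scale the nominal curves to the measured speed
    with the affinity laws (Q ~ n, H ~ n^2, P ~ n^3), then invert the power
    curve to find the flow that matches the measured shaft power."""
    r = n_meas / n0
    q_n, h_n, p_n = q0 * r, h0 * r**2, p0 * r**3   # curves at the measured speed
    q_est = np.interp(p_meas, p_n, q_n)            # invert P(Q); assumes monotone P(Q)
    h_est = np.interp(q_est, q_n, h_n)
    return q_est, h_est

# Example: the drive reports 1200 rpm and 5.1 kW shaft power.
q, h = estimate_operating_point(1200.0, 5.1)
print(f"estimated flow {q:.1f} l/s, head {h:.1f} m")
```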
Abstract:
The dependence of much of Africa on rain-fed agriculture leads to a high vulnerability to fluctuations in rainfall amount. Hence, accurate monitoring of near-real-time rainfall is particularly useful, for example in forewarning of possible crop shortfalls in drought-prone areas. Unfortunately, ground-based observations are often inadequate. Rainfall estimates from satellite-based algorithms and numerical model outputs can fill this data gap; however, rigorous assessment of such estimates is required. In this case, three satellite-based products (NOAA-RFE 2.0, GPCP-1DD and TAMSAT) and two numerical model outputs (ERA-40 and ERA-Interim) have been evaluated for Uganda in East Africa using a network of 27 rain gauges. The study focuses on the years 2001 to 2005 and considers the main rainy season (February to June). All data sets were converted to the same temporal and spatial scales. Kriging was used for the spatial interpolation of the gauge data. All three satellite products showed similar characteristics and had a high level of skill that exceeded that of both model outputs. ERA-Interim had a tendency to overestimate, whilst ERA-40 consistently underestimated, the Ugandan rainfall.
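A minimal sketch, with invented numbers, of the kind of point validation used when comparing gridded rainfall estimates against (kriged) gauge observations: bias, mean absolute error, RMSE and correlation over co-located values. The variable names and data are placeholders, not values from the study.

```python
import numpy as np

def skill_scores(estimate, gauge):
    """Common validation statistics for co-located rainfall totals (mm)."""
    estimate, gauge = np.asarray(estimate, float), np.asarray(gauge, float)
    diff = estimate - gauge
    return {
        "bias": diff.mean(),                          # mean error
        "mae": np.abs(diff).mean(),                   # mean absolute error
        "rmse": np.sqrt((diff ** 2).mean()),          # root-mean-square error
        "corr": np.corrcoef(estimate, gauge)[0, 1],   # Pearson correlation
    }

# Purely illustrative dekadal totals (mm), not data from the study.
gauge     = np.array([12.0, 45.0, 80.0, 5.0, 33.0, 60.0])
satellite = np.array([10.0, 50.0, 72.0, 8.0, 30.0, 66.0])
model     = np.array([25.0, 60.0, 95.0, 15.0, 50.0, 70.0])  # tends to overestimate

print("satellite:", skill_scores(satellite, gauge))
print("model:    ", skill_scores(model, gauge))
```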
Abstract:
Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice.
Objectives: We sought to:
• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and the actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects; and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.
Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends, undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention.
Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38 to 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58 to 0.91), or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34 to 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker’s ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than to the intervention.
Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.
Abstract:
Objective To determine the prevalence and nature of prescribing and monitoring errors in general practices in England. Design Retrospective case note review of unique medication items prescribed over a 12 month period to a 2% random sample of patients. Mixed effects logistic regression was used to analyse the data. Setting Fifteen general practices across three primary care trusts in England. Data sources Examination of 6048 unique prescription items prescribed over the previous 12 months for 1777 patients. Main outcome measures Prevalence of prescribing and monitoring errors, and severity of errors, using validated definitions. Results Prescribing and/or monitoring errors were detected in 4.9% (296/6048) of all prescription items (95% confidence interval 4.4 - 5.5%). The vast majority of errors were of mild to moderate severity, with 0.2% (11/6048) of items having a severe error. After adjusting for covariates, patient-related factors associated with an increased risk of prescribing and/or monitoring errors were: age less than 15 (Odds Ratio (OR) 1.87, 1.19 to 2.94, p=0.006) or greater than 64 years (OR 1.68, 1.04 to 2.73, p=0.035), and higher numbers of unique medication items prescribed (OR 1.16, 1.12 to 1.19, p<0.001). Conclusion Prescribing and monitoring errors are common in English general practice, although severe errors are unusual. Many factors increase the risk of error. Having identified the most common and important errors, and the factors associated with these, strategies to prevent future errors should be developed based on the study findings.
Abstract:
Wireless video sensor networks have been a hot topic in recent years. Monitoring is the central capability of the services they offer, and these services can be classified into three major categories: monitoring, alerting, and information on demand. These features have been applied to a large number of applications related to the environment (agriculture, water, forest and fire detection), the military, buildings, health (elderly people and home monitoring), disaster relief, and area and industrial monitoring. Security applications oriented toward critical infrastructures and disaster relief are very important applications that many countries have identified as critical for the near future. This paper aims to design a cross-layer based protocol to provide the required quality of service for security-related applications using wireless video sensor networks. Energy saving, delay and reliability for the delivered data are crucial in the proposed application. Simulation results show that the proposed cross-layer based protocol offers good performance in terms of providing the required quality of service for the proposed application.
Abstract:
1. Bee populations and other pollinators face multiple, synergistically acting threats, which have led to population declines, loss of local species richness and pollination services, and extinctions. However, our understanding of the degree, distribution and causes of declines is patchy, in part due to inadequate monitoring systems, with the challenge of taxonomic identification posing a major logistical barrier. Pollinator conservation would benefit from a high-throughput identification pipeline. 2. We show that the metagenomic mining and resequencing of mitochondrial genomes (mitogenomics) can be applied successfully to bulk samples of wild bees. We assembled the mitogenomes of 48 UK bee species and then shotgun-sequenced total DNA extracted from 204 whole bees that had been collected in 10 pan-trap samples from farms in England and been identified morphologically to 33 species. Each sample data set was mapped against the 48 reference mitogenomes. 3. The morphological and mitogenomic data sets were highly congruent. Out of 63 total species detections in the morphological data set, the mitogenomic data set made 59 correct detections (93.7% detection rate) and detected six more species (putative false positives). Direct inspection and an analysis with species-specific primers suggested that these putative false positives were most likely due to incorrect morphological IDs. Read frequency significantly predicted species biomass frequency (R2 = 24.9%). Species lists, biomass frequencies, extrapolated species richness and community structure were recovered with less error than in a metabarcoding pipeline. 4. Mitogenomics automates the onerous task of taxonomic identification, even for cryptic species, allowing the tracking of changes in species richness and distributions. A mitogenomic pipeline should thus be able to contain costs, maintain consistently high-quality data over long time series, incorporate retrospective taxonomic revisions and provide an auditable evidence trail. Mitogenomic data sets also provide estimates of species counts within samples and thus have potential for tracking population trajectories.
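A toy sketch of the downstream end of such a mitogenomic pipeline, assuming read counts per reference mitogenome have already been produced upstream by a read mapper: apply a detection threshold, list the species present in a bulk sample, and test how well read frequency predicts biomass frequency. The species names, counts and threshold are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical read counts per reference mitogenome for one bulk (pan-trap) sample.
read_counts = {"Bombus terrestris": 51240, "Andrena flavipes": 8730,
               "Lasioglossum calceatum": 310, "Apis mellifera": 12}
biomass_mg  = {"Bombus terrestris": 620.0, "Andrena flavipes": 95.0,
               "Lasioglossum calceatum": 4.0}

min_reads = 100   # assumed detection threshold to filter stray or mis-mapped reads
detected = {sp: n for sp, n in read_counts.items() if n >= min_reads}
print("detected species:", sorted(detected))

# Does read frequency predict biomass frequency?  Simple least-squares fit on the
# species present in both tables (illustrative stand-in for the reported R^2).
common = sorted(set(detected) & set(biomass_mg))
read_freq    = np.array([detected[sp] for sp in common], float)
biomass_freq = np.array([biomass_mg[sp] for sp in common], float)
read_freq    /= read_freq.sum()
biomass_freq /= biomass_freq.sum()
slope, intercept = np.polyfit(read_freq, biomass_freq, 1)
pred = slope * read_freq + intercept
r2 = 1 - ((biomass_freq - pred) ** 2).sum() / ((biomass_freq - biomass_freq.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")
```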