939 results for process parameter monitoring


Relevance: 30.00%

Abstract:

Introduction: Acute hemodynamic instability increases morbidity and mortality. We investigated whether early non-invasive cardiac output monitoring enhances hemodynamic stabilization and improves outcome.

Methods: A multicenter, randomized controlled trial was conducted in three European university hospital intensive care units in 2006 and 2007. A total of 388 hemodynamically unstable patients identified during their first six hours in the intensive care unit (ICU) were randomized to receive either non-invasive cardiac output monitoring for 24 hours (minimally invasive cardiac output/MICO group; n = 201) or usual care (control group; n = 187). The main outcome measure was the proportion of patients achieving hemodynamic stability within six hours of starting the study.

Results: The number of hemodynamic instability criteria at baseline (MICO group mean 2.0 (SD 1.0), control group 1.8 (1.0); P = .06) and severity of illness (SAPS II score; MICO group 48 (18), control group 48 (15); P = .86) were similar. At six hours, 45 patients (22%) in the MICO group and 52 patients (28%) in the control group were hemodynamically stable (mean difference 5%; 95% confidence interval of the difference -3 to 14%; P = .24). Hemodynamic support with fluids and vasoactive drugs, and pulmonary artery catheter use (MICO group: 19%, control group: 26%; P = .11), were similar in the two groups. The median length of ICU stay was 2.0 (interquartile range 1.2 to 4.6) days in the MICO group and 2.5 (1.1 to 5.0) days in the control group (P = .38). Hospital mortality was 26% in the MICO group and 21% in the control group (P = .34).

Conclusions: Minimally invasive cardiac output monitoring added to usual care does not facilitate early hemodynamic stabilization in the ICU, nor does it alter hemodynamic support or outcome. Our results emphasize the need to evaluate technologies used to measure stroke volume and cardiac output, especially their impact on the process of care, before any large-scale outcome studies are attempted.
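
The reported effect size can be reproduced from the counts quoted above. As a sanity check, here is a minimal Python sketch (an illustration only, not the trial's analysis code) computing the difference in stabilization rates and its Wald 95% confidence interval:

```python
from math import sqrt

# Counts reported in the abstract.
stable_mico, n_mico = 45, 201
stable_ctrl, n_ctrl = 52, 187

p1 = stable_mico / n_mico          # ~0.224
p2 = stable_ctrl / n_ctrl          # ~0.278
diff = p2 - p1                     # ~0.054, i.e. the reported 5% difference

# Wald 95% CI for a difference of two independent proportions.
se = sqrt(p1 * (1 - p1) / n_mico + p2 * (1 - p2) / n_ctrl)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")
# -> difference = 5.4%, 95% CI = (-3.2%, 14.0%), matching the -3 to 14% above
```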

Relevance: 30.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
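
The transport-delay and sensor-lag processing mentioned above is a standard signal-alignment problem. Below is a minimal, hypothetical Python sketch of one common approach, removing transport delay via cross-correlation against an engine reference signal and inverting a first-order sensor lag; the function names, time constant, and sampling assumptions are illustrative, not the authors' implementation:

```python
import numpy as np

def align_and_compensate(engine_ref, analyzer, dt, tau):
    # Signals assumed equal length and uniformly sampled with step dt.
    # 1) Transport delay: lag (in samples) that maximizes cross-correlation.
    ref = engine_ref - engine_ref.mean()
    sig = analyzer - analyzer.mean()
    corr = np.correlate(sig, ref, mode="full")
    delay = corr.argmax() - (len(ref) - 1)
    shifted = np.roll(analyzer, -delay)   # wrap-around at the ends ignored here
    # 2) First-order sensor lag inversion: x ≈ y + tau * dy/dt.
    dydt = np.gradient(shifted, dt)
    return shifted + tau * dydt

# Hypothetical usage:
# nox_corrected = align_and_compensate(fuel_cmd, nox_raw, dt=0.1, tau=0.9)
```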

Relevance: 30.00%

Abstract:

Although sustainable land management (SLM) is widely promoted to prevent and mitigate land degradation and desertification, its monitoring and assessment (M&A) has received much less attention. This paper compiles methodological approaches that to date have been little reported in the literature. It draws lessons from these experiences and identifies common elements and future pathways as a basis for a global approach. The paper starts with local-level methods, where the World Overview of Conservation Approaches and Technologies (WOCAT) framework catalogues SLM case studies. This tool has been included in the local-level assessment of Land Degradation Assessment in Drylands (LADA) and in the EU DESIRE project. Complementary site-based approaches can enhance an ecological, process-based understanding of SLM variation. At national and sub-national levels, a joint WOCAT/LADA/DESIRE spatial assessment based on land use systems identifies the status and trends of degradation and SLM, including causes, drivers and impacts on ecosystem services. Expert consultation is combined with scientific evidence and enhanced where necessary with secondary data and indicator databases. At the global level, the Global Environment Facility (GEF) Knowledge from the Land (KM:Land) initiative uses indicators to demonstrate the impacts of SLM investments. Key lessons learnt include the need for a multi-scale approach, making use of common indicators and a variety of information sources, including scientific data and local knowledge gathered through participatory methods. Methodological consistency allows cross-scale analyses, and findings are analysed and documented for use by decision-makers at various levels. Effective M&A of SLM [e.g. for the United Nations Convention to Combat Desertification (UNCCD)] requires a comprehensive methodological framework agreed by the major players.

Relevance: 30.00%

Abstract:

The vibrational excitation of CO2 by a fast-moving O atom, followed by infrared emission from the vibrationally excited CO2, has been shown to be an important cooling mechanism in the upper atmospheres of Venus, Earth, and Mars. We are trying to determine more precisely the efficiency (rate coefficient) of the CO2-O vibrational energy transfer. For experimental ease the reverse reaction is used, i.e. collision of vibrationally excited CO2 with atomic O, from which we can convert to the atmospherically relevant reaction via a known equilibrium constant. The goal of this experiment was to measure the magnitudes of rate coefficients for vibrational energy states above the first excited state, a bending mode in CO2. An isotope of CO2, 13CO2, was used for experimental ease; the rate coefficients for given vibrational energy transfers in 13CO2 are not significantly different from those in 12CO2 at this level of precision. A slow-flowing gas mixture was passed through a reaction cell: 13CO2 (the vibrational species of interest), O3 (the atomic O source), and Ar (the bath gas). Transient diode laser absorption spectroscopy was used to monitor the changing absorption of certain vibrational modes of 13CO2 after a UV pulse from a Nd:YAG laser was fired. Ozone absorbed the UV pulse in a process which vibrationally excited 13CO2 and liberated atomic O. Transient absorption signals were obtained by tuning the diode laser frequency to an appropriate ν3 transition and monitoring the population as a function of time following the Nd:YAG pulse. Transient absorption curves were obtained for various O atom concentrations to determine the rate coefficient of interest. The rotational states of the transitions used for detection were difficult to identify, though their short re-equilibration timescale made the identification irrelevant for vibrational energy transfer measurements. The rate coefficient for quenching of the (1000) state was found to be (4 ± 8) × 10⁻¹² cm³ s⁻¹, which is of the same order of magnitude as that of the lowest-energy bend-excited mode, (1.8 ± 0.3) × 10⁻¹² cm³ s⁻¹. More data are needed before it can be established that the numerical difference between the two is real.
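
The rate-coefficient extraction described above follows standard pseudo-first-order kinetics: each transient decay yields an observed rate k_obs, and the slope of k_obs versus [O] gives the bimolecular rate coefficient. A minimal Python sketch of this two-stage fit (with a hypothetical data layout, not the authors' code):

```python
import numpy as np

# For each O-atom concentration the excited-state signal decays roughly as
# S(t) = S0 * exp(-k_obs * t), with k_obs = k_quench * [O] + k_other.

def fit_kobs(t, signal):
    # Linear fit of ln(S) vs t; the slope is -k_obs.
    slope, _ = np.polyfit(t, np.log(signal), 1)
    return -slope

def fit_rate_coefficient(o_atom_conc, kobs_values):
    # Slope of k_obs vs [O] is the bimolecular rate coefficient (cm^3 s^-1);
    # the intercept collects O-independent loss processes.
    k_quench, k_other = np.polyfit(o_atom_conc, kobs_values, 1)
    return k_quench
```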

Relevance: 30.00%

Abstract:

In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

Relevance: 30.00%

Abstract:

Most butterfly monitoring protocols rely on counts along transects (Pollard walks) to generate species abundance indices and track population trends. It is still too often ignored that a population count results from two processes: the biological process (true abundance) and the statistical process (our ability to properly quantify abundance). Because individual detectability tends to vary in space (e.g., among sites) and time (e.g., among years), it remains unclear whether index counts truly reflect population sizes and trends. This study compares capture-mark-recapture (absolute abundance) and count-index (relative abundance) monitoring methods in three species (Maculinea nausithous and Iolana iolas: Lycaenidae; Minois dryas: Satyridae) in contrasting habitat types. We demonstrate that intraspecific variability in individual detectability under standard monitoring conditions is probably the rule rather than the exception, which calls into question the reliability of count-based indices for estimating and comparing population abundance. Our results suggest that the accuracy of count-based methods depends heavily on the ecology and behavior of the target species, as well as on the type of habitat in which surveys take place. Monitoring programs designed to assess the abundance and trends of butterfly populations should incorporate a measure of detectability. We discuss the relative advantages and drawbacks of current monitoring methods and analytical approaches with respect to the characteristics of the species under scrutiny and the resources available.
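
The count-versus-detectability point can be made concrete with a toy simulation: if an observed count is a binomial draw from the true abundance with detection probability p, sites with identical true abundance but different p yield systematically different indices. A short illustrative Python sketch (invented numbers, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

n_true = 200                                 # same true abundance at both sites
p_detect = {"site_A": 0.5, "site_B": 0.2}    # hypothetical detection probabilities

for site, p in p_detect.items():
    counts = rng.binomial(n_true, p, size=1000)   # repeated transect counts
    print(site, "mean index count:", counts.mean())
# site_A ~100, site_B ~40: the raw indices suggest a 2.5-fold abundance
# difference that does not exist; estimating p (e.g., via capture-mark-
# recapture) removes this bias.
```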

Relevance: 30.00%

Abstract:

Along a downstream stretch of the River Mureş, Romania, adult males of two feral fish species, European chub (Leuciscus cephalus) and sneep (Chondrostoma nasus), were sampled at four sites with different levels of contamination. Fish were analysed for the biochemical markers hsp70 (in liver and gills) and hepatic EROD activity, as well as several biometrical parameters (age, length, wet weight, condition factor). None of the biochemical markers correlated with any biometrical parameter; thus, biomarker reactions were related to site-specific criteria. While the hepatic hsp70 level did not differ among the sites, significant elevation of the hsp70 level in the gills revealed proteotoxic damage in chub at the most upstream site, where we recorded the highest heavy metal contamination of the investigated stretch, and in both chub and sneep at the site directly downstream of the city of Arad. In both species, significantly elevated hepatic EROD activity downstream of Arad indicated that fish from these sites are also exposed to organic chemicals. The results were indicative of impaired fish health at a minimum of three of the four investigated sites. The approach of relating biomarker responses to analytical data on pollution was shown to fit well with recent EU demands for further enhanced efforts in the monitoring of Romanian water quality.

Relevance: 30.00%

Abstract:

This report details the outcomes of a study investigating the piezoelectric properties of Portland cement paste for possible applications in structural health monitoring. Specifically, the study examines how applying an electric field during the curing process affects the piezoelectric properties of hardened cement paste. As part of the reporting of this study, the state of the art in structural health monitoring is reviewed. It is demonstrated that applying an electric field through a spatially coarse array of electrodes while curing cement paste was not effective in increasing the magnitude of the piezoelectric coupling, but did improve the repeatability of the piezoelectric response of the hardened material.

Relevance: 30.00%

Abstract:

The selective catalytic reduction (SCR) system is a well-established technology for NOx emissions control in diesel engines. A one-dimensional, single-channel SCR model was previously developed using reactor data generated at Oak Ridge National Laboratory (ORNL) for an iron-zeolite catalyst system. Calibration of this model to fit the experimental reactor data collected at ORNL for a copper-zeolite SCR catalyst is presented. Initially, a test protocol was developed to investigate the different phenomena responsible for the SCR system response. An SCR model with two distinct types of storage sites was used. The calibration process started with storage capacity calculations for the catalyst sample; the chemical kinetics occurring in each segment of the protocol were then investigated. The reactions included in this model were adsorption, desorption, standard SCR, fast SCR, slow SCR, NH3 oxidation, NO oxidation, and N2O formation. The reaction rates were identified for each temperature using a time-domain optimization approach. Assuming an Arrhenius form for the reaction rates, activation energies and pre-exponential parameters were fit to the identified rates. The results indicate that the Arrhenius form is appropriate and that the reaction scheme allows the model to fit the experimental data, making it suitable for use in real-world engine studies.
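
The Arrhenius fitting step described above reduces to a linear regression of ln k against 1/T, since ln k = ln A - Ea/(R·T). A minimal Python sketch with placeholder values (not the ORNL data):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Rate constants identified at a few catalyst temperatures (placeholders).
T = np.array([473.0, 523.0, 573.0, 623.0])   # K
k = np.array([0.8, 3.1, 9.5, 24.0])          # identified rate constants

# Linear fit of ln k vs 1/T: slope = -Ea/R, intercept = ln A.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor
print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g}")
```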

Relevance: 30.00%

Abstract:

Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have attracted great technological expectations and achieved outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has grown exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies for massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g., offshore wind farms). In this regard, safe operation requires not only first-hand information on actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting the actual dynamic states (eigen-realizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on Subspace Realization Theory and adapted for stochastic, non-stationary, and time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components of which the wind turbine is made. In the long run, both the aerodynamic framework (theoretical model) and system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, known as the Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed by a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts coupled wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
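
For reference, the MAC metric at the core of the updating engine compares two mode shapes and equals 1 when they are collinear and 0 when they are orthogonal. A minimal Python sketch of the criterion (illustrative, not the thesis code):

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> float:
    # MAC = |phi_a^H phi_b|^2 / ((phi_a^H phi_a) * (phi_b^H phi_b));
    # np.vdot conjugates its first argument, so complex modes are handled.
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return float(num / den)

# In the updating loop sketched above, the ASA search would adjust SFE model
# parameters to maximize MAC between identified (experimental) and predicted
# (analytical) mode shapes.
```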

Relevance: 30.00%

Abstract:

BACKGROUND: Complete investigation of thrombophilic or hemorrhagic clinical presentations is a time-, apparatus-, and cost-intensive process. Sensitive screening tests for characterizing the overall function of the hemostatic system, or defined parts of it, would be very useful. For this purpose, we are developing an electrochemical biosensor system that allows measurement of thrombin generation in whole blood as well as in plasma.

METHODS: The measuring system consists of a single-use electrochemical sensor in the shape of a strip and a measuring unit connected to a personal computer, which records the electrical signal. Blood is added to a specific reagent mixture immobilized in dry form on the strip, including a coagulation activator (e.g., tissue factor or silica) and an electrogenic substrate specific to thrombin.

RESULTS: Increasing thrombin concentrations gave standard curves with progressively increasing maximal current and decreasing time to reach the peak. Because the measurement was unaffected by color or turbidity, any type of blood sample could be analyzed: platelet-poor plasma, platelet-rich plasma, and whole blood. The test strips with the predried reagents were stable when stored for several months before testing. Analysis of the combined results obtained with different activators allowed discrimination between defects of the extrinsic, intrinsic, and common coagulation pathways. Activated protein C (APC) predried on the strips allowed identification of APC resistance in plasma and whole blood samples.

CONCLUSIONS: The biosensor system provides a new method for assessing thrombin generation in plasma or whole blood samples as small as 10 microliters. The assay is easy to use, allowing it to be performed in a point-of-care setting.
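
The two calibration features named in the results, maximal current and time to peak, are straightforward to extract from a sampled current trace. A minimal, hypothetical Python sketch (not the device firmware):

```python
import numpy as np

def peak_features(t: np.ndarray, current: np.ndarray):
    # Return (i_max, t_peak) from a sampled amperometric trace.
    idx = int(np.argmax(current))
    return current[idx], t[idx]

# Higher thrombin concentrations give a larger i_max and a smaller t_peak,
# so either feature can anchor a standard curve.
```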

Relevance: 30.00%

Abstract:

Meat and meat products can be contaminated with different species of bacteria resistant to various antimicrobials. The human health risk posed by a type of meat or meat product with respect to emerging antimicrobial resistance depends on (i) the prevalence of contamination with resistant bacteria, (ii) the human health consequences of an infection with a specific bacterium resistant to a specific antimicrobial, and (iii) the consumption volume of the specific product. The objective of this study was to compare the risk to consumers arising from their exposure to antibiotic-resistant bacteria from meat of four different types (chicken, pork, beef and veal), distributed across four different product categories (fresh meat, frozen meat, dried raw meat products and heat-treated meat products). A semi-quantitative risk assessment model evaluating each food chain step was built to obtain an estimated score for the prevalence of Campylobacter spp., Enterococcus spp. and Escherichia coli in each product category. To assess human health impact, nine combinations of bacterial species and antimicrobial agents were considered, based on a published risk profile. The combination of the prevalence at retail, the human health impact and the amount of meat or product consumed provided the relative proportion of total risk attributed to each category of product, resulting in a high, medium or low human health risk. According to the results of the model, chicken (mostly fresh and frozen meat) contributed 6.7% of the overall risk in the highest category and pork (mostly fresh meat and dried raw meat products) contributed 4.0%. The contributions of beef and veal were 0.4% and 0.1%, respectively. The sensitivity of the results to single-parameter changes in the model was tested and discussed. This risk assessment proved a useful tool for targeting antimicrobial resistance monitoring at those meat product categories where the expected risk to public health is greatest.
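
The combination step described above (prevalence score times health impact times consumption volume, normalized across categories) can be sketched in a few lines of Python. The scores below are invented placeholders, not the study's inputs:

```python
# (prevalence score, human-health impact score, consumption volume share)
categories = {
    "chicken, fresh":  (0.8, 0.9, 0.30),
    "pork, dried raw": (0.5, 0.7, 0.10),
    "beef, frozen":    (0.2, 0.6, 0.15),
}

# Relative risk share per category: product of the three factors, normalized.
raw = {k: p * h * c for k, (p, h, c) in categories.items()}
total = sum(raw.values())
for k, v in raw.items():
    print(f"{k}: {100 * v / total:.1f}% of modeled risk")
```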

Relevance: 30.00%

Abstract:

In the context of expensive numerical experiments, a promising solution for alleviating computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge is the adequate approximation of the error due to partial convergence, which is correlated in both the design-variable and computational-time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
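
The joint-space construction can be sketched compactly: train a GP on points (x, t), where t is the convergence time, and predict at large t to approximate the converged response. The Python sketch below uses a simple stationary product kernel purely for illustration; the paper's contribution is a nonstationary kernel reflecting the actual error structure:

```python
import numpy as np

def kernel(X1, X2, ls_x=0.3, ls_t=0.5, var=1.0):
    # Squared-exponential product kernel over (design x, convergence time t);
    # X1, X2 have shape (n, 2) with columns [x, t].
    dx = (X1[:, None, 0] - X2[None, :, 0]) / ls_x
    dt = (X1[:, None, 1] - X2[None, :, 1]) / ls_t
    return var * np.exp(-0.5 * (dx**2 + dt**2))

def gp_predict(X_train, y, X_test, noise=1e-6):
    # Posterior mean of a zero-mean GP at the test points.
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y)
    return kernel(X_test, X_train) @ alpha

# Predictions at large t approximate the fully converged response using
# cheap, partially converged training runs at smaller t.
```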

Relevance: 30.00%

Abstract:

Intensive Family Preservation Services seek to reflect the values of focusing on client strengths and viewing clients as colleagues. To promote those values, Intensive Family Preservation Programs should include a systematic form of client self-monitoring in their packages of outcome measures. This paper presents a model of idiographic self-monitoring used in a time-series, single-system research design developed for Family Partners, a family preservation program of the School for Contemporary Education in Annandale, Virginia. The evaluation model provides a means of empowering client families to utilize their strengths and to promote their status as colleagues in determining their own goals, participating in the change process, and measuring their own progress.

Relevance: 30.00%

Abstract:

A process evaluation of the Houston Childhood Lead Poisoning Prevention Program, 1992-1995, was conducted. The Program's goal is to reduce the prevalence of lead poisoning. The study set out to determine to what extent the Program was implemented as planned by measuring how well Program services were actually: (1) received by the intended target population; (2) delivered to children with elevated blood lead levels; (3) delivered in compliance with the Centers for Disease Control and Prevention and Program guidelines and timetables; and (4) able to reduce lead poisoning prevalence among those rescreened. Using a program-monitoring design, the study reviewed the Program's pre-collected computer records. The study sample consisted of 820 children whose blood lead levels were above 15 micrograms per deciliter, representing approximately 2.9% of the 28,406 children screened over this period. Three blood lead levels from each participant were examined: the initial elevated result; the confirmatory result; and the next rescreen result after the elevated confirmatory level. Results showed that the Program screened approximately 18% (28,406 of 161,569) of Houston's children under age 6 years for lead poisoning. Based on chi-square tests of significance, results also showed that lead-poisoned participants were more likely to be younger than 3 years, male, and Hispanic, compared with those not lead-poisoned; the age, gender, and ethnic differences observed were statistically significant (p = .01, p = .00, p = .00). Four of the six Program services (medical evaluations, rescreening, environmental inspections, and confirmation) had satisfactory delivery completion rates of 71%-98%. Delivery timetable compliance rates for three of the six services examined (outreach contacts, home visits, and environmental inspections) were below 32%. However, dangerously elevated blood lead levels fell, and lead poisoning prevalence dropped from 3.3% at initial screening to 1.2% among those rescreened after intervention. From a public health perspective, reductions in lead poisoning prevalence are very meaningful. Based on these findings, the following recommendations are made for future research: (1) integrate Program database files by utilizing a database management program; (2) target services at Hispanic male children under age 3 years living in the highest-risk neighborhoods; (3) increase resources to improve tracking and documentation of service delivery and to provide more non-medical case management and environmental services; and (4) share the evaluation methodology and findings with Centers for Disease Control and Prevention administrators, as the implications may be relevant to other program managers conducting such assessments.
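
The chi-square comparisons reported above can be illustrated with scipy. The contingency table below keeps the abstract's totals (820 lead-poisoned of 28,406 screened), but the male/female split is invented for demonstration:

```python
from scipy.stats import chi2_contingency

# Rows: lead-poisoned vs not; columns: male vs female (split hypothetical).
table = [[480, 340],
         [13500, 14086]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3f}")  # a small p would mirror the reported p = .00
```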