939 results for monitoring process mean and variance


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: In order to improve the quality of our Emergency Medical Services (EMS), to raise bystander cardiopulmonary resuscitation rates, and thereby to meet what is becoming a universal standard for the quality of emergency services, we decided to implement systematic dispatcher-assisted or telephone CPR (T-CPR) in our medical dispatch center, a non-Advanced Medical Priority Dispatch System. The aim of this article is to describe the implementation process, costs and results following the introduction of this new quality procedure. METHODS: This was a prospective study. Over an 8-week period, our EMS dispatchers were given new procedures to provide T-CPR. We then collected data on all non-traumatic cardiac arrests within our state (Vaud, Switzerland) for the following 12 months. For each event, the dispatchers had to record in writing the reason they either ruled out cardiac arrest (CA) or did not propose T-CPR in the event they did suspect CA. All emergency call recordings were reviewed by the medical director of the EMS. The recordings and the dispatchers' written explanations were then compared. RESULTS: During the 12-month study period, a total of 497 patients (both adults and children) were identified as having a non-traumatic cardiac arrest. Of these, 203 cases were excluded and 294 were eligible for T-CPR. Among the eligible cases, dispatchers proposed T-CPR on 202 occasions (69% of eligible cases). They also erroneously proposed T-CPR on 17 occasions in which a CA was wrongly identified (false positives), representing 7.8% of all T-CPR. No costs were incurred to implement our study protocol and procedures.
CONCLUSIONS: This study demonstrates that it is possible, using a brief sensitization campaign but no specific training, to implement systematic dispatcher-assisted cardiopulmonary resuscitation in a non-Advanced Medical Priority Dispatch System such as our EMS, which had no prior experience with systematic T-CPR. The results in terms of T-CPR delivery rate and false positives are similar to those found in previous studies. We found our results satisfactory given the short time frame of this study. Our results demonstrate that it is possible to improve the quality of emergency services at moderate or even no additional cost, and this should be of interest to all EMS that do not presently benefit from T-CPR procedures. EMS that currently do not offer T-CPR should consider implementing this technique as soon as possible, and we expect our experience may provide answers to those planning to incorporate T-CPR into their daily practice.


Principal curves were defined by Hastie and Stuetzle (JASA, 1989) as smooth curves passing through the middle of a multidimensional dataset. They are nonlinear generalizations of the first principal component, a characterization of which is the basis for the principal curves definition. In this paper we propose an alternative approach based on a different property of principal components. Consider a point in the space where a multivariate normal is defined and, for each hyperplane containing that point, compute the total variance of the normal distribution conditioned to belong to that hyperplane. Now choose the hyperplane minimizing this conditional total variance and look for the corresponding conditional mean. The first principal component of the original distribution passes through this conditional mean and is orthogonal to that hyperplane. This property is easily generalized to data sets with nonlinear structure. Repeating the search from different starting points, many points analogous to conditional means are found. We call them principal oriented points. When a one-dimensional curve runs through the set of these special points, it is called a principal curve of oriented points. Successive principal curves are recursively defined from a generalization of the total variance.
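As an illustrative aside (not the paper's principal-oriented-points procedure), the classical characterization being generalized can be checked numerically: the first principal component of a sample is the leading eigenvector of its covariance matrix. A minimal sketch with simulated data:

```python
import numpy as np

# Simulate a bivariate normal sample whose largest variance lies
# along the (1, 1) direction.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[3.0, 2.0], [2.0, 3.0]], size=2000)

# The first principal component is the eigenvector of the sample
# covariance matrix with the largest eigenvalue.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
first_pc = eigvecs[:, -1]                # direction of maximal variance

# first_pc should point (up to sign) close to (1, 1)/sqrt(2).
print(first_pc)
```

The paper's alternative route, searching over hyperplanes for the minimal conditional total variance, recovers this same direction for a multivariate normal but extends to nonlinear data.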


PURPOSE: Health-related quality of life (HRQoL) is considered a representative outcome in the evaluation of chronic disease management initiatives emphasizing patient-centered care. We evaluated the association between receipt of processes-of-care (PoC) for diabetes and HRQoL. METHODS: This cross-sectional study used self-reported data from non-institutionalized adults with diabetes in a Swiss canton. Outcomes were the physical and mental composite scores (PCS, MCS) of the Short Form Health Survey 12 (SF-12) and the Audit of Diabetes-Dependent Quality of Life (ADDQoL). Main exposure variables were receipt of six PoC for diabetes in the past 12 months and the Patient Assessment of Chronic Illness Care (PACIC) score. We performed linear regressions to examine the association between PoC, PACIC and the three composites of HRQoL. RESULTS: Mean age of the 519 patients was 64.5 years (SD 11.3); 60% were male, 87% reported type 2 or undetermined diabetes, and 48% had had diabetes for over 10 years. Mean HRQoL scores were SF-12 PCS 43.4 (SD 10.5), SF-12 MCS 47.0 (SD 11.2) and ADDQoL -1.6 (SD 1.6). In adjusted models including all six PoC simultaneously, receipt of influenza vaccine was associated with lower ADDQoL (β=-0.4, p≤0.01) and foot examination was negatively associated with SF-12 PCS (β=-1.8, p≤0.05). There was no association, or a trend towards a negative association, when these PoC were reported as combined measures. The PACIC score was associated only with the SF-12 MCS (β=1.6, p≤0.05). CONCLUSIONS: PoC for diabetes did not show a consistent association with HRQoL in this cross-sectional analysis. This may reflect a lag between the time a process of care is received and its effect on health-related quality of life. Further research is needed to study this complex phenomenon.


Preface The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data first arose when the initial estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there was no way to be sure that our parameter estimates coincided with the true parameters of the models. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter demonstrates that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to limitations in the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function is well justified and can be used for the estimation of parameters of stochastic volatility jump-diffusion models.
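As a toy illustration of the general idea behind characteristic-function estimation (a deliberately simplified sketch, not the thesis's Continuous ECF estimator for jump-diffusion models), the code below matches the model characteristic function of a normal distribution, exp(i t mu - t^2 s2 / 2), to the empirical characteristic function of a sample over a grid of frequencies, and picks the parameters minimizing the squared distance:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.5, scale=1.2, size=5000)   # "observed" data

# Empirical characteristic function on a fixed frequency grid.
t = np.linspace(-2.0, 2.0, 41)
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)

def loss(mu, s2):
    # Squared distance between the normal model CF and the empirical CF.
    model_cf = np.exp(1j * t * mu - 0.5 * s2 * t ** 2)
    return float(np.sum(np.abs(ecf - model_cf) ** 2))

# A crude grid search stands in for a proper optimizer.
mus = np.linspace(-1.0, 2.0, 61)
s2s = np.linspace(0.5, 3.0, 51)
_, mu_hat, s2_hat = min((loss(m, s), m, s) for m in mus for s in s2s)
print(mu_hat, s2_hat)   # expected near the true values 0.5 and 1.44
```

The thesis's estimator applies the same matching principle to the joint unconditional characteristic function of a far richer model, where no such simple grid search would suffice.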


Following high winds on January 24, 2006, at least five people claimed to have seen or felt the superstructure of the Saylorville Reservoir Bridge in central Iowa moving both vertically and laterally. Since that time, the Iowa Department of Transportation (DOT) has contracted with the Bridge Engineering Center at Iowa State University to design and install a monitoring system capable of providing notification of subsequent high winds. Although measures were put into place following the 2006 event at the Saylorville Reservoir Bridge, knowledge of the bridge's performance during high wind events was incomplete. Therefore, the Saylorville Reservoir Bridge was outfitted with an information management system to investigate its structural performance and potential safety risks. In subsequent years, given the similarities between the Saylorville and Red Rock Reservoir bridges, a similar system was added to the Red Rock Reservoir Bridge southeast of Des Moines. The monitoring system developed and installed on these two bridges was designed to monitor the wind speed and direction at the bridge and, via a cellular modem, send a text message to Iowa DOT staff when wind speeds exceed a predetermined threshold. The original intent was that, once the text message is received, the bridge entrances would be closed until wind speeds diminish to safe levels.
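The threshold-alert logic described above can be sketched as follows. The 45 mph threshold and the notify() hook are illustrative assumptions, not the Iowa DOT's actual configuration or code:

```python
# Hypothetical threshold; the real predetermined value is not stated above.
ALERT_THRESHOLD_MPH = 45.0

alerts = []

def notify(message: str) -> None:
    # Stand-in for the cellular-modem text message to Iowa DOT staff.
    alerts.append(message)

def check_wind(speed_mph: float, alerted: bool) -> bool:
    """Fire one alert when the threshold is first exceeded, then
    re-arm once the wind drops back below it."""
    if speed_mph >= ALERT_THRESHOLD_MPH and not alerted:
        notify(f"High wind: {speed_mph:.1f} mph at bridge; close entrances")
        return True
    if speed_mph < ALERT_THRESHOLD_MPH:
        return False
    return alerted

# Simulated anemometer readings: one alert at 47.0, another at 50.0
# after the wind has dropped below the threshold in between.
alerted = False
for reading in [20.0, 47.0, 48.0, 30.0, 50.0]:
    alerted = check_wind(reading, alerted)
```

The re-arming step matters in practice: without it, a sustained wind event would flood staff with one message per reading instead of one per event.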


This paper focused on four alternatives for the analysis of experiments in square lattice with respect to the estimation of variance components and some genetic parameters: 1) intra-block analysis with adjusted treatments and blocks within unadjusted replications; 2) lattice analysis as complete randomized blocks; 3) intra-block analysis with unadjusted treatments and blocks within adjusted replications; 4) lattice analysis as complete randomized blocks, using the adjusted treatment means obtained from the analysis with recovery of inter-block information and taking as the mean square of the error the mean effective variance of that same analysis. For the four alternatives of analysis, estimators and estimates were obtained for the variance components and heritability coefficients. The classification of the material was also studied. The present study suggests that, for each experiment and depending on the objectives of the analysis, one should determine which alternative of analysis is preferable, mainly in cases where a negative estimate is obtained for the variance component due to effects of blocks within adjusted replications.
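For a concrete sense of what alternative 2 involves, the sketch below applies the standard method-of-moments estimators for a randomized-complete-blocks analysis: the genotypic variance component is (MS_treatment - MS_error)/r, and heritability on a mean basis is s2_g / (s2_g + s2_e/r). The mean-square values and replication number are fabricated for illustration and do not come from the paper:

```python
# Fabricated ANOVA mean squares, for illustration only.
ms_treat = 12.0   # treatment (genotype) mean square
ms_error = 4.0    # residual (error) mean square
r = 3             # number of replications

# Standard method-of-moments estimators under an RCBD model.
sigma2_g = (ms_treat - ms_error) / r          # genotypic variance component
sigma2_e = ms_error                           # residual variance component
h2 = sigma2_g / (sigma2_g + sigma2_e / r)     # heritability (mean basis)

print(sigma2_g, h2)   # about 2.67 and 0.67
```

Note that sigma2_g goes negative whenever ms_treat falls below ms_error, which is the same negative-estimate problem the paper flags for the blocks-within-adjusted-replications component.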


In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of the estimation of autocorrelation is examined. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available about the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
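The Monte Carlo design described above can be sketched with just two estimators: the conventional lag-one estimator r1 and a Marriott-Pope/Kendall-type bias-adjusted variant, r1 + (1 + 3*r1)/n. The ten estimators of the study are not reproduced here; this only illustrates the MSE-comparison setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def r1_conventional(x):
    # Conventional lag-one autocorrelation estimator.
    xc = x - x.mean()
    return float((xc[:-1] * xc[1:]).sum() / (xc ** 2).sum())

def r1_adjusted(x):
    # Bias-adjusted variant: r1 + (1 + 3*r1)/n (Marriott-Pope/Kendall type).
    r1 = r1_conventional(x)
    return r1 + (1.0 + 3.0 * r1) / len(x)

def mse(estimator, phi=0.6, n=20, reps=2000):
    # Mean Square Error of an estimator over simulated AR(1) series.
    errs = []
    for _ in range(reps):
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0] / np.sqrt(1.0 - phi ** 2)   # stationary start
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        errs.append(estimator(x) - phi)
    return float(np.mean(np.square(errs)))

mse_conv = mse(r1_conventional)
mse_adj = mse(r1_adjusted)
print(mse_conv, mse_adj)
```

Squaring the errors before averaging is what folds bias and variance into a single criterion, as the abstract describes; repeating the comparison over a grid of phi and n values reproduces the study's design.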


Micro-electromechanical systems (MEMS) provide vast improvements over existing sensing methods in the context of structural health monitoring (SHM) of highway infrastructure systems, including improved system reliability, improved longevity and enhanced system performance, improved safety against natural hazards and vibrations, and a reduction in the life cycle cost of both operating and maintaining the infrastructure. Advancements in MEMS technology and wireless sensor networks provide opportunities for long-term, continuous, real-time structural health monitoring of pavements and bridges at low cost within the context of sustainable infrastructure systems. The primary objective of this research was to investigate the use of MEMS in highway structures for health monitoring purposes. This study focused on investigating MEMS and their potential applications in concrete through a comprehensive literature review, a vendor survey, a laboratory study, and a small-scale field study. Based on the comprehensive literature review and vendor survey, the latest information available on off-the-shelf MEMS devices, as well as research prototypes, for bridge, pavement, and traffic applications was synthesized. A commercially available wireless concrete monitoring system based on radio-frequency identification (RFID) technology and off-the-shelf temperature and humidity sensors was tested under controlled laboratory and field conditions. The test results validated the ability of the RFID wireless concrete monitoring system to measure temperature accurately, both inside the laboratory and in the field under severe weather conditions. In consultation with the project technical advisory committee (TAC), the most relevant MEMS-based transportation infrastructure research applications to explore in the future were also highlighted and summarized.


We study the incentives to acquire skill in a model where heterogeneous firms and workers interact in a labor market characterized by matching frictions and costly screening. When effort in acquiring skill raises both the mean and the variance of the resulting ability distribution, multiple equilibria may arise. In the high-effort equilibrium, heterogeneity in ability is sufficiently large to induce firms to select the best workers, thereby confirming the belief that effort is important for finding good jobs. In the low-effort equilibrium, ability is not sufficiently dispersed to justify screening, thereby confirming the belief that effort is not so important. The model has implications for wage inequality, the distribution of firm characteristics, sorting patterns between firms and workers, and unemployment rates that can help explain observed cross-country variation in socio-economic and labor market outcomes.


The objectives of this study were to establish DRIS norms for the sugarcane crop, to compare the mean yield, foliar nutrient contents and variance of nutrient ratios of the low- and high-yielding groups, and to compare the mean values of the nutrient ratios selected as DRIS norms in the low- and high-yielding groups. Leaf samples (analyzed for N, P, K, Ca, Mg, S, Cu, Mn and Zn contents) and the respective yields were collected in 126 commercial sugarcane fields in Rio de Janeiro State, Brazil, and used to establish DRIS norms for sugarcane. Nearly all nutrient ratios selected as DRIS norms (77.8%) showed statistical differences between the mean values of the low- and high-yielding groups. These different nutritional balances between the low- and high-yielding groups indicate that the DRIS norms developed in this paper are reliable. The DRIS norms for micronutrients, which showed a high S²l/S²h ratio and a low coefficient of variation, can provide more security in evaluating the micronutrient status of sugarcane.
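A hedged sketch of how DRIS norms of this kind are typically derived (the data below are fabricated for illustration, not the Rio de Janeiro survey): the norm for a nutrient ratio is its mean in the high-yielding group, and the variance ratio S²l/S²h indicates how well the ratio discriminates between the two groups:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated N/P leaf-content ratios for the two yield groups; the
# survey's actual field data are not reproduced here.
np_ratio_high = rng.normal(10.0, 0.8, size=60)   # high-yielding fields
np_ratio_low = rng.normal(11.5, 1.6, size=66)    # low-yielding fields

# The DRIS norm for a ratio is its mean in the high-yielding group.
norm_mean = float(np_ratio_high.mean())
norm_cv = float(np_ratio_high.std(ddof=1) / norm_mean * 100)   # CV in %

# Variance ratio S2_low / S2_high: ratios well above 1, combined with a
# low CV in the reference group, discriminate best between groups.
var_ratio = float(np_ratio_low.var(ddof=1) / np_ratio_high.var(ddof=1))
print(norm_mean, norm_cv, var_ratio)
```

This mirrors the selection logic in the abstract: micronutrient ratios with a high S²l/S²h ratio and a low coefficient of variation make the most secure diagnostic norms.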


Soluble MHC-peptide complexes, commonly known as tetramers, allow the detection and isolation of antigen-specific T cells. Although other types of soluble MHC-peptide complexes have been introduced, the most commonly used MHC class I staining reagents are those originally described by Altman and Davis. As these reagents have become an essential tool for T cell analysis, it is important to have a large repertoire of such reagents to cover a broad range of applications in cancer research and clinical trials. Our tetramer collection currently comprises 228 human and 60 mouse tetramers and new reagents are continuously being added. For the MHC II tetramers, the list currently contains 21 human (HLA-DR, DQ and DP) and 5 mouse (I-A(b)) tetramers. Quantitative enumeration of antigen-specific T cells by tetramer staining, especially at low frequencies, critically depends on the quality of the tetramers and on the staining procedures. For conclusive longitudinal monitoring, standardized reagents and analysis protocols need to be used. This is especially true for the monitoring of antigen-specific CD4+ T cells, as there are large variations in the quality of MHC II tetramers and staining conditions. This commentary provides an overview of our tetramer collection and indications on how tetramers should be used to obtain optimal results.


The purpose of this work was to design a condition monitoring system for two glass wool production lines. In addition to the design process, the work presents various condition monitoring methods. The beginning of the work describes different condition monitoring methods that can be used to track the operating condition of various devices and machines. Particular attention is paid to vibration measurements, which are becoming increasingly common in industrial condition monitoring. The condition monitoring system designed in this work is based on five methods: vibration measurement, temperature measurement with a thermal camera, temperature measurement with a portable meter, listening with an electronic stethoscope, and monitoring the condition of rotating parts with a stroboscope. The design of the condition monitoring system was carried out in several stages. First, the devices most critical to production and their possible failure modes were identified. Next, suitable condition monitoring methods were selected and a measurement plan was drawn up, specifying the measurements to be performed on each device and the intervals between measurements. Finally, the work presents a few example cases of the use of condition monitoring methods and discusses possible future development opportunities.


Process development will be largely driven by the main equipment suppliers. The reason for this development is their ambition to supply complete plants or process systems instead of single pieces of equipment. The pulp and paper companies' interest lies in product development, as their main goal is to create winning brands and effective brand management. Design engineering companies will find their niche in detail engineering based on approved process solutions. Their development work will focus on increasing the efficiency of engineering work. Process design is a content-producing profession, which requires certain special characteristics: creativity, carefulness, the ability to work as a member of a design team according to time schedules and fluency in oral as well as written presentation. In the future, process engineers will increasingly need knowledge of chemistry as well as information and automation technology. Process engineering tools are developing rapidly. At the moment, these tools are good enough for static sizing and balancing, but dynamic simulation tools are not yet good enough for the complicated chemical reactions of pulp and paper chemistry. Dynamic simulation and virtual mill models are used as tools for training the operators. Computational fluid dynamics will certainly gain ground in process design.


Dynamic adaptations of one's behavior by means of performance monitoring are a central function of the human executive system and show considerable interindividual variation. Converging evidence from electrophysiological and neuroimaging studies in both animals and humans hints at the importance of the dopaminergic system for the regulation of performance monitoring. Here, we studied the impact of two polymorphisms affecting dopaminergic functioning in the prefrontal cortex [catechol-O-methyltransferase (COMT) Val108/158Met and dopamine D4 receptor (DRD4) single-nucleotide polymorphism (SNP)-521] on neurophysiological correlates of performance monitoring. We applied a modified version of a standard flanker task with an embedded stop-signal task to tap into the different functions involved, particularly error monitoring, conflict detection and inhibitory processes. Participants homozygous for the DRD4 T allele produced an increased error-related negativity after both choice errors and failed inhibitions compared with C-homozygotes. This was associated with pronounced compensatory behavior reflected in higher post-error slowing. No group differences were seen in the incompatibility N2, suggesting distinct effects of the DRD4 polymorphism on error monitoring processes. Additionally, participants homozygous for the COMT Val allele, with a thereby diminished prefrontal dopaminergic level, revealed increased prefrontal processing related to inhibitory functions, reflected in the enhanced stop-signal-related components N2 and P3a. The results extend previous findings from mainly behavioral and neuroimaging data on the relationship between dopaminergic genes and executive functions, and present possible underlying mechanisms for the previously suggested association between these dopaminergic polymorphisms and psychiatric disorders such as schizophrenia or attention deficit hyperactivity disorder.


The goal of requirements definition is to produce, at a conceptual level, a complete and consistent list of requirements for the desired system. Business process modeling is quite useful in the early phases of requirements definition. This work studies business process modeling for the development of information systems. Various techniques for modeling business processes exist today. This work reviews the principles and aspects of business process modeling as well as different modeling techniques. A new method, designed especially for small and medium-sized software projects, has been developed on the basis of process aspects and UML diagrams.