926 results for Multivariate Statistical Process Monitoring
Abstract:
OBJECTIVE: The aim of this study was to assess the implementation process and economic impact of a new pharmaceutical care service provided since 2002 by pharmacists in Swiss nursing homes. SETTING: The setting was 42 nursing homes located in the canton of Fribourg, Switzerland, under the responsibility of 22 pharmacists. METHOD: We developed different facilitators, such as a monitoring system, a coaching program, and a research project, to help pharmacists change their practice and to improve implementation of this new service. We evaluated the implementation rate of the service delivered in nursing homes. We assessed the economic impact of the service since its start in 2002 using statistical evaluation (Chow test) with retrospective analysis of the annual drug costs per resident over an 8-year period (1998-2005). MAIN OUTCOME MEASURES: The description of the facilitators and their implications in implementation of the service; the economic impact of the service since its start in 2002. RESULTS: In 2005, after a 4-year implementation period supported by the introduction of facilitators of practice change, all 42 nursing homes (2,214 residents) had implemented the pharmaceutical care service. The annual drug costs per resident decreased by about 16.4% between 2002 and 2005; this change proved to be highly significant. The performance of the pharmacists continuously improved using a specific coaching program including an annual expert comparative report, working groups, interdisciplinary continuing education symposia, and individual feedback. This research project also determined priorities to develop practice guidelines to prevent drug-related problems in nursing homes, especially in relation to the use of psychotropic drugs. CONCLUSION: The pharmaceutical care service was fully and successfully implemented in Fribourg's nursing homes within a period of 4 years. These findings highlight the importance of facilitators designed to assist pharmacists in the implementation of practice changes. The economic impact was confirmed on a large scale, and priorities for clinical and pharmacoeconomic research were identified in order to continue to improve the quality of integrated care for the elderly.
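The break-point analysis named in this abstract can be illustrated with a minimal sketch: a standard Chow test comparing a pooled linear fit of annual cost per resident against separate fits before and after the 2002 service introduction. The cost figures below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def chow_test(y, X, split):
    """Chow test for a structural break before observation `split`.
    y: response vector; X: design matrix (intercept included)."""
    n, k = X.shape

    def rss(y_part, X_part):
        # Residual sum of squares of an ordinary least-squares fit.
        beta, *_ = np.linalg.lstsq(X_part, y_part, rcond=None)
        resid = y_part - X_part @ beta
        return resid @ resid

    rss_pooled = rss(y, X)
    rss_split = rss(y[:split], X[:split]) + rss(y[split:], X[split:])
    f = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
    return f, stats.f.sf(f, k, n - 2 * k)

# Hypothetical annual drug costs per resident, 1998-2005 (not the study data).
years = np.arange(1998, 2006)
costs = np.array([3100.0, 3180.0, 3240.0, 3300.0, 3310.0, 3120.0, 2950.0, 2760.0])
X = np.column_stack([np.ones(8), years - 1998])   # linear trend model
f_stat, p_val = chow_test(costs, X, split=4)      # regime change at 2002
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```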
Abstract:
The protein shells, or capsids, of nearly all spherelike viruses adopt icosahedral symmetry. In the present Letter, we propose a statistical thermodynamic model for viral self-assembly. We find that icosahedral symmetry is not expected for viral capsids constructed from structurally identical protein subunits and that this symmetry requires (at least) two internal switching configurations of the protein. Our results indicate that icosahedral symmetry is not a generic consequence of free energy minimization but requires optimization of internal structural parameters of the capsid proteins.
Abstract:
This Phase I report describes a preliminary evaluation of a new compaction monitoring system developed by Caterpillar, Inc. (CAT), for use as a quality control and quality assurance (QC/QA) tool during earthwork construction operations. The CAT compaction monitoring system consists of an instrumented roller with sensors that monitor machine power output in response to changes in soil-machine interaction, fitted with a global positioning system (GPS) to track roller location in real time. Three pilot tests were conducted using CAT's compaction monitoring technology. Two of the sites were located in Peoria, Illinois, at the Caterpillar facilities. The third project was an actual earthwork grading project in West Des Moines, Iowa. Typical construction operations for all tests included the following steps: (1) aerate/till existing soil; (2) moisture condition the soil with a water truck (if too dry); (3) remix; (4) blade to a level surface; and (5) compact the soil using the CAT CP-533E roller instrumented with the compaction monitoring sensors and display screen. Test strips varied in loose lift thickness, water content, and length. The results of the study show that it is possible to evaluate soil compaction with relatively good accuracy using machine energy as an indicator, with the advantage of 100% coverage and results in real time. Additional field trials are necessary, however, to expand the range of correlations to other soil types, different roller configurations, roller speeds, lift thicknesses, and water contents. Further, with increased use of this technology, new QC/QA guidelines will need to be developed within a statistical framework. Results from Phase I revealed that the CAT compaction monitoring method holds a high level of promise for use as a QC/QA tool, but that additional testing is necessary to prove its validity under a wide range of field conditions. The Phase II work plan involves establishing a Technical Advisory Committee, developing a better understanding of the algorithms used, performing further testing in a controlled environment, testing on project sites in the Midwest, and developing QC/QA procedures.
Abstract:
The Federal Highway Administration mandates that states collect traffic count information at specified intervals to meet the needs of the Highway Performance Monitoring System (HPMS). A manual land use change detection method was employed to determine the effects of land use change on traffic in Black Hawk County, Iowa, from 1994 to 2002. Results from land use change detection could enable redirecting traffic count activities and related data management resources to areas that are experiencing the greatest changes in land use and related traffic volume. Including a manual land use change detection process in the Iowa Department of Transportation's traffic count program has the potential to improve efficiency by focusing monitoring activities in areas more likely to experience significant increases in traffic.
Abstract:
Background: Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables "frequency" and "degree of conflict". In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable "exposure to conflict", as well as considering six "types of ethical conflict". An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). Methods: The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. Following content validation, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach's alpha was used to evaluate the instrument's reliability. All analyses were performed using the statistical software PASW v19. Results: Cronbach's alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. Conclusions: The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its structure is such that the four variables on which our model of ethical conflict is based may be studied separately or in combination. The critical care nurses in this sample present moderate levels of exposure to ethical conflict. This study represents the first evaluation of the ECNQ-CCV.
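For illustration, the reliability statistic reported above (alpha = 0.882 for the full instrument) is computed from an item-score matrix with the standard formula; the sketch below uses made-up ratings, not the ECNQ-CCV data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents rating 4 of the 19 scenarios (0-10).
demo = np.array([[2, 3, 2, 4],
                 [5, 6, 5, 7],
                 [1, 2, 1, 2],
                 [7, 8, 6, 9],
                 [4, 4, 5, 5],
                 [3, 3, 2, 4]])
print(f"alpha = {cronbach_alpha(demo):.3f}")
```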
Abstract:
Therapeutic drug monitoring (TDM), i.e., the quantification of serum or plasma concentrations of medications for dose optimization, has proven a valuable tool for patient-matched psychopharmacotherapy. Uncertain drug adherence, suboptimal tolerability, non-response at therapeutic doses, or pharmacokinetic drug-drug interactions are typical situations in which measurement of medication concentrations is helpful. Patient populations that may particularly benefit from TDM in psychiatry are children, pregnant women, elderly patients, individuals with intellectual disabilities, forensic patients, patients with known or suspected genetically determined pharmacokinetic abnormalities, and individuals with pharmacokinetically relevant comorbidities. However, the potential benefits of TDM for optimization of pharmacotherapy can only be obtained if the method is adequately integrated into the clinical treatment process. To promote appropriate use of TDM, the TDM expert group of the Arbeitsgemeinschaft für Neuropsychopharmakologie und Pharmakopsychiatrie (AGNP) issued guidelines for TDM in psychiatry in 2004. Since then, knowledge has advanced significantly, and new psychopharmacologic agents that are also candidates for TDM have been introduced. Therefore, the TDM consensus guidelines were updated and extended to 128 neuropsychiatric drugs. Four levels of recommendation for using TDM were defined, ranging from "strongly recommended" to "potentially useful". Evidence-based "therapeutic reference ranges" and "dose-related reference ranges" were elaborated after an extensive literature search and a structured internal review process. A "laboratory alert level" was introduced, i.e., a plasma level at or above which the laboratory should immediately inform the treating physician. Supportive information, such as cytochrome P450 substrate and inhibitor properties of medications, normal ranges of the ratio of drug metabolite concentration to parent drug concentration, and recommendations for interpretative services, is given. Recommendations on when to combine TDM with pharmacogenetic tests are also provided. Following the guidelines will help improve the outcomes of psychopharmacotherapy for many patients, especially in cases of pharmacokinetic problems. At the same time, one should never forget that TDM is an interdisciplinary task that sometimes requires the respectful discussion of apparently discrepant data so that, ultimately, the patient can profit from such a joint effort.
Abstract:
The final year project came to us as an opportunity to get involved in a topic that had appeared attractive during our economics studies: statistics and its application to the analysis of economic data, i.e. econometrics. Moreover, the combination of econometrics and computer science is a very hot topic nowadays, given the Information Technologies boom of the last decades and the consequent exponential increase in the amount of data collected and stored day by day. Data analysts able to deal with Big Data and to extract useful results from it are in great demand these days and, in our understanding, the work they do, although sometimes controversial in terms of ethics, is a clear source of added value for both private corporations and the public sector. For these reasons, the essence of this project is the study of a statistical instrument, directly related to computer science, that is suited to the analysis of large datasets: Partial Correlation Networks. The structure of the project has been determined by our objectives as the work developed. First, the characteristics of the studied instrument are explained, from the basic ideas up to the features of the underlying model, with the final goal of presenting the SPACE model as a tool for estimating interconnections between elements in large data sets. Afterwards, an illustrated simulation is performed in order to show the power and efficiency of the model presented. Finally, the model is put into practice by analyzing a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series. In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective, useful instrument for finding valuable results in Big Data. The findings throughout this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach presented by Peng et al. (2009) works well under the assumption of sparsity of the data. Moreover, partial correlation networks are shown to be a valid tool for representing cross-sectional interconnections between elements in large data sets. The scope of this project is, however, limited, as there are some sections in which deeper analysis would have been appropriate. Considering intertemporal connections between elements, the choice of the tuning parameter lambda, and a deeper analysis of the results in the real-data application are examples of aspects in which this project could be extended. To sum up, the analyzed statistical tool has proved to be a very useful instrument for finding relationships that connect the elements of a large data set. After all, partial correlation networks allow the owner of such a data set to observe and analyze existing linkages that could otherwise have been overlooked.
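As a rough illustration of the idea (not the SPACE estimator of Peng et al. (2009), which fits joint sparse regressions), a sparse partial correlation network can be sketched by estimating a penalized precision matrix with the graphical lasso; its penalty `alpha` plays the role of the tuning parameter lambda mentioned above, and all data here are simulated.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulate data with a known sparse dependence structure.
rng = np.random.default_rng(0)
n, p = 500, 6
Z = rng.standard_normal((n, p))
Z[:, 1] += 0.8 * Z[:, 0]           # planted edge 0-1
Z[:, 3] += 0.8 * Z[:, 2]           # planted edge 2-3

model = GraphicalLasso(alpha=0.1)  # alpha ~ the tuning parameter lambda
model.fit(Z)
omega = model.precision_

# Partial correlation from the precision matrix:
# rho_ij = -omega_ij / sqrt(omega_ii * omega_jj)
d = np.sqrt(np.diag(omega))
partial_corr = -omega / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Network edges: nonzero off-diagonal partial correlations.
edges = [(i, j, round(partial_corr[i, j], 3))
         for i in range(p) for j in range(i + 1, p)
         if abs(partial_corr[i, j]) > 1e-6]
print(edges)
```

Increasing `alpha` prunes weaker conditional dependencies and yields a sparser network, mirroring the sparsity assumption discussed above.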
Abstract:
This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.
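A minimal, intensity-only sketch of the kind of unsupervised classification assessed here: a three-class Gaussian mixture fitted to simulated intensities. As the validation above notes, purely intensity-based models like this one are less robust to noise and bias fields than models that also use spatial information; the tissue labels and parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated 1-D voxel intensities for three tissue classes (e.g. CSF, GM, WM)
# with additive noise; real MR data would also carry bias-field effects.
rng = np.random.default_rng(1)
intensities = np.concatenate([
    rng.normal(60, 8, 2000),    # class 1
    rng.normal(110, 10, 3000),  # class 2
    rng.normal(160, 9, 2500),   # class 3
]).reshape(-1, 1)

# Fit three pure Gaussian classes; no spatial model, no partial-volume classes.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)       # hard tissue labels per voxel
print(np.sort(gmm.means_.ravel()))      # recovered class mean intensities
```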
Abstract:
Methods used to analyze one type of nonstationary stochastic process, the periodically correlated process, are considered. Two methods of one-step-forward prediction of periodically correlated time series are examined. One-step-forward predictions made in accordance with an autoregression model and with an artificial neural network model having one hidden neuron layer and a mechanism for adapting the network parameters in a moving time window were compared in terms of efficiency. The comparison showed that, for one-step-ahead prediction of time series of mean monthly water discharge, the simpler autoregression model is more efficient.
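A minimal sketch of the simpler of the two predictors compared above: an autoregressive model fitted by least squares and used for a one-step-forward forecast. A genuinely periodically correlated treatment would re-estimate coefficients per calendar month or in a moving window, as in the adaptive schemes described; the discharge series below is synthetic.

```python
import numpy as np

def ar_one_step_forecast(x, p):
    """Fit an AR(p) model by least squares and forecast the next value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Row for time t holds [1, x[t-1], ..., x[t-p]]; target is x[t].
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - k: n - k] for k in range(1, p + 1)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    last_lags = x[:-p - 1:-1]           # x[n-1], x[n-2], ..., x[n-p]
    return beta[0] + beta[1:] @ last_lags

# Hypothetical mean monthly discharge: annual cycle plus noise, 20 years.
rng = np.random.default_rng(2)
months = np.arange(240)
flow = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 240)
print(f"next-month forecast: {ar_one_step_forecast(flow, p=12):.1f}")
```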
Abstract:
There is a sustained controversy in the literature about the role and utility of self-monitoring of blood glucose (SMBG) in type 2 diabetes. The study results in this field do not provide really useful clues for integrating SMBG into the follow-up of the individual patient, because they are based on a misconception of SMBG. It is studied as if it were a medical treatment whose effect on glycemic control is to be isolated. However, SMBG has no such intrinsic effect. It gains its purpose only as an inseparable component of a comprehensive and structured educational strategy. To be appropriate, this strategy cannot be based on the health care professionals' view of diabetes alone. Rather, it has to be tailored to the individual patient's needs through an ongoing process of shared reflection with the patient.
Abstract:
The Internet is becoming more and more popular among drug users. The use of websites and forums to obtain illicit drugs and relevant information about the means of consumption is a growing phenomenon, mainly for new synthetic drugs. Gamma-butyrolactone (GBL), a chemical precursor of gamma-hydroxybutyric acid (GHB), is used as a "club drug" and also in drug-facilitated sexual assaults. Its market operates mainly on the Internet through online websites, but the structure of the market remains unknown. This research aims to combine digital, physical and chemical information to help understand the distribution routes and the structure of the GBL market. Based on an Internet monitoring process, thirty-nine websites selling GBL, mainly in the Netherlands, were detected between January 2010 and December 2011. Seventeen websites were categorized into six groups based on digital traces (e.g. IP addresses and contact information). In parallel, twenty-five bulk GBL specimens were purchased from sixteen websites for packaging comparisons and carbon isotopic measurements. Packaging information showed a high correlation with the digital data, confirming the links previously established, whereas chemical information revealed undetected links and provided complementary information. Indeed, while digital and packaging data give relevant information about the retailers, the supply routes and the distribution close to the consumer, the carbon isotopic data provide upstream information about the production level, in particular the synthesis pathways and the chemical precursors. A three-level market structure was thereby identified: a production level mainly located in China and Germany, an online distribution level mainly hosted in the Netherlands, and the customers who order on the Internet.
Abstract:
The aim of this work was to study the characteristics of a good customer reference from the perspective of the sales and service organizations of the filter manufacturer Larox, as well as from that of the company's customers. Larox can use the resulting knowledge to select and exploit references more effectively. Two Internet surveys were carried out using Webropol. A preliminary survey was directed at Larox's sales and service personnel; it consisted of five categories of customer reference characteristics whose importance was rated, together with open-ended answers. The identified characteristics of a good customer reference are a good relationship with the reference customer, positive and honest recommendations from that customer, good operational performance of the reference equipment, and a customer who understands the importance of service. The main survey was directed at Larox's customers. Statistical analyses were used to examine the relationships between the variables of a perceived-risk model. The analyses did not reveal significant differences in the rated importance of customer reference characteristics between respondents of different backgrounds or between situational factors, but the factors extracted from the reference characteristics support the model. The operational performance of the reference equipment appears to be the most important characteristic, and the importance of service is also significant.
Abstract:
This master's thesis introduces general principles of software testing and verification and discusses the verification of smartphone software in more detail. The Symbian operating system used in smartphones is also presented. In the practical part of the work, a server running on the Symbian operating system was designed and implemented to monitor and record the use of system resources. Verification is an important and costly task in the smartphone software development cycle, and costs can be reduced by automating part of the verification process. The implemented server automates the monitoring of system resources by recording data about them to a file while tests are run. When the tests are run again, the new results are compared with the baseline recording. If the results are not within the error margins set by the user, the user is notified. Defining the error margins and the baseline recording may prove difficult. However, if they are defined appropriately, the server provides testers with useful information about deviations in system resource consumption.
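The comparison step described above (a recorded baseline versus a new test run, with user-set error margins) might look roughly like the following sketch. It is written in Python for brevity, whereas the actual server is native Symbian code; all test names and figures are invented.

```python
def check_against_baseline(baseline, current, tolerance=0.10):
    """Compare recorded resource figures (e.g. peak heap use per test)
    against a baseline and report tests outside the allowed deviation."""
    deviations = {}
    for test, ref in baseline.items():
        value = current.get(test)
        if value is None:
            continue  # test not present in the new run
        rel = abs(value - ref) / ref
        if rel > tolerance:
            deviations[test] = (ref, value, rel)
    return deviations

# Hypothetical per-test peak memory use, in kilobytes.
baseline = {"startup": 512, "open_message": 830, "render_list": 1210}
current = {"startup": 520, "open_message": 1015, "render_list": 1198}
for test, (ref, val, rel) in check_against_baseline(baseline, current).items():
    print(f"{test}: {ref} -> {val} kB ({rel:.0%} deviation, limit 10%)")
```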
Abstract:
Container Handling Equipment Monitoring System (CHEMS) is a system developed by Savcor One Oy. CHEMS measures information important to container port performance and produces performance indicators. The aim of this thesis was to clarify the content of performance measurement for Savcor and, as an example, to develop performance measures for Steveco Oy's container operations. The theoretical part of the thesis clarifies performance measurement and which of its components are important to a container port. Performance measurement and measures are presented from the point of view of the operational level, at which CHEMS is aimed. The theory of the development process for performance measures is introduced at the end of the theoretical part. To make sure that the performance measures are used effectively, Steveco Oy's performance measures were developed in cooperation with their users. Measurement at the operational level is continuous, and the results must be reacted to as quickly as possible. CHEMS is well suited to continuous measurement and to producing real-time measures of container operations that would be hard to obtain in any other way.