988 results for Statistical tools
Abstract:
This paper is a first draft of the principle of statistical modelling on coordinates. Several causes, which would take too long to detail, have led to this situation close to the deadline for submitting papers to CODAWORK'03. The main one is the fast development of the approach over the last months, which makes previous drafts appear obsolete. The present paper contains the essential parts of the state of the art of this approach from my point of view. I would like to acknowledge many clarifying discussions with the group of people working in this field in Girona, Barcelona, Carrick Castle, Firenze, Berlin, Göttingen, and Freiberg. They have contributed many suggestions and ideas. Nevertheless, there may still be errors or unclear aspects, which are exclusively my fault. I hope this contribution serves as a basis for further discussions and new developments.
Abstract:
"compositions" is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and the object-oriented programming paradigm of R. In this way, called functions automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, pie charts) and extensive graphical tools for principal components. Afterwards, portion and proportion lines, straight lines and ellipses in all geometries can be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.
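The "working in coordinates" principle that the package is built on can be illustrated outside R with the centred log-ratio (clr) transform, which maps a composition from the Aitchison geometry of the simplex into ordinary real coordinates. A minimal sketch in Python (the data here are illustrative, not from the package):

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform: maps a composition to
    real-space coordinates of the Aitchison geometry."""
    logx = np.log(np.asarray(x, dtype=float))
    return logx - logx.mean()

def clr_inv(z):
    """Inverse clr: exponentiate and close to a unit-sum composition."""
    y = np.exp(np.asarray(z, dtype=float))
    return y / y.sum()

comp = np.array([0.2, 0.3, 0.5])
z = clr(comp)          # coordinates; they sum to zero by construction
back = clr_inv(z)      # closure recovers the original composition
```

Standard multivariate statistics can then be applied to the coordinates `z`, and results mapped back to the simplex with `clr_inv`, which is the essence of analysing all four geometries "in a fully analogous way".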
Abstract:
Compositional random vectors are fundamental tools in the Bayesian analysis of categorical data. Many of the issues that are discussed with reference to the statistical analysis of compositional data have a natural counterpart in the construction of a Bayesian statistical model for categorical data. This note builds on the idea of cross-fertilization of the two areas recommended by Aitchison (1986) in his seminal book on compositional data. Particular emphasis is put on the problem of what parameterization to use.
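The standard link between the two areas is Dirichlet-multinomial conjugacy: the vector of category probabilities is a compositional random vector, and a Dirichlet prior updated with observed counts yields a Dirichlet posterior. A minimal sketch with illustrative numbers (the counts and prior below are hypothetical):

```python
import numpy as np

# Dirichlet prior over the category probabilities of a
# compositional random vector; uniform prior on the simplex.
alpha = np.array([1.0, 1.0, 1.0])

# Observed category counts (hypothetical data).
counts = np.array([10, 3, 7])

# Conjugacy: posterior is Dirichlet(alpha + counts).
post_alpha = alpha + counts
post_mean = post_alpha / post_alpha.sum()   # E[p | data], a composition
```

The posterior mean is itself a point on the simplex, which is where the compositional-data questions (e.g. which parameterization or log-ratio coordinates to use) enter the Bayesian model.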
Abstract:
This paper presents a tool for the analysis and regeneration of Web contents, implemented with XML and Java. At present, Web content delivery from server to clients is carried out without taking clients' characteristics into account. Heterogeneous and diverse characteristics, such as users' preferences, the different capacities of client devices, different types of access, the state of the network and the current load on the server, directly affect the behavior of Web services. On the other hand, the growing use of multimedia objects in the design of Web contents ignores this diversity and heterogeneity, which further hampers appropriate content delivery. Thus, the objective of the presented tool is to process Web pages taking the mentioned heterogeneity into account and adapting contents in order to improve performance on the Web.
Abstract:
We have developed the computer programme NUTRISOL, a nutritional programme designed to analyse dietary intake by converting foods into nutrients. It was developed under the Windows operating system using Visual Basic 6.0 and is distributed on CD-ROM. We have used the Spanish CSIC Food Composition Table and household food measures commonly used in Spain, which can be modified and updated. Diverse kinds of diets and reference anthropometric data are also provided. The results may be processed with various statistical programmes. The programme contains three modules: 1) Nutritional epidemiology, which allows the user to create or open a database, manage samples, analyse food intake, consult nutrient content and export data to statistical programmes. 2) Analysis of diets and recipes, and creation or modification of new ones. 3) Consultation of diets for prevalent pathologies. Independent tools for modifying the original tables, calculating energy needs, recommending nutrient intakes and computing anthropometric indexes are also offered. In conclusion, the NUTRISOL programme is an application which runs on PCs with minimal equipment through a friendly, easy-to-use interface; it is freeware, may be adapted to each country, and has demonstrated its usefulness and reliability in different epidemiologic studies. Furthermore, it may become an efficient instrument for clinical nutrition and health promotion.
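The core food-to-nutrient transformation the abstract describes is a matrix product of consumed amounts against a per-100 g composition table. A minimal sketch under stated assumptions (the foods, nutrients and values below are illustrative, not taken from the CSIC table):

```python
import numpy as np

# Toy food-composition table: nutrients per 100 g of edible portion.
# Values are hypothetical placeholders, not CSIC data.
foods = ["bread", "milk"]
nutrients = ["energy_kcal", "protein_g"]
table = np.array([
    [265.0, 9.0],   # bread, per 100 g
    [64.0,  3.3],   # milk,  per 100 g
])

# Reported intake in grams of each food (e.g. from a dietary record).
intake_g = np.array([150.0, 200.0])

# Total nutrient intake = (grams consumed / 100) dot per-100 g table.
totals = (intake_g / 100.0) @ table
```

Everything else in the programme (sample management, diet analysis, export to statistical packages) builds on repeated applications of this transformation over a subject's recorded intakes.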
Abstract:
INTRODUCTION According to several series, hospital hyponutrition involves 30-50% of hospitalized patients. This high prevalence justifies the need for early detection from admission. There are several classical screening tools, which show important limitations in their systematic application in daily clinical practice. OBJECTIVES To analyze the relationship between hyponutrition, detected by our screening method, and mortality, hospital stay, or re-admissions. To analyze, as well, the relationship between hyponutrition and prescription of nutritional support. To compare different nutritional screening methods at admission on a random sample of hospitalized patients. Validation of the INFORNUT method for nutritional screening. MATERIAL AND METHODS In a phase prior to the study design, a retrospective analysis of data from the year 2003 was carried out in order to characterize hyponutrition in Virgen de la Victoria Hospital, in Málaga, gathering data from the MBDS (Minimal Basic Data Set), laboratory analysis of nutritional risk (FILNUT filter), and prescription of nutritional support. In the experimental phase, a cross-sectional cohort study was done with a random sample of 255 patients, in May 2004. Anthropometrical study, Subjective Global Assessment (SGA), Mini-Nutritional Assessment (MNA), Nutritional Risk Screening (NRS), Gassull's method, CONUT and INFORNUT were done. The settings of the INFORNUT filter were: albumin < 3.5 g/dL, and/or total proteins < 5 g/dL, and/or prealbumin < 18 mg/dL, with or without total lymphocyte count < 1,600 cells/mm3 and/or total cholesterol < 180 mg/dL. In order to compare the different methods, a gold standard was created based on the recommendations of the SENPE on anthropometrical and laboratory data. The statistical association analysis was done by the chi-squared test (α = 0.05) and agreement by the kappa (κ) index.
RESULTS In the study performed in the previous phase, the observed prevalence of hospital hyponutrition was 53.9%. A total of 1,644 patients received nutritional support, of whom 66.9% suffered from hyponutrition. We also observed that hyponutrition is one of the factors favoring increases in mortality (hyponourished patients 15.19% vs. non-hyponourished 2.58%), hospital stay (hyponourished patients 20.95 days vs. non-hyponourished 8.75 days), and re-admissions (hyponourished patients 14.30% vs. non-hyponourished 6%). The results from the experimental study are as follows: the prevalence of hyponutrition obtained by the gold standard was 61%, and by INFORNUT 60%. Agreement between INFORNUT, CONUT, and GASSULL is good or very good (κ = 0.67 INFORNUT with CONUT; κ = 0.94 INFORNUT with GASSULL), as is agreement with the gold standard (κ = 0.83 INFORNUT; κ = 0.64 CONUT; κ = 0.89 GASSULL). However, the structured tests (SGA, MNA, NRS) show low agreement indexes with the gold standard and with laboratory or mixed tests (Gassull), although they show a low to intermediate level of agreement when compared with one another (κ = 0.489 NRS with SGA). INFORNUT shows a sensitivity of 92.3%, a positive predictive value of 94.1%, and a specificity of 91.2%. After the filter phase, a preliminary report is sent, to which anthropometrical and intake data are added, and a Nutritional Risk Report is produced. CONCLUSIONS Hyponutrition prevalence in our study (60%) is similar to that found by other authors. Hyponutrition is associated with increased mortality, hospital stay, and re-admission rate. No existing tool has proven effective for early detection of hyponutrition in the hospital setting without important applicability limitations. FILNUT, as the first phase of the INFORNUT filter process, represents a valid tool: it has the sensitivity and specificity required for nutritional screening at admission.
The main advantages of the process would be early detection of patients at risk for hyponutrition; a teaching and sensitization function for health care staff, implicating them in the nutritional assessment of their patients; and recording of the hyponutrition diagnosis and the need for nutritional support in the discharge report, to be registered by the Clinical Documentation Department. Therefore, INFORNUT could serve as a universal screening method with a good cost-effectiveness ratio.
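The laboratory cut-offs reported for the FILNUT filter settings lend themselves to a simple rule. A minimal sketch of the primary protein-marker checks (function name hypothetical; the supplementary "with or without" criteria, lymphocyte count and cholesterol, are omitted here for brevity):

```python
def filnut_flag(albumin_gdl=None, total_protein_gdl=None,
                prealbumin_mgdl=None):
    """Flag nutritional risk when any primary protein marker falls
    below its cut-off, per the filter settings in the abstract:
    albumin < 3.5 g/dL, total proteins < 5 g/dL, prealbumin < 18 mg/dL.
    Missing values (None) are treated as non-contributory."""
    checks = [
        albumin_gdl is not None and albumin_gdl < 3.5,
        total_protein_gdl is not None and total_protein_gdl < 5.0,
        prealbumin_mgdl is not None and prealbumin_mgdl < 18.0,
    ]
    return any(checks)
```

Because the rule is an "and/or" over routine admission labs, it can run automatically on laboratory data, which is what makes screening every admission feasible.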
Abstract:
BACKGROUND Only multifaceted, hospital-wide interventions have been successful in achieving sustained improvements in hand hygiene (HH) compliance. METHODOLOGY/PRINCIPAL FINDINGS Pre-post intervention study of HH performance at baseline (October 2007-December 2009) and during the intervention, which included two phases. Phase 1 (2010) applied the multimodal WHO approach. Phase 2 (2011) added Continuous Quality Improvement (CQI) tools and was based on: a) increased placement of alcohol hand rub (AHR) dispensers (from 0.57 dispensers/bed to 1.56); b) increased frequency of audits (three days every three weeks: the "3/3 strategy"); c) implementation of a standardized register form of HH corrective actions; d) Statistical Process Control (SPC) as the time-series analysis methodology, through appropriate control charts. During the intervention period we performed 819 scheduled direct observation audits, which provided data from 11,714 HH opportunities. The most remarkable findings were: a) significant improvement in HH compliance with respect to baseline (25% mean increase); b) a sustained high level (82%) of HH compliance during the intervention; c) a significant increase in AHR consumption over time; d) a significant decrease in the rate of healthcare-acquired MRSA; e) small but significant improvements in HH compliance when comparing phase 2 to phase 1 [79.5% (95% CI: 78.2-80.7) vs 84.6% (95% CI: 83.8-85.4), p < 0.05]; f) successful use of control charts to identify significant negative and positive deviations (special causes) in the HH compliance process over time ("positive": 90.1%, the highest HH compliance, coinciding with World Hand Hygiene Day; "negative": 73.7%, the lowest HH compliance, coinciding with a statutory lay-off proceeding). CONCLUSIONS/SIGNIFICANCE CQI tools may be a key addition to the WHO strategy to maintain good HH performance over time.
In addition, SPC has proven to be a powerful methodology for detecting special causes in HH performance (positive and negative) and for helping establish adequate feedback to healthcare workers.
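For proportion data such as HH compliance per audit, the usual SPC chart is the p-chart, with limits at the pooled proportion plus or minus three standard errors; points outside the limits signal special causes like those described above. A minimal sketch (the audit counts in the test are hypothetical; the abstract does not specify which control chart the authors used):

```python
import numpy as np

def p_chart_limits(compliant, observed):
    """p-chart for per-audit compliance proportions.
    compliant: compliant HH opportunities per audit.
    observed:  total HH opportunities per audit.
    Returns the centre line, upper/lower limits per audit,
    and a boolean mask of special-cause signals."""
    compliant = np.asarray(compliant, dtype=float)
    observed = np.asarray(observed, dtype=float)
    p = compliant / observed
    pbar = compliant.sum() / observed.sum()          # centre line
    sigma = np.sqrt(pbar * (1 - pbar) / observed)    # per-audit std error
    ucl = np.minimum(pbar + 3 * sigma, 1.0)
    lcl = np.maximum(pbar - 3 * sigma, 0.0)
    special = (p > ucl) | (p < lcl)                  # special causes
    return pbar, ucl, lcl, special
```

Because the limits depend on each audit's sample size, unusually low compliance in a small audit is not flagged unless it exceeds ordinary sampling variation, which is exactly the distinction between common and special causes.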
Abstract:
Geographical Information Systems (GIS) facilitate access to epidemiological data through visualization and may be consulted for the development of mathematical models and analysis by spatial statistics. Variables such as land cover, land use, elevation, surface temperature, rainfall etc., emanating from earth-observing satellites, complement GIS, as this information allows the analysis of disease distribution based on environmental characteristics. The strength of this approach stems from the specific environmental requirements of those causative infectious agents which depend on intermediate hosts for their transmission. The distribution of these diseases is restricted both by the environmental requirements of their intermediate hosts/vectors and by the ambient temperature inside these hosts, which effectively governs the speed of maturation of the parasite. This paper discusses current capabilities with regard to satellite data collection in terms of the resolution (spatial, temporal and spectral) of the sensor instruments on board, drawing attention to the utility of computer-based models of the Earth for epidemiological research. Virtual globes, available from Google and other commercial firms, are superior to conventional maps as they not only show geographical and man-made features, but also allow instant import of datasets of specific interest, e.g. environmental parameters, demographic information etc., from the Internet.
Abstract:
In this opinion paper, we discuss the potential and challenges of using the symbiont Wolbachia to block mosquito transmitted diseases such as dengue, malaria and chikungunya in Latin America.
Abstract:
Despite the huge effort and massive advances toward the elimination of leprosy over the last two decades, the disease has proven stubborn; new case detection rates have stabilised over the last few years and leprosy remains endemic in a number of localised regions. The American Leprosy Missions and Infectious Disease Research Institute have undertaken a large research effort aimed at developing new tools and a vaccine to continue the push for leprosy elimination. In this paper, we outline our strategy for the integration of rapid diagnostic tests and lab-based assays to facilitate the detection of early or asymptomatic leprosy cases, as well as the efficient and focused implementation of chemoprophylaxis and immunisation to intervene in leprosy development and transmission.
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach, alternative to geostatistics, to modelling the spatial distribution of petrophysical properties in complex reservoirs. The approach is based on semi-supervised learning, which handles both "labelled" observed data and "unlabelled" data, which have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies, such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and learn dependencies from them. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. The uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution.
The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of the models with different combinations of unknown parameters and discusses sensitivity issues.
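The supervised core of this approach, regressing a petrophysical property on spatial coordinates with a kernel machine, can be sketched compactly. A minimal stand-in using kernel ridge regression with an RBF kernel (a simplification of the abstract's semi-supervised SVR; the observation points, values and hyperparameters below are hypothetical):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """RBF kernel between two sets of spatial coordinates;
    length_scale plays the role of spatial correlation size."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def fit_predict(X_obs, y_obs, X_new, length_scale=1.0, reg=1e-6):
    """Kernel ridge regression of a property (e.g. permeability)
    observed at wells X_obs onto new locations X_new.
    reg controls the signal/noise balance, i.e. the prior belief
    in the observed data."""
    K = rbf_kernel(X_obs, X_obs, length_scale)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_obs)), y_obs)
    return rbf_kernel(X_new, X_obs, length_scale) @ alpha
```

In the full semi-supervised formulation, the unlabelled manifold data additionally constrain the solution between wells; in the Bayesian workflow described above, hyperparameters such as `length_scale` and `reg` become the model parameters whose posterior distribution is sampled during history matching.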
Abstract:
The Andalusian eHealth Library (Biblioteca Virtual del Sistema Sanitario Publico de Andalucia, BV-SSPA) was created in June 2006. The 42 librarians who already worked for the Health System were integrated into this new service. The annual library meeting has been held every year since then, and in 2013 the EAHIL workshop held in Stockholm was the model to follow.