991 results for Interactive Methods
Abstract:
Objectives Assessment of exposure to a single pesticide does not capture the complexity of occupational exposure. Recently, analysis of pesticide use patterns has emerged as an alternative way to study these exposures. The aim of this study is to identify pesticide use patterns among flower growers in Mexico participating in the study on the endocrine and reproductive effects associated with pesticide exposure. Methods A cross-sectional study was carried out to gather retrospective information on pesticide use by administering a questionnaire to the person in charge of each participating flower-growing farm. Information on the seasonal frequency of pesticide use (rainy and dry seasons) for the years 2004 and 2005 was obtained. Principal components analysis was performed. Results Complete information was obtained for 88 farms, and 23 pesticides were included in the analysis. Six principal components were selected, which explained more than 70% of the data variability. The pesticide use patterns identified during both years were: 1. fungicides benomyl, carbendazim, thiophanate and metalaxyl (both seasons), including triadimefon during the rainy season and chlorothalonil and the insecticide permethrin during the dry season; 2. insecticides oxamyl and bifenthrin and the fungicide iprodione (both seasons), including the insecticide methomyl during the dry season; 3. the fungicide mancozeb and the herbicide glyphosate (rainy season only); 4. insecticides methamidophos and parathion (both seasons); 5. insecticides omethoate and methomyl (rainy season only); and 6. insecticides abamectin and carbofuran (dry season only). Some pesticides did not show a clear pattern of seasonal use during the studied years. Conclusions Principal components analysis is useful for summarising a large set of exposure variables into smaller groups of exposure patterns, identifying mixtures of pesticides in the occupational environment that may have an interactive effect on a particular health outcome.
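A minimal sketch of the kind of analysis this abstract describes, assuming a farms-by-pesticides matrix of seasonal use frequencies; the data, the 70% variance threshold and all names below are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical data: 88 farms x 23 pesticides, seasonal use frequencies.
X = rng.poisson(lam=3, size=(88, 23)).astype(float)

X_std = StandardScaler().fit_transform(X)     # put pesticides on a common scale
pca = PCA().fit(X_std)

# Keep the smallest number of components explaining > 70% of the variability.
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_comp = int(np.searchsorted(cum_var, 0.70)) + 1
print(f"{n_comp} components explain {cum_var[n_comp - 1]:.0%} of the variance")

# High loadings on the same component flag pesticides used together (a pattern).
loadings = pca.components_[:n_comp]
```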
Abstract:
The aim of this study was to determine the incidence of congenital toxoplasmosis (CT) and to assess the performance of prenatal and neonatal diagnosis. From 1994 to 2005, at Toulouse University Hospital, France, amniocentesis was performed on 352 pregnant women who were infected during pregnancy. All women were treated with spiramycin, and with pyrimethamine-sulfadoxine when the prenatal diagnosis was positive. Among the 275 foetuses with follow-up, 66 (24%) were infected. The transmission rates of Toxoplasma gondii were 7%, 24% and 59% in the first, second and third trimesters, respectively. The sensitivity and specificity of PCR on amniotic fluid (AF) were 91% and 99.5%, respectively. One case was diagnosed by mouse inoculation with AF, and six cases were diagnosed by neonatal or postnatal screening. The sensitivity and specificity of PCR on placentas were 52% and 99%, respectively. The sensitivities of tests for the detection of specific IgA and IgM in cord blood were 53% and 64%, respectively, and the specificities were 91% and 92%. In conclusion, PCR performed on AF had the highest sensitivity and specificity for the diagnosis of CT. It permits an early diagnosis of most cases and should be recommended.
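All of the performance figures quoted here derive from a 2x2 cross-tabulation of test results against the reference diagnosis; a minimal sketch, with hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of infected cases detected
        "specificity": tn / (tn + fp),  # proportion of uninfected ruled out
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Example: a highly sensitive and specific PCR-like test (made-up counts).
print(diagnostic_metrics(tp=60, fp=1, fn=6, tn=208))
```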
Abstract:
A study was carried out to evaluate the presence of serological markers for the immunodiagnosis of the vertical transmission of toxoplasmosis. We tested the sensitivity, specificity and predictive values (positive and negative) of different serological methods for the early diagnosis of congenital toxoplasmosis. In a prospective longitudinal study, 50 infants with suspected congenital toxoplasmosis were followed up in the ambulatory care centre for Congenital Infections at the University Hospital in Goiânia, Goiás, Brazil, from 1 January 2004 to 30 September 2005. The Microparticle Enzyme Immunoassay (MEIA), the Enzyme-Linked Fluorescent Assay (ELFA) and the Immunofluorescent Antibody Technique (IFAT) were used to detect specific IgM anti-Toxoplasma gondii antibodies, and a capture ELISA was used to detect specific IgA antibodies. The results showed that 28/50 infants were infected. During the neonatal period, IgM was detected in 39.3% (11/28) of the infected infants and IgA was detected in 21.4% (6/28). The sensitivity, specificity and predictive values (positive and negative) of each assay were, respectively: MEIA and ELFA: 60.9%, 100%, 100%, 55.0%; IFAT: 59.6%, 91.7%, 93.3%, 53.7%; IgA capture ELISA: 57.1%, 100%, 100%, 51.2%. The presence of specific IgM and IgA antibodies during the neonatal period was infrequent, although it was correlated with the most severe cases of congenital transmission. The results indicate that the absence of congenital disease markers (IgM and IgA) in newborns, even when confirmed with several techniques, does not constitute an exclusion criterion for toxoplasmosis.
Abstract:
The aim of this study was to compare two nucleic acid extraction methods for the recovery of enteric viruses from activated sludge. Test samples were inoculated with human adenovirus (AdV), hepatitis A virus (HAV), poliovirus (PV) and rotavirus (RV) and were then processed by an adsorption-elution-precipitation method. Two extraction methods were used: an organic solvent-based method and a silica method. The organic solvent-based method recovered 20% of the AdV, 90% of the RV and 100% of both the PV and HAV from seeded samples. The silica method recovered 1.8% of the AdV and 90% of the RV. These results indicate that the organic solvent-based method is more suitable for detecting viruses in sewage sludge.
Abstract:
Until now, mortality atlases have been static. Most of them describe the geographical distribution of mortality using count data aggregated over time and standardized mortality rates. However, this methodology has several limitations. Count data aggregated over time bias the estimation of death rates, and this practice makes it difficult to study temporal changes in the geographical distribution of mortality. Moreover, standardized mortality rates hamper comparisons of mortality among groups. The Interactive Mortality Atlas of Andalusia (AIMA) is an alternative to conventional static atlases. It is a dynamic Geographical Information System (GIS) that allows more than 12,000 maps and 338,000 graphs on the spatio-temporal distribution of the main causes of death in Andalusia, by age and sex group and from 1981 onwards, to be viewed on a website. The objective of this paper is to describe the methods used to develop AIMA, to present its technical specifications and to demonstrate its interactivity. The system is available via the Products link at www.demap.es. AIMA is the first interactive GIS with these characteristics to have been developed in Spain. Spatio-temporal hierarchical Bayesian models were used for the statistical analysis. The results were integrated into the website using a PHP environment and dynamic cartography in Flash. The thematic maps in AIMA demonstrate that the geographical distribution of mortality is dynamic, with differences among years and between age and sex groups. The information now provided by AIMA, together with future updates, will contribute to reflection on the past, present and future of population health in Andalusia.
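AIMA itself fits full spatio-temporal hierarchical Bayesian models; as a much simpler illustration of why raw rates are smoothed at all, here is an empirical-Bayes (Poisson-gamma) sketch with made-up small-area counts:

```python
import numpy as np

observed = np.array([2, 0, 5, 1, 8])             # deaths per area (hypothetical)
expected = np.array([1.5, 0.8, 4.0, 2.0, 6.0])   # age/sex-adjusted expected deaths

smr = observed / expected                        # raw standardized mortality ratios

# Moment-matched gamma prior for the area relative risks.
m = smr.mean()
v = max(smr.var() - (m / expected).mean(), 1e-6) # subtract average Poisson noise
a, b = m**2 / v, m / v

# Posterior means shrink unstable raw SMRs towards the overall level.
smoothed = (observed + a) / (expected + b)
print(np.round(smr, 2), np.round(smoothed, 2))
```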
Abstract:
In most psychological tests and questionnaires, a test score is obtained by taking the sum of the item scores. In virtually all cases where the test or questionnaire contains multidimensional forced-choice items, this traditional scoring method is also applied. We argue that the summation of scores obtained with multidimensional forced-choice items produces uninterpretable test scores. Therefore, we propose three alternative scoring methods: a weak and a strict rank preserving scoring method, which both allow an ordinal interpretation of test scores; and a ratio preserving scoring method, which allows a proportional interpretation of test scores. Each proposed scoring method yields an index for each respondent indicating the degree to which the response pattern is inconsistent. Analysis of real data showed that, with respect to rank preservation, the weak and strict rank preserving methods resulted in lower inconsistency indices than the traditional scoring method; with respect to ratio preservation, the ratio preserving scoring method resulted in lower inconsistency indices than the traditional scoring method.
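The paper defines its scoring rules precisely; the toy sketch below only illustrates the underlying point, namely that forced-choice items yield within-person rankings, so summing them hides absolute levels while rank information and its (in)consistency can still be summarized. All names and rules here are illustrative, not the authors' methods:

```python
# Hypothetical responses: 3 forced-choice items, each ranking dimensions
# A, B, C (1 = most like me, 3 = least like me).
responses = [
    {"A": 1, "B": 2, "C": 3},
    {"A": 1, "B": 3, "C": 2},
    {"A": 2, "B": 1, "C": 3},
]

# Traditional scoring: sum reversed ranks per dimension (hard to interpret
# as an absolute trait level, since ranks are relative within each item).
sums = {d: sum(4 - r[d] for r in responses) for d in "ABC"}

def inconsistency(responses):
    """Count dimension pairs ordered differently across items."""
    count = 0
    for x, y in [("A", "B"), ("A", "C"), ("B", "C")]:
        orders = {r[x] < r[y] for r in responses}
        count += len(orders) > 1   # both orders observed -> inconsistent pair
    return count

print(sums, inconsistency(responses))
```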
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA is when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal components analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
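A minimal sketch of one of the compared strategies, assuming the densities are evaluated on a common grid: apply the centred logratio (clr) transform to respect their compositional nature, then run an ordinary PCA on the transformed curves. The simulated Gaussian densities stand in for the household income distributions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
grid = np.linspace(-3, 3, 50)

# Simulated sample of densities: Gaussians with varying location and scale.
densities = []
for _ in range(30):
    mu, sigma = rng.normal(0.0, 0.5), rng.uniform(0.7, 1.3)
    f = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
    densities.append(f / np.trapz(f, grid))     # normalize to integrate to 1
D = np.array(densities)

logD = np.log(D)
clr = logD - logD.mean(axis=1, keepdims=True)   # centred logratio transform
scores = PCA(n_components=2).fit_transform(clr) # functional PCA on clr curves
print(scores.shape)                             # (30, 2) low-dimensional scores
```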
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made in which the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
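As a sketch of the parametrization idea (illustrative, not the presenter's actual construction), a Box-Cox power family connects the analysis of raw compositional profiles (alpha = 1) with logratio analysis (alpha approaching 0); recomputing the map at each small step of alpha produces the frames of such a "movie":

```python
import numpy as np
from sklearn.decomposition import PCA

def power_transform(X, alpha):
    # Box-Cox family: (x^alpha - 1) / alpha, tending to log(x) as alpha -> 0.
    return np.log(X) if alpha == 0 else (X**alpha - 1) / alpha

rng = np.random.default_rng(2)
X = rng.dirichlet(np.ones(5), size=40)       # 40 compositions with 5 parts

for alpha in (1.0, 0.5, 0.25, 0.0):          # frames of the "movie"
    Z = power_transform(X, alpha)
    Z = Z - Z.mean(axis=1, keepdims=True)    # row-centre, as in logratio analysis
    coords = PCA(n_components=2).fit_transform(Z)
    print(f"alpha={alpha}: first map point {coords[0].round(3)}")
```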
Abstract:
Influenza surveillance networks must provide early detection of the viruses that will cause the forthcoming annual epidemics and must isolate the strains for further characterization. We obtained the highest sensitivity (95.4%) with a diagnostic tool that combined a shell-vial assay and reverse transcription-PCR on cell culture supernatants at 48 h and, indeed, recovered the strains.
Abstract:
Autonomous underwater vehicles (AUVs) represent a challenging control problem with complex, noisy dynamics. Nowadays, not only continuous scientific advances in underwater robotics but also the increasing number and complexity of subsea missions call for the automation of submarine processes. This paper proposes a high-level control system for solving the action selection problem of an autonomous robot. The system is characterized by the use of reinforcement learning direct policy search methods (RLDPS) for learning the internal state/action mapping of some behaviors. We demonstrate its feasibility with simulated experiments using the model of our underwater robot URIS in a target-following task.
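The paper's controller is specific to URIS; the sketch below is only a generic direct policy search (REINFORCE-style) toy on a one-dimensional target-following task, with made-up dynamics and a linear Gaussian policy:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.zeros(2)            # policy: action ~ N(theta . [error, 1], sigma^2)
sigma, lr, baseline = 0.5, 0.005, 0.0

for episode in range(3000):
    pos, target = 0.0, 1.0
    grads, ret = [], 0.0
    for _ in range(20):
        feats = np.array([target - pos, 1.0])
        action = theta @ feats + sigma * rng.standard_normal()
        pos += 0.1 * action                    # toy vehicle dynamics
        ret += -(target - pos) ** 2            # penalize distance to target
        grads.append((action - theta @ feats) / sigma**2 * feats)
    # REINFORCE update with a running baseline to reduce gradient variance.
    theta += lr * (ret - baseline) * np.mean(grads, axis=0)
    baseline += 0.05 * (ret - baseline)

print("learned policy weights:", theta.round(2))
```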
Abstract:
Interpretability and power of genome-wide association studies can be increased by imputing unobserved genotypes, using a reference panel of individuals genotyped at higher marker density. For many markers, genotypes cannot be imputed with complete certainty, and the uncertainty needs to be taken into account when testing for association with a given phenotype. In this paper, we compare currently available methods for testing association between uncertain genotypes and quantitative traits. We show that some previously described methods offer poor control of the false-positive rate (FPR), and that satisfactory performance of these methods is obtained only by using ad hoc filtering rules or by using a harsh transformation of the trait under study. We propose new methods that are based on exact maximum likelihood estimation and use a mixture model to accommodate nonnormal trait distributions when necessary. The new methods adequately control the FPR and also have equal or better power compared to all previously described methods. We provide a fast software implementation of all the methods studied here; our new method requires computation time of less than one computer-day for a typical genome-wide scan, with 2.5 M single nucleotide polymorphisms and 5000 individuals.
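As a point of reference for the comparison described here, a minimal sketch of the common "expected dosage" test: regress the trait on the expected allele count implied by the genotype probabilities (simulated data; the paper's exact maximum likelihood and mixture-model methods go beyond this):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 1000
probs = rng.dirichlet(np.ones(3), size=n)     # P(genotype = 0, 1, 2) per person
dosage = probs @ np.array([0.0, 1.0, 2.0])    # expected allele count
trait = 0.1 * dosage + rng.standard_normal(n) # simulated quantitative trait

res = stats.linregress(dosage, trait)         # linear model: trait ~ dosage
print(f"beta = {res.slope:.3f}, p = {res.pvalue:.2e}")
```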
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
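A minimal sketch of the classical disattenuation formula the article builds on: divide the observed correlation by the geometric mean of the scales' reliabilities. The numbers are illustrative, not those of Bisbe and Otley:

```python
def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement error in both scales."""
    return r_xy / (rel_x * rel_y) ** 0.5

# Observed r = .30 between summated rating scales with reliabilities
# (e.g., Cronbach's alpha) of .70 and .80.
print(round(disattenuate(0.30, 0.70, 0.80), 3))   # -> 0.401
```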
Abstract:
Process supervision is the activity of monitoring process operation in order to deduce the conditions needed to maintain normality, including when faults are present. Depending on the number, distribution and heterogeneity of a process's variables, behaviour situations, sub-processes and so on, human operators and engineers cannot easily handle the information. This leads to the need to automate supervision activities. Nevertheless, the difficulty of dealing with the information complicates the design and development of software applications. We present an approach called "integrated supervision systems": it proposes the coordination of multiple supervisors, each supervising a sub-process, whose interactions permit one to supervise the global process.
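A minimal structural sketch of the idea, assuming nothing about the paper's actual architecture: one local supervisor per sub-process plus a coordinator that combines their verdicts into a global process status. All classes, thresholds and variable names below are hypothetical:

```python
class Supervisor:
    """Local supervisor: flags a fault when its variable leaves range."""
    def __init__(self, name, limit):
        self.name, self.limit = name, limit

    def check(self, value):
        return "fault" if abs(value) > self.limit else "normal"

class Coordinator:
    """Combines local verdicts from sub-process supervisors."""
    def __init__(self, supervisors):
        self.supervisors = supervisors

    def global_status(self, measurements):
        verdicts = {s.name: s.check(measurements[s.name]) for s in self.supervisors}
        overall = "fault" if "fault" in verdicts.values() else "normal"
        return overall, verdicts

coord = Coordinator([Supervisor("temperature", 5.0), Supervisor("pressure", 2.0)])
print(coord.global_status({"temperature": 3.2, "pressure": 2.7}))
```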
Abstract:
The statistical analysis of compositional data should be based on logratios of parts, which are difficult to use correctly in standard statistical packages. For this reason a freeware package, named CoDaPack, was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with minimal knowledge of computers, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D. These outputs can be zoomed and, in 3D, rotated. A customization menu is also included, and outputs can be saved in JPEG format. This version also includes interactive help, and all dialog windows have been improved to facilitate their use. To use CoDaPack, one opens Excel© and introduces the data in a standard spreadsheet, organized as a matrix where Excel© rows correspond to the observations and columns to the parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results, new variables and descriptive statistics, and both appear on the same sheet. Graphical output appears in independent windows. In the present version there are 8 menus with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input the variables and any further parameters needed, as well as where to put the results. The website http://ima.udg.es/CoDaPack contains this freeware package; only Microsoft Excel© under Microsoft Windows© is required to run the software.
Keywords: compositional data analysis, software