919 results for Multi-Factor Model, Missing Data


Relevance: 100.00%

Abstract:

This paper proposes a filter based on a general regression neural network and a moving average filter, for preprocessing half-hourly load data for short-term multinodal load forecasting, discussed in another paper. Tests made with half-hourly load data from nine New Zealand electrical substations demonstrate that this filter is able to handle noise, missing data and abnormal data. © 2011 IEEE.
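The abstract gives no implementation details, but the moving-average stage of such a preprocessing filter can be sketched in a few lines. This is a minimal, hypothetical version (the window length and the missing-value handling are assumptions, not taken from the paper) that smooths valid samples and fills gaps with the local mean; the GRNN stage the paper combines with it is omitted:

```python
def moving_average_fill(series, window=3):
    """Centered moving average that skips missing (None) samples.

    Each point is replaced by the mean of the valid neighbours in its
    window; a missing point is filled with that same local mean.
    """
    half = window // 2
    out = []
    for i in range(len(series)):
        lo = max(0, i - half)
        neighbours = [v for v in series[lo:i + half + 1] if v is not None]
        out.append(sum(neighbours) / len(neighbours) if neighbours else None)
    return out

# Half-hourly load with a gap and a spike (illustrative values only).
load = [100.0, 102.0, None, 98.0, 400.0, 101.0, 99.0]
print(moving_average_fill(load))
```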

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Sugarcane-breeding programs take at least 12 years to develop new commercial cultivars. Molecular markers offer a possibility to study the genetic architecture of quantitative traits in sugarcane, and they may be used in marker-assisted selection to speed up artificial selection. Although the performance of sugarcane progenies in breeding programs is commonly evaluated across a range of locations and harvest years, many QTL detection methods ignore two- and three-way interactions between QTL, harvest, and location. In this work, a strategy for QTL detection in multi-harvest-location trial data, based on interval mapping and mixed models, is proposed and applied to map QTL effects in a segregating progeny from a biparental cross of pre-commercial Brazilian cultivars, evaluated at two locations and three consecutive harvest years for cane yield (tonnes per hectare), sugar yield (tonnes per hectare), fiber percent, and sucrose content. In the mixed model, we have included appropriate (co)variance structures for modeling heterogeneity and correlation of genetic effects and non-genetic residual effects. Forty-six QTLs were found: 13 for cane yield, 14 for sugar yield, 11 for fiber percent, and 8 for sucrose content. In addition, QTL-by-harvest, QTL-by-location, and QTL-by-harvest-by-location interaction effects were significant for all evaluated traits (30 QTLs showed some interaction; 16 showed none). Our results contribute to a better understanding of the genetic architecture of complex traits related to biomass production and sucrose content in sugarcane.
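The paper's mixed-model machinery is not reproduced here, but the logic of testing a QTL-by-harvest interaction can be sketched with a plain fixed-effects F-test on simulated marker data (all names and numbers below are invented; the paper's models additionally carry structured (co)variances that this sketch deliberately ignores):

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def qtl_by_harvest_ftest(marker, harvest, y):
    """F-test for a marker-by-harvest interaction (fixed-effects sketch).

    Compares 'intercept + marker + harvest' against the same model plus a
    marker*harvest term; a large F means the QTL effect changes with harvest.
    """
    n = len(y)
    ones = np.ones(n)
    base = np.column_stack([ones, marker, harvest])
    full = np.column_stack([ones, marker, harvest, marker * harvest])
    rss0, rss1 = rss(base, y), rss(full, y)
    df2 = n - full.shape[1]
    return (rss0 - rss1) / (rss1 / df2)

rng = np.random.default_rng(42)
n = 120
marker = rng.integers(0, 2, n).astype(float)    # 0/1 genotypes, biparental cross
harvest = rng.integers(0, 2, n).astype(float)   # two harvest years
# Simulated trait: the QTL effect is stronger in the second harvest.
y = (1.0 + 0.5 * marker + 0.3 * harvest + 1.5 * marker * harvest
     + rng.normal(0.0, 0.5, n))
F = qtl_by_harvest_ftest(marker, harvest, y)
```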

Relevance: 100.00%

Abstract:

The information provided by the International Commission for the Conservation of Atlantic Tunas (ICCAT) on captures of skipjack tuna (Katsuwonus pelamis) in the central-east Atlantic has a number of limitations, such as gaps in the statistics for certain fleets and the coarse spatiotemporal detail at which catches are reported. As a result, the quality of these data and their effectiveness for providing management advice are limited. In order to reconstruct missing spatiotemporal catch data, the present study uses Data INterpolating Empirical Orthogonal Functions (DINEOF), a technique for missing-data reconstruction applied here for the first time to fisheries data. DINEOF is based on an Empirical Orthogonal Functions decomposition performed with a Lanczos method. DINEOF was tested at different levels of data loss, intentionally removing between 3.4% and 95.2% of the values and then comparing the reconstruction with the complete data set. These validation analyses show that DINEOF is a reliable approach to data reconstruction for the purposes of fishery management advice, even when the amount of missing data is very high.
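DINEOF proper relies on a Lanczos-based EOF solver, removes the mean field first, and picks the number of modes by cross-validation; none of that is reproduced here. As a rough sketch of the core idea alone, iteratively replacing gaps with a truncated-EOF (SVD) reconstruction, on an invented toy field:

```python
import numpy as np

def eof_fill(X, n_modes=1, n_iter=200, tol=1e-10):
    """Iterative EOF gap filling in the spirit of DINEOF (simplified).

    Missing entries (NaN) start at the mean of the observed values and are
    repeatedly replaced by a truncated-SVD reconstruction until they stop
    changing.
    """
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    Xf = np.where(mask, np.nanmean(X), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        change = np.max(np.abs(recon[mask] - Xf[mask]))
        Xf[mask] = recon[mask]
        if change < tol:
            break
    return Xf

# Toy rank-1 'catch' field (cells x months) with two gaps.
true = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
obs = true.copy()
obs[0, 1] = np.nan
obs[2, 0] = np.nan
filled = eof_fill(obs, n_modes=1)
```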

Relevance: 100.00%

Abstract:

The inherent stochastic character of most physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability. The computational efficiency and robustness are very good as well, even when dealing with strongly non-Gaussian distributions. Moreover, the resulting samples possess all the relevant, well-defined and desired properties of “translation fields”, including crossing rates and distributions of extremes. The second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars of existing truss structures, using static loads and strain measurements. In cases of missing data and of damage affecting only a small portion of a bar, genetic algorithms have proved to be an effective tool for solving the problem.
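The “translation field” construction the thesis builds on maps a Gaussian process through the Gaussian CDF and then the inverse CDF of the target marginal. Below is a one-dimensional toy version only; the AR(1) correlation and the exponential marginal are illustrative choices and not the thesis's multi-dimensional, spectrum-matching algorithm:

```python
import math
import random

def gaussian_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def translation_sample(n, lam=1.0, rho=0.8, seed=0):
    """1-D toy version of a 'translation' random process.

    An AR(1) Gaussian sequence with lag-one correlation `rho` is mapped
    through the Gaussian CDF and the inverse CDF of an exponential marginal,
    so every sample is exponentially distributed while the underlying
    Gaussian correlation structure is retained.
    """
    rng = random.Random(seed)
    z = rng.gauss(0.0, 1.0)                      # stationary start
    out = []
    for _ in range(n):
        u = min(max(gaussian_cdf(z), 1e-12), 1.0 - 1e-12)  # avoid log(0)
        out.append(-math.log(1.0 - u) / lam)     # exponential inverse CDF
        z = rho * z + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
    return out

xs = translation_sample(20000, lam=2.0)          # marginal mean 1/lam = 0.5
```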

Relevance: 100.00%

Abstract:

OBJECTIVE: To describe the electronic medical databases used in antiretroviral therapy (ART) programmes in lower-income countries and assess the measures such programmes employ to maintain and improve data quality and reduce the loss of patients to follow-up. METHODS: In 15 countries of Africa, South America and Asia, a survey was conducted from December 2006 to February 2007 on the use of electronic medical record systems in ART programmes. Patients enrolled in the sites at the time of the survey but not seen during the previous 12 months were considered lost to follow-up. The quality of the data was assessed by computing the percentage of missing key variables (age, sex, clinical stage of HIV infection, CD4+ lymphocyte count and year of ART initiation). Associations between site characteristics (such as number of staff members dedicated to data management), measures to reduce loss to follow-up (such as the presence of staff dedicated to tracing patients) and data quality and loss to follow-up were analysed using multivariate logit models. FINDINGS: Twenty-one sites that together provided ART to 50 060 patients were included (median number of patients per site: 1000; interquartile range, IQR: 72-19 320). Eighteen sites (86%) used an electronic database for medical record-keeping; 15 (83%) such sites relied on software intended for personal or small business use. The median percentage of missing data for key variables per site was 10.9% (IQR: 2.0-18.9%) and declined with training in data management (odds ratio, OR: 0.58; 95% confidence interval, CI: 0.37-0.90) and weekly hours spent by a clerk on the database per 100 patients on ART (OR: 0.95; 95% CI: 0.90-0.99). About 10 weekly hours per 100 patients on ART were required to reduce missing data for key variables to below 10%. The median percentage of patients lost to follow-up 1 year after starting ART was 8.5% (IQR: 4.2-19.7%). 
Strategies to reduce loss to follow-up included the use of outreach teams and community-based organizations and the checking of death registry data. Implementation of all three strategies substantially reduced losses to follow-up (OR: 0.17; 95% CI: 0.15-0.20). CONCLUSION: The quality of the data collected and the retention of patients in ART treatment programmes are unsatisfactory for many sites involved in the scale-up of ART in resource-limited settings, mainly because of insufficient numbers of staff trained to manage data and trace patients lost to follow-up.

Relevance: 100.00%

Abstract:

Principal Component Analysis (PCA) is a popular method for dimension reduction used in many fields, including data compression, image processing, and exploratory data analysis. However, traditional PCA has several drawbacks: it is inefficient for high-dimensional data and cannot compute sufficiently accurate principal components when a relatively large portion of the data is missing. In this report, we propose to use the EM-PCA method for dimension reduction of power system measurements with missing data, and provide a comparative study of the traditional PCA and EM-PCA methods. Our extensive experimental results show that EM-PCA is more effective and more accurate for dimension reduction of power system measurement data than traditional PCA when a large portion of the data set is missing.
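The report does not spell out its EM-PCA algorithm; the sketch below shows one common EM-style variant (alternating a low-rank reconstruction of the missing entries with a refit of the mean and components), with a toy two-sensor "measurement" matrix standing in for real power system data:

```python
import numpy as np

def em_pca_impute(X, n_components=1, n_iter=200):
    """EM-style PCA imputation sketch (not necessarily the report's algorithm).

    E-step: fill missing entries (NaN) from the current low-rank model.
    M-step: refit the mean and principal directions on the completed matrix.
    """
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    Xf = np.where(mask, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        C = Xf - mu
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        Xf[mask] = recon[mask]
    return Xf

# Toy 'measurements' from two perfectly correlated sensors, one value lost.
X = np.array([[1.0, 2.0],
              [2.0, np.nan],
              [3.0, 6.0],
              [4.0, 8.0]])
filled = em_pca_impute(X, n_components=1)
```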

Relevance: 100.00%

Abstract:

BACKGROUND Low-grade gliomas (LGGs) are rare brain neoplasms, with survival spanning up to a few decades. Thus, accurate evaluations of how biomarkers impact survival among patients with LGG require long-term studies on samples prospectively collected over a long period. METHODS The 210 adult LGGs collected in our databank were screened for IDH1 and IDH2 mutations (IDHmut), MGMT gene promoter methylation (MGMTmet), 1p/19q loss of heterozygosity (1p19qloh), and nuclear TP53 immunopositivity (TP53pos). Multivariate survival analyses with multiple imputation of missing data were performed using either histopathology or molecular markers. Both models were compared using Akaike's information criterion (AIC). The molecular model was reduced by stepwise model selection to filter out the most critical predictors. A third model was generated to assess various marker combinations. RESULTS Molecular parameters were better survival predictors than histology (ΔAIC = 12.5, P < .001). Forty-five percent of the studied patients died. MGMTmet was positively associated with IDHmut (P < .001). In the molecular model with marker combinations, combined IDHmut/MGMTmet status had a favorable impact on overall survival compared with wild-type IDH (IDHwt) (hazard ratio [HR] = 0.33, P < .01), and even more so the triple combination IDHmut/MGMTmet/1p19qloh (HR = 0.18, P < .001). Furthermore, the IDHmut/MGMTmet/TP53pos triple combination was a significant risk factor for malignant transformation (HR = 2.75, P < .05). CONCLUSION By integrating networks of activated molecular glioma pathways, the model based on genotype predicts prognosis better than histology and, therefore, provides a more reliable tool for standardizing future treatment strategies.

Relevance: 100.00%

Abstract:

Most statistical analysis, theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful, as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often fail to reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application to a small-sample, repeated-measures growth-curve dataset with normally distributed responses is presented.
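As a minimal illustration of the sequential Bayesian updating the abstract describes, not the dissertation's hierarchical models, which would be fit by the Gibbs sampler, consider forward filtering in a local-level dynamic linear model with known variances (all values invented):

```python
def local_level_filter(ys, obs_var=1.0, state_var=0.5, m0=0.0, v0=10.0):
    """Forward filtering for a local-level dynamic linear model.

    State:       theta_t = theta_{t-1} + w_t,  w_t ~ N(0, state_var)
    Observation: y_t     = theta_t + v_t,      v_t ~ N(0, obs_var)

    Each new observation updates the posterior mean and variance of the
    current state, so the model adapts as the data arrive sequentially.
    """
    m, v = m0, v0
    means = []
    for y in ys:
        v_pred = v + state_var                # evolution (predict) step
        gain = v_pred / (v_pred + obs_var)    # weight given to the new datum
        m = m + gain * (y - m)                # posterior mean
        v = (1.0 - gain) * v_pred             # posterior variance
        means.append(m)
    return means

means = local_level_filter([1.0, 1.2, 0.9, 5.0])
```

Note how the final estimate moves sharply toward the last observation: the dynamic model lets the level change over time instead of averaging all data equally, which is the advantage over a static model the abstract points to.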

Relevance: 100.00%

Abstract:

PURPOSE To prospectively evaluate the psychometric properties of the Venous Insufficiency Epidemiological and Economic Study (VEINES-QOL/Sym) questionnaire, an instrument to measure disease-specific quality of life and symptoms in elderly patients with deep vein thrombosis (DVT), and to validate a German version of the questionnaire. METHODS In a prospective multicenter cohort study of patients aged ≥65 years with acute venous thromboembolism, we used standard psychometric tests and criteria to evaluate the reliability, validity, and responsiveness of the VEINES-QOL/Sym in patients with acute symptomatic DVT. We also performed an exploratory factor analysis. RESULTS Overall, 352 French- and German-speaking patients were enrolled (response rate: 87%). Both language versions of the VEINES-QOL/Sym showed good acceptability (missing data, floor and ceiling effects), reliability (internal consistency, item-total and inter-item correlations), validity (convergent, discriminant, known-groups differences), and responsiveness to clinical change over time in elderly patients with DVT. The exploratory factor analysis of the VEINES-QOL/Sym suggested three underlying dimensions: limitations in daily activities, DVT-related symptoms, and psychological impact. CONCLUSIONS The VEINES-QOL/Sym questionnaire is a practical, reliable, valid, and responsive instrument to measure quality of life and symptoms in elderly patients with DVT and can be used with confidence in prospective studies to measure outcomes in such patients.

Relevance: 100.00%

Abstract:

Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood, and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing phenotypic variation of a disease, and b) to examine phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted) including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases. The model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data, such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and of other diseases that are difficult to classify.
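A latent class model for binary symptom indicators can be fit with a short EM loop. The sketch below uses a two-class mixture of independent Bernoullis on invented symptom data; the thesis's models additionally handle mixed-mode variables, conditional questions and missing data:

```python
def latent_class_em(data, n_iter=50):
    """Two-class latent class model for binary symptom indicators.

    EM for a mixture of two independent-Bernoulli classes: the E-step computes
    each child's posterior probability of belonging to class 1, the M-step
    updates the class weight and the per-symptom probabilities.  Starting
    values are deliberately asymmetric to break the label symmetry.
    """
    n_items = len(data[0])
    pi = 0.5                                    # weight of class 1
    p = [[0.3] * n_items, [0.7] * n_items]      # p[k][j] = P(symptom j | class k)
    post = [0.5] * len(data)
    for _ in range(n_iter):
        # E-step: posterior membership of class 1 for each subject
        post = []
        for x in data:
            like = [1.0, 1.0]
            for k in (0, 1):
                for j, xj in enumerate(x):
                    like[k] *= p[k][j] if xj else 1.0 - p[k][j]
            w1 = pi * like[1]
            post.append(w1 / ((1.0 - pi) * like[0] + w1))
        # M-step: update mixing weight and symptom probabilities
        pi = sum(post) / len(post)
        n1 = sum(post)
        for j in range(n_items):
            p[1][j] = sum(r * x[j] for r, x in zip(post, data)) / max(n1, 1e-9)
            p[0][j] = (sum((1.0 - r) * x[j] for r, x in zip(post, data))
                       / max(len(post) - n1, 1e-9))
    return pi, p, post

# Invented symptom patterns: three 'frequent wheezers', three 'rare wheezers'.
data = [(1, 1, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0), (0, 0, 1), (0, 0, 0)]
pi, p, post = latent_class_em(data)
```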

Relevance: 100.00%

Abstract:

Coronary artery disease (CAD) is the most common cause of morbidity and mortality in the United States. While coronary angiography (CA) is the gold-standard test to investigate coronary artery disease, prospective gated 64-slice computed tomography (Prosp-64CT) is a new non-invasive technology that uses 64-slice computed tomography (64CT) with electrocardiographic gating to investigate coronary artery disease. The aim of the current study was to investigate the role of body mass index (BMI) as a factor affecting the occurrence of CA after a Prosp-64CT, as well as the quality of the Prosp-64CT. Demographic and clinical characteristics of the study population were described. A secondary analysis of data on patients who underwent a Prosp-64CT for evaluation of coronary artery disease was performed. Seventy-seven patients who underwent Prosp-64CT for evaluation of coronary artery disease were included. Fifteen patients were excluded because they had missing data regarding BMI, quality of the Prosp-64CT or CA. Thus, a total of 62 patients were included in the final analysis. The mean age was 56.2 years. The mean BMI was 31.3 kg/m². Eight (13%) patients underwent a CA within one month of Prosp-64CT. Eight (13%) patients had a poor-quality Prosp-64CT. There was a significant association of higher BMI with the occurrence of CA post Prosp-64CT (P<0.05). There was a trend, but no statistical significance, for the association of being obese and occurrence of CA (P=0.06). BMI, as well as obesity, was not found to be significantly associated with poor quality of Prosp-64CT (P=0.19 and P=0.76, respectively). In conclusion, BMI was significantly associated with occurrence of CA within one month of Prosp-64CT. Thus, in patients with a higher BMI, diagnostic investigation with both tests could be avoided; rather, only a CA could be performed. However, the relationship of BMI to quality of Prosp-64CT needs to be further investigated, since the sample size of the current study was small.

Relevance: 100.00%

Abstract:

Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions in the entire genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited for detecting the association of common variants, but are less suitable for rare variants. This raises great challenges for sequence-based genetic studies of complex diseases. This dissertation used the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus analysis to collectively analyzing genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data reduction techniques were used to develop novel and powerful methods for testing the associations of the entire spectrum of genetic variation within a segment of the genome or a gene, regardless of whether the variants are common or rare. Classical quantitative genetics suffers from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their applications, the functional linear models were applied to five quantitative traits in the Framingham Heart Study. This project also proposed a novel concept of gene-gene co-association, in which a gene or a genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, which led to the discovery of networks significantly associated with psoriasis.
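Functional principal components for curves observed on a common grid reduce to an eigen-decomposition of the sample covariance matrix. A generic sketch follows; the smoothing of raw genotype data into genotype functions, which the dissertation performs first, is skipped, and the data are invented:

```python
import numpy as np

def functional_pca(curves, n_components=2):
    """Functional PCA for curves sampled on a common grid.

    Centres the curves, eigendecomposes the sample covariance matrix, and
    projects each curve onto the leading eigenvectors (discretised
    eigenfunctions) to obtain FPC scores -- the low-dimensional summaries
    that region-based association statistics can be built on.
    """
    X = np.asarray(curves, dtype=float)
    C = X - X.mean(axis=0)
    cov = C.T @ C / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    components = vecs[:, order]                  # leading eigenfunctions
    scores = C @ components                      # FPC scores per subject
    return scores, components, vals[order]

# Invented 'genotype functions': one shared shape with subject-specific
# amplitudes plus a little noise.
grid = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
amps = rng.normal(size=30)
curves = np.outer(amps, np.sin(2 * np.pi * grid)) + 0.01 * rng.normal(size=(30, 50))
scores, comps, ev = functional_pca(curves)
```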

Relevance: 100.00%

Abstract:

Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact that these methodologies have on the data analysis. Methods: We first evaluated whether the assumption of missing completely at random held for this study. We then conducted a secondary data analysis using a mixed linear model to handle missing data with three methodologies: (a) complete-case analysis, (b) multiple imputation with an explicit model containing outcome variables, time, and the interaction of time and treatment, and (c) multiple imputation with an explicit model containing outcome variables, time, the interaction of time and treatment, and additional covariates (e.g., age, gender, smoking status, years in school, marital status, housing, race/ethnicity, and athletic team membership). Several comparisons were conducted, including: 1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; 2) MIF vs. FBO; and 3) MIF vs. MIO. Results: We first evaluated the patterns of missingness in this study, which indicated that about 13% of participants showed monotone missing patterns and about 3.5% showed non-monotone missing patterns. We then evaluated the assumption of missing completely at random with Little's MCAR test, in which the chi-square test statistic was 167.8 with 125 degrees of freedom and an associated p-value of p=0.006, indicating that the data could not be assumed to be missing completely at random. After that, we compared whether the three strategies reached the same results. For the comparison between MIF and AO, as well as the comparison between MIF and FBO, only the multiple imputations with additional covariates under uncongenial and congenial models reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values obtained different results. Discussion: The study indicated that, first, missingness was crucial in this study. Second, understanding the assumptions of the models was important, since we could not determine whether the data were missing at random or missing not at random. Therefore, future research should focus on exploring more sensitivity analyses under the missing-not-at-random assumption.
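As a toy illustration of multiple imputation with Rubin-style pooling, far simpler than the congenial and uncongenial imputation models compared in the study, and with invented data, one can impute a missing outcome from a linear model plus residual noise m times and pool the resulting estimates:

```python
import random
import statistics

def multiply_impute_mean(y, x, m=20, seed=0):
    """Multiple imputation of missing y values, pooled a la Rubin.

    Missing y's are drawn from a straight-line model of y on x fitted to the
    complete cases, plus residual noise so the m completed data sets differ;
    the m estimates of mean(y) are then pooled.
    """
    rng = random.Random(seed)
    obs = [(a, b) for a, b in zip(x, y) if b is not None]
    xbar = statistics.mean(a for a, _ in obs)
    ybar = statistics.mean(b for _, b in obs)
    slope = (sum((a - xbar) * (b - ybar) for a, b in obs)
             / sum((a - xbar) ** 2 for a, _ in obs))
    resid_sd = statistics.stdev(b - (ybar + slope * (a - xbar)) for a, b in obs)
    estimates = []
    for _ in range(m):
        completed = [b if b is not None
                     else ybar + slope * (a - xbar) + rng.gauss(0.0, resid_sd)
                     for a, b in zip(x, y)]
        estimates.append(statistics.mean(completed))
    pooled = statistics.mean(estimates)          # pooled point estimate
    between = statistics.variance(estimates)     # between-imputation variance
    return pooled, between

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, None, 8.2, 9.8, None, 14.1, 16.0]
pooled, between = multiply_impute_mean(y, x)
```

The between-imputation variance is the term that distinguishes multiple imputation from single imputation: it propagates the uncertainty about the missing values into the pooled estimate.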

Relevance: 100.00%

Abstract:

Targeting of many secretory and membrane proteins to the inner membrane in Escherichia coli is achieved by the signal recognition particle (SRP) and its receptor (FtsY). In E. coli, SRP consists of only one polypeptide (Ffh) and a 4.5S RNA. Ffh and FtsY each contain a conserved GTPase domain (G domain) with an α-helical domain at its N terminus (N domain). The nucleotide binding kinetics of the NG domain of the SRP receptor FtsY have been investigated using different fluorescence techniques, and methods to describe the reaction kinetically are presented. The kinetics of interaction of FtsY with guanine nucleotides are quantitatively different from those of other GTPases. The intrinsic guanine nucleotide dissociation rates of FtsY are about 10⁵ times higher than those of Ras, but similar to those seen in GTPases in the presence of an exchange factor. Therefore, the data presented here show that the NG domain of FtsY resembles a GTPase–nucleotide exchange factor complex not only in its structure but also kinetically. The I-box, an insertion present in all SRP-type GTPases, is likely to act as an intrinsic exchange factor. From this we conclude that the details of the GTPase cycle of FtsY, and presumably other SRP-type GTPases, are fundamentally different from those of other GTPases.
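For a one-step binding scheme measured by fluorescence under pseudo-first-order conditions, the standard analysis extracts the association and dissociation rate constants from a linear fit of observed rates against nucleotide concentration. This is the generic textbook treatment; the numbers below are invented for illustration and are not the paper's values for FtsY:

```python
def fit_pseudo_first_order(concs, kobs):
    """Pseudo-first-order analysis of binding kinetics.

    For a one-step scheme R + N <-> R.N with nucleotide in excess, the
    observed rate is k_obs = k_on * [N] + k_off, so a least-squares line
    through (concentration, k_obs) yields k_on (slope), k_off (intercept)
    and K_d = k_off / k_on.
    """
    n = len(concs)
    cbar = sum(concs) / n
    kbar = sum(kobs) / n
    k_on = (sum((c - cbar) * (k - kbar) for c, k in zip(concs, kobs))
            / sum((c - cbar) ** 2 for c in concs))
    k_off = kbar - k_on * cbar
    return k_on, k_off, k_off / k_on

# Invented data lying on k_obs = 2*[N] + 5  ->  k_on = 2, k_off = 5, K_d = 2.5.
concs = [1.0, 2.0, 5.0, 10.0]     # nucleotide concentration (e.g. uM)
kobs = [7.0, 9.0, 15.0, 25.0]     # observed rates (1/s)
k_on, k_off, k_d = fit_pseudo_first_order(concs, kobs)
```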