939 results for multiple linear regression
Abstract:
Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10° images of the ONH were obtained using the HRT. The mean topography image was determined, and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10° sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) than in the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 dB ± 3.3 and 7.91 ± 3.4, respectively) than in the normal group (-0.15 dB ± 0.9 and 0.95 dB ± 0.8, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line manually applied to the normalised rim area-disc area and the rim area-disc area ratio data correctly classified 100% of normal subjects and glaucoma patients. In this study sample, regression analysis of ranked segment parameters was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases.
Further investigation in larger samples will enable the calculation of confidence intervals for normality. These reference standards will then need to be investigated in an independent sample to fully validate the technique. Conclusions: Using a curve-fitting approach to fit ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis and, subject to validation in larger-scale studies, may be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
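The ranking-and-fitting step described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name `ranked_segment_fit` and the use of a plain rank-versus-value least-squares line are assumptions based on the description above (values ranked in descending order, then fitted by the least squares method).

```python
def ranked_segment_fit(sector_values):
    """Fit a least-squares line to the ranked-segment curve of ONH sector values.

    sector_values: one topographic parameter (e.g. rim area / disc area ratio)
    sampled at 10-degree sectoral intervals. The values are ranked in
    descending order and a line value = a + b * rank is fitted; the pair
    (a, b) then summarizes the whole ranked-segment curve.
    """
    ranked = sorted(sector_values, reverse=True)
    n = len(ranked)
    xs = range(1, n + 1)  # ranks 1..n
    sx = sum(xs)
    sy = sum(ranked)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ranked))
    # Closed-form simple linear regression of value on rank.
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b
```

The fitted intercept and slope (a, b) play the role of the "equation parameters of the regression line" that the abstract reports as the discriminating features between normal and glaucomatous eyes.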
Abstract:
Liquid-level sensing technologies have attracted great interest, because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid-level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size, and ease of multiplexing, several optical fiber liquid-level sensors have been investigated, based on different operating principles such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid-level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure. Sensors below the surface of the liquid will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid-level monitoring applications; however, a much more precise determination of liquid level can be made by applying linear regression to the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach.
First, the use of linear regression across multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility to detect and compensate for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be encountered in an aerospace application. The performance of an individual sensor was characterized; it displays a sensitivity of 54 pm/cm, enhanced by more than a factor of 2 compared with a sensor-head configuration based on a silica FBG published in the literature, a result of the much lower elastic modulus of POF. Furthermore, the temperature/humidity behavior and measurement resolution were also studied in detail. The proposed configuration also displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring, and biochemical and environmental sensing, where accuracy and stability are fundamental. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
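The regression step the abstract describes — fit a line to the sub-surface pressure readings and find where it crosses ambient pressure — can be sketched directly. This is a minimal illustration under stated assumptions, not the paper's implementation: `estimate_level`, the unit conventions, and the threshold for "submerged" are all hypothetical.

```python
def estimate_level(positions, pressures, p_ambient):
    """Estimate the liquid surface position from pressure sensors at known heights.

    positions: sensor heights above the container bottom (e.g. cm).
    pressures: corresponding readings; submerged sensors read pressures
               that increase linearly with depth below the surface,
               while sensors above the surface read p_ambient.
    """
    # Keep only sensors reading above ambient pressure (i.e., submerged).
    pts = [(z, p) for z, p in zip(positions, pressures) if p > p_ambient]
    if len(pts) < 2:
        raise ValueError("need at least two submerged sensors")
    # Ordinary least-squares fit p = a + b * z over the submerged sensors.
    n = len(pts)
    sz = sum(z for z, _ in pts)
    sp = sum(p for _, p in pts)
    szz = sum(z * z for z, _ in pts)
    szp = sum(z * p for z, p in pts)
    b = (n * szp - sz * sp) / (n * szz - sz * sz)
    a = (sp - b * sz) / n
    # The liquid surface sits where the fitted line crosses ambient pressure.
    return (p_ambient - a) / b
```

Because every submerged sensor contributes to the fit, a single noisy or drifting sensor perturbs the estimate far less than it would in a single-sensor scheme, which is the first advantage listed above.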
Abstract:
2000 Mathematics Subject Classification: 62J12, 62P10.
Abstract:
Analysis of risk measures associated with price series data movements and their prediction is of strategic importance in the financial markets, as well as to policy makers, in particular for short- and long-term planning and for setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument to measure risk and is evaluated by analysing the negative/positive tail of the probability distributions of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analysing correlated data. These linear models are optimal and perform relatively well under conditions such as the errors following normal or approximately normal distributions, the data being free of large outliers, and the Gauss-Markov assumptions being satisfied. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian settings, especially when the errors follow distributions with fat tails, even when the error terms possess a finite variance. This is the situation in risk analysis, which involves analysing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1, L2 and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
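The contrast between the L1 and L2 fits discussed above can be illustrated with a standard iteratively reweighted least-squares (IRLS) scheme for a one-covariate model. This is a generic sketch, not the authors' estimator: it covers 1 ≤ p ≤ 2 (the L∞ fit requires a different, minimax-type algorithm), and the smoothing constant `eps` is an assumption needed to keep the weights finite near zero residuals.

```python
def lp_fit(x, y, p=1.0, iters=100, eps=1e-8):
    """IRLS fit of y ≈ a + b*x under the Lp norm (1 <= p <= 2).

    p = 2 reproduces ordinary least squares in one step; p = 1
    approximates least absolute deviations, which is far less
    sensitive to fat-tailed errors and outliers.
    """
    a, b = 0.0, 0.0
    for _ in range(iters):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # Lp objective as weighted least squares: w_i = |r_i|^(p-2).
        w = [max(abs(ri), eps) ** (p - 2) for ri in r]
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        denom = sw * swxx - swx * swx
        b = (sw * swxy - swx * swy) / denom
        a = (swy - b * swx) / sw
    return a, b
```

On data with one large outlier the p = 1 fit stays near the bulk of the points, while the p = 2 (least-squares) fit is dragged toward the outlier — the failure mode the abstract attributes to LSE-based models under fat tails.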
Abstract:
2010 Mathematics Subject Classification: 68T50, 62H30, 62J05.
Abstract:
This paper explains how Poisson regression can be used in studies in which the dependent variable describes the number of occurrences of some rare event such as suicide. After pointing out why ordinary linear regression is inappropriate for treating dependent variables of this sort, we go on to present the basic Poisson regression model and show how it fits in the broad class of generalized linear models. Then we turn to discussing a major problem of Poisson regression known as overdispersion and suggest possible solutions, including the correction of standard errors and negative binomial regression. The paper ends with a detailed empirical example, drawn from our own research on suicide.
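The basic Poisson regression model described above, log E[y] = β₀ + β₁x, can be fitted with a short Newton-Raphson iteration on the log-likelihood. This is a minimal one-covariate sketch for illustration only — the helper name `poisson_fit` is hypothetical, and a real analysis would use a GLM routine that also reports standard errors and overdispersion diagnostics.

```python
import math

def poisson_fit(x, y, iters=50):
    """Newton-Raphson fit of a Poisson regression log(mu) = b0 + b1 * x."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Score vector (gradient of the Poisson log-likelihood).
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Observed information (negative Hessian), a 2x2 matrix.
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        # Solve the 2x2 Newton system and update the coefficients.
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

After fitting, a Pearson dispersion statistic well above 1 (the sum of (y − μ)²/μ divided by the residual degrees of freedom) signals the overdispersion the paper discusses, for which corrected standard errors or negative binomial regression are the suggested remedies.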
Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation.

In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods considered the influence of correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space. In other words, model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations farther away.

The study area was Broward County, Florida. Broward County lies on the Atlantic coast between Palm Beach and Miami-Dade counties. In this study, a total of 67 variables were considered as potential AADT predictors, and six variables (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal household) were selected to develop the models.

To investigate the predictive powers of the various AADT predictors over space, statistics including the local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in relationships among parameters were investigated, measured, and mapped to assess the usefulness of GWR methods.

The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than the ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and predictors, which cannot be modeled in ordinary linear regression.
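The idea that "model parameters vary from location to location" can be made concrete with a locally weighted fit. Below is a minimal single-predictor sketch using a Gaussian distance kernel; the function name `gwr_local_fit`, the fixed bandwidth, and the one-predictor simplification are illustrative assumptions — the study itself used six predictors and a calibrated bandwidth.

```python
import math

def gwr_local_fit(points, target, bandwidth):
    """Locally weighted (GWR-style) fit of y ≈ a + b*x at one target location.

    points: list of (location, x, y), where location is a (u, v) coordinate.
    target: the (u, v) location at which local parameters are estimated.
    Each observation is weighted by a Gaussian kernel of its distance to
    the target, so nearby observations dominate the local fit.
    """
    w, xs, ys = [], [], []
    for (u, v), xi, yi in points:
        d2 = (u - target[0]) ** 2 + (v - target[1]) ** 2
        w.append(math.exp(-d2 / (2 * bandwidth ** 2)))
        xs.append(xi)
        ys.append(yi)
    # Weighted least squares in closed form for a single predictor.
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, xs))
    swy = sum(wi * yi for wi, yi in zip(w, ys))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    a = (swy - b * swx) / sw
    return a, b
```

Calling this at every node of a grid of target locations yields exactly the kind of maps of local parameter estimates and local r-square that the study examined: two regions where the x-y relationship differs will show different local slopes.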
Abstract:
The purpose of the study was to determine the degree of relationship among GRE scores, undergraduate GPA (UGPA), and success in graduate school, as measured by first-year graduate GPA (FGPA), cumulative graduate GPA, and degree attainment status. A second aim of the study was to determine whether the relationships between the composite predictor (GRE scores and UGPA) and the three success measures differed by race/ethnicity and sex. A total of 7,367 graduate student records (master's: 5,990; doctoral: 1,377) from 2000 to 2010 were used to evaluate the relationships among GRE scores, UGPA and the three success measures. Pearson's correlation, multiple linear and logistic regression, and hierarchical multiple linear and logistic regression analyses were performed to answer the research questions. The results of the correlational analyses differed by degree level. For master's students, the ETS-proposed prediction that GRE scores are valid predictors of first-year graduate GPA was supported by the findings of the present study; for doctoral students, however, the proposed prediction was only partially supported. Regression and correlational analyses indicated that UGPA was the variable that consistently predicted all three success measures for both degree levels. The hierarchical multiple linear and logistic regression analyses indicated that at the master's degree level, White students with higher GRE Quantitative Reasoning Test scores were more likely to attain a degree than Asian American students, while international students with higher UGPA were more likely to attain a degree than White students. The relationships between the three predictors and the three success measures were not significantly different between men and women for either degree level. Findings have implications both for practice and for research. They will provide graduate school administrators with institution-specific validity data for UGPA and GRE scores, which can be referenced in making admission decisions, and will provide empirical and professionally defensible evidence to support the current practice of using UGPA and GRE scores in admission considerations. In addition, new evidence relating to differential prediction will be useful as a resource for future GRE validation researchers.
Abstract:
Background: Autism spectrum disorder (ASD) is multifactorial and is likely the result of complex interactions between multiple environmental and genetic factors. Recently, it has been suggested that each symptom cluster of the disorder, such as poor social communication, may be mediated by different genetic influences. Genes in the oxytocin pathway, which mediates social behaviours in humans, have been studied, with single nucleotide polymorphisms (SNPs) in the oxytocin receptor gene (OXTR) being implicated in ASD. This thesis examines the presence of different oxytocin receptor genotypes and their associations with ASD and the resulting social communication deficits. Methods: The relationship between four OXTR variants and ASD was evaluated in 607 ASD simplex (SPX) families. Cases were compared to their unaffected siblings using a conditional logistic approach. Odds ratios and associated 95 percent confidence intervals were obtained. A second sample of 235 individuals with a diagnosis of ASD was examined to evaluate whether these four OXTR variants were associated with social communication scores on the Autism Diagnostic Interview – Revised (ADI-R). Parameter estimates and associated 95 percent confidence intervals were generated using a linear regression approach. Multiple testing issues were addressed using false discovery rate adjustments. Results: The rs53576 AG genotype was significantly associated with a lower risk of ASD (OR = 0.707, 95% CI: 0.512-0.975). A single genotype (AG) of the rs2254298 marker was found to be significantly associated with higher social communication scores (parameter estimate = 1.833, SE = 0.762, p = 0.0171). This association was also seen in a Caucasian-only subsample and in a subsample with mothers as the respondents. No association remained significant following false discovery rate adjustments. Conclusion: The findings from these studies provide limited support for the role of OXTR SNPs in ASD, especially in social communication skills. The clinical significance of these associations remains unknown; however, it is likely that these associations do not play a role in the severity of symptoms associated with ASD. Rather, they may be important in the appearance of social deficits, given the rs2254298 marker's association with enlarged amygdalae.
Abstract:
Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which strongly affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. Therefore, QR can be widely used to solve problems in econometrics, environmental sciences and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis, with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as that of C. Jennen-Steinmetz and S. Wellek (2005). A method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under hypothesis tests of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user is able to choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
Abstract:
Background: As the global population is ageing, studying cognitive impairments, including dementia, one of the leading causes of disability in old age worldwide, is of fundamental importance to public health. As retirement is a major transition in older age, a focus on the complex impacts of the duration, timing, and voluntariness of retirement on health is important for future policy changes. Longer retirement periods, as well as leaving the workforce early, have been associated with poorer health, including reduced cognitive functioning. These associations are hypothesized to differ based on gender, as well as on pre-retirement educational and occupational experiences, and on post-retirement social factors and health conditions. Methods: A cross-sectional study is conducted to determine the relationship between the duration and timing of retirement and cognitive function, using data from the five sites of the International Mobility in Aging Study (IMIAS). Cognitive function is assessed using Leganes Cognitive Test (LCT) scores in 2012. Data are analyzed using multiple linear regressions. Analyses are also done separately by site/region (Canada, Latin America, and Albania). Robustness checks are done with an analysis of cognitive change from 2012 to 2014 and of the effect of the voluntariness of retirement on cognitive function. An instrumental variable (IV) approach is also applied to the cross-sectional and longitudinal analyses as a robustness check to address the potential endogeneity of the retirement variable. Results: Descriptive statistics highlight differences between men and women, as well as between sites. In the linear regression analysis, there was no relationship between the timing or duration of retirement and cognitive function in 2012 when adjusting for site/region. There was no association between retirement characteristics and cognitive function in site/region-stratified analyses. In the IV analysis, longer retirement and on-time or late retirement were associated with lower cognitive function among men, while there was no relationship between retirement characteristics and cognitive function among women. Conclusions: While the results of the thesis suggest a negative effect of retirement on cognitive function, especially among men, the relationship remains uncertain. A lack of power prevents drawing conclusions for the site/region-specific analyses and the site-adjusted analysis in both the linear and IV regressions.
Abstract:
v. 19, n. 2, Apr./June 2016.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model or, in the case of non-normally distributed trait values, using the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equi-distant genotype scores, the Kruskal-Wallis test merely tests global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations in which only a few trait values are available in a rare genotype category (imbalance), or in which the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise-comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches. We provide a publicly available R package, nparcomp, that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.
Abstract:
We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. When the observations of the unknown quantity of interest and the observation operators are known, these inverse problems are concerned with the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of the Tikhonov regularization method, the so-called Hierarchical Reconstruction (HR) method. The first introduction of the HR method can be traced back to the Hierarchical Decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. As the sum of all the hierarchical terms, the hierarchical sum from the HR method provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. When compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer self-control of the approximation distance between the hierarchical sum and the unknown, thanks to its use of a ladder of finitely many hierarchical scales. We report numerical experiments supporting our claims about the advantages the HR method has over the Tikhonov regularization method.
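The Tikhonov baseline that the abstract builds on has a standard closed form when the stabilizing function is the quadratic $\lambda\|x\|_2^2$; the display below is that textbook case, not the thesis's HR iteration, and the geometric parameter schedule in the residual recursion is an assumption following the image-processing literature on hierarchical decomposition:

```latex
x_\lambda \;=\; \arg\min_{x}\; \|Ax - b\|_2^2 + \lambda \|x\|_2^2,
\qquad
\bigl(A^{\mathsf T} A + \lambda I\bigr)\, x_\lambda \;=\; A^{\mathsf T} b .
```

The hierarchical step then iterates this solve on residuals: with $r_0 = b$, each term $x_j$ solves the Tikhonov problem for data $r_j$ at a finer scale (e.g. $\lambda_j = \lambda_0 2^{-j}$), the residual is updated as $r_{j+1} = r_j - A x_j$, and the hierarchical sum $\sum_j x_j$ approximates the unknown.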