972 results for linear weighting methods
Abstract:
Complex human diseases are a major challenge for biological research. The goal of my research is to develop effective biostatistical methods in order to create more opportunities for the prevention and cure of human diseases. This dissertation proposes statistical technologies that can be adapted to sequencing data in family-based designs, and that account for joint effects as well as gene-gene and gene-environment interactions in GWA studies. The framework includes statistical methods for rare and common variant association studies. Although next-generation DNA sequencing technologies have made rare variant association studies feasible, the development of powerful statistical methods for rare variant association studies is still underway. Chapter 2 proposes two adaptive weighting methods for rare variant association studies based on family data for quantitative traits. The results show that both proposed methods are robust to population stratification, robust to the direction and magnitude of the effects of causal variants, and more powerful than the methods using weights suggested by Madsen and Browning [2009]. In Chapter 3, I extended the previously proposed test for Testing the effect of an Optimally Weighted combination of variants (TOW) [Sha et al., 2012] for unrelated individuals to TOW-F, a TOW test for family-based designs. Simulation results show that TOW-F can control for population stratification in a wide range of population structures, including spatially structured populations, is robust to the directions of the effects of causal variants, and is relatively robust to the percentage of neutral variants. For GWA studies, this dissertation presents a two-locus joint-effect analysis and a two-stage approach accounting for gene-gene and gene-environment interactions. Chapter 4 proposes a novel two-stage approach that is promising for identifying joint effects, especially under monotonic models. The proposed approach outperforms a single-marker method and a regular two-stage analysis based on the two-locus genotypic test. In Chapter 5, I proposed a gene-based two-stage approach to identify gene-gene and gene-environment interactions in GWA studies that can include rare variants. The two-stage approach is applied to the GAW 17 dataset to identify the interaction between the KDR gene and smoking status.
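As a concrete point of reference for the weighting scheme this abstract benchmarks against, here is a minimal Python sketch of Madsen-Browning-style weights and the weighted burden score they produce. The pseudo-count frequency estimate and the toy genotype matrix are assumptions for illustration; the original weights estimate allele frequencies in unaffected individuals.

```python
import numpy as np

def madsen_browning_weights(genotypes):
    """Weights in the spirit of Madsen & Browning (2009): rarer variants
    get larger weights, w_j = 1 / sqrt(n * q_j * (1 - q_j)).

    genotypes: (n_subjects, n_variants) array of 0/1/2 minor-allele counts.
    """
    n = genotypes.shape[0]
    # Estimated minor-allele frequency with a pseudo-count, so variants
    # unobserved in the sample still receive a finite weight.
    q = (genotypes.sum(axis=0) + 1) / (2 * n + 2)
    return 1.0 / np.sqrt(n * q * (1 - q))

def weighted_burden_scores(genotypes):
    """Collapse a region's variants into one weighted burden score per
    subject; the scores are then tested for association with the trait."""
    w = madsen_browning_weights(genotypes)
    return genotypes @ w

# Toy usage: 100 subjects, 20 rare variants.
rng = np.random.default_rng(0)
G = rng.binomial(2, 0.01, size=(100, 20))
scores = weighted_burden_scores(G)
```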
Abstract:
Correlation and regression are two of the statistical procedures most widely used by optometrists. However, these tests are often misused or interpreted incorrectly, leading to erroneous conclusions from clinical experiments. This review examines the major statistical tests concerned with correlation and regression that are most likely to arise in clinical investigations in optometry. First, the use, interpretation and limitations of Pearson's product moment correlation coefficient are described. Second, the least squares method of fitting a linear regression to data, and methods for testing how well a regression line fits the data, are described. Third, the problems of using linear regression methods in observational studies, when there are errors associated with measuring the independent variable, and for predicting a new value of Y for a given X, are discussed. Finally, methods for testing whether a non-linear relationship provides a better fit to the data, and for comparing two or more regression lines, are considered.
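For readers who want to try the first two procedures directly, here is a small, self-contained Python sketch; the data are synthetic and the variable names hypothetical. It computes Pearson's product moment correlation and a least squares line with its goodness-of-fit statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)                  # e.g. a clinical predictor
y = 2.0 + 0.5 * x + rng.normal(0, 1, 50)    # response with noise

# Pearson's product moment correlation and its two-sided p-value.
r, p_corr = stats.pearsonr(x, y)

# Least squares linear regression: slope, intercept, and goodness of fit.
fit = stats.linregress(x, y)
print(f"r = {r:.3f} (p = {p_corr:.3g})")
print(f"y = {fit.intercept:.2f} + {fit.slope:.2f} x, R^2 = {fit.rvalue**2:.3f}")
```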
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data and, although they can capture non-linear structure locally, they struggle to capture global non-linear structures in the data. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
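GTM itself is not part of the common Python libraries, but the initialisation idea can be sketched: a non-linear projection such as Isomap supplies latent coordinates that a PCA initialisation cannot, as in this minimal scikit-learn example on the Swiss roll. Seeding an actual GTM from Z_iso is the assumed next step, not shown here.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Linear initialisation (the usual GTM default) versus a non-linear one.
Z_pca = PCA(n_components=2).fit_transform(X)
Z_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
# Z_iso could seed the latent positions of a GTM-style model, letting it
# start from an unrolled configuration that PCA cannot provide.
```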
Abstract:
Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores, and inappropriate scheduling may cause hot spots that decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate a wide range of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler that uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find several drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. EAs also improve performance when used on heterogeneous architectures, and an efficient Pareto front can be obtained with EAs when multiple objectives are pursued.
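The contrast this abstract draws, linear weighting versus a Pareto front, can be made concrete in a few lines of Python; the candidate schedules and their two objective values below are hypothetical stand-ins.

```python
import numpy as np

def weighted_sum(objectives, weights):
    """Linear weighting: collapse a multi-objective vector (minimise
    each column) into one scalar score per candidate schedule."""
    return objectives @ np.asarray(weights)

def pareto_front(objectives):
    """Indices of non-dominated rows (minimisation): no other row is
    <= in every objective and strictly < in at least one."""
    n = objectives.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(objectives[j] <= objectives[i]) and
            np.any(objectives[j] < objectives[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Toy objectives for 6 candidate schedules: (makespan, peak temperature).
F = np.array([[10, 80], [12, 70], [9, 95], [11, 72], [15, 60], [10, 90]])
print(weighted_sum(F, [0.5, 0.5]))  # one fixed trade-off only
print(pareto_front(F))              # the whole trade-off set
```

The weighted sum commits to a single trade-off chosen in advance, which is the constant-parameter dependence criticised above; the Pareto filter keeps every non-dominated schedule and defers the trade-off decision.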
Abstract:
We propose a robust and low complexity scheme to estimate and track carrier frequency from signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieval of frequency content by a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate by extensive simulations that adaptive linear prediction methods render a robust and competitive frequency tracking technique.
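A minimal sketch of the core idea, two adaptive linear predictors mixed by a convex combiner that is itself adapted on the combined error, is given below in Python. The step sizes, filter order, and sigmoid parametrisation of the mixing weight are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def convex_combo_predict(x, order=4, mu_fast=0.05, mu_slow=0.005, mu_a=1.0):
    """One-step-ahead prediction of x[n] from its last `order` samples,
    using two LMS linear predictors (fast/slow) mixed by a convex
    combiner lam = sigmoid(a), adapted on the combined error."""
    n = len(x)
    w1 = np.zeros(order); w2 = np.zeros(order); a = 0.0
    y = np.zeros(n)
    for t in range(order, n):
        u = x[t-order:t][::-1]          # regressor: most recent sample first
        y1, y2 = w1 @ u, w2 @ u
        lam = 1.0 / (1.0 + np.exp(-a))
        y[t] = lam * y1 + (1 - lam) * y2
        e = x[t] - y[t]
        # Independent LMS updates for each predictor in the bank.
        w1 += mu_fast * (x[t] - y1) * u
        w2 += mu_slow * (x[t] - y2) * u
        # Stochastic-gradient update of the mixing parameter.
        a += mu_a * e * (y1 - y2) * lam * (1 - lam)
    return y

# Toy chirp at moderate SNR: the combiner tracks whichever filter does better.
t = np.arange(4000)
x = np.cos(2*np.pi*(0.05 + 1e-5*t)*t) + 0.5*np.random.default_rng(2).normal(size=4000)
y = convex_combo_predict(x)
```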
Abstract:
BACKGROUND AND PURPOSE: Several morphometric MR imaging studies have investigated age- and sex-related cerebral volume changes in healthy human brains, most often by using samples spanning several decades of life and linear correlation methods. This study aimed to map the normal pattern of regional age-related volumetric reductions specifically in the elderly population. MATERIALS AND METHODS: One hundred thirty-two eligible individuals (67-75 years of age) were selected from a community-based sample recruited for the Sao Paulo Ageing and Health (SPAH) study, and a cross-sectional MR imaging investigation was performed concurrently with the second SPAH wave. We used voxel-based morphometry (VBM) to conduct a voxelwise search for significant linear correlations between gray matter (GM) volumes and age. In addition, region-of-interest masks were used to investigate whether the relationship between regional GM (rGM) volumes and age would be best predicted by a nonlinear model. RESULTS: VBM and region-of-interest analyses revealed selective foci of accelerated rGM loss exclusively in men, involving the temporal neocortex, prefrontal cortex, and medial temporal region. The only structure in which GM volumetric changes were best predicted by a nonlinear model was the left parahippocampal gyrus. CONCLUSIONS: The variable patterns of age-related GM loss across separate neocortical and temporolimbic regions highlight the complexity of degenerative processes that affect the healthy human brain across the life span. The detection of age-related rGM decrease in men supports the view that atrophy in such regions should be seen as compatible with normal aging.
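The linear-versus-nonlinear model comparison described in the region-of-interest analysis can be illustrated with ordinary least squares in Python. The age range matches the study, but the volume data below are simulated, and the AIC comparison is a generic stand-in for the study's actual model-selection procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
age = rng.uniform(67, 75, 132)                    # same n and age range as SPAH
vol = 6.0 - 0.02*age - 0.004*(age - 71)**2 + rng.normal(0, 0.05, 132)

# Linear model: volume ~ age.  Nonlinear model: volume ~ age + age^2.
X_lin = sm.add_constant(age)
X_quad = sm.add_constant(np.column_stack([age, age**2]))
fit_lin = sm.OLS(vol, X_lin).fit()
fit_quad = sm.OLS(vol, X_quad).fit()

# A lower AIC (or a significant quadratic term) favours the nonlinear model.
print(fit_lin.aic, fit_quad.aic)
print(fit_quad.pvalues[-1])   # p-value of the age^2 coefficient
```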
Abstract:
In this work, a new method of optimization is successfully applied to the theoretical design of compact, actively shielded, clinical MRI magnets. The problem is formulated as a two-step process in which the desired current densities on multiple, co-axial surface layers are first calculated by solving Fredholm equations of the first kind. Non-linear optimization methods with inequality constraints are then invoked to fit practical magnet coils to the desired current densities. The current density approach allows rapid prototyping of unusual magnet designs. The emphasis of this work is on the optimal design of short, actively-shielded MRI magnets for whole-body imaging. Details of the hybrid numerical model are presented, and the model is used to investigate compact, symmetric, and asymmetric MRI magnets. Magnet designs are presented for actively-shielded, symmetric magnets of coil length 1.0 m, which is considerably shorter than currently available designs of comparable dsv size. Novel, actively-shielded, asymmetric magnet designs are also presented in which the beginning of a 50-cm dsv is positioned just 11 cm from the end of the coil structure, allowing much improved access to the patient and reduced patient claustrophobia. Magn Reson Med 45:331540, 2001. (C) 2001 Wiley-Liss, Inc.
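The first step of the two-step process, solving a Fredholm equation of the first kind for a desired current density, is ill-posed and in practice needs regularisation. The Python sketch below shows a Tikhonov-regularised solve on a stand-in kernel; the kernel, grid sizes, and regularisation weight are all illustrative assumptions, not the paper's actual field model.

```python
import numpy as np

# Discretised Fredholm equation of the first kind: b = K @ j, where K maps
# a surface current density j to the field sampled over the dsv, and b is
# the desired (homogeneous) field.  K here is a smooth stand-in kernel.
m, n = 200, 80
s = np.linspace(0, 1, n)           # positions on one coil-surface layer
p = np.linspace(0, 1, m)           # field sample points
K = 1.0 / (1.0 + 25.0*(p[:, None] - s[None, :])**2)
b = np.ones(m)                      # target: uniform field on the dsv

# First-kind problems are ill-posed, so regularise (Tikhonov):
# minimise ||K j - b||^2 + alpha ||j||^2  =>  (K^T K + alpha I) j = K^T b.
alpha = 1e-4
j = np.linalg.solve(K.T @ K + alpha*np.eye(n), K.T @ b)
```

The regularised current density j would then be the target that the constrained non-linear coil-fitting step tries to realise with practical windings.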
Abstract:
The principal aim of this paper is to measure the amount by which the profit of a multi-input, multi-output firm deviates from maximum short-run profit, and then to decompose this profit gap into components that are of practical use to managers. In particular, our interest is in the measurement of the contribution of unused capacity, along with measures of technical inefficiency, and allocative inefficiency, in this profit gap. We survey existing definitions of capacity and, after discussing their shortcomings, we propose a new ray economic capacity measure that involves short-run profit maximisation, with the output mix held constant. We go on to describe how the gap between observed profit and maximum profit can be calculated and decomposed using linear programming methods. The paper concludes with an empirical illustration, involving data on 28 international airline companies. The empirical results indicate that these airline companies achieve profit levels which are on average US$815m below potential levels, and that 70% of the gap may be attributed to unused capacity. (C) 2002 Elsevier Science B.V. All rights reserved.
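The kind of linear program behind such a decomposition can be sketched briefly: maximise short-run profit over a technology spanned by observed firms, then compare each firm's observed profit with the maximum. The data, prices, and variable-returns-to-scale formulation below are illustrative assumptions, not the paper's exact model, which holds the output mix constant in its ray measure.

```python
import numpy as np
from scipy.optimize import linprog

# Observed inputs X (n_firms x m) and outputs Y (n_firms x s), plus prices.
X = np.array([[4.0, 2.0], [6.0, 3.0], [9.0, 5.0]])   # e.g. fuel, labour
Y = np.array([[5.0], [8.0], [10.0]])                 # e.g. tonne-km
w = np.array([1.0, 2.0])                             # input prices
p = np.array([3.0])                                  # output price

n, m = X.shape
s = Y.shape[1]
# Decision vector z = [y (s), x (m), lam (n)]; maximise p'y - w'x,
# i.e. minimise -p'y + w'x.
c = np.concatenate([-p, w, np.zeros(n)])
# Feasibility w.r.t. the VRS technology spanned by observed firms:
#   y <= Y' lam,  x >= X' lam,  sum(lam) = 1,  lam >= 0.
A_ub = np.block([
    [np.eye(s), np.zeros((s, m)), -Y.T],   # y - Y' lam <= 0
    [np.zeros((m, s)), -np.eye(m), X.T],   # X' lam - x <= 0
])
b_ub = np.zeros(s + m)
A_eq = np.concatenate([np.zeros(s + m), np.ones(n)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
max_profit = -res.fun
# Profit gap for an observed firm = max profit - its observed profit.
print(max_profit, max_profit - (p @ Y[0] - w @ X[0]))
```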
Abstract:
The supplier/partner selection problem is an integral and important part of companies that aim for a competitive and profitable performance in their area of activity. Choosing the best supplier/partner usually involves a careful analysis of the factors that can positively or negatively influence that choice. This problem has long been the subject of numerous studies, which focus essentially on the criteria to consider and the methodologies to adopt in order to optimise the choice of partners. Among the various studies carried out, many consider product cost, quality, delivery, and the supplier's reputation as key criteria. Even so, many other criteria are mentioned, most of which appear as sub-criteria. In the scope of this work, five broad criteria were identified: Quality, Financial System, Synergies, Cost, and Production System. Within these criteria, it was necessary to include some sub-criteria, so each key criterion has five sub-criteria. Once the criteria were identified, it was necessary to understand how they are applied and which models are used to make the best use of the information. Knowing that some models favour mathematical programming while others use linear weightings to identify the best supplier, a survey was carried out and companies were contacted in order to understand which factors carried the most weight in their partner-selection decisions. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The proposed model has a hierarchical structure and can be applied with Saaty's AHP method or with the Value Analysis method. This model makes it possible to choose the alternative or alternatives that best suit the companies' requirements.
Abstract:
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, five broad selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included for each. After identifying the criteria, a survey was elaborated and companies were contacted in order to understand which factors have more weight in their decisions when choosing partners. Once the results were interpreted and the data processed, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or with Value Analysis. The goal of the paper is to supply a selection reference model that can serve as an orientation/pattern for decision making in the supplier/partner selection process.
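To make the hierarchical linear weighting concrete, the Python sketch below derives criterion weights from a Saaty-style pairwise comparison matrix (the principal eigenvector method) and then scores suppliers by a weighted sum. All judgements and ratings in it are invented for illustration, not taken from the survey.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix
    (Saaty's AHP): the principal right eigenvector, normalised to sum 1."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    v = np.abs(vecs[:, k].real)
    return v / v.sum()

# Pairwise judgements over the five criteria (Quality, Financial,
# Synergies, Cost, Production System); values are illustrative only.
A = np.array([
    [1,   3,   5,   2,   4],
    [1/3, 1,   3,   1/2, 2],
    [1/5, 1/3, 1,   1/4, 1/2],
    [1/2, 2,   4,   1,   3],
    [1/4, 1/2, 2,   1/3, 1],
], dtype=float)
w = ahp_priorities(A)

# Linear weighting: each supplier's score is the weighted sum of its
# per-criterion ratings (rows = suppliers, columns = criteria).
ratings = np.array([[0.8, 0.6, 0.5, 0.7, 0.9],
                    [0.6, 0.9, 0.7, 0.8, 0.5]])
scores = ratings @ w
best = scores.argmax()
```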
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics and Maastricht University School of Business and Economics
Abstract:
Olive oil quality grading is traditionally assessed by human sensory evaluation of positive and negative attributes (olfactory, gustatory, and final olfactory-gustatory sensations). However, it is not guaranteed that trained panelists can correctly classify monovarietal extra-virgin olive oils according to olive cultivar. In this work, the potential application of human (sensory panelists) and artificial (electronic tongue) sensory evaluation of olive oils was studied, aiming to discriminate eight single-cultivar extra-virgin olive oils. Linear discriminant, partial least squares discriminant, and sparse partial least squares discriminant analyses were evaluated. The best predictive classification was obtained using linear discriminant analysis with a simulated annealing selection algorithm. A low-level data fusion approach (18 electronic tongue signals and nine sensory attributes) enabled 100 % leave-one-out cross-validation correct classification, improving on the discrimination capability of the individual use of sensor profiles or sensory attributes (70 and 57 % leave-one-out correct classifications, respectively). Thus, human sensory evaluation and electronic tongue analysis may be used as complementary tools, allowing successful monovarietal olive oil discrimination.
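The low-level fusion step, concatenating the two feature blocks before fitting one linear discriminant classifier and validating it leave-one-out, looks like this in scikit-learn. The data here are random placeholders with the stated dimensionalities (18 signals, 9 attributes, 8 cultivars), so the printed accuracy is not meaningful.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Low-level data fusion: concatenate e-tongue signals and sensory
# attributes into one feature matrix before fitting a single classifier.
rng = np.random.default_rng(5)
n_oils, n_cultivars = 64, 8
tongue = rng.normal(size=(n_oils, 18))     # 18 e-tongue signals
sensory = rng.normal(size=(n_oils, 9))     # 9 sensory attributes
X = np.hstack([tongue, sensory])
y = np.repeat(np.arange(n_cultivars), n_oils // n_cultivars)

# Leave-one-out cross-validated classification accuracy.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2%}")
```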
Abstract:
Solid phase microextraction (SPME) has been widely used for many years in various applications, such as environmental and water samples, food and fragrance analysis, or biological fluids. The aim of this study was to propose the SPME method as an alternative to conventional techniques used in the evaluation of worker exposure to benzene, toluene, ethylbenzene, and xylene (BTEX). Polydimethylsiloxane-carboxen (PDMS/CAR) proved to be the most effective stationary phase material for sorbing BTEX among the materials tested (polyacrylate, PDMS, PDMS/divinylbenzene, Carbowax/divinylbenzene). Various experimental conditions were studied to apply SPME to BTEX quantitation in field situations. The uptake rate of the selected fiber (75 microm PDMS/CAR) was determined for each analyte at various concentrations, relative humidities, and airflow velocities from static (calm air) to dynamic (> 200 cm/s) conditions. The SPME method was also compared with the National Institute for Occupational Safety and Health method 1501. Unlike the latter, the SPME approach fulfills the new requirement for the threshold limit value-short term exposure limit (TLV-STEL) of 2.5 ppm for benzene (8 mg/m³).
Abstract:
The variation with latitude of incidence and mortality for cutaneous malignant melanoma (CMM) in the non-Maori population of New Zealand was assessed. For those aged 20 to 74 years, the effects of age, time period, birth-cohort, gender, and region (latitude), and some interactions between them were evaluated by log-linear regression methods. Increasing age-standardized incidence and mortality rates with increasing proximity to the equator were found for men and women. These latitude gradients were greater for males than females. The relative risk of melanoma in the most southern part of New Zealand (latitude 44 degrees S) compared with the most northern region (latitude 36 degrees S) was 0.63 (95 percent confidence interval [CI] = 0.60-0.67) for incidence and 0.76 (CI = 0.68-0.86) for mortality, both genders combined. The mean percentage change in CMM rates per degree of latitude for males was greater than those reported in other published studies. Differences between men and women in melanoma risk with latitude suggest that regional sun-behavior patterns or other risk factors may contribute to the latitude gradient observed.
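A log-linear (Poisson) regression of the kind used here can be sketched with statsmodels: case counts with person-years as exposure regressed on latitude, where exp(coefficient) is the rate ratio per degree. The counts below are invented, and this single-covariate model omits the age, period, cohort, and gender terms the study adjusted for.

```python
import numpy as np
import statsmodels.api as sm

# Toy registry data: melanoma case counts by region, with person-years
# as exposure; latitude in degrees south.  Values are illustrative only.
latitude = np.array([36, 38, 40, 42, 44], dtype=float)
cases = np.array([520, 470, 410, 350, 300])
person_years = np.full(5, 1e6)

X = sm.add_constant(latitude)
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               exposure=person_years).fit()

# exp(beta) is the rate ratio per degree of latitude; the 8-degree
# contrast (44S vs 36S) gives the kind of relative risk quoted above.
rr_per_degree = np.exp(model.params[1])
print(rr_per_degree, rr_per_degree ** (44 - 36))
```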
Abstract:
Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. Therefore, the compression of hyperspectral data is an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised and, in these cases, lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are studied. In the first method, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second prediction method, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. Two transform-based methods are also presented. Vector Quantization (VQ) was used together with a new coding of the residual image. In addition, we have developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed and used to speed up the Linde-Buzo-Gray (LBG) clustering method.
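The first prediction scheme, clustering the spectra and fitting one optimized linear predictor per cluster, can be sketched as follows. The clustering algorithm (k-means standing in for LBG), the predictor order, and the toy data are assumptions; a real codec would also transmit the coefficients and entropy-code the residuals.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_linear_prediction(spectra, n_clusters=4, order=2):
    """Sketch of the first scheme: cluster the spectra, fit one linear
    predictor per cluster (band b predicted from the previous `order`
    bands), and return the integer residuals that would be entropy-coded."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(spectra)
    residuals = np.zeros_like(spectra)
    residuals[:, :order] = spectra[:, :order]     # first bands stored raw
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        for b in range(order, spectra.shape[1]):
            # Least-squares predictor coefficients for this cluster/band.
            U = spectra[idx, b-order:b]
            coef, *_ = np.linalg.lstsq(U, spectra[idx, b], rcond=None)
            pred = np.rint(U @ coef)
            residuals[idx, b] = spectra[idx, b] - pred
    return residuals, labels

# Toy image: 500 pixels x 32 bands of smooth integer spectra.
rng = np.random.default_rng(6)
base = np.cumsum(rng.integers(-3, 4, size=(500, 32)), axis=1) + 100
res, _ = clustered_linear_prediction(base.astype(float))
# Lossless: a decoder holding the same coefficients and cluster labels
# recomputes each prediction from earlier bands and adds back the integer
# residual, recovering the spectra exactly.
```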