856 results for Random regression models


Relevance:

90.00%

Publisher:

Abstract:

Virtual metrology (VM) aims to predict metrology values using sensor data from production equipment and physical metrology values of preceding samples. VM is a promising technology for the semiconductor manufacturing industry as it can reduce the frequency of in-line metrology operations and provide supportive information for other operations such as fault detection, predictive maintenance and run-to-run control. The prediction models for VM can be drawn from a large variety of linear and nonlinear regression methods, and the selection of a proper regression method for a specific VM problem is not straightforward, especially when the candidate predictor set is high-dimensional, correlated and noisy. Using process data from a benchmark semiconductor manufacturing process, this paper evaluates the performance of four typical regression methods for VM: multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), neural networks (NN) and Gaussian process regression (GPR). It is observed that GPR performs the best among the four methods and that, remarkably, the performance of linear regression approaches that of GPR as the subset of selected input variables is increased. The observed competitiveness of high-dimensional linear regression models, which does not hold true in general, is explained in the context of extreme learning machines and functional link neural networks.
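A comparison of the four regression families named in this abstract can be sketched with scikit-learn. The dataset below (correlated, noisy "sensor" inputs and a linear metrology response) and all hyperparameters are invented for illustration; this is not the paper's benchmark process data.

```python
# Illustrative comparison of MLR, LASSO, NN and GPR on synthetic data.
# The data-generating process and hyperparameters are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                         # "sensor" readings
X[:, 10:] = X[:, :10] + 0.1 * rng.normal(size=(300, 10))  # correlated columns
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=300)  # metrology value

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "MLR": LinearRegression(),
    "LASSO": Lasso(alpha=0.01),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "GPR": GaussianProcessRegressor(),
}
scores = {name: mean_squared_error(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
print(scores)  # test-set MSE per method
```

On a real VM problem the same loop would be wrapped in cross-validation and feature selection, since the abstract's key finding concerns how performance changes with the size of the selected input subset.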

Relevance:

90.00%

Publisher:

Abstract:

Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species’ trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems.
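The random-matrix stability test underlying this line of work is short to state: build a community matrix with uniform self-regulation on the diagonal and random interactions off it, then check whether every eigenvalue has negative real part. The sketch below uses illustrative sizes and parameter values, not the paper's 21 empirical matrices.

```python
# Minimal sketch of a random community-matrix stability check.
# n and the interaction-strength scale are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                    # number of species
A = rng.normal(0.0, 0.2, size=(n, n))     # random pairwise interactions
np.fill_diagonal(A, -1.0)                 # comparable intraspecific competition

eigs = np.linalg.eigvals(A)               # local stability <=> all Re < 0
stable = bool(np.all(eigs.real < 0))
print("max Re(lambda) =", eigs.real.max(), "| stable:", stable)
```

Repeating this over many random draws gives the probability-of-stability baseline that the empirical matrices are compared against.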

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: To investigate the impact of smoking and smoking cessation on cardiovascular mortality, acute coronary events, and stroke events in people aged 60 and older, and to calculate and report risk advancement periods for cardiovascular mortality in addition to traditional epidemiological relative risk measures.

DESIGN: Individual participant meta-analysis using data from 25 cohorts participating in the CHANCES consortium. Data were harmonised, analysed separately employing Cox proportional hazards regression models, and combined by meta-analysis.

RESULTS: Overall, 503,905 participants aged 60 and older were included in this study, of whom 37,952 died from cardiovascular disease. Random effects meta-analysis of the association of smoking status with cardiovascular mortality yielded a summary hazard ratio of 2.07 (95% CI 1.82 to 2.36) for current smokers and 1.37 (1.25 to 1.49) for former smokers compared with never smokers. Corresponding summary estimates for risk advancement periods were 5.50 years (4.25 to 6.75) for current smokers and 2.16 years (1.38 to 2.39) for former smokers. The excess risk in smokers increased with cigarette consumption in a dose-response manner, and decreased continuously with time since smoking cessation in former smokers. Relative risk estimates for acute coronary events and for stroke events were somewhat lower than for cardiovascular mortality, but patterns were similar.

CONCLUSIONS: Our study corroborates and expands evidence from previous studies in showing that smoking is a strong independent risk factor of cardiovascular events and mortality even at older age, advancing cardiovascular mortality by more than five years, and demonstrating that smoking cessation in these age groups is still beneficial in reducing the excess risk.
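The random effects pooling of cohort-specific hazard ratios described above can be sketched with the DerSimonian-Laird estimator. The per-cohort hazard ratios and standard errors below are invented for illustration and are not the CHANCES estimates.

```python
# Hedged sketch of DerSimonian-Laird random-effects pooling of log HRs.
# The five cohort estimates are hypothetical.
import numpy as np

log_hr = np.log(np.array([2.3, 1.8, 2.0, 2.5, 1.9]))   # invented cohort HRs
se = np.array([0.15, 0.12, 0.20, 0.18, 0.10])           # invented std errors

w = 1.0 / se**2                                         # fixed-effect weights
mu_fe = np.sum(w * log_hr) / np.sum(w)
Q = np.sum(w * (log_hr - mu_fe) ** 2)                   # Cochran's Q
k = len(log_hr)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)                             # random-effects weights
mu_re = np.sum(w_re * log_hr) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print("pooled HR = %.2f (95%% CI %.2f to %.2f)"
      % (np.exp(mu_re), np.exp(mu_re - 1.96 * se_re), np.exp(mu_re + 1.96 * se_re)))
```

The same machinery, applied to log hazard ratios from each harmonised cohort, yields summary estimates of the kind reported in the RESULTS section.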

Relevance:

90.00%

Publisher:

Abstract:

Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
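A toy version of the generative setting described above, a random graph with discrete vertex labels and edge weights drawn from a Beta mixture, can be written in a few lines. The label set, edge probability, and mixture parameters below are all hypothetical; the paper's model learns the BMM parameters from real networks rather than fixing them.

```python
# Illustrative labelled, Beta-mixture-weighted random graph generator.
# All parameters (labels, p, mixture weights, Beta shapes) are invented.
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 0.2                            # vertices, edge probability
labels = rng.choice(["A", "B", "C"], size=n)   # discrete vertex labels

mix_w = [0.6, 0.4]                        # mixture weights
betas = [(2.0, 5.0), (5.0, 2.0)]          # (a, b) shape pairs per component

edges = {}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            comp = rng.choice(2, p=mix_w)      # pick a mixture component
            a, b = betas[comp]
            edges[(i, j)] = rng.beta(a, b)     # edge weight in (0, 1)

print(len(edges), "edges; sample weight:", round(next(iter(edges.values())), 3))
```

Because Beta variates live on (0, 1), real-valued edge weights are typically rescaled to that interval before fitting the mixtures.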

Relevance:

90.00%

Publisher:

Abstract:

Objective: To study the population distribution and longitudinal changes in anterior chamber angle width and its determinants among Chinese adults. Design: Prospective cohort, population-based study. Participants: Persons aged 35 years or more residing in Guangzhou, China, who had not previously undergone incisional or laser eye surgery. Methods: In December 2008 and December 2010, all subjects underwent automated keratometry, and a random 50% sample had anterior segment optical coherence tomography with measurement of angle-opening distance at 500 μm (AOD500), angle recess area (ARA), iris thickness at 750 μm (IT750), iris curvature, pupil diameter, corneal thickness, anterior chamber width (ACW), lens vault (LV), and lens thickness (LT) and measurement of axial length (AL) and anterior chamber depth (ACD) by partial coherence laser interferometry. Main Outcome Measures: Baseline and 2-year change in AOD500 and ARA in the right eye. Results: A total of 745 subjects were present for full biometric testing in both 2008 and 2010 (mean age at baseline, 52.2 years; standard deviation [SD], 11.5 years; 53.7% were female). Test completion rates in 2010 varied from 77.3% (AOD500: 576/745) to 100% (AL). Mean AOD500 decreased from 0.25 mm (SD, 0.13 mm) in 2008 to 0.21 mm (SD, 0.13 mm) in 2010 (difference, -0.04; 95% confidence interval [CI], -0.05 to -0.03). The ARA decreased from (21.5 ± 3.73) × 10⁻² mm² to (21.0 ± 3.64) × 10⁻² mm² (difference, -0.46; 95% CI, -0.52 to -0.41). The decrease in both was most pronounced among younger subjects and those with baseline AOD500 in the widest quartile. The following baseline variables were significantly associated with a greater 2-year decrease in both AOD500 and ARA: deeper ACD, steeper iris curvature, smaller LV, greater ARA, and greater AOD500. 
By using simple regression models, we could explain 52% to 58% and 93% of variation in baseline AOD500 and ARA, respectively, but only 27% and 16% of variation in 2-year change in AOD500 and ARA, respectively. Conclusions: Younger persons and those with the least crowded anterior chambers at baseline have the largest 2-year decreases in AOD500 and ARA. The ability to predict change in angle width based on demographic and biometric factors is relatively poor, which may have implications for screening. Financial Disclosure(s): The author(s) have no proprietary or commercial interest in any materials discussed in this article. © 2012 American Academy of Ophthalmology.

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: To assess the impact of laser peripheral iridotomy (LPI) on forward-scatter of light and subjective visual symptoms and to identify LPI parameters influencing these phenomena. DESIGN: Cohort study derived from a randomized trial, using an external control group. PARTICIPANTS: Chinese subjects initially aged 50 or older and 70 years or younger with bilateral narrow angles undergoing LPI in 1 eye selected at random, and age- and gender-matched controls. METHODS: Eighteen months after laser, LPI-treated subjects underwent digital iris photography and photogrammetry to characterize the size and location of the LPI, Lens Opacity Classification System III cataract grading, and measurement of retinal straylight (C-Quant; OCULUS, Wetzlar, Germany) in the treated and untreated eyes and completed a visual symptoms questionnaire. Controls answered the questionnaire and underwent straylight measurement and (in a random one-sixth sample) cataract grading. MAIN OUTCOME MEASURES: Retinal straylight levels and subjective visual symptoms. RESULTS: Among 230 LPI-treated subjects (121 [58.8%] with LPI totally covered by the lid, 43 [19.8%] with LPI partly covered by the lid, 53 [24.4%] with LPI uncovered by the lid), 217 (94.3%) completed all testing, as did 250 (93.3%) of 268 controls. Age, gender, and prevalence of visual symptoms did not differ between treated subjects and controls, although nuclear (P<0.01) and cortical (P = 0.03) cataract were less common among controls. Neither presenting visual acuity nor straylight score differed between the treated and untreated eyes among all treated persons, nor among those (n = 96) with LPI partially or totally uncovered. Prevalence of subjective glare did not differ significantly between participants with totally covered LPI (6.61%; 95% confidence interval [CI], 3.39%-12.5%), partially covered LPI (11.6%; 95% CI, 5.07%-24.5%), or totally uncovered LPI (9.43%; 95% CI, 4.10%-10.3%). 
In regression models, only worse cortical cataract grade (P = 0.01) was associated significantly with straylight score, and no predictors were associated with subjective glare. None of the LPI size or location parameters were associated with straylight or subjective symptoms. CONCLUSIONS: These results suggest that LPI is safe regarding measures of straylight and visual symptoms. This randomized design provides strong evidence that treatment programs for narrow angles would be unlikely to result in important medium-term visual disability.

Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: To study the prevalence and determinants of compliance with spectacle wear among school-age children in Oaxaca, Mexico, who were provided spectacles free of charge. METHODS: A cohort of 493 children aged 5 to 18 years chosen by random cluster sampling from primary and secondary schools in Oaxaca, Mexico, all of whom had received free spectacles through a local program, underwent unannounced, direct examination to determine compliance with spectacle wear within 18 months after initial provision of spectacles. Potential determinants of spectacle wear including age, gender, urban versus rural residence, presenting visual acuity, refractive error, and time since dispensing of the spectacles were examined in univariate and multivariate regression models. Children not currently wearing their spectacles were asked to select the reason from a list of possibilities, and reasons for noncompliance were analyzed within different demographic groups. RESULTS: Among this sample of children with a mean age of 10.4 ± 2.6 years, the majority (74.5%) of whom were myopic (spherical equivalent [SE] ≤ -0.50 D), 13.4% (66/493) were wearing their spectacles at the time of examination. An additional 34% (169/493) had the spectacles with them but were not wearing them. In regression models, the odds of spectacle wear were significantly higher among younger (OR = 1.19 per year of age; 95% CI, 1.05-1.33) and rural (OR = 10.6; 95% CI, 5.3-21.0) children and those with myopia ≤ -1.25 D (OR = 3.97; 95% CI, 1.98-7.94). The oldest children and children in urban-suburban areas were significantly more likely to list concerns about the appearance of the glasses or about being teased than were younger, rurally resident children. CONCLUSIONS: Compliance with spectacle wear may be very low, even when spectacles are provided free of charge, particularly among older, urban children, who have been shown in many populations to have the highest prevalence of myopia. 
As screening programs for refractive error become increasingly common throughout the world, new strategies are needed to improve compliance if program resources are to be maximized.
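The kind of logistic model behind the odds ratios reported above can be sketched as follows: regress a binary wear/no-wear outcome on age and rural residence, then read odds ratios off the exponentiated coefficients. The data are simulated with invented effect sizes, not the Oaxaca sample.

```python
# Hedged sketch of odds ratios from a logistic regression model.
# The simulated effects (younger and rural children wear more) are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
age = rng.uniform(5, 18, n)
rural = rng.integers(0, 2, n)
logit = -1.0 - 0.17 * (age - 10) + 2.0 * rural     # hypothetical true model
wear = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, rural])
model = LogisticRegression(max_iter=1000).fit(X, wear)
or_age, or_rural = np.exp(model.coef_[0])          # exp(beta) = odds ratio
print("OR per year of age: %.2f; OR rural vs urban: %.2f" % (or_age, or_rural))
```

With this parameterisation the age odds ratio is per additional year (so values below 1 mean older children wear less), whereas the abstract reports it per year younger.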

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE:

To study the associations between near work, outdoor activity, and myopia among children attending secondary school in rural China.

METHODS:

Among a random cluster sample of 1892 children in Xichang, China, subjects with an uncorrected acuity of 6/12 or less in either eye (n = 984) and a 25% sample of children with normal vision (n = 248) underwent measurement of refractive error. Subjects were administered a questionnaire on parental education, time spent outdoors, and weekly time spent engaged in and preferred working distance for a variety of near-work activities.

RESULTS:

Among 1232 children with refraction data, 998 (81.0%) completed the near-work survey. Their mean age was 14.6 years (SD, 0.8 years), 55.6% were girls, and 83.1% had myopia of -0.5 diopters or less (more myopia) in both eyes. Time and diopter-hours spent on near activities did not differ between children with and without myopia. In regression models, time spent on near activities and time outdoors were not associated with myopia after adjusting for age, sex, and parental education.

CONCLUSIONS:

These and other recent results raise some doubts about the association between near work and myopia. Additional efforts to identify other environmental factors associated with myopia risk and that may be amenable to intervention are warranted.

Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: To describe the distribution of central corneal thickness (CCT), intraocular pressure (IOP), and their determinants and association with glaucoma in Chinese adults. DESIGN: Population-based cross-sectional study. METHODS: Chinese adults aged 50 years and older were identified using cluster random sampling in Liwan District, Guangzhou. CCT (both optical [OCCT] and ultrasound [UCCT]), intraocular pressure (by Tonopen, IOP), refractive error (by autorefractor, RE), radius of corneal curvature (RCC), axial length (AL), and body mass index (BMI) were measured, and history of hypertension and diabetes (DM) was collected by questionnaire. Right eye data were analyzed. RESULTS: The mean values of OCCT, UCCT, and IOP were 512 ± 29.0 μm, 542 ± 31.4 μm, and 15.2 ± 3.1 mm Hg, respectively. In multiple regression models, CCT declined with age (P < .001) and increased with greater RCC (P < .001) and DM (P = .037). IOP was positively associated with greater CCT (P < .001), BMI (P < .001), and hypertension (P < .001). All 25 persons with open-angle glaucoma had IOP <21 mm Hg. CCT did not differ significantly between persons with and without open- or closed-angle glaucoma. Among 65 persons with ocular hypertension (IOP >97.5th percentile), CCT (555 ± 29 μm) was significantly (P = .01) higher than for normal persons. CONCLUSIONS: The distributions of CCT and IOP in this study are similar to those in other Chinese populations, though IOP was lower than for European populations, possibly due to lower BMI and blood pressure. Glaucoma with IOP <21 mm Hg is common in this population. We found no association between glaucoma and CCT, though power (0.3) for this analysis was low.

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVES: Regular use of nonsteroidal anti-inflammatory drugs (NSAIDs) is associated with a reduced risk of esophageal adenocarcinoma. Epidemiological studies examining the association between NSAID use and the risk of the precursor lesion, Barrett’s esophagus, have been inconclusive.

METHODS: We analyzed pooled individual-level participant data from six case-control studies of Barrett’s esophagus in the Barrett’s and Esophageal Adenocarcinoma Consortium (BEACON). We compared medication use from 1474 patients with Barrett’s esophagus separately with two control groups: 2256 population-based controls and 2018 gastroesophageal reflux disease (GERD) controls. Study-specific odds ratios (OR) and 95% confidence intervals (CI) were estimated using multivariable logistic regression models and were combined using a random effects meta-analytic model.

RESULTS: Regular (at least once weekly) use of any NSAIDs was not associated with the risk of Barrett’s esophagus (vs. population-based controls, adjusted OR = 1.00, 95% CI = 0.76–1.32; I² = 61%; vs. GERD controls, adjusted OR = 0.99, 95% CI = 0.82–1.19; I² = 19%). Similar null findings were observed among individuals who took aspirin or non-aspirin NSAIDs. We also found no association with highest levels of frequency (at least daily use) and duration (≥5 years) of NSAID use. There was evidence of moderate between-study heterogeneity; however, associations with NSAID use remained non-significant in “leave-one-out” sensitivity analyses.

CONCLUSIONS: Use of NSAIDs was not associated with the risk of Barrett’s esophagus. The previously reported inverse association between NSAID use and esophageal adenocarcinoma may be through reducing the risk of neoplastic progression in patients with Barrett’s esophagus.

Relevance:

90.00%

Publisher:

Abstract:

Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice, some of those conditions are rarely satisfied, and the regression models become ill-posed, making the application of traditional estimation methods unfeasible. This work presents some contributions to maximum entropy theory in the estimation of ill-posed models, in particular the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three lines: the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression, and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work shows that maximum entropy estimators outperform the maximum likelihood estimator. This good performance is notable in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. It is hoped that the use of maximum entropy estimators will contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression, the greatest challenge is the estimation of the ridge parameter. Although numerous procedures are available in the literature, none outperforms all the others. This work proposes a new estimator of the ridge parameter, combining ridge trace analysis with maximum entropy estimation. 
The results obtained in simulation studies suggest that this new estimator is one of the best procedures in the literature for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, Shannon entropy, and concepts from quantum electrodynamics. This estimator overcomes the main criticism of the generalized maximum entropy estimator, since it dispenses with supports for the parameters and errors of the regression model. This work presents new contributions to maximum entropy theory in the estimation of ill-posed models, based on the Leuven maximum entropy estimator, information theory, and robust regression. The estimators developed perform well in linear regression models with small samples affected by collinearity and outliers. Finally, some computational codes for maximum entropy estimation are presented, thus adding to the scarce computational resources currently available.
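The ridge trace that the proposed estimator builds on is easy to illustrate: fit ridge regression over a grid of ridge parameters on collinear data and watch the coefficients stabilise as the parameter grows. The sketch below shows only the trace itself on invented data; the maximum-entropy selection rule from the thesis is not reproduced.

```python
# Illustrative ridge trace on nearly collinear data.
# Data, grid, and noise levels are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n = 30
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)          # near-perfect collinearity
X = np.column_stack([x1, x2])
y = x1 + 2 * x2 + 0.1 * rng.normal(size=n)

# coefficient path over a grid of ridge parameters k
trace = {k: Ridge(alpha=k).fit(X, y).coef_
         for k in (0.001, 0.01, 0.1, 1.0, 10.0, 100.0)}
for k, coef in trace.items():
    print("k=%-6g coef=%s" % (k, np.round(coef, 2)))
```

At tiny k the two coefficients are unstable (collinearity inflates their variance); as k increases they shrink and stabilise, and the ridge-trace heuristic picks a k near the onset of that stable region.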

Relevance:

90.00%

Publisher:

Abstract:

In recent years, the number of traffic accident victims per million inhabitants in Portugal has been higher than the European Union average. At the national level, a better understanding of accident data and of the effect of the vehicle on accident severity is urgently needed. The main objective of this research was to develop models for predicting accident severity, both for single-vehicle accidents and for collisions involving two vehicles. In addition, this research included the development of an integrated analysis to evaluate vehicle performance in terms of safety, energy efficiency, and pollutant emissions. Accident data were collected from the Portuguese Guarda Nacional Republicana for the Porto metropolitan area over the period 2006-2010. A total of 1,374 accidents were collected: 500 single-vehicle accidents and 874 two-vehicle collisions. For the safety analysis, logistic regression models were used. For single-vehicle accidents, the effect of vehicle characteristics on the risk of serious injuries and/or fatalities (a binary response variable) was explored. For two-vehicle collisions, two additional binary variables were created: one to predict the probability of serious injuries and/or fatalities in one of the vehicles (designated vehicle V1) and another to predict the probability of serious injuries and/or fatalities in the other vehicle involved (designated vehicle V2). To overcome the challenge and limitations of sample size and class imbalance (only 5.1% of accidents were severe), a methodology based on a resampling strategy was developed, and 10 randomly generated, stratified samples were used for model validation. 
During the modelling phase, the effect of vehicle characteristics such as weight, engine displacement, wheelbase, and vehicle age was analysed. For the fuel consumption and emissions analysis, the CORINAIR methodology was applied. The emissions data were then modelled by fitting linear regressions. Finally, an integrated analysis indicator (named "SEG") was developed, providing a classification method for evaluating vehicle performance in terms of road safety, fuel consumption, and pollutant emissions. Based on the results, for single-vehicle accidents the severity risk model identified vehicle age and engine displacement as statistically significant predictors of serious injuries and/or fatalities, at the 5% significance level. Model accuracy was 58.0% (standard deviation (SD) 3.1). For two-vehicle collisions, when predicting the probability of serious injuries and/or fatalities in vehicle V1, the engine displacement of the opposing vehicle (V2) increased the risk for the occupants of V1, at the 10% significance level. The model for predicting severity risk in V1 performed well, with an accuracy of 61.2% (SD 2.4). When predicting the probability of serious injuries and/or fatalities in vehicle V2, the engine displacement of V1 increased the risk for the occupants of V2, at the 5% significance level. The model for predicting severity risk in V2 also performed satisfactorily, with an accuracy of 40.5% (SD 2.1). The results of the integrated SEG indicator showed that newer vehicles achieve better ratings in all three domains: safety, consumption, and emissions. This research demonstrates that there is no conflict between safety, energy efficiency, and emissions with respect to vehicle performance.

Relevance:

90.00%

Publisher:

Abstract:

Master's dissertation, Water and Coastal Management (Gestão da Água e da Costa), Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2010

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we present two Partial Least Squares Regression (PLSR) models for the compressive and flexural strength responses of a concrete composite material reinforced with pultrusion wastes. The main objective is to characterize this cost-effective waste management solution for glass fiber reinforced polymer (GFRP) pultrusion wastes and end-of-life products, thereby leading to a more sustainable composite materials industry. The experiments took into account formulations with the incorporation of three different weight contents of GFRP waste materials into polyester based mortars, as sand aggregate and filler replacements, two waste particle size grades, and the incorporation of silane adhesion promoter into the polyester resin matrix in order to improve binder-aggregate interfaces. The regression models were fitted to these data, and two latent variables were identified as suitable, with a 95% confidence level. This technological option for improving the quality of GFRP-filled polymer mortars is viable, thus opening the door to selective recycling of GFRP waste and its use in the production of concrete-polymer based products. However, further and complementary studies will be necessary to confirm the technical and economic viability of the process.

Relevance:

90.00%

Publisher:

Abstract:

The prediction of the time and efficiency of the remediation of contaminated soils using soil vapor extraction remains a difficult challenge for the scientific community and consultants. This work reports the development of multiple linear regression and artificial neural network models to predict the remediation time and efficiency of soil vapor extractions performed in soils contaminated separately with benzene, toluene, ethylbenzene, xylene, trichloroethylene, and perchloroethylene. The results demonstrated that the artificial neural network approach presents better performance when compared with multiple linear regression models. The artificial neural network model allowed an accurate prediction of remediation time and efficiency based only on soil and pollutant characteristics, consequently allowing a simple and quick preliminary evaluation of process viability.