975 results for multivariate methods


Relevance:

30.00%

Publisher:

Abstract:

The current anode quality control strategy is inadequate for detecting defective anodes before they are set in the electrolysis cells. Previous work focused on modelling the anode manufacturing process in order to predict anode properties directly after baking using multivariate statistical methods. The anode coring strategy used at the partner plant means that this model can only be used to predict the properties of anodes baked at the hottest and coldest positions of the baking furnace. The present work proposes a strategy for taking into account the thermal history of anodes baked at any position and predicting their properties. It is shown that by combining binary variables defining the pit and the baking position with routine data measured on the baking furnace, the temperature profiles of anodes baked at different positions can be predicted. These data were also included in the model for predicting anode properties. The prediction results were validated by additional coring, and the model's performance is conclusive for apparent and real density, compressive strength, air reactivity and Lc, regardless of baking position.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are addressed in the analyses conducted and presented in this dissertation. The first method solves an econometric problem and concerns the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed, and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles among its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Finally, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random-effects models for free-flow speed estimation that take the survey design into account. All methods are rigorously validated on both real and simulated data.
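The multidimensional integral mentioned above, a multivariate normal probability, is exactly what Genz-type algorithms evaluate. SciPy exposes such a method through the multivariate normal CDF; the covariance values below are arbitrary, chosen only to illustrate the call:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Orthant probability P(X1 <= 0, X2 <= 0, X3 <= 0) for a trivariate
# normal. SciPy's cdf evaluates this with a quasi-Monte Carlo method
# in the family of algorithms due to Genz.
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
p = multivariate_normal(mean=np.zeros(3), cov=cov).cdf(np.zeros(3))
print(p)
```

Under independence this probability would be 0.5³ = 0.125; positive correlation pushes it above that, which is a quick sanity check on the result.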

Relevance:

30.00%

Publisher:

Abstract:

Background: The evidence base on end-of-life care in acute stroke is limited, particularly with regard to recognising dying and related decision-making. There is also limited evidence to support the use of end-of-life care pathways (standardised care plans) for patients who are dying after stroke. Aim: This study aimed to explore the clinical decision-making involved in placing patients on an end-of-life care pathway, evaluate predictors of care pathway use, and investigate the role of families in decision-making. The study also aimed to examine experiences of end-of-life care pathway use for stroke patients, their relatives and the multi-disciplinary health care team. Methods: A mixed methods design was adopted. Data were collected in four Scottish acute stroke units. Case-notes were identified prospectively from 100 consecutive stroke deaths and reviewed. Multivariate analysis was performed on case-note data. Semi-structured interviews were conducted with 17 relatives of stroke decedents and 23 healthcare professionals, using a modified grounded theory approach to collect and analyse data. The VOICES survey tool was also administered to the bereaved relatives and data were analysed using descriptive statistics and thematic analysis of free-text responses. Results: Relatives often played an important role in influencing aspects of end-of-life care, including decisions to use an end-of-life care pathway. Some relatives experienced enduring distress with their perceived responsibility for care decisions. Relatives felt unprepared for and were distressed by prolonged dying processes, which were often associated with severe dysphagia. Pro-active information-giving by staff was reported as supportive by relatives. Healthcare professionals generally avoided discussing place of care with families. 
Decisions to use an end-of-life care pathway were not predicted by patients’ demographic characteristics; decisions were generally made in consultation with families and the extended health care team, and were made within regular working hours. Conclusion: Distressing stroke-related issues were more prominent in participants’ accounts than concerns with the end-of-life care pathway used. Relatives sometimes perceived themselves as responsible for important clinical decisions. Witnessing prolonged dying processes was difficult for healthcare professionals and families, particularly in relation to the management of persistent major swallowing difficulties.

Relevance:

30.00%

Publisher:

Abstract:

Aim: To evaluate the association between oral health status, socio-demographic and behavioral factors and the pattern of maturity of normal epithelial oral mucosa. Methods: Exfoliative cytology specimens were collected from 117 men from the border of the tongue and the floor of the mouth on opposite sides. Cells were stained with the Papanicolaou method and classified as anucleated, superficial cells with nuclei, intermediate or parabasal cells. Quantification was performed by selecting the first 100 cells on each glass slide. Socio-demographic and behavioral variables were collected with a structured questionnaire. Oral health was assessed by clinical examination, recording the decayed, missing and filled teeth (DMFT) index and use of prostheses. Multivariable linear regression models were applied. Results: None of the studied variables significantly influenced the maturation pattern of the oral mucosa except alcohol consumption: chronic alcohol use was associated with an increase in surface-layer cells of the epithelium. Conclusions: The Papanicolaou cytopathological technique is appropriate for analyzing the maturation pattern of exposed subjects, and is strongly recommended for those who use alcohol, a risk factor for oral cancer, in whom a change in the proportion of cell types is easily detected.

Relevance:

30.00%

Publisher:

Abstract:

When it comes to information sets in real life, pieces of the whole set are often unavailable. This problem can have various origins, each producing a different pattern, and is known in the literature as Missing Data. It can be handled in several ways: discarding incomplete observations, estimating what the missing values originally were, or simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between Missing Data, Imputation Methods and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, where discrete means that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing-data pattern strongly influences the results produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset: the multivariate Time Series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water-quality prediction problem. The databases characterizing this problem had genuinely missing values of their own, which provides a real-world benchmark to test the algorithms developed in this thesis.
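The simple-versus-complex imputation comparison described above can be sketched with scikit-learn. The data, missingness rate and the specific pair of imputers (column mean versus k-nearest neighbours) are illustrative assumptions, not the thesis's actual experimental design:

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(2)

# Hypothetical complete data with strongly correlated columns, then 15%
# of the values removed completely at random (the simplest pattern).
z = rng.normal(size=(300, 1))
X_true = np.hstack([z + rng.normal(scale=0.3, size=(300, 1)) for _ in range(4)])
X = X_true.copy()
mask = rng.random(X.shape) < 0.15
X[mask] = np.nan

# A simple imputation method (column mean) versus a more complex one
# (k-nearest neighbours), scored by RMSE on the removed values only.
results = {}
for imputer in (SimpleImputer(strategy="mean"), KNNImputer(n_neighbors=5)):
    X_hat = imputer.fit_transform(X)
    results[type(imputer).__name__] = float(
        np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2)))
print(results)
```

Because the columns are strongly correlated, the neighbour-based method should recover the removed values with noticeably lower error than the column mean, mirroring the thesis's finding that complex techniques can outperform simple ones in some settings.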

Relevance:

30.00%

Publisher:

Abstract:

Elemental analysis can become an important piece of evidence to assist the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjets and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies show smaller variation of elemental compositions within a single source (i.e., sheet, pen or cartridge) than the observed variation between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profile of the inks and paper were observed between samples originating from different sources (discrimination of 87–100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis, using LA-ICP-MS and LIBS, for the examination of documents and provide additional discrimination to the techniques currently used in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS and LIBS) was conducted for glass analyses using interlaboratory studies. The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating between different sources, as well as the use of different match criteria (confidence intervals (±6s, ±5s, ±4s, ±3s, ±2s), a modified confidence interval, sequential univariate t-tests (p=0.05 and p=0.01), t-tests with Bonferroni correction for multivariate comparisons, range overlap, and Hotelling's T2 test). Error rates (Type 1 and Type 2) are reported for each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability of the analytical measurements, and the number of elements measured. The study provided recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS, as well as the best-performing match criteria for both analytical techniques, which can now be applied by forensic glass examiners.
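A confidence-interval match criterion of the ±ks family mentioned above can be sketched as follows. The element values and the choice k = 4 are illustrative; the dissertation evaluates several widths and several other criteria:

```python
import numpy as np

def interval_match(known, questioned, k=4.0):
    """+/- k*s match criterion: for every element, the mean of the
    questioned measurements must fall within mean(known) +/- k*sd(known)."""
    known = np.asarray(known, dtype=float)
    questioned = np.asarray(questioned, dtype=float)
    mu = known.mean(axis=0)
    s = known.std(axis=0, ddof=1)
    return bool(np.all(np.abs(questioned.mean(axis=0) - mu) <= k * s))

# Hypothetical triplicate measurements of three element ratios in a
# known glass fragment and two questioned fragments.
known = [[1.00, 0.50, 2.00], [1.02, 0.51, 2.05], [0.98, 0.49, 1.95]]
same_source = [[1.01, 0.50, 2.01], [0.99, 0.51, 1.99], [1.00, 0.49, 2.02]]
different = [[1.50, 0.80, 3.00], [1.52, 0.81, 3.05], [1.48, 0.79, 2.95]]

print(interval_match(known, same_source))  # True
print(interval_match(known, different))    # False
```

Widening k lowers the Type 1 (false exclusion) rate at the cost of the Type 2 (false inclusion) rate, which is exactly the trade-off the interlaboratory study quantifies.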

Relevance:

30.00%

Publisher:

Abstract:

Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result by accommodating non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
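Geographically weighted regression, the technique the dissertation found most accurate, fits a separate weighted least-squares model at every location so that coefficients can vary over space. A minimal self-contained sketch with a Gaussian kernel and simulated non-stationary data (all values invented; real applications would select the bandwidth by cross-validation):

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Gaussian-kernel geographically weighted regression: one weighted
    least-squares fit per location, allowing spatially varying slopes."""
    n = len(y)
    X1 = np.hstack([np.ones((n, 1)), X])  # add intercept column
    betas = np.empty((n, X1.shape[1]))
    for i in range(n):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)  # squared distances
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian weights
        XtW = X1.T * w
        betas[i] = np.linalg.solve(XtW @ X1, XtW @ y)
    return betas

# Hypothetical data: a covariate whose effect drifts from west to east,
# i.e. exactly the non-stationary behavior GWR is meant to capture.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(150, 2))
x = rng.normal(size=(150, 1))
local_slope = 0.5 + 0.3 * coords[:, 0]  # slope grows with the x-coordinate
y = local_slope * x[:, 0] + rng.normal(scale=0.1, size=150)

betas = gwr_coefficients(coords, x, y, bandwidth=2.0)
west = betas[coords[:, 0] < 3, 1].mean()  # mean estimated slope, west
east = betas[coords[:, 0] > 7, 1].mean()  # mean estimated slope, east
print(west, east)  # eastern slopes come out larger, as simulated
```

A global regression would average the slope away; the locally estimated coefficients recover the spatial drift, which is the property that made GWR the best performer in the study.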

Relevance:

30.00%

Publisher:

Abstract:

Neuroimaging research involves analyses of huge amounts of biological data that might or might not be related to cognition. This relationship is usually approached using univariate methods, and, therefore, correction methods are mandatory for reducing false positives. Nevertheless, the probability of false negatives is also increased. Multivariate frameworks have been proposed to help alleviate this balance. Here we apply multivariate distance matrix regression for the simultaneous analysis of biological and cognitive data, namely, structural connections among 82 brain regions and several latent factors estimating cognitive performance. We tested whether cognitive differences predict distances among individuals with regard to their connectivity pattern. Beginning with 3,321 connections among regions, the 36 edges best predicted by the individuals' cognitive scores were selected. Cognitive scores were related to connectivity distances in both the full (3,321) and reduced (36) connectivity patterns. The selected edges connect regions distributed across the entire brain, and the network defined by these edges supports high-order cognitive processes such as (a) (fluid) executive control, (b) (crystallized) recognition, learning, and language processing, and (c) visuospatial processing. This multivariate study suggests that a widespread but limited set of regions in the human brain supports high-level cognitive ability differences. Hum Brain Mapp, 2016. © 2016 Wiley Periodicals, Inc.
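The core statistic of multivariate distance matrix regression can be sketched directly from its definition: Gower-center the squared distance matrix and compare the variation explained by the predictor hat matrix against the residual variation. The data below are simulated stand-ins (60 subjects, 30 connectivity features, one cognitive score), not the study's 82-region data:

```python
import numpy as np

def mdmr_pseudo_f(D, X):
    """Pseudo-F of multivariate distance matrix regression: how much of
    the pairwise-distance structure in D is explained by predictors X."""
    n = D.shape[0]
    X1 = np.hstack([np.ones((n, 1)), X])
    m = X1.shape[1] - 1
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                      # Gower centering
    H = X1 @ np.linalg.solve(X1.T @ X1, X1.T)        # hat matrix
    num = np.trace(H @ G @ H) / m
    den = np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H)) / (n - m - 1)
    return num / den

# Hypothetical example: connectivity vectors that partly track a
# cognitive score, plus Euclidean distances between subjects.
rng = np.random.default_rng(4)
score = rng.normal(size=(60, 1))
conn = score @ rng.normal(size=(1, 30)) + rng.normal(scale=2.0, size=(60, 30))
D = np.sqrt(((conn[:, None, :] - conn[None, :, :]) ** 2).sum(-1))

f_real = mdmr_pseudo_f(D, score)                     # related predictor
f_null = mdmr_pseudo_f(D, rng.normal(size=(60, 1)))  # unrelated predictor
print(f_real, f_null)
```

In practice the pseudo-F is compared against a permutation distribution (shuffling rows of X) rather than a tabulated F distribution, since the distances need not arise from Euclidean data.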

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes using the Shapley values in allocating the total tail conditional expectation (TCE) to each business line X_j, j = 1, …, n, when there are n correlated business lines. The joint distributions of X_j and S (where S = X_1 + X_2 + ⋯ + X_n) are needed in the existing methods, but they are not required in the proposed method.
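A Shapley allocation of this kind can be sketched by simulation. The setup below is one common way to define the cooperative game (coalition value = TCE of the coalition's aggregate loss) and uses invented means and correlations; the paper's exact game formulation may differ:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)

def tce(losses, q=0.95):
    """Tail conditional expectation: mean loss at or beyond the q-quantile."""
    return losses[losses >= np.quantile(losses, q)].mean()

# Hypothetical correlated business lines (n = 3), simulated jointly.
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
X = rng.multivariate_normal(mean=[10.0, 8.0, 6.0], cov=cov, size=100_000)
n = X.shape[1]

def v(coalition):
    """Characteristic function: TCE of the coalition's aggregate loss."""
    if not coalition:
        return 0.0
    return tce(X[:, list(coalition)].sum(axis=1))

# Shapley value: average marginal TCE contribution over all orderings.
shapley = np.zeros(n)
perms = list(permutations(range(n)))
for order in perms:
    members = []
    for j in order:
        shapley[j] += v(members + [j]) - v(members)
        members.append(j)
shapley /= len(perms)

print(shapley, tce(X.sum(axis=1)))
```

By construction the allocations sum exactly to the total TCE of S (the efficiency property of the Shapley value), which is the key requirement of any capital allocation rule.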

Relevance:

30.00%

Publisher:

Abstract:

Multivariate monitoring techniques such as multivariate control charts are used to control processes with more than one correlated quality characteristic. Although the majority of previous research has focused on controlling only the mean vector of multivariate processes, little work has addressed monitoring the covariance matrix. In this research, a new method is presented to detect possible shifts in the covariance matrix of multivariate processes. The basis of the proposed method is to eliminate the correlation structure between the quality characteristics with a transformation technique and then use an S chart for each variable. The performance of the proposed method is compared with that of existing methods, and a real case is presented.
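The decorrelate-then-chart idea can be sketched as follows. This is a minimal illustration, not the paper's method in full: it assumes a known in-control covariance, whitens with its Cholesky factor (one possible transformation technique), and applies a known-sigma S chart to each decorrelated variable; the covariance values, shift and subgroup size are all made up:

```python
import numpy as np
from math import gamma, sqrt

def whiten(X, sigma0):
    """Remove correlation using the in-control covariance: solving
    L z = x with Sigma0 = L L^T yields unit-variance, uncorrelated
    variables when x is in control."""
    L = np.linalg.cholesky(sigma0)
    return np.linalg.solve(L, X.T).T

rng = np.random.default_rng(6)
sigma0 = np.array([[2.0, 1.2], [1.2, 1.5]])

# Phase II data: 40 subgroups of size 5; halfway through, the variance
# of the second characteristic quadruples (a covariance-matrix shift).
sigma1 = sigma0.copy()
sigma1[1, 1] *= 4.0
good = rng.multivariate_normal([0, 0], sigma0, size=100)
bad = rng.multivariate_normal([0, 0], sigma1, size=100)
Z = whiten(np.vstack([good, bad]), sigma0)

# S chart per decorrelated variable, subgroup size 5, sigma known (= 1):
# upper limit is the B6 constant, c4 + 3*sqrt(1 - c4^2) ~ 1.964 for n=5.
n = 5
c4 = sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
ucl = c4 + 3.0 * sqrt(1.0 - c4 ** 2)
S = Z.reshape(-1, n, Z.shape[1]).std(axis=1, ddof=1)  # subgroup std devs
signals = (S > ucl).any(axis=1)
print(signals[:20].sum(), signals[20:].sum())  # few early, many late alarms
```

Because whitening maps the in-control covariance to the identity, any shift in the covariance matrix shows up as inflated (or mixed) variances of the transformed variables, which the per-variable S charts then detect.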