952 results for Multivariate statistical method
Abstract:
BACKGROUND AND PURPOSE: To determine whether infarct core or penumbra is the more significant predictor of outcome in acute ischemic stroke, and whether the results are affected by the statistical method used. METHODS: Clinical and imaging data were collected in 165 patients with acute ischemic stroke. We reviewed the noncontrast head computed tomography (CT) to determine the Alberta Stroke Program Early CT Score and assess for hyperdense middle cerebral artery. We reviewed the CT angiogram for site of occlusion and collateral flow score. From perfusion-CT, we calculated the volumes of infarct core and ischemic penumbra. Recanalization status was assessed on early follow-up imaging. Clinical data included age, several time points, National Institutes of Health Stroke Scale at admission, treatment type, and modified Rankin score at 90 days. Two multivariate regression analyses were conducted to determine which variables best predicted outcome. In the first analysis, we did not include recanalization status among the potential predicting variables. In the second, we included recanalization status and its interaction with perfusion-CT variables. RESULTS: Among the 165 study patients, 76 had a good outcome (modified Rankin score ≤2) and 89 had a poor outcome (modified Rankin score >2). In our first analysis, the most important predictors were age (P<0.001) and National Institutes of Health Stroke Scale at admission (P=0.001). The imaging variables were not important predictors of outcome (P>0.05). In the second analysis, when recanalization status and its interaction with perfusion-CT variables were included, recanalization status and perfusion-CT penumbra volume became the significant predictors (P<0.001). CONCLUSIONS: Imaging prediction of tissue fate, more specifically imaging of the ischemic penumbra, matters only if recanalization can also be predicted.
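The abstract does not state the exact regression type, so the sketch below assumes logistic regression on a binarized modified Rankin score; the variable names and the simulated data are illustrative assumptions, not the study data. It shows the two-stage strategy: a first model without recanalization, and a second adding recanalization and its interaction with the perfusion-CT volumes.

```python
# Sketch of the two multivariate analyses (assumed logistic regression on
# synthetic data; variable names are illustrative, not the study's).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 165
df = pd.DataFrame({
    "good_outcome": rng.integers(0, 2, n),      # mRS <= 2 at 90 days
    "age": rng.normal(70, 12, n),
    "nihss": rng.integers(2, 25, n),
    "core_ml": rng.gamma(2.0, 15.0, n),         # perfusion-CT infarct core
    "penumbra_ml": rng.gamma(2.0, 25.0, n),     # perfusion-CT penumbra
    "recanalized": rng.integers(0, 2, n),
})

# Analysis 1: recanalization status excluded from the predictors.
m1 = smf.logit("good_outcome ~ age + nihss + core_ml + penumbra_ml", df).fit(disp=0)

# Analysis 2: recanalization plus its interaction with the perfusion-CT volumes.
m2 = smf.logit("good_outcome ~ age + nihss + core_ml + penumbra_ml"
               " + recanalized + recanalized:penumbra_ml + recanalized:core_ml",
               df).fit(disp=0)

print(m1.summary2().tables[1]["P>|z|"])
print(m2.summary2().tables[1]["P>|z|"])
```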
Abstract:
Condition monitoring of electric motors has been researched extensively for several decades. Research and development at universities and in industry has provided means for predictive condition monitoring, and many different devices and systems have been developed and are widely used in industry, transportation and civil engineering. In addition, many methods have been developed and reported in scientific forums to improve existing methods for the automatic analysis of faults. These methods, however, are not widely used as part of condition monitoring systems. The main reasons are, firstly, that many methods are presented in scientific papers without their performance in different conditions being evaluated, and secondly, that the methods include parameters so case-specific that implementing a system using them would be far from straightforward. In this thesis, some of these methods are evaluated theoretically and tested with simulations and with a drive in a laboratory, and a new automatic analysis method for bearing fault detection is introduced. The first part of this work explains the generation of the bearing-fault-originated signal and estimates its influence on the stator current both qualitatively and quantitatively. The feasibility of the stator current measurement as a bearing fault indicator is verified experimentally with a running 15 kW induction motor. The second part of this work concentrates on bearing fault analysis using the vibration measurement signal. The performance of a micromachined silicon accelerometer chip in conjunction with envelope spectrum analysis of the cyclic bearing fault is tested experimentally. Furthermore, different methods for creating feature extractors for bearing fault classification are investigated, and an automatic fault classifier using multivariate statistical discrimination and fuzzy logic is introduced. It is often important that an on-line condition monitoring system is integrated with the industrial communications infrastructure. Two types of sensor solutions are tested in the thesis: the first is a sensor with calculation capacity, for example for producing the envelope spectra; the other collects the measurement data in memory so that another device can read the data via a field bus. The data communications requirements depend strongly on the type of sensor solution selected: if the data are already analysed in the sensor, communications are needed only for the results, whereas otherwise all measurement data must be transferred. The classification method can be complex if the data are analysed at a management-level computer, but if the analysis is performed in the sensor itself, it must be simple due to the restricted calculation and memory capacity.
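The envelope spectrum analysis mentioned above is a standard vibration-processing step; a minimal sketch follows, with a synthetic signal in place of measured data (the 20 kHz sampling rate, 87 Hz fault rate and 3 kHz resonance are assumptions).

```python
# Envelope-spectrum sketch for bearing fault detection: take the analytic
# envelope of the resonance-band signal, then look for the fault repetition
# frequency in the envelope's spectrum (synthetic data; parameters assumed).
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 20_000                              # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
fault_hz, resonance_hz = 87.0, 3_000.0

# Periodic impacts excite a high-frequency resonance: an amplitude-modulated carrier.
impacts = (np.sin(2 * np.pi * fault_hz * t) > 0.99).astype(float)
ringing = np.convolve(impacts, np.exp(-400 * t[:200]), mode="same")
signal = ringing * np.sin(2 * np.pi * resonance_hz * t)
signal += 0.05 * rng.standard_normal(t.size)

envelope = np.abs(hilbert(signal))       # analytic-signal envelope
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

peak = freqs[np.argmax(spectrum[freqs < 500])]    # search below 500 Hz
print(f"dominant envelope frequency: {peak:.1f} Hz (fault at {fault_hz} Hz)")
```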
Abstract:
There has been a lack of quick, simple and reliable methods for determining nanoparticle size. The size of hydrophobic (CdSe) and hydrophilic (CdSe/ZnS) quantum dots was investigated using the maximum position of the corresponding fluorescence spectrum. Fluorescence spectroscopy was found to be a simple and reliable methodology for estimating the size of both quantum dot types. For a given solution, the homogeneity of the quantum dot size is correlated with the relationship between the fluorescence maximum position (FMP) and the quantum dot size. This methodology can be extended to other fluorescent nanoparticles. Applying evolving factor analysis and multivariate curve resolution-alternating least squares to decompose the series of quantum dot fluorescence spectra recorded by a specific measuring procedure reveals the number of quantum dot fractions having different diameters. The size of the quantum dots in a particular group is defined by the FMP of the corresponding component in the decomposed spectrum. These results show that combining fluorescence with an appropriate statistical method for decomposing the emission spectra of nanoparticles may provide a quick and trusted method for screening the inhomogeneity of their solutions.
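The multivariate curve resolution-alternating least squares (MCR-ALS) step can be sketched in a few lines. The two Gaussian component spectra below are synthetic assumptions standing in for measured quantum-dot emission, and the non-negativity is enforced by simple clipping (projected ALS), a common minimal variant.

```python
# Bare-bones MCR-ALS sketch: factor D (spectra x wavelengths) ~= C @ S.T with
# non-negativity, resolving two quantum-dot fractions (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(500, 650, 300)                       # wavelength axis [nm]
gauss = lambda mu: np.exp(-0.5 * ((wl - mu) / 10) ** 2)
S_true = np.stack([gauss(560), gauss(600)], axis=1)   # two FMPs: 560 and 600 nm
C_true = rng.random((25, 2))                          # 25 mixture spectra
D = C_true @ S_true.T + 0.01 * rng.standard_normal((25, wl.size))

S = rng.random((wl.size, 2))                          # random initial spectra
for _ in range(200):                                  # alternating least squares
    C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
    S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)

print("recovered FMPs [nm]:", sorted(wl[S.argmax(axis=0)]))
```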
Abstract:
Many social phenomena involve a set of dyadic relations among agents whose actions may be dependent. Although individualistic approaches have frequently been applied to analyze social processes, these are not generally concerned with dyadic relations nor do they deal with dependency. This paper describes a mathematical procedure for analyzing dyadic interactions in a social system. The proposed method mainly consists of decomposing asymmetric data into their symmetrical and skew-symmetrical parts. A quantification of skew-symmetry for a social system can be obtained by dividing the norm of the skew-symmetrical matrix by the norm of the asymmetric matrix. This calculation makes available to researchers a quantity related to the amount of dyadic reciprocity. Regarding agents, the procedure enables researchers to identify those whose behavior is asymmetric with respect to all agents. It is also possible to derive symmetric measurements among agents and to use multivariate statistical techniques.
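The decomposition and the skew-symmetry quotient described above are direct to compute; a minimal sketch on an assumed toy sociomatrix follows (the abstract divides norm by norm, which is the convention used here; squared-norm variants also appear in this literature).

```python
# Decompose an asymmetric sociomatrix X into symmetric and skew-symmetric
# parts and quantify skew-symmetry as ||K|| / ||X|| (illustrative data).
import numpy as np

X = np.array([[0., 5., 1.],
              [2., 0., 4.],
              [1., 0., 0.]])     # X[i, j]: interactions from agent i to agent j

S = (X + X.T) / 2                # symmetric part (reciprocal component)
K = (X - X.T) / 2                # skew-symmetric part (directional component)
assert np.allclose(S + K, X)

phi = np.linalg.norm(K) / np.linalg.norm(X)   # Frobenius-norm ratio
print(f"skew-symmetry index: {phi:.3f}")      # 0 means fully reciprocal
```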
Abstract:
In recent years there has been growing interest in composite indicators as an efficient tool of analysis and a method of prioritizing policies. This paper presents a composite index of intermediary determinants of child health using a multivariate statistical approach. The index shows how specific determinants of child health vary across Colombian departments (administrative subdivisions). We used data collected from the 2010 Colombian Demographic and Health Survey (DHS) for 32 departments and the capital city, Bogotá. Adapting the conceptual framework of the Commission on Social Determinants of Health (CSDH), five dimensions related to child health are represented in the index: material circumstances, behavioural factors, psychosocial factors, biological factors and the health system. In order to generate the weights of the variables, and taking into account the discrete nature of the data, principal component analysis (PCA) using polychoric correlations was employed in constructing the index. From this method five principal components were selected. The index was estimated as a weighted average of the retained components. A hierarchical cluster analysis was also carried out. The results show that the biggest differences in intermediary determinants of child health are associated with health care before and during delivery.
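A sketch of the index construction follows. A full polychoric-correlation routine is beyond a short example, so a Spearman correlation matrix stands in (an explicit assumption; the paper uses polychoric correlations); the PCA is run on the correlation matrix and the index is a variance-weighted average of the retained component scores.

```python
# Composite-index sketch: PCA on a correlation matrix of discrete indicators,
# index = variance-weighted average of retained component scores.
# Spearman correlation is a stand-in for the paper's polychoric correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
X = rng.integers(1, 6, size=(33, 10)).astype(float)   # 33 departments x 10 items

R = spearmanr(X).correlation               # correlation matrix (stand-in)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

k = 5                                      # retain five components, as in the paper
Z = (X - X.mean(0)) / X.std(0)             # standardized indicators
scores = Z @ eigvec[:, :k]                 # component scores
w = eigval[:k] / eigval[:k].sum()          # weights: explained-variance shares
index = scores @ w
print("top-ranked departments:", np.argsort(index)[::-1][:5])
```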
Abstract:
This paper presents a composite index of early childhood health using a multivariate statistical approach. The index shows how child health varies across Colombian departments (administrative subdivisions). In recent years there has been growing interest in composite indicators as an efficient analysis tool and a way of prioritizing policies. These indicators not only allow multi-dimensional phenomena to be simplified but also make it easier to measure, visualize, monitor and compare a country's performance on particular issues. We used data collected from the Colombian Demographic and Health Survey (DHS) for 32 departments and the capital city, Bogotá, in 2005 and 2010. The variables included in the index measure three dimensions related to child health: health status, health determinants and the health system. In order to generate the weights of the variables and take into account the discrete nature of the data, we employed principal component analysis (PCA) using polychoric correlations. From this method, five principal components were selected. The index was estimated as a weighted average of the retained components. A hierarchical cluster analysis was also carried out. We observed that the departments ranked in the lowest positions are located on the Colombian periphery: departments with low per capita incomes and critical social indicators. The results suggest that regional disparities in child health may be associated with differences in parental characteristics, household conditions and levels of economic development, which makes clear the importance of context in the study of child health in Colombia.
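The hierarchical cluster analysis mentioned above groups departments with similar health profiles; a minimal scipy sketch follows (synthetic dimension scores, and Ward linkage as one assumed, common choice).

```python
# Hierarchical-clustering sketch: group departments by their dimension scores
# (synthetic scores; Ward linkage is an assumed choice, not stated in the paper).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
scores = rng.standard_normal((33, 3))             # 33 departments x 3 dimensions

Z = linkage(scores, method="ward")                # agglomerative dendrogram
labels = fcluster(Z, t=4, criterion="maxclust")   # cut into 4 clusters
for c in np.unique(labels):
    print(f"cluster {c}: departments {np.where(labels == c)[0]}")
```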
Abstract:
Nursing education seeks to select students who are suited to the field, motivated, and likely to succeed in theoretical and clinical studies. The purpose of this follow-up study was to compare the competence and study motivation of nursing students selected by an aptitude test and by a written examination. Based on the results, the aim was to make development proposals for student selection in nursing education. The target group consisted of nursing students (N=626) admitted to one university of applied sciences between autumn 2002 and autumn 2004 by two different entrance examination methods (nursing, public health nursing, midwifery). Two cohorts were formed on the basis of the selection method: aptitude test (VAL1, N=368) and written examination (VAL2, N=258). The follow-up data were collected from the students' study records and with two structured instruments measuring the students' self-assessed nursing competence (OSAA instrument) and study motivation (MOTI instrument). Data collection took place during the students' third semester (first measurement, 2004‒2006, VAL1 n=234, VAL2 n=126) and at graduation (second measurement, 2006‒2009, VAL1 n=149, VAL2 n=108). The response rate was 75.0% for the first measurement and 92.4% for the second. The data were analysed using multivariate methods suitable for longitudinal research. Despite small differences, the two selection methods yielded students with very similar competence and study motivation. Students selected by the aptitude test perceived their group as more supportive at graduation than those selected by the written examination. Competence based on third-semester grades was better among students selected by the written examination than among those selected by the aptitude test. Study programme option, work experience in health care, prior education and application priority were most significantly associated with the students' competence and study motivation. The selection method explained the largest share of the observed differences in competence and study motivation, although the explained proportions remained low. The development proposals concern the development and regular evaluation of entrance examination methods, as well as the definition of motivation for the field and the development of its measurement. Suggested topics for further research include testing different entrance examination methods and further developing the instruments used in this study.
Abstract:
This paper presents the results of the quantitative component of the evaluation of the Programa de Educación para la Sexualidad y Construcción de Ciudadanía (PESCC) of the Colombian Ministerio de Educación Nacional (MEN). To identify the effect, the empirical strategy exploits variation in the implementation of the pedagogical component of the PESCC across schools and variation in the programme's institutional strengthening component at the departmental level. The main finding of this work is that the PESCC improves teachers' planning practices and students' knowledge of sexual and reproductive health services and of sexual and reproductive human rights. There are no significant effects on other indices of Knowledge, Attitudes or Practices (KAP) of teachers or students.
Abstract:
The implementation of the European Directive 91/271/EEC concerning urban wastewater treatment promoted the construction of new facilities as well as the introduction of new technologies to treat nutrients in areas designated as sensitive. Both the design of these new infrastructures and the redesign of existing ones were carried out using approaches based essentially on economic objectives, owing to the need to complete the works within a relatively short period of time. These studies were based on heuristic knowledge or on numerical correlations derived from simplified deterministic models. As a result, many of the resulting wastewater treatment plants (WWTPs) were characterized by a lack of robustness and flexibility, poor controllability, frequent microbiological solids-separation problems in the secondary settler, high operating costs and only partial nutrient removal, keeping them far from optimal operation. Many of these problems arose from inadequate design, and the scientific community thus became aware of the importance of the early conceptual design stages. Precisely for this reason, traditional design methods must evolve towards more complex evaluation systems that take multiple objectives into account, thereby ensuring better plant performance. Despite the importance of conceptual design under multiple objectives, there is still a significant gap in the scientific literature addressing this research field. The objective of this thesis is to develop a conceptual design method for WWTPs that considers multiple objectives, so that it can serve as a decision-support tool when selecting the best alternative among different design options. This research work contributes a modular and evolutionary design method combining different techniques: hierarchical decision processes, multicriteria analysis, preliminary multiobjective optimization based on sensitivity analysis, knowledge extraction and data mining, multivariate analysis, and uncertainty analysis by means of Monte Carlo simulations. This has been achieved by subdividing the design method developed in this thesis into four main blocks: (1) hierarchical generation and multicriteria analysis of alternatives, (2) analysis of critical decisions, (3) multivariate analysis and (4) uncertainty analysis. The first block combines a hierarchical decision process with multicriteria analysis. The hierarchical decision process subdivides conceptual design into a series of questions that are easier to analyse and evaluate, while the multicriteria analysis allows different objectives to be considered at the same time. In this way the number of alternatives to evaluate is reduced, and the future design and operation of the plant is influenced by environmental, economic, technical and legal aspects. Finally, this block includes a sensitivity analysis of the weights, which provides information on how the ranking of the alternatives varies as the relative importance of the design objectives changes. The second block combines sensitivity analysis, preliminary multiobjective optimization and knowledge-extraction techniques to support the conceptual design of WWTPs, selecting the best alternative once critical decisions have been identified.
Critical decisions are those in which one must select among alternatives that fulfil the design objectives similarly but with different implications for the future structure and operation of the plant. This type of analysis provides a broader view of the design space and identifies desirable (or undesirable) directions in which the design process may evolve. The third block of the thesis provides the multivariate analysis of the multicriteria matrices obtained during the evaluation of the design alternatives. Specifically, the techniques used in this research work include: 1) cluster analysis, 2) principal component analysis/factor analysis and 3) discriminant analysis. As a result, the data can be accessed more effectively when selecting alternatives, providing more information for a more effective evaluation and ultimately increasing knowledge of the evaluation process for the generated design alternatives. In the fourth and final block developed in this thesis, the different design alternatives are evaluated under uncertainty. The objective of this block is to study how decision making changes when an alternative is evaluated with or without uncertainty in the parameters of the models that describe its behaviour. Uncertainty in the model parameters is introduced through probability distributions. Monte Carlo simulations are then carried out, drawing random numbers from these distributions and substituting them for the model parameters, which makes it possible to study how the uncertainty propagates through the model. It is thus possible to analyse the variation in the overall fulfilment of the design objectives for each alternative, the contributions of the environmental, legal, economic and technical aspects to this variation, and finally the change in the selection of alternatives when the relative importance of the design objectives varies. Compared with traditional design approaches, the method developed in this thesis addresses design/redesign problems taking into account multiple objectives and multiple criteria. At the same time, the decision-making process shows objectively, transparently and systematically why one alternative is selected over the others, providing the option that best fulfils the stated objectives, showing its strengths and weaknesses and the main correlations between objectives and alternatives, and finally taking into account the possible uncertainty inherent in the model parameters used during the analyses. The possibilities of the method developed are demonstrated in this thesis through different case studies: selection of the type of biological nitrogen removal (case study #1), optimization of a control strategy (case study #2), redesign of a plant to achieve simultaneous carbon, nitrogen and phosphorus removal (case study #3) and, finally, analysis of plant-wide control strategies (case studies #4 and #5).
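The fourth block's Monte Carlo uncertainty analysis can be sketched as follows: parameters are drawn from probability distributions, a plant model maps them to criteria scores, and the spread of the weighted multicriteria score is examined per alternative. The "plant model" and all numbers below are toy assumptions standing in for a real WWTP simulator.

```python
# Monte Carlo sketch of block 4: propagate parameter uncertainty through an
# assumed toy "plant model" into a weighted multicriteria score per alternative.
import numpy as np

rng = np.random.default_rng(4)
n_runs = 5_000
weights = np.array([0.4, 0.3, 0.2, 0.1])   # environmental, economic, technical, legal

def plant_model(mu_max, alternative):
    """Toy stand-in for a WWTP model: returns four criteria scores."""
    removal = alternative["eff"] * mu_max / (mu_max + 0.5)
    cost = alternative["capex"] * (1 + 0.1 / mu_max)
    return np.array([removal, 1 / cost, alternative["ctrl"], removal > 0.7])

alternatives = {"A": {"eff": 0.9, "capex": 1.2, "ctrl": 0.6},
                "B": {"eff": 0.8, "capex": 1.0, "ctrl": 0.8}}

for name, alt in alternatives.items():
    mu = rng.normal(1.0, 0.2, n_runs).clip(0.1)   # uncertain kinetic parameter
    scores = np.array([plant_model(m, alt) @ weights for m in mu])
    print(f"{name}: mean score {scores.mean():.3f}, 95% interval "
          f"({np.quantile(scores, 0.025):.3f}, {np.quantile(scores, 0.975):.3f})")
```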
Abstract:
An extensive statistical 'downscaling' study was conducted to relate large-scale climate information from a general circulation model (GCM) to local-scale river flows in SW France for 51 gauging stations ranging from nival (snow-dominated) to pluvial (rainfall-dominated) river systems. This study helps to select the appropriate statistical method at a given spatial and temporal scale for downscaling hydrology in future climate change impact assessments of hydrological resources. The four proposed statistical downscaling models use large-scale predictors (derived from climate model outputs or reanalysis data) that characterize precipitation and evaporation processes in the hydrological cycle to estimate summary flow statistics. The four statistical models used are generalized linear (GLM) and additive (GAM) models, aggregated boosted trees (ABT) and multi-layer perceptron neural networks (ANN). These four models were each applied at two different spatial scales, namely that of a single flow-gauging station (local downscaling) and that of a group of flow-gauging stations having the same hydrological behaviour (regional downscaling). For each statistical model and each spatial resolution, three temporal resolutions were considered, namely daily mean flows, summary statistics of fortnightly flows and a daily 'integrated approach'. The results show that flow sensitivity to atmospheric factors differs significantly between nival and pluvial hydrological systems, which are mainly influenced by shortwave solar radiation and atmospheric temperature, respectively. The non-linear models (i.e. GAM, ABT and ANN) performed better than the linear GLM when simulating fortnightly flow percentiles. The aggregated boosted trees method showed higher and less variable R2 values for downscaling the hydrological variability in both nival and pluvial regimes. Based on the GCM cnrm-cm3 and scenarios A2 and A1B, future relative changes of fortnightly median flows were projected using the regional downscaling approach. The results suggest a global decrease of flow in both pluvial and nival regimes, especially in spring, summer and autumn, whatever the scenario considered. The discussion considers the performance of each statistical method for downscaling flow at different spatial and temporal scales as well as the relationship between atmospheric processes and flow variability.
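A sketch of the linear-versus-nonlinear model comparison follows. scikit-learn's GradientBoostingRegressor stands in for the ABT method and ordinary least squares for the GLM (both substitutions, like the synthetic predictors, are assumptions; the study's exact implementations differ).

```python
# Downscaling sketch: predict a flow statistic from large-scale predictors
# with a linear model vs. boosted trees (synthetic data; sklearn's
# GradientBoostingRegressor stands in for aggregated boosted trees).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([rng.normal(0, 1, n),     # temperature anomaly
                     rng.normal(0, 1, n),     # shortwave radiation
                     rng.normal(0, 1, n)])    # precipitation index
flow = np.exp(0.8 * X[:, 2] - 0.5 * X[:, 0]) + 0.1 * rng.standard_normal(n)

for name, model in [("GLM-like linear", LinearRegression()),
                    ("boosted trees", GradientBoostingRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, flow, cv=5, scoring="r2")
    print(f"{name}: R2 = {r2.mean():.2f} +/- {r2.std():.2f}")
```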
Abstract:
Multivariate statistical methods were used to investigate the causes of toxicity and controls on groundwater chemistry from 274 boreholes in an urban area (London) of the United Kingdom. The groundwater was alkaline to neutral, and its chemistry was dominated by calcium, sodium, and sulfate. Contaminants included fuels, solvents, and organic compounds derived from landfill material. The presence of organic material in the aquifer caused decreases in dissolved oxygen, sulfate and nitrate concentrations, and increases in ferrous iron and ammoniacal nitrogen concentrations. Pearson correlations between toxicity results and the concentrations of individual analytes indicated that concentrations of ammoniacal nitrogen, dissolved oxygen, ferrous iron, and hydrocarbons were important where present. However, principal component and regression analysis suggested no significant correlation between toxicity and chemistry over the whole area. Multidimensional scaling was used to investigate differences between sites caused by historical use, landfill gas status, or position within the sample area. Significant differences were observed between sites with different historical land use and those with different gas status. Examination of the principal component matrix revealed that these differences are related to changes in the importance of reduced chemical species.
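A sketch of the multidimensional scaling step follows, with synthetic standardized chemistry data and sklearn's MDS; the two "land-use" groups and the three analytes are assumptions for illustration.

```python
# MDS sketch: embed boreholes by chemical similarity and compare groups
# (synthetic standardized chemistry; two assumed land-use groups).
import numpy as np
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
landfill = rng.normal([1.0, -1.0, 0.5], 0.5, size=(30, 3))    # NH4-N, DO, Fe(II)
background = rng.normal([0.0, 0.0, 0.0], 0.5, size=(30, 3))
X = StandardScaler().fit_transform(np.vstack([landfill, background]))

coords = MDS(n_components=2, random_state=0).fit_transform(X)
print("landfill centroid:  ", coords[:30].mean(axis=0))
print("background centroid:", coords[30:].mean(axis=0))
```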
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
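The constructed-analogue idea can be sketched compactly: express the current anomaly field as a least-squares combination of past library states, then apply the same combination to the library's lead-time evolution. The fields below are synthetic toys (not HadCM3/HadGEM1 output), and the small ridge term stabilizing the fit is an assumption.

```python
# Constructed-analogue sketch: fit today's anomaly field as a linear
# combination of library states, then forecast with the same combination
# applied to the library's evolved states (synthetic fields).
import numpy as np

rng = np.random.default_rng(7)
n_grid, n_lib = 50, 200
library = rng.standard_normal((n_lib, n_grid))          # past anomaly fields
evolved = 0.6 * library + 0.8 * rng.standard_normal((n_lib, n_grid))

target = 0.9 * library[0] + 0.4 * rng.standard_normal(n_grid)  # "today"

# Least-squares weights over the library (ridge term for stability, assumed).
A = library.T                                            # grid points x states
w = np.linalg.solve(A.T @ A + 1e-2 * np.eye(n_lib), A.T @ target)
forecast = evolved.T @ w                                 # constructed analogue

truth = 0.6 * target                                     # toy "true" evolution
print("pattern correlation:", np.corrcoef(forecast, truth)[0, 1].round(2))
```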
Abstract:
A statistical data analysis methodology was developed to evaluate the field emission properties of many samples of copper oxide nanostructured field emitters. The analysis was largely done in terms of Seppen-Katamuki (SK) charts, field strength and emission current. Physical and mathematical models were derived to describe the effect of small electric field perturbations in the Fowler-Nordheim (F-N) equation, and then to explain the trend of the data represented in the SK charts. The field enhancement factor and the emission area parameters proved to be very sensitive to variations in the electric field for most of the samples. We found that the anode-cathode distance is critical in the field emission characterization of samples having a non-rigid nanostructure.
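The F-N analysis underlying such characterization can be sketched as a linear fit of ln(I/E²) against 1/E, whose slope yields the field enhancement factor. The current-field data below are synthetic, and the work function (4.6 eV, a CuO-like value) and prefactor are assumptions.

```python
# Fowler-Nordheim sketch: extract the field enhancement factor beta from the
# slope of ln(I/E^2) vs 1/E (synthetic data; phi and prefactor are assumed).
import numpy as np

phi = 4.6                      # assumed work function [eV]
b = 6.83e9                     # F-N exponential constant [V m^-1 eV^-3/2]
beta_true = 1200.0

E = np.linspace(4e6, 1e7, 40)  # applied macroscopic field [V/m]
I = 1e-14 * (beta_true * E) ** 2 * np.exp(-b * phi ** 1.5 / (beta_true * E))

slope, intercept = np.polyfit(1 / E, np.log(I / E ** 2), 1)
beta_est = -b * phi ** 1.5 / slope
print(f"field enhancement factor: {beta_est:.0f} (true {beta_true:.0f})")
```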
Abstract:
Knowing how much misalignment is tolerable for a particle accelerator is an important input for the design of these machines. In particle accelerators the beam must be guided and focused using bending magnets and magnetic lenses, respectively. The alignment of the lenses along a transport line aims to ensure that the beam passes through their optical axes and represents a critical point in the assembly of the machine. There are more and more accelerators in the world, many of which are very small machines, yet the existing literature and programs are mostly targeted at large machines; in this work we describe a method suitable for small machines. The method consists in determining statistically the alignment tolerance of a set of lenses. Differently from the methods used in standard simulation codes for particle accelerators, the statistical method we propose makes it possible to evaluate particle losses as a function of the alignment accuracy of the optical elements in a transport line. Results for 100 keV electrons on the 3.5-m-long conforming beam stage of the IFUSP Microtron are presented as an example of use.
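The statistical method described, evaluating particle losses as a function of alignment accuracy, can be sketched with randomly offset thin lenses, drift-kick transport and a hard aperture. All lattice numbers below are assumptions, not the IFUSP Microtron values.

```python
# Monte Carlo alignment-tolerance sketch: random transverse lens offsets,
# thin-lens transport, fraction of particles surviving an aperture
# (toy lattice; every number here is an assumption).
import numpy as np

rng = np.random.default_rng(8)
f, drift, aperture = 0.5, 0.7, 5e-3    # focal length [m], spacing [m], radius [m]
n_lenses, n_particles = 10, 2_000

def surviving_fraction(sigma_align, n_machines=50):
    fractions = []
    for _ in range(n_machines):                  # one random machine per pass
        x = rng.normal(0, 1e-3, n_particles)     # initial positions [m]
        xp = rng.normal(0, 1e-3, n_particles)    # initial angles [rad]
        alive = np.ones(n_particles, dtype=bool)
        for _ in range(n_lenses):
            dx = rng.normal(0, sigma_align)      # this lens's transverse offset
            x = x + drift * xp                   # drift between elements
            xp = xp - (x - dx) / f               # thin lens kicks about its own axis
            alive &= np.abs(x) < aperture
        fractions.append(alive.mean())
    return np.mean(fractions)

for sigma in (0.0, 0.1e-3, 0.5e-3, 1e-3):
    print(f"alignment sigma {sigma * 1e3:.1f} mm -> "
          f"{surviving_fraction(sigma):.1%} transmitted")
```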
Abstract:
The topology of real-world complex networks, such as those of transportation and communication, is always changing with time. Such changes can arise not only as a natural consequence of their growth, but also due to major modifications in their intrinsic organization. For instance, the network of transportation routes between the cities and towns (hence locations) of a given country undergoes a major change with the progressive implementation of commercial air transportation. While the locations could be originally interconnected through highways (paths, giving rise to geographical networks), transportation between those sites progressively shifted to, or was complemented by, air transportation, with scale-free characteristics. In the present work we introduce the path-star transformation (in its uniform and preferential versions) as a means to model such network transformations where paths give rise to stars of connectivity. It is also shown, through optimal multivariate statistical methods (i.e. canonical projections and maximum likelihood classification), that while the US highways network adheres closely to a geographical network model, its path-star transformation yields a network whose topological properties closely resemble those of the respective airport transportation network.
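The uniform version of the path-star transformation can be sketched with networkx: each path has its chain edges removed and replaced by a star centred on a uniformly chosen hub node. The two-chain toy graph below is an assumption standing in for the highway network, and the path bookkeeping is deliberately simplified.

```python
# Uniform path-star transformation sketch: each listed path's chain edges are
# replaced by a star centred on a uniformly chosen hub (toy graph; the paper
# applies this to the US highway network).
import random
import networkx as nx

random.seed(9)

def path_star_uniform(G, paths):
    """Replace each node sequence in `paths` by a star with a random hub."""
    H = G.copy()
    for path in paths:
        H.remove_edges_from(zip(path, path[1:]))   # drop the chain edges
        hub = random.choice(path)                  # uniform hub choice
        H.add_edges_from((hub, v) for v in path if v != hub)
    return H

# A toy "geographical" network: two chains joined at node 5.
G = nx.Graph()
paths = [[0, 1, 2, 3, 4, 5], [5, 6, 7, 8, 9, 10]]
for p in paths:
    nx.add_path(G, p)

H = path_star_uniform(G, paths)
print("degree sequence before:", sorted(dict(G.degree()).values()))
print("degree sequence after: ", sorted(dict(H.degree()).values()))
```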