906 resultados para statistical techniques
Resumo:
The present work involves a computational study of soot formation and transport in a laminar acetylene diffusion flame perturbed by a convecting line vortex. The topology of the soot contours (as in an earlier experimental work [4]) has been investigated. More soot was produced when the vortex was introduced from the air side than from the fuel side, and the soot topography was more diffuse for the air-side vortex. The computational model was found to be in good agreement with the experimental work [4], and the simulation enabled a study of the various parameters affecting soot transport. Temperatures were higher for the air-side vortex than for the fuel-side vortex. For the fuel-side vortex, the abundance of fuel in the vortex core resulted in stoichiometrically rich combustion in the core and a more discrete soot topography; overall soot production was also low. For the air-side vortex, the abundance of air in the core resulted in higher temperatures and a greater soot yield. Statistical techniques such as the probability density function, the correlation coefficient, and the conditional probability function were introduced to explain the transient dependence of soot yield and transport on parameters such as temperature and acetylene concentration.
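As a rough illustration of the statistical machinery this abstract mentions, the sketch below computes a correlation coefficient and a simple conditional probability between a temperature sample and a soot-yield signal. The arrays, coupling strength, and median thresholds are invented for illustration; they are not the study's simulated fields.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two of the fields discussed above: a temperature
# sample and a soot-yield signal loosely coupled to it (invented numbers).
temperature = rng.normal(1900.0, 150.0, size=1000)              # K
soot_yield = (0.4 * (temperature - 1900.0) / 150.0
              + rng.normal(0.0, 1.0, size=1000))                # arbitrary units

# Correlation coefficient between soot yield and temperature.
corr = np.corrcoef(temperature, soot_yield)[0, 1]

# Conditional probability P(high soot | high temperature), with "high"
# defined as exceeding the sample median.
hot = temperature > np.median(temperature)
sooty = soot_yield > np.median(soot_yield)
p_cond = sooty[hot].mean()
```

A full analysis would estimate the probability density functions as well (e.g. with histograms or kernel density estimates), but the conditional-probability step above already captures the transient-dependence idea.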
Resumo:
CHAP 1 - Introduction to the Guide
CHAP 2 - Solution chemistry of carbon dioxide in sea water
CHAP 3 - Quality assurance
CHAP 4 - Recommended standard operating procedures (SOPs)
SOP 1 - Water sampling for the parameters of the oceanic carbon dioxide system
SOP 2 - Determination of total dissolved inorganic carbon in sea water
SOP 3a - Determination of total alkalinity in sea water using a closed-cell titration
SOP 3b - Determination of total alkalinity in sea water using an open-cell titration
SOP 4 - Determination of p(CO2) in air that is in equilibrium with a discrete sample of sea water
SOP 5 - Determination of p(CO2) in air that is in equilibrium with a continuous stream of sea water
SOP 6a - Determination of the pH of sea water using a glass/reference electrode cell
SOP 6b - Determination of the pH of sea water using the indicator dye m-cresol purple
SOP 7 - Determination of dissolved organic carbon and total dissolved nitrogen in sea water
SOP 7 en Español - Determinacion de carbono organico disuelto y nitrogeno total disuelto en agua de mar
SOP 11 - Gravimetric calibration of the volume of a gas loop using water
SOP 12 - Gravimetric calibration of volume delivered using water
SOP 13 - Gravimetric calibration of volume contained using water
SOP 14 - Procedure for preparing sodium carbonate solutions for the calibration of coulometric CT measurements
SOP 21 - Applying air buoyancy corrections
SOP 22 - Preparation of control charts
SOP 23 - Statistical techniques used in quality assessment
SOP 24 - Calculation of the fugacity of carbon dioxide in the pure gas or in air
CHAP 5 - Physical and thermodynamic data
Errata - to the hard copy of the Guide to best practices for ocean CO2 measurements
Resumo:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm - called Relaxation Expectation Maximization (REM) - is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
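REM itself is not spelled out in the abstract, but it extends the standard Expectation Maximization loop for latent variable models. A minimal EM fit of a two-component Gaussian mixture - the baseline such methods build on - might look like this (toy 1-D data; the relaxation schedule that distinguishes REM is not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 1-D data drawn from two well-separated clusters.
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])

# Standard EM for a two-component Gaussian mixture.
mu = np.array([-1.0, 1.0])     # component means (crude initial guess)
sigma = np.array([1.0, 1.0])   # component standard deviations
pi = np.array([0.5, 0.5])      # mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the weighted sufficient statistics.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)
```

The local-maximum problem the dissertation targets shows up here as sensitivity to the initial `mu`; REM's contribution, per the abstract, is to alleviate exactly that.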
Resumo:
670 p. Introduction, methodology, discussion, and conclusions chapters in Spanish and English.
Resumo:
The growth of industrial waste and the continuous production of residues raise many environmental concerns. In this context, the disposal of used tires has become a major problem because of the little attention given to their final destination. This research therefore proposes the production of a polymer blend of polypropylene (PP), ethylene-propylene-diene rubber (EPDM), and scrap rubber tire powder (SRT). Response Surface Methodology (RSM), a collection of statistical and mathematical techniques useful for developing, improving, and optimizing processes, was applied to the investigation of the ternary blends. After suitable processing in a twin-screw extruder and injection molding, the mechanical properties of tensile strength and impact strength were determined and used as response variables. In parallel, scanning electron microscopy (SEM) was used to investigate the morphology of the different blends and to aid interpretation of the results. With specific statistical tools and a minimum number of experiments, it was possible to develop response-surface models and to optimize the concentrations of the different blend components with respect to mechanical performance; in addition, by modifying the particle size of the tire powder, an even more significant increase in mechanical performance was achieved.
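A minimal sketch of the response-surface step described above: fit a second-order polynomial in two coded factors by least squares and locate the stationary point of the fitted surface. The design points, factors, and response function below are invented stand-ins, not the study's formulation data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-factor design in coded units (e.g. two blend fractions).
x1 = rng.uniform(-1.0, 1.0, 30)
x2 = rng.uniform(-1.0, 1.0, 30)
# Simulated response with a known optimum at (0.5, -0.25).
y = 5.0 - (x1 - 0.5) ** 2 - 2.0 * (x2 + 0.25) ** 2 + rng.normal(0, 0.05, 30)

# Second-order response-surface model:
#   y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic (candidate optimum):
# gradient b + 2 B x = 0  =>  x* = -0.5 * B^{-1} b
b = beta[1:3]
B = np.array([[beta[3], beta[5] / 2.0], [beta[5] / 2.0, beta[4]]])
x_opt = -0.5 * np.linalg.solve(B, b)
```

In a real RSM study the design points would come from a central composite or similar design rather than uniform sampling, and the eigenvalues of `B` would be checked to confirm the stationary point is a maximum.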
Resumo:
Nowadays, most operations carried out by companies and organizations are stored in databases that researchers can explore to obtain useful information in support of decision making. Because of the large volumes involved, extracting and analyzing the data is not a simple task. The overall process of converting raw data into useful information is called Knowledge Discovery in Databases (KDD). One of the stages of this process is Data Mining, which consists of applying algorithms and statistical techniques to explore information contained implicitly in large databases. Many fields use the KDD process to facilitate the recognition of patterns or models in their data. This work presents a practical application of the KDD process using the database of 9th-grade primary school students in the State of Rio de Janeiro, made available on the INEP website, with the goal of discovering interesting patterns relating a student's socioeconomic profile to the mathematics results obtained on the 2011 Prova Brasil. Using the Weka tool (Waikato Environment for Knowledge Analysis), the data-mining task known as association was applied, and rules were extracted with the Apriori algorithm. This study found, for example, that students who have already been held back a year tend to obtain a lower score on the mathematics test, while students who have never been held back performed better. Other factors, such as the student's future aspirations, the parents' schooling, a preference for mathematics, the student's ethnic group, and whether the student frequently reads websites, also influence learning positively or negatively.
An analysis was also made according to the infrastructure of the school the student attends, which showed that the discovered patterns hold regardless of whether the students attend schools with good or poor infrastructure. The results can be used to profile students with better or worse performance in mathematics and to inform public education policies aimed at primary schooling.
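The association-rule step can be sketched without Weka. The toy Apriori-style miner below uses invented attribute=value transactions standing in for the survey records; the items and thresholds are illustrative only.

```python
from itertools import combinations

# Tiny invented transactions (one set of attribute=value items per student);
# illustrative only, not the INEP microdata.
transactions = [
    {"repeated=yes", "score=low"},
    {"repeated=yes", "score=low", "reads_sites=no"},
    {"repeated=no", "score=high", "reads_sites=yes"},
    {"repeated=no", "score=high"},
    {"repeated=no", "score=high", "reads_sites=yes"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.8

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level 1: frequent single items; level 2: candidate pairs built only
# from frequent items (the Apriori pruning idea).
items = sorted({i for t in transactions for i in t})
freq1 = [i for i in items if support({i}) >= MIN_SUPPORT]
pairs = [set(p) for p in combinations(freq1, 2) if support(set(p)) >= MIN_SUPPORT]

# Rules lhs -> rhs from each frequent pair, filtered by confidence.
rules = []
for pair in pairs:
    for lhs in pair:
        (rhs,) = pair - {lhs}
        confidence = support(pair) / support({lhs})
        if confidence >= MIN_CONFIDENCE:
            rules.append((lhs, rhs, confidence))
```

On these toy transactions the miner recovers the kind of rule the abstract reports, e.g. "repeated=no -> score=high"; a real run on the Prova Brasil data would use many more attributes and Apriori levels beyond pairs.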
Resumo:
The application of electronic dispersion compensation (EDC) technology to extended-reach operation of multimode-fibre (MMF) links is considered. The essential theory is described in the context of MMF and the application of statistical techniques to predict supported link lengths for EDC-enabled MMF links is presented.
Resumo:
Drought frequency analysis can be performed with statistical techniques developed for determining recurrence intervals for extreme precipitation and flood events (Linsley et al 1992). The drought analysis method discussed in this paper uses the log-Pearson Type III distribution, which has been widely used in flood frequency research. Some of the difficulties encountered when using this distribution for drought analysis are investigated.
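A hedged sketch of the frequency-analysis recipe this abstract describes: fit a log-Pearson Type III distribution by the method of moments on log10 of an annual series, then evaluate quantiles with the Wilson-Hilferty frequency-factor approximation. The series below is synthetic, and the function shows the exceedance (flood) tail; nothing here reproduces the paper's data.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

# Hypothetical 60-year annual series (roughly lognormal, invented).
flows = np.exp(rng.normal(3.0, 0.4, size=60))

# Log-Pearson Type III fit by method of moments on log10 of the data.
logs = np.log10(flows)
n = len(logs)
mean, std = logs.mean(), logs.std(ddof=1)
skew = (n / ((n - 1) * (n - 2))) * np.sum(((logs - mean) / std) ** 3)

def quantile(T):
    """Magnitude of the T-year event, using the Wilson-Hilferty
    frequency-factor approximation for the Pearson III deviate."""
    z = NormalDist().inv_cdf(1.0 - 1.0 / T)    # standard normal quantile
    k = skew / 6.0
    K = z if abs(skew) < 1e-6 else (2.0 / skew) * ((1.0 + k * z - k * k) ** 3 - 1.0)
    return 10.0 ** (mean + K * std)
```

For drought (low-flow) analysis one evaluates the lower tail instead, e.g. `NormalDist().inv_cdf(1.0 / T)`; the difficulties the paper investigates (zero flows, negative skew behavior) arise precisely in that tail.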
Resumo:
We have applied a number of objective statistical techniques to define homogeneous climatic regions for the Pacific Ocean, using COADS (Woodruff et al 1987) monthly sea surface temperature (SST) for 1950-1989 as the key variable. The basic data comprised all global 4°x4° latitude/longitude boxes with enough data available to yield reliable long-term means of monthly mean SST. An R-mode principal components analysis of these data, following a technique first used by Stidd (1967), yields information about harmonics of the annual cycles of SST. We used the spatial coefficients (one for each 4-degree box and eigenvector) as input to a K-means cluster analysis to classify the gridbox SST data into 34 global regions, of which 20 cover the Pacific and Indian oceans. Seasonal time series were then produced for each of these regions. For comparison purposes, the variance spectrum of each regional anomaly time series was calculated. Most of the significant spectral peaks occur near the biennial (2.1-2.2 years) and ENSO (~3-6 years) time scales in the tropical regions. Decadal scale fluctuations are important in the mid-latitude ocean regions.
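The regionalization pipeline (R-mode principal components followed by K-means on the spatial coefficients) can be sketched as follows. The "gridboxes" here are synthetic annual cycles in two phase-opposed bands rather than COADS data, and k=2 rather than the paper's 34 regions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "gridboxes": 200 rows x 12 monthly means, two latitude bands
# with opposite seasonal phase (an invented stand-in for the SST fields).
months = np.arange(12)
cycle = np.sin(2 * np.pi * months / 12)
data = np.vstack([ cycle + 0.3 * rng.normal(size=(100, 12)),
                  -cycle + 0.3 * rng.normal(size=(100, 12))])

# R-mode PCA via SVD of the anomaly matrix; keep the two leading components.
anom = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
scores = U[:, :2] * S[:2]          # spatial coefficients per gridbox

# Plain K-means (k=2) on the PC scores, seeded at the PC1 extremes.
centers = scores[[np.argmin(scores[:, 0]), np.argmax(scores[:, 0])]]
for _ in range(25):
    labels = np.argmin(((scores[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == j].mean(axis=0) for j in range(2)])
```

With real SST data one would retain enough eigenvectors to capture the annual-cycle harmonics before clustering, rather than the two components used in this toy.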
Resumo:
Spatial pattern metrics have routinely been applied to characterize and quantify structural features of terrestrial landscapes and have demonstrated great utility in landscape ecology and conservation planning. The important role of spatial structure in ecology and management is now commonly recognized, and recent advances in marine remote sensing technology have facilitated the application of spatial pattern metrics to the marine environment. However, it is not yet clear whether concepts, metrics, and statistical techniques developed for terrestrial ecosystems are relevant for marine species and seascapes. To address this gap in our knowledge, we reviewed, synthesized, and evaluated the utility and application of spatial pattern metrics in the marine science literature over the past 30 yr (1980 to 2010). In total, 23 studies characterized seascape structure, of which 17 quantified spatial patterns using a 2-dimensional patch-mosaic model and 5 used a continuously varying 3-dimensional surface model. Most seascape studies followed terrestrial-based studies in their search for ecological patterns and applied or modified existing metrics. Only 1 truly unique metric was found (hydrodynamic aperture applied to Pacific atolls). While there are still relatively few studies using spatial pattern metrics in the marine environment, they have suffered from similar misuse as reported for terrestrial studies, such as the lack of a priori considerations or the problem of collinearity between metrics. Spatial pattern metrics offer great potential for ecological research and environmental management in marine systems, and future studies should focus on (1) the dynamic boundary between the land and sea; (2) quantifying 3-dimensional spatial patterns; and (3) assessing and monitoring seascape change.
Resumo:
Work-family relations include a positive side (work-family enrichment) and a negative side (work-family conflict). With the development of positive psychology, researchers have turned their focus from work-family conflict to work-family enrichment. Research on the work-family interface, however, is still young, and most attention has been fixed on conflict. This research, based on skilled workers in manufacturing, discusses the antecedents and mechanisms of work-family relations through a series of studies covering the relations among job characteristics, work-family relations, and work outcomes, and the relations among job resources, work-family relations, marital adjustment, work outcomes, and role salience. Workers in manufacturing were investigated through literature research, questionnaire surveys, and other methods. Several statistical techniques, including exploratory factor analysis, confirmatory factor analysis (CFA), structural equation modeling (SEM), and multiple-group analysis, were used to reach the following conclusions. First, work-family enrichment is a variable independent of work-family conflict and has a broader influence. Work-family conflict increases as job demands increase and also increases negative work outcomes; work-family enrichment is affected by both job demands and job resources and influences both positive and negative work outcomes. Second, marital adjustment penetrates the work domain through the inner effect mechanism of work-family enrichment and thereby influences work outcomes. Work→family enrichment and family→work enrichment facilitate each other, and marital adjustment influences work outcomes through the reciprocal relationship work→family enrichment ↔ family→work enrichment.
Finally, family-role salience has a direct enhancing effect on organizational commitment, and its importance can also strengthen the positive effect of work→family enrichment on organizational commitment.
Resumo:
General aptitude tests have long played an important role in vocational guidance and preliminary personnel selection. The present research aimed at estimating the reliability and validity of a preliminarily constructed Chinese version of the General Aptitude Test Battery (GATB). A Chinese version of the GATB was first developed on the basis of the Japanese version, and was then administered to a sample of nearly 500 secondary school students in Beijing City. Its reliability and validity were studied through a series of univariate and multivariate statistical techniques. The results showed that the reliability of the test battery and the criterion-related validities of some subtests were acceptable. Concerning construct validity, three or four common factors were identified by exploratory factor analysis, and a simpler, reasonable four-factor solution was reached by confirmatory factor analysis; desirable group differences were also discovered by analyses of variance and multivariate analysis of variance. Generally, it has been demonstrated that the reliability and validity of the Chinese version of the GATB constructed in the present research are satisfactory.
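The abstract does not name its reliability estimators, but a standard internal-consistency statistic for a multi-item subtest is Cronbach's alpha. A sketch on simulated item scores (hypothetical data, not the GATB sample):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical item scores: 300 examinees x 8 items driven by a single
# latent ability plus item-level noise (invented data).
ability = rng.normal(size=(300, 1))
items = ability + 0.8 * rng.normal(size=(300, 8))

# Cronbach's alpha: internal-consistency reliability of the 8-item scale,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
```

Values near 1 indicate that the items measure a common construct; the multivariate steps the abstract reports (factor analysis, MANOVA) would then probe what that construct is.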
Resumo:
Nonlinear multivariate statistical techniques on fast computers offer the potential to capture more of the dynamics of the high dimensional, noisy systems underlying financial markets than traditional models, while making fewer restrictive assumptions. This thesis presents a collection of practical techniques to address important estimation and confidence issues for Radial Basis Function networks arising from such a data driven approach, including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data mining" problem. Novel applications in the finance area are described, including customized, adaptive option pricing and stock price prediction.
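A minimal Radial Basis Function network of the kind described: Gaussian units at fixed centers, with only the output weights fitted, by ridge-regularized linear least squares. The target series and all hyperparameters (centers, width, penalty) are illustrative choices, not the thesis's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy noisy target standing in for a financial series.
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x) + 0.1 * rng.normal(size=200)

# RBF network: Gaussian features at fixed centers; the output layer is
# linear in the features, so fitting reduces to ridge regression.
centers = np.linspace(-3.0, 3.0, 10)
width = 0.8                                     # shared kernel width
Phi = np.exp(-0.5 * ((x[:, None] - centers) / width) ** 2)
Phi = np.column_stack([np.ones_like(x), Phi])   # bias + 10 RBF features
lam = 1e-3                                      # ridge penalty
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))     # in-sample fit error
```

The pruning and pointwise-error machinery the thesis contributes sits on top of exactly this linear-in-the-weights structure, which is what makes such estimates tractable.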
Resumo:
MOTIVATION: Technological advances that allow routine identification of high-dimensional risk factors have led to high demand for statistical techniques that enable full utilization of these rich sources of information for genetics studies. Variable selection for censored outcome data as well as control of false discoveries (i.e. inclusion of irrelevant variables) in the presence of high-dimensional predictors present serious challenges. This article develops a computationally feasible method based on boosting and stability selection. Specifically, we modified the component-wise gradient boosting to improve the computational feasibility and introduced random permutation in stability selection for controlling false discoveries. RESULTS: We have proposed a high-dimensional variable selection method by incorporating stability selection to control false discovery. Comparisons between the proposed method and the commonly used univariate and Lasso approaches for variable selection reveal that the proposed method yields fewer false discoveries. The proposed method is applied to study the associations of 2339 common single-nucleotide polymorphisms (SNPs) with overall survival among cutaneous melanoma (CM) patients. The results have confirmed that BRCA2 pathway SNPs are likely to be associated with overall survival, as reported by previous literature. Moreover, we have identified several new Fanconi anemia (FA) pathway SNPs that are likely to modulate survival of CM patients. AVAILABILITY AND IMPLEMENTATION: The related source code and documents are freely available at https://sites.google.com/site/bestumich/issues. CONTACT: yili@umich.edu.
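A rough sketch of the stability-selection idea: repeatedly subsample the data, run a base selector on each subsample, and keep only variables chosen in a large fraction of subsamples. Here a simple correlation screen stands in for the paper's component-wise boosting base learner, and the data are synthetic rather than the melanoma SNP set.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic high-dimensional data: 100 samples, 200 predictors, and only
# the first three truly associated with the outcome (stand-in for SNPs).
n, p = 100, 200
X = rng.normal(size=(n, p))
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=n)

# Stability selection: on each half-subsample, keep the top-k predictors by
# absolute correlation with y (a stand-in base learner), then retain the
# variables selected in at least 70% of subsamples.
k, B = 10, 100
counts = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, n // 2, replace=False)
    Xc = X[idx] - X[idx].mean(axis=0)
    yc = y[idx] - y[idx].mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    counts[np.argsort(corr)[-k:]] += 1

stable = np.where(counts / B >= 0.7)[0]
```

Irrelevant predictors are picked only sporadically across subsamples, so thresholding the selection frequency is what controls the false discoveries the abstract emphasizes; the paper additionally randomizes via permutation and handles censored outcomes, neither of which is shown here.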