962 results for Maximum entropy method


Relevance:

80.00%

Publisher:

Abstract:

In this work, we formulate a theory for the simulation of slow transport phenomena in atomistic systems. We first develop the theoretical framework in the context of equilibrium statistical ensembles, and then adapt it to model ensembles away from equilibrium. The theory rests on the principles of statistical mechanics, in particular Jaynes' maximum entropy principle, valid for the treatment of both equilibrium and non-equilibrium systems, and on mean-field approximation theory. The problem is expressed mathematically as a variational principle in which a free entropy, rather than a free energy, is maximized. The proposed formulation defines atomistic equivalents of macroscopic variables such as the temperature and the molar fraction, which are not required to be uniform but can vary from particle to particle, so that non-uniform macroscopic fields can be considered. We complement the framework with Monte Carlo quadrature rules, which yield computable models. We then develop the full set of equations governing transport processes. A dissipation inequality for the entropy production is derived in terms of discrete thermodynamic forces and fluxes. This inequality identifies the structure that the discrete kinetic potentials must satisfy; these potentials couple the rates of change of the microscopic variables to the corresponding driving forces, and must be completed with a phenomenological relation of the Onsager type. Finally, we provide numerical validations illustrating the ability of the theory to simulate equilibrium properties and surface segregation in metallic alloys. We first assess the ability of a simple mean-field model to reproduce thermodynamic equilibrium properties in systems with atomic resolution. Then, we evaluate the ability of the model to reproduce transport processes in complex systems over times that are long compared with the characteristic atomic time scales.
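Jaynes' maximum entropy principle, on which the theory rests, can be illustrated with a small numerical sketch: over discrete states with a constrained mean of some observable, the entropy-maximizing distribution is exponential in that observable, with the Lagrange multiplier fixed by the constraint. The function name and data below are hypothetical, not from the thesis:

```python
import numpy as np

def maxent_distribution(values, target_mean, iters=200):
    """Maximum entropy distribution over discrete states with a fixed mean.

    Maximizing -sum(p log p) subject to the mean constraint yields
    p_i ~ exp(-lam * v_i); the multiplier lam is found by bisection,
    since the constrained mean is monotone decreasing in lam."""
    values = np.asarray(values, dtype=float)

    def mean_of(lam):
        w = np.exp(-lam * (values - values.min()))  # shift exponent for stability
        p = w / w.sum()
        return float(p @ values)

    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_of(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * (values - values.min()))
    return w / w.sum()
```

With energy levels as the observable this recovers the Boltzmann distribution, the multiplier playing the role of an inverse temperature; a target mean at the centre of symmetric values recovers the uniform distribution.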

Relevance:

80.00%

Publisher:

Abstract:

Operational Modal Analysis consists of estimating the modal parameters of a structure (natural frequencies, damping ratios and modal vectors) from output-only vibration measurements. The modal vectors can only be estimated where a sensor is placed, so when the number of available sensors is lower than the number of tested points, it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors): some sensors stay at the same position from setup to setup, and the others change position until all the tested points are covered. The permanent sensors are then used to merge the mode shapes estimated in each setup (the partial modal vectors) into global modal vectors. Traditionally, the partial modal vectors are estimated independently, setup by setup, and the global modal vectors are obtained in a post-processing phase. In this work we present two state-space models that can be used to process all the recorded setups at the same time, and we show how these models can be estimated using the maximum likelihood method. As a result, the global mode shape of each mode is obtained automatically, and a single value for the natural frequency and damping ratio of each mode is computed. Finally, both models are compared using real measured data.
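The traditional merging step can be sketched as follows; this is a simplified illustration with hypothetical data structures and a least-squares scale per setup, not the state-space formulation proposed in the paper:

```python
import numpy as np

def merge_mode_shapes(setups, ref_dofs):
    """Merge partial mode shapes from multiple setups into a global shape.

    setups   -- list of dicts {dof: modal amplitude}; setups[0] is the base
    ref_dofs -- DOFs instrumented by the permanent (reference) sensors
    """
    merged = dict(setups[0])
    for setup in setups[1:]:
        r_base = np.array([merged[d] for d in ref_dofs])
        r_this = np.array([setup[d] for d in ref_dofs])
        # least-squares scale aligning this setup's references with the base
        alpha = (r_this @ r_base) / (r_this @ r_this)
        for dof, value in setup.items():
            merged.setdefault(dof, alpha * value)
    return merged
```

Because each setup is identified independently, its partial vector carries an arbitrary scale; the shared reference sensors fix that scale, which is exactly the step the paper's joint state-space models avoid by estimating all setups at once.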

Relevance:

80.00%

Publisher:

Abstract:

Patterns in sequences of amino acid hydrophobic free energies predict secondary structures in proteins. In protein folding, matches in hydrophobic free energy statistical wavelengths appear to contribute to selective aggregation of secondary structures in “hydrophobic zippers.” In a similar setting, the use of Fourier analysis to characterize the dominant statistical wavelengths of peptide ligands’ and receptor proteins’ hydrophobic modes to predict such matches has been limited by the aliasing and end effects of short peptide lengths, as well as the broad-band, mode multiplicity of many of their frequency (power) spectra. In addition, the sequence locations of the matching modes are lost in this transformation. We make new use of three techniques to address these difficulties: (i) eigenfunction construction from the linear decomposition of the lagged covariance matrices of the ligands and receptors as hydrophobic free energy sequences; (ii) maximum entropy, complex poles power spectra, which select the dominant modes of the hydrophobic free energy sequences or their eigenfunctions; and (iii) discrete, best bases, trigonometric wavelet transformations, which confirm the dominant spectral frequencies of the eigenfunctions and locate them as (absolute valued) moduli in the peptide or receptor sequence. The leading eigenfunction of the covariance matrix of a transmembrane receptor sequence locates the same transmembrane segments seen in n-block-averaged hydropathy plots while leaving the remaining hydrophobic modes unsmoothed and available for further analyses as secondary eigenfunctions. In these receptor eigenfunctions, we find a set of statistical wavelength matches between peptide ligands and their G-protein and tyrosine kinase coupled receptors, ranging across examples from 13.10 amino acids in acid fibroblast growth factor to 2.18 residues in corticotropin releasing factor. 
We find that the wavelet-located receptor modes in the extracellular loops are compatible with studies of receptor chimeric exchanges and point mutations. A nonbinding corticotropin-releasing factor receptor mutant is shown to have lost the signatory mode common to the normal receptor and its ligand. Hydrophobic free energy eigenfunctions and their transformations offer new quantitative physical homologies in database searches for peptide-receptor matches.
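Step (ii), the maximum entropy power spectrum, is in practice an autoregressive spectral estimate. A minimal sketch of Burg's recursion, a standard maximum entropy spectral estimator (function names are hypothetical, and this is not the authors' exact implementation):

```python
import numpy as np

def burg_ar(x, order):
    """Fit AR coefficients by Burg's method (maximum entropy spectral analysis)."""
    ef = np.asarray(x, dtype=float).copy()   # forward prediction errors
    eb = ef.copy()                           # backward prediction errors
    a = np.array([1.0])
    power = float(ef @ ef) / len(ef)
    for _ in range(order):
        f, b = ef[1:], eb[:-1]
        k = -2.0 * (f @ b) / (f @ f + b @ b)  # reflection coefficient
        ef, eb = f + k * b, b + k * f
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                   # Levinson-style update of A(z)
        power *= 1.0 - k * k
    return a, power

def maxent_spectrum(a, power, freqs):
    """Evaluate the AR (maximum entropy) power spectrum at normalized frequencies."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return power / np.abs(z @ a) ** 2
```

Because the AR model extrapolates the autocorrelation sequence with maximum entropy, short sequences such as peptide hydrophobicity profiles yield sharper dominant modes than a raw periodogram, which is the motivation the abstract gives for avoiding plain Fourier analysis.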

Relevance:

80.00%

Publisher:

Abstract:

The aim of this research was to evaluate genetic aspects related to in vitro embryo production in the Guzerá breed. The first study focused on estimating genetic and phenotypic (co)variances for traits related to embryo production and on detecting a possible association with age at first calving (AFC). Low to moderate heritabilities were found for traits related to oocyte and embryo production. There was a weak genetic association between traits linked to assisted reproduction and age at first calving. The second study evaluated genetic and inbreeding trends in a Guzerá population in Brazil. Donors and in vitro produced embryos were treated as two subpopulations in order to compare annual genetic change and inbreeding coefficients. The annual trend of the inbreeding coefficient (F) was higher for the overall population, with a quadratic effect detected. However, the mean F of the embryo subpopulation was higher than that of the overall population and of the donors. A higher annual genetic gain for age at first calving and for 305-day milk yield was observed among in vitro produced embryos than among donors or in the overall population. The third study examined the effects of the inbreeding coefficients of the donor, of the sire (used for in vitro fertilization) and of the embryos on in vitro embryo production outcomes in the Guzerá breed. Effects of donor and embryo inbreeding on the studied traits were detected. The fourth (and final) study compared the fit of linear and generalized linear mixed models under restricted maximum likelihood (REML) and their suitability for discrete variables. Four hierarchical models assuming different distributions were fitted to the count data in the database.
Inference was based on residual diagnostics and on comparing ratios of variance components across models for each variable. Poisson models outperformed both the linear model (with and without transformation of the response) and the negative binomial model in goodness of fit and predictive ability, despite clear differences in the distributions of the variables. Among the models tested, the worst fit was obtained for the linear model with a logarithmic transformation (log10(x + 1)) of the response variable.
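The final study's comparison of count-data models hinges on fitting Poisson regressions. A minimal fixed-effects sketch (IRLS for a Poisson GLM with log link, on simulated counts, standing in for the full REML mixed models of the study):

```python
import numpy as np

def fit_poisson(X, y, n_iter=50):
    """Fit a Poisson regression (log link) by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # conditional mean
        W = mu                           # Poisson variance function: Var = mu
        z = eta + (y - mu) / mu          # working response
        beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))
    return beta

# simulated count data, e.g. numbers of viable embryos per session (hypothetical)
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 0.3 * x))
beta = fit_poisson(X, y)
```

Modelling the counts on their natural scale, as here, is what the Poisson models do; the log10(x + 1) linear model the study rejects instead transforms the response and fits ordinary least squares.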

Relevance:

80.00%

Publisher:

Abstract:

In genotype-by-environment (G × E) experiments it is common to observe the behaviour of the genotypes with respect to several attributes in the environments considered. The analysis of this kind of experiment has been widely addressed for the case of a single attribute. This thesis presents some alternative analyses that consider genotypes, environments and attributes simultaneously. The first is based on the maximum likelihood mixture clustering method (Mixclus) and on three-mode principal component analysis (3MPCA), which allow the analysis of three-way tables; these two methods have been used extensively in psychology and chemistry, but little in agriculture. The second is a methodology that combines the additive main effects and multiplicative interaction (AMMI) model, an efficient model for the analysis of (G × E) experiments with one attribute, with generalized Procrustes analysis, which makes it possible to compare configurations of points and to provide a numerical measure of how much they differ. Finally, an alternative for imputing missing data in (G × E) experiments is presented, since missing data are very common in these experiments. It is concluded that the proposed methodologies are useful tools for the analysis of multi-attribute (G × E) experiments.
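Generalized Procrustes analysis, used in the second approach, can be illustrated in its two-configuration form with SciPy. This sketch uses made-up genotype coordinates, not data from the thesis:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
# scores of 10 genotypes on 2 axes in one environment/attribute (hypothetical)
config_a = rng.standard_normal((10, 2))

# the same configuration rotated, scaled and translated
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
config_b = 2.5 * config_a @ rot + 1.0

# disparity measures how much the configurations still differ after
# optimal translation, scaling and rotation
m1, m2, disparity = procrustes(config_a, config_b)
```

Here the disparity is essentially zero because the two configurations agree up to a similarity transform; G × E configurations that genuinely differ across environments or attributes give larger values, which is the numerical measure of difference the thesis exploits.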

Relevance:

80.00%

Publisher:

Abstract:

We present simultaneous and continuous observations of the Hα, Hβ, He I D_3, Na I D_1, D_2 doublet and the Ca II H&K lines for the RS CVn system HR 1099. The spectroscopic observations were obtained during the MUSICOS 1998 campaign, involving several observatories and instruments, both echelle and long-slit spectrographs. During this campaign, HR 1099 was observed almost continuously for more than 8 orbital periods of 2.8 days. Two large optical flares were observed, both showing an increase in the emission of Hα, Ca II H&K, Hβ and He I D_3 and a strong filling-in of the Na I D_1, D_2 doublet. Contemporaneous photometric observations were carried out with the robotic telescopes APT-80 of Catania and Phoenix-25 of Fairborn Observatories. Maps of the distribution of the spotted regions on the photospheres of the binary components were derived using the Maximum Entropy and Tikhonov photometric regularization criteria. Rotational modulation was observed in Hα and He I D_3, in anti-correlation with the photometric light curves. Both flares occurred at the same binary phase (0.85), suggesting that these events took place in the same active region. Simultaneous X-ray observations, performed by the ASM on board RXTE, show several flare-like events, some of which correlate well with the observed optical flares. Rotational modulation in the X-ray light curve has been detected, with minimum flux when the less active G5 V star was in front. A possible periodicity in the X-ray flare-like events was also found.

Relevance:

80.00%

Publisher:

Abstract:

The interface between Au(hkl) basal planes and the ionic liquid 1-ethyl-2,3-dimethylimidazolium bis(trifluoromethylsulfonyl)imide was investigated using both cyclic voltammetry and the laser-induced temperature jump technique. Cyclic voltammetry showed characteristic features revealing surface-sensitive processes at the Au(hkl)/[Emmim][Tf2N] interfaces. From laser-induced heating, the potential of maximum entropy (pme) is determined. The pme is close to the potential of zero charge (pzc) and, therefore, the technique provides relevant interfacial information. The following order of the pme values has been found: Au(111) > Au(100) > Au(110). This order correlates well with work function data and with values of the pzc in aqueous solutions.

Relevance:

80.00%

Publisher:

Abstract:

This project investigates three subareas of Artificial Intelligence and their applications in educational contexts. The three areas are: 1) automated conversational agents that act as virtual instructors or automated tutoring systems; 2) virtual assistants that carry out a given task under the instruction of an advanced learner; and 3) chatbot programming platforms as an educational tool for teaching basic computer science concepts. The hypothesis of this project is that both automated tutors and conversational assistants must include a rich contextual representation that captures what the learner has understood so far, and must be able to reason over that representation in order to better guide learning. The goals of the project include developing contextual inference algorithms suitable for virtual instructors and assistants, developing algorithms for simplified chatbot programming, evaluating these algorithms in pilot studies in schools, and running a massive open online course for secondary-school students in the Conectar Igualdad programme who want to learn about Artificial Intelligence and Computer Science. The method involves collecting corpora (human-human tutor-learner interactions) and applying natural language processing techniques such as generation by selection and interpretation based on clustering and maximum entropy models, using syntactic, semantic and pragmatic features. The algorithms will be developed following a standard software engineering methodology and evaluated in pilot studies in secondary schools as well as in a massive open online course. In addition, a teacher-training course will be offered to help teachers incorporate the resulting technologies into their classes.
The expected outcome is a contribution to the area of Artificial Intelligence applied to education, with algorithms evaluated empirically in real secondary-level educational settings, as well as a contribution to computer science teaching methodologies at the secondary level. The project is relevant to the national and global shortage of people trained in computer science, and to the worldwide growth of Artificial Intelligence in general, and of dialogue systems (conversational interfaces) in particular, driven by the exponential spread of technology in everyday life.

Relevance:

80.00%

Publisher:

Abstract:

Species distribution models (SDM) predict species occurrence from statistical relationships with environmental conditions. The R package biomod2, which includes 10 different SDM techniques and 10 different evaluation methods, was used in this study. Macroalgae are the main biomass producers in Potter Cove, King George Island (Isla 25 de Mayo), Antarctica, and they are sensitive to climate change factors such as suspended particulate matter (SPM). Macroalgae presence and absence data were used to test the suitability of SDMs and, simultaneously, to assess the environmental response of macroalgae as well as to model four scenarios of distribution shifts under SPM conditions varying due to climate change. According to the evaluation scores averaged over models, the Relative Operating Characteristic (ROC) and the True Skill Statistic (TSS), the methods based on ensembles of decision trees, Random Forest and Classification Tree Analysis, reached the highest predictive power, followed by generalized boosted models (GBM) and maximum entropy approaches (Maxent). The final ensemble model used 135 of the 200 calculated models (TSS > 0.7) and identified hard substrate and SPM as the most influential parameters, followed by distance to glacier, total organic carbon (TOC), bathymetry and slope. The climate change scenarios show an expansion of the macroalgae where SPM decreases and a retreat of the macroalgae where higher SPM values are assumed.
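The True Skill Statistic used for model selection (TSS > 0.7 above) is computed from the presence/absence confusion matrix as sensitivity + specificity − 1. A minimal sketch (function name hypothetical):

```python
import numpy as np

def true_skill_statistic(observed, predicted):
    """TSS = sensitivity + specificity - 1 for binary presence/absence data."""
    observed = np.asarray(observed, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    tp = np.sum(observed & predicted)    # presences correctly predicted
    tn = np.sum(~observed & ~predicted)  # absences correctly predicted
    fp = np.sum(~observed & predicted)
    fn = np.sum(observed & ~predicted)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0
```

TSS ranges from −1 to +1, with 0 for a model no better than chance; unlike overall accuracy it is insensitive to species prevalence, which is why it is a common SDM evaluation score.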

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this work was to model lung cancer mortality as a function of past exposure to tobacco and to forecast age- and sex-specific lung cancer mortality rates. A 3-factor age-period-cohort (APC) model, in which the period variable is replaced by the product of average tar content and adult tobacco consumption per capita, was estimated for the US, UK, Canada and Australia by the maximum likelihood method. Age- and sex-specific tobacco consumption was estimated from historical data on smoking prevalence and total tobacco consumption. Lung cancer mortality was derived from vital registration records. Future tobacco consumption, tar content and the cohort parameter were projected by autoregressive integrated moving average (ARIMA) estimation. The optimal exposure variable was found to be the product of average tar content and adult cigarette consumption per capita, lagged 25-30 years for both males and females in all 4 countries. The coefficient of the product of average tar content and tobacco consumption per capita differs by age and sex. In all models, there was a statistically significant difference in the coefficient of the period variable by sex. In all countries, male age-standardized lung cancer mortality rates peaked in the 1980s and declined thereafter. Female mortality rates are projected to peak in the first decade of this century. The multiplicative models of age, tobacco exposure and cohort fit the observed data between 1950 and 1999 reasonably well, and the time-series models yield plausible past trends of the relevant variables. Despite a significant reduction in tobacco consumption and in the average tar content of cigarettes sold over the past few decades, the effect on lung cancer mortality is delayed by the time lag between exposure and established disease. As a result, the burden of lung cancer among females is only just reaching, or soon will reach, its peak, but has been declining for 1 to 2 decades in men.
Future sex differences in lung cancer mortality are likely to be greater in North America than in Australia and the UK due to differences in exposure patterns between the sexes. (c) 2005 Wiley-Liss, Inc.
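The exposure variable that replaces the period effect, average tar content times adult consumption per capita lagged 25-30 years, can be constructed as in this sketch (the series and lag are illustrative, not the paper's data):

```python
import numpy as np

def lagged_exposure(tar, consumption, lag):
    """Exposure at year t = (tar content x per-capita consumption) at year t - lag."""
    product = np.asarray(tar, dtype=float) * np.asarray(consumption, dtype=float)
    out = np.full(len(product), np.nan)   # no exposure defined for the first `lag` years
    out[lag:] = product[:-lag]
    return out
```

In the paper's multiplicative APC structure, the log mortality rate is then modelled with age effects, cohort effects and this lagged exposure in place of the period term.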

Relevance:

80.00%

Publisher:

Abstract:

Objective: To determine the role of the National Mental Health Strategy in the deinstitutionalization of patients in psychiatric hospitals in Queensland. Method: Regression analysis (using the maximum likelihood method) has been applied to relevant time-series datasets on public psychiatric institutions in Queensland. In particular, data on both patients and admissions per 10 000 population are analysed in detail from 1953-54 to the present, although data are presented from 1883-84. Results: These Queensland data indicate that deinstitutionalization was a continuing process from the 1950s to the present. However, it is clear that the experience varied from period to period. For example, the fastest change (in both patients and admissions) took place in the period 1953-54 to 1973-74, followed by the period 1974-75 to 1984-85. Conclusions: In large part, the two policies associated with deinstitutionalization, namely a discharge policy ('opening the back door') and an admission policy ('closing the front door') had been implemented before the advent of the National Mental Health Strategy in January 1993. Deinstitutionalization was most rapid in the 30-year period to the early 1980s: the process continued in the 1990s, but at a much slower rate. Deinstitutionalization was, in large part, over before the Strategy was developed and implemented.

Relevance:

80.00%

Publisher:

Abstract:

We describe methods for estimating the parameters of Markovian population processes in continuous time, thus increasing their utility in modelling real biological systems. A general approach, applicable to any finite-state continuous-time Markovian model, is presented, and this is specialised to a computationally more efficient method applicable to a class of models called density-dependent Markov population processes. We illustrate the versatility of both approaches by estimating the parameters of the stochastic SIS logistic model from simulated data. This model is also fitted to data from a population of Bay checkerspot butterfly (Euphydryas editha bayensis), allowing us to assess the viability of this population. (c) 2006 Elsevier Inc. All rights reserved.
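The stochastic SIS logistic model fitted above can be simulated exactly with the Gillespie algorithm, which is the usual way to generate the kind of data used in the paper's simulation study. A minimal sketch (parameter names hypothetical):

```python
import numpy as np

def gillespie_sis(beta, mu, N, I0, t_max, rng):
    """Exact simulation of the stochastic SIS logistic model.

    Events: infection at rate beta*I*(N-I)/N, recovery at rate mu*I."""
    t, I = 0.0, I0
    times, states = [t], [I]
    while t < t_max and I > 0:
        rate_inf = beta * I * (N - I) / N
        rate_rec = mu * I
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)          # exponential holding time
        I += 1 if rng.random() < rate_inf / total else -1
        times.append(t)
        states.append(I)
    return np.array(times), np.array(states)
```

For beta > mu the process fluctuates around the quasi-stationary level N(1 − mu/beta) before eventual extinction; the transition counts and holding times of such paths are the ingredients of the continuous-time Markov likelihood the paper maximizes.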

Relevance:

80.00%

Publisher:

Abstract:

A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified.
In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.

Relevance:

80.00%

Publisher:

Abstract:

WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection oriented and it can provide very high data rates, large service coverage, and flexible quality of services (QoS). Due to the large number of connections and flexible QoS supported by WiMAX, the uplink access in WiMAX networks is very challenging since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM for the performance evaluation is that MEAM can efficiently model a large-scale system in which the number of stations or connections is generally very high, while the traditional simulation and analytical (e.g., Markov models) approaches cannot perform well due to the high computation complexity. We model the bandwidth management scheme as a queuing network model (QNM) that consists of interacting multiclass queues for different service classes. Closed form expressions for the state and blocking probability distributions are derived for those schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.
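The maximum entropy ingredient of MEAM can be shown in its simplest single-queue form: with only a finite buffer and a known mean queue length as constraints, the maximum entropy queue-length distribution is a truncated geometric, from which a blocking probability follows. A toy sketch (names hypothetical; not the multiclass QNM of the paper):

```python
import numpy as np

def me_queue_distribution(mean_len, buffer_size, iters=200):
    """Maximum entropy queue-length distribution on {0..buffer_size}
    given a mean-length constraint: p(n) ~ rho**n, a truncated geometric."""
    n = np.arange(buffer_size + 1)

    def mean_of(rho):
        w = rho ** n                 # beware overflow for very large buffers
        return float((n * w).sum() / w.sum())

    lo, hi = 1e-9, 1e6               # mean_of is increasing in rho
    for _ in range(iters):
        mid = np.sqrt(lo * hi)       # bisect on a log scale
        if mean_of(mid) < mean_len:
            lo = mid
        else:
            hi = mid
    w = np.sqrt(lo * hi) ** n
    return w / w.sum()
```

For an unbounded buffer this reduces to the geometric distribution of the M/M/1 queue; the MEAM of the paper imposes analogous constraints (utilization, mean lengths, blocking) per service class and derives closed-form state and blocking probabilities from them.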

Relevance:

80.00%

Publisher:

Abstract:

Sentiment analysis has long focused on binary classification of text as either positive or negative. There has been little work on mapping sentiments or emotions into multiple dimensions. This paper studies a Bayesian modeling approach to multi-class sentiment classification and multidimensional sentiment distribution prediction. It proposes effective mechanisms to incorporate supervised information, such as labeled feature constraints and document-level sentiment distributions derived from the training data, into model learning. We have evaluated our approach on datasets collected from the confession section of the Experience Project website, where people share their life experiences and personal stories. Our results show that using the latent representation of the training documents derived from our approach as features to build a maximum entropy classifier outperforms other approaches on multi-class sentiment classification. In the more difficult task of multi-dimensional sentiment distribution prediction, our approach gives superior performance compared to a few competitive baselines. © 2012 ACM.
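The maximum entropy classifier used in the final step is multinomial logistic (softmax) regression over the derived features. A self-contained sketch on toy cue-word counts (all data hypothetical, and far simpler than the paper's latent representations):

```python
import numpy as np

def train_maxent(X, y, n_classes, lr=0.2, epochs=5000):
    """Multinomial logistic regression (a maximum entropy classifier),
    trained by batch gradient descent on the cross-entropy loss."""
    W = np.zeros((X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (probs - onehot) / len(y)    # gradient of mean cross-entropy
    return W

def predict(W, X):
    return np.argmax(X @ W, axis=1)

# toy documents as [bias, positive-word count, negative-word count]
X = np.array([[1, 3, 0], [1, 4, 1], [1, 2, 0],   # class 0: positive
              [1, 0, 3], [1, 1, 4], [1, 0, 2],   # class 1: negative
              [1, 0, 0], [1, 1, 1], [1, 0, 0]],  # class 2: neutral
             dtype=float)
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
W = train_maxent(X, y, 3)
```

The "maximum entropy" name comes from the dual view: among all distributions consistent with the feature expectations in the training data, this model is the one with maximum entropy.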