964 results for Distribution (Probability theory)
Abstract:
Most large dynamical systems are thought to have ergodic dynamics, whereas small systems may not have free interchange of energy between degrees of freedom. This assumption is made in many areas of chemistry and physics, ranging from nuclei to reacting molecules and on to quantum dots. We examine the transition to facile vibrational energy flow in a large set of organic molecules as molecular size is increased. Both analytical and computational results based on local random matrix models describe the transition to unrestricted vibrational energy flow in these molecules. In particular, the models connect the number of states participating in intramolecular energy flow to simple molecular properties such as the molecular size and the distribution of vibrational frequencies. The transition itself is governed by a local anharmonic coupling strength and a local state density. The theoretical results for the transition characteristics compare well with those implied by experimental measurements using IR fluorescence spectroscopy of dilution factors reported by Stewart and McDonald [Stewart, G. M. & McDonald, J. D. (1983) J. Chem. Phys. 78, 3907–3915].
Abstract:
Visual classification is the way we relate to different images in our environment as if they were the same, while relating differently to other collections of stimuli (e.g., human vs. animal faces). It is still not clear, however, how the brain forms such classes, especially when introduced with new or changing environments. To isolate a perception-based mechanism underlying class representation, we studied unsupervised classification of an incoming stream of simple images. Classification patterns were clearly affected by stimulus frequency distribution, although subjects were unaware of this distribution. There was a common bias to locate class centers near the most frequent stimuli and their boundaries near the least frequent stimuli. Responses were also faster for more frequent stimuli. Using a minimal, biologically based neural-network model, we demonstrate that a simple, self-organizing representation mechanism based on overlapping tuning curves and slow Hebbian learning suffices to ensure classification. Combined behavioral and theoretical results predict large tuning overlap, implicating posterior infero-temporal cortex as a possible site of classification.
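The representation mechanism described above can be sketched minimally. All names and parameter values here are illustrative, not taken from the study: model units with overlapping Gaussian tuning curves respond to a one-dimensional stimulus stream, and a slow Hebbian-style running average lets units tuned near frequent stimuli accumulate stronger weights, reproducing the frequency bias qualitatively.

```python
import numpy as np

def hebbian_tuning_sketch(stimuli, centers, sigma=0.3, eta=0.01):
    """Slow Hebbian-style accumulation over overlapping Gaussian tuning curves.

    stimuli : 1-D array of stimulus values presented over time
    centers : preferred stimuli of the model units
    Returns one learned weight per unit (larger near frequent stimuli).
    """
    w = np.zeros_like(centers, dtype=float)
    for s in stimuli:
        # Broadly overlapping tuning-curve responses to the current stimulus.
        r = np.exp(-(s - centers) ** 2 / (2 * sigma ** 2))
        # Slow running average: weights track the mean response of each unit.
        w += eta * (r - w)
    return w

# Nonuniform stimulus stream: values near 0.2 are far more frequent than near 0.8.
rng = np.random.default_rng(0)
stimuli = np.concatenate([rng.normal(0.2, 0.05, 900), rng.normal(0.8, 0.05, 100)])
rng.shuffle(stimuli)
centers = np.linspace(0, 1, 11)
w = hebbian_tuning_sketch(stimuli, centers)
# Units tuned near the frequent stimulus (center ~0.2) end up with larger
# weights than units tuned near the rare one (center ~0.8).
bias_toward_frequent = w[2] > w[8]
```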
Abstract:
The discovery that the epsilon 4 allele of the apolipoprotein E (apoE) gene is a putative risk factor for Alzheimer disease (AD) in the general population has highlighted the role of genetic influences in this extremely common and disabling illness. It has long been recognized that another genetic abnormality, trisomy 21 (Down syndrome), is associated with early and severe development of AD neuropathological lesions. It remains a challenge, however, to understand how these facts relate to the pathological changes in the brains of AD patients. We used computerized image analysis to examine the size distribution of one of the characteristic neuropathological lesions in AD, deposits of A beta peptide in senile plaques (SPs). Surprisingly, we find that a log-normal distribution fits the SP size distribution quite well, motivating a porous model of SP morphogenesis. We then analyzed SP size distribution curves in genotypically defined subgroups of AD patients. The data demonstrate that both apoE epsilon 4/AD and trisomy 21/AD lead to increased amyloid deposition, but by apparently different mechanisms. The size distribution curve is shifted toward larger plaques in trisomy 21/AD, probably reflecting increased A beta production. In apoE epsilon 4/AD, the size distribution is unchanged but the number of SP is increased compared to apoE epsilon 3, suggesting increased probability of SP initiation. These results demonstrate that subgroups of AD patients defined on the basis of molecular characteristics have quantitatively different neuropathological phenotypes.
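The log-normal fit reported above can be illustrated with standard tools. The data here are synthetic stand-ins, not the study's measurements: `scipy.stats.lognorm` fits plaque-size samples, and the fitted shape and median then summarize a size-distribution curve of the kind compared across genotype subgroups.

```python
import numpy as np
from scipy import stats

# Synthetic "plaque sizes": log-normal with log-mean 3.0 and log-sd 0.5
# (units are arbitrary; real SP areas would come from image analysis).
rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=3.0, sigma=0.5, size=5000)

# Fit a log-normal with the location pinned at 0, since sizes are positive.
# For scipy's parametrization, scale = exp(mu) (the median) and shape = sigma.
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
mu_hat, sigma_hat = np.log(scale), shape
```

A shift of the whole curve toward larger plaques (as in trisomy 21/AD) would show up as a larger fitted median, while an unchanged curve with more plaques (as in apoE epsilon 4/AD) would leave these parameters unchanged and only increase the count.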
Abstract:
In this work, a new family of distributions was proposed, which allows modeling survival data when the hazard function has unimodal and U (bathtub) shapes. Modifications of the Weibull, Fréchet, generalized half-normal, log-logistic and lognormal distributions were also considered. Taking uncensored and censored data, maximum likelihood estimators were considered for the proposed model in order to verify the flexibility of the new family. In addition, a location-scale regression model was used to assess the influence of covariates on survival times. A residual analysis based on modified deviance residuals was also carried out. Simulation studies, using different parameter settings, censoring percentages and sample sizes, were conducted with the aim of verifying the empirical distribution of the martingale-type and modified deviance residuals. To detect influential observations, local influence measures were used, which are diagnostic measures based on small perturbations of the data or of the proposed model. Situations may occur in which the assumption of independence between failure and censoring times does not hold. Thus, another objective of this work is to consider an informative censoring mechanism, based on the marginal likelihood, adopting the log-odd log-logistic Weibull distribution in the modeling. Finally, the described methodologies are applied to real data sets.
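The proposed family itself is not specified in the abstract, but the maximum-likelihood machinery it relies on can be sketched with a baseline case. All data below are synthetic, and the plain Weibull stands in for the more flexible family: the likelihood combines the density for observed failures with the survival function for right-censored times.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, t, delta):
    """Negative log-likelihood for right-censored Weibull data.

    t      : observed times (failure or censoring)
    delta  : 1 if the failure was observed, 0 if right-censored
    params : (log k, log lam), so both parameters stay positive.
    """
    k, lam = np.exp(params)
    log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    log_S = -(t / lam) ** k
    # Observed failures contribute the density, censored times the survival.
    return -np.sum(delta * log_f + (1 - delta) * log_S)

rng = np.random.default_rng(2)
true_k, true_lam = 1.5, 2.0
t_event = true_lam * rng.weibull(true_k, 2000)
c = rng.uniform(0, 6, 2000)                   # independent censoring times
t = np.minimum(t_event, c)
delta = (t_event <= c).astype(float)

res = minimize(weibull_negloglik, x0=[0.0, 0.0], args=(t, delta),
               method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
```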
Abstract:
This work presents a modular neural system, which processes spatial and temporal context information separately, for the task of temporal sequence reproduction. For the development of the neural system, recurrent neural networks, stochastic models, modular neural systems and the processing of context information were considered. Next, three models with distinct approaches to temporal sequence learning were studied: a partially recurrent neural network, an example of a modular neural system, and a stochastic model based on hidden Markov models. Building on these studies and models, this research proposes a system formed by two successive, distinct modules. A feedforward network (the spatial-context estimator module) processes spatial context, identifying the sequence to be reproduced and providing a context prototype to the second module. The latter consists of a partially recurrent network (the temporal sequence reproduction module) that learns temporal context information and reproduces at its outputs the sequence identified by the previous module. To this end, this master's research uses the Gibbs distribution at the output of the spatial-context module, so that it provides spatial-context probabilities, indicating the module's degree of certainty and enabling special procedures in cases of doubt. The neural system was tested on sets containing open and closed trajectories, with different situations of ambiguity and complexity. Two distinct situations were evaluated: (a) the system's ability to reproduce trajectories from trained starting points; and (b) the system's ability to generalize, reproducing trajectories from starting or ending points in untrained situations.
Situation (b) is a difficult problem for neural networks owing to the lack of temporal context, which is essential for sequence reproduction. Experiments were carried out comparing the performance of the proposed modular system with that of a partially recurrent network operating alone and with a modular neural system (TOTEM). The results suggest that the proposed system achieved significantly better generalization, with no deterioration in its ability to reproduce trained sequences. These results were obtained with a system simpler than TOTEM.
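The Gibbs-distribution output described in this abstract is the familiar softmax over class scores. A minimal sketch, with an illustrative temperature parameter and an illustrative ambiguity threshold (neither is from the thesis): when the top two context probabilities are close, the module can flag the case as doubtful and trigger a special procedure.

```python
import numpy as np

def gibbs_context_probs(scores, temperature=1.0):
    """Gibbs (softmax) distribution over spatial-context classes.

    Lower temperature sharpens the distribution; higher temperature spreads
    probability mass, signalling a less certain context estimate.
    """
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()                     # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

scores = [2.0, 1.8, -1.0]            # two close candidates -> ambiguous case
p = gibbs_context_probs(scores)
# Flag doubt when the two most probable contexts are nearly tied.
ambiguous = p.max() - np.sort(p)[-2] < 0.2
```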
Abstract:
In this paper, we propose a novel filter for feature selection. This filter relies on the estimation of the mutual information between features and classes. We bypass the estimation of the probability density function with the aid of the entropic-graphs approximation of Rényi entropy, and a subsequent approximation of the Shannon entropy. The complexity of this bypassing process depends not on the number of dimensions but on the number of patterns/samples, and thus the curse of dimensionality is circumvented. We show that it is then possible to outperform a greedy algorithm based on the maximal-relevance, minimal-redundancy criterion. We successfully test our method in the contexts of both image classification and microarray data classification.
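The entropic-graph Rényi estimator itself is beyond a short sketch, but the filter idea the abstract describes, ranking features by their mutual information with the class, can be illustrated with a simple histogram-based MI estimate (a stand-in for the paper's estimator; data and names are illustrative):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats.

    A simple stand-in for the entropic-graph estimator: build the joint
    histogram, then compare it with the product of its marginals.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 3000).astype(float)     # binary class labels
informative = y + rng.normal(0, 0.5, 3000)     # feature correlated with class
noise = rng.normal(0, 1.0, 3000)               # feature independent of class

# A filter ranks features by MI with the class and keeps the top-scoring ones.
mi_inf = mutual_information(informative, y)
mi_noise = mutual_information(noise, y)
```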
Abstract:
Organizational culture takes shape through the interrelation of the processes of appropriation of the philosophy, belonging, adaptation, satisfaction and leadership shared by a group. This set of categories can be recognized by means of a matrix whose structure includes subcategories, or concepts, and a set of properties observable in the internal public. This article aims to describe a study model built from Grounded Theory that allows us to understand the cultural development of organizations. The case study was carried out in a leading European company in the distribution sector.
Abstract:
Given a distribution of words (SLUNs or CLUNs) in a text written in the language L(MT), one of the mathematical distribution expressions found in the literature can be fitted to it, and some parameter of the chosen expression can then be taken as a measure of diversity. Because the fit is rarely perfect, however, it is preferable to select an index that does not postulate a regularity of distribution expressible by a simple formula. The problem can then be approached statistically, without special regard for the organization of the text. Any monotonic function can serve as an index provided that it takes its minimum value when all elements belong to the same class (that is, every individual carries the same symbol) and its maximum value when each element belongs to a different class (that is, every individual carries a distinct symbol). It should also satisfy certain further conditions: low sensitivity to the length of the text, and invariance under a certain number of selection operations on the text, which may in theory be random. The expressions offering the most advantages are those derived from Shannon–Weaver information theory. Based on them, the authors develop a theoretical study of diversity indexes to be applied to texts built in the modeling language L(MT), although nothing prevents their application to texts written in natural languages.
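The minimum and maximum properties required of such an index are exactly those of the Shannon entropy of the symbol frequencies. A minimal sketch (function name and inputs are illustrative):

```python
import math
from collections import Counter

def shannon_diversity(tokens):
    """Shannon entropy H = sum_i p_i * log2(1/p_i) over symbol frequencies.

    H = 0 when every token is the same symbol (minimum diversity), and
    H = log2(N) when all N tokens are distinct (maximum diversity).
    """
    n = len(tokens)
    counts = Counter(tokens)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

h_min = shannon_diversity(["a"] * 10)       # one class: 0.0
h_max = shannon_diversity(list(range(8)))   # all distinct: log2(8) = 3.0
```

Dividing H by log2 of the number of observed classes gives an evenness index in [0, 1], which is less sensitive to text length, one of the conditions listed above.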
Abstract:
Background: Despite the progress made on policies and programmes to strengthen primary health care teams’ response to Intimate Partner Violence (IPV), the literature shows that encounters between women exposed to IPV and health-care providers are not always satisfactory, and a number of barriers that prevent individual health-care providers from responding to IPV have been identified. We carried out a realist case study, for which we developed and tested a programme theory that seeks to explain how, why and under which circumstances a primary health care team in Spain learned to respond to IPV. Methods: A realist case study design was chosen to allow for an in-depth exploration of the linkages between context, intervention, mechanisms and outcomes as they happen in their natural setting. The first author collected data at the primary health care center La Virgen (pseudonym) through the review of documents, observation and interviews with health systems’ managers, team members, women patients, and members of external services. The quality of the IPV case management was assessed with the PREMIS tool. Results: This study found that the health care team at La Virgen has managed 1) to engage a number of staff members in actively responding to IPV, 2) to establish good coordination, mutual support and continuous learning processes related to IPV, 3) to establish adequate internal referrals within La Virgen, and 4) to establish good coordination and referral systems with other services. Team- and individual-level factors have triggered the capacity and interest in creating spaces for team learning, team work and therapeutic responses to IPV in La Virgen, although individual motivation strongly affected this mechanism. Regional interventions did not trigger individual and/or team responses but legitimated the work of motivated professionals.
Conclusions: The primary health care team of La Virgen is involved in a continuous learning process, even as participation in the process varies between professionals. This process has been supported, but not caused, by a favourable policy for integration of a health care response to IPV. Specific contextual factors of La Virgen facilitated the uptake of the policy. To some extent, the performance of La Virgen has the potential to shape the IPV learning processes of other primary health care teams in Murcia.
Abstract:
For non-negative random variables with finite means we introduce an analogue of the equilibrium residual-lifetime distribution based on the quantile function. This allows us to construct new distributions with support (0, 1), and to obtain a new quantile-based version of the probabilistic generalization of Taylor's theorem. Similarly, for pairs of stochastically ordered random variables we obtain a new quantile-based form of the probabilistic mean value theorem. The latter involves a distribution that generalizes the Lorenz curve. We investigate the special case of proportional quantile functions and apply the given results to various models based on classes of distributions and measures of risk theory. Motivated by some stochastic comparisons, we also introduce the “expected reversed proportional shortfall order”, and a new characterization of random lifetimes involving the reversed hazard rate function.
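For reference, the classical construction that the quantile-based analogue generalizes is standard: for a non-negative random variable $X$ with distribution function $F$, survival function $\bar{F} = 1 - F$ and finite mean $E[X]$, the equilibrium residual-lifetime distribution has density

```latex
f_e(x) = \frac{\bar{F}(x)}{E[X]}, \qquad x \ge 0,
```

and the quantile function is $Q(u) = \inf\{x : F(x) \ge u\}$ for $u \in (0,1)$. The abstract's construction builds the analogous object on the quantile scale, which is what yields distributions supported on $(0,1)$; the precise definition is given in the paper itself.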
Abstract:
The aim of this paper is twofold. First, we present an up-to-date assessment of the differences across euro area countries in the distributions of various measures of debt conditional on household characteristics. We consider three different outcomes: the probability of holding debt, the amount of debt held and, in the case of secured debt, the interest rate paid on the main mortgage. Second, we examine the role of legal and economic institutions in accounting for these differences. We use data from the first wave of a new survey of household finances, the Household Finance and Consumption Survey, to achieve these aims. We find that the patterns of secured and unsecured debt outcomes vary markedly across countries. Among all the institutions considered, the length of asset repossession periods best accounts for the features of the distribution of secured debt. In countries with longer repossession periods, the fraction of people who borrow is smaller, the youngest group of households borrow lower amounts (conditional on borrowing), and the mortgage interest rates paid by low-income households are higher. Regulatory loan-to-value ratios, the taxation of mortgages and the prevalence of interest-only or fixed-rate mortgages deliver less robust results.
Abstract:
Blue whiting (Micromesistius poutassou, http://www.marinespecies.org/aphia.php?p=taxdetails&id=126439) is a small mesopelagic planktivorous gadoid found throughout the North-East Atlantic. This dataset contains the results of a model-based analysis of larvae captured by the Continuous Plankton Recorder (CPR) during the period 1951-2005. The observations are analysed using Generalised Additive Models (GAMs) of the spatial, seasonal and interannual variation in the occurrence of larvae. The best fitting model is chosen using the Akaike Information Criterion (AIC). The probability of occurrence in the continuous plankton recorder is then normalised and converted to a probability distribution function in space (UTM projection Zone 28) and season (day of year). The best fitting model splits the distribution into two separate spawning grounds north and south of a dividing line at 53 N. The probability distribution is therefore normalised in these two regions (i.e. the space-time integral over each of the two regions is 1). The modelled outputs are on a UTM Zone 28 grid; however, for convenience, the latitude ("lat") and longitude ("lon") of each of these grid points are also included as variables in the NetCDF file. The assignment of each grid point to either the Northern or Southern component (defined here as north/south of 53 N) is also included as a further variable ("component"). Finally, the day of year ("doy") is stored as the number of days elapsed from and including January 1 (i.e. doy=1 on January 1); the year is thereafter divided into 180 grid points.
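The two-region normalisation described above can be sketched as follows. The grid and occurrence values here are synthetic placeholders, not the dataset's contents: each component's probability field is divided by its own sum, so the mass over each region's space-time grid is 1.

```python
import numpy as np

# Synthetic occurrence probabilities on a latitude x day-of-year grid.
rng = np.random.default_rng(4)
lat = np.linspace(44, 62, 10)                 # grid-point latitudes
p = rng.random((10, 180))                     # occurrence, dims: lat x doy

# Assign each grid row to the Northern or Southern component (split at 53 N),
# then normalise each component so its own grid cells sum to 1.
component = np.where(lat >= 53, "north", "south")
pdf = p.copy()
for comp in ("north", "south"):
    mask = component == comp
    pdf[mask] /= pdf[mask].sum()

north_total = pdf[component == "north"].sum()
south_total = pdf[component == "south"].sum()
```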
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
v. 1. Wholesale distribution function.--v. 2. Administrative management, the role of the chief executive.--v. 3. Financial management.--v. 4. Marketing management.--v. 5. Inventory control, theory and practice.--v. 6. Applied management techniques.
Abstract:
Mode of access: Internet.