171 results for statistical speaker models
Abstract:
Impulsivity based on Gray's [Gray, J. A. (1982). The neuropsychology of anxiety: an enquiry into the functions of the septo-hippocampal system. New York: Oxford University Press; Gray, J. A. (1991). The neuropsychology of temperament. In J. Strelau & A. Angleitner (Eds.), Explorations in temperament: international perspectives on theory and measurement. London: Plenum Press] physiological model of personality was hypothesised to be more predictive of goal-oriented criteria within the workplace than scales derived from Eysenck's [Eysenck, H. J. (1967). The biological basis of personality. Springfield, IL: Charles C. Thomas] physiological model of personality. Results confirmed the hypothesis and also showed that Gray's scale of Impulsivity was generally a better predictor than attributional style and interest in money. Results were interpreted as providing support for Gray's Behavioural Activation System, which moderates response to reward. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The Eysenck Personality Questionnaire-Revised (EPQ-R), the Eysenck Personality Profiler Short Version (EPP-S), and the Big Five Inventory (BFI-V4a) were administered to 135 postgraduate students of business in Pakistan. Whilst the Extraversion and Neuroticism scales from the three questionnaires were highly correlated, it was found that Agreeableness was most highly correlated with Psychoticism in the EPQ-R and Conscientiousness was most highly correlated with Psychoticism in the EPP-S. Principal component analyses with varimax rotation were carried out. The analyses generally suggested that the five-factor model, rather than the three-factor model, was more robust and better for interpreting all the higher-order scales of the EPQ-R, EPP-S, and BFI-V4a in the Pakistani data. Results show that the superiority of the five-factor solution stems from the inclusion of a broader variety of personality scales in the input data, whereas Eysenck's three-factor solution seems best when a less complete but possibly more important set of variables is input. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The effects of thermodynamic non-ideality on the forms of sedimentation equilibrium distributions for several isoelectric proteins have been analysed on the statistical-mechanical basis of excluded volume to obtain an estimate of the extent of protein solvation. Values of the effective solvation parameter delta are reported for ellipsoidal as well as spherical models of the proteins, taken to be rigid, impenetrable macromolecular structures. The dependence of the effective solvated radius upon protein molecular mass exhibits reasonable agreement with the relationship calculated for a model in which the unsolvated protein molecule is surrounded by a 0.52-nm solvation shell. Although the observation that this shell thickness corresponds to a double layer of water molecules may be of questionable relevance to mechanistic interpretation of protein hydration, it augurs well for the assignment of magnitudes to the second virial coefficients of putative complexes in the quantitative characterization of protein-protein interactions under conditions where effects of thermodynamic non-ideality cannot justifiably be neglected. (C) 2001 Elsevier Science B.V. All rights reserved.
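As a rough numerical illustration of the solvated-radius relationship described above, the sketch below computes a spherical solvated radius from molecular mass, assuming a partial specific volume of 0.73 mL/g (a typical value for globular proteins; the value, the example masses, and the function name are illustrative and not taken from the paper):

```python
import math

AVOGADRO = 6.022e23   # molecules per mol
SHELL_NM = 0.52       # solvation-shell thickness from the abstract, nm

def solvated_radius_nm(molar_mass_g_mol, vbar_ml_g=0.73):
    """Radius of a spherical, unsolvated protein plus a 0.52-nm shell.

    vbar_ml_g is an assumed partial specific volume (not from the paper).
    """
    # volume of one molecule in cm^3, then radius in cm -> nm
    vol_cm3 = molar_mass_g_mol * vbar_ml_g / AVOGADRO
    r_cm = (3.0 * vol_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return r_cm * 1e7 + SHELL_NM   # 1 cm = 1e7 nm

for mass in (14_300, 66_500, 150_000):   # roughly lysozyme, BSA, IgG
    print(f"M = {mass:>7} g/mol  ->  solvated radius ~ {solvated_radius_nm(mass):.2f} nm")
```

The shell term matters most for small proteins, where 0.52 nm is a large fraction of the bare radius.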
Abstract:
In this paper, we look at three models (mixture, competing risk and multiplicative) involving two inverse Weibull distributions. We study the shapes of the density and failure-rate functions and discuss graphical methods to determine if a given data set can be modelled by one of these models. (C) 2001 Elsevier Science Ltd. All rights reserved.
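A minimal sketch of the densities involved, assuming the common parameterisation F(x) = exp(-(lam/x)**beta) for the inverse Weibull (the paper may use a different one; all names here are illustrative). It evaluates the density, the failure-rate function, and a two-component mixture, and shows the unimodal (upside-down bathtub) hazard shape that the shape analysis concerns:

```python
import math

def inv_weibull_pdf(x, beta, lam):
    """Inverse Weibull density under F(x) = exp(-(lam / x)**beta), x > 0."""
    return beta * lam**beta * x**(-(beta + 1)) * math.exp(-(lam / x)**beta)

def inv_weibull_cdf(x, beta, lam):
    return math.exp(-(lam / x)**beta)

def failure_rate(x, beta, lam):
    """h = f / (1 - F); unimodal for the inverse Weibull."""
    return inv_weibull_pdf(x, beta, lam) / (1.0 - inv_weibull_cdf(x, beta, lam))

def mixture_pdf(x, w, params1, params2):
    """Two-component inverse Weibull mixture density."""
    return w * inv_weibull_pdf(x, *params1) + (1.0 - w) * inv_weibull_pdf(x, *params2)

# the failure rate rises and then falls; it is never monotone
print([round(failure_rate(x, 2.0, 1.0), 4) for x in [0.5, 1.0, 2.0, 4.0, 8.0]])
```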
Abstract:
When examining a rock mass, joint sets and their orientations can play a significant role in how the rock mass will behave. To identify the joint sets present in the rock mass, the orientations of individual fracture planes can be measured on exposed rock faces and the resulting data can be examined for heterogeneity. In this article, the expectation-maximization algorithm is used to fit mixtures of Kent component distributions to the fracture data to aid in the identification of joint sets. An additional uniform component is also included in the model to accommodate the noise present in the data.
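The E-step of such a mixture-plus-uniform model can be sketched as follows. For brevity this uses the simpler von Mises-Fisher density as a stand-in for the full Kent density (which adds an ovalness parameter), and the weights, mean poles, and concentrations are made up:

```python
import math

def vmf_pdf(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere, a simpler stand-in
    for the Kent density used in the paper."""
    c = kappa / (4.0 * math.pi * math.sinh(kappa))
    return c * math.exp(kappa * sum(a * b for a, b in zip(x, mu)))

UNIFORM = 1.0 / (4.0 * math.pi)   # uniform noise component on the sphere

def responsibilities(x, weights, components):
    """E-step: posterior probability that pole x came from each component;
    the last weight belongs to the uniform noise component."""
    dens = [vmf_pdf(x, mu, k) for mu, k in components] + [UNIFORM]
    num = [w * d for w, d in zip(weights, dens)]
    total = sum(num)
    return [n / total for n in num]

# made-up example: two joint sets (polar and equatorial mean poles)
# plus 10% uniform noise; a polar observation goes mostly to the first set
comps = [((0.0, 0.0, 1.0), 10.0), ((1.0, 0.0, 0.0), 10.0)]
r = responsibilities((0.0, 0.0, 1.0), [0.45, 0.45, 0.10], comps)
print([round(v, 3) for v in r])
```

The M-step (not shown) would re-estimate weights and component parameters from these responsibilities, and observations with high uniform-component responsibility are treated as noise.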
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
Abstract:
We solve the Sp(N) Heisenberg and SU(N) Hubbard-Heisenberg models on the anisotropic triangular lattice in the large-N limit. These two models may describe, respectively, the magnetic and electronic properties of the family of layered organic materials kappa-(BEDT-TTF)(2)X. The Heisenberg model is also relevant to the frustrated antiferromagnet Cs2CuCl4. We find rich phase diagrams for each model. The Sp(N) antiferromagnet is shown to have five different phases as a function of the size of the spin and the degree of anisotropy of the triangular lattice. The effects of fluctuations at finite N are also discussed. For parameters relevant to Cs2CuCl4, the ground state either exhibits incommensurate spin order or is in a quantum disordered phase with deconfined spin-1/2 excitations and topological order. The SU(N) Hubbard-Heisenberg model exhibits an insulating dimer phase, an insulating box phase, a semi-metallic staggered flux phase (SFP), and a metallic uniform phase. The uniform and SFP phases exhibit a pseudogap. A metal-insulator transition occurs at intermediate values of the interaction strength.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
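The isolation step, projecting the residual onto each class's feature subspace and attributing the error to the class with the smallest unexplained part, can be sketched with toy, hypothetical subspaces (the paper derives the actual feature matrices from the model structure, and an observer generates the residuals):

```python
def project_leftover(residual, basis):
    """Norm of what remains of the residual after projecting it onto
    span(basis); the basis vectors are assumed orthonormal."""
    proj = [0.0] * len(residual)
    for b in basis:
        c = sum(r * v for r, v in zip(residual, b))
        proj = [p + c * v for p, v in zip(proj, b)]
    return sum((r - p) ** 2 for r, p in zip(residual, proj)) ** 0.5

def isolate_error(residual, subspaces):
    """Attribute the residual to the error class whose feature subspace
    leaves the smallest unexplained residual."""
    scores = [project_leftover(residual, B) for B in subspaces]
    return min(range(len(scores)), key=scores.__getitem__)

# toy feature subspaces for two hypothetical error classes
F1 = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # class 0 spans the first two axes
F2 = [(0.0, 0.0, 1.0)]                    # class 1 spans the third axis
residual = (0.0, 0.01, 0.99)              # mostly along class 1's subspace
print("isolated class:", isolate_error(residual, [F1, F2]))
```

In practice the feature matrices are generally not orthonormal, so a least-squares projection would replace the orthonormal one used here.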
Abstract:
Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, both on the grounds of frequency response and mutual filter orthogonality.
Abstract:
Now that some of the genes involved in asthma and allergy have been identified, interest is turning to how genetic predisposition interacts with exposure to environmental risk factors. These questions are best answered by studies in which both genotypes and other risk factors are measured, but even simpler studies, in which family history is used as a proxy for genotype, have made suggestive findings. For example, early breast feeding may increase the risk of allergic disease in genetically susceptible children, and decrease the risk of 'sporadic' allergy. This review also addresses the overall importance of genetic causes of allergic disease in the general population.
Abstract:
This article reports on the results of a study undertaken by the author together with her research assistant, Heather Green. The study collected and analysed data from all disciplinary tribunal decisions heard in Queensland since 1930 in an attempt to provide empirical information which has previously been lacking. This article will outline the main features of the disciplinary system in Queensland, describe the research methodology used in the present study and then report on some findings from the study. Reported findings include a profile of solicitors who have appeared before a disciplinary hearing, the types of matters which have attracted formal discipline and the types of orders made by the tribunal. Much of the data is then presented on a time scale so as to reveal any changes over time.
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. These models assume different mass transfer mechanisms within the porous carbon particle. They are: (a) dual pore and surface diffusion (MSD), (b) macropore, surface, and micropore diffusion (MSMD), (c) macropore, surface and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore, micropore diffusion (BM) models. These models are discriminated using the single-component kinetic data of ethane and propane as well as the multicomponent kinetic data of their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is considered in all of the models. It is found that, in general, the models assuming a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P-1 and P-2), F-1, F-2, and both backcross generations (B-1 and B-2) derived from crossing the F-1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one major-gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two major-gene model with a heritability value of 0.3 and population sizes of 100 individuals.
The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought resistance-trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model. The presence of the two major genes was confirmed with the addition of an unspecified number of polygenes.
Abstract:
For the improvement of genetic material suitable for on-farm use under low-input conditions, participatory and formal plant breeding strategies are frequently presented as competing options. A common frame of reference to phrase mechanisms and purposes related to breeding strategies will facilitate clearer descriptions of similarities and differences between participatory plant breeding and formal plant breeding. In this paper an attempt is made to develop such a common framework by means of a statistically inspired language that acknowledges the importance of both on-farm trials and research centre trials as sources of information for on-farm genetic improvement. Key concepts are the genetic correlation between environments, and the heterogeneity of phenotypic and genetic variance over environments. Classic selection response theory is taken as the starting point for the comparison of selection trials (on-farm and research centre) with respect to the expected genetic improvement in a target environment (low-input farms). The variance-covariance parameters that form the input for selection response comparisons traditionally come from a mixed model fit to multi-environment trial data. In this paper we propose a recently developed class of mixed models, namely multiplicative mixed models, also called factor-analytic models, for modelling genetic variances and covariances (correlations). Multiplicative mixed models allow genetic variances and covariances to be dependent on quantitative descriptors of the environment, and confer a high flexibility in the choice of variance-covariance structure, without requiring the estimation of a prohibitively high number of parameters. As a result detailed considerations regarding selection response comparisons are facilitated. The statistical machinery involved is illustrated on an example data set consisting of barley trials from the International Center for Agricultural Research in the Dry Areas (ICARDA).
Analysis of the example data showed that participatory plant breeding and formal plant breeding are better interpreted as providing complementary rather than competing information.
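The classic selection-response comparison underlying this framework can be illustrated numerically. Under standard theory, direct response is R = i * h_T * sigma_g(target) and the correlated response from selecting in a different environment is CR = i * h_S * r_g * sigma_g(target); every parameter value below is invented purely for illustration:

```python
import math

def direct_response(i, h2_target, sg_target):
    """Expected gain from selecting directly in the target (on-farm) trials:
    R = i * h_T * sigma_g(target)."""
    return i * math.sqrt(h2_target) * sg_target

def correlated_response(i, h2_selection, r_g, sg_target):
    """Expected gain in the target environment from selecting at the research
    centre: CR = i * h_S * r_g * sigma_g(target)."""
    return i * math.sqrt(h2_selection) * r_g * sg_target

# illustrative, made-up parameters: low on-farm heritability, higher
# research-centre heritability, moderate genetic correlation between them
i, sg = 1.76, 0.4   # selection intensity (roughly top 10%), genetic SD on farm
R = direct_response(i, h2_target=0.2, sg_target=sg)
CR = correlated_response(i, h2_selection=0.6, r_g=0.7, sg_target=sg)
print(f"direct on-farm gain {R:.3f} vs indirect gain via centre {CR:.3f}")
print(f"relative efficiency CR/R = {CR / R:.2f}")
```

The ratio CR/R = r_g * h_S / h_T shows how the two information sources trade off: with a high enough genetic correlation and heritability advantage, research-centre selection complements on-farm selection rather than competing with it.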