964 results for variance component models


Relevance:

80.00%

Publisher:

Abstract:

This project involves the development of a complete procedural quest generation system for video games. By chaining algorithms and modeling the game and its components, we seek to build sequences of game actions and events that are logically linked to one another. Carrying out these sequences leads progressively to the fulfillment of a final objective; such sequences are known as quests in the video game world. The two main phases of the process are the generation of a quest from an initial game state and the search for an optimal quest using criteria that can be tied to the player's properties, giving rise to adaptive quests. The project covers the development of the entire system, including both the generation and search components and a video game in which to embed the rest of the system to complete it. The final result is fully functional and playable. The theoretical basis of the project comes from the symbiosis of two arts: procedural content generation and interactive storytelling.
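The generate-then-search split described above lends itself to a compact illustration. Below is a minimal sketch, assuming a toy action model with preconditions and effects and an invented skill-matching criterion; the action names and scoring rule are ours, not the thesis' actual implementation.

```python
# Phase 1: enumerate logically chained action sequences (quests).
# Phase 2: pick the quest that best fits a player property (here, skill).
import itertools

ACTIONS = {                       # action -> (preconditions, effects); toy model
    "find_key":  (set(),         {"has_key"}),
    "open_door": ({"has_key"},   {"door_open"}),
    "slay_boss": ({"door_open"}, {"boss_dead"}),
}

def valid_quests(initial, goal, max_len=4):
    """Yield action sequences whose chained effects reach the goal."""
    for n in range(1, max_len + 1):
        for seq in itertools.permutations(ACTIONS, n):
            state, ok = set(initial), True
            for a in seq:
                pre, eff = ACTIONS[a]
                if not pre <= state:          # precondition violated
                    ok = False
                    break
                state |= eff
            if ok and goal <= state:
                yield seq

def best_quest(initial, goal, player_skill):
    # Toy adaptivity criterion: prefer quest length close to player skill.
    return min(valid_quests(initial, goal),
               key=lambda q: abs(len(q) - player_skill))

print(best_quest(set(), {"boss_dead"}, player_skill=3))
```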

Relevance:

80.00%

Publisher:

Abstract:

INTRODUCTION: Increased arterial stiffness is an important determinant of cardiovascular risk and a strong predictor of morbidity and mortality. Moreover, studies show that vascular stiffening may be associated with genetic and metabolic factors. Therefore, the objectives of the present study were to determine the heritability of pulse wave velocity (PWV) and to evaluate the association of the lipid profile and of glycemic control with the arterial stiffness phenotype in a Brazilian population. METHODS: 1675 individuals (of both sexes, aged 18 to 102 years) distributed across 109 families residing in the municipality of Baependi-MG were selected. Carotid-femoral PWV was assessed non-invasively with an automatic device. Lipid variables and fasting glucose were determined by the enzymatic colorimetric method. Glycated hemoglobin (HbA1c) levels were determined by high-performance liquid chromatography. Heritability estimates for PWV were calculated using the variance components methodology implemented in the SOLAR software. RESULTS: The estimated heritability of PWV was 26%, adjusted for age, sex, HbA1c and mean arterial pressure. HbA1c levels were associated with arterial stiffness: each one-percentage-point increase in HbA1c represented a 54% increase in the odds of increased arterial stiffness. The lipid variables (LDL-c, HDL-c, non-HDL cholesterol, total cholesterol and triglycerides) showed weak correlations with PWV. In addition, a linear regression analysis stratified by age (cutoff >= 45 years) showed an inverse relationship between LDL-c and PWV in women aged >= 45 years. CONCLUSION: The results indicate that PWV shows intermediate heritability (26%); HbA1c is strongly associated with increased arterial stiffness; and LDL-c is inversely related to PWV in women aged >= 45 years, possibly due to metabolic changes associated with ovarian failure.
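For readers unfamiliar with the quantities reported, here is a hedged sketch of how they combine. SOLAR fits a full polygenic mixed model; the function below only illustrates the heritability ratio, plus the logistic coefficient corresponding to the reported 54% odds increase. The function names are ours.

```python
import math

def heritability(var_additive, var_environmental):
    """h2 = var_A / (var_A + var_E) in a polygenic variance-components model."""
    return var_additive / (var_additive + var_environmental)

print(round(heritability(0.26, 0.74), 2))   # -> 0.26, the reported h2
# A 54% higher odds of increased stiffness per one-point rise in HbA1c
# corresponds to a logistic-regression coefficient of:
print(round(math.log(1.54), 3))             # -> 0.432
```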

Relevance:

80.00%

Publisher:

Abstract:

The economic efficiency of dairy cattle production depends on the use of animals that simultaneously perform well in production, reproduction, health and longevity. In this respect, the selection index is an important tool for increasing profitability in this system, since it allows sires to be selected for several traits simultaneously, taking into account the relationships among them as well as their economic relevance. With the recent availability of genomic data, it has also become possible to expand the scope and accuracy of selection indices by increasing the number and quality of the information considered. In this context, two studies were developed. In the first, the objective was to estimate genetic parameters and breeding values (BV) for traits related to milk production and quality, including genomic information in the genetic evaluation. Records of age at first calving (AFC), milk yield (MY), fat content (FAT), protein (PROT), lactose, casein, somatic cell score (SCS) and fatty acid profile from 4,218 cows were used, together with the genotypes of 755 cows for 57,368 single nucleotide polymorphisms (SNP). Variance components and BV were obtained with an animal mixed model including the effects of contemporary group, lactation order and days in milk, as well as additive genetic, permanent environmental and residual effects. Two approaches were compared: a traditional one, in which the relationship matrix is based on pedigree, and a genomic one, in which this matrix is built by combining pedigree and SNP information. Heritabilities ranged from 0.07 to 0.39. Genetic correlations between MY and milk components ranged from -0.45 to -0.13, whereas high positive correlations were estimated between FAT and the fatty acids. The genomic approach did not change the estimates of genetic parameters; however, the accuracy of the BV increased by 1.5% to 6.8%, except for AFC, for which there was a 1.9% reduction. In the second study, the objective was to incorporate genomic information into the development of economic selection indices. Here, the BV for MY, FAT, PROT, unsaturated fatty acid content (UNSAT), SCS and productive life were combined into selection indices weighted by economic values estimated under three payment scenarios: payment exclusively for milk volume (PAY1); for volume and milk components (PAY2); and for volume and milk components including UNSAT (PAY3). These BV were predicted from the phenotypes of 4,293 cows and the genotypes of 755 animals with a multi-trait model under both the traditional and the genomic approach. The use of genomic information influenced the variance components, the BV and the response to selection. Nevertheless, rank correlations between the approaches were high in all three scenarios, with values between 0.91 and 0.99. Differences were mainly observed between PAY1 and the other scenarios, with correlations between 0.67 and 0.88. The relative importance of the traits and the profile of the best animals were sensitive to the payment scenario considered. Taking the economic values of the traits into account in genetic evaluations and selection decisions thus proved essential.
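The genomic relationship matrix mentioned above is commonly built with VanRaden's first method; the sketch below shows that construction under assumptions of ours (simulated Hardy-Weinberg genotypes, arbitrary dimensions), not the study's actual pipeline.

```python
import numpy as np

def vanraden_G(M):
    """M: n_animals x n_snps genotype matrix coded 0/1/2 (VanRaden method 1)."""
    p = M.mean(axis=0) / 2.0                    # observed allele frequencies
    Z = M - 2.0 * p                             # centre each SNP by 2p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(0)
M = rng.binomial(2, 0.5, size=(10, 500))        # 10 animals, 500 SNPs (HWE)
G = vanraden_G(M)
print(G.shape)                                  # (10, 10)
print(round(float(np.diag(G).mean()), 2))       # diagonal averages ~1 under HWE
# Combined pedigree-genomic evaluations typically blend G with the pedigree
# matrix A, e.g. G_w = 0.95 * G + 0.05 * A
```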

Relevance:

80.00%

Publisher:

Abstract:

When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. However, there are two main formulations for variance component estimation in the mixed-model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability, as well as different tests of significance for the random-effects factor. The definition of main effects and interactions and the consequences of such definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction in the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of formulation.
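One place the formulation choice propagates is entry-mean heritability across environments, which consumes the estimated G×E component directly; differently defined interaction variances therefore yield different heritabilities. A sketch with invented numbers, not values from the paper:

```python
def entry_mean_heritability(sigma2_g, sigma2_ge, sigma2_e, n_env, n_rep):
    """Broad-sense h2 on an entry-mean basis over n_env environments, n_rep reps."""
    return sigma2_g / (sigma2_g + sigma2_ge / n_env
                       + sigma2_e / (n_env * n_rep))

# Same trial, but the two formulations apportion the G x E component
# differently (illustrative numbers only):
print(round(entry_mean_heritability(1.0, 0.8, 2.0, n_env=5, n_rep=3), 3))
print(round(entry_mean_heritability(1.0, 0.4, 2.0, n_env=5, n_rep=3), 3))
```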

Relevance:

80.00%

Publisher:

Abstract:

Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language, owing to its multi-paradigm modeling capabilities, easy-to-use graphical notations, strong international organizational support, and production-quality industrial tool support. However, the semantics of individual UML notations, as well as the relationships among multiple UML models, lack precise definition, which often introduces incompleteness and inconsistency problems in UML software designs, especially for complex systems. Furthermore, methodologies to ensure a correct implementation of a given UML design are lacking. The purpose of this investigation is to verify and validate software designs in UML, and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The framework I propose consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can be adopted to check whether UML designs are complete and consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser generates and weaves runtime monitoring code into system implementations automatically for dependability assurance. The framework is flexible: it can be used not only to verify and validate UML designs but also to build models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
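As a drastically simplified illustration of the kind of behavioral consistency check this enables, the sketch below replays an interaction-diagram scenario against a component's state machine; the modeling is ours, not the algebraic high-level net formalism itself.

```python
# Toy check: do the messages a scenario sends match what the receiving
# component's state machine can actually accept?
state_machine = {              # state -> {message: next_state}
    "Idle": {"start": "Busy"},
    "Busy": {"done":  "Idle"},
}

def check_scenario(initial_state, messages):
    """Replay an interaction-diagram scenario against the state machine."""
    state = initial_state
    for msg in messages:
        if msg not in state_machine.get(state, {}):
            return f"inconsistent: '{msg}' not accepted in state {state}"
        state = state_machine[state][msg]
    return f"consistent, final state {state}"

print(check_scenario("Idle", ["start", "done"]))   # consistent
print(check_scenario("Idle", ["done"]))            # behavioral inconsistency
```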

Relevance:

80.00%

Publisher:

Abstract:

We develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility in essay 1. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant, and specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology for decomposing return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise component. This decomposition differs from previous studies in that both the informational variance and the noise variance are time-varying, and the covariance of the informational and noisy components is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and those caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
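As a rough illustration of adaptive (rather than deterministic) seasonal filtering, the sketch below lets each intraday bin's seasonal level evolve recursively across days; the update rule and parameters are our assumptions, not the essay's estimator.

```python
import numpy as np

def adaptive_seasonal_filter(abs_returns, lam=0.9):
    """abs_returns: (n_days, n_bins) absolute returns; returns filtered series."""
    s = abs_returns[0].copy()                        # initialise seasonal level
    out = np.empty_like(abs_returns)
    for d in range(abs_returns.shape[0]):
        out[d] = abs_returns[d] / np.where(s > 0, s, 1.0)
        s = lam * s + (1.0 - lam) * abs_returns[d]   # recursive update
    return out

rng = np.random.default_rng(1)
u_shape = 1.0 + 0.5 * np.cos(np.linspace(0.0, 2.0 * np.pi, 48))  # high open/close
r = np.abs(rng.normal(size=(250, 48))) * u_shape
f = adaptive_seasonal_filter(r)
print(r.mean(axis=0).round(2)[[0, 24]])   # pronounced U before filtering
print(f.mean(axis=0).round(2)[[0, 24]])   # roughly flat afterwards
```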

Relevance:

80.00%

Publisher:

Abstract:

1. The niche variation hypothesis predicts that among-individual variation in niche use will increase in the presence of intraspecific competition and decrease in the presence of interspecific competition. We sought to determine whether the local isotopic niche breadth of fish inhabiting a wetland was best explained by competition for resources and the niche variation hypothesis, by dispersal of individuals from locations with different prey resources, or by a combination of the two. We analysed stable isotopes of carbon and nitrogen as indices of feeding niche and compared metrics of within-site spread to characterise site-level isotopic niche breadth. We then evaluated the explanatory power of competing models of the direct and indirect effects of several environmental variables spanning gradients of disturbance, competition strength and food availability on among-individual variation of the eastern mosquitofish (Gambusia holbrooki).

2. The Dispersal model posits that only the direct effect of disturbance (i.e. changes in water level known to induce fish movement) influences among-individual variation in isotopic niche. The Partitioning model allows only direct effects of local food availability on among-individual variation. The Combined model allows for both hypotheses by including the direct effects of disturbance and food availability.

3. A linear regression of the Combined model described more variance than models limited to the variables of either the Dispersal or the Partitioning model (see the sketch after this list). Of the independent variables considered, the food availability variable (per cent edible periphyton) explained the most variation in isotopic niche breadth, followed closely by the disturbance variable (days since last drying event).

4. Structural equation modelling provided further evidence that the Combined model was best supported by the data, with the Partitioning and Dispersal models only modestly less informative. Again, per cent edible periphyton was the variable with the largest direct effect on niche variability, with other food availability variables and the disturbance variable only slightly less important. Indirect effects of heterospecific and conspecific competitor densities were also important, through their effects on prey density.

5. Our results support the Combined hypothesis, although partitioning mechanisms appear to explain most of the diet variation among individual eastern mosquitofish. The results also support some predictions of the niche variation hypothesis, although both conspecific and interspecific competition appeared to increase isotopic niche breadth, in contrast to the prediction that interspecific competition would decrease it. We think this resulted from the high diet overlap of co-occurring species, most of which consume similar macroinvertebrates.
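A sketch of the regression comparison in points 2-3, using the abstract's variable names on simulated placeholder data (the real analysis used field measurements and more covariates):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
days_since_drying = rng.uniform(10, 400, n)          # disturbance
edible_periphyton = rng.uniform(0, 100, n)           # food availability
niche_breadth = (0.01 * edible_periphyton + 0.002 * days_since_drying
                 + rng.normal(0, 0.3, n))

def fit_r2(predictors, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("Dispersal    R2:", round(fit_r2([days_since_drying], niche_breadth), 3))
print("Partitioning R2:", round(fit_r2([edible_periphyton], niche_breadth), 3))
print("Combined     R2:", round(fit_r2([days_since_drying,
                                        edible_periphyton], niche_breadth), 3))
```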

Relevance:

80.00%

Publisher:

Abstract:

Activity of 7-ethoxyresorufin-O-deethylase (EROD) in fish is certainly the best-studied biomarker of exposure applied in the field to evaluate biological effects of contamination in the marine environment. Since 1991, a feasibility study for a monitoring network using this biomarker of exposure has been conducted along the French coasts. Using data obtained during several cruises, this study aims to determine the number of fish required to detect a given difference between two mean EROD activities, i.e. to achieve an a priori fixed statistical power (1-beta) for a given significance level (alpha), variance estimates and projected ratio of unequal sample sizes (k). Mean EROD activity and standard error were estimated at each of 82 sampling stations. The inter-individual variance component was dominant in estimating the variance of mean EROD activity. The influences of alpha, beta, k and variability on sample sizes are illustrated and discussed in terms of costs. In particular, sample sizes do not have to be equal, especially if such a requirement would lead to a significant cost in sampling extra material. Finally, the feasibility of long-term monitoring is discussed.
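The sample-size question has a standard two-sample normal-approximation answer, sketched below with illustrative inputs (not the paper's estimates):

```python
# n1 = (1 + 1/k) * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2,
# with the second group sized n2 = k * n1.
from math import ceil
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.80, k=1.0):
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n1 = (1 + 1 / k) * (sigma * z / delta) ** 2
    return ceil(n1), ceil(k * ceil(n1))

# e.g. detect a difference of 0.25 units when the inter-individual SD is 1:
print(sample_size(delta=0.25, sigma=1.0))          # equal group sizes
print(sample_size(delta=0.25, sigma=1.0, k=2.0))   # unequal sizes, k = 2
```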

Relevance:

80.00%

Publisher:

Abstract:

Every construction process (whether for buildings, machines, software, etc.) first requires making a model of the artifact that is going to be built. This model should be based on a paradigm or meta-model, which defines the basic modeling elements: which real-world concepts can be represented, which relationships can be established among them, and so on. There should also be a language to represent, manipulate and think about that model. Usually the model must be refined at various levels of abstraction, so both the paradigm and the language must have abstraction capacity. In this paper I characterize the relationships that exist between these concepts: model, language and abstraction. I also analyze some historical models, like the relational model for databases, the imperative programming model and the object-oriented model. Finally, I remark on the need to teach this model-driven approach to students, and even to go further, to higher-level models like component models or business models.

Relevance:

40.00%

Publisher:

Abstract:

The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant use. After acquisition of the hyperspectral images, independent component analysis (ICA) was applied to extract the pure spectra and the distribution maps of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with reference spectra using correlation coefficients, and by assessing the rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve resolution method, achieving performance equivalent to multivariate curve resolution with alternating least squares (MCR-ALS). At low concentrations, however, MCR-ALS shows some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm⁻².
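A sketch of the ICA step on a toy unfolded hyperspectral cube, with scikit-learn's FastICA standing in for the authors' implementation and the recovered spectra matched to references by correlation; all data here are simulated placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
h, w, n_wl = 20, 20, 200
refs = np.abs(rng.normal(size=(2, n_wl)))            # toy reference spectra
conc = np.abs(rng.normal(size=(h * w, 2)))           # per-pixel abundances
cube = conc @ refs + 0.01 * rng.normal(size=(h * w, n_wl))  # unfolded cube

ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(cube)                     # pixel distribution maps
spectra = ica.mixing_.T                              # recovered pure spectra

# Match recovered spectra to references by absolute correlation:
corr = np.corrcoef(np.vstack([spectra, refs]))[:2, 2:]
print("|correlation| with references:\n", np.abs(corr).round(2))
```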

Relevance:

40.00%

Publisher:

Abstract:

In one-component Abelian sandpile models, the toppling probabilities are independent quantities. This is not the case in multicomponent models. The condition of associativity of the underlying Abelian algebras imposes nonlinear relations among the toppling probabilities. These relations are derived for the case of two-component quadratic Abelian algebras. We show that Abelian sandpile models with two conservation laws have only trivial avalanches.
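For contrast with the multicomponent case, here is a toy one-component (BTW-style) sandpile in which topplings commute freely and no associativity constraints arise; the grid size and drive are arbitrary choices of ours.

```python
import numpy as np

def relax(grid, threshold=4):
    """Topple until stable; the toppling order does not matter (Abelian)."""
    topplings = 0
    while True:
        unstable = np.argwhere(grid >= threshold)
        if len(unstable) == 0:
            return topplings
        for i, j in unstable:
            grid[i, j] -= 4
            topplings += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1       # grains falling off the edge are lost

rng = np.random.default_rng(4)
grid = rng.integers(0, 4, size=(16, 16))
grid[8, 8] += 4                             # drive: drop grains at the centre
print("avalanche size:", relax(grid))
```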

Relevance:

40.00%

Publisher:

Abstract:

This thesis investigates the extremal properties of certain risk models of interest in various applications from insurance, finance and statistics. It develops along two principal lines. In the first part, we focus on two univariate risk models, namely deflated risk and reinsurance risk models, and investigate their tail expansions under certain tail conditions on the common risks. The main results are illustrated by typical examples and numerical simulations, and the findings are then formulated into applications in insurance, for instance approximations of Value-at-Risk and conditional tail expectations. The second part of the thesis is devoted to three bivariate models. The first model concerns bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability; these estimators are flexible thanks to a tuning parameter, and their asymptotic distributions are obtained under certain second-order bivariate slowly varying conditions. We then give some examples and present a small Monte Carlo simulation study, followed by an application to a real insurance data set.
The objective of our second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are widely useful in statistical applications; they are generated mainly by normal mean-variance mixtures and scaled skew-normal mixtures, which distinguish the tail dependence structure as shown by our principal results. The third bivariate risk model concerns the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
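One recurring object above, the upper tail dependence coefficient lambda_U = lim_{u->1} P(V > u | U > u), has a simple empirical estimator, sketched here on simulated heavy-tailed data; the threshold choice is an assumption.

```python
import numpy as np

def upper_tail_dependence(x, y, u=0.95):
    """Empirical estimate of P(Y > q_u(Y) | X > q_u(X))."""
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    return float(np.mean(y[x > qx] > qy))

rng = np.random.default_rng(5)
n = 20000
z = rng.standard_t(df=3, size=n)                  # shared heavy-tailed factor
x = z + 0.5 * rng.standard_t(df=3, size=n)
y = z + 0.5 * rng.standard_t(df=3, size=n)
print(round(upper_tail_dependence(x, y), 3))      # well above 1 - u = 0.05
print(round(upper_tail_dependence(x, rng.normal(size=n)), 3))  # ~ 0.05
```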

Relevance:

40.00%

Publisher:

Abstract:

Leakage detection is an important issue in many chemical sensing applications. Leakage detection by thresholds suffers from important drawbacks when sensors have serious drifts or are affected by cross-sensitivities. Here we present an adaptive method, based on Dynamic Principal Component Analysis, that models the relationships between the sensors in the array. Under normal conditions, a certain variance distribution characterizes the sensor signals; in the presence of a new source of variance, however, the PCA decomposition changes drastically. To prevent the influence of sensor drift, the model is adaptive and is computed recursively with minimal computational effort. The behavior of this technique is studied with synthetic signals and with real signals arising from oil vapor leakages in an air compressor. The results clearly demonstrate the efficiency of the proposed method.
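A minimal sketch of the detection idea, under assumptions of ours: fit a PCA model of the normal sensor covariance, then flag samples whose residual (Q) statistic is large. In the adaptive version, the mean and components would be re-estimated recursively with a forgetting factor so that slow drifts are absorbed rather than raising alarms.

```python
import numpy as np

rng = np.random.default_rng(6)

# "Normal" data: six sensors driven by one common factor plus small noise,
# so one principal component captures most of the variance.
normal = rng.normal(size=(1000, 1)) @ np.ones((1, 6)) \
         + 0.1 * rng.normal(size=(1000, 6))
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pcs = vt[:2]                                    # retained components

def q_stat(x):
    """Squared residual after projecting onto the retained PCs."""
    r = (x - mean) - pcs.T @ (pcs @ (x - mean))
    return float(r @ r)

x_ok = np.ones(6) * rng.normal() + 0.1 * rng.normal(size=6)
x_leak = x_ok.copy()
x_leak[2] += 2.0                                # new variance source: a leak
print("Q normal:", round(q_stat(x_ok), 3))      # small
print("Q leak:  ", round(q_stat(x_leak), 3))    # large -> alarm
```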