970 results for generic exponential family duration modeling


Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a physically based compact model for the sub-threshold behavior of a TFT with an amorphous semiconductor channel. Both drift and diffusion current components are considered and combined using a harmonic average. Here, the diffusion component describes the exponential current behavior due to interfacial deep states, while the drift component is associated with the presence of localized deep states formed by dangling bonds arising from broken weak bonds in the bulk and follows a power law. The proposed model yields good agreement with measured results. © 2013 IEEE.
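As an illustration only, a minimal numerical sketch of this combination scheme; the functional forms and every parameter value below are assumptions for demonstration, not the extracted model from the paper:

```python
import numpy as np

# Illustrative parameters only -- not taken from the paper.
I0_DIFF  = 1e-13   # diffusion prefactor [A]
N_SS     = 1.5     # subthreshold slope factor
V_T      = 0.0259  # thermal voltage at room temperature [V]
I0_DRIFT = 1e-8    # drift prefactor [A]
GAMMA    = 2.2     # power-law exponent attributed to deep bulk states

def i_diffusion(vgs):
    """Exponential subthreshold current due to interfacial deep states."""
    return I0_DIFF * np.exp(vgs / (N_SS * V_T))

def i_drift(vgs):
    """Power-law current associated with localized deep states in the bulk."""
    return I0_DRIFT * np.power(np.maximum(vgs, 0.0), GAMMA)

def i_total(vgs):
    """Harmonic-average combination: the smaller component limits the total."""
    i_d, i_f = i_drift(vgs), i_diffusion(vgs)
    return i_d * i_f / (i_d + i_f)

vgs = np.linspace(0.1, 3.0, 7)
for v, i in zip(vgs, i_total(vgs)):
    print(f"Vgs = {v:4.2f} V -> I = {i:.3e} A")
```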

Relevance:

30.00%

Publisher:

Abstract:

This is a user manual for your electronic assistive technology environmental control system trial pack – or, in simple words, a few bits of technology that let you control some household appliances. This information is intended for you, your family and carers.

Relevance:

30.00%

Publisher:

Abstract:

Accurate representation of the coupled effects between turbulent fluid flow with a free surface, heat transfer, solidification, and mold deformation has been shown to be necessary for the realistic prediction of several defects in castings and also for determining the final crystalline structure. A core component of the computational modeling of casting processes is mold filling, which is the most computationally intensive aspect of casting simulation at the continuum level. The complex geometries involved in shape casting, the evolution of the free surface, gas entrapment, and the entrainment of oxide layers into the casting make this a very challenging task in every respect. Despite well over 30 years of effort in developing algorithms, this is by no means a closed subject. In this article, we review the full range of computational methods used, from unstructured finite-element (FE) and finite-volume (FV) methods through fully structured and block-structured approaches utilizing the cut-cell family of techniques to capture the geometric complexity inherent in shape casting. This discussion includes the challenges of generating rapid solutions on high-performance parallel cluster technology and how mold filling links in with the full spectrum of physics involved in shape casting. Finally, novel techniques now emerging that can address genuinely arbitrarily complex geometries are briefly outlined, and their advantages and disadvantages are discussed.

Relevance:

30.00%

Publisher:

Abstract:

The paper is primarily concerned with the modelling of aircraft manufacturing cost. The aim is to establish an integrated life cycle balanced design process through a systems engineering approach to interdisciplinary analysis and control. The cost modelling is achieved using the genetic causal approach, which enforces product family categorisation and the subsequent generation of causal relationships between deterministic cost components and their design source. This utilises causal parametric cost drivers and the definition of the physical architecture from the Work Breakdown Structure (WBS) to identify product families. The paper presents applications to the overall aircraft design with a particular focus on the fuselage as a subsystem of the aircraft, including fuselage panels and localised detail, as well as engine nacelles. The higher-level application to aircraft requirements and functional analysis is investigated and verified relative to life cycle design issues for the relationship between acquisition cost and Direct Operational Cost (DOC), for a range of both metal and composite subsystems. Maintenance is considered in some detail as an important contributor to DOC and life cycle cost. The lower-level application to aircraft physical architecture is investigated and verified for the WBS of an engine nacelle, including a sequential build-stage investigation of the materials, fabrication and assembly costs. The studies are then extended by investigating the acquisition cost of aircraft fuselages, including the recurring unit cost and the non-recurring design cost of the airframe subsystem. The systems costing methodology is facilitated by the genetic causal cost modelling technique, as the latter is highly generic, interdisciplinary, flexible, multilevel and recursive in nature, and can be applied at the various analysis levels required of systems engineering. Therefore, the main contribution of the paper is a methodology for applying systems engineering costing, supported by the genetic causal cost modelling approach, whether at the requirements, functional or physical level.
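As a purely illustrative sketch, not the paper's actual relations — the product families, cost drivers, and coefficients below are hypothetical — a causal parametric cost relation of the general form cost = a · driver^b, grouped by product family over a small WBS, might be organised like this:

```python
from dataclasses import dataclass

@dataclass
class CostRelation:
    """Parametric causal cost relation: cost = a * driver ** b (illustrative form)."""
    component: str  # cost component, e.g. fabrication or assembly
    a: float        # family-specific coefficient (hypothetical)
    b: float        # scaling exponent on the design driver (hypothetical)

    def cost(self, driver_value: float) -> float:
        return self.a * driver_value ** self.b

# Hypothetical product families in a fuselage/nacelle WBS, each with its own
# calibrated relation between a design driver (e.g. panel area in m^2) and cost.
wbs = {
    "fuselage_panel_metal":     CostRelation("fabrication", a=1200.0, b=0.85),
    "fuselage_panel_composite": CostRelation("fabrication", a=2100.0, b=0.75),
    "nacelle_assembly":         CostRelation("assembly",    a=900.0,  b=1.10),
}

design = {  # hypothetical driver values taken from the physical architecture
    "fuselage_panel_metal": 14.0,
    "fuselage_panel_composite": 9.0,
    "nacelle_assembly": 6.5,
}

total = sum(rel.cost(design[item]) for item, rel in wbs.items())
print(f"Estimated recurring cost: {total:,.0f} (arbitrary units)")
```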

Relevance:

30.00%

Publisher:

Abstract:

Measurements of the duration of X-ray lasing pumped with picosecond pulses from the VULCAN optical laser are obtained using a streak camera with 700 fs temporal resolution. Combined with the temporal smearing due to the spectrometer employed, we have measured X-ray laser pulse durations for Ni-like silver at 13.9 nm with a total time resolution of 1.1 ps. For Ni-like silver, the X-ray laser output has a steep rise followed by an approximately exponential temporal decay with a measured full-width at half-maximum (FWHM) of 3.7 (±0.5) ps. For Ne-like nickel lasing at 23.1 nm, the measured duration of lasing is approximately 10.7 (±1) ps (FWHM). An estimate of the duration of the X-ray laser gain has been obtained by temporally resolving spectrally integrated continuum and resonance line emission. For Ni-like silver, this time of emission is approximately 22 (±2) ps (FWHM), while for Ne-like nickel we measure approximately 35 (±2) ps (FWHM). Assuming that these times of emission correspond to the gain duration, we show that a simple model consistently relates the gain durations to the measured durations of X-ray lasing. (C) 2002 Elsevier Science B.V. All rights reserved.
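As a small, hedged illustration — the quadrature deconvolution below is our own simplifying assumption, not the paper's model — one way to relate an intrinsic pulse FWHM to a measured FWHM given the 1.1 ps total instrumental resolution:

```python
import math

RESOLUTION_FWHM_PS = 1.1  # streak camera + spectrometer smearing (from the abstract)

def intrinsic_fwhm(measured_fwhm_ps: float) -> float:
    """Remove instrumental broadening assuming Gaussian-like responses
    added in quadrature (an illustrative assumption, not the paper's analysis)."""
    return math.sqrt(max(measured_fwhm_ps**2 - RESOLUTION_FWHM_PS**2, 0.0))

for label, measured in [("Ni-like Ag, 13.9 nm", 3.7), ("Ne-like Ni, 23.1 nm", 10.7)]:
    print(f"{label}: measured {measured} ps -> ~{intrinsic_fwhm(measured):.1f} ps intrinsic")
```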

Relevance:

30.00%

Publisher:

Abstract:

Architects use cycle-by-cycle simulation to evaluate design choices and understand tradeoffs and interactions among design parameters. Efficiently exploring exponential-size design spaces with many interacting parameters remains an open problem: the sheer number of experiments renders detailed simulation intractable. We attack this problem via an automated approach that builds accurate, confident predictive design-space models. We simulate sampled points, using the results to teach our models the function describing relationships among design parameters. The models produce highly accurate performance estimates for other points in the space, can be queried to predict performance impacts of architectural changes, and are very fast compared to simulation, enabling efficient discovery of tradeoffs among parameters in different regions. We validate our approach via sensitivity studies on memory hierarchy and CPU design spaces: our models generally predict IPC with only 1-2% error and reduce required simulation by two orders of magnitude. We also show the efficacy of our technique for exploring chip multiprocessor (CMP) design spaces: when trained on a 1% sample drawn from a CMP design space with 250K points and up to 55x performance swings among different system configurations, our models predict performance with only 4-5% error on average. Our approach combines with techniques to reduce time per simulation, achieving net time savings of three to four orders of magnitude. Copyright © 2006 ACM.
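A minimal sketch of the sample-train-query workflow the abstract describes, assuming NumPy and scikit-learn are available; the synthetic design space, the placeholder response surface, and the choice of regressor are all stand-ins and not the paper's actual model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in for a design space: each row is one configuration
# (e.g. cache size, issue width, core count), encoded numerically.
design_space = rng.integers(1, 8, size=(250_000, 5)).astype(float)

def simulate_ipc(configs):
    """Placeholder for the detailed cycle-by-cycle simulator
    (a hypothetical response surface, used here only to generate labels)."""
    return (np.log2(configs[:, 0]) + 0.5 * configs[:, 1]
            - 0.1 * configs[:, 2] ** 2 + 0.3 * configs[:, 3] * configs[:, 4])

# Simulate only a ~1% sample, then train the predictive model on it.
sample_idx = rng.choice(len(design_space), size=2_500, replace=False)
X_train = design_space[sample_idx]
y_train = simulate_ipc(X_train)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Query the model for unsimulated points instead of running the simulator.
X_query = design_space[:10]
print(model.predict(X_query))
```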

Relevance:

30.00%

Publisher:

Abstract:

Efficiently exploring exponential-size architectural design spaces with many interacting parameters remains an open problem: the sheer number of experiments required renders detailed simulation intractable. We attack this via an automated approach that builds accurate predictive models. We simulate sampled points, using the results to teach our models the function describing relationships among design parameters. The models can be queried and are very fast, enabling efficient design tradeoff discovery. We validate our approach via two uniprocessor sensitivity studies, predicting IPC with only 1–2% error. In an experimental study using the approach, training on 1% of a 250K-point CMP design space allows our models to predict performance with only 4–5% error. Our predictive modeling combines well with techniques that reduce the time taken by each simulation experiment, achieving net time savings of three to four orders of magnitude.

Relevance:

30.00%

Publisher:

Abstract:

Product line software engineering depends on capturing the commonality and variability within a family of products, typically using feature modeling, and using this information to evolve a generic reference architecture for the family. For embedded systems, possible variability in hardware and operating system platforms is an added complication. The design process can be facilitated by first exploring the behavior associated with features. In this paper we outline a bidirectional feature modeling scheme that supports the capture of commonality and variability in the platform environment as well as within the required software. Additionally, 'behavior' associated with features can be included in the overall model. This is achieved by integrating the UCM path notation in a way that exploits UCM's static and dynamic stubs to capture behavioral variability and link it to the feature model structure. The resulting model is a richer source of information to support the architecture development process.
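As a toy illustration only — the feature names, the 'side' attribute, and the UCM-style behavior references are invented, and the paper's model is expressed in UCM notation rather than code — a bidirectional feature structure mixing software and platform variability with attached behavior might be represented as:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Feature:
    """A node in a (much simplified) bidirectional feature model."""
    name: str
    mandatory: bool = True
    side: str = "software"          # "software" or "platform" variability
    behavior: Optional[str] = None  # e.g. a reference to a UCM stub / scenario
    children: List["Feature"] = field(default_factory=list)

root = Feature("EmbeddedController", children=[
    Feature("DeviceControl", behavior="ucm://main-path/control-stub"),
    Feature("VoiceInput", mandatory=False, behavior="ucm://dynamic-stub/voice"),
    Feature("RTOS", side="platform", children=[
        Feature("FreeRTOS", mandatory=False, side="platform"),
        Feature("Linux", mandatory=False, side="platform"),
    ]),
])

def variants(f: Feature):
    """List optional features, i.e. the variability to resolve per product."""
    out = [f.name] if not f.mandatory else []
    for child in f.children:
        out += variants(child)
    return out

print(variants(root))
```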

Relevance:

30.00%

Publisher:

Abstract:

Correlations between intergroup violence and youth aggression are often reported. Yet longitudinal research is needed to understand the developmental factors underlying this relation, including between-person differences in within-person change in aggression through the adolescent years. Multilevel modeling was used to explore developmental and contextual influences related to risk for youth aggression using 4 waves of a prospective, longitudinal study of adolescent/mother dyad reports (N = 820; 51% female; 10–20 years old) in Belfast, Northern Ireland, a setting of protracted political conflict. Experience with sectarian (i.e., intergroup) antisocial behavior predicted greater youth aggression; however, that effect declined with age, and youth were buffered by a cohesive family environment. The trajectory of aggression (i.e., intercepts and linear slopes) was related to greater youth engagement in sectarian antisocial behavior; however, being female and having a more cohesive family were associated with lower levels of youth participation in sectarian acts. The findings are discussed in terms of protective and risk factors for adolescent aggression, and more specifically, participation in sectarian antisocial behavior. The article concludes with clinical and intervention implications, which may decrease youth aggression and the perpetuation of intergroup violence in contexts of ongoing conflict.

Relevance:

30.00%

Publisher:

Abstract:

Background: Over one billion children are exposed worldwide to political violence and armed conflict. Currently, conclusions about bases for adjustment problems are qualified by limited longitudinal research from a process-oriented, social-ecological perspective. In this study, we examined a theoretically based model for the impact of multiple levels of the social ecology (family, community) on adolescent delinquency. Specifically, this study explored the impact of children’s emotional insecurity about both the family and community on youth delinquency in Northern Ireland. Methods: In the context of a five-wave longitudinal research design, participants included 999 mother-child dyads in Belfast (482 boys, 517 girls), drawn from socially deprived, ethnically homogeneous areas that had experienced political violence. Youth ranged in age from 10 to 20 and were 12.18 (SD = 1.82) years old on average at Time 1. Findings: The longitudinal analyses were conducted in hierarchical linear modeling (HLM), allowing for the modeling of inter-individual differences in intra-individual change. Intra-individual trajectories of emotional insecurity about the family related to children’s delinquency. Greater insecurity about the community worsened the impact of family conflict on youth’s insecurity about the family, consistent with the notion that youth’s insecurity about the community sensitizes them to exposure to family conflict in the home. Conclusions: The results suggest that ameliorating children’s insecurity about family and community in contexts of political violence is an important goal toward improving adolescents’ well-being, including reduced risk for delinquency.
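As a minimal sketch of the kind of growth model described (inter-individual differences in intra-individual change), using simulated data and the mixed-effects API in statsmodels as a stand-in for HLM; the variable names, simulated values, and effect sizes are illustrative assumptions, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_youth, n_waves = 200, 5

# Simulated long-format data: one row per youth per wave (illustrative only).
rows = []
for pid in range(n_youth):
    intercept = rng.normal(2.0, 0.8)    # person-specific starting level
    slope = rng.normal(-0.1, 0.15)      # person-specific change per wave
    insecurity = rng.normal(0.0, 1.0)   # time-invariant predictor
    for wave in range(n_waves):
        delinquency = intercept + slope * wave + 0.4 * insecurity + rng.normal(0, 0.5)
        rows.append((pid, wave, insecurity, delinquency))

df = pd.DataFrame(rows, columns=["pid", "wave", "insecurity", "delinquency"])

# Random intercept and random slope for wave, grouped by person.
model = smf.mixedlm("delinquency ~ wave + insecurity", df,
                    groups=df["pid"], re_formula="~wave")
print(model.fit().summary())
```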

Relevance:

30.00%

Publisher:

Abstract:

Master's thesis in Biostatistics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013.

Relevance:

30.00%

Publisher:

Abstract:

One line of research and development in robotics that has received growing attention in recent years is the development of biologically inspired robots. The idea is to acquire knowledge from biological organisms, whose evolution took place over millions of years, and to use that knowledge to implement locomotion by the same methods (or at least drawing on the biological inspiration) in the machines we build. It is believed that in this way it is possible to develop machines with capabilities similar to those of biological organisms in terms of locomotion ability and energy efficiency. One way to better understand how these systems work, without the need to develop expensive prototypes with long development times, is to use simulation models. Based on these ideas, the goal of this work is to study the biomechanics of the spider crab or santola (Maja brachydactyla), an edible crab species belonging to the Majidae family of decapod arthropods, using the SimMechanics toolbox of the Matlab / Simulink environment. This thesis describes the anatomy and locomotion of the spider crab, its biomechanical modeling, and the simulation of its movement in the Matlab / SimMechanics and SolidWorks environments.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models where the eigenfunctions are the Hermite and Laguerre polynomials respectively. The eigenfunction approach has at least six advantages: i) it is general since any square-integrable function may be written as a linear combination of the eigenfunctions; ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of the linear principal components analysis; iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; iv) more importantly, this generates fat tails for the variance and returns processes; v) in contrast to popular models, the variance of the variance is a flexible function of the variance; vi) these models are closed under temporal aggregation.
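A much simplified sketch of the discrete-time idea, assuming NumPy: a Gaussian AR(1) state variable (whose conditional-expectation eigenfunctions are the Hermite polynomials, as in the log-normal case mentioned above) with the variance built as a linear combination of the first few eigenfunctions. The persistence and weights are arbitrary illustrative choices, constrained only so the implied variance stays positive:

```python
import numpy as np

rng = np.random.default_rng(2)

RHO = 0.95            # AR(1) persistence of the state variable (illustrative)
A = [1.0, 0.3, 0.1]   # weights on Hermite eigenfunctions He0, He1, He2 (illustrative;
                      # chosen so 0.9 + 0.3*x + 0.1*x**2 is strictly positive)

def hermite(x):
    """Probabilists' Hermite polynomials He0..He2; eigenfunctions of the
    conditional-expectation operator of a stationary Gaussian AR(1):
    E[He_n(x_{t+1}) | x_t] = rho**n * He_n(x_t)."""
    return np.array([np.ones_like(x), x, x**2 - 1.0])

T = 10_000
x = np.zeros(T)
for t in range(1, T):  # stationary N(0,1) AR(1) state
    x[t] = RHO * x[t - 1] + np.sqrt(1 - RHO**2) * rng.standard_normal()

sigma2 = np.tensordot(A, hermite(x), axes=1)    # variance_t = sum_i a_i * He_i(x_t)
returns = np.sqrt(sigma2) * rng.standard_normal(T)

kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
print("mean variance:", sigma2.mean(), " kurtosis of returns:", kurtosis)
```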

Relevance:

30.00%

Publisher:

Abstract:

Protein evolution is an important area of bioinformatics research and drives the interest in finding alignment tools that can be used reliably and that accurately model the evolution of a protein family. TM-Align (Zhang and Skolnick, 2005) is considered the ideal tool for such a task in terms of speed and accuracy. Consequently, in this study, TM-Align was used as a reference point to help identify other alignment tools that are able to capture protein evolution. In parallel, we extended the existing protein secondary-structure exploration tool, Helix Explorer (Marrakchi, 2006), so that it can also be used as a tool for modeling protein evolution.

Relevance:

30.00%

Publisher:

Abstract:

Developing a drug is not only complex, but the returns on investment are not always those desired or anticipated. Many drugs still fail in Phase III even with the technological progress achieved in several aspects of drug development. This translates into a decreasing number of drugs reaching the market. The traditional drug development process therefore needs to be improved in order to make new products available to the patients who need them. The goal of this research was to explore and propose changes to the drug development process using the principles of advanced modeling and clinical trial simulation.

In the first part of this research, new algorithms available in the ADAPT 5® software were compared with other algorithms already available in order to determine their strengths and weaknesses. The two new algorithms evaluated were the iterative two-stage method (ITS) and maximum likelihood via expectation maximization (MLEM). Our results showed that MLEM was superior to ITS. The MLEM method was comparable to the first-order conditional estimation (FOCE) algorithm available in NONMEM®, with fewer shrinkage problems for the variance estimates. These new algorithms were therefore used for the research presented in this thesis.

During drug development, for noncompartmentally calculated pharmacokinetic parameters to be adequate, the terminal half-life must be well established. Well-designed and well-analyzed pharmacokinetic studies are essential during drug development, especially for submissions of generic and supergeneric products (a formulation whose active ingredient is the same as that of the brand-name drug but whose drug release profile differs from it), since they are often the only pivotal studies needed to decide whether a product can be marketed or not. The second part of the research therefore aimed to evaluate whether parameters calculated from a half-life obtained over a sampling duration deemed too short for an individual could affect the conclusions of a bioequivalence study, and whether they should be excluded from the statistical analyses. The results showed that parameters calculated from a half-life obtained over a sampling duration deemed too short negatively influenced the results when they were kept in the analysis of variance. The area under the curve extrapolated to infinity for these subjects should therefore be removed from the statistical analysis, and a priori guidelines to this effect are needed. The pivotal pharmacokinetic studies required during drug development should therefore follow this recommendation so that the right decisions are made about a product. This information was used in the clinical trial simulations carried out during the research presented in this thesis to ensure that the most probable conclusions were obtained.

In the final part of this thesis, clinical trial simulations improved the clinical development process of a drug. The results of a pilot clinical study for a supergeneric under development seemed very encouraging. However, some questions were raised about the results, and it had to be determined whether the test and reference products would be equivalent in the pivotal studies conducted under fasting and fed conditions, after both a single dose and repeated doses. Clinical trial simulations were undertaken to address some of the questions raised by the pilot study, and these simulations suggested that the new formulation would not meet the equivalence criteria in the pivotal studies. These simulations also helped determine which modifications to the new formulation were needed to improve the chances of meeting the equivalence criteria. This research provided solutions to improve different aspects of the drug development process. In particular, the clinical trial simulations reduced the number of studies needed to develop the supergeneric, the number of subjects unnecessarily exposed to the drug, and the development costs. Finally, they allowed us to establish new exclusion criteria for statistical bioequivalence analyses. The research presented in this thesis suggests improvements to the drug development process by evaluating new algorithms for compartmental analyses, by establishing exclusion criteria for pharmacokinetic (PK) parameters in certain analyses, and by demonstrating how clinical trial simulations are useful.
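As a hedged illustration of the kind of exclusion rule discussed above — the specific thresholds (sampling spanning at least three terminal half-lives, extrapolated AUC fraction no more than 20%) are common pharmacokinetic conventions used here as assumptions, not criteria taken from the thesis:

```python
import numpy as np

def terminal_half_life(times, conc, n_last=3):
    """Estimate terminal half-life by log-linear regression on the last n points."""
    t, c = np.asarray(times[-n_last:], float), np.asarray(conc[-n_last:], float)
    slope, _ = np.polyfit(t, np.log(c), 1)
    return np.log(2) / -slope

def auc_inf_reliable(times, conc, min_half_lives=3.0, max_extrap_frac=0.20):
    """Flag whether AUC(0-inf) looks reliable enough to keep in a bioequivalence analysis."""
    t, c = np.asarray(times, float), np.asarray(conc, float)
    t_half = terminal_half_life(t, c)
    lambda_z = np.log(2) / t_half
    auc_last = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)   # linear trapezoidal AUC(0-tlast)
    auc_extrap = c[-1] / lambda_z                            # extrapolated terminal tail
    frac_extrap = auc_extrap / (auc_last + auc_extrap)
    spans_enough = (t[-1] - t[0]) >= min_half_lives * t_half
    return bool(spans_enough and frac_extrap <= max_extrap_frac)

times = [0.5, 1, 2, 4, 8, 12, 24]             # hypothetical sampling times [h]
conc = [1.2, 2.5, 3.0, 2.4, 1.5, 1.0, 0.45]   # hypothetical concentrations
print("keep AUC(0-inf) in the analysis:", auc_inf_reliable(times, conc))
```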