900 results for Field-based model


Relevance: 90.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 90.00%

Abstract:

After almost a decade of experience applying model-driven approaches to system development, the reported productivity gains from using models and model transformations to develop entire systems are undeniable benefits of this approach. However, the slowness of higher-level, rule-based model transformation languages hinders the applicability of the approach at industrial scales. Lower-level, efficient languages can be used instead, but then productivity and easy maintenance cease to exist. The abstraction penalty problem is not new: it also exists for high-level, object-oriented languages, yet everyone uses those now. Why, then, is not everyone using rule-based model transformation languages? In this thesis, we propose a framework, comprising a language and its respective environment, designed to tackle the most performance-critical operation of high-level model transformation languages: pattern matching. This framework shows that it is possible to mitigate the performance penalty while still using high-level model transformation languages.
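
As an illustration of why this operation dominates transformation cost, here is a minimal, hypothetical sketch of rule pattern matching over a toy model graph; the encoding and the naive enumeration strategy are illustrative assumptions, not the framework proposed in the thesis.

```python
# Naive pattern matching: find all bindings of a rule's pattern nodes to
# model elements such that every pattern edge is present in the model.
from itertools import permutations

def match(pattern_nodes, pattern_edges, model_nodes, model_edges):
    """Enumerate candidate bindings; real engines prune this search."""
    matches = []
    for candidate in permutations(model_nodes, len(pattern_nodes)):
        binding = dict(zip(pattern_nodes, candidate))
        if all((binding[src], binding[dst]) in model_edges
               for src, dst in pattern_edges):
            matches.append(binding)
    return matches

# Toy model: three elements and two associations, matched by a one-edge pattern.
model_nodes = ["a", "b", "c"]
model_edges = {("a", "b"), ("b", "c")}
print(match(["x", "y"], [("x", "y")], model_nodes, model_edges))
```

Real engines replace this factorial enumeration with indexed or incremental search plans; the sketch only makes the cost of the naive baseline visible.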

Relevance: 90.00%

Abstract:

This work is divided into two distinct parts. The first part consists of the study of the metal-organic framework UiO-66Zr, where the aim was to determine the force field that best describes the adsorption equilibrium properties of two different gases, methane and carbon dioxide. The second part focuses on the study of the single-wall carbon nanotube topology for ethane adsorption; the aim was to simplify as much as possible the solid-fluid force field model to increase the computational efficiency of the Monte Carlo simulations. The choice of both adsorbents is motivated by their potential use in adsorption processes such as the capture and storage of carbon dioxide, natural gas storage, separation of biogas components, and olefin/paraffin separations. The adsorption studies on the two porous materials were performed by molecular simulation using the grand canonical Monte Carlo (μ,V,T) method, over the temperature range 298-343 K and the pressure range 0.06-70 bar. The calibration curves of pressure and density as a function of chemical potential and temperature for the three adsorbates under study were obtained by Monte Carlo simulation in the canonical ensemble (N,V,T); polynomial fitting and interpolation of the obtained data made it possible to determine the pressure and gas density at any chemical potential. The adsorption equilibria of methane and carbon dioxide in UiO-66Zr were simulated and compared with the experimental data obtained by Jasmina H. Cavka et al. The results show that the best force field for both gases is a chargeless united-atom force field based on the TraPPE model. Using this validated force field, it was possible to estimate the isosteric heats of adsorption and the Henry constants. From the grand canonical Monte Carlo simulations of carbon nanotubes, we conclude that the fastest runs are obtained with a force field that approximates the nanotube as a smooth cylinder; this approximation gives execution times 1.6 times faster than typical atomistic runs.
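
For orientation, the elementary move behind such (μ,V,T) simulations is the Metropolis particle insertion (and its deletion counterpart). The sketch below shows the standard insertion acceptance rule; the numeric parameters and the placeholder energy change are assumptions, not values from this work.

```python
# Grand canonical Monte Carlo insertion: accept a trial particle with
# probability min(1, V / (Lambda^3 (N+1)) * exp(beta * (mu - dU))).
import math, random

def accept_insertion(n, volume, beta, mu, delta_u, lambda3):
    """Metropolis acceptance test for inserting one particle at (mu, V, T)."""
    acc = (volume / (lambda3 * (n + 1))) * math.exp(beta * (mu - delta_u))
    return random.random() < min(1.0, acc)

# One illustrative attempt: a favourable insertion (delta_u < 0) at moderate mu.
print(accept_insertion(n=100, volume=1000.0, beta=0.4, mu=-2.0,
                       delta_u=-1.5, lambda3=1.0))
```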

Relevance: 90.00%

Abstract:

There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning, to robustly represent sequential information from single task demonstrations, with slower weight-based learning during internal simulations, to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders, together with the correction of initial prediction errors, allow the robot to acquire generalized task knowledge about possible serial orders and the longer-term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner.
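
As background, DNF models of this kind build on Amari-type field dynamics, tau * du/dt = -u + h + S(x,t) + integral of w(x - x') f(u(x')) dx'. The following minimal sketch integrates one such field until a stimulus-driven activation peak forms; the kernel shape, threshold output, and grid are illustrative assumptions rather than the paper's model.

```python
# One-dimensional Amari field: local excitation plus broad inhibition lets a
# localized input carve out a self-stabilizing activation peak.
import numpy as np

def dnf_step(u, stimulus, kernel, dx, dt=0.1, tau=1.0, h=-2.0):
    """One Euler step of tau*du/dt = -u + h + S + (w * f(u)) convolution."""
    f_u = (u > 0).astype(float)                           # Heaviside output rate
    lateral = dx * np.convolve(f_u, kernel, mode="same")  # interaction integral
    return u + (dt / tau) * (-u + h + stimulus + lateral)

x = np.linspace(-10.0, 10.0, 201)
dx = x[1] - x[0]
kernel = 2.0 * np.exp(-x**2 / 2.0) - 0.5       # local excitation, broad inhibition
stimulus = 4.0 * np.exp(-(x - 2.0) ** 2)       # input centred at x = 2
u = np.full_like(x, -2.0)                      # field starts at resting level h
for _ in range(200):
    u = dnf_step(u, stimulus, kernel, dx)
print("activation peak near x =", x[np.argmax(u)])
```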

Relevance: 90.00%

Abstract:

Computational modeling has become a widely used tool for unraveling the mechanisms of higher-level cooperative cell behavior during vascular morphogenesis. However, experimenting with published simulation models or adding new assumptions to those models can be daunting for novice and even for experienced computational scientists. Here, we present a step-by-step, practical tutorial for building cell-based simulations of vascular morphogenesis using the Tissue Simulation Toolkit (TST). The TST is a freely available, open-source C++ library for developing simulations with the two-dimensional cellular Potts model, a stochastic, agent-based framework to simulate collective cell behavior. We show the basic use of the TST to simulate and experiment with published simulations of vascular network formation. Then, we present step-by-step instructions and explanations for building a recent simulation model of tumor angiogenesis. Demonstrated mechanisms include cell-cell adhesion, chemotaxis, cell elongation, haptotaxis, and haptokinesis.
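
Although the TST implements this machinery in C++, the elementary move of the cellular Potts model is easy to sketch: attempt to copy a neighbouring lattice site's cell index into a target site and accept with a Metropolis rule on the energy change. The Python sketch below keeps only adhesion and an area constraint in the Hamiltonian; all parameters and the single-cell setup are illustrative assumptions, and this is not the TST API.

```python
# Minimal cellular Potts model: one cell (index 1) in a medium (index 0) on a
# periodic lattice, relaxing toward a target area under adhesion energy.
import math, random

SIZE, J, LAMBDA, TARGET, T = 20, 11.0, 1.0, 60, 8.0
grid = [[0] * SIZE for _ in range(SIZE)]
for i in range(8, 13):                 # seed one 5x5 cell
    for j in range(8, 13):
        grid[i][j] = 1
area = 25

def copy_attempt():
    """One elementary CPM move: copy a neighbour's index, accept via Metropolis."""
    global area
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    old, new = grid[x][y], grid[(x + dx) % SIZE][(y + dy) % SIZE]
    if old == new:
        return
    def boundary(s):                   # unlike-neighbour bonds if (x, y) held s
        return sum(grid[(x + ex) % SIZE][(y + ey) % SIZE] != s
                   for ex, ey in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    dh = J * (boundary(new) - boundary(old))            # adhesion change
    new_area = area + (1 if new == 1 else -1)           # cell grows or shrinks
    dh += LAMBDA * ((new_area - TARGET) ** 2 - (area - TARGET) ** 2)
    if dh <= 0 or random.random() < math.exp(-dh / T):  # Metropolis acceptance
        grid[x][y] = new
        area = new_area

for _ in range(50_000):
    copy_attempt()
print("cell area after relaxation:", area)
```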

Relevance: 90.00%

Abstract:

Nowadays, many health care systems are large, complex, and quite dynamic environments; this is especially true of Emergency Departments (EDs). An ED operates 24 hours per day throughout the year with limited resources, and is frequently overcrowded. It is therefore essential to simulate EDs in order to improve their performance both qualitatively and quantitatively. This improvement can be achieved by modelling and simulating EDs with an agent-based model (ABM) and optimising many different staff scenarios. This work optimises the staff configuration of an ED. To perform the optimisation, objective functions to minimise or maximise have to be set; one such objective is to find the staff configuration that minimises patient waiting time. The staff configuration comprises doctors, triage nurses, and admission staff, in both number and type. Finding the best configuration is a combinatorial problem that can take a long time to solve. HPC was used to run the experiments, and encouraging results were obtained. However, even with the basic ED used in this work, the search space is very large; as the problem size increases, more processing resources will be needed to obtain results in an acceptable time.
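
A minimal sketch of the combinatorial search in question, assuming a toy stand-in for the agent-based ED run (in the real experiments each evaluation is a full simulation): enumerate staff configurations and keep the one with the best objective value. simulate_waiting_time and the cost weights below are hypothetical.

```python
# Exhaustive sweep over staff configurations; on HPC these independent
# evaluations are what gets distributed across processors.
from itertools import product

def simulate_waiting_time(doctors, nurses, admissions):
    """Toy stand-in for one ABM run returning mean patient waiting time."""
    return 100.0 / (1.5 * doctors + 1.0 * nurses + 0.5 * admissions)

def objective(cfg):
    doctors, nurses, admissions = cfg
    staffing_cost = 3.0 * doctors + 2.0 * nurses + 1.0 * admissions
    return simulate_waiting_time(doctors, nurses, admissions) + staffing_cost

best = min(product(range(1, 6), range(1, 6), range(1, 4)), key=objective)
print("best (doctors, triage nurses, admissions):", best)
```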

Relevance: 90.00%

Abstract:

A parts-based model is a parametrization of an object class using a collection of landmarks following the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning, due to the simplicity of the involved graphs, usually trees. However, these models do not consider possible statistical patterns among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference, we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
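
A minimal sketch of the stacking idea, under toy data and assumed base learners: train one classifier per kernel and feed their responses, rather than a fixed linear mix of the kernels, to a second-level classifier.

```python
# Two-level kernel stacking: per-kernel decision responses (level 1) become
# the input features of a logistic-regression stacker (level 2).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # toy binary labels

def kernel_response(X, y, gamma):
    """Base classifier for one RBF kernel: kernel ridge decision values."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), 2.0 * y - 1.0)
    return K @ alpha

# Level 1: independent responses from two kernels of different widths.
responses = np.stack([kernel_response(X, y, g) for g in (0.5, 2.0)], axis=1)

# Level 2: logistic regression on the stacked responses (gradient descent).
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(responses @ w + b)))
    w -= 0.1 * responses.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()
print("stacked training accuracy:", ((responses @ w + b > 0) == (y == 1)).mean())
```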

Relevance: 90.00%

Abstract:

Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies developing an algorithm and programming it. Once this is done, we let the different agents interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on that made by its neighbors. As a result, learning follows an S-shaped curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only does a policy need time to deploy its effects, but it also takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with both the theoretical expectations and the empirical evidence.
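
A minimal sketch of the learning mechanism described above, under an assumed grid of countries and illustrative payoffs: an agent adopts the policy only when a neighbour demonstrates it and a noisy observation of its effectiveness beats the status quo, so adoption spreads conditionally on neighbours' choices and traces an S-shaped curve.

```python
# Policy diffusion through learning on a periodic grid of countries.
import random

SIZE, STEPS, GAIN = 20, 30, 0.1        # GAIN: assumed effectiveness advantage
grid = [[random.random() < 0.05 for _ in range(SIZE)] for _ in range(SIZE)]

for step in range(STEPS + 1):
    if step % 5 == 0:
        share = sum(map(sum, grid)) / SIZE ** 2
        print(f"step {step:2d}: {share:.2f} of countries adopted")
    new = [row[:] for row in grid]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y]:
                continue
            neighbours = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
                          for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            # Learn: adopt if a neighbour demonstrates the policy and a noisy
            # observation of its effectiveness beats the status quo.
            if any(neighbours) and random.gauss(GAIN, 0.2) > 0:
                new[x][y] = True
    grid = new
```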

Relevance: 90.00%

Abstract:

We report experimental and numerical results showing how certain N-dimensional dynamical systems are able to exhibit complex time evolutions based on the nonlinear combination of N-1 oscillation modes. The experiments were done with a family of thermo-optical systems of effective dynamical dimension varying from 1 to 6. The corresponding mathematical model is an N-dimensional vector field based on a scalar-valued nonlinear function of a single variable that is a linear combination of all the dynamic variables. We show how the complex evolutions appear in association with the occurrence of successive Hopf bifurcations in a saddle-node pair of fixed points, up to the exhaustion of their instability capabilities in N dimensions. For this reason the observed phenomenon is denoted the full instability behavior of the dynamical system. The process through which the attractor responsible for the observed time evolution is formed may be rather complex and difficult to characterize. Nevertheless, the well-organized structure of the time signals suggests some generic mechanism of nonlinear mode mixing that we associate with the cluster of invariant sets emerging from the pair of fixed points and with the influence of the neighboring saddle sets on the flow near the attractor. The generation of invariant tori is likely during the development of the full instability, and the global process may be considered a generalized Landau scenario for the emergence of irregular and complex behavior through the nonlinear superposition of oscillatory motions.
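
A numerical sketch consistent with the stated model class, i.e., an N-dimensional flow driven by a scalar nonlinear function of one linear combination of the variables; the matrix, weights, and tanh nonlinearity are illustrative choices, not the fitted thermo-optical model.

```python
# dx/dt = A x + v * f(w . x): linear relaxation plus a scalar nonlinearity
# of a single linear combination of all dynamic variables.
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = -np.eye(N) + 0.3 * rng.normal(size=(N, N))   # stable linear part plus coupling
w = rng.normal(size=N)                            # weights of the linear combination
v = rng.normal(size=N)                            # direction of the nonlinear feedback

def flow(x):
    """N-dimensional vector field driven by a scalar nonlinearity of w.x."""
    return A @ x + v * np.tanh(w @ x)

x, dt = 0.1 * np.ones(N), 0.01
trajectory = []
for _ in range(20_000):                           # explicit Euler integration
    x = x + dt * flow(x)
    trajectory.append(x.copy())
print("long-run amplitude per variable:", np.abs(trajectory[-2000:]).max(axis=0))
```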

Relevance: 90.00%

Abstract:

PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses, since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away the statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular-level dosimetry for macroscopic tumors, and end point quantities such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was randomly allowed to decay for each model size and for seven different ratios of the number of decays to the number of cells, Nr: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to Nr^(-1/2). From dose volume histograms, the surviving fraction of cells, the equivalent uniform dose (EUD), and the TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same previous scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and results were compared between the adjusted spherical and cellular models with similar comparability. The TCP values from the macroscopic tumor models were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
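
A hedged sketch of the per-cell dose assignment and TCP evaluation described in METHODS: each cell receives its bin's mean absorbed dose scaled by a Gaussian factor of relative width Nr^(-1/2), and TCP is computed as the probability that every cell is killed. The exponential survival model and the d0 value are assumptions for illustration, not the paper's radiobiological parameters.

```python
# Per-cell dose sampling around bin averages, then TCP = product over cells
# of (1 - survival probability), accumulated in log space for stability.
import math, random

def tumor_control_probability(bin_mean_doses, cells_per_bin, nr, d0=2.0):
    """TCP as the joint probability that every cell is killed."""
    width = nr ** -0.5                   # statistical width for nr decays/cell
    log_tcp = 0.0
    for mean_dose, n_cells in zip(bin_mean_doses, cells_per_bin):
        for _ in range(n_cells):
            dose = max(0.0, mean_dose * (1.0 + random.gauss(0.0, width)))
            survival = math.exp(-dose / d0)          # assumed exponential survival
            log_tcp += math.log(max(1e-300, 1.0 - survival))
    return math.exp(log_tcp)

print(tumor_control_probability([30.0, 25.0, 18.0], [100, 400, 900], nr=50))
```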

Relevance: 90.00%

Abstract:

We construct a utility-based model of fluctuations, with nominal rigidities and unemployment, and draw its implications for the unemployment-inflation trade-off and for the conduct of monetary policy. We proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. We show the role of labor market frictions and real wage rigidities in determining the effects of productivity shocks on unemployment. We then introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of labor market frictions and real wage rigidities. We show the nature of the trade-off between inflation and unemployment stabilization, and its dependence on labor market characteristics. We draw the implications for optimal monetary policy.

Relevance: 90.00%

Abstract:

This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function in which pure inertia and pure variety-seeking behaviors appear as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or variety-seeking, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to variety-seeking behavior as the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine crackers, and catsup. Non-linear specifications provide the best fit to the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
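
An illustrative simulation of the state-dependent structure described above, with assumed functional forms: a familiarity term added to utility gives pure inertia when its weight is positive, pure variety seeking when negative, and the hybrid switch-over when concave.

```python
# State-dependent choice: utility = base + theta(familiarity) + noise, where
# familiarity of each product decays and is reinforced by purchase.
import random

def simulate(theta, periods=60, n_products=3, decay=0.8):
    """Return the sequence of max-utility choices under familiarity term theta."""
    familiarity = [0.0] * n_products
    choices = []
    for _ in range(periods):
        utils = [1.0 + theta(f) + random.gauss(0.0, 0.1) for f in familiarity]
        choice = max(range(n_products), key=lambda a: utils[a])
        familiarity = [decay * f + (1.0 if a == choice else 0.0)
                       for a, f in enumerate(familiarity)]
        choices.append(choice)
    return choices

print("pure inertia:   ", simulate(lambda f: 0.5 * f)[-10:])    # locks onto one brand
print("variety seeking:", simulate(lambda f: -0.5 * f)[-10:])   # keeps switching
print("hybrid:         ", simulate(lambda f: 0.8 * f - 0.3 * f * f)[-10:])
```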

Relevance: 90.00%

Abstract:

The complex network dynamics that arise from the interaction of the brain's structural and functional architectures give rise to mental function. Theoretical models demonstrate that the structure-function relation is maximal when the global network dynamics operate at a critical point of state transition. In the present work, we used a dynamic mean-field neural model to fit empirical structural connectivity (SC) and functional connectivity (FC) data acquired in humans and macaques and developed a new iterative-fitting algorithm to optimize the SC matrix based on the FC matrix. A dramatic improvement of the fitting of the matrices was obtained with the addition of a small number of anatomical links, particularly cross-hemispheric connections, and reweighting of existing connections. We suggest that the notion of a critical working point, where the structure-function interplay is maximal, may provide a new way to link behavior and cognition, and a new perspective to understand recovery of function in clinical conditions.
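
A hedged sketch of the iterative-fitting loop described above, with a linearised surrogate in place of the dynamic mean-field simulation: simulate FC from the current SC estimate, then nudge SC entries (adding or reweighting links) toward the empirical FC. All matrices, the surrogate, and the update step are illustrative assumptions.

```python
# Iteratively optimize a structural connectivity (SC) matrix so that the FC
# it generates approaches an empirical FC matrix.
import numpy as np

rng = np.random.default_rng(2)
n = 8
fc_emp = np.corrcoef(rng.normal(size=(n, 500)))     # stand-in empirical FC
sc = np.abs(rng.normal(size=(n, n))) * 0.05
sc = (sc + sc.T) / 2
np.fill_diagonal(sc, 0)

def simulated_fc(sc, sigma=0.5):
    """Linearised surrogate: correlations of an OU process with coupling sc."""
    a = sc - np.eye(n) * (sc.sum(1).max() + 1.0)    # stable symmetric system
    cov = np.linalg.solve(-2.0 * a, sigma**2 * np.eye(n))  # Lyapunov solution
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

for _ in range(50):
    err = fc_emp - simulated_fc(sc)
    sc = np.clip(sc + 0.01 * err, 0, None)          # reweight / add links
    sc = (sc + sc.T) / 2
    np.fill_diagonal(sc, 0)
print("final FC fit (corr):",
      np.corrcoef(simulated_fc(sc).ravel(), fc_emp.ravel())[0, 1])
```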

Relevance: 90.00%

Abstract:

The existence of a causal relationship between the spatial distribution of living organisms and their environment, in particular climate, has long been recognized and is the central principle of biogeography. In turn, this recognition has led scientists to the idea of using the climatic, topographic, edaphic and biotic characteristics of the environment to predict its potential suitability for a given species or biological community. In this thesis, my objective is to contribute to the development of methodological improvements in the field of species distribution modeling. More precisely, the objectives are to propose solutions to overcome limitations of species distribution models when applied to conservation biology issues, or when used as an assessment tool for the potential impacts of global change. The first objective of my thesis is to help demonstrate the potential of species distribution models for conservation-related applications. I present a methodology to generate pseudo-absences in order to overcome the frequent lack of reliable absence data. I also demonstrate, both theoretically (simulation-based) and practically (field-based), how species distribution models can be successfully used to model and sample rare species. Overall, the results of this first part of the thesis demonstrate the strong potential of species distribution models as a tool for practical applications in conservation biology. The second objective of this thesis is to contribute to improving projections of potential climate change impacts on species distributions, in particular for mountain flora. I develop a dynamic model, MigClim, that allows the implementation of dispersal limitations into classic species distribution models, and present an application of this model to two virtual species. Given that accounting for dispersal limitations requires information on seed dispersal distances, a general methodology to classify species into broad dispersal types is also developed. Finally, the MigClim model is applied to a large number of species in a study area of the western Swiss Alps. Overall, the results indicate that while dispersal limitations can have an important impact on the outcome of future projections of species distributions under climate change scenarios, estimating species threat levels (e.g. species extinction rates) for mountainous areas of limited size (i.e. at the regional scale) can also be successfully achieved when considering dispersal as unlimited (i.e. ignoring dispersal limitations, which is easier from a practical point of view). Finally, I present the largest fine-scale assessment of potential climate change impacts on mountain vegetation carried out to date. This assessment involves vegetation from 12 study areas distributed across all major western and central European mountain ranges. The results highlight that some mountain ranges (the Pyrenees and the Austrian Alps) are expected to be more affected by climate change than others (Norway and the Scottish Highlands). The results I obtain in this study also indicate that the threat levels projected by fine-scale models are less severe than those derived from coarse-scale models. This result suggests that some species could persist in small refugia that are not detected by coarse-scale models.
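
As an illustration of the pseudo-absence idea, a minimal sketch under assumed coordinates: draw background points only beyond a buffer around known presences, so that unreliable "absences" close to occupied sites are avoided. The buffer radius and uniform sampling are illustrative choices, not the thesis's exact method.

```python
# Rejection sampling of pseudo-absence points outside a buffer around presences.
import random

def pseudo_absences(presences, n, buffer_dist, extent=(0.0, 100.0)):
    """Sample n background points at least buffer_dist from every presence."""
    samples = []
    while len(samples) < n:
        x, y = random.uniform(*extent), random.uniform(*extent)
        if all((x - px) ** 2 + (y - py) ** 2 > buffer_dist ** 2
               for px, py in presences):
            samples.append((x, y))
    return samples

presences = [(20.0, 30.0), (55.0, 60.0), (70.0, 25.0)]
print(pseudo_absences(presences, n=5, buffer_dist=10.0))
```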

Relevance: 90.00%

Abstract:

This work presents an analysis of hysteresis and dissipation in quasistatically driven disordered systems. The study is based on the random field Ising model with fluctuationless dynamics. It enables us to sort out, at every step of the transformation, the fraction of the energy input by the driving field that is stored in the system and the fraction that is dissipated. The dissipation is directly related to the occurrence of avalanches, and does not scale with the size of the Barkhausen magnetization jumps. In addition, the change in magnetic field between avalanches provides a measure of the energy barriers between consecutive metastable states.
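
A compact sketch of fluctuationless (zero-temperature) random field Ising dynamics of the kind analysed here, on an illustrative 1D chain: as the external field H is ramped quasistatically, a spin flips when its local field becomes positive, possibly destabilising its neighbours and triggering an avalanche. The chain size, coupling, and disorder strength are assumed values.

```python
# Zero-temperature RFIM on a periodic chain: ramp H, relax unstable spins,
# and report the avalanche sizes triggered along the hysteresis branch.
import random

N, J = 200, 1.0
h_rand = [random.gauss(0.0, 1.5) for _ in range(N)]   # quenched random fields
s = [-1] * N

def local_field(i, H):
    return J * (s[(i - 1) % N] + s[(i + 1) % N]) + h_rand[i] + H

H = -4.0
while H < 4.0:
    H += 0.01
    unstable = [i for i in range(N) if s[i] == -1 and local_field(i, H) > 0]
    avalanche = 0
    while unstable:                     # relax until no spin wants to flip
        i = unstable.pop()
        if s[i] == -1 and local_field(i, H) > 0:
            s[i] = 1
            avalanche += 1
            unstable.extend([(i - 1) % N, (i + 1) % N])  # recheck neighbours
    if avalanche > 5:
        print(f"H = {H:.2f}: avalanche of size {avalanche}")
```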