981 results for Chain Monte-carlo


Relevance:

90.00%

Publisher:

Abstract:

The energy and structure of dilute hard- and soft-sphere Bose gases are systematically studied in the framework of several many-body approaches, such as the variational correlated theory, the Bogoliubov model, and the uniform limit approximation, valid in the weak-interaction regime. When possible, the results are compared with the exact diffusion Monte Carlo ones. Jastrow-type correlations provide a good description of both the hard- and soft-sphere systems if the hypernetted-chain energy functional is freely minimized and the resulting Euler equation is solved. The study of the soft-sphere potentials confirms that a dependence of the energy on the shape of the potential appears at gas parameter values of x~0.001. For quantities other than the energy, such as the radial distribution functions and the momentum distributions, the dependence appears at any value of x. The occurrence of a maximum in the radial distribution function, in the momentum distribution, and in the excitation spectrum is a natural effect of the correlations as x increases. The asymptotic behaviors of the functions characterizing the structure of the systems are also investigated. The uniform limit approach is very easy to implement and provides a good description of the soft-sphere gas. Its reliability improves as the interaction weakens.
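For context, and as standard definitions rather than statements taken from this abstract: a Jastrow-correlated trial state is the pair-product ansatz, and the gas parameter combines the density with the s-wave scattering length,

\[ \Psi_J(\mathbf{r}_1,\dots,\mathbf{r}_N) = \prod_{i<j} f(r_{ij}), \qquad x = \rho a^3 . \]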

Relevance:

90.00%

Publisher:

Abstract:

Gel electrophoresis allows one to separate knotted DNA (nicked circular) of equal length according to the knot type. At low electric fields, complex knots, being more compact, drift faster than simpler knots. Recent experiments have shown that the drift velocity dependence on the knot type is inverted when changing from low to high electric fields. We present a computer simulation on a lattice of a closed, knotted, charged DNA chain drifting in an external electric field in a topologically restricted medium. Using a Monte Carlo algorithm, the dependence of the electrophoretic migration of the DNA molecules on the knot type and on the electric field intensity is investigated. The results are in qualitative and quantitative agreement with electrophoretic experiments done under conditions of low and high electric fields.
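The abstract does not spell out the update rule; purely as an illustrative sketch (not the authors' algorithm), a lattice Metropolis step for a charged chain drifting in a field E along x could look like the following, where the field enters the energy as -qE times the sum of monomer x-coordinates, and a free single-monomer displacement stands in for the connectivity- and topology-preserving moves a real knot simulation would need.

```python
import math, random

def metropolis_field_step(chain, kT=1.0, q=1.0, E=0.1):
    """Illustrative Metropolis update for a charged lattice chain in a uniform
    field E along x.  Energy model: U = -q*E*sum_i x_i, so moving monomer i by
    dx changes the energy by dU = -q*E*dx.  A real knot simulation would use
    connectivity- and topology-preserving lattice moves instead of this free
    single-monomer displacement."""
    i = random.randrange(len(chain))
    x, y, z = chain[i]
    dx, dy, dz = random.choice([(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                (0, -1, 0), (0, 0, 1), (0, 0, -1)])
    dU = -q * E * dx
    if dU <= 0 or random.random() < math.exp(-dU / kT):
        chain[i] = (x + dx, y + dy, z + dz)   # accept the proposed displacement
    return chain

# toy usage: a straight 10-monomer chain, 1000 attempted moves
conf = [(i, 0, 0) for i in range(10)]
for _ in range(1000):
    metropolis_field_step(conf)
```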

Relevance:

90.00%

Publisher:

Abstract:

We report variational calculations, in the hypernetted-chain (HNC)-Fermi-HNC scheme, of one-body density matrices and one-particle momentum distributions for 3He-4He mixtures described by a Jastrow correlated wave function. The 4He condensate fractions and the 3He pole strengths are examined and compared with the available Monte Carlo results. The agreement is found to be very satisfactory. Their density dependence is also studied.
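As a reminder of the textbook relation behind these quantities (not specific to this paper), the condensate fraction is read off from the long-range limit of the one-body density matrix,

\[ n_0 = \lim_{r\to\infty} \frac{\rho_1(r)}{\rho}, \]

so the HNC estimate of \rho_1(r) directly determines the 4He condensate fraction discussed above.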

Relevance:

90.00%

Publisher:

Abstract:

The magnetic structure of the edge-sharing cuprate compound Li2CuO2 has been investigated with highly correlated ab initio electronic structure calculations. The first- and second-neighbor in-chain magnetic interactions are calculated to be 142 and -22 K, respectively. The ratio between the two parameters is smaller than suggested previously in the literature. The interchain interactions are antiferromagnetic in nature and of the order of a few K only. Monte Carlo simulations using the ab initio parameters to define the spin model Hamiltonian result in a Néel temperature in good agreement with experiment. Spin population analysis situates the magnetic moment on the copper and oxygen ions between the completely localized picture derived from experiment and the more delocalized picture based on local-density calculations.
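For reference, the quoted couplings enter a Heisenberg spin Hamiltonian of the generic form below; the overall sign convention (which sign is ferromagnetic) is an assumption here, since the abstract does not state it:

\[ H = -J_1 \sum_{\langle i,j\rangle_1} \mathbf{S}_i\cdot\mathbf{S}_j \; - \; J_2 \sum_{\langle i,j\rangle_2} \mathbf{S}_i\cdot\mathbf{S}_j \; - \; J_\perp \sum_{\langle i,j\rangle_\perp} \mathbf{S}_i\cdot\mathbf{S}_j, \qquad J_1 = 142\ \mathrm{K},\ J_2 = -22\ \mathrm{K},\ |J_\perp| \sim \text{a few K}. \]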

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: Anal condylomata acuminata (ACA) are caused by human papillomavirus (HPV) infection, which is transmitted by close physical and sexual contact. Surgical treatment of ACA has an overall success rate of 71% to 93%, with a recurrence rate between 4% and 29%. The aim of this study was to assess a possible association between HPV type and ACA recurrence after surgical treatment. METHODS: We performed a retrospective analysis of 140 consecutive patients who underwent surgery for ACA from January 1990 to December 2005 at our tertiary University Hospital. We confirmed ACA by histopathological analysis and determined the HPV type using the polymerase chain reaction. Patients gave consent for HPV testing and completed a questionnaire. We examined the association between ACA, HPV type, and HIV disease. We used chi-square, Monte Carlo simulation, and Wilcoxon tests for statistical analysis. RESULTS: Among the 140 patients (123 M/17 F), HPV 6 and 11 were the most frequently encountered viruses (51% and 28%, respectively). Recurrence occurred in 35 (25%) patients. HPV 11 was present in 19 (41%) of these recurrences, which is statistically significant compared with the other HPV types. There was no significant difference between recurrence rates in the 33 (24%) HIV-positive and the HIV-negative patients. CONCLUSIONS: HPV 11 is associated with a higher recurrence rate of ACA. This makes routine clinical HPV typing questionable. Follow-up is required to identify recurrence and to treat it early, especially if HPV 11 has been identified.
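The abstract does not detail the Monte Carlo test; one standard way such a test is carried out (shown here only as a generic sketch with invented toy data, not the authors' analysis) is to compare the observed chi-square statistic of a group-by-outcome table against its distribution under random permutation of the outcome labels.

```python
import numpy as np

def chi2_stat(group, outcome, n_groups, n_outcomes):
    # Pearson chi-square statistic of the group-by-outcome contingency table.
    table = np.zeros((n_groups, n_outcomes))
    np.add.at(table, (group, outcome), 1)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def mc_pvalue(group, outcome, n_sim=10_000, seed=0):
    """Monte Carlo p-value: permute outcomes to simulate the null hypothesis of
    no association between group (e.g. HPV type) and outcome (e.g. recurrence)."""
    rng = np.random.default_rng(seed)
    g, y = np.asarray(group), np.asarray(outcome)
    k, m = g.max() + 1, y.max() + 1
    observed = chi2_stat(g, y, k, m)
    hits = sum(chi2_stat(g, rng.permutation(y), k, m) >= observed
               for _ in range(n_sim))
    return (hits + 1) / (n_sim + 1)

# hypothetical toy data (not the study's counts): group 0/1, outcome 1 = recurrence
group   = np.array([0] * 60 + [1] * 40)
outcome = np.array([0] * 50 + [1] * 10 + [0] * 21 + [1] * 19)
print(mc_pvalue(group, outcome))
```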

Relevance:

90.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the ensemble of approximate responses and predicts the expected responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and increases the accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty, and the individual correction of each proxy response leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up, and we demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a saline intrusion problem in a coastal aquifer.
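As a schematic illustration of the two-stage idea described above (a generic delayed-acceptance sketch, not the thesis code), a proposal is first screened with the cheap proxy-plus-error-model posterior, and the expensive flow model is evaluated only if that first stage accepts.

```python
import numpy as np

rng = np.random.default_rng(42)

def two_stage_mcmc_step(theta, log_post_proxy, log_post_exact, proposal_std=0.1):
    """One two-stage Metropolis step with a symmetric Gaussian proposal.
    `log_post_proxy` is the cheap approximation (e.g. proxy simulator corrected
    by an error model); `log_post_exact` is the expensive exact-model posterior,
    evaluated only when the first stage accepts (in practice the exact value at
    the current state would be cached)."""
    proposal = theta + proposal_std * rng.standard_normal(theta.shape)
    # Stage 1: screen with the cheap posterior.
    log_a1 = log_post_proxy(proposal) - log_post_proxy(theta)
    if np.log(rng.uniform()) >= log_a1:
        return theta, False                      # rejected cheaply, no exact run
    # Stage 2: correct with the exact posterior (one expensive evaluation).
    log_a2 = (log_post_exact(proposal) - log_post_exact(theta)) - log_a1
    if np.log(rng.uniform()) < log_a2:
        return proposal, True
    return theta, True                           # exact model was run but rejected

# toy usage with hypothetical 1-D posteriors standing in for flow models
log_exact = lambda t: -0.5 * (t[0] - 1.0) ** 2
log_proxy = lambda t: -0.5 * (t[0] - 0.9) ** 2   # slightly biased proxy
theta = np.array([0.0])
chain = [theta]
for _ in range(1000):
    theta, _ = two_stage_mcmc_step(theta, log_proxy, log_exact)
    chain.append(theta)
```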

Relevance:

90.00%

Publisher:

Abstract:

This master's thesis presents the steps required to set up the Serpent-ARES calculation chain implemented in connection with the work. Generating the homogenized group-constant libraries needed by the ARES reactor core simulator with Serpent makes the calculation chain independent of the possible error sources of the other reactor core calculation chains in use. By using a reactor physics code based on the Monte Carlo calculation method, the group-constant libraries are produced with a new approach, giving the regulatory authority a calculation chain for computing reactor safety margins that is independent of the methods used by the power companies. The functioning of the calculation chain, the cross-section library generation routines, and the parameter fits produced in this work has been verified by computing control rod worths and shutdown margins for the initial core loading of the Olkiluoto 3 reactor under different conditions. The method is found to work within the validity range of the parameters, and the computed results are of the correct order of magnitude. The accuracy and validity range of the parameter model should be developed further before the calculation chain can be used to verify the correctness of results computed with other methods.

Relevance:

90.00%

Publisher:

Abstract:

Digital business ecosystems (DBE) are becoming an increasingly popular concept for modelling and building distributed systems in heterogeneous, decentralized and open environments. Information and communication technology (ICT) enabled business solutions have created an opportunity for automated business relations and transactions. The deployment of ICT in business-to-business (B2B) integration seeks to improve competitiveness by establishing real-time information and offering better information visibility to business ecosystem actors. The products, components and raw material flows in supply chains are traditionally studied in logistics research. In this study, we expand the research to cover the processes parallel to the service and information flows as information logistics integration. In this thesis, we show how better integration and automation of information flows enhance the speed of processes and thus provide cost savings and other benefits for organizations. Investments in DBE are intended to add value through business automation and are key decisions in building up information logistics integration. Business solutions that build on automation are important sources of value in networks that promote and support business relations and transactions. Value is created through improved productivity and effectiveness when new, more efficient collaboration methods are discovered and integrated into DBE. Organizations, business networks and collaborations, even with competitors, form DBE, in which information logistics integration has a significant role as a value driver. However, traditional economic and computing theories do not treat digital business ecosystems as a separate form of organization, nor do they provide conceptual frameworks for exploring digital business ecosystems as value drivers; combined internal management and external coordination mechanisms for information logistics integration are not current practice in a company's strategic process. In this thesis, we have developed and tested a framework for exploring digital business ecosystems and a coordination model for digital business ecosystem integration; moreover, we have analysed the value of information logistics integration. The research is based on a case study and on mixed methods, in which we use the Delphi method and Internet-based tools for idea generation and development. We conducted many interviews with key experts, which we recorded, transcribed and coded to find success factors. Quantitative analyses were based on a Monte Carlo simulation, which sought cost savings, and on Real Option Valuation, which sought an optimal investment program at the ecosystem level. This study provides valuable knowledge regarding information logistics integration by utilizing a suitable business process information model for collaboration. The information model is based on business process scenarios and on detailed transactions for the mapping and automation of product, service and information flows. The research results illustrate the current gap in understanding information logistics integration in a digital business ecosystem. Based on the success factors, we were able to illustrate how specific coordination mechanisms related to network management and orchestration could be designed. We also pointed out the potential of information logistics integration in value creation. With the help of global standardization experts, we utilized the design of the core information model for B2B integration.
We built this quantitative analysis by using the Monte Carlo-based simulation model and the Real Option Valuation model. This research covers relevant new research disciplines, such as information logistics integration and digital business ecosystems, in which the current literature needs to be improved. The research was executed by high-level experts and managers responsible for global business network B2B integration. However, it was dominated by one industry domain, and therefore a more comprehensive exploration should be undertaken to cover a larger population of business sectors. Based on this research, a new quantitative survey could provide new possibilities to examine information logistics integration in digital business ecosystems. The value activities indicate that further studies should continue, especially with regard to collaboration issues in integration, focusing on a user-centric approach. We should better understand how real-time information supports customer value creation by embedding the information into the lifetime value of products and services. The aim of this research was to build competitive advantage through B2B integration to support a real-time economy. For practitioners, this research created several tools and concepts to improve value activities, information logistics integration design, and management and orchestration models. Based on the results, the companies were able to better understand the formulation of the digital business ecosystem and the importance of joint efforts in collaboration. However, the challenge of incorporating this new knowledge into strategic processes in a multi-stakeholder environment remains. This challenge has been noted, and new projects have been established in pursuit of a real-time economy.
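The abstract names Monte Carlo simulation of cost savings and Real Option Valuation but gives no formulas; as a generic, hypothetical sketch of the Monte Carlo part only (every parameter name and distribution below is invented for illustration), annual savings from B2B integration could be sampled and summarized as follows.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_annual_savings(n_sim=100_000):
    """Hypothetical Monte Carlo model of yearly cost savings from B2B
    integration: savings = transactions * automated share * saving per
    transaction - operating cost, with invented distributions."""
    transactions = rng.normal(2_000_000, 300_000, n_sim)       # volume per year
    automated_share = rng.triangular(0.4, 0.6, 0.8, n_sim)     # share automated
    saving_per_txn = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=n_sim)
    operating_cost = rng.normal(800_000, 100_000, n_sim)       # per year
    return transactions * automated_share * saving_per_txn - operating_cost

savings = simulate_annual_savings()
print(f"mean {savings.mean():,.0f}, 5th percentile {np.percentile(savings, 5):,.0f}")
```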

Relevance:

90.00%

Publisher:

Abstract:

Investments have always been considered an essential backbone, a so-called 'locomotive', of competitive economies. However, in many countries the state has been put under tight budget constraints for investments in capital-intensive projects. In response, cooperation between the public and private sectors has grown, based on public-private partnership mechanisms. Promoting favorable arrangements for collaboration between the public and private sectors in the provision of policies, services, and infrastructure in Russia can help address problems of dry port development that neither municipalities nor the private sector can solve alone. Stimulating public-private collaboration is especially significant under exposure to externalities that affect the magnitude of the risks during all phases of project realization. In these circumstances, project risk is increasingly a subject of joint research and of risk management practice, which is viewed as a key approach for acting on global and project-specific sources of uncertainty. Meanwhile, relatively little progress has been made on including resilience aspects in the planning of dry port construction, which would instruct the capacity planner on how to mitigate disruptions that may lead to millions of dollars in losses when future cash flows deviate from those expected for the project. Current experience shows that the existing methodological base has developed in a fragmentary way within the separate steps of supply chain risk management (SCRM): the risk identification, risk evaluation, risk mitigation, and risk monitoring and control phases. The lack of a systematic approach hinders risk management in dry port implementation. Therefore, managing the various risks during the investment phases of dry port projects still presents a considerable challenge from both practical and theoretical points of view. In this regard, the present research is a logical continuation of fundamental work in financial models and theories (e.g., the capital asset pricing model and real option theory) and complements portfolio theory. The goal of the study is to design methods and models that facilitate dry port implementation through the mechanism of public-private partnership on the national market, which implies mitigating, first and foremost, the shortage of investment and the consequences of risks. The research problem was formulated on the basis of identified contradictions, which arise from the trade-off between the opportunities investors can gain from developing the terminal business in Russia (i.e., dry port implementation) and the associated risks. As a rule, the higher the investment risk, the greater the expected return should be; however, investors differ in their risk tolerance, so it is advisable to find an optimal investment. In this study, the optimum relates to the search for an efficient portfolio that satisfies the investor, depending on their degree of risk aversion. There are many theories and methods in finance concerning investment choices; nevertheless, the appropriateness and effectiveness of a particular method should be judged with allowance for the specifics of the investment projects.
For example, investments in dry ports imply not only a lump sum of financial inflows but also long-term payback periods. As a result, the capital intensity and longevity of their construction create the need for investors to ensure the return on investment (profitability) along with a rapid return on investment (liquidity), while the stochastic nature of the project environment is hardly captured by a formula-based approach. The current theoretical base for the economic appraisal of dry port projects generally regards net present value (NPV) as a technique superior to other decision-making criteria. For example, portfolio theory, which considers investors' differing risk preferences and utility structures, defines net present value as a better criterion of project appraisal than the discounted payback period (DPP); in business practice, however, the DPP is more popular. Since the NPV rests on assumptions of certainty about the project's life, it cannot by itself accurately determine whether a project should be approved in an environment that is not free of uncertainties. To reflect the part of the project's useful life that is exposed to risks from changes in political, operational, and financial factors, the second capital budgeting criterion, the discounted payback period, is profoundly important, particularly in the Russian environment. These statements reflect contradictions that exist between the theory and the practice of the applied science. It would therefore be desirable to relax the assumptions of portfolio theory and regard the DPP as a no less relevant approach for assessing investments and measuring risk. At the same time, the rational use of both project performance criteria depends on the methods and models with which these appraisals are calculated in feasibility studies. Deterministic methods cannot ensure the required precision of the results, whereas stochastic models guarantee a sufficient level of accuracy and reliability, provided that the risks are properly identified, evaluated, and mitigated. Otherwise, the project performance indicators may not be confirmed during project realization. For instance, economic and political instability can undo hard-earned gains, creating a need to attract additional financing for the project. Sources of alternative investment, as well as supporting mitigation strategies, can be studied during the initial phases of project development. During this period, the effectiveness of the investment undertakings can also be improved by including various investors, e.g., Russian Railways' enterprises and other private companies, in the dry port projects. However, evaluating the effectiveness of different investors' participation in the project lacks methods and models that would permit a feasibility study foreseeing the quantitative characteristics of the risks and of mitigation strategies that match the investors' risk tolerance. For this reason, the research proposes combining the Monte Carlo method, the discounted cash flow technique, real option theory, and portfolio theory via a system dynamics simulation approach.
This methodology allows a comprehensive risk management process for dry port development, covering the risk identification, risk evaluation, risk mitigation, and risk monitoring and control phases. The designed system dynamics model can be recommended to decision-makers on dry port projects financed via public-private partnership. It permits investors to appraise decisions based on the net present value and the discounted payback period treated as random variables, depending on different risk factors, e.g., revenue risks, land acquisition risks, traffic volume risks, construction hazards, and political risks. Here, the statistical mean expresses the expected values of the DPP and NPV, the standard deviation is proposed as a characteristic of the risks, and the elasticity coefficient is applied to rank them. Additionally, the risk of failure of project investments and the guaranteed recoupment of capital investment can be considered with the help of the model. On the whole, applying these modern simulation methods creates preconditions for controlling the process of dry port development, i.e., making managerial changes and identifying the most stable parameters that contribute to optimal alternative scenarios of project realization in an uncertain environment. The system dynamics model makes it possible to analyze the interactions within the complex risk management process of dry port development and to make proposals for improving the effectiveness of the investments by evaluating different risk management strategies. To compare and rank these alternatives in order of preference to the investor, the proposed investment efficiency indicators, namely the NPV, the DPP, and the coefficient of variation, can be used. Thus, rational investors, who are averse to taking on increased risk unless compensated by a commensurate increase in the expected utility of the risky prospect of dry port development, can be guided by the derived marginal utility of investments, computed from the results of the system dynamics model. In conclusion, the outlined theoretical and practical implications for the management of risks, which are key characteristics of public-private partnerships, can help analysts and planning managers in budget decision-making, substantially alleviating the effect of various risks and avoiding unnecessary cost overruns in dry port projects.
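The abstract describes the NPV and DPP as random variables summarized by their mean, standard deviation, and coefficient of variation; a minimal Monte Carlo sketch of that idea (with invented cash-flow distributions, not the thesis' system dynamics model) could look as follows.

```python
import numpy as np

rng = np.random.default_rng(7)

def npv_and_dpp(cash_flows, rate):
    """NPV and discounted payback period (in years) of one cash-flow path;
    cash_flows[0] is the (negative) initial investment at t = 0."""
    t = np.arange(len(cash_flows))
    cumulative = np.cumsum(cash_flows / (1 + rate) ** t)
    npv = cumulative[-1]
    paid_back = np.nonzero(cumulative >= 0)[0]
    dpp = paid_back[0] if paid_back.size else np.inf   # inf: never recouped
    return npv, dpp

def simulate(n_sim=20_000, horizon=15, rate=0.10):
    npvs, dpps = [], []
    for _ in range(n_sim):
        invest = -rng.normal(100.0, 10.0)            # initial outlay (invented units)
        revenues = rng.normal(18.0, 6.0, horizon)    # risky yearly net inflows
        npv, dpp = npv_and_dpp(np.concatenate(([invest], revenues)), rate)
        npvs.append(npv)
        dpps.append(dpp)
    npvs = np.array(npvs)
    # mean NPV, std NPV (risk), coefficient of variation, median DPP
    return npvs.mean(), npvs.std(), npvs.std() / abs(npvs.mean()), np.median(dpps)

print(simulate())
```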

Relevance:

90.00%

Publisher:

Abstract:

Since its discovery, chaos has been a very interesting and challenging topic of research. Many great minds spent their entire lives trying to give some rules to it. Nowadays, thanks to the research of the last century and the advent of computers, it is possible to predict chaotic phenomena of nature for a certain limited amount of time. The aim of this study is to present a recently discovered method for parameter estimation of chaotic dynamical system models via the correlation integral likelihood, to give some hints for a more optimized use of it, and to outline a possible application to industry. The main part of our study concerned two chaotic attractors whose general behaviour differs, in order to capture possible differences in the results. In the various simulations that we performed, the initial conditions were varied quite exhaustively. The results obtained show that, under certain conditions, this method works very well in all the cases. In particular, it turned out that the most important aspect is to be very careful when creating the training set and the empirical likelihood, since a lack of information in this part of the procedure leads to low-quality results.
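The correlation integral underlying this likelihood is not defined in the abstract; a generic sketch of the standard correlation sum over a trajectory (the quantity usually meant by that term, stated here as an assumption) is the following.

```python
import numpy as np

def correlation_sum(trajectory, radii):
    """Fraction of distinct point pairs of a trajectory lying within each
    radius r: C(r) = 2/(N(N-1)) * #{(i, j), i < j : |x_i - x_j| < r}."""
    x = np.asarray(trajectory)
    if x.ndim == 1:
        x = x[:, None]
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    pair_d = d[np.triu_indices(len(x), k=1)]
    return np.array([(pair_d < r).mean() for r in radii])

# toy usage on a logistic-map trajectory (a simple chaotic system)
x, traj = 0.3, []
for _ in range(2000):
    x = 3.9 * x * (1 - x)
    traj.append(x)
radii = np.logspace(-3, 0, 10)
print(correlation_sum(traj[500:], radii))   # drop the transient, then C(r)
```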

Relevance:

90.00%

Publisher:

Abstract:

The maintenance and comprehension of object-oriented (OO) programs are becoming increasingly costly. Dependency analysis can be one way to ease these engineering tasks. However, analysing dependencies is both an important and a difficult task. We propose an approach for studying the internal dependencies of OO programs in a probabilistic setting, where the program inputs can be modelled as a random vector or as a Markov chain. In this setting, coupling metrics become random variables whose probability distributions can be studied using Monte Carlo simulation techniques. The resulting distributions provide an entry point for understanding the internal dependencies among program elements, as well as their general behaviour. This work applies when the values taken by the metric depend on the program inputs and these inputs are not fixed a priori. We illustrate the approach with two case studies.
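As a toy illustration of the setting described above (not the paper's case studies; every name below is hypothetical), one can sample program inputs from a Markov chain over usage states, evaluate a coupling-like metric per run, and inspect its empirical distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Markov chain over three usage states driving the program input.
STATES = ["browse", "search", "checkout"]
P = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.4, 0.2],
              [0.2, 0.3, 0.5]])

def sample_input_sequence(length=50, start=0):
    """Draw a sequence of usage states from the Markov chain."""
    seq, s = [], start
    for _ in range(length):
        s = rng.choice(len(STATES), p=P[s])
        seq.append(STATES[s])
    return seq

def coupling_metric(seq):
    """Stand-in for an input-dependent coupling metric: here, the number of
    distinct 'module interactions' triggered by the run (purely illustrative)."""
    calls = {("search", "browse"), ("search", "checkout"), ("checkout", "browse")}
    triggered = {pair for pair in zip(seq, seq[1:]) if pair in calls}
    return len(triggered)

# Monte Carlo: the metric becomes a random variable; estimate its distribution.
samples = np.array([coupling_metric(sample_input_sequence()) for _ in range(5000)])
values, counts = np.unique(samples, return_counts=True)
print(dict(zip(values.tolist(), (counts / counts.sum()).round(3).tolist())))
```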

Relevance:

90.00%

Publisher:

Abstract:

This thesis presents methods for handling count data in particular and discrete data in general. It is part of an NSERC strategic project, named CC-Bio, whose objective is to assess the impact of climate change on the distribution of plant and animal species. After a brief introduction to the notions of biogeography and to generalized linear mixed models in Chapters 1 and 2 respectively, the thesis is organized around three main ideas. First, in Chapter 3 we introduce a new family of distributions whose components have Poisson or Skellam marginal distributions. This new specification makes it possible to incorporate relevant information on the nature of the correlations between all the components, and we present some properties of the distribution. Unlike the multivariate Poisson distribution that it generalizes, it can handle variables with positive and/or negative correlations. A simulation illustrates the estimation methods in the bivariate case. The results obtained with Bayesian Markov chain Monte Carlo (MCMC) methods indicate a fairly small relative bias, below 5%, for the regression coefficients of the means, whereas the bias of the covariance term appears somewhat more volatile. Second, Chapter 4 presents an extension of multivariate Poisson regression with gamma-distributed random effects. Since species-abundance data exhibit strong overdispersion, which would make the usual estimators and standard errors misleading, we favour an approach based on Monte Carlo integration through importance sampling. The approach remains the same as in the previous chapter: the idea is to simulate independent latent variables so as to recover a conventional generalized linear mixed model (GLMM) with gamma random effects. Even though the assumption of a priori knowledge of the dispersion parameters may seem strong, a sensitivity analysis based on goodness of fit demonstrates the robustness of the method. Third, in the final chapter, we focus on defining and constructing a measure of concordance, and hence of correlation, for zero-inflated data through Gaussian copula modelling. Unlike Kendall's tau, whose values lie in an interval whose bounds vary with the frequency of ties between pairs, this measure has the advantage of taking values on (-1,1). Originally introduced to model correlations between continuous variables, its extension to the discrete case entails certain restrictions: the new measure can be interpreted as the correlation between the continuous random variables whose discretization constitutes our non-negative discrete observations. Two estimation methods for the zero-inflated models are presented, in the frequentist and Bayesian settings, based respectively on maximum likelihood and on Gauss-Hermite integration. Finally, a simulation study shows the robustness and the limits of our approach.
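The final chapter's concordance measure is built on a Gaussian copula for discrete data; as a generic sketch of the underlying construction (correlated latent normals pushed through discrete marginal quantile functions, not the thesis' estimator), correlated counts can be simulated like this.

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(11)

def gaussian_copula_counts(n, lam1=2.0, lam2=5.0, rho=-0.6):
    """Simulate pairs of counts with Poisson margins linked by a Gaussian
    copula with latent correlation rho; a negative rho yields negatively
    dependent counts, which a common-shock multivariate Poisson cannot give."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    u = norm.cdf(z)                       # uniform margins from the latent normals
    y1 = poisson.ppf(u[:, 0], lam1)       # discretize through Poisson quantiles
    y2 = poisson.ppf(u[:, 1], lam2)
    return y1.astype(int), y2.astype(int)

y1, y2 = gaussian_copula_counts(10_000)
print(np.corrcoef(y1, y2)[0, 1])          # negative sample correlation of the counts
```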

Relevance:

90.00%

Publisher:

Abstract:

This thesis deals with critical phenomena occurring in two-dimensional lattice models. The results are the subject of two articles: the first reports the measurement of critical exponents describing geometric objects of the lattice, and the second the construction of idempotents projecting onto indecomposable modules of the Temperley-Lieb algebra for the XXZ spin chain. The first article presents Monte Carlo numerical experiments performed for a family of loop models in the dilute phase. Called dilute loop models, DLM, they are inspired by the O(n) model introduced by Nienhuis (1990). The family is labelled by coprime integers p and p' as well as by an anisotropy parameter. In the thermodynamic limit, the model DLM(p,p') is expected to be described by a logarithmic conformal field theory of central charge c(\kappa)=13-6(\kappa+1/\kappa), where \kappa=p/p' is related to the loop-gas fugacity \beta=-2\cos\pi/\kappa, for any value of the anisotropy parameter. The measurements concern the critical exponents governing the scaling of the following geometric objects: the interface, the external perimeter, and the red bonds. The Metropolis-Hastings algorithm employed, for which we introduced numerous improvements specific to dilute models, is described in detail. A rigorous statistical treatment of the data yields extrapolations that coincide with the theoretical predictions to three or four significant digits, despite extrapolation curves with steep slopes. The second article concerns the decomposition of the Hilbert space \otimes^nC^2 on which the XXZ chain of n spins 1/2 acts. The version studied here (Pasquier and Saleur (1990)) is described by a Hamiltonian H_{XXZ}(q) depending on a parameter q\in C^\times and expressed as a sum of elements of the Temperley-Lieb algebra TL_n(q). As for the dilute models, the spectrum of the continuum limit of H_{XXZ}(q) appears to be related to conformal field theories, with the parameter q determining the central charge. The primitive idempotents of End_{TL_n}\otimes^nC^2 are obtained, for all q, in terms of elements of the quantum algebra U_qsl_2 (or of an extension) via quantum Schur-Weyl duality. These idempotents allow the explicit construction of the indecomposable TL_n-modules of \otimes^nC^2. They are all irreducible except when q is a root of unity; that case is treated separately from the generic-q case. The problems solved in these articles require a wide variety of results and tools, and the thesis therefore includes several preparatory chapters. Its structure is as follows. The first chapter introduces concepts common to the two articles, in particular a description of critical phenomena and of conformal field theory. The second chapter briefly covers logarithmic fields, Schramm-Loewner evolution, and the Metropolis-Hastings algorithm; these topics are needed for reading the article "Geometric Exponents of Dilute Loop Models" in Chapter 3. The fourth chapter presents the algebraic tools used in the second article, "The idempotents of the TL_n-module \otimes^nC^2 in terms of elements of U_qsl_2", which constitutes Chapter 5. The thesis concludes with a summary of the important results and proposed avenues of further research.
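Using only the relations quoted in the abstract, the central charge and loop-gas fugacity of DLM(p,p') can be evaluated directly; a small sketch:

```python
import math

def dlm_parameters(p: int, p_prime: int):
    """Central charge and loop-gas fugacity for DLM(p, p'), using the relations
    quoted above: kappa = p/p', c = 13 - 6*(kappa + 1/kappa),
    beta = -2*cos(pi/kappa)."""
    if math.gcd(p, p_prime) != 1:
        raise ValueError("p and p' must be coprime")
    kappa = p / p_prime
    c = 13 - 6 * (kappa + 1 / kappa)
    beta = -2 * math.cos(math.pi / kappa)
    return c, beta

print(dlm_parameters(3, 4))   # approximately (0.5, 1.0): c = 1/2 at fugacity beta = 1
```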
