971 results for Generalized Jkr Model
Abstract:
The Belt and Road Initiative (BRI) is a project launched by the Chinese Government whose main goal is to connect more than 65 countries in Asia, Europe, Africa and Oceania by developing infrastructure and facilities. To support the prevention or mitigation of landslide hazards that may affect the mainland infrastructure of the BRI, a landslide susceptibility analysis of the countries involved has been carried out. Owing to the large study area, the analysis follows a multi-scale approach: susceptibility is first mapped at continental scale and then at national scale. The study area selected for the continental assessment is South Asia, where a pixel-based landslide susceptibility map was produced using the Weight of Evidence method and validated with Receiver Operating Characteristic (ROC) curves. We then selected the regions of western Tajikistan and north-east India for investigation at national scale. Data scarcity is a common condition for many countries involved in the Initiative. Therefore, in addition to the landslide susceptibility assessment of western Tajikistan, conducted using a Generalized Additive Model and validated with ROC curves, we examined, in the same study area, the effect of an incomplete landslide dataset on the predictive capacity of statistical models. The entire PhD research activity was conducted using only open data and open-source software. In this context, an open-source plugin for QGIS was implemented to support the analyses of the last years. The SZ-tool allows the user to carry out the full susceptibility workflow, from data preprocessing through susceptibility mapping to the final classification. All output data of the analyses conducted are freely available and downloadable. This text describes the research activity of the last three years; each chapter reports the text of an article published in an international scientific journal during the PhD.
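The Weight of Evidence method mentioned above scores each evidence layer (e.g., a lithology or slope class) by contrasting its frequency inside and outside mapped landslide pixels. A minimal sketch for one binary evidence layer (the input encoding as 0/1 pixel sequences is our illustrative assumption):

```python
import math

def weights_of_evidence(evidence, landslide):
    """W+, W- and contrast C for one binary evidence layer.

    evidence, landslide: equal-length sequences of 0/1 per pixel
    (hypothetical inputs; in practice these come from raster layers).
    """
    in_l  = [e for e, l in zip(evidence, landslide) if l]       # landslide pixels
    out_l = [e for e, l in zip(evidence, landslide) if not l]   # stable pixels
    p_b_l  = sum(in_l) / len(in_l)     # P(B | landslide)
    p_b_nl = sum(out_l) / len(out_l)   # P(B | no landslide)
    w_plus  = math.log(p_b_l / p_b_nl)
    w_minus = math.log((1 - p_b_l) / (1 - p_b_nl))
    return w_plus, w_minus, w_plus - w_minus   # contrast C = W+ - W-
```

A positive contrast C indicates that the evidence class is positively associated with landslide occurrence; summing the weights of all layers per pixel yields the susceptibility index that the ROC validation then scores.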
Abstract:
We study how the crossover exponent, phi, between directed percolation (DP) and compact directed percolation (CDP) behaves as a function of the diffusion rate in a model that generalizes the contact process. Our conclusions are based on results obtained from perturbative series expansions and numerical simulations, and are consistent with a value phi = 2 for finite diffusion rates and phi = 1 in the limit of infinite diffusion rate.
Abstract:
In the two-Higgs-doublet model (THDM), generalized-CP transformations (phi_i -> X_ij phi*_j, where X is unitary) and unitary Higgs-family transformations (phi_i -> U_ij phi_j) have recently been examined in a series of papers. In terms of gauge-invariant bilinear functions of the Higgs fields phi_i, the Higgs-family transformations and the generalized-CP transformations possess a simple geometric description: in the space of scalar-field bilinears, they correspond to proper and improper rotations, respectively. In this formalism, recent results relating generalized-CP transformations with Higgs-family transformations have a clear geometric interpretation. We review what is known regarding THDM symmetries, and derive new results concerning those symmetries, namely how they can be interpreted geometrically as applications of several CP transformations.
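The geometric description referred to above can be made explicit in the standard bilinear formalism (a textbook construction, not specific to this paper):

```latex
K_0 = \phi_1^\dagger \phi_1 + \phi_2^\dagger \phi_2 , \qquad
K_a = \sum_{i,j=1}^{2} \phi_i^\dagger\, (\sigma_a)_{ij}\, \phi_j , \quad a = 1,2,3 .
```

A Higgs-family transformation phi_i -> U_ij phi_j with U in U(2) then acts on the three-vector K_a as K_a -> R_ab(U) K_b with det R = +1 (a proper rotation), while a generalized-CP transformation phi_i -> X_ij phi*_j reverses orientation, det R = -1 (an improper rotation).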
Abstract:
Composition is a practice of key importance in software engineering. When real-time applications are composed, it is necessary that their timing properties (such as meeting deadlines) are guaranteed. The composition is performed by establishing an interface between the application and the physical platform. Such an interface typically contains information about the amount of computing capacity needed by the application. On multiprocessor platforms, the interface should also present information about the degree of parallelism. Recently there have been quite a few interface proposals. However, they are either too complex to be handled or too pessimistic. In this paper we propose the Generalized Multiprocessor Periodic Resource model (GMPR), which is strictly superior to the MPR model without requiring an overly detailed description. We describe a method to generate the interface from the application specification. All these methods have been implemented in Matlab routines that are publicly available.
Abstract:
General introduction

The Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) epidemic, despite recent encouraging announcements by the World Health Organization (WHO), is still today one of the world's major health care challenges. The present work lies in the field of health care management; in particular, we aim to evaluate behavioural and non-behavioural interventions against HIV/AIDS in developing countries through a deterministic simulation model, in both human and economic terms. We focus on assessing the effectiveness of antiretroviral therapies (ART) in heterosexual populations living in less developed countries where the epidemic has generalized (formerly defined by the WHO as type II countries). The model is calibrated using Botswana as a case study; however, it can be adapted to other countries with similar transmission dynamics.

The first part of this thesis reviews the main mathematical concepts describing the transmission of infectious agents in general, with a focus on human immunodeficiency virus (HIV) transmission. We also review deterministic models assessing HIV interventions, with a focus on models aimed at African countries. This review helps us recognize the need for a generic model and allows us to define a typical structure for such a generic deterministic model.

The second part describes the main feedback loops underlying the dynamics of HIV transmission. These loops represent the foundation of our model. This part also provides a detailed description of the model, including the various infected and non-infected population groups, the types of sexual relationships, the infection matrices, and important factors impacting HIV transmission such as condom use, other sexually transmitted diseases (STDs) and male circumcision. We also included in the model a dynamic life-expectancy calculator which, to our knowledge, is a unique feature allowing more realistic cost-efficiency calculations.
Various intervention scenarios are evaluated using the model, each of them combining ART with other interventions, namely circumcision, campaigns aimed at behavioural change (Abstain, Be faithful, or use Condoms, the so-called ABC campaigns), and treatment of other STDs. A cost-efficiency analysis (CEA) is performed for each scenario; the CEA measures the cost per disability-adjusted life year (DALY) averted. This part also describes the model calibration and validation, including a sensitivity analysis.

The third part reports the results and discusses the model's limitations. In particular, we argue that the combinations of ART with ABC campaigns, and of ART with treatment of other STDs, are the most cost-efficient interventions through 2020. The main model limitations include the difficulty of modelling the complexity of sexual relationships, the omission of international migration, and ignoring variability in infectiousness according to the AIDS stage.

The fourth part reviews the major contributions of the thesis and discusses the model's generalizability and flexibility. Finally, we conclude that by selecting an adequate mix of interventions, policy makers can significantly reduce adult prevalence in Botswana in the coming twenty years, provided the country and its donors can bear the cost involved.

Part I: Context and literature review

In this section, after a brief introduction to the general literature, we focus in section two on the key mathematical concepts describing the transmission of infectious agents in general, with a focus on HIV transmission. Section three provides a description of HIV policy models, with a focus on deterministic models. This leads us in section four to envision the need for a generic deterministic HIV policy model and to briefly describe the structure of such a generic model applicable to countries with a generalized HIV/AIDS epidemic, also defined as pattern II countries by the WHO.
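The deterministic simulation approach described above can be sketched in miniature. The toy compartmental model below (a single well-mixed population with forward-Euler integration; every rate and the `simulate` interface are our illustrative assumptions, not the calibrated Botswana model) shows how ART coverage enters the force of infection:

```python
# Minimal deterministic HIV-transmission sketch: ART coverage scales
# infectiousness down by its efficacy. All parameters are illustrative.
def simulate(years=20.0, dt=0.01, beta=0.35, art_cover=0.0,
             art_efficacy=0.9, mu=0.02, nu=0.1):
    """Return (susceptible, infected, cumulative infections) as fractions."""
    s, i, cum = 0.75, 0.25, 0.0
    for _ in range(int(years / dt)):
        lam = beta * i * (1.0 - art_cover * art_efficacy)  # force of infection
        new_inf = lam * s * dt
        cum += new_inf
        s += mu * dt - new_inf - mu * s * dt  # births replace background deaths
        i += new_inf - (mu + nu) * i * dt     # nu: excess AIDS mortality
    return s, i, cum
```

Comparing `simulate(art_cover=0.0)` against `simulate(art_cover=0.8)` gives the kind of averted-infection count that a DALY-based cost-efficiency analysis would then monetize.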
Abstract:
Models of codon evolution have attracted particular interest because of their unique ability to detect selection forces and their high fit when applied to sequence evolution. We describe here a novel approach for modeling codon evolution, based on the Kronecker product of matrices. The 61 × 61 codon substitution rate matrix is created using the Kronecker product of three 4 × 4 nucleotide substitution matrices, the equilibrium frequencies of codons, and the selection rate parameter. The entries of the nucleotide substitution matrices and the selection rate are treated as parameters of the model, which are optimized by maximum likelihood. Our fully mechanistic model allows the instantaneous substitution matrix between codons to be fully estimated with only 19 parameters instead of 3,721, by exploiting the biological interdependence existing between positions within codons. We illustrate the properties of our model using computer simulations and assess its relevance by comparing the AICc measures of our model and other models of codon evolution on simulations and a large range of empirical data sets. We show that our model fits most biological data better than current codon models. Furthermore, the parameters of our model can be interpreted in a similar way to the exchangeability rates found in empirical codon models.
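To make the dimensionality argument concrete, here is one natural way to assemble a codon-level generator from per-position 4 × 4 matrices: the Kronecker sum Q1⊗I⊗I + I⊗Q2⊗I + I⊗I⊗Q3 (products with identities), which gives zero rates to multi-position changes, followed by deletion of the three stop codons. This is our sketch of the construction, not the paper's exact parameterization; the Jukes-Cantor-like Q and the T,C,A,G ordering are illustrative assumptions.

```python
def kron(a, b):
    """Kronecker product of square matrices stored as lists of lists."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
# Jukes-Cantor-like nucleotide generator (rows sum to zero), illustrative
Q = [[-3.0 if i == j else 1.0 for j in range(4)] for i in range(4)]

# 64x64 generator: one substitution changes exactly one codon position
Q64 = madd(madd(kron(kron(Q, I4), I4),
                kron(kron(I4, Q), I4)),
           kron(kron(I4, I4), Q))

STOPS = {10, 11, 14}  # TAA, TAG, TGA under T,C,A,G ordering
keep = [i for i in range(64) if i not in STOPS]
Q61 = [[Q64[i][j] for j in keep] for i in keep]
```

With three general-time-reversible 4 × 4 matrices plus a selection parameter, the full 61 × 61 matrix is determined by a handful of free parameters, which is the economy the abstract refers to.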
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. in the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors introduced by computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
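For reference, the Baumol (1952) benchmark mentioned above ties velocity to the interest rate through the square-root rule for optimal money holdings; a small illustration (variable names are ours):

```python
import math

def baumol_velocity(income, interest, trip_cost):
    """Baumol (1952) square-root rule: M* = sqrt(b*Y / (2*i)),
    so velocity V = Y / M* = sqrt(2*i*Y / b) rises with the interest rate."""
    money = math.sqrt(trip_cost * income / (2.0 * interest))
    return income / money
```

Because V is proportional to the square root of i, quadrupling the interest rate doubles velocity; this interest-elastic regime is the second subset of the parameter space described in the abstract.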
Abstract:
In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers, for the first time, multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto-optimal set that can include more solutions than it had previously been possible to find in the surveyed publications. To solve the GMM-model, this paper proposes a multiobjective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
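The Pareto-dominance test at the heart of SPEA-style MOEAs can be stated in a few lines; the sketch below (all objectives minimized, objective vectors illustrative) extracts the nondominated set from a candidate population:

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep each point that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

In SPEA proper, the nondominated set is maintained in an external archive whose members also receive strength-based fitness values; the dominance test above is the primitive both steps rely on.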
Abstract:
In this paper we analyze the time of ruin in a risk process with interclaim times being Erlang(n) distributed and a constant dividend barrier. We obtain an integro-differential equation for the Laplace transform of the time of ruin. Explicit solutions for the moments of the time of ruin are presented when the individual claim amounts have a distribution with a rational Laplace transform. Finally, some numerical results and a comparison with the classical risk model, with interclaim times following an exponential distribution, are given.
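The process described above is easy to simulate as a sanity check on the analytic moments: premiums accrue at rate c, claims arrive after Erlang(n) waiting times, and the barrier b caps the surplus (the excess is paid out as dividends), which makes ruin certain. The sketch below assumes exponential individual claims and illustrative parameters; the paper's approach is analytic, and this is only a Monte Carlo counterpart.

```python
import random

def ruin_time(u=2.0, b=4.0, c=1.5, n=2, lam=2.0, claim_mean=1.0, rng=random):
    """One simulated time of ruin under a constant dividend barrier."""
    t, surplus = 0.0, u
    while True:
        # Erlang(n, lam) interclaim time = sum of n exponentials
        w = sum(rng.expovariate(lam) for _ in range(n))
        t += w
        surplus = min(surplus + c * w, b)   # dividends cap the surplus at b
        surplus -= rng.expovariate(1.0 / claim_mean)  # exponential claim
        if surplus < 0:
            return t
```

Averaging `ruin_time()` over many replications estimates the first moment that the integro-differential equation yields in closed form.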
Abstract:
Four classes of variables are apparent in the problem of scour around bridge piers and abutments: the geometry of piers and abutments, stream-flow characteristics, sediment characteristics, and the geometry of the site. The laboratory investigation, from its inception, has been divided into four phases based on these classes. In each phase the variables in three of the classes are held constant and those in the pertinent class are varied. To date, the first three phases have been studied. Typical scour hole patterns related to the geometry of the pier or abutment have been found. For equilibrium conditions of scour with uniform sand, the velocity of flow and the sand size do not appear to have any measurable effect on the depth of scour. This result is especially encouraging in the search for correlation between model and prototype, since it would indicate that, primarily, only the depth of flow might be involved in the scale effect. The technique of model testing is therefore simplified, because the rate of sediment transportation does not need to be scaled. Prior to the establishment of equilibrium conditions, however, depths of scour in excess of those for equilibrium conditions have been found. A concept of active scour as an imbalance between sediment transport capacity and the rate of sediment supply has been used to explain the laboratory observations.
Abstract:
This work is devoted to the development of a numerical method for convection-dominated convection-diffusion problems with reaction terms, covering both non-stiff and stiff chemical reactions. The technique is based on unifying Eulerian-Lagrangian schemes (a particle transport method) within the framework of an operator-splitting method. In the computational domain, a particle set is assigned to solve the convection-reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion and reaction terms are solved separately, assuming that each phenomenon occurs in a sequential fashion. Moreover, adaptivity and projection techniques are used, respectively, to add particles in regions of high gradients (steep fronts) and discontinuities, and to transfer the solution from the particle set onto the grid points. The numerical results show that the particle transport method improves the solutions of CDR problems. Nevertheless, the method is time-consuming compared with other classical techniques, e.g., the method of lines. Despite this drawback, the particle transport method can be used to simulate problems that involve moving steep/smooth fronts, such as the separation of two or more elements in the system.
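The operator-splitting idea described above, solving convection, diffusion and reaction one after another within each time step, can be illustrated on a 1-D problem u_t + a u_x = d u_xx - k u. The grid scheme below (upwind convection, explicit diffusion, exact linear reaction, periodic boundary) is our simple stand-in, not the thesis's particle method; sizes and coefficients are illustrative.

```python
import math

def cdr_step(u, dx, dt, a=1.0, d=0.01, k=0.5):
    """One sequentially split step of u_t + a*u_x = d*u_xx - k*u."""
    n = len(u)
    # 1) convection: first-order upwind (a > 0), periodic boundary
    u = [u[i] - a * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
    # 2) diffusion: explicit central differences, periodic boundary
    u = [u[i] + d * dt / dx**2 * (u[(i + 1) % n] - 2 * u[i] + u[i - 1])
         for i in range(n)]
    # 3) reaction: exact solution of u' = -k*u over dt
    decay = math.exp(-k * dt)
    return [x * decay for x in u]
```

Each sub-step can use the scheme best suited to its operator, which is exactly the appeal of splitting; the convection sub-step is where the thesis instead moves particles along characteristics.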
Abstract:
Abstract taken from the publication.
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgments and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal-detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to address the issue effectively, and our simulation studies examining Type-1 error rates indeed showed the superior performance of mixed-effects model analysis compared with the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend mixed-effects model analysis.
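The gamma coefficient mentioned above is the Goodman-Kruskal rank correlation between confidence judgments and memory outcomes, computed over item pairs (tied pairs count toward neither total). A minimal per-participant version, which is the quantity the by-participant analysis would feed into a t-test:

```python
def gamma(judgments, outcomes):
    """Goodman-Kruskal gamma: (C - D) / (C + D) over item pairs."""
    conc = disc = 0
    for i in range(len(judgments)):
        for j in range(i + 1, len(judgments)):
            dj = judgments[i] - judgments[j]
            do = outcomes[i] - outcomes[j]
            if dj * do > 0:
                conc += 1          # pair ordered the same way: concordant
            elif dj * do < 0:
                disc += 1          # pair ordered oppositely: discordant
    return (conc - disc) / (conc + disc)
```

The critique in the abstract is not about this computation itself but about treating one such value per participant as an independent observation when items are shared across participants.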
Abstract:
We present a large-scale systematics of charge densities, excitation energies and deformation parameters for hundreds of heavy nuclei. The systematics is based on a generalized rotation-vibration model for the quadrupole and octupole modes and takes into account second-order contributions of the deformations as well as the effects of finite diffuseness values for the nuclear densities. We compare our results with the predictions of classical surface vibrations in the hydrodynamical approximation. (C) 2010 Elsevier B.V. All rights reserved.