885 results for empirical shell model
                                
Abstract:
This empirical work applies a duration model to the study of the factors determining the privatization of local water services. I assess how the factors determining the privatization decision evolve over time. A sample of 133 Spanish municipalities over the six terms of office that took place during the 1980-2002 period is analyzed. A dynamic neighboring effect is hypothesized and successfully tested: in a first stage, private water supply firms may try to expand into regions where no service has yet been privatized, in order to spread across the region after becoming established, thanks to their scale advantages. Other factors influencing the privatization decision evolve over the two decades under study, from the priority of fixing old infrastructures to the concern about service efficiency. Some complementary results regarding political and budgetary factors are also obtained.
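As a rough illustration of the kind of duration analysis described above (a generic discrete-time hazard sketch with invented variable names and simulated data, not the paper's actual specification or dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy municipality-term panel: one row per municipality and term of office,
# until privatization occurs (the event) or observation ends (censoring).
rng = np.random.default_rng(1)
rows = []
for muni in range(133):                       # 133 municipalities, as in the sample
    neigh_priv = 0.0                          # share of neighbours already privatized
    for term in range(1, 7):                  # six terms of office, 1980-2002
        fiscal = rng.normal()                 # stand-in budgetary covariate
        eta = -2.5 + 0.8 * neigh_priv + 0.4 * fiscal + 0.1 * term
        event = rng.random() < 1.0 / (1.0 + np.exp(-eta))
        rows.append((muni, term, neigh_priv, fiscal, int(event)))
        if event:
            break
        neigh_priv += 0.3 * rng.random()      # dynamic neighbouring effect grows over time

panel = pd.DataFrame(rows, columns=["muni", "term", "neigh_priv", "fiscal", "priv"])
X = sm.add_constant(panel[["neigh_priv", "fiscal", "term"]])
res = sm.Logit(panel["priv"], X).fit(disp=0)  # discrete-time hazard of privatization
print(res.params)
```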
                                
Abstract:
Isotopic and isotonic chains of superheavy nuclei are analyzed to search for spherical double shell closures beyond Z=82 and N=126 within the new effective field theory model of Furnstahl, Serot, and Tang for the relativistic nuclear many-body problem. We take into account several indicators to identify the occurrence of possible shell closures, such as two-nucleon separation energies, two-nucleon shell gaps, average pairing gaps, and the shell correction energy. The effective Lagrangian model predicts (Z=120, N=172) and (Z=120, N=258) as spherical doubly magic superheavy nuclei, whereas (Z=114, N=184) shows some magic character depending on the parameter set. The magicity of a particular neutron (proton) number in the analyzed mass region is found to depend on the number of protons (neutrons) present in the nucleus.
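For reference, the first two indicators mentioned are conventionally defined from the binding energy B(Z,N) as follows (standard definitions, stated for orientation rather than quoted from the paper):

S_{2n}(Z,N) = B(Z,N) - B(Z,N-2),   S_{2p}(Z,N) = B(Z,N) - B(Z-2,N),
\delta_{2n}(Z,N) = S_{2n}(Z,N) - S_{2n}(Z,N+2),   \delta_{2p}(Z,N) = S_{2p}(Z,N) - S_{2p}(Z+2,N),

where a pronounced peak in \delta_{2n} (\delta_{2p}) along an isotopic (isotonic) chain signals a neutron (proton) shell closure.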
                                
Abstract:
An optical-model potential for systematic calculations of elastic scattering of electrons and positrons by atoms and positive ions is proposed. The electrostatic interaction is determined from the Dirac-Hartree-Fock self-consistent atomic electron density. In the case of electron projectiles, the exchange interaction is described by means of the local approximation of Furness and McCarthy. The correlation-polarization potential is obtained by combining the correlation potential derived from the local-density approximation with a long-range polarization interaction, which is represented by means of a Buckingham potential with an empirical energy-dependent cutoff parameter. The absorption potential is obtained from the local-density approximation, using the Born-Ochkur approximation and the Lindhard dielectric function to describe binary collisions with a free-electron gas. The strength of the absorption potential is adjusted by means of an empirical parameter, which has been determined by fitting available absolute elastic differential cross-section data for noble gases and mercury. The Dirac partial-wave analysis with this optical-model potential provides a realistic description of elastic scattering of electrons and positrons with energies in the range from ~100 eV up to ~5 keV. At higher energies, correlation-polarization and absorption corrections are small, and the usual static-exchange approximation is sufficiently accurate for most practical purposes.
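For orientation, the Buckingham-type long-range polarization interaction mentioned above is commonly written as (a standard form with dipole polarizability \alpha_d and cutoff parameter d; the paper's energy-dependent parameterization of d is not reproduced here):

V_{pol}(r) = - \frac{\alpha_d e^2}{2 (r^2 + d^2)^2},

which reduces to the asymptotic dipole polarization potential -\alpha_d e^2 / (2 r^4) at large r while remaining finite at the origin.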
                                
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting because of their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the above assumptions. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all of them to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of each data set; the experiments show that in none of the real data sets is holding all three assumptions realistic, so simple models that hold them can be misleading and yield inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. The experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets, and several further experiments indicate that the proposed general model is biologically plausible.
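The generalized model is described as using a Kronecker product to combine per-position nucleotide matrices into a codon-level matrix. Below is a purely illustrative sketch of that idea (a toy HKY-like matrix and an independent-positions combination; the actual KCM-based models add further structure, such as selection and position-specific mutation terms, that is not reproduced here):

```python
import numpy as np

def nucleotide_rate_matrix(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    """Toy HKY-like 4x4 generator over (A, C, G, T); illustrative values only."""
    A, C, G, T = 0, 1, 2, 3
    Q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i != j:
                is_transition = {i, j} in ({A, G}, {C, T})
                Q[i, j] = (kappa if is_transition else 1.0) * pi[j]
    np.fill_diagonal(Q, -Q.sum(axis=1))  # each row of a generator sums to zero
    return Q

# One 4x4 generator per codon position, combined as a Kronecker sum.
# This encodes independent substitution at the three positions; because
# exp(t*Q_codon) = exp(t*Q1) (x) exp(t*Q2) (x) exp(t*Q3), codons differing at
# two or three positions still receive non-zero transition probabilities
# over a branch.
Q1 = Q2 = Q3 = nucleotide_rate_matrix()
I = np.eye(4)
Q_codon = (np.kron(np.kron(Q1, I), I)
           + np.kron(np.kron(I, Q2), I)
           + np.kron(np.kron(I, I), Q3))
print(Q_codon.shape)  # (64, 64): one row/column per codon
```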
                                
Abstract:
We report the results of Monte Carlo simulations aimed at clarifying the microscopic origin of exchange bias in the magnetization hysteresis loops of a model of individual core/shell nanoparticles. Increasing the exchange coupling across the core/shell interface leads to an enhancement of the exchange bias and to an increasing asymmetry between the two branches of the loops, which is due to different reversal mechanisms. A detailed study of the magnetic order of the interfacial spins provides compelling evidence that the existence of a net magnetization due to uncompensated spins at the shell interface is responsible for both phenomena, and makes it possible to quantify the loop shifts directly in terms of microscopic parameters, in striking agreement with the macroscopically observed values.
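For reference, the loop shift and coercivity discussed here are conventionally obtained from the two coercive fields of the hysteresis loop (a standard definition, not a formula quoted from the paper):

H_{EB} = \frac{H_c^{+} + H_c^{-}}{2}, \qquad H_C = \frac{H_c^{+} - H_c^{-}}{2},

where H_c^{+} and H_c^{-} are the two fields at which the magnetization crosses zero on the ascending and descending branches; a non-zero H_{EB} quantifies the exchange bias.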
                                
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared to be particularly interesting: in contrast to the other methods, it requires neither discretization nor simulation of the process. This is the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps, and it therefore became the subject of my research.

This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class.

Hence, the next question is which jump process to use to model returns of the S&P500. The choice of jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data.

The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward because of the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, two-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with two- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, two-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
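As a rough orientation (a standard characteristic-function-estimation formulation, not a formula quoted from the thesis), the Continuous ECF estimator minimizes a weighted distance between the empirical and model characteristic functions:

\hat{\phi}_n(u) = \frac{1}{n} \sum_{j=1}^{n} e^{\, i u^{\top} x_j}, \qquad
\hat{\theta} = \arg\min_{\theta} \int \left| \hat{\phi}_n(u) - \phi(u; \theta) \right|^2 w(u) \, du,

where the x_j collect the observed returns (over one or more periods, giving the one-, two- or three-dimensional versions discussed above), \phi(u; \theta) is the model's joint unconditional characteristic function and w(u) is a weighting function.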
                                
Abstract:
QUESTIONS UNDER STUDY: The starting point of the interdisciplinary project "Assessing the impact of diagnosis related groups (DRGs) on patient care and professional practice" (IDoC) was the lack of a systematic ethical assessment of the introduction of cost containment measures in healthcare. Our aim was to contribute to the methodological and empirical basis of such an assessment. METHODS: Five sub-groups conducted separate but related research within the fields of biomedical ethics, law, nursing sciences and health services, applying a number of complementary methodological approaches. The individual research projects were framed within an overall ethical matrix. Workshops and bilateral meetings were held to identify and elaborate joint research themes. RESULTS: Four common, ethically relevant themes emerged from the results of the studies across sub-groups: (1) the quality and safety of patient care, (2) the state of professional practice of physicians and nurses, (3) changes in the incentive structure, and (4) vulnerable groups and access to healthcare services. Furthermore, much-needed data for future comparative research have been collected, and some early insights into the potential impact of DRGs are outlined. CONCLUSIONS: Based on the joint results, we developed preliminary recommendations related to conceptual analysis, methodological refinement, monitoring and implementation.
                                
Abstract:
The resilient modulus (MR) input parameters in the Mechanistic-Empirical Pavement Design Guide (MEPDG) program have a significant effect on the projected pavement performance. The MEPDG program uses three different levels of inputs depending on the desired level of accuracy. The primary objective of this research was to develop a laboratory testing program utilizing the Iowa DOT servo-hydraulic testing machine for evaluating typical Iowa unbound materials and to establish a database of input values for MEPDG analysis. This was achieved by carrying out a detailed laboratory testing program, designed in accordance with the AASHTO T307 resilient modulus test protocol, using common Iowa unbound materials. The program included laboratory tests to characterize the basic physical properties of the unbound materials, specimen preparation, and repeated load triaxial tests to determine the resilient modulus. The MEPDG resilient modulus input parameter library for typical Iowa unbound pavement materials was established from the repeated load triaxial MR test results. This library includes the non-linear, stress-dependent resilient modulus model coefficient values for level 1 analysis, the unbound material property values correlated with resilient modulus for level 2 analysis, and the typical resilient modulus values for level 3 analysis. The resilient modulus input parameter library can be utilized when designing low-volume roads in the absence of any basic soil testing. Based on the results of this study, the use of level 2 analysis for the MEPDG resilient modulus input is recommended, since the repeated load triaxial test for level 1 analysis is complicated, time consuming, expensive, and requires sophisticated equipment and skilled operators.
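For context, the non-linear, stress-dependent model whose coefficients constitute the level 1 input is conventionally written in the MEPDG as (standard form, reproduced here for reference rather than from the report itself):

M_R = k_1 \, p_a \left( \frac{\theta}{p_a} \right)^{k_2} \left( \frac{\tau_{oct}}{p_a} + 1 \right)^{k_3},

where \theta is the bulk stress, \tau_{oct} the octahedral shear stress, p_a the atmospheric pressure, and k_1, k_2, k_3 the regression coefficients determined from the repeated load triaxial tests.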
                                
Abstract:
One third of all stroke survivors develop post-stroke depression (PSD). Depressive symptoms adversely affect rehabilitation and significantly increase the risk of death in the post-stroke period. One of the theoretical views on the determinants of PSD focuses on psychosocial factors such as disability and social support. Others emphasize biological mechanisms such as disruption of biogenic amine neurotransmission and release of proinflammatory cytokines. The "lesion location" perspective attempts to establish a relationship between the localization of stroke and the occurrence of depression, but empirical results remain contradictory. These divergences are partly related to the fact that neuroimaging methods, unlike neuropathology, are not able to assess precisely the full extent of stroke-affected areas and do not specify the different types of vascular lesions. We provide here an overview of the known phenomenological profile and current pathogenic hypotheses of PSD, and present neuropathological data challenging the classic "single-stroke"-based neuroanatomical model of PSD. We suggest that vascular burden due to the chronic accumulation of small macrovascular and microvascular lesions may be a crucial determinant of the development and evolution of PSD.
                                
Abstract:
Sex-determining systems often undergo high rates of turnover, but for reasons that remain largely obscure. Two recent evolutionary models assign key roles, respectively, to sex-antagonistic (SA) mutations occurring on autosomes and to deleterious mutations accumulating on sex chromosomes. These two models capture essential but distinct key features of sex-chromosome evolution; accordingly, they make different predictions and present distinct limitations. Here we show that a combination of features from the two models has the potential to generate endless cycles of sex-chromosome transitions: SA alleles accruing on a chromosome after it has been co-opted for sex induce an arrest of recombination; the ensuing accumulation of deleterious mutations will soon make a new transition ineluctable. The dynamics generated by these interactions share several important features with empirical data, namely, (i) that patterns of heterogamety tend to be conserved during transitions and (ii) that autosomes are not recruited randomly, with some chromosome pairs more likely than others to be co-opted for sex.
                                
Abstract:
We presented an integrated hierarchical model of psychopathology that more accurately captures empirical patterns of comorbidity between clinical syndromes and personality disorders. In order to verify the structural validity of the proposed model, this study aimed to analyze the convergence between the Restructured Clinical (RC) scales and Personality scales (PSY-5) of the MMPI-2-RF and the Clinical Syndrome and Personality Disorder scales of the MCMI-III. The MMPI-2-RF and MCMI-III were administered to a clinical sample of 377 outpatients (167 men and 210 women). The structural hypothesis was assessed using a confirmatory factor analytic design with four common superordinate factors. An independent-cluster-basis solution was proposed based on maximum likelihood estimation and the application of several fit indices. The fit of the proposed model can be considered good, especially if we take into account its complexity.
                                
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take the within-subject correlation into account. Methods: For repeated events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the performance of the methods on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between the methods, and there is no single optimal method. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are robust to censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically.
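To make the simulated scenarios concrete, here is a minimal, purely illustrative Python sketch (not the authors' code; all parameter values are assumptions) that generates recurrent-event data in the counting-process (start, stop, event) layout used by the AG formulation, with a per-subject gamma frailty inducing within-subject correlation:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_recurrent_events(n_subjects=200, max_recurrences=5,
                              followup=10.0, baseline_rate=0.3, frailty_var=0.5):
    """Each subject gets a gamma frailty (mean 1, variance frailty_var) that
    multiplies a constant baseline hazard; follow-up is administratively
    censored at `followup`. Returns (subject, start, stop, event) records."""
    records = []
    frailties = rng.gamma(shape=1.0 / frailty_var, scale=frailty_var, size=n_subjects)
    for subj, z in enumerate(frailties):
        t, k = 0.0, 0
        while k < max_recurrences:
            gap = rng.exponential(1.0 / (baseline_rate * z))
            if t + gap > followup:
                records.append((subj, t, followup, 0))   # censored interval
                break
            records.append((subj, t, t + gap, 1))        # observed recurrence
            t += gap
            k += 1
    return records

data = simulate_recurrent_events()
counts = np.bincount([r[0] for r in data if r[3] == 1], minlength=200)
print(counts.mean(), counts.var())  # variance > mean: within-subject correlation
```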
                                
Abstract:
This paper analyses the effect of R&D investment on firm growth. We use an extensive sample of Spanish manufacturing and service firms. The database comprises several waves of the Spanish Community Innovation Survey and covers the period 2004–2008. First, a probit model corrected for sample selection analyses the role of innovation in the probability of being a high-growth firm (HGF). Second, a quantile regression technique is applied to explore the determinants of firm growth. Our database shows that a small number of firms experience fast growth rates in terms of sales or employees. Our results reveal that R&D investments positively affect the probability of becoming an HGF. However, differences appear between manufacturing and service firms. Finally, when we study the impact of R&D investment on firm growth, quantile estimations show that internal R&D has a significant positive impact in the upper quantiles, while external R&D shows a significant positive impact up to the median. Keywords: high-growth firms, firm growth, innovation activity. JEL classifications: L11, L25, L26, O30.
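As an illustration of the second-stage technique (a generic quantile-regression sketch on simulated data with made-up variable names, not the paper's specification or the CIS data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy firm-level data: growth regressed on internal and external R&D intensity.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "internal_rd": rng.exponential(1.0, n),
    "external_rd": rng.exponential(0.5, n),
    "size": rng.normal(0.0, 1.0, n),
})
df["growth"] = (0.05 * df["internal_rd"] + 0.02 * df["external_rd"]
                + 0.10 * df["size"] + rng.normal(0.0, 0.3, n))

X = sm.add_constant(df[["internal_rd", "external_rd", "size"]])
# Estimate the effect of R&D at several points of the conditional growth distribution.
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    res = sm.QuantReg(df["growth"], X).fit(q=q)
    print(f"q={q:.2f}", res.params["internal_rd"], res.params["external_rd"])
```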
                                
 
                    