923 results for Ratio-Dependent Predator-Prey Model


Relevance: 30.00%

Abstract:

The study investigates the impact of the managerial overconfidence bias on the capital structure of a sample of 78 firms from Chile, Peru and Colombia during the years 1996-2014. We infer that there is a positive relation between the leverage ratio and (a) overconfidence, (b) experience and (c) the male gender of the executive. Overconfidence is measured according to the status of the CEO (entrepreneur or non-entrepreneur), and the hypotheses are tested through a dynamic panel data model. The empirical results show a highly significant positive correlation between overconfidence and the leverage ratio, and between gender and the leverage ratio, while, in contrast, the relation between experience and the leverage ratio is negative.

Relevance: 30.00%

Abstract:

A search for the Standard Model Higgs boson produced in association with a pair of top quarks, tt̄H, is presented. The analysis uses 20.3 fb⁻¹ of pp collision data at √s = 8 TeV, collected with the ATLAS detector at the Large Hadron Collider during 2012. The search is designed for the H → bb̄ decay mode and uses events containing one or two electrons or muons. In order to improve the sensitivity of the search, events are categorised according to their jet and b-tagged jet multiplicities. A neural network is used to discriminate between signal and background events, the latter being dominated by tt̄+jets production. In the single-lepton channel, variables calculated using a matrix element method are included as inputs to the neural network to improve discrimination of the irreducible tt̄+bb̄ background. No significant excess of events above the background expectation is found and an observed (expected) limit of 3.4 (2.2) times the Standard Model cross section is obtained at 95% confidence level. The ratio of the measured tt̄H signal cross section to the Standard Model expectation is found to be μ = 1.5 ± 1.1, assuming a Higgs boson mass of 125 GeV.

Relevance: 30.00%

Abstract:

A search for the bb̄ decay of the Standard Model Higgs boson is performed with the ATLAS experiment using the full dataset recorded at the LHC in Run 1. The integrated luminosities used from pp collisions at √s = 7 and 8 TeV are 4.7 and 20.3 fb⁻¹, respectively. The processes considered are associated (W/Z)H production, where W→eν/μν, Z→ee/μμ and Z→νν. The observed (expected) deviation from the background-only hypothesis corresponds to a significance of 1.4 (2.6) standard deviations, and the ratio of the measured signal yield to the Standard Model expectation is found to be μ = 0.52 ± 0.32 (stat.) ± 0.24 (syst.) for a Higgs boson mass of 125.36 GeV. The analysis procedure is validated by a measurement of the yield of (W/Z)Z production with Z→bb̄ in the same final states as for the Higgs boson search, from which the ratio of the observed signal yield to the Standard Model expectation is found to be 0.74 ± 0.09 (stat.) ± 0.14 (syst.).

Relevance: 30.00%

Abstract:

An observation of the Λ0b→ψ(2S)Λ0 decay and a comparison of its branching fraction with that of the Λ0b→J/ψΛ0 decay have been made with the ATLAS detector in proton-proton collisions at √s = 8 TeV at the LHC, using an integrated luminosity of 20.6 fb⁻¹. The J/ψ and ψ(2S) mesons are reconstructed in their decays to a muon pair, while the Λ0→pπ− decay is exploited for the Λ0 baryon reconstruction. The Λ0b baryons are reconstructed with transverse momentum pT > 10 GeV and pseudorapidity |η| < 2.1. The measured branching ratio of the Λ0b→ψ(2S)Λ0 and Λ0b→J/ψΛ0 decays is Γ(Λ0b→ψ(2S)Λ0)/Γ(Λ0b→J/ψΛ0) = 0.501 ± 0.033 (stat) ± 0.019 (syst), lower than the expectation from the covariant quark model.
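The statistical and systematic uncertainties quoted for the branching-fraction ratio can be combined in quadrature, the usual convention for independent error components. The combined value below is our own arithmetic, not a number reported in the abstract:

```python
import math

def combine_in_quadrature(stat: float, syst: float) -> float:
    """Total uncertainty assuming independent statistical and
    systematic components."""
    return math.sqrt(stat**2 + syst**2)

# Gamma(Lambda_b -> psi(2S) Lambda) / Gamma(Lambda_b -> J/psi Lambda)
ratio = 0.501
total = combine_in_quadrature(0.033, 0.019)
print(f"{ratio} +/- {total:.3f}")  # -> 0.501 +/- 0.038
```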

Relevance: 30.00%

Abstract:

The paper presents a foundation model for Marxian theories of the breakdown of capitalism based on a new falling rate of profit mechanism. All of these theories are based on one or more of "the historical tendencies": a rising capital-wage bill ratio, a rising capitalist share and a falling rate of profit. The model is a foundation in the sense that it generates these tendencies in the context of a model with a constant subsistence wage. The newly discovered generating mechanism is based on neo-classical reasoning for a model with land. It is non-Ricardian in that land augmenting technical progress can be unboundedly rapid. Finally, since the model has no steady state, it is necessary to use a new technique, Chaplygin's method, to prove the result.

Relevance: 30.00%

Abstract:

Eukaryotic cells generate energy in the form of ATP through a network of mitochondrial complexes and electron carriers known as the oxidative phosphorylation system. In mammals, mitochondrial complex I (CI) is the largest component of this system, comprising 45 different subunits encoded by mitochondrial and nuclear DNA. Humans diagnosed with mutations in the gene NDUFS4, encoding a nuclear DNA-encoded subunit of CI (NADH dehydrogenase ubiquinone Fe-S protein 4), typically suffer from Leigh syndrome, a neurodegenerative disease with onset in infancy or early childhood. Mitochondria from NDUFS4 patients usually lack detectable NDUFS4 protein and show a CI stability/assembly defect. Here, we describe a recessive mouse phenotype caused by the insertion of a transposable element into Ndufs4, identified by a novel combined linkage and expression analysis. Designated Ndufs4(fky), the mutation leads to aberrant transcript splicing and absence of NDUFS4 protein in all tested tissues of homozygous mice. Physical and behavioral symptoms displayed by Ndufs4(fky/fky) mice include temporary fur loss, growth retardation, unsteady gait, and abnormal body posture when suspended by the tail. Analysis of CI in Ndufs4(fky/fky) mice using blue native PAGE revealed the presence of a faster migrating crippled complex. This crippled CI was shown to lack subunits of the "N assembly module", which contains the NADH binding site, but contained two assembly factors not present in intact CI. Metabolomic analysis of the blood by tandem mass spectrometry showed increased hydroxyacylcarnitine species, implying that the CI defect leads to an imbalanced NADH/NAD⁺ ratio that inhibits mitochondrial fatty acid β-oxidation.

Relevance: 30.00%

Abstract:

INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost effective than "screen women at risk." For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratio of these two strategies compared with the reference was 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. 
Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on cost-effectiveness ratios of the usual screening strategies for osteoporosis.
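The incremental cost-effectiveness ratios quoted above follow the standard definition: extra cost divided by extra effectiveness relative to the reference strategy. A minimal sketch, with purely hypothetical cost and effectiveness inputs (the abstract reports only the resulting ratios, not the underlying values):

```python
def icer(cost_new: float, cost_ref: float,
         effect_new: float, effect_ref: float) -> float:
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect (here, years without hip fracture)."""
    delta_effect = effect_new - effect_ref
    if delta_effect == 0:
        raise ValueError("strategies are equally effective")
    return (cost_new - cost_ref) / delta_effect

# Hypothetical illustration only (not the study's actual inputs):
# "screen all" vs "no screening" over a 10-year horizon.
print(icer(cost_new=850.0, cost_ref=300.0,
           effect_new=9.63, effect_ref=9.50))
```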

Relevance: 30.00%

Abstract:

This paper contributes to the on-going empirical debate regarding the role of the RBC model and in particular of technology shocks in explaining aggregate fluctuations. To this end we estimate the model’s posterior density using Markov-Chain Monte-Carlo (MCMC) methods. Within this framework we extend Ireland’s (2001, 2004) hybrid estimation approach to allow for a vector autoregressive moving average (VARMA) process to describe the movements and co-movements of the model’s errors not explained by the basic RBC model. The results of marginal likelihood ratio tests reveal that the more general model of the errors significantly improves the model’s fit relative to the VAR and AR alternatives. Moreover, despite setting the RBC model a more difficult task under the VARMA specification, our analysis, based on forecast error and spectral decompositions, suggests that the RBC model is still capable of explaining a significant fraction of the observed variation in macroeconomic aggregates in the post-war U.S. economy.
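The marginal likelihood ratio tests mentioned above compare models via the ratio of their marginal likelihoods (a Bayes factor). A minimal sketch with hypothetical log marginal likelihoods, since the abstract does not report the estimated values:

```python
import math

def log_bayes_factor(log_ml_a: float, log_ml_b: float) -> float:
    """Log Bayes factor in favour of model A over model B,
    computed from their log marginal likelihoods."""
    return log_ml_a - log_ml_b

# Hypothetical log marginal likelihoods for illustration only:
# VARMA-error specification (A) vs VAR-error specification (B).
lbf = log_bayes_factor(-1234.5, -1241.2)
print(f"Bayes factor = exp({lbf:.1f}) = {math.exp(lbf):.0f}")  # -> exp(6.7) = 812
```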

Relevance: 30.00%

Abstract:

This paper introduces a State Space approach to explain the dynamics of rent growth, expected returns and the price-rent ratio in housing markets. According to the present value model, movements in the price-rent ratio should be matched by movements in expected returns and expected rent growth. The state space framework assumes that both variables follow an autoregressive process of order one. The model is applied to the US and UK housing markets, which yields series of the latent variables given the behaviour of the price-rent ratio. Resampling techniques and bootstrapped likelihood ratios show that expected returns tend to be highly persistent compared to rent growth. The filtered expected returns are considered in a simple excess-return predictability model, with high statistical predictability evidenced for the UK. Overall, it is found that the present value model tends to have strong statistical predictability in the UK housing market.
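The paper's state-space setup, in which expected returns and expected rent growth each follow an AR(1) process, can be sketched by simulating the two latent processes. The persistence and noise parameters below are illustrative assumptions, not the paper's estimates:

```python
import random

def simulate_ar1(phi: float, sigma: float, n: int,
                 x0: float = 0.0, seed: int = 0) -> list:
    """Simulate x_t = phi * x_{t-1} + eps_t with eps_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

# Illustrative: expected returns highly persistent, rent growth much
# less so (parameter values assumed, not estimated from data).
exp_returns = simulate_ar1(phi=0.95, sigma=0.01, n=200)
rent_growth = simulate_ar1(phi=0.30, sigma=0.01, n=200, seed=1)
```

In the present value framework, the log price-rent ratio is approximately a discounted sum of expected rent growth minus expected returns, so a highly persistent expected-return process generates the persistent movements in the price-rent ratio that the paper exploits.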

Relevance: 30.00%

Abstract:

Increased male prevalence has been repeatedly reported in several neurodevelopmental disorders (NDs), leading to the concept of a "female protective model." We investigated the molecular basis of this sex-based difference in liability and demonstrated an excess of deleterious autosomal copy-number variants (CNVs) in females compared to males (odds ratio [OR] = 1.46, p = 8 × 10⁻¹⁰) in a cohort of 15,585 probands ascertained for NDs. In an independent autism spectrum disorder (ASD) cohort of 762 families, we found a 3-fold increase in deleterious autosomal CNVs (p = 7 × 10⁻⁴) and an excess of private deleterious single-nucleotide variants (SNVs) in female compared to male probands (OR = 1.34, p = 0.03). We also showed that the deleteriousness of autosomal SNVs was significantly higher in female probands (p = 0.0006). A similar bias was observed in parents of probands ascertained for NDs. Deleterious CNVs (> 400 kb) were maternally inherited more often (up to 64%, p = 10⁻¹⁵) than small CNVs (< 400 kb; OR = 1.45, p = 0.0003). In the ASD cohort, increased maternal transmission was also observed for deleterious CNVs and SNVs. Although ASD females showed a higher mutational burden and lower cognition, the excess mutational burden remained even after adjustment for those cognitive differences. These results strongly suggest that females have an increased etiological burden unlinked to rare deleterious variants on the X chromosome. Carefully phenotyped and genotyped cohorts will be required for identifying the symptoms that show gender-specific liability to mutational burden.
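The odds ratios quoted above follow the standard 2×2-table definition. A minimal sketch with hypothetical counts, since the abstract reports only the resulting ORs and not the underlying tables:

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR for a 2x2 table:
                 variant   no variant
      females       a          b
      males         c          d
    """
    return (a / b) / (c / d)

# Hypothetical counts for illustration only:
print(odds_ratio(146, 854, 100, 900))  # about 1.54
```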

Relevance: 30.00%

Abstract:

Excessive exposure to solar ultraviolet (UV) radiation is the main cause of skin cancer. Specific prevention should be further developed to target overexposed or highly vulnerable populations; a better characterisation of anatomical UV exposure patterns is, however, needed for such prevention. The aim was to develop a regression model for predicting the UV exposure ratio (ER, the ratio between the anatomical dose and the corresponding ground-level dose) for each body site without requiring individual measurements. A 3D numeric model (SimUVEx) was used to compute ER for various body sites and postures, and a multiple fractional polynomial regression analysis was performed to identify predictors of ER. The regression model was fitted to simulation data and its performance was tested on an independent data set. Two input variables were sufficient to explain ER: the cosine of the maximal daily solar zenith angle and the fraction of the sky visible from the body site. The regression model was in good agreement with the simulated ER (R² = 0.988). Relative errors of up to +20% and -10% were found in daily dose predictions, whereas an average relative error of only 2.4% (-0.03% to 5.4%) was found in yearly dose predictions. The regression model accurately predicts ER and UV doses on the basis of readily available data, such as global UV erythemal irradiance measured at ground-level stations or inferred from satellite information. It makes the development of exposure data on a wide temporal and geographical scale possible and opens broad perspectives for epidemiological studies and skin cancer prevention.
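The model's two predictors, the cosine of the maximal daily solar zenith angle and the visible sky fraction, lend themselves to a regression sketch. The paper used fractional polynomial terms; for brevity a plain least-squares linear fit is shown here, on invented training data (both the data and the resulting coefficients are hypothetical, not the published model):

```python
import numpy as np

# Hypothetical training rows: (cos of max daily solar zenith angle,
# visible sky fraction) -> exposure ratio ER.  Illustrative only;
# the published model was fitted to SimUVEx simulation output.
X_raw = np.array([[0.90, 1.0], [0.80, 0.7], [0.60, 0.5],
                  [0.50, 0.9], [0.70, 0.6], [0.95, 0.8]])
er = np.array([0.85, 0.55, 0.35, 0.45, 0.48, 0.75])

# Design matrix with an intercept column, solved by least squares.
X = np.column_stack([np.ones(len(X_raw)), X_raw])
coef, *_ = np.linalg.lstsq(X, er, rcond=None)

def predict_er(cos_sza: float, sky_view: float) -> float:
    """Predict ER from the two fitted predictors."""
    return float(coef @ np.array([1.0, cos_sza, sky_view]))
```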

Relevance: 30.00%

Abstract:

Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to reduce efficiently both the local recurrence rate and breast cancer mortality. Radiation therapy is, however, associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions), which are mainly associated with high doses (≫ 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship.

In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator was thoroughly validated with measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in the reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries. For large open fields (> 10×10 cm²), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in the reconstructed dose could rise up to a factor of 10. It was concluded that such a model could be used for conventional treatments using large open fields only.

The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which are mostly out of the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs.

Finally, many different approaches to risk estimation extracted from the literature were applied to the calculated MC dose distributions. Absolute risks varied substantially, as did the ratio of risks between two treatment techniques, reflecting the large uncertainties involved with current risk models. Despite all these uncertainties, the hybrid IMRT investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.

Relevance: 30.00%

Abstract:

A method of objectively determining imaging performance for a mammography quality assurance programme for digital systems was developed. The method is based on the assessment of the visibility of a spherical microcalcification of 0.2 mm using a quasi-ideal observer model. It requires the assessment of the spatial resolution (modulation transfer function) and the noise power spectra of the systems. The contrast is measured using a 0.2-mm thick Al sheet and polymethylmethacrylate (PMMA) blocks. The minimal image quality was defined as that giving a target contrast-to-noise ratio (CNR) of 5.4. Several evaluations of this objective method for assessing image quality in mammography quality assurance programmes have been carried out on computed radiography (CR) and digital radiography (DR) mammography systems. The measurement gives the threshold CNR necessary to reach the minimum standard of image quality required with regard to the visibility of a 0.2-mm microcalcification. This method may replace the CDMAM image evaluation and simplify the threshold contrast visibility test used in mammography quality assurance.
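Under the usual definition, the contrast-to-noise ratio is the difference of mean signals divided by the noise standard deviation. A minimal sketch with hypothetical region-of-interest pixel values (the numbers below are invented for illustration, not measured data):

```python
import statistics

def cnr(signal_roi: list, background_roi: list) -> float:
    """Contrast-to-noise ratio: |difference of ROI means| divided
    by the background noise standard deviation."""
    contrast = abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
    noise = statistics.stdev(background_roi)
    return contrast / noise

# Hypothetical ROI pixel values behind the 0.2-mm Al sheet vs PMMA only:
signal = [212, 215, 214, 213, 216, 214]
background = [200, 202, 199, 201, 200, 198]
print(f"CNR = {cnr(signal, background):.1f}")  # -> CNR = 9.9
```

A measured CNR would then be compared against the target value of 5.4 to decide whether the system meets the minimum standard.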