25 results for Standard models

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 100.00%

Abstract:

We propose a way to incorporate non-tariff barriers (NTBs) for the four workhorse models of the modern trade literature in computable general equilibrium (CGE) models. CGE models feature intermediate linkages and thus allow us to study global value chains (GVCs). We show that the Ethier-Krugman monopolistic competition model, the Melitz firm heterogeneity model and the Eaton and Kortum model can be defined as an Armington model with generalized marginal costs, generalized trade costs and a demand externality. As already known in the literature, in both the Ethier-Krugman model and the Melitz model generalized marginal costs are a function of the amount of factor input bundles. In the Melitz model generalized marginal costs are also a function of the price of the factor input bundles. Lower factor prices raise the number of firms that can enter the market profitably (the extensive margin), reducing the generalized marginal costs of a representative firm. For the same reason the Melitz model features a demand externality: in a larger market more firms can enter. We implement the different models in a CGE setting with multiple sectors, intermediate linkages, non-homothetic preferences and detailed data on trade costs. We find the largest welfare effects from trade cost reductions in the Melitz model. We also employ the Melitz model to mimic changes in NTBs with a fixed-cost character by analysing the effect of changes in fixed trade costs. While we work here with a model calibrated to the GTAP database, the methods developed can also be applied to CGE models based on the WIOD database.

Relevance: 60.00%

Abstract:

The rat double-SAH model is one of the standard models to simulate delayed cerebral vasospasm (CVS) in humans. However, proof of delayed ischemic brain damage has been missing so far. Our objective was, therefore, to determine histological changes in correlation with the development of symptomatic, perfusion-weighted imaging (PWI)-proven CVS in this animal model. CVS was induced by injection of autologous blood into the cisterna magna of 22 Sprague-Dawley rats. Histological changes were analyzed on day 3 and day 5. Cerebral blood flow (CBF) was assessed by PWI at 3-Tesla magnetic resonance (MR) tomography. Neuronal cell count did not differ between sham-operated and SAH rats in the hippocampus and the cerebral cortex on day 3. In contrast, on day 5 after SAH the neuronal cell count was significantly reduced in the hippocampus (p<0.001) and the inner cortical layer (p=0.03). The present investigation provides quantitative data on brain tissue damage in association with delayed CVS for the first time in a rat SAH model. Accordingly, our data suggest that the rat double-SAH model may be suitable to mimic delayed ischemic brain damage due to CVS and to investigate the neuroprotective effects of drugs.

Relevance: 60.00%

Abstract:

Major histocompatibility complex (MHC) antigen-presenting genes are the most variable loci in vertebrate genomes. Host-parasite co-evolution is assumed to maintain the excessive polymorphism in the MHC loci. However, the molecular mechanisms underlying the striking diversity in the MHC remain contentious. The extent to which recombination contributes to the diversity at MHC loci in natural populations is still controversial, and there have been only a few comparative studies that make quantitative estimates of recombination rates. In this study, we performed a comparative analysis of 15 different ungulate species to estimate the population recombination rate and to quantify levels of selection. As expected, for all species we observed signatures of strong positive selection, and identified individual residues experiencing selection that were congruent with those constituting the peptide-binding region of the human DRB gene. However, in addition, for each species we also observed recombination rates that were significantly different from zero on the basis of likelihood-permutation tests, and in other non-quantitative analyses. Patterns of synonymous and non-synonymous sequence diversity were consistent with differing demographic histories between species, but recent simulation studies by other authors suggest that inference of selection and recombination is likely to be robust to such deviations from standard models. If high rates of recombination are common in MHC genes of other taxa, re-evaluation of many inference-based phylogenetic analyses of MHC loci, such as estimates of the divergence time of alleles and trans-specific polymorphism, may be required.

Relevance: 60.00%

Abstract:

RATIONALE AND OBJECTIVES: A feasibility study on measuring kidney perfusion by a contrast-free magnetic resonance (MR) imaging technique is presented. MATERIALS AND METHODS: A flow-sensitive alternating inversion recovery (FAIR) prepared true fast imaging with steady-state precession (TrueFISP) arterial spin labeling sequence was used on a 3.0-T MR scanner. The basis for quantification is a two-compartment exchange model proposed by Parkes that corrects for diverse assumptions in single-compartment standard models. RESULTS: Eleven healthy volunteers (mean age, 42.3 years; range, 24-55) were examined. The calculated mean renal blood flow values for the exchange model (109 +/- 5 [medulla] and 245 +/- 11 [cortex] ml/min/100 g) are in good agreement with the literature. Most importantly, the two-compartment exchange model exhibits a stabilizing effect on the evaluation of perfusion values when the finite permeability of the vessel wall and the venous outflow (fast solution) are considered: the values for the one-compartment standard model were 93 +/- 18 (medulla) and 208 +/- 37 (cortex) ml/min/100 g. CONCLUSION: This improvement will increase the accuracy of contrast-free imaging of kidney perfusion in the treatment of renovascular disease.

Relevance: 60.00%

Abstract:

A search for the direct production of charginos and neutralinos in final states with three leptons and missing transverse momentum is presented. The analysis is based on 20.3 fb−1 of √s = 8 TeV proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with the Standard Model expectations and limits are set in R-parity-conserving phenomenological Minimal Supersymmetric Standard Models and in simplified supersymmetric models, significantly extending previous results. For simplified supersymmetric models of direct chargino (χ̃1±) and next-to-lightest neutralino (χ̃20) production with decays to the lightest neutralino (χ̃10) via either all three generations of sleptons, staus only, gauge bosons, or Higgs bosons, χ̃1± and χ̃20 masses are excluded up to 700 GeV, 380 GeV, 345 GeV, or 148 GeV respectively, for a massless χ̃10.

Relevance: 40.00%

Abstract:

Animal models provide a basis for clarifying the complex pathogenesis of delayed cerebral vasospasm (DCVS) and for screening of potential therapeutic approaches. Arbitrary use of experimental parameters in current models can lead to results of uncertain relevance. The aim of this work was to identify and analyze the most consistent and feasible models and their parameters for each animal.

Relevance: 30.00%

Abstract:

BACKGROUND AND PURPOSE: Conventional platinum coils cause imaging artifacts that reduce imaging quality and therefore impair image interpretation on intraprocedural or noninvasive follow-up imaging. The purpose of this study was to evaluate imaging characteristics and artifact production of polymeric coils compared with standard platinum coils in vitro and in vivo. MATERIALS AND METHODS: Polymeric coils and standard platinum coils were evaluated in vitro with the use of 2 identical silicon aneurysm models coiled with a packing attenuation of 20% each. DSA, flat panel CT, CT, and MR imaging were performed. In vivo evaluation of imaging characteristics of polymeric coils was performed in experimentally created rabbit carotid bifurcation aneurysms. DSA, CT/CTA, and MR imaging were performed after endovascular treatment of the aneurysms. Images were evaluated regarding visibility of individual coils, coil mass, artifact production, and visibility of residual flow within the aneurysm. RESULTS: Overall, in vitro and in vivo imaging showed markedly reduced artifact production of polymeric coils in all imaging modalities compared with standard platinum coils. Image quality of CT and MR imaging was improved with the use of polymeric coils, which permitted enhanced depiction of individual coil loops and residual aneurysm lumen as well as the peri-aneurysmal area. Remarkably, CT images demonstrated considerably improved image quality with only minor artifacts compared with standard coils. On DSA, polymeric coils showed transparency and allowed visualization of superimposed vessel structures. CONCLUSIONS: This initial experimental study showed improved imaging quality with the use of polymeric coils compared with standard platinum coils in all imaging modalities. This might be advantageous for improved intraprocedural imaging for the detection of complications and for posttreatment noninvasive follow-up imaging.

Relevance: 30.00%

Abstract:

Metabolomics as one of the most rapidly growing technologies in the "-omics" field denotes the comprehensive analysis of low molecular-weight compounds and their pathways. Cancer-specific alterations of the metabolome can be detected by high-throughput mass-spectrometric metabolite profiling and serve as a considerable source of new markers for the early differentiation of malignant diseases as well as their distinction from benign states. However, a comprehensive framework for the statistical evaluation of marker panels in a multi-class setting has not yet been established. We collected serum samples of 40 pancreatic carcinoma patients, 40 controls, and 23 pancreatitis patients according to standard protocols and generated amino acid profiles by routine mass-spectrometry. In an intrinsic three-class bioinformatic approach we compared these profiles, evaluated their selectivity and computed multi-marker panels combined with the conventional tumor marker CA 19-9. Additionally, we tested for non-inferiority and superiority to determine the diagnostic surplus value of our multi-metabolite marker panels. Compared to CA 19-9 alone, the combined amino acid-based metabolite panel had a superior selectivity for the discrimination of healthy controls, pancreatitis, and pancreatic carcinoma patients [Formula: see text]. We combined highly standardized samples, a three-class study design, a high-throughput mass-spectrometric technique, and a comprehensive bioinformatic framework to identify metabolite panels selective for all three groups in a single approach. Our results suggest that metabolomic profiling necessitates appropriate evaluation strategies and, despite all its current limitations, can deliver marker panels with high selectivity even in multi-class settings.
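As an aside on how selectivity can be quantified in such a three-class setting: one simple building block is the Mann-Whitney AUC, computed one-vs-rest for each group. The sketch below is not the study's actual bioinformatic framework; the marker scores and group sizes are invented purely for illustration.

```python
def auc(pos, neg):
    """Mann-Whitney AUC: probability that a randomly chosen positive
    scores higher than a randomly chosen negative (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical marker scores for the three diagnostic groups.
controls     = [1.0, 1.2, 0.9, 1.1, 1.0]
pancreatitis = [1.4, 1.6, 1.3, 1.5, 1.7]
carcinoma    = [2.0, 2.3, 1.9, 2.1, 1.8]

# One-vs-rest selectivity; controls are scored by the negated marker
# because lower values indicate that class in this toy example.
selectivity = {
    "controls": auc([-x for x in controls],
                    [-x for x in pancreatitis + carcinoma]),
    "carcinoma": auc(carcinoma, controls + pancreatitis),
}
```

A real multi-class evaluation would add confidence intervals and a non-inferiority test against the reference marker, as the study describes.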

Relevance: 30.00%

Abstract:

OBJECTIVE: Meta-analysis of studies of the accuracy of diagnostic tests currently uses a variety of methods. Statistically rigorous hierarchical models require expertise and sophisticated software. We assessed whether any of the simpler methods can in practice give adequately accurate and reliable results. STUDY DESIGN AND SETTING: We reviewed six methods for meta-analysis of diagnostic accuracy: four simple commonly used methods (simple pooling, separate random-effects meta-analyses of sensitivity and specificity, separate meta-analyses of positive and negative likelihood ratios, and the Littenberg-Moses summary receiver operating characteristic [ROC] curve) and two more statistically rigorous approaches using hierarchical models (bivariate random-effects meta-analysis and hierarchical summary ROC curve analysis). We applied the methods to data from a sample of eight systematic reviews chosen to illustrate a variety of patterns of results. RESULTS: In each meta-analysis, there was substantial heterogeneity between the results of different studies. Simple pooling of results gave misleading summary estimates of sensitivity and specificity in some meta-analyses, and the Littenberg-Moses method produced summary ROC curves that diverged from those produced by more rigorous methods in some situations. CONCLUSION: The closely related hierarchical summary ROC curve or bivariate models should be used as the standard method for meta-analysis of diagnostic accuracy.
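One of the simple methods reviewed, separate random-effects meta-analysis of sensitivity, can be sketched compactly. The following is a minimal illustration of DerSimonian-Laird pooling on the logit scale; the study counts are made up and are not data from the eight reviews.

```python
import math

def dl_pooled_sensitivity(studies):
    """DerSimonian-Laird random-effects pooling of study sensitivities
    on the logit scale, with a 0.5 continuity correction."""
    y, v = [], []
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5
        y.append(math.log(tp / fn))           # logit of the sensitivity
        v.append(1.0 / tp + 1.0 / fn)         # approximate within-study variance
    w = [1.0 / vi for vi in v]                # fixed-effect (inverse-variance) weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance estimate
    ws = [1.0 / (vi + tau2) for vi in v]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    return 1.0 / (1.0 + math.exp(-pooled))    # back-transform to a proportion

# Hypothetical (true positive, false negative) counts for four studies.
studies = [(90, 10), (45, 15), (160, 40), (30, 20)]
re_sens = dl_pooled_sensitivity(studies)
naive = sum(tp for tp, _ in studies) / sum(tp + fn for tp, fn in studies)  # simple pooling
```

With heterogeneous studies the two estimates differ: simple pooling lets the largest study dominate, which is one reason the review found it can mislead.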

Relevance: 30.00%

Abstract:

The aim of this study was to compare standard plaster models with their digital counterparts for the applicability of the Index of Complexity, Outcome, and Need (ICON). Study models of 30 randomly selected patients were generated: 30 pre-treatment (T0) and 30 post-treatment (T1). Two examiners, calibrated in the ICON, scored the digital and plaster models. The overall ICON scores were evaluated for reliability and reproducibility using kappa statistics and reliability coefficients. The values for reliability of the total and weighted ICON scores were generally high for the T0 sample (range 0.83-0.95) but lower for the T1 sample (range 0.55-0.85). Differences in total ICON score between plaster and digital models were mostly statistically non-significant (P values ranging from 0.07 to 0.19), except for observer 1 in the T1 sample. No statistically different values were found for the total ICON score on either plaster or digital models. ICON scores performed on computer-based models appear to be as accurate and reliable as ICON scores on plaster models.
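Kappa statistics of the kind used to assess agreement here are straightforward to compute. A minimal sketch of unweighted Cohen's kappa follows; the ICON grade labels and rating vectors are hypothetical, invented only to show the calculation.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n             # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical ICON complexity grades on plaster vs. digital models.
plaster = ["easy", "mild", "mild", "moderate", "hard", "moderate", "easy", "hard"]
digital = ["easy", "mild", "moderate", "moderate", "hard", "moderate", "easy", "hard"]
kappa = cohens_kappa(plaster, digital)
```

For ordinal grades like these a weighted kappa, which penalizes near-misses less than gross disagreements, would usually be preferred.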

Relevance: 30.00%

Abstract:

BACKGROUND: Activation of endothelial cells (EC) in xenotransplantation is mostly induced through binding of antibodies (Ab) and activation of the complement system. Activated EC lose their heparan sulfate proteoglycan (HSPG) layer and exhibit a procoagulant and pro-inflammatory cell surface. We have recently shown that the semi-synthetic proteoglycan analog dextran sulfate (DXS, MW 5000) blocks activation of the complement cascade and acts as an EC-protectant both in vitro and in vivo. However, DXS is a strong anticoagulant and systemic use of this substance in a clinical setting might therefore be compromised. It was the aim of this study to investigate a novel, fully synthetic EC-protectant with reduced inhibition of the coagulation system. METHOD: By screening with standard complement (CH50) and coagulation assays (activated partial thromboplastin time, aPTT), a conjugate of tyrosine sulfate to a polymer-backbone (sTyr-PAA) was identified as a candidate EC-protectant. The pathway-specificity of complement inhibition by sTyr-PAA was tested in hemolytic assays. To further characterize the substance, the effects of sTyr-PAA and DXS on complement deposition on pig cells were compared by flow cytometry and cytotoxicity assays. Using fluorescein-labeled sTyr-PAA (sTyr-PAA-Fluo), the binding of sTyr-PAA to cell surfaces was also investigated. RESULTS: Of all tested compounds, sTyr-PAA was the most effective substance in inhibiting all three pathways of complement activation. Its capacity to inhibit the coagulation cascade was significantly reduced as compared with DXS. sTyr-PAA also dose-dependently inhibited deposition of human complement on pig cells and this inhibition correlated with the binding of sTyr-PAA to the cells. Moreover, we were able to demonstrate that sTyr-PAA binds preferentially and dose-dependently to damaged EC. 
CONCLUSIONS: We could show that sTyr-PAA acts as an EC-protectant by binding to the cells and protecting them from complement-mediated damage. It has less effect on the coagulation system than DXS and may therefore have potential for in vivo application.

Relevance: 30.00%

Abstract:

Since 2010, the client base of online-trading service providers has grown significantly. Such companies enable small investors to access the stock market at advantageous rates. Because small investors buy and sell stocks in moderate amounts, they should consider fixed transaction costs, integral transaction units, and dividends when selecting their portfolio. In this paper, we consider the small investor’s problem of investing capital in stocks in a way that maximizes the expected portfolio return and guarantees that the portfolio risk does not exceed a prescribed risk level. Portfolio-optimization models known from the literature are in general designed for institutional investors and do not consider the specific constraints of small investors. We therefore extend four well-known portfolio-optimization models to make them applicable for small investors. We consider one nonlinear model that uses variance as a risk measure and three linear models that use the mean absolute deviation from the portfolio return, the maximum loss, and the conditional value-at-risk as risk measures. We extend all models to consider piecewise-constant transaction costs, integral transaction units, and dividends. In an out-of-sample experiment based on Swiss stock-market data and the cost structure of the online-trading service provider Swissquote, we apply both the basic models and the extended models; the former represent the perspective of an institutional investor, and the latter the perspective of a small investor. The basic models compute portfolios that yield on average a slightly higher return than the portfolios computed with the extended models. However, all generated portfolios yield on average a higher return than the Swiss performance index. There are considerable differences between the four risk measures with respect to the mean realized portfolio return and the standard deviation of the realized portfolio return.
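The small-investor extensions described above (fixed transaction costs, integral transaction units) can be illustrated on a toy instance. The sketch below uses the mean absolute deviation of the portfolio return as the risk measure and brute-force enumeration over integer unit counts; all prices, scenarios, fees, and limits are invented, and a real implementation would solve a mixed-integer program rather than enumerate.

```python
import itertools
import statistics

# Hypothetical data: share prices, historical return scenarios per stock,
# and a flat fee charged once per order.
prices = [50.0, 20.0, 10.0]
scenarios = [
    [0.02, 0.01, 0.03],
    [-0.01, 0.02, -0.02],
    [0.03, -0.01, 0.01],
    [0.00, 0.01, 0.02],
]
budget, fee, max_units, risk_cap = 1000.0, 9.0, 20, 8.0

def evaluate(units):
    """Return (expected P&L, MAD) if the position is feasible, else None."""
    cost = sum(u * p for u, p in zip(units, prices))
    cost += fee * sum(1 for u in units if u > 0)       # fixed cost per order
    if cost > budget:
        return None
    # Portfolio value change in each scenario (in currency units).
    pnl = [sum(u * p * r for u, p, r in zip(units, prices, row))
           for row in scenarios]
    mean = statistics.mean(pnl)
    mad = statistics.mean(abs(x - mean) for x in pnl)  # mean absolute deviation
    return (mean, mad) if mad <= risk_cap else None

# Enumerate all integral unit combinations and keep the best feasible one.
best, best_units = None, None
for units in itertools.product(range(max_units + 1), repeat=len(prices)):
    res = evaluate(units)
    if res and (best is None or res[0] > best):
        best, best_units = res[0], units
```

Enumeration is only viable for a handful of stocks; the appeal of the linear risk measures mentioned in the abstract is precisely that they keep the full-scale problem solvable as a MILP.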

Relevance: 30.00%

Abstract:

The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern but the models underestimate the magnitude of some observed changes and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation using a data-assimilation method based on a particle filter. In one simulation, all the 50 proxy-based records are used while in the other two only the continental or oceanic proxy-based records constrain the model results. As expected, data assimilation leads to improving the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at midlatitude that warms up northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is either incompatible with the signal recorded by some other proxy-based records or with model physics.
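A particle filter of the kind used to constrain LOVECLIM can be sketched in its simplest form: propagate an ensemble, weight each particle by its likelihood given a proxy-based record, and resample in proportion to the weights. The toy model dynamics, observation error, and proxy values below are all invented for illustration and bear no relation to the actual LOVECLIM setup.

```python
import math
import random

random.seed(0)

def particle_filter_step(particles, obs, obs_err):
    """One assimilation cycle: propagate, weight by the proxy
    likelihood, and importance-resample."""
    # Propagate each particle with a toy damped model plus stochastic noise.
    particles = [0.9 * x + random.gauss(0.0, 0.1) for x in particles]
    # Gaussian likelihood of the proxy observation for each particle.
    w = [math.exp(-0.5 * ((x - obs) / obs_err) ** 2) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample with replacement, proportionally to the weights.
    return random.choices(particles, weights=w, k=len(particles))

# Hypothetical proxy-based temperature-anomaly record to assimilate.
proxy_record = [0.5, 0.6, 0.4, 0.5, 0.55]
particles = [random.gauss(0.0, 1.0) for _ in range(500)]
for obs in proxy_record:
    particles = particle_filter_step(particles, obs, obs_err=0.2)
posterior_mean = sum(particles) / len(particles)
```

After a few cycles the ensemble mean is pulled from the model's prior toward the proxy signal, which is the mechanism behind the improved model-data consistency reported above.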

Relevance: 30.00%

Abstract:

In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tan β (the ratio of vacuum expectation values) and the "nonholomorphic" Yukawa couplings ϵ^f_ij (f = u, d, ℓ). In our analysis we constrain the elements ϵ^f_ij in various ways: In a first step we give order-of-magnitude constraints on ϵ^f_ij from 't Hooft's naturalness criterion, finding that all ϵ^f_ij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (Bs,d → μ+μ−, KL → μ+μ−, D̄0 → μ+μ−, ΔF = 2 processes, τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ϵ^u_32,31 and ϵ^u_23,13, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes [b → s(d)γ, Bs,d mixing, K-K̄ mixing and μ → eγ], finding that ϵ^u_13 and ϵ^u_23 must also be very small, while the bounds on ϵ^u_31 and ϵ^u_32 are especially weak. Furthermore, considering the constraints from electric dipole moments we obtain constraints on some parameters ϵ^{u,ℓ}_ij. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays (B → τν, B → Dτν and B → D∗τν) as well as in D(s) → τν, D(s) → μν, K(π) → eν, K(π) → μν and τ → K(π)ν, which are all sensitive to tree-level charged Higgs exchange.
Interestingly, the unconstrained ϵ^u_32,31 are just the elements which directly enter the branching ratios for B → τν, B → Dτν and B → D∗τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning. Furthermore, B → τν, B → Dτν and B → D∗τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays (Bs,d → μe, Bs,d → τe and Bs,d → τμ) and correlate the radiative lepton decays (τ → μγ, τ → eγ and μ → eγ) to the corresponding neutral-current lepton decays (τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−). A detailed Appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.

Relevance: 30.00%

Abstract:

Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model climate–carbon feedbacks. 
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account, when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.