965 results for PREDICTIONS
Abstract:
Aims Floral traits are frequently used in traditional plant systematics because of their assumed constancy. One potential reason for the apparent constancy of flower size is that effective pollen transfer between flowers depends on the accuracy of the physical fit between the flower and pollinator. Therefore, flowers are likely to be under stronger stabilizing selection for uniform size than vegetative plant parts. Moreover, as predicted by the pollinator-mediated stabilizing selection (PMSS) hypothesis, an accurate fit between flowers and their pollinators is likely to be more important for specialized pollination systems, as found in many species with bilaterally symmetric (zygomorphic) flowers, than for species with radially symmetric (actinomorphic) flowers. Methods In a comparative study of 15 zygomorphic and 13 actinomorphic species in Switzerland, we tested whether variation in flower size, among and within individuals, is smaller than variation in leaf size, and whether variation in flower size is smaller in zygomorphic than in actinomorphic species. Important findings Indeed, variation in leaf length was significantly larger than variation in flower length and width. Within-individual variation in flower and leaf sizes did not differ significantly between zygomorphic and actinomorphic species. In line with the predictions of the PMSS hypothesis, among-individual variation in flower length and flower width was significantly smaller for zygomorphic species than for actinomorphic species, while the two groups did not differ in leaf length variation. This suggests that plants with zygomorphic flowers have undergone stronger selection for uniform flowers than plants with actinomorphic flowers. It also supports the view that the uniformity of flowers relative to vegetative structures within species, long noted in traditional plant systematics, is, at least in part, a consequence of the requirement for effective pollination.
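The comparison above rests on contrasting relative variability of floral and vegetative traits. A minimal sketch of that calculation, using the coefficient of variation on invented per-individual trait means (the species, values, and sample sizes are illustrative, not the study's data):

```python
import numpy as np

def cv(values):
    """Coefficient of variation: sample SD as a fraction of the mean."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# Hypothetical per-individual mean trait sizes (mm) for one species.
flower_lengths = [12.1, 12.3, 11.9, 12.2, 12.0, 12.4]
leaf_lengths   = [55.0, 61.2, 48.7, 70.3, 52.9, 66.1]

cv_flower = cv(flower_lengths)  # small: flowers nearly uniform
cv_leaf = cv(leaf_lengths)      # large: leaves vary freely
assert cv_flower < cv_leaf      # the pattern the PMSS hypothesis predicts
```

In the study itself, such among- and within-individual variation measures would be compared across the 15 zygomorphic and 13 actinomorphic species rather than within a single invented sample.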
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators for optimizing the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can be used as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
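The core control loop described above maps monitored indicators against service level objectives to reach a scaling decision. A minimal sketch of that idea, assuming hypothetical metric names and thresholds (this is an illustration, not the paper's actual implementation):

```python
# Sketch of SLO-driven scaling; metric names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class ServiceLevelObjective:
    metric: str       # performance indicator, e.g. "response_time_ms"
    threshold: float  # violation boundary derived from the SLA

def scaling_decision(metrics: dict, slos: list, headroom: float = 0.8) -> int:
    """Return +1 (scale out), -1 (scale in), or 0 (hold).

    Scale out on any SLO violation; scale in only when every
    indicator sits comfortably below its threshold.
    """
    if any(metrics[s.metric] > s.threshold for s in slos):
        return +1
    if all(metrics[s.metric] < headroom * s.threshold for s in slos):
        return -1
    return 0

slos = [ServiceLevelObjective("response_time_ms", 200.0),
        ServiceLevelObjective("cpu_load", 0.9)]
assert scaling_decision({"response_time_ms": 250.0, "cpu_load": 0.5}, slos) == +1
assert scaling_decision({"response_time_ms": 100.0, "cpu_load": 0.4}, slos) == -1
```

The refined SLOs the paper derives from indicator correlations would enter such a loop as additional `ServiceLevelObjective` entries on the correlated metrics.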
Abstract:
Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected.
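The measurement-based aggregation described above is essentially a population-weighted mean over municipalities. A minimal sketch, with invented municipality values (the study's own floor correction and real municipal data are not reproduced here):

```python
# Population-weighted average of municipality radon means; numbers invented.
def population_weighted_mean(levels, populations):
    """Weighted average of municipality mean radon levels (Bq/m³)."""
    total_pop = sum(populations)
    return sum(l * p for l, p in zip(levels, populations)) / total_pop

municipal_means = [60.0, 95.0, 120.0]   # Bq/m³, already floor-corrected
populations     = [40000, 15000, 5000]
mean_exposure = population_weighted_mean(municipal_means, populations)
# Populous low-radon municipalities dominate the national mean.
```

This makes visible why the approach is simple yet sensitive to how measurement sites are distributed: the weights come from population counts, not from the sampling design.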
Abstract:
We have recently derived a factorization formula for the Higgs-boson production cross section in the presence of a jet veto, which allows for a systematic resummation of large Sudakov logarithms of the form α_s^n ln^m(p_T^veto/m_H), along with the large virtual corrections known to affect also the total cross section. Here we determine the ingredients entering this formula at two-loop accuracy. Specifically, we compute the dependence on the jet-radius parameter R, which is encoded in the two-loop coefficient of the collinear anomaly, by means of a direct, fully analytic calculation in the framework of soft-collinear effective theory. We confirm the result obtained by Banfi et al. from a related calculation in QCD, and demonstrate that factorization-breaking, soft-collinear mixing effects do not arise at leading power in p_T^veto/m_H, even for R = O(1). In addition, we extract the two-loop collinear beam functions numerically. We present detailed numerical predictions for the jet-veto cross section with partial next-to-next-to-next-to-leading logarithmic accuracy, matched to the next-to-next-to-leading order cross section in fixed-order perturbation theory. The only missing ingredients at this level of accuracy are the three-loop anomaly coefficient and the four-loop cusp anomalous dimension, whose numerical effects we estimate to be small.
Abstract:
The aim of this study was to evaluate the ability of dual energy X-ray absorptiometry (DXA) areal bone mineral density (aBMD), measured in different regions of the proximal part of the human femur, to predict the mechanical properties of matched proximal femora tested in two different loading configurations. Thirty-six pairs of fresh frozen femora were DXA scanned and tested until failure in two loading configurations: a fall on the side or one-legged standing. The ability of the DXA output from four different regions of the proximal femur to predict the femoral mechanical properties was measured and compared for the two loading scenarios. The femoral neck DXA BMD was best correlated to the femoral ultimate force for both configurations and predicted the femoral failure load significantly better in the side-fall configuration than in the standing configuration (R²=0.80 vs. R²=0.66, P<0.05). Conversely, the work to failure was predicted similarly for both loading configurations (R²=0.54 vs. R²=0.53, P>0.05). Therefore, neck BMD should be considered one of the key factors for discriminating femoral fracture risk in vivo. Moreover, the better predictive ability of neck BMD for femoral strength when tested in a fall rather than a one-legged stance configuration suggests that DXA's clinical relevance may not be as high for spontaneous femoral fractures as for fractures associated with a fall.
Abstract:
Based on the results from detailed structural and petrological characterisation and on up-scaled laboratory values for sorption and diffusion, blind predictions were made for the STT1 dipole tracer test performed in the Swedish Äspö Hard Rock Laboratory. The tracers used were nonsorbing, such as uranine and tritiated water, weakly sorbing 22Na+, 85Sr2+, 47Ca2+, and more strongly sorbing 86Rb+, 133Ba2+, 137Cs+. Our model consists of two parts: (1) a flow part based on a 2D-streamtube formalism accounting for the natural background flow field and with an underlying homogeneous and isotropic transmissivity field and (2) a transport part in terms of the dual porosity medium approach, which is linked to the flow part by the flow porosity. The calibration of the model was done using the data from one single uranine breakthrough (PDT3). The study clearly showed that matrix diffusion into a highly porous material, fault gouge, had to be included in our model, as evidenced by the characteristic shape of the breakthrough curve and in line with geological observations. After the disclosure of the measurements, it turned out that, in spite of the simplicity of our model, the prediction for the nonsorbing and weakly sorbing tracers was fairly good. The blind prediction for the more strongly sorbing tracers was in general less accurate. The reason for the good predictions is deemed to be the choice of a model structure strongly based on geological observation. The breakthrough curves were inversely modelled to determine in situ values for the transport parameters and to draw consequences on the model structure applied. For good fits, only one additional fracture family in contact with cataclasite had to be taken into account, but no new transport mechanisms had to be invoked. The in situ values for the effective diffusion coefficient for fault gouge are a factor of 2–15 larger than the laboratory data.
For cataclasite, both data sets have values comparable to laboratory data. The extracted Kd values for the weakly sorbing tracers are larger than Swedish laboratory data by a factor of 25–60, but agree within a factor of 3–5 for the more strongly sorbing nuclides. The reason for the inconsistency concerning the Kd values is the use of fresh granite in the laboratory studies, whereas tracers in the field experiments interact only with fracture fault gouge and, to a lesser extent, with cataclasite, both being mineralogically very different (e.g. clay-bearing) from the intact wall rock.
Abstract:
Quantitative computer tomography (QCT)-based finite element (FE) models of the vertebral body provide better prediction of vertebral strength than dual energy X-ray absorptiometry. However, most models were validated against compression of vertebral bodies with endplates embedded in polymethylmethacrylate (PMMA). Yet, since loading is as important as bone density, the absence of the intervertebral disc (IVD) affects the measured strength. Accordingly, the aim was to assess the strength predictions of the classic FE models (vertebral body embedded) against the in vitro and in silico strengths of vertebral bodies loaded via IVDs. High-resolution peripheral QCT (HR-pQCT) scans were performed on 13 segments (T11/T12/L1). T11 and L1 were augmented with PMMA and the samples were tested under a 4° wedge compression until failure of T12. A specimen-specific model was generated for each T12 from the HR-pQCT data. Two FE sets were created: FE-PMMA refers to the classical vertebral body embedded model under axial compression; FE-IVD to loading via a hyperelastic IVD model under the wedge compression as conducted experimentally. Results showed that FE-PMMA models overestimated the experimental strength, and their strength prediction was satisfactory considering the different experimental set-up. On the other hand, the FE-IVD models did not prove significantly better (Exp/FE-PMMA: R²=0.68; Exp/FE-IVD: R²=0.71, p=0.84). In conclusion, FE-PMMA correlates well with the in vitro strength of human vertebral bodies loaded via real IVDs, and FE-IVD models with hyperelastic IVDs do not significantly improve this correlation. Therefore, it seems not worth adding the IVDs to vertebral body models until fully validated patient-specific IVD models become available.
Abstract:
Finite element analysis is an accepted method to predict vertebral body compressive strength. This study compares measurements obtained from in vitro tests with those from two different simulation models: clinical quantitative computer tomography (QCT)-based homogenized finite element (hFE) models and pre-clinical high-resolution peripheral QCT-based (HR-pQCT) hFE models. Thirty-seven vertebral body sections were prepared by removing end-plates and posterior elements, scanned with QCT (390/450 μm voxel size) as well as HR-pQCT (82 μm voxel size), and tested in compression up to failure. Non-linear viscous damage hFE models were created from the QCT/HR-pQCT images and compared to experimental results based on stiffness and ultimate load. As expected, the predictability of QCT/HR-pQCT-based hFE models for both apparent stiffness (r²=0.685/0.801) and strength (r²=0.774/0.924) increased when a better image resolution was used. An analysis of the damage distribution showed similar damage locations for all cases. In conclusion, HR-pQCT-based hFE models increased the predictability considerably and do not need any tuning of input parameters. In contrast, QCT-based hFE models usually need some tuning but are clinically the only possible choice at the moment.
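The r² comparisons reported in the last two abstracts quantify how well model-predicted strengths track experimental ones. A minimal sketch of one common way to compute such a coefficient of determination, with invented ultimate loads (not the study's data; the paper's exact regression procedure may differ):

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination of predictions against measurements."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented ultimate loads (kN) for five specimens.
measured   = [3.1, 4.5, 2.8, 5.0, 3.9]
hfe_qct    = [2.9, 4.0, 3.2, 4.6, 3.5]   # coarser-resolution model
hfe_hrpqct = [3.0, 4.4, 2.9, 4.9, 3.8]   # higher-resolution model

# Higher image resolution -> predictions closer to experiment -> higher r².
assert r_squared(measured, hfe_hrpqct) > r_squared(measured, hfe_qct)
```

A perfect model gives r² = 1; systematic over- or underestimation shows up as residuals and pulls the value down.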
Abstract:
Compared to μ→eγ and μ→eee, the process of μ→e conversion in nuclei receives enhanced contributions from Higgs-induced lepton flavor violation. Upcoming μ→e conversion experiments with drastically increased sensitivity will be able to put extremely stringent bounds on Higgs-mediated μ→e transitions. We point out that the theoretical uncertainties associated with these Higgs effects, encoded in the couplings of quark scalar operators to the nucleon, can be accurately assessed using our recently developed approach based on SU(2) chiral perturbation theory that cleanly separates two- and three-flavor observables. We emphasize that with input from lattice QCD for the strangeness coupling f_N^s, hadronic uncertainties are appreciably reduced compared to the traditional approach where f_N^s is determined from the pion-nucleon σ term by means of an SU(3) relation. We illustrate this point by considering Higgs-mediated lepton flavor violation in the standard model supplemented with higher-dimensional operators, the two-Higgs-doublet model with generic Yukawa couplings, and the minimal supersymmetric standard model. Furthermore, we compare bounds from present and future μ→e conversion and μ→eγ experiments.
Abstract:
Weak radiative decays of the B mesons belong to the most important flavor changing processes that provide constraints on physics at the TeV scale. In the derivation of such constraints, accurate standard model predictions for the inclusive branching ratios play a crucial role. In the current Letter we present an update of these predictions, incorporating all our results for the O(α_s²) and lower-order perturbative corrections that have been calculated after 2006. New estimates of nonperturbative effects are taken into account, too. For the CP- and isospin-averaged branching ratios, we find B_sγ = (3.36 ± 0.23) × 10⁻⁴ and B_dγ = (1.73 +0.12/−0.22) × 10⁻⁵, for E_γ > 1.6 GeV. Both results remain in agreement with the current experimental averages. Normalizing their sum to the inclusive semileptonic branching ratio, we obtain R_γ ≡ (B_sγ + B_dγ)/B_cℓν = (3.31 ± 0.22) × 10⁻³. A new bound from B_sγ on the charged Higgs boson mass in the two-Higgs-doublet model II reads M_H± > 480 GeV at 95% C.L.