63 results for Weak Greedy Algorithms
Abstract:
Cloud computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
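As a rough illustration of the predictive approach described above, the sketch below fits a simple autoregressive model to recent response-time samples and triggers a scale-out when either the current sample or the forecast violates an SLA threshold. All names (sla_limit_ms, history, order) and the AR-based trigger are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def ar_forecast(history, order=3):
    """Least-squares AR(order) one-step-ahead forecast of a metric series.
    Illustrative only -- the paper's prediction mechanism is not specified here."""
    x = np.asarray(history, dtype=float)
    if len(x) <= order:
        return float(x[-1])  # too few samples: fall back to the last observation
    # Lagged design matrix: rows of [x[t-1], ..., x[t-order]] predicting x[t]
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    X, y = np.array(rows), x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(coeffs @ x[-1:-order - 1:-1])

def needs_scale_out(history, sla_limit_ms, order=3):
    """Reactive check on the latest sample plus a predictive check on the forecast."""
    reactive = history[-1] > sla_limit_ms
    predictive = ar_forecast(history, order) > sla_limit_ms
    return reactive or predictive

# Hypothetical usage: recent mean response times (ms) of a dEIS tier
samples = [180, 190, 210, 240, 280, 330]
print(needs_scale_out(samples, sla_limit_ms=400))  # forecast may breach the SLA before samples do
```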
Abstract:
OBJECTIVES In this phantom CT study, we investigated whether images reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) with reduced tube voltage and current have equivalent quality. We evaluated the effects of different acquisition and reconstruction parameter settings on image quality and radiation dose. Additionally, patient CT studies were evaluated to confirm our phantom results. METHODS Helical and axial 256 multi-slice computed tomography scans of the phantom (Catphan®) were performed with varying tube voltages (80–140 kV) and currents (30–200 mAs). 198 phantom data sets were reconstructed applying FBP and IR with increasing numbers of iterations, and soft and sharp kernels. Further, 25 chest and abdomen CT scans, performed with high and low exposure per patient, were reconstructed with IR and FBP. Two independent observers evaluated image quality and radiation dose of both phantom and patient scans. RESULTS In phantom scans, noise reduction was significantly improved using IR with increasing iterations, independently of tissue, scan mode, tube voltage, current, and kernel. IR did not affect high-contrast resolution. Low-contrast resolution was also not negatively affected, and even improved in scans with doses <5 mGy, although object detectability generally decreased with lower exposure. At comparable image quality levels, CTDIvol was reduced by 26–50% using IR. In patients, applying IR vs. FBP resulted in good to excellent image quality, while tube voltage and current settings could be significantly decreased. CONCLUSIONS Our phantom experiments demonstrate that the image quality levels of FBP reconstructions can also be achieved at lower tube voltages and tube currents when applying IR. Our findings were confirmed in patients, revealing the potential of IR to significantly reduce CT radiation dose.
Abstract:
A measurement of the B⁰_s → J/ψϕ decay parameters, updated to include flavor tagging, is reported using 4.9 fb⁻¹ of integrated luminosity collected by the ATLAS detector from √s = 7 TeV pp collisions recorded in 2011 at the LHC. The values measured for the physical parameters are:
ϕ_s = 0.12 ± 0.25 (stat) ± 0.05 (syst) rad
ΔΓ_s = 0.053 ± 0.021 (stat) ± 0.010 (syst) ps⁻¹
Γ_s = 0.677 ± 0.007 (stat) ± 0.004 (syst) ps⁻¹
|A_∥(0)|² = 0.220 ± 0.008 (stat) ± 0.009 (syst)
|A_0(0)|² = 0.529 ± 0.006 (stat) ± 0.012 (syst)
δ_⊥ = 3.89 ± 0.47 (stat) ± 0.11 (syst) rad
where the parameter ΔΓ_s is constrained to be positive. The S-wave contribution was measured and found to be compatible with zero. Results for ϕ_s and ΔΓ_s are also presented as 68% and 95% likelihood contours, which show agreement with the Standard Model expectations.
Abstract:
Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here we present a graphics processing unit (GPU) based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite impulse response, infinite impulse response, moving mean, and moving median filters. Individual filter parameters were optimized with respect to the SBR increase, based on ECGs from the PhysioNet database superimposed with auto-regressively modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in the case of high input SBR. However, the parallelized wavelet filter is processed 500 and 4 times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 mega-samples that is filtered as an entire unit, wavelet filtering of a 7-day high-resolution ECG is computed in less than 3 seconds. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, strongly reducing the computational burden.
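As a simple illustration of one of the filter classes compared above, the sketch below removes baseline wander by subtracting a moving-median estimate of the baseline from the signal. The window length, sampling rate, and synthetic signal are illustrative assumptions, and this CPU version does not reflect the paper's GPU parallelization.

```python
import numpy as np
from scipy.signal import medfilt

def remove_baseline_moving_median(ecg, fs, window_s=0.6):
    """Subtract a moving-median baseline estimate from an ECG signal.

    ecg      : 1-D array of samples
    fs       : sampling rate in Hz
    window_s : median window in seconds (illustrative choice, longer than a QRS complex)
    """
    kernel = int(window_s * fs)
    if kernel % 2 == 0:          # medfilt requires an odd kernel size
        kernel += 1
    baseline = medfilt(ecg, kernel_size=kernel)
    return ecg - baseline

# Hypothetical usage: 10 s of synthetic signal with slow baseline drift
fs = 500
t = np.arange(0, 10, 1 / fs)
drift = 0.3 * np.sin(2 * np.pi * 0.2 * t)         # slow baseline wander
ecg = drift + 0.05 * np.random.randn(t.size)      # stand-in for an ECG trace
clean = remove_baseline_moving_median(ecg, fs)
```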
Abstract:
A well developed theoretical framework is available in which paleofluid properties, such as chemical composition and density, can be reconstructed from fluid inclusions in minerals that have undergone no ductile deformation. The present study extends this framework to encompass fluid inclusions hosted by quartz that has undergone weak ductile deformation following fluid entrapment. Recent experiments have shown that such deformation causes inclusions to become dismembered into clusters of irregularly shaped relict inclusions surrounded by planar arrays of tiny, new-formed (neonate) inclusions. Comparison of the experimental samples with a naturally sheared quartz vein from Grimsel Pass, Aar Massif, Central Alps, Switzerland, reveals striking similarities. This strong concordance justifies applying the experimentally derived rules of fluid inclusion behaviour to nature. Thus, planar arrays of dismembered inclusions defining cleavage planes in quartz may be taken as diagnostic of small amounts of intracrystalline strain. Deformed inclusions preserve their pre-deformation concentration ratios of gases to electrolytes, but their H2O contents typically have changed. Morphologically intact inclusions, in contrast, preserve the pre-deformation composition and density of their originally trapped fluid. The orientation of the maximum principal compressive stress (σ1) at the time of shear deformation can be derived from the pole to the cleavage plane within which the dismembered inclusions are aligned. Finally, the density of neonate inclusions is commensurate with the pressure value of σ1 at the temperature and time of deformation. This last rule offers a means to estimate magnitudes of shear stresses from fluid inclusion studies. Application of this new paleopiezometer approach to the Grimsel vein yields a differential stress (σ1–σ3) of ∼300 MPa at 390 ± 30 °C during late Miocene NNW–SSE orogenic shortening and regional uplift of the Aar Massif. This differential stress resulted in strain-hardening of the quartz at very low total strain (<5%) while nearby shear zones were accommodating significant displacements. Further implementation of these experimentally derived rules should provide new insight into processes of fluid–rock interaction in the ductile regime within the Earth's crust.
Abstract:
In this paper we continue Feferman's unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996), which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates, and principles concerning them that are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without the Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which was recently introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014), and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012).
Abstract:
In the fermion loop formulation the contributions to the partition function naturally separate into topological equivalence classes with a definite sign. This separation forms the basis for an efficient fermion simulation algorithm using a fluctuating open fermion string. It guarantees sufficient tunnelling between the topological sectors, and hence provides a solution to the fermion sign problem affecting systems with broken supersymmetry. Moreover, the algorithm shows no critical slowing down even in the massless limit and can hence handle the massless Goldstino mode emerging in the supersymmetry broken phase. In this paper – the third in a series of three – we present the details of the simulation algorithm and demonstrate its efficiency by means of a few examples.
Abstract:
Weak radiative decays of the B mesons belong to the most important flavor changing processes that provide constraints on physics at the TeV scale. In the derivation of such constraints, accurate Standard Model predictions for the inclusive branching ratios play a crucial role. In the current Letter we present an update of these predictions, incorporating all our results for the O(α_s²) and lower-order perturbative corrections that have been calculated after 2006. New estimates of nonperturbative effects are taken into account, too. For the CP- and isospin-averaged branching ratios, we find B_sγ = (3.36 ± 0.23) × 10⁻⁴ and B_dγ = (1.73 +0.12/−0.22) × 10⁻⁵, for E_γ > 1.6 GeV. Both results remain in agreement with the current experimental averages. Normalizing their sum to the inclusive semileptonic branching ratio, we obtain R_γ ≡ (B_sγ + B_dγ)/B_cℓν = (3.31 ± 0.22) × 10⁻³. A new bound from B_sγ on the charged Higgs boson mass in the two-Higgs-doublet model II reads M_H± > 480 GeV at 95% C.L.
Abstract:
We present new algorithms for M-estimators of multivariate scatter and location and for symmetrized M-estimators of multivariate scatter. The new algorithms are considerably faster than currently used fixed-point and related algorithms. The main idea is to utilize a second order Taylor expansion of the target functional and to devise a partial Newton-Raphson procedure. In connection with symmetrized M-estimators we work with incomplete U-statistics to accelerate our procedures initially.
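For context, the sketch below implements the classical fixed-point iteration for Tyler's M-estimator of scatter, i.e. the kind of baseline procedure that the partial Newton-Raphson approach above is designed to outperform. The weight function, stopping rule, and test data are illustrative assumptions, not the paper's accelerated algorithm.

```python
import numpy as np

def tyler_scatter_fixed_point(X, max_iter=200, tol=1e-8):
    """Fixed-point iteration for Tyler's M-estimator of scatter.

    X : (n, p) data matrix, assumed centered (location known or subtracted).
    Returns a (p, p) scatter matrix normalized to trace p.
    This is the classical baseline iteration, not the paper's accelerated algorithm.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # Squared Mahalanobis distances d_i = x_i' Sigma^{-1} x_i
        d = np.einsum('ij,jk,ik->i', X, inv, X)
        weights = p / d                           # Tyler weights w(d) = p / d
        new_sigma = (X.T * weights) @ X / n       # weighted sample scatter
        new_sigma *= p / np.trace(new_sigma)      # fix the scale (trace = p)
        if np.linalg.norm(new_sigma - sigma, ord='fro') < tol:
            return new_sigma
        sigma = new_sigma
    return sigma

# Hypothetical usage on heavy-tailed data
rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=(500, 4))
S = tyler_scatter_fixed_point(X - X.mean(axis=0))
```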
Abstract:
The interaction of a comet with the solar wind undergoes various stages as the comet's activity varies along its orbit. For a comet like 67P/Churyumov–Gerasimenko, the target comet of ESA's Rosetta mission, the various features include the formation of a Mach cone, the bow shock, and, close to perihelion, even a diamagnetic cavity. There are different approaches to simulate this complex interplay between the solar wind and the comet's extended neutral gas coma, which include magnetohydrodynamics (MHD) and hybrid-type models. The former treats the plasma as fluids (one fluid in basic single-fluid MHD) and the latter treats the ions as individual particles under the influence of the local electric and magnetic fields. The electrons are treated as a charge-neutralizing fluid in both cases. Given the different approaches, the two models yield different results, in particular for a low-production-rate comet. In this paper we show that these differences can be reduced by using a multi-fluid instead of a single-fluid MHD model and by increasing the resolution of the hybrid model. We show that some major features obtained with a hybrid-type approach, such as the gyration of the cometary heavy ions and the formation of the Mach cone, can be partially reproduced with the multi-fluid model.
Abstract:
This paper presents a shallow dialogue analysis model, aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for solving cross-modal references to meeting documents. The methods are trained and evaluated using a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
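As a rough sketch of one such component, the code below trains a maximum entropy classifier (multinomial logistic regression over bag-of-words features) for dialogue act labels. The toy utterances, labels, and feature choices are illustrative assumptions and do not reproduce the paper's feature set or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy utterances with hypothetical dialogue act labels
utterances = [
    "we should postpone the meeting", "could you share the slides",
    "yes that works for me", "the budget was approved last week",
    "what time does the review start", "agreed let's do that",
]
acts = ["statement", "question", "agreement",
        "statement", "question", "agreement"]

# Logistic regression with a multinomial objective is a maximum entropy classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(utterances, acts)
print(model.predict(["can we move this to friday"]))
```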
Abstract:
BACKGROUND Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM Our aim was to challenge the validity of these software algorithms. METHODS We compared multiple breath washout (MBW) results of the current software algorithms (automatic modus) to refined algorithms (manual modus) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between these two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS In infants with CF, the refined manual modus revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 using the automatic modus (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values using both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant reason for falsely normal LCI values in infants with CF using the automatic modus was incorrect recognition of the end of test, which was detected too early during the washout. CONCLUSION We recommend the use of the manual modus for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes.
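To make the end-of-test issue concrete, the sketch below computes LCI as cumulative expired volume divided by FRC at the first breath whose end-tidal tracer concentration falls below 1/40 of the starting concentration. The breath-table format, the simple mass-balance FRC estimate, and the requirement that the threshold be sustained over several breaths are illustrative assumptions, not the vendor's or the authors' algorithm.

```python
def lung_clearance_index(breaths, c_start, sustain=3):
    """Compute LCI from a multiple-breath washout.

    breaths : list of (expired_volume_L, end_tidal_tracer_fraction) per breath
    c_start : tracer fraction at the start of the washout
    sustain : consecutive breaths below 1/40 of c_start required to accept the
              end of test (guards against declaring the end of test too early)
    Returns (LCI, FRC estimate in L) or None if the end of test is never reached.
    """
    threshold = c_start / 40.0
    cumulative_expired = 0.0
    washed_out_tracer = 0.0
    below = 0
    for volume, concentration in breaths:
        cumulative_expired += volume
        washed_out_tracer += volume * concentration
        below = below + 1 if concentration < threshold else 0
        if below >= sustain:
            frc = washed_out_tracer / c_start   # simple mass-balance FRC estimate
            return cumulative_expired / frc, frc
    return None  # washout never reliably reached 1/40 of the start concentration

# Hypothetical usage with synthetic breath data
breaths = [(0.05, 0.04 * 0.8 ** i) for i in range(40)]
print(lung_clearance_index(breaths, c_start=0.04))
```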