49 results for Inversion algorithms
Abstract:
Several strategies relying on kriging have recently been proposed for adaptively estimating contour lines and excursion sets of functions under a severely limited evaluation budget. The recently released R package KrigInv is presented; it offers a sound implementation of various sampling criteria for these kinds of inverse problems. KrigInv is based on the DiceKriging package and thus benefits from a number of options concerning the underlying kriging models. Six implemented sampling criteria are detailed in a tutorial and illustrated with graphical examples, and the different functionalities of KrigInv are explained step by step. Additionally, two recently proposed criteria for batch-sequential inversion are presented, enabling advanced users to distribute function evaluations in parallel on clusters or clouds of machines. Finally, auxiliary problems are discussed, including the fine-tuning of the numerical integration and optimization procedures used within the computation and optimization of the considered criteria.
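KrigInv itself is an R package; as a language-neutral illustration, the following Python sketch shows the general shape of a kriging-based sequential inversion loop of the kind such sampling criteria drive. The test function, threshold, and misclassification-style criterion p(1 - p) are illustrative assumptions, not KrigInv's actual implementation.

```python
# Minimal sketch of kriging-based sequential inversion (hypothetical setup,
# not KrigInv code): refit a GP surrogate, score candidate points by how
# uncertain their membership in the excursion set {x : f(x) > T} is, and
# evaluate the most uncertain point next.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for the expensive black box
T = 0.5                                        # excursion threshold of interest
X = np.array([[0.1], [0.5], [0.9]])            # small initial design
y = f(X).ravel()
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)

for _ in range(10):                            # severely limited evaluation budget
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8).fit(X, y)
    m, s = gp.predict(grid, return_std=True)
    p = norm.cdf((m - T) / np.maximum(s, 1e-12))   # P(f(x) > T) under the GP
    crit = p * (1 - p)                             # largest where membership is uncertain
    x_new = grid[[np.argmax(crit)]]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8).fit(X, y)
m, s = gp.predict(grid, return_std=True)
print("estimated excursion-set volume:",
      float(np.mean(norm.cdf((m - T) / np.maximum(s, 1e-12)))))
```

Batch-sequential criteria such as the two mentioned in the abstract generalize the argmax step to selecting several points at once, which is what allows the evaluations to be distributed across machines.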
Abstract:
The 222Rn (radon) tracer method is a powerful tool for estimating local and regional surface emissions of, e.g., greenhouse gases. In this paper we demonstrate that, in practice, the method as commonly used produces inaccurate results in the case of inhomogeneously distributed emission sources, and we propose a different approach to account for this. We have applied the new methodology to ambient observations of CO2 and 222Rn to estimate CO2 surface emissions for the city of Bern, Switzerland. Furthermore, by utilizing combined measurements of CO2 and δ(O2/N2) we obtain valuable information about the spatial and temporal variability of the main emission sources. Mean net CO2 emissions based on 2 years of observations are estimated at (11.2 ± 2.9) kt km⁻² a⁻¹. Oxidative ratios indicate a significant influence from the regional biosphere in summer/spring and from fossil fuel combustion in winter/autumn. Our data indicate that the fossil fuel emissions are, to a large degree, related to the combustion of natural gas used for heating.
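For intuition, the classical (uncorrected) form of the radon tracer method reduces to scaling an assumed 222Rn surface flux by the regression slope of co-measured CO2 against 222Rn, as in the hedged Python sketch below. All numbers are synthetic, and the paper's point is precisely that this simple form breaks down when emission sources are inhomogeneously distributed.

```python
# Illustrative-only sketch of the classical radon tracer relation
# F_CO2 = F_Rn * (dCO2/dRn); synthetic data and a made-up radon flux.
import numpy as np

rng = np.random.default_rng(1)
rn = rng.uniform(1.0, 8.0, 50)                     # nighttime 222Rn activity (Bq m^-3)
co2 = 410.0 + 2.5 * rn + rng.normal(0.0, 1.0, 50)  # CO2 mole fraction (ppm)

slope = np.polyfit(rn, co2, 1)[0]   # dCO2/dRn from a least-squares fit
F_RN = 50.0                         # assumed 222Rn surface flux (Bq m^-2 h^-1)
f_co2 = F_RN * slope                # CO2 flux in ppm m h^-1; the conversion to
                                    # mass-flux units such as kt km^-2 a^-1 is omitted
print(f"dCO2/dRn slope: {slope:.2f} ppm per Bq m^-3")
print(f"CO2 flux before unit conversion: {f_co2:.0f} ppm m h^-1")
```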
Abstract:
The variability of results from different automated methods for detecting and tracking extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same dataset: the period 1989-2009 of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim). This experiment is part of the community project Intercomparison of Mid Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, the methods agree well on the geographical distribution in large oceanic regions, the interannual variability of cyclone numbers, the geographical patterns of strong trends, and the distribution shape of many life-cycle characteristics. In contrast, the largest disparities concern the total number of cyclones, the detection of weak cyclones, and the distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust across methods, but considerable differences exist during the development and dissolution phases.
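To make concrete why the choice of method matters, the sketch below implements one deliberately simple detection rule: cyclone centres as local minima of mean sea-level pressure (MSLP) that are sufficiently deep relative to the domain mean. The window size and depth threshold are arbitrary assumptions, and each of the fifteen participating algorithms makes different choices of exactly this kind.

```python
# Hypothetical toy detector, not one of the IMILAST methods: flag grid points
# that are local minima of MSLP and deeper than the domain mean by `depth` Pa.
import numpy as np
from scipy.ndimage import minimum_filter

def detect_centres(mslp, window=5, depth=200.0):
    """Return (row, col) indices of candidate cyclone centres."""
    local_min = mslp == minimum_filter(mslp, size=window)
    deep = mslp < mslp.mean() - depth     # crude depth criterion
    return np.argwhere(local_min & deep)

# Synthetic MSLP field (Pa) with one idealised low at (row 40, col 60).
r, c = np.mgrid[0:100, 0:100]
mslp = 101325.0 - 2000.0 * np.exp(-((c - 60) ** 2 + (r - 40) ** 2) / 200.0)
print(detect_centres(mslp))               # -> [[40 60]]
```

Raising `depth` suppresses weak systems entirely, which mirrors why the largest spread between methods is found for shallow cyclones.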
Abstract:
Dynamic systems, especially in real-life applications, are often characterized by inter-/intra-individual variability, uncertainties and time-varying components. Physiological systems are probably the most representative example: population variability, vital-signal measurement noise and uncertain dynamics render their explicit representation and optimization a rather difficult task. Systems facing such challenges often require adaptive algorithmic solutions able to perform an iterative structural and/or parametrical update towards optimized behavior. Adaptive optimization offers the advantages of (i) individualization through learning of basic system characteristics, (ii) the ability to follow time-varying dynamics and (iii) low computational cost. In this chapter, the use of online adaptive algorithms is investigated in two basic research areas related to diabetes management: (i) real-time glucose regulation and (ii) real-time prediction of hypo-/hyperglycemia. The applicability of these methods is illustrated through the design and development of an adaptive glucose control algorithm based on reinforcement learning and optimal control, and of an adaptive, personalized early-warning system for the recognition of, and alarm generation against, hypo- and hyperglycemic events.
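As a generic illustration of the adaptive idea only (not the reinforcement-learning controller developed in the chapter), the following Python sketch applies a simple online update rule that nudges a hypothetical basal insulin rate towards a glucose target after every measurement; the target, gain, and data are all made up.

```python
# Minimal online-adaptation sketch (illustrative only, not a medical algorithm):
# a stochastic-approximation update of a basal insulin rate from CGM readings.
def adaptive_basal(glucose_stream, target=110.0, rate=0.5, lr=0.002):
    """Yield an updated basal rate (U/h) after each glucose reading (mg/dL)."""
    for g in glucose_stream:
        error = g - target                   # positive -> glucose above target
        rate = max(0.0, rate + lr * error)   # raise insulin when high, lower when low
        yield rate

readings = [180, 160, 150, 130, 115, 105, 95]   # synthetic CGM trace
for r in adaptive_basal(readings):
    print(f"basal rate -> {r:.3f} U/h")
```

Even this toy exhibits the three advantages listed above: the parameter is individualized by the incoming data stream, it tracks drift over time, and each update costs only a handful of arithmetic operations.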
Abstract:
In the antisaccade task, subjects are required to suppress a reflexive saccade towards a visual target and to perform a saccade towards the opposite side instead. In addition, to reproduce an accurate saccadic amplitude, the visual saccade vector (i.e., the distance between a central fixation point and the peripheral target) must be exactly inverted from one visual hemifield to the other. Results from recent studies using a correlational approach (e.g., fMRI, MEG) suggest that not only the posterior parietal cortex (PPC) but also the frontal eye field (FEF) might play an important role in this visual vector inversion process. To assess whether the FEF contributes to visual vector inversion, we applied an interference approach with continuous theta burst stimulation (cTBS) during a memory-guided antisaccade task. In 10 healthy subjects, one train of cTBS was applied over the right FEF prior to the task. Compared to performance without stimulation or with sham stimulation, cTBS over the right FEF induced a hypometric gain for rightward but not leftward antisaccades. These results, obtained with an interference approach, confirm that the FEF is also involved in the process of visual vector inversion.
Abstract:
Background Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e. based on their window periods, we now determined those windows. Methods We classified the Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months' duration according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e. the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined utilizing the relationship 'Prevalence = Incidence × Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR, also derived from Inno-Lia results but utilizing the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. Results Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R² = 0.962; P < 0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 obtained for the performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. Conclusions IIR estimates from window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
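The two window-based computations described above can be stated compactly; the hedged Python sketch below reproduces their structure with entirely synthetic numbers (the real analysis used 527 patients and 25 algorithms).

```python
# Illustrative reconstruction of the two steps described above, on made-up data.
import numpy as np

# (i) Window period: linear fit of the proportion ruled incident vs. time,
#     extrapolated to the day at which that proportion reaches zero.
t = np.array([30, 60, 90, 120, 150, 180])                    # days since infection
p_incident = np.array([0.95, 0.80, 0.55, 0.30, 0.10, 0.0])   # fraction ruled incident
slope, intercept = np.polyfit(t, p_incident, 1)
window = -intercept / slope
print(f"window period ~ {window:.0f} days")

# (ii) Window-based IIR via Prevalence = Incidence x Duration.
n_notifications = 748      # size of an annual notification cohort (as in 2005/06)
n_recent = 60              # hypothetical count classified 'recent' by an algorithm
prevalence = n_recent / n_notifications
iir = prevalence / (window / 365.0)    # Incidence = Prevalence / Duration
print(f"window-based IIR ~ {iir:.3f}")
```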
Abstract:
BACKGROUND The diagnostic value of a contrast-enhanced T2-weighted FLAIR sequence (ceFLAIR) in brain imaging is unclear. HYPOTHESIS/OBJECTIVES That the number of brain lesions detected with ceFLAIR would be no greater than the sum of lesions detected with the noncontrast FLAIR (nFLAIR) and contrast-enhanced T1-weighted (ceT1W) sequences. ANIMALS One hundred twenty-nine animals (108 dogs and 21 cats) undergoing magnetic resonance imaging (MRI) of the head between July 2010 and October 2011 were included in the study. METHODS A transverse ceFLAIR was added to a standard brain MRI protocol. The presence and number of lesions were determined from all available MRI sequences by 3 examiners in consensus, and lesion visibility was evaluated for the nFLAIR, ceFLAIR, and ceT1W sequences. RESULTS Eighty-three lesions (58 intra-axial and 25 extra-axial) were identified in 51 patients. Five lesions were detected with nFLAIR alone, 2 with ceT1W alone, and 1 with ceFLAIR alone. Significantly more lesions were detected using ceFLAIR than nFLAIR (76 versus 67 lesions; P = .04), in particular lesions also detected with ceT1W images (53 versus 40; P = .01). There was no significant difference between the number of lesions detected with the combined nFLAIR and ceT1W sequences and the number detected with ceFLAIR (82 versus 76; P = .25). CONCLUSION AND CLINICAL IMPORTANCE Use of ceFLAIR as a complementary sequence to the nFLAIR and ceT1W sequences did not improve detection of brain lesions and cannot be recommended as part of a routine brain MRI protocol in dogs and cats with suspected brain lesions.
Abstract:
Cloud computing has evolved into an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. In total, we compare three SLA-based VM-scaling algorithms (one using prediction mechanisms) on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
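The contrast the results point to can be sketched in a few lines: a reactive rule scales only once a measured response time already violates the SLA, whereas a predictive rule scales when an autoregressive forecast of the next sample would violate it. The AR(2) coefficients, threshold, and latency trace below are illustrative assumptions, not the paper's models.

```python
# Toy reactive vs. predictive SLA-based scaling decision (hypothetical values).
SLA_MS = 200.0  # SLA-specified response-time bound (ms)

def reactive_decision(latencies):
    # Scale out only after an observed SLA violation.
    return "scale-out" if latencies[-1] > SLA_MS else "hold"

def predictive_decision(latencies, a1=1.3, a2=-0.2):
    # Scale out if a one-step AR(2) forecast would violate the SLA.
    forecast = a1 * latencies[-1] + a2 * latencies[-2]
    return "scale-out" if forecast > SLA_MS else "hold"

trace = [120.0, 150.0, 185.0]                     # rising load, still under the SLA
print("reactive:  ", reactive_decision(trace))    # hold: no violation observed yet
print("predictive:", predictive_decision(trace))  # scale-out: forecast ~210 ms
```

Because new VMs take time to provision, acting on the forecast rather than on the observed violation is what lets the predictive variant preserve the performance invariant.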