18 results for Shift-and-add algorithms
at BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The variability of results from different automated methods of detection and tracking of extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same dataset - the 1989-2009 period of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim) data. This experiment is part of the community project Intercomparison of Mid Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, methods agree well on the geographical distribution in large oceanic regions, the interannual variability of cyclone numbers, the geographical patterns of strong trends, and the distribution shape of many life cycle characteristics. In contrast, the largest disparities exist for the total numbers of cyclones, the detection of weak cyclones, and the distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust between methods, but considerable differences exist during the development and dissolution phases.
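Although the fifteen algorithms differ in detail, many detection schemes of this kind share a common core: locating local minima of the sea-level pressure (SLP) field at each time step. Below is a minimal, generic sketch of that step (illustrative only, not any particular team's method; the neighbourhood size and depth threshold are exactly the kinds of choices on which the methods diverge).

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def detect_slp_minima(slp, size=5, depth_threshold=200.0):
    """Return grid indices of candidate cyclone centres in an SLP field.

    slp             : 2-D array of sea-level pressure (Pa) on a regular grid
    size            : side length of the square search neighbourhood (cells)
    depth_threshold : required depth (Pa) below the neighbourhood mean,
                      used to discard weak, noise-level minima
    """
    is_local_min = slp == minimum_filter(slp, size=size)
    # Require the candidate to sit noticeably below its surroundings;
    # weak-cyclone thresholds like this differ between methods.
    deep_enough = (uniform_filter(slp, size=size) - slp) > depth_threshold
    return np.argwhere(is_local_min & deep_enough)
```

The finding that agreement is better for strong cyclones than for shallow ones is consistent with such weak-cyclone thresholds being among the least standardised parts of the pipeline.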
Abstract:
Southern Switzerland is a fire-prone area where fire has to be considered a natural environmental factor. In the past decades, fire frequency has tended to increase due to changes in landscape management. The most common type of fire is surface fire, which normally breaks out during the vegetation resting period. Usually this type of fire shows a short residence time (rapid spread), low to medium fire intensity, and limited size. South-facing slopes are particularly fire-prone, so that very high fire frequencies are possible: under these conditions passively resistant species and post-fire resprouting species are favoured, usually leading to a reduction of the surviving species to a few fire-adapted sprouters. Evergreen broadleaves are extremely sensitive to repeated fires. A simulation of the potential vegetation of southern Switzerland under changed climatic conditions showed that the potential area of spread of forests rich in evergreen broad-leaved species coincides with the most fire-prone area of the region. In the future, therefore, wildfires could play an important regulating role: most probably they will not stop the large-scale laurophyllisation of the thermophilous forests of southern Switzerland, but at sites with high fire frequency the vegetation shift could be slowed or even prevented by fire disturbances.
Abstract:
We present new algorithms for M-estimators of multivariate scatter and location and for symmetrized M-estimators of multivariate scatter. The new algorithms are considerably faster than currently used fixed-point and related algorithms. The main idea is to utilize a second order Taylor expansion of the target functional and to devise a partial Newton-Raphson procedure. In connection with symmetrized M-estimators we work with incomplete U-statistics to accelerate our procedures initially.
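For context, the fixed-point iteration these new algorithms are benchmarked against can be stated in a few lines. Here is a minimal sketch of the classical fixed-point algorithm for Tyler's M-estimator of scatter, a standard member of this family (this is the baseline approach, not the paper's partial Newton-Raphson procedure):

```python
import numpy as np

def tyler_scatter_fixed_point(X, tol=1e-8, max_iter=500):
    """Classical fixed-point iteration for Tyler's M-estimator of scatter.

    X : (n, p) data matrix, assumed already centred at the location estimate.
    Returns a (p, p) scatter matrix normalised to have trace p.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # Squared Mahalanobis distances d_i^2 = x_i^T Sigma^{-1} x_i
        d2 = np.einsum('ij,jk,ik->i', X, inv, X)
        # Fixed-point map: Sigma <- (p/n) * sum_i x_i x_i^T / d_i^2
        new = (p / n) * (X.T * (1.0 / d2)) @ X
        new *= p / np.trace(new)  # fix the scale (trace normalisation)
        if np.linalg.norm(new - sigma, ord='fro') < tol:
            return new
        sigma = new
    return sigma
```

Iterations of this map converge only linearly, which is what motivates a quadratically convergent Newton-type step built from a second-order Taylor expansion of the target functional.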
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
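As a small illustration of the statistical error estimation underlying these methods, the sketch below (a generic example, not taken from the course) allocates an extra sample budget to the pixels whose Monte Carlo mean is still the noisiest:

```python
import numpy as np

def allocate_extra_samples(sample_var, n_so_far, budget):
    """Distribute `budget` additional samples across pixels in proportion
    to the estimated variance of each pixel's Monte Carlo mean.

    sample_var : per-pixel sample variance of the rendered estimates
    n_so_far   : per-pixel count of samples already taken
    """
    error = sample_var / np.maximum(n_so_far, 1)  # variance of the mean
    weights = error / error.sum()
    return np.rint(weights * budget).astype(int)  # extra samples per pixel
```

Production systems refine this loop with image filtering and more careful error estimators, but the adapt-to-estimated-error principle is the same.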
Abstract:
The prognostic relevance of a quantitative intracoronary occlusive electrocardiographic (ECG) ST-segment shift and its determinants have not been investigated in humans. In 765 patients with chronic stable coronary artery disease, the following simultaneous quantitative measurements were obtained during a 1-minute coronary balloon occlusion: intracoronary ECG ST-segment shift (recorded by angioplasty guidewire), mean aortic pressure, mean distal coronary pressure, and mean central venous pressure (CVP). The collateral flow index (CFI) was calculated as (mean distal coronary pressure minus CVP)/(mean aortic pressure minus CVP). During an average follow-up of 50 ± 34 months, the cumulative all-cause mortality rate was significantly lower in the group with an ST-segment shift <0.1 mV (n = 89) than in the group with an ST-segment shift ≥0.1 mV (n = 676, p = 0.0211). Factors independently related to an intracoronary occlusive ECG ST-segment shift <0.1 mV (r² = 0.189, p <0.0001) were a high CFI (p <0.0001), the intracoronary occlusive RR interval (p = 0.0467), the right coronary artery as the ischemic region (p <0.0001), and the absence of arterial hypertension (p = 0.0132). A "high" CFI according to receiver operating characteristics (ROC) analysis was ≥0.217 (area under the ROC curve 0.647, p <0.0001). In conclusion, the absence of an ECG ST-segment shift during brief coronary occlusion in patients with chronic coronary artery disease conveys decreased mortality and is directly influenced by a well-developed collateral supply, by a right rather than left coronary ischemic region, and by the absence of systemic hypertension in the patient's history.
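The collateral flow index used in the study is a simple pressure ratio; a minimal sketch follows (the input values in the example are hypothetical):

```python
def collateral_flow_index(p_distal, p_aortic, cvp):
    """CFI = (mean distal coronary pressure - CVP) /
             (mean aortic pressure - CVP), all pressures in mm Hg."""
    return (p_distal - cvp) / (p_aortic - cvp)

# Hypothetical example: CFI ~ 0.29, above the reported "high" cutoff of 0.217.
cfi = collateral_flow_index(p_distal=32.0, p_aortic=95.0, cvp=6.0)
```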
Abstract:
Exposure to polycyclic aromatic hydrocarbons (PAH) and DNA damage were analyzed in coke oven (n = 37), refractory (n = 96), graphite electrode (n = 26), and converter workers (n = 12), with construction workers (n = 48) serving as referents. PAH exposure was assessed by personal air sampling during the shift and by biological monitoring in post-shift urine (1-hydroxypyrene, 1-OHP, and 1-, 2+9-, 3-, and 4-hydroxyphenanthrenes, ΣOHPHE). DNA damage was measured by 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodGuo) and DNA strand breaks in post-shift blood. Median 1-OHP and ΣOHPHE were highest in converter workers (13.5 and 37.2 µg/g creatinine). The industrial setting contributed to the metabolite concentrations rather than the airborne concentration alone. Other routes of uptake, probably dermal, influenced the associations between airborne concentrations and urinary PAH metabolite levels, making biomonitoring results the preferred parameters for assessing exposure to PAH. DNA damage in terms of 8-oxodGuo and DNA strand breaks was higher in exposed workers than in referents, ranking highest in graphite-electrode production. The type of industry contributed to genotoxic DNA damage, and DNA damage was not unequivocally associated with PAH at the individual level, most likely due to potential contributions of co-exposures.
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must therefore rely on a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model has also proven relevant for assessing other natural hazards such as rockfall, snow avalanches, and floods. The model allows for automatic source area delineation according to user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one that is strictly needed for both source area delineation and propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results; however, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
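For reference, the original Holmgren direction algorithm that the new spreading algorithm improves upon distributes flow from a cell to each lower neighbour in proportion to tan(β)^x, where β is the slope toward that neighbour and x controls the degree of spreading. A minimal sketch of the original weighting (not the modified Flow-R version):

```python
import numpy as np

def holmgren_weights(z_center, z_neighbors, distances, x=4.0):
    """Multiple-flow-direction weights after Holmgren (1994).

    z_neighbors : elevations of the 8 neighbouring cells
    distances   : horizontal distances to those cells (cell size, or
                  cell size * sqrt(2) for diagonal neighbours)
    x           : exponent; x = 1 spreads flow widely, large x tends
                  toward single-direction (D8) flow
    """
    slope = (z_center - z_neighbors) / distances   # tan(beta) to each cell
    slope = np.where(slope > 0.0, slope, 0.0)      # keep downslope cells only
    weighted = slope ** x
    total = weighted.sum()
    return weighted / total if total > 0 else np.zeros_like(weighted)
```

Because these weights react strongly to small elevation differences, raw DEM noise can over-channelize the flow paths; reducing that sensitivity is precisely what the improved version targets.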
Abstract:
A social Semantic Web gives its users simple access to collective Web knowledge; for that reason, controlling online privacy and reputation becomes increasingly important and must be taken seriously. This chapter presents Fuzzy Cognitive Maps (FCM) as a vehicle for Web knowledge aggregation, representation, and reasoning. With this in mind, a conceptual framework for Web knowledge aggregation, representation, and reasoning is introduced along with a use case in which the importance of investigative searching for online privacy and reputation is highlighted. The use case thereby demonstrates how a user can establish a positive online presence.
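To make the reasoning mechanism concrete, here is a minimal sketch of standard FCM inference (the generic textbook scheme, not necessarily the chapter's specific framework): concept activations are repeatedly propagated through a signed weight matrix and squashed with a sigmoid until they stabilise.

```python
import numpy as np

def fcm_infer(W, state, max_iter=100, tol=1e-6):
    """Iterate a Fuzzy Cognitive Map until the activations stabilise.

    W     : (n, n) signed weight matrix, W[i, j] = influence of concept i
            on concept j, with values in [-1, 1]
    state : initial activations of the n concepts, values in [0, 1]
    """
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    for _ in range(max_iter):
        new_state = sigmoid(state @ W)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state
        state = new_state
    return state
```

In a privacy-and-reputation use case, concepts might represent aggregated statements found about a person online, with the weights encoding their mutual reinforcement or contradiction.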
Abstract:
An inherited polyneuropathy (PN) observed in Leonberger dogs has clinical similarities to a genetically heterogeneous group of peripheral neuropathies termed Charcot-Marie-Tooth (CMT) disease in humans. The Leonberger disorder is a severe, juvenile-onset, chronic, progressive, and mixed PN, characterized by exercise intolerance, gait abnormalities and muscle atrophy of the pelvic limbs, as well as inspiratory stridor and dyspnea. We mapped a PN locus in Leonbergers to a 250 kb region on canine chromosome 16 (P_raw = 1.16×10⁻¹⁰, P_genome-corrected = 0.006) utilizing a high-density SNP array. Within this interval is the ARHGEF10 gene, a guanine nucleotide exchange factor for the Rho family of GTPases known to be involved in neuronal growth and axonal migration, and implicated in human hypomyelination. ARHGEF10 sequencing identified a 10 bp deletion in affected dogs that removes four nucleotides from the 3'-end of exon 17 and six nucleotides from the 5'-end of intron 17 (c.1955_1958+6delCACGGTGAGC). This eliminates the 3'-splice junction of exon 17, creates an alternate splice site immediately downstream in which the processed mRNA contains a frameshift, and generates a premature stop codon predicted to truncate approximately 50% of the protein. Homozygosity for the deletion was highly associated with the severe juvenile-onset PN phenotype in both Leonberger and Saint Bernard dogs. The overall clinical picture of PN in these breeds, and the effects of sex and heterozygosity of the ARHGEF10 deletion, are less clear due to the likely presence of other forms of PN with variable ages of onset and severity of clinical signs. This is the first documented severe polyneuropathy associated with a mutation in ARHGEF10 in any species.
Abstract:
Much of the recent interest in educational gender differences is based on differences in academic performance. Several studies have shown that young women now outperform males in terms of school grades and university degrees. But while there is a lot of research into the reasons for this shift, and into gender gaps in reading and maths achievement, little research has been done on the consequences of these differences for educational and early occupational success. (See https://cerp.aqa.org.uk/perspectives/male-and-female-routes-success)
Abstract:
Increasing antibiotic resistance among uropathogenic Escherichia coli (UPEC) is driving interest in therapeutic targeting of nonconserved virulence factor (VF) genes. The ability to formulate efficacious combinations of antivirulence agents requires an improved understanding of how UPEC deploy these genes. To identify clinically relevant VF combinations, we applied contemporary network analysis and biclustering algorithms to VF profiles from a large, previously characterized inpatient clinical cohort. These mathematical approaches identified four stereotypical VF combinations with distinctive relationships to antibiotic resistance and patient sex that are independent of traditional phylogenetic grouping. Targeting resistance- or sex-associated VFs based upon these contemporary mathematical approaches may facilitate individualized anti-infective therapies and identify synergistic VF combinations in bacterial pathogens.
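As an illustration of the kind of biclustering the abstract mentions, one contemporary option (an assumed example, not necessarily the authors' exact method) is spectral co-clustering applied to the binary isolate-by-gene matrix of VF profiles:

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Hypothetical data: rows = clinical isolates, columns = VF genes,
# entry 1 if the virulence factor gene is detected in that isolate.
rng = np.random.default_rng(0)
vf_matrix = rng.integers(0, 2, size=(200, 30)).astype(float)

# Look for four biclusters, mirroring the four stereotypical VF
# combinations reported in the study.
model = SpectralCoclustering(n_clusters=4, random_state=0)
model.fit(vf_matrix + 1e-9)  # tiny offset guards against all-zero rows

isolate_groups = model.row_labels_   # isolates grouped by VF combination
vf_groups = model.column_labels_     # VF genes grouped by co-occurrence
```

Grouping isolates this way is independent of phylogeny, which matches the study's observation that the four VF combinations cut across traditional phylogenetic groups.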
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
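A minimal sketch of the "a posteriori" selection idea in its crudest form (assumed for illustration; published techniques use more principled error estimators such as SURE): reconstruct with a small bank of filters and keep, per pixel, the candidate with the lowest estimated error.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_filter_per_pixel(mean_img, var_of_mean, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Per-pixel selection among Gaussian reconstruction filters using a
    crude bias^2 + variance error estimate.

    mean_img    : per-pixel mean of the Monte Carlo samples
    var_of_mean : per-pixel variance of that mean (sample variance / n)
    """
    candidates, errors = [], []
    for s in sigmas:
        filtered = gaussian_filter(mean_img, sigma=s)
        bias2 = (filtered - mean_img) ** 2          # crude bias proxy
        # A 2-D Gaussian kernel has sum of squared weights ~ 1/(4*pi*s^2),
        # so wider filters shrink the noise variance accordingly.
        variance = var_of_mean / (4.0 * np.pi * s**2)
        candidates.append(filtered)
        errors.append(bias2 + variance)
    best = np.argmin(np.stack(errors), axis=0)
    return np.take_along_axis(np.stack(candidates), best[None], axis=0)[0]
```

The bias-variance trade-off made explicit here is the core of the local filter-selection step: wide filters suppress noise but blur edges, and the per-pixel error estimate arbitrates between them.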
Abstract:
Indoor positioning has attracted considerable attention for decades due to the increasing demand for location-based services. Although numerous methods for indoor positioning have been proposed over the years, it is still challenging to find a convincing solution that combines high positioning accuracy with ease of deployment. Radio-based indoor positioning has emerged as a dominant method due to its ubiquity, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated for indoor positioning for decades; however, it is prone to multipath propagation, so fingerprinting has become the most commonly used RSSI-based method. The drawback of fingerprinting is that it requires intensive labour to calibrate the radio map beforehand, which makes deployment of the positioning system very time-consuming. Using time information for radio-based indoor positioning is instead challenged by time synchronization among anchor nodes and by timestamp accuracy. Besides radio-based methods, intensive research has been conducted into using inertial sensors for indoor tracking, driven by the rapid development of smartphones; however, such methods are normally prone to accumulative errors and may not be available for some applications, such as passive positioning.

This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we exploit information obtained from physical-layer processing of radio signals, such as timestamps and channel information. The contributions of this thesis fall into two parts: time-based positioning and channel-information-based positioning. First, for time-based indoor positioning (especially with narrow-band signals), we address the challenges of compensating synchronization offsets among anchor nodes, designing high-resolution timestamps, and developing accurate positioning methods. Second, we develop range-based positioning methods that use channel information to passively locate and track WiFi targets. Aiming at low deployment effort, we favour range-based methods, which require much less calibration than fingerprinting. By designing novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy relying solely on radio signals, and our proposed enhanced particle filter significantly outperforms other commonly used range-based positioning algorithms, e.g., a traditional particle filter, an extended Kalman filter, and trilateration. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning that fuses inertial sensor and channel information to track indoor targets, achieving higher tracking accuracy than methods relying solely on either radio signals or inertial sensors.
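As a small, generic illustration of the range-based baseline the thesis builds on (not the thesis's enhanced methods), here is linearised least-squares trilateration from anchor positions and range estimates:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linearised least-squares trilateration in 2-D.

    anchors : (m, 2) array of anchor-node coordinates, m >= 3
    ranges  : (m,) estimated target-to-anchor distances
    """
    x1, y1 = anchors[0]
    d1 = ranges[0]
    # Subtracting the first circle equation from the others removes the
    # quadratic terms, leaving a linear system A @ [x, y] = b.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d1**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x1**2 + y1**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical example: three WiFi anchors and noisy range estimates.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
ranges = np.array([7.1, 7.0, 7.2])
print(trilaterate(anchors, ranges))  # ~ (5.07, 4.93)
```

A particle filter improves on this snapshot estimate for mobile targets by carrying a motion model between measurements, which is the direction the thesis's enhanced methods take.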