837 results for "semi binary based feature detector/descriptor"
Abstract:
In this paper, dual-hop amplify-and-forward (AF) cooperative systems in the presence of high-power amplifier (HPA) nonlinearity at semi-blind relays are investigated. Based on a modified AF cooperative system model that takes the HPA nonlinearity into account, the expression for the output signal-to-noise ratio (SNR) at the destination node is derived, where the interference due to both the AF relaying mechanism and the HPA nonlinearity is characterized. The performance of the AF cooperative system under study is evaluated in terms of average symbol error probability (SEP), which is derived using the moment-generating function (MGF) approach, considering transmissions over Nakagami-m fading channels. Numerical results are provided and show the effects of system parameters, such as the HPA parameters, the number of relays, the quadrature amplitude modulation (QAM) order and the Nakagami parameters, on performance.
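For context, the output-SNR derivation sketched in this abstract builds on the textbook end-to-end SNR of an ideal dual-hop AF link; a hedged baseline form (not the paper's exact HPA-modified expression) is:

```latex
% End-to-end SNR of an ideal dual-hop AF link with per-hop SNRs
% \gamma_1 and \gamma_2 (standard result; the paper adds an HPA
% distortion term that is not reproduced here):
\gamma_{\mathrm{eq}} = \frac{\gamma_1\,\gamma_2}{\gamma_1 + \gamma_2 + 1}
```

Under a Bussgang-type linearisation, the HPA output is modelled as a scaled replica of its input plus uncorrelated distortion noise, which enters this ratio as an extra interference term in the denominator.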
Abstract:
We have incorporated a semi-mechanistic isoprene emission module into the JULES land-surface scheme, as a first step towards a modelling tool that can be applied for studies of vegetation–atmospheric chemistry interactions, including chemistry–climate feedbacks. Here, we evaluate the coupled model against local above-canopy isoprene emission flux measurements from six flux tower sites, as well as satellite-derived estimates of isoprene emission over tropical South America and east and south Asia. The model simulates diurnal variability well: correlation coefficients are significant (at the 95% level) for all flux tower sites. The model reproduces day-to-day variability with significant correlations (at the 95% confidence level) at four of the six flux tower sites. At the UMBS site, a complete set of seasonal observations is available for two years (2000 and 2002). The model reproduces the seasonal pattern of emission during 2002, but less well in 2000. The model overestimates observed emissions at all sites, partly because it does not include isoprene loss through the canopy. Comparison with the satellite-derived isoprene-emission estimates suggests that the model simulates the main spatial patterns and the seasonal and inter-annual variability over tropical regions. The model yields a global annual isoprene emission of 535 ± 9 TgC yr⁻¹ during the 1990s, 78% of which is from forested areas.
Abstract:
A new model has been developed for assessing multiple sources of nitrogen in catchments. The model (INCA) is process based and uses reaction kinetic equations to simulate the principal mechanisms operating. The model allows for plant uptake and surface and sub-surface pathways, and can simulate up to six land uses simultaneously. The model can be applied to a catchment as a semi-distributed simulation and has an inbuilt multi-reach structure for river systems. Sources of nitrogen can be from atmospheric deposition, from the terrestrial environment (e.g. agriculture, leakage from forest systems, etc.), from urban areas, or from direct discharges via sewage or intensive farm units. The model runs on a daily time step and can provide information as time series at key sites, as profiles down river systems, or as statistical distributions. The process model is described here; in a companion paper the model is applied to the River Tywi catchment in South Wales and the Great Ouse in Bedfordshire.
Abstract:
The fully compressible semi-geostrophic system is widely used in the modelling of large-scale atmospheric flows. In this paper, we prove rigorously the existence of weak Lagrangian solutions of this system, formulated in the original physical coordinates. In addition, we provide an alternative proof of the earlier result on the existence of weak solutions of this system expressed in the so-called geostrophic, or dual, coordinates. The proofs are based on the optimal transport formulation of the problem and on recent general results concerning transport problems posed in the Wasserstein space of probability measures.
Abstract:
Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses, with a goal of spatial grid resolutions as fine as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated “true” SST data and subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing that is inherent in both analysis procedures. Comparisons of the two analyses (both having comparable grid sizes) show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
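The wavenumber auto- and cross-spectral comparison this abstract describes can be sketched as follows; this is our illustration with hypothetical inputs (2-D arrays of "true" and analysed SST transects), not the authors' code:

```python
import numpy as np

def spectral_comparison(truth_rows, analysis_rows, dx):
    """Sketch of the wavenumber-spectral check described above (our
    illustration, not the authors' code): average auto- and cross-spectra
    over many transects (rows), then compare per-wavenumber variance.
    Averaging over rows is what makes the coherence estimate meaningful."""
    n = truth_rows.shape[1]
    k = np.fft.rfftfreq(n, d=dx)                      # cycles per unit dx
    Ft, Fa = np.fft.rfft(truth_rows), np.fft.rfft(analysis_rows)
    Ptt = np.mean((Ft * Ft.conj()).real, axis=0) / n  # truth auto-spectrum
    Paa = np.mean((Fa * Fa.conj()).real, axis=0) / n  # analysis auto-spectrum
    Pta = np.mean(Ft * Fa.conj(), axis=0) / n         # cross-spectrum
    attenuation = Paa / Ptt          # < 1 where smoothing damps small scales
    coherence = np.abs(Pta) ** 2 / (Ptt * Paa)
    return k, attenuation, coherence
```

The attenuation ratio falls below one at the high wavenumbers where analysis smoothing damps real variance, while low coherence at those scales flags small-scale "features" that are uncorrelated with the truth, i.e. noise.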
Abstract:
Two previous reconstructions of palaeovegetation across the whole of China were performed using a simple classification of plant functional types (PFTs). Now a more explicit, global PFT classification scheme has been developed, and a substantial number of additional pollen records have become available. Here we apply the global scheme of PFTs to a comprehensive set of pollen records available from China to test the applicability of the global scheme of PFTs in China, and to obtain a well-founded reconstruction of changing palaeovegetation patterns. A total of 806 pollen surface samples, 188 mid-Holocene (MH, 6000 ¹⁴C yr BP) and 50 last glacial maximum (LGM, 18,000 ¹⁴C yr BP) pollen records were used to reconstruct vegetation patterns in China, based on a new global classification system of PFTs and a standard numerical technique for biome assignment (biomization). The biome reconstruction based on pollen surface samples showed convincing agreement with present potential natural vegetation. Coherent patterns of change in biome distribution between MH, LGM and present are observed. In the MH, cold and cool-temperate evergreen needleleaf forests and mixed forests, temperate deciduous broadleaf forest, and warm-temperate evergreen broadleaf and mixed forest in eastern China were shifted northward by 200–500 km. Cold-deciduous forest in northeastern China was replaced by cold evergreen needleleaf forest while in central northern China, cold-deciduous forest was present at some sites now occupied by temperate grassland and desert. The forest–grassland boundary was 200–300 km west of its present position. Temperate xerophytic shrubland, temperate grassland and desert covered a large area on the Tibetan Plateau, but the area of tundra was reduced. Treeline was 300–500 m higher than present in Tibet. These changes imply generally warmer winters, longer growing seasons and more precipitation during the MH. Westward shifts of the forest–shrubland–grassland and grassland–desert boundaries imply greater moisture availability in the MH, consistent with a stronger summer monsoon. During the LGM, in contrast, cold-deciduous forest, cool-temperate evergreen needleleaf forest, cool mixed forests, warm-temperate evergreen broadleaf and mixed forest in eastern China were displaced to the south by 300–1000 km, while temperate deciduous broadleaf forest, pure warm-temperate evergreen forest, tropical semi-evergreen and evergreen broadleaf forests were restricted or absent from the mainland of southern China, implying colder winters than present. Strong shifts of temperate xerophytic shrubland, temperate grassland and desert to the south and east in northern and western China and on the Tibetan Plateau imply drier conditions than present.
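For readers unfamiliar with biomization, the affinity-score step it relies on can be sketched as follows; the membership matrix and threshold below are illustrative stand-ins, not the paper's actual PFT/biome scheme:

```python
import numpy as np

def biomize(percents, membership, threshold=0.5):
    """Sketch of a standard biomization affinity score (our illustration;
    'membership' is a biomes x taxa 0/1 matrix and 'threshold' a small
    pollen-percentage cutoff, both hypothetical here): square-root-
    transformed pollen percentages above the threshold are summed over
    the taxa belonging to each biome, and the highest score wins."""
    scores = membership @ np.sqrt(np.clip(percents - threshold, 0.0, None))
    return int(scores.argmax())            # index of the assigned biome
```

The square-root transform damps the dominance of high pollen producers, so rarer but diagnostic taxa can still tip the assignment.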
Abstract:
This study of landscape evolution presents both new modern and palaeo process-landform data, and analyses the behaviour of the Antarctic Peninsula Ice Sheet through the Last Glacial Maximum (LGM), the Holocene and to the present day. Six sediment-landform assemblages are described and interpreted for Ulu Peninsula, James Ross Island, NE Antarctic Peninsula: (1) the Glacier Ice and Snow Assemblage; (2) the Glacigenic Assemblage, which relates to LGM sediments and comprises both erratic-poor and erratic-rich drift, deposited by cold-based and wet-based ice and ice streams respectively; (3) the Boulder Train Assemblage, deposited during a Mid-Holocene glacier readvance; (4) the Ice-cored Moraine Assemblage, found in front of small cirque glaciers; (5) the Paraglacial Assemblage including scree, pebble-boulder lags, and littoral and fluvial processes; and (6) the Periglacial Assemblage including rock glaciers, protalus ramparts, blockfields, solifluction lobes and extensive patterned ground. The interplay between glacial, paraglacial and periglacial processes in this semi-arid polar environment is important in understanding polygenetic landforms. Crucially, cold-based ice was capable of sediment and landform genesis and modification. This landsystem model can aid the interpretation of past environments, but also provides new data to aid the reconstruction of the last ice sheet to overrun James Ross Island.
Abstract:
This paper presents a software-based study of a hardware-based non-sorting median calculation method on a set of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits linear complexity, and our analysis shows that the best execution-time performance is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement growing for larger data sets. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
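A minimal sketch of the bit-slice idea, narrowing the candidate set one most-significant slice at a time; function name and defaults are ours, assuming non-negative integers, and this is not the paper's hardware implementation:

```python
def bitslice_median(data, width=16, slice_bits=4):
    """Non-sorting median via bit slices: histogram the current slice,
    find the bucket holding the median rank, and keep only candidates
    in that bucket. Returns the lower median for even-sized sets."""
    k = (len(data) - 1) // 2                  # rank of the lower median
    mask = (1 << slice_bits) - 1
    candidates = list(data)
    for shift in range(width - slice_bits, -1, -slice_bits):
        counts = [0] * (1 << slice_bits)      # histogram of current slice
        for x in candidates:
            counts[(x >> shift) & mask] += 1
        for bucket, c in enumerate(counts):   # locate bucket holding rank k
            if k < c:
                break
            k -= c
        candidates = [x for x in candidates if (x >> shift) & mask == bucket]
    return candidates[k]

# Example: median of unsorted 8-bit integers (expected 42).
print(bitslice_median([7, 200, 13, 42, 99], width=8))
```

Each element is touched once per slice, giving the linear complexity the abstract mentions; the 4-bit slice width trades histogram size against the number of passes.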
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for the classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity to establish complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
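A complex-valued extreme learning machine of the kind described reduces training to a single least-squares solve; a minimal sketch (our illustration with amplitude and phase packed into complex features, not the authors' exact algorithm):

```python
import numpy as np

def celm_train(X, T, n_hidden=64, seed=0):
    """Minimal complex-valued ELM sketch: random, fixed complex hidden
    weights, output weights solved in one step via the Moore-Penrose
    pseudoinverse. X: complex features (e.g. amplitude + 1j*phase),
    T: one-hot target matrix."""
    rng = np.random.default_rng(seed)
    shape = (X.shape[1], n_hidden)
    W = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)             # complex hidden-layer activations
    beta = np.linalg.pinv(H) @ T       # one-shot output-weight solution
    return W, b, beta

def celm_predict(X, W, b, beta):
    scores = np.tanh(X @ W + b) @ beta
    return scores.real.argmax(axis=1)  # class with the largest real score
```

Because only the output weights are learned, training cost is dominated by one pseudoinverse, which is what gives ELM-type classifiers their speed advantage over iteratively trained SVMs on very large spectral data sets.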
Abstract:
A method for estimating both the Alfvén speed and the field-aligned flow of the magnetosheath at the magnetopause reconnection site is presented. The method employs low-altitude cusp ion observations and requires the identification of a feature in the cusp ion spectra near the low-energy cutoff which will often be present for a low-latitude dayside reconnection site. The appearance of these features in data of limited temporal, energy, and pitch angle resolution is illustrated by using model calculations of cusp ion distribution functions. These are based on the theory of ion acceleration at the dayside magnetopause and allow for the effects on the spectrum of flight times of ions precipitating down newly opened field lines. In addition, the variation of the reconnection rate can be evaluated, and comparison with ground-based observations of the corresponding sequence of transient events allows the field-aligned distance from the ionosphere to the reconnection site to be estimated.
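The flight-time effect this abstract invokes can be summarised by the standard time-of-flight relation for ions precipitating along a newly opened field line (a textbook form assuming scatter-free travel, not necessarily the paper's notation):

```latex
% An ion observed at time t, on a field line opened at time t_o,
% after travelling the field-aligned distance d from the
% reconnection site, has field-aligned speed
v_{\parallel} = \frac{d}{t - t_o},
% so the low-energy cutoff of the cusp ion spectrum is
E_{c} = \frac{m_i\, d^{2}}{2\,(t - t_o)^{2}}.
% Fitting the observed cutoff as a function of time constrains d,
% the distance from the ionosphere to the reconnection site.
```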
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
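The evaluation the challenge describes (multi-class accuracy and AUC on a blinded test set) can be illustrated with a short sketch; the challenge's exact AUC definition may differ from the one-vs-rest form used here:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(y_true, proba):
    """Sketch of a blinded-test evaluation: multi-class accuracy from the
    top-scoring class, plus a one-vs-rest AUC computed from the per-class
    probability matrix (shape: n_scans x 3 for AD / MCI / controls)."""
    acc = accuracy_score(y_true, proba.argmax(axis=1))
    auc = roc_auc_score(y_true, proba, multi_class="ovr")
    return acc, auc
```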
Abstract:
Wireless video sensor networks have been a hot topic in recent years. Monitoring is their central capability, and the services offered by a wireless video sensor network can be classified into three major categories: monitoring, alerting, and information on demand. These features have been applied to a large number of applications related to the environment (agriculture, water, forests and fire detection), the military, buildings, health (elderly-people and home monitoring), disaster relief, and area and industrial monitoring. Security applications oriented toward critical infrastructures and disaster relief are very important applications that many countries have identified as critical for the near future. This paper aims to design a cross-layer protocol to provide the required quality of service for security-related applications using wireless video sensor networks. Energy saving, delay and reliability for the delivered data are crucial in the proposed application. Simulation results show that the proposed cross-layer protocol offers good performance in terms of providing the required quality of service for the proposed application.
Abstract:
Image registration is a fundamental step that greatly affects later processes in image mosaicking, multi-spectral image fusion, digital surface modelling, etc., where the final solution requires blending pixel information from more than one image. It is highly desirable to find a way to identify registration regions among input stereo image pairs with high accuracy, particularly in remote sensing applications in which ground control points (GCPs) are not always available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature-point distribution. The affine scale-invariant feature transform (ASIFT) was used for acquiring feature points and correspondences on the input images. Then, a homography matrix was estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. In order to identify a registration region with a better spatial distribution of feature points, the Euclidean distance between feature points is applied (named the S criterion). Finally, the parameters of the homography matrix were optimized by the Levenberg–Marquardt (LM) algorithm with selected feature points from the chosen registration region. In the experiment section, Chang’E-2 satellite remote sensing imagery was used to evaluate the performance of the proposed method. The experimental results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between input images, achieving lower root mean square error (RMSE) and a better distribution of feature points.
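A hedged sketch of the estimation step described here, with OpenCV's stock RANSAC standing in for the paper's IM-RANSAC variant and a simple spread score standing in for the S criterion:

```python
import numpy as np
import cv2

def register_pair(pts_src, pts_dst):
    """Estimate a homography from matched feature points and score the
    spatial spread of the inliers. pts_src, pts_dst: float32 arrays of
    shape (N, 2). OpenCV refines the inlier fit internally with a
    Levenberg-Marquardt step, echoing the pipeline's final stage."""
    H, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
    inliers = pts_src[mask.ravel() == 1]
    # Spread score in the spirit of the S criterion: mean pairwise
    # Euclidean distance between inlier feature points.
    i, j = np.triu_indices(len(inliers), k=1)
    spread = np.linalg.norm(inliers[i] - inliers[j], axis=1).mean()
    return H, spread
```

Candidate regions with a higher spread score have better-distributed feature points, which stabilises the homography fit over the whole region rather than just near a tight cluster of matches.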
Abstract:
SCOPE: A high intake of n-3 PUFA provides health benefits via changes in the n-6/n-3 ratio in blood. In addition to such dietary PUFAs, variants in the fatty acid desaturase 1 (FADS1) gene are also associated with altered PUFA profiles. METHODS AND RESULTS: We used mathematical modelling to predict PUFA levels in whole blood, based on MHT- and bolasso-selected food items, anthropometric and lifestyle factors, and the rs174546 genotypes in FADS1 from 1,607 participants (Food4Me Study). The models were developed using data from the first reported time point (training set) and their predictive power was evaluated using data from the last reported time point (test set). Among other food items, fish, pizza, chicken and cereals were identified as being associated with the PUFA profiles. Using these food items and the rs174546 genotypes as predictors, the models explained 26% to 43% of the variability in PUFA concentrations in the training set and 22% to 33% in the test set. CONCLUSIONS: Selecting food items using MHT is a valuable way to determine predictors, as our models' predictive power is higher than that of comparable studies. As a unique feature, we additionally confirmed our models' predictive power on a test set.
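Bolasso-style variable selection of the kind used here can be sketched as follows; the resampling count and stability threshold are illustrative, not the study's exact settings:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def bolasso_select(X, y, n_boot=50, keep_frac=0.9, seed=0):
    """Bolasso sketch: run a cross-validated lasso on bootstrap resamples
    of the data and keep only the predictors (e.g. food items) that are
    selected in almost every run, yielding a stable predictor set."""
    rng = np.random.default_rng(seed)
    hits = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))          # bootstrap resample
        hits += LassoCV(cv=5).fit(X[idx], y[idx]).coef_ != 0
    return np.flatnonzero(hits >= keep_frac * n_boot)  # stable predictors
```

The intersection over bootstrap runs is what distinguishes bolasso from a single lasso fit: predictors that survive resampling are far less likely to be artifacts of one particular sample.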
Abstract:
This paper describes a new approach to detecting and tracking maritime objects in real time. The approach particularly addresses the highly dynamic maritime environment, panning cameras and target scale changes, and operates on both visible and thermal imagery. Object detection is based on agglomerative clustering of temporally stable features. Object extents are first determined based on the persistence of detected features and their relative separation and motion attributes. An explicit cluster merging and splitting process handles object creation and separation. Stable object clusters are tracked frame to frame. The effectiveness of the approach is demonstrated on four challenging real-world public datasets.
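The agglomerative grouping step described here can be sketched with standard hierarchical clustering; the distance threshold is a hypothetical pixel value, not the paper's tuned parameter:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_features(points, max_gap=25.0):
    """Agglomerative (single-linkage) clustering of stable feature
    locations: the merge tree is cut at max_gap so that features closer
    than that distance end up in the same object cluster."""
    Z = linkage(points, method="single")    # agglomerative merge tree
    return fcluster(Z, t=max_gap, criterion="distance")

# Example: two well-separated groups of feature points -> two labels.
pts = np.array([[0, 0], [3, 4], [100, 100], [104, 97]], dtype=float)
print(cluster_features(pts))                # e.g. [1 1 2 2]
```

In a full pipeline, the merge/split logic the abstract mentions would compare cluster labels across frames, fusing clusters that move together and splitting those whose features diverge.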