92 results for Localized algorithms


Relevance: 20.00%

Abstract:

A class of identification algorithms is introduced for Gaussian process (GP) models. The fundamental approach is to propose a new kernel function that leads to a covariance matrix with low rank, a property that is then exploited for computational efficiency in both model parameter estimation and model prediction. The objectives of maximizing the marginal likelihood and of minimizing the Kullback–Leibler (K–L) divergence between the estimated output probability density function (pdf) and the true pdf are used as the respective cost functions. For each cost function, an efficient coordinate descent algorithm is proposed to estimate the kernel parameters using a one-dimensional derivative-free search and the noise variance using a fast gradient descent algorithm. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
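As an illustration of the estimation scheme described above (a sketch only: the RBF kernel, search bounds, learning rate and toy data below are assumptions, not the paper's low-rank kernel), each kernel parameter is updated by a bounded one-dimensional derivative-free search, and the noise variance by a few gradient-ascent steps on the log marginal likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rbf(X, ls, sf2):
    # squared-exponential kernel (placeholder for the paper's low-rank kernel)
    d2 = (X[:, None] - X[None, :]) ** 2
    return sf2 * np.exp(-0.5 * d2 / ls ** 2)

def log_marglik(X, y, ls, sf2, sn2):
    K = rbf(X, ls, sf2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.log(np.diag(L)).sum() - 0.5 * len(X) * np.log(2 * np.pi)

def fit(X, y, ls=1.0, sf2=1.0, sn2=0.1, sweeps=10, lr=1e-4):
    for _ in range(sweeps):
        # one-dimensional derivative-free search, one kernel parameter at a time
        ls = minimize_scalar(lambda v: -log_marglik(X, y, v, sf2, sn2),
                             bounds=(1e-2, 10.0), method="bounded").x
        sf2 = minimize_scalar(lambda v: -log_marglik(X, y, ls, v, sn2),
                              bounds=(1e-3, 10.0), method="bounded").x
        # gradient ascent on the noise variance (dK/dsn2 = I)
        for _ in range(5):
            Ki = np.linalg.inv(rbf(X, ls, sf2) + sn2 * np.eye(len(X)))
            alpha = Ki @ y
            grad = 0.5 * (alpha @ alpha - np.trace(Ki))
            sn2 = min(1.0, max(1e-3, sn2 + lr * grad))
    return ls, sf2, sn2

rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 40)
y = np.sin(X) + 0.1 * rng.standard_normal(40)
print(fit(X, y))  # (lengthscale, signal variance, noise variance)
```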

Relevance: 20.00%

Abstract:

Satellite data are increasingly used to provide observation-based estimates of the effects of aerosols on climate. The Aerosol-cci project, part of the European Space Agency's Climate Change Initiative (CCI), was designed to provide essential climate variables for aerosols from satellite data. Eight algorithms, developed for the retrieval of aerosol properties using data from AATSR (4), MERIS (3) and POLDER, were evaluated to determine their suitability for climate studies. The primary result from each of these algorithms is the aerosol optical depth (AOD) at several wavelengths, together with the Ångström exponent (AE), which describes the spectral variation of the AOD for a given wavelength pair. Other aerosol parameters that can be retrieved from satellite observations are not considered in this paper. The AOD and AE (AE for Level 2 only) were evaluated against independent collocated observations from the ground-based AERONET sun photometer network and against "reference" satellite data provided by MODIS and MISR. Tools used for the evaluation were developed for daily products as produced by the retrieval with a spatial resolution of 10 × 10 km² (Level 2) and for daily or monthly aggregates (Level 3). These tools include statistics for L2 and L3 products compared with AERONET, as well as scoring based on spatial and temporal correlations. In this paper we describe their use in a round robin (RR) evaluation of four months of data, one month for each season in 2008. The amount of data was restricted to four months because of the large effort made to improve the algorithms and to evaluate the improvement and current status before larger data sets are processed. Evaluation criteria are discussed. The results presented show the current status of the European aerosol algorithms in comparison with both AERONET and MODIS and MISR data. The comparison leads to the preliminary conclusion that the scores are similar, including those for the references, but the coverage of AATSR needs to be enhanced and further improvements are possible for most algorithms. None of the algorithms, including the references, outperforms all others everywhere. AATSR data can be used for the retrieval of AOD and AE over land and ocean. PARASOL and one of the MERIS algorithms have been evaluated over ocean only, and both provide good results.
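For illustration, a minimal sketch of the kind of collocation statistics such an evaluation computes (bias, RMSE, correlation and the fraction of matchups inside an expected-error envelope); the sample values and the ±(0.05 + 0.15·AOD) envelope are assumptions, not the Aerosol-cci scoring definition:

```python
import numpy as np

def aod_scores(sat, aeronet):
    sat, aeronet = np.asarray(sat, float), np.asarray(aeronet, float)
    diff = sat - aeronet
    envelope = 0.05 + 0.15 * aeronet           # assumed expected-error envelope
    return {"bias": diff.mean(),
            "rmse": np.sqrt((diff ** 2).mean()),
            "pearson_r": np.corrcoef(sat, aeronet)[0, 1],
            "frac_in_envelope": (np.abs(diff) <= envelope).mean()}

# invented collocated AOD matchups: satellite retrieval vs. AERONET
print(aod_scores([0.21, 0.35, 0.12, 0.50], [0.18, 0.40, 0.10, 0.47]))
```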

Relevance: 20.00%

Abstract:

Currently, infrared filters for astronomical telescopes and satellite radiometers are based on multilayer thin-film stacks of alternating high and low refractive index materials. However, the choice of suitable layer materials is limited, and this places limits on the filter performance that can be achieved. The ability to design materials with arbitrary refractive index allows filter performance to be greatly improved, but it also increases the complexity of the design. Here, a differential evolution algorithm was used to optimise the design of filters with arbitrary refractive indices, and materials were then designed to these specifications as mono-materials with sub-wavelength structures using Bruggeman's effective medium approximation (EMA).
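A sketch of the two-step idea, under stated assumptions: scipy's differential_evolution stands in for the optimisation algorithm, a simple high-reflector target at a single infrared wavelength replaces a real filter specification, and each optimised layer index is then mapped to a fill fraction of a two-phase sub-wavelength structure by inverting the Bruggeman EMA:

```python
import numpy as np
from scipy.optimize import differential_evolution

def bruggeman_fill(n_eff, n1, n2):
    # volume fraction f of phase 1 so that the two-phase Bruggeman equation
    # f*A + (1-f)*B = 0 (A, B as below) yields permittivity n_eff**2
    e, e1, e2 = n_eff ** 2, n1 ** 2, n2 ** 2
    A = (e1 - e) / (e1 + 2 * e)
    B = (e2 - e) / (e2 + 2 * e)
    return B / (B - A)

def reflectance(ns, ds, lam, n_in=1.0, n_sub=1.5):
    # normal-incidence characteristic-matrix reflectance of a thin-film stack
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        delta = 2 * np.pi * n * d / lam
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    return abs((n_in * B - C) / (n_in * B + C)) ** 2

lam = 10.0                  # design wavelength in microns (assumed)
ds = [lam / 8.0] * 4        # fixed layer thicknesses (assumed)
res = differential_evolution(lambda ns: (reflectance(ns, ds, lam) - 1.0) ** 2,
                             bounds=[(1.1, 3.3)] * 4, seed=1, maxiter=60)
print("layer indices :", res.x.round(3))
# realise each index as an air/silicon (n = 1.0 / 3.4) sub-wavelength mixture
print("fill fractions:", [round(bruggeman_fill(n, 1.0, 3.4), 3) for n in res.x])
```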

Relevance: 20.00%

Abstract:

The role of Distribution Network Operators (DNOs) is becoming more difficult as electric vehicles and electric heating penetrate the network, increasing demand. As a result, it becomes harder for the distribution network infrastructure to remain within its operating constraints. Energy storage is a potential alternative to conventional network reinforcement such as upgrading cables and transformers. The research presented in this paper shows that, due to the volatile nature of the LV network, the control approach used for energy storage has a significant impact on performance. This paper presents and compares control methodologies for energy storage where the objective is to achieve the greatest possible peak demand reduction across the day from a pre-specified storage device. The results show the benefits and drawbacks of specific types of control on a storage device connected to a single phase of an LV network, using aggregated demand profiles based on real smart meter data from individual homes. The research demonstrates an important relationship between how predictable an aggregation is and the control methodology best suited to achieving the objective.
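As an illustration of one possible control methodology (a set-point peak-shaving rule; the threshold, device ratings, efficiency and the half-hourly demand profile are invented, not the paper's smart-meter data):

```python
import numpy as np

def peak_shave(demand, threshold, p_max=3.0, e_max=10.0, eff=0.95):
    """Discharge when demand exceeds `threshold`, recharge when below.
    Returns the demand seen by the network after storage actions."""
    soc, net = 0.5 * e_max, []                # start half full (assumed)
    for d in demand:                          # one step = one half-hour
        if d > threshold:                     # discharge to clip the peak
            p = min(p_max, d - threshold, soc * eff * 2)   # kW over 0.5 h
            soc -= p * 0.5 / eff
        else:                                 # recharge using the headroom
            p = -min(p_max, threshold - d, (e_max - soc) * 2)
            soc -= p * 0.5 * eff
        net.append(d - p)
    return np.array(net)

demand = np.array([1.2, 1.0, 0.9, 1.1, 2.8, 4.5, 5.2, 3.9, 2.0, 1.4])
print(peak_shave(demand, threshold=2.5).round(2))  # peak clipped to 2.5 kW
```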

Relevance: 20.00%

Abstract:

The electronic properties of four divinylanthracene-bridged diruthenium carbonyl complexes [{RuCl(CO)(PMe3)3}2(μ-CH=CH-Ar-CH=CH)] (Ar = 9,10-anthracene (1), 1,5-anthracene (2), 2,6-anthracene (3), 1,8-anthracene (4)), obtained by molecular spectroscopic methods (IR, UV/Vis/near-IR, and EPR spectroscopy) and DFT calculations, are reported. IR spectroelectrochemical studies have revealed that these complexes are first oxidized at the non-innocent bridging ligand, which is in line with the very small ν(C≡O) wavenumber shift that accompanies this process and is also supported by DFT calculations. Because of the poor conjugation in complex 1, oxidized 1+ is the exception: the electronic absorption spectra of complexes 2+, 3+, and 4+ all display the characteristic near-IR band envelopes, which have been deconvoluted into three Gaussian sub-bands. Two of the sub-bands belong mainly to metal-to-ligand charge-transfer (MLCT) transitions according to time-dependent DFT calculations. EPR spectroscopy of the chemically generated 1+–4+ proves largely ligand-centered spin density, again in accordance with the IR spectra and DFT results.
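The deconvolution step is a generic curve-fitting exercise; a minimal sketch with three Gaussian sub-bands fitted to a synthetic near-IR envelope (band positions, widths and the wavenumber axis are invented, not the complexes' spectra):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gauss(x, *p):   # p = (A1, c1, w1, A2, c2, w2, A3, c3, w3)
    return sum(p[i] * np.exp(-((x - p[i + 1]) / p[i + 2]) ** 2) for i in (0, 3, 6))

wn = np.linspace(4000, 12000, 400)        # wavenumber axis, cm^-1 (assumed)
truth = (0.8, 6000, 700, 0.5, 8000, 900, 0.3, 10000, 600)
band = three_gauss(wn, *truth) + 0.01 * np.random.default_rng(1).standard_normal(wn.size)

p0 = (1.0, 5800, 500, 0.6, 8200, 800, 0.2, 9800, 500)   # rough initial guesses
popt, _ = curve_fit(three_gauss, wn, band, p0=p0)
print(popt.reshape(3, 3).round(1))        # fitted (amplitude, centre, width) rows
```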

Relevance: 20.00%

Abstract:

We discuss the modelling of the dielectric responses of amorphous biological samples. Such samples are commonly encountered in impedance spectroscopy studies, as well as in UV, IR, optical and THz transient spectroscopy experiments and in pump-probe studies. On many occasions, the samples may display quenched absorption bands. A systems identification framework may be developed to provide parsimonious representations of such responses. To achieve this, it is appropriate to augment the standard models found in the identification literature to incorporate fractional-order dynamics. Extensions of models using the forward shift operator and state space models, as well as their non-linear Hammerstein-Wiener counterparts, are highlighted. We also discuss the need to extend the theory of electromagnetically excited networks to account for fractional-order behaviour in the non-linear regime, by incorporating non-linear elements for the observed non-linearities. The proposed approach leads to the development of a range of new chemometric tools for biomedical data analysis and classification.
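To make the fractional-order ingredient concrete, a sketch of the Grünwald–Letnikov discretisation applied to the fractional relaxation model τ^α D^α y + y = u (the order α, time constant τ and step size are assumptions for illustration):

```python
import numpy as np

def gl_weights(alpha, n):
    # w_j = (-1)^j * binom(alpha, j), via the standard recursion
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def frac_step_response(alpha=0.7, tau=1.0, h=0.01, T=5.0):
    n = int(T / h)
    w = gl_weights(alpha, n)
    c = (tau ** alpha) * h ** (-alpha)
    y = np.zeros(n + 1)
    for k in range(1, n + 1):
        hist = np.dot(w[1:k + 1], y[k - 1::-1])   # memory term of D^alpha
        y[k] = (1.0 - c * hist) / (1.0 + c)       # unit step input u = 1
    return y

y = frac_step_response()
print(y[[10, 100, 500]].round(4))  # slow, power-law-like relaxation
```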

Relevance: 20.00%

Abstract:

Tropical Applications of Meteorology Using Satellite and Ground-Based Observations (TAMSAT) rainfall estimates are used extensively across Africa for operational rainfall monitoring and food security applications; thus, regional evaluations of TAMSAT are essential to ensure its reliability. This study assesses the performance of TAMSAT rainfall estimates, along with the African Rainfall Climatology (ARC), version 2; the Tropical Rainfall Measuring Mission (TRMM) 3B42 product; and the Climate Prediction Center morphing technique (CMORPH), against a dense rain gauge network over a mountainous region of Ethiopia. Overall, TAMSAT exhibits good skill in detecting rainy events but underestimates rainfall amount, while ARC underestimates both rainfall amount and rainy event frequency. Meanwhile, TRMM consistently performs best in detecting rainy events and capturing the mean rainfall and seasonal variability, while CMORPH tends to overdetect rainy events. Moreover, the mean difference in daily rainfall between the products and rain gauges shows increasing underestimation with increasing elevation. However, the distribution in satellite–gauge differences demonstrates that although 75% of retrievals underestimate rainfall, up to 25% overestimate rainfall over all elevations. Case studies using high-resolution simulations suggest underestimation in the satellite algorithms is likely due to shallow convection with warm cloud-top temperatures in addition to beam-filling effects in microwave-based retrievals from localized convective cells. The overestimation by IR-based algorithms is attributed to nonraining cirrus with cold cloud-top temperatures. These results stress the importance of understanding regional precipitation systems causing uncertainties in satellite rainfall estimates with a view toward using this knowledge to improve rainfall algorithms.
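For reference, a minimal sketch of the rainy-event detection statistics typically used in such comparisons (probability of detection, false-alarm ratio and frequency bias); the 1 mm/day wet-day threshold and the sample series are assumptions:

```python
import numpy as np

def detection_scores(sat, gauge, wet=1.0):
    sat, gauge = np.asarray(sat) >= wet, np.asarray(gauge) >= wet
    hits = np.sum(sat & gauge)
    misses = np.sum(~sat & gauge)
    false_alarms = np.sum(sat & ~gauge)
    return {"POD": hits / (hits + misses),                  # probability of detection
            "FAR": false_alarms / (hits + false_alarms),    # false-alarm ratio
            "freq_bias": (hits + false_alarms) / (hits + misses)}  # >1: overdetection

sat   = [0.0, 2.3, 5.1, 0.4, 8.0, 1.2, 0.0]   # invented daily rainfall, mm
gauge = [0.0, 3.0, 7.5, 1.5, 6.2, 0.0, 0.2]
print(detection_scores(sat, gauge))
```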

Relevance: 20.00%

Abstract:

Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare because different data sets and methodologies were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as the reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used, consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
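A hypothetical sketch of the challenge's evaluation setting, not any team's method: a three-class classifier trained on a small set of volumetric-style features and scored by accuracy and multi-class AUC (the features are synthetic, and scikit-learn is assumed):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
# toy "volume, cortical thickness, shape, intensity" features per class
def sample(mean, n): return rng.normal(mean, 1.0, size=(n, 4))

X_train = np.vstack([sample(m, 10) for m in (0.0, 0.7, 1.4)])  # n = 30, as in the challenge
y_train = np.repeat([0, 1, 2], 10)          # 0 = control, 1 = MCI, 2 = AD
X_test = np.vstack([sample(m, 40) for m in (0.0, 0.7, 1.4)])
y_test = np.repeat([0, 1, 2], 40)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)
print("accuracy:", accuracy_score(y_test, proba.argmax(axis=1)))
print("ovr AUC :", roc_auc_score(y_test, proba, multi_class="ovr"))
```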

Relevance: 20.00%

Abstract:

Given a dataset of two-dimensional points in the plane with integer coordinates, the proposed method reduces a set of n points to a set of s points, s ≤ n, such that the convex hull of the s points is the same as the convex hull of the original n points. The method runs in O(n) time and helps any convex hull algorithm run faster. An empirical analysis of a practical case shows a reduction in points of over 98%, which is reflected in faster computation, with a speedup factor of at least 4.
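The abstract does not spell out the filter, but a well-known O(n) reduction of exactly this kind is the Akl–Toussaint heuristic, sketched below: any point strictly inside the quadrilateral formed by the four axis-extreme points cannot lie on the convex hull, so it can be discarded before running any hull algorithm.

```python
from math import atan2

def reduce_points(pts):
    # four axis-extreme points (leftmost, rightmost, bottommost, topmost)
    quad = {min(pts), max(pts),
            min(pts, key=lambda p: p[1]), max(pts, key=lambda p: p[1])}
    cx = sum(p[0] for p in quad) / len(quad)
    cy = sum(p[1] for p in quad) / len(quad)
    quad = sorted(quad, key=lambda p: atan2(p[1] - cy, p[0] - cx))  # CCW order

    def strictly_inside(p):
        for a, b in zip(quad, quad[1:] + quad[:1]):
            cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
            if cross <= 0:          # on or outside this edge: cannot discard
                return False
        return True

    return [p for p in pts if not strictly_inside(p)]

pts = [(0, 3), (10, 4), (4, 0), (6, 10), (5, 5), (3, 4), (9, 9), (2, 2)]
print(reduce_points(pts))  # interior points (5, 5), (3, 4), (2, 2) are discarded
```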

Relevance: 20.00%

Abstract:

Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by the significant bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution was designed, implemented and applied to real-life projects, with results yielding a significant, highly scalable and nearly linear speedup of up to 6.9 and 14.5 on distributed 8-core and 16-core systems respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared with their sequential counterparts. This is a milestone in realizing DNO. Further, the techniques may be applied to other applications based on greedy optimization algorithms.
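An illustrative pattern for the parallelism identified here (the toy objective and move set are invented; the paper's MNO cost model is far richer): candidate moves are scored concurrently in each iteration, and the single best move is applied sequentially:

```python
from multiprocessing import Pool
import itertools

TARGET = [3, -1, 4, 1, 5]                       # toy "network" to approximate

def cost(state):
    return sum((s - t) ** 2 for s, t in zip(state, TARGET))

def score_move(args):
    state, (i, delta) = args
    trial = list(state)
    trial[i] += delta
    return cost(trial), i, delta

def greedy_parallel(state, workers=4, iters=50):
    moves = list(itertools.product(range(len(state)), (-1, 1)))
    with Pool(workers) as pool:
        for _ in range(iters):
            # score every candidate move in parallel
            results = pool.map(score_move, [(state, m) for m in moves])
            best_cost, i, delta = min(results)
            if best_cost >= cost(state):        # no improving move: stop
                break
            state[i] += delta                   # apply best move sequentially
    return state

if __name__ == "__main__":
    print(greedy_parallel([0, 0, 0, 0, 0]))
```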

Relevance: 20.00%

Abstract:

It has been years since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still at an early stage, largely due to the lack of a breakthrough in minimizing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cater for the increased optimization complexity driven by the uptake of smartphones and tablets, however, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, whose results demonstrate a speedup of 7.5 times on a 16-core distributed system, compared with the 6.1 times of our previous solution. Moreover, there is no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
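One way to reduce the synchronisation of the previous pattern, sketched under the same toy assumptions (this is not the paper's design): candidate scores are streamed in completion order and the first improving move is accepted, avoiding a barrier per iteration:

```python
from multiprocessing import Pool
import itertools, random

TARGET = [3, -1, 4, 1, 5]

def cost(state):
    return sum((s - t) ** 2 for s, t in zip(state, TARGET))

def score_move(args):
    state, (i, delta) = args
    trial = list(state)
    trial[i] += delta
    return cost(trial), i, delta

def greedy_async(state, workers=4, rounds=200):
    moves = list(itertools.product(range(len(state)), (-1, 1)))
    with Pool(workers) as pool:
        for _ in range(rounds):
            random.shuffle(moves)
            improved = False
            # results arrive in completion order; no per-iteration barrier
            for c, i, delta in pool.imap_unordered(
                    score_move, [(state, m) for m in moves]):
                if c < cost(state):             # first-improvement acceptance
                    state[i] += delta
                    improved = True
                    break                       # remaining results are ignored
            if not improved:
                return state
    return state

if __name__ == "__main__":
    print(greedy_async([0, 0, 0, 0, 0]))
```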

Relevance: 20.00%

Abstract:

This article argues that two movements in constant interplay operate within the historical trajectory of the Spanish language: the localization that becomes globalized and the globalization that becomes localized. Equally, this article illustrates how, at the same time that Spanish is expanding in the world, new idiosyncratic and localized forms of the language are emerging. This article deals with the issues of standardization and language ideology, language contact, and redefinition of identities. The article focuses on three geographic loci: Spain, where Spanish opposes Catalan, Basque, and Galician; the United States, where migrants' Spanish dialects converge and confront English and each other; and finally, Latin America, where Spanish is in contact with Portuguese, indigenous, and Afro-Hispanic languages. The concepts that structure the discussion explain both language expansion and contraction as well as the conflict and constant negotiation between a language's standardized forms and its regional and social varieties.

Relevance: 20.00%

Abstract:

The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while considering both the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a PseudoGenetic Algorithm (PGA), Particle Swarm Optimization (PSO), Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that the PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency rate provides a neutral strategy for comparing optimization algorithms and may prove useful for selecting the most appropriate algorithm for different types of optimization problems.
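The abstract does not reproduce the formula for E, so the stand-in below only illustrates the idea of an efficiency rate that rewards solution quality and penalises computational effort; the weighting and the sample runs are invented:

```python
def efficiency_rate(best_known, runs, effort_scale=1e5):
    """runs: list of (final_cost, evaluations) for one algorithm/network."""
    scores = []
    for cost, evals in runs:
        quality = best_known / cost            # 1.0 when the optimum is hit
        effort = evals / effort_scale          # normalised computational effort
        scores.append(quality / (1.0 + effort))
    return sum(scores) / len(scores)

# e.g. PGA-like vs HS-like behaviour on one benchmark (numbers invented)
pga_runs = [(6.12e6, 80_000), (6.20e6, 75_000)]
hs_runs  = [(6.30e6, 30_000), (6.45e6, 28_000)]
print(efficiency_rate(6.08e6, pga_runs), efficiency_rate(6.08e6, hs_runs))
```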

Relevance: 20.00%

Abstract:

This special issue is focused on the assessment of algorithms for the observation of Earth's climate from environmental satellites. Climate data records derived by remote sensing are increasingly a key source of insight into the workings of and changes in Earth's climate system. Producers of data sets must devote considerable effort and expertise to maximise the true climate signals in their products and minimise effects of data processing choices and changing sensors. A key choice is the selection of algorithm(s) for classification and/or retrieval of the climate variable. Within the European Space Agency Climate Change Initiative, science teams undertook systematic assessment of algorithms for a range of essential climate variables. The papers in the special issue report some of these exercises (for ocean colour, aerosol, ozone, greenhouse gases, clouds, soil moisture, sea surface temperature and glaciers). The contributions show that assessment exercises must be designed with care, considering issues such as the relative importance of different aspects of data quality (accuracy, precision, stability, sensitivity, coverage, etc.), the availability and degree of independence of validation data and the limitations of validation in characterising some important aspects of data (such as long-term stability or spatial coherence). As well as requiring a significant investment of expertise and effort, systematic comparisons are found to be highly valuable. They reveal the relative strengths and weaknesses of different algorithmic approaches under different observational contexts, and help ensure that scientific conclusions drawn from climate data records are not influenced by observational artifacts, but are robust.