129 results for Adaptive large neighborhood search
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The design of binary morphological operators that are translation-invariant and locally defined by a finite neighborhood window corresponds to the problem of designing Boolean functions. As in any supervised classification problem, morphological operators designed from a training sample also suffer from overfitting. Large neighborhoods tend to degrade the performance of the designed operator. This work proposes a multilevel design approach to deal with the issue of designing large-neighborhood-based operators. The main idea is inspired by stacked generalization (a multilevel classifier design approach) and consists of combining, at each training level, the outcomes of the previous-level operators. The final operator is a multilevel operator that ultimately depends on a larger neighborhood than that of the individual operators being combined. Experimental results show that two-level operators obtained by combining operators designed on subwindows of a large window consistently outperform the single-level operators designed on the full window. They also show that iterating two-level operators is an effective multilevel approach to obtaining better results.
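The sketch below illustrates the two-level idea only in general terms: off-the-shelf decision trees stand in for the designed Boolean operators, a single synthetic image pair stands in for the training sample, and, unlike a proper multilevel protocol, the second level is trained on the same sample as the first. Names such as extract_patches and the choice of subwindows are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_patches(image, window):
    """Collect the pixel values seen through `window` (a list of (dy, dx)
    offsets) at every interior position of a binary image."""
    h, w = image.shape
    rows = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            rows.append([image[y + dy, x + dx] for dy, dx in window])
    return np.array(rows)

# Hypothetical training pair: an observed (noisy) image and its ideal output.
rng = np.random.default_rng(0)
ideal = (rng.random((64, 64)) > 0.5).astype(int)
observed = np.logical_xor(ideal, rng.random((64, 64)) > 0.9).astype(int)
target = ideal[1:-1, 1:-1].ravel()

# Level 1: operators designed on 2x2 subwindows of a larger 3x3 window.
subwindows = [
    [(-1, -1), (-1, 0), (0, -1), (0, 0)],   # upper-left subwindow
    [(-1, 0), (-1, 1), (0, 0), (0, 1)],     # upper-right subwindow
    [(0, -1), (0, 0), (1, -1), (1, 0)],     # lower-left subwindow
    [(0, 0), (0, 1), (1, 0), (1, 1)],       # lower-right subwindow
]
level1_outputs = []
for win in subwindows:
    X = extract_patches(observed, win)
    clf = DecisionTreeClassifier().fit(X, target)  # stand-in for a designed operator
    level1_outputs.append(clf.predict(X))

# Level 2: combine the level-1 outcomes, in the spirit of stacked generalization.
X2 = np.column_stack(level1_outputs)
level2 = DecisionTreeClassifier().fit(X2, target)
print("two-level training accuracy:", level2.score(X2, target))
```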
Abstract:
The globular cluster HP 1 is projected on the bulge, very close to the Galactic center. The Multi-Conjugate Adaptive Optics Demonstrator on the Very Large Telescope allowed us to acquire high-resolution deep images that, combined with first-epoch New Technology Telescope data, enabled us to derive accurate proper motions. The stellar contents of the cluster and bulge fields were disentangled through this process, producing color-magnitude diagrams of this cluster with unprecedented definition. The metallicity of [Fe/H] ≈ -1.0 from previous spectroscopic analysis is confirmed, which, together with an extended blue horizontal branch, implies an age older than the halo average. Orbit reconstruction results suggest that HP 1 is spatially confined within the bulge.
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in a rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. This problem is approached with a heuristic based on Simulated Annealing (SA) with an adaptive neighborhood. The objective function is evaluated in a constructive approach, where the items are placed sequentially. The placement is governed by three different types of parameters: the sequence of placement, the rotation angle, and the translation. The rotation and translation applied to each polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem. It is thus necessary to control both cyclic continuous and discrete parameters. The approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
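As a rough illustration of an adaptive-neighborhood SA loop (not the authors' placement heuristic), the sketch below perturbs one parameter per iteration of a toy continuous cost function and rescales each parameter's step size from its recent acceptance rate; the target acceptance rate of 0.44 and the toy cost are assumptions.

```python
import math
import random

def cost(x):
    """Toy cost function standing in for the constructive placement evaluation."""
    return sum((xi - 1.0) ** 2 + 0.1 * math.sin(5 * xi) for xi in x)

def adaptive_sa(dim=4, iters=5000, t0=1.0, alpha=0.999):
    x = [random.uniform(-2, 2) for _ in range(dim)]
    fx = cost(x)
    step = [1.0] * dim          # per-parameter neighborhood size ("sensitivity")
    accepted = [0] * dim
    tried = [0] * dim
    t = t0
    for it in range(1, iters + 1):
        i = random.randrange(dim)              # perturb one parameter per iteration
        cand = list(x)
        cand[i] += random.gauss(0.0, step[i])
        fc = cost(cand)
        tried[i] += 1
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            accepted[i] += 1
        if it % 100 == 0:                       # adapt each step size toward ~44% acceptance
            for j in range(dim):
                if tried[j]:
                    rate = accepted[j] / tried[j]
                    step[j] *= 1.1 if rate > 0.44 else 0.9
                accepted[j] = tried[j] = 0
        t *= alpha
    return x, fx

print(adaptive_sa())
```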
Abstract:
Simulated annealing (SA) is an optimization technique that can process cost functions exhibiting various degrees of nonlinearity, discontinuity, and stochasticity. It can also handle arbitrary boundary conditions and constraints imposed on these cost functions. The SA technique is applied here to the problem of robot path planning. Three situations are considered: the path is represented as a polyline, as a Bezier curve, or as a spline-interpolated curve. In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate. (C) 2010 Elsevier Ltd. All rights reserved.
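A minimal sketch of how one candidate path might be represented and scored, assuming a Bezier-curve representation, circular obstacles, and a length-plus-penalty cost; an SA loop such as the one sketched above would then perturb the interior control points. All names and constants here are illustrative.

```python
import math

def bezier_point(ctrl, t):
    """De Casteljau evaluation of a Bezier curve defined by control points."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def path_cost(ctrl, obstacles, samples=100):
    """Length of the sampled curve plus a penalty for entering circular obstacles."""
    pts = [bezier_point(ctrl, i / samples) for i in range(samples + 1)]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    penalty = 0.0
    for (cx, cy, r) in obstacles:
        for (x, y) in pts:
            d = math.hypot(x - cx, y - cy)
            if d < r:
                penalty += r - d
    return length + 100.0 * penalty

# Endpoints are fixed; SA would perturb the interior control points.
ctrl = [(0.0, 0.0), (2.0, 3.0), (5.0, 1.0), (8.0, 4.0)]
obstacles = [(4.0, 2.0, 1.0)]   # (center_x, center_y, radius)
print(path_cost(ctrl, obstacles))
```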
Abstract:
The relatively large number of nearby radio-quiet and thermally emitting isolated neutron stars (INSs) discovered in the ROSAT All-Sky Survey, dubbed the "Magnificent Seven", suggests that they belong to a formerly neglected major component of the overall INS population. So far, attempts to discover similar INSs beyond the solar vicinity have failed to confirm any reliable candidate. The good positional accuracy and soft X-ray sensitivity of the EPIC cameras onboard the XMM-Newton satellite allow us to efficiently search for new thermally emitting INSs. We used the 2XMMp catalogue to select sources with no catalogued candidate counterparts and with X-ray spectra similar to those of the Magnificent Seven, but seen at greater distances and thus undergoing higher interstellar absorption. Identifications in more than 170 astronomical catalogues and visual screening allowed us to select fewer than 30 good INS candidates. In order to rule out alternative identifications, we obtained deep ESO-VLT and SOAR optical imaging for the X-ray-brightest candidates. We report here on the optical follow-up results of our search and discuss the possible nature of 8 of our candidates. A high X-ray-to-optical flux ratio together with a stable flux and soft X-ray spectrum make the brightest source of our sample, 2XMM J104608.7-594306, a newly discovered thermally emitting INS. The X-ray source 2XMM J010642.3+005032 has no evident optical counterpart and should be further investigated. The remaining X-ray sources are most probably identified with cataclysmic variables and active galactic nuclei, as inferred from the colours and flux ratios of their likely optical counterparts. Beyond finding new thermally emitting INSs, our study aims to constrain the space density of this Galactic population at great distances and to determine whether their apparently high density is a local anomaly.
Abstract:
We consider a model where sterile neutrinos can propagate in a large compactified extra dimension, giving rise to Kaluza-Klein (KK) modes, while the standard model left-handed neutrinos are confined to a 4-dimensional spacetime brane. The KK modes mix with the standard neutrinos, modifying their oscillation pattern. We examine former and current experiments such as CHOOZ, KamLAND, and MINOS to estimate the impact of the possible presence of such KK modes on the determination of the neutrino oscillation parameters and simultaneously obtain limits on the size of the largest extra dimension. We found that the presence of the KK modes does not essentially improve the quality of the fit compared to the case of standard oscillation. By combining the results from CHOOZ, KamLAND, and MINOS, in the limit of a vanishing lightest neutrino mass, we obtain a stronger bound on the size of the extra dimension of ≈ 1.0 (0.6) μm at 99% C.L. for the normal (inverted) mass hierarchy. If the lightest neutrino mass turns out to be larger, 0.2 eV for example, we obtain a bound of ≈ 0.1 μm. We also discuss the expected sensitivities to the size of the extra dimension for future experiments such as Double CHOOZ, T2K, and NOνA.
Abstract:
We consider the problem of interaction neighborhood estimation from the partial observation of a finite number of realizations of a random field. We introduce a model selection rule to choose estimators of conditional probabilities among natural candidates. Our main result is an oracle inequality satisfied by the resulting estimator. We then use this selection rule in a two-step procedure to evaluate the interacting neighborhoods: the selection rule chooses a small prior set of possible interacting points, and a cutting step removes the irrelevant points from this prior set. We also prove that Ising models satisfy the assumptions of the main theorems, without restrictions on the temperature, on the structure of the interaction graph, or on the range of the interactions, which provides a large class of applications for our results. We give a computationally efficient procedure for these models and finally show the practical efficiency of our approach in a simulation study.
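Purely as an illustration of selecting an interaction neighborhood by penalized conditional likelihood (not the paper's estimator, oracle inequality, or cutting step), the sketch below scores candidate neighborhoods of one site of a synthetic binary field; the penalty form and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: n independent samples of a binary field on 5 sites,
# where site 0 actually depends only on sites 1 and 2.
n = 5000
X = rng.integers(0, 2, size=(n, 5))
logits = 1.5 * (2 * X[:, 1] - 1) - 1.0 * (2 * X[:, 2] - 1)
X[:, 0] = rng.random(n) < 1 / (1 + np.exp(-logits))

def penalized_loglik(X, target, neigh, penalty=0.5):
    """Empirical conditional log-likelihood of `target` given the sites in `neigh`,
    minus a complexity penalty growing with the candidate neighborhood size."""
    keys = [tuple(row) for row in X[:, list(neigh)]]
    counts = {}
    for k, y in zip(keys, X[:, target]):
        c = counts.setdefault(k, [1.0, 1.0])   # Laplace-smoothed counts [zeros, ones]
        c[y] += 1
    ll = 0.0
    for k, y in zip(keys, X[:, target]):
        c = counts[k]
        ll += np.log(c[y] / (c[0] + c[1]))
    return ll - penalty * (2 ** len(neigh)) * np.log(len(X))

candidates = [[], [1], [2], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
best = max(candidates, key=lambda nb: penalized_loglik(X, 0, nb))
print("selected neighborhood of site 0:", best)
```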
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a running time that is sublinear in the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
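The following sketch only illustrates the subpopulation-table idea in the abstract sense: one table per objective, each keeping the individuals that are best for that objective, with parents drawn from a randomly chosen table. It uses a toy bi-objective function and plain real-valued mutation, not the node-depth encoding or the authors' constraint handling.

```python
import random

def objectives(x):
    """Toy bi-objective problem standing in for, e.g., power loss and switching effort."""
    f1 = sum(xi ** 2 for xi in x)
    f2 = sum((xi - 2.0) ** 2 for xi in x)
    return f1, f2

def evolve(dim=3, table_size=10, generations=500):
    tables = [[], []]                       # one subpopulation table per objective

    def insert(ind):
        fs = objectives(ind)
        for table, f in zip(tables, fs):    # each table is ranked by its own objective
            table.append((f, ind))
            table.sort(key=lambda e: e[0])
            del table[table_size:]

    for _ in range(4 * table_size):         # random initial population
        insert([random.uniform(-3, 5) for _ in range(dim)])
    for _ in range(generations):
        parent = random.choice(random.choice(tables))[1]   # parent from a random table
        child = [xi + random.gauss(0, 0.3) for xi in parent]
        insert(child)
    return tables

tables = evolve()
print("best value per objective:", tables[0][0][0], tables[1][0][0])
```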
Abstract:
This paper presents a novel adaptive control scheme, with improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Finally, by providing the desired order vs. engine speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary path model and then experimentally validated with the aid of a vehicle mockup excited with engine noise. The engine excitation is provided by a real-time sound quality equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (> 15 s) thanks to the improved convergence rate; this margin, however, gets narrower with shorter run-ups (<= 10 s). (c) 2010 Elsevier Ltd. All rights reserved.
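For reference, a bare-bones filtered-X LMS loop cancelling a single harmonic is sketched below, assuming a known three-tap FIR secondary-path model and a tachometer-like sinusoidal reference; the paper's convergence-rate modifications and equalization (order-level tuning) features are not reproduced.

```python
import numpy as np

fs = 8000                                      # sample rate [Hz]
n = 4000
t = np.arange(n) / fs
disturbance = np.sin(2 * np.pi * 120 * t)      # harmonic at the error microphone
reference = np.sin(2 * np.pi * 120 * t + 0.3)  # tachometer-derived reference signal

secondary = np.array([0.5, 0.3, 0.1])          # assumed FIR model of the secondary path
L = 16                                         # adaptive filter length
mu = 0.01
w = np.zeros(L)

x_hist = np.zeros(L)                           # reference history
y_hist = np.zeros(len(secondary))              # control-signal history
xf_hist = np.zeros(L)                          # filtered-reference history
err = np.zeros(n)

for k in range(n):
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = reference[k]
    y = w @ x_hist                             # control signal from the adaptive filter
    y_hist = np.roll(y_hist, 1)
    y_hist[0] = y
    e = disturbance[k] + secondary @ y_hist    # residual measured at the error sensor
    xf = secondary @ x_hist[:len(secondary)]   # reference filtered through the path model
    xf_hist = np.roll(xf_hist, 1)
    xf_hist[0] = xf
    w -= mu * e * xf_hist                      # filtered-X LMS update
    err[k] = e

print("RMS error, first vs last 500 samples:",
      np.sqrt(np.mean(err[:500] ** 2)), np.sqrt(np.mean(err[-500:] ** 2)))
```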
Abstract:
This paper proposes a new definition of the fundamental query operation under the Adaptive Formalism, one capable of locating functional nuclei from descriptions of their semantics. To demonstrate the method's applicability, an implementation of the query procedure constrained to a specific class of devices is shown, and its asymptotic computational complexity is discussed.
Abstract:
We propose a robust and low-complexity scheme to estimate and track carrier frequency from signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieving the frequency content by a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate through extensive simulations that adaptive linear prediction methods render a robust and competitive frequency tracking technique.
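A minimal sketch of a convex combination of two adaptive linear predictors, assuming plain LMS predictors with fast and slow step sizes and the usual sigmoid-parameterized mixing weight adapted on the overall prediction error; the full predictor bank, bias compensation, and FFT-search frequency retrieval described in the abstract are not included.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
f = 0.05 + 0.02 * np.arange(n) / n                 # slowly drifting normalized frequency
x = np.cos(2 * np.pi * np.cumsum(f)) + 0.5 * rng.standard_normal(n)

L = 8                                              # predictor order
w_fast = np.zeros(L)
w_slow = np.zeros(L)
mu_fast, mu_slow, mu_a = 0.05, 0.005, 1.0
a = 0.0                                            # mixing parameter, lam = sigmoid(a)

errors = np.zeros(n)
for k in range(L, n):
    u = x[k - L:k][::-1]                           # past samples, most recent first
    y_fast = w_fast @ u
    y_slow = w_slow @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y_fast + (1 - lam) * y_slow          # convex combination of the two predictors
    e = x[k] - y
    w_fast += mu_fast * (x[k] - y_fast) * u        # each predictor adapts on its own error
    w_slow += mu_slow * (x[k] - y_slow) * u
    a += mu_a * e * (y_fast - y_slow) * lam * (1 - lam)   # combiner adapts on the overall error
    a = float(np.clip(a, -4.0, 4.0))               # keep the mixing parameter in a sane range
    errors[k] = e

print("overall prediction MSE:", np.mean(errors[L:] ** 2))
```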
Abstract:
Image reconstruction using the EIT (Electrical Impedance Tomography) technique is a nonlinear and ill-posed inverse problem that demands a powerful direct or iterative method. A typical approach for solving the problem is to minimize an error functional using an iterative method. In this case, an initial solution close enough to the global minimum is mandatory to ensure convergence to the correct minimum in an appropriate time interval. The aim of this paper is to present a new, simple, and low-cost technique (quadrant-searching) to reduce the search space and consequently obtain an initial solution for the inverse problem of EIT. This technique calculates the error functional for four different contrast distributions, each placing a large prospective inclusion in one of the four quadrants of the domain. By comparing the four values of the error functional, it is possible to draw conclusions about the internal electric contrast. To this end, we initially performed tests to assess the accuracy of the BEM (Boundary Element Method) when applied to the direct problem of EIT and to verify the behavior of the error functional surface in the search space. Finally, numerical tests were performed to verify the new technique.
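The comparison logic of a quadrant-searching step might look like the sketch below. The BEM forward solver is replaced by a placeholder linear operator purely so the code runs; the grid size, contrast values, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def forward_model(contrast):
    """Placeholder for the BEM forward solver: maps a contrast image to boundary
    measurements. Here it is a fixed random linear operator, only so the sketch runs."""
    rng = np.random.default_rng(42)
    A = rng.standard_normal((32, contrast.size))
    return A @ contrast.ravel()

def error_functional(contrast, measured):
    return np.sum((forward_model(contrast) - measured) ** 2)

def quadrant_search(measured, grid=32, background=1.0, inclusion=2.0):
    """Evaluate the error functional with one large inclusion placed in each
    quadrant and return the quadrant with the smallest error."""
    h = grid // 2
    quadrants = {"upper-left": (slice(0, h), slice(0, h)),
                 "upper-right": (slice(0, h), slice(h, None)),
                 "lower-left": (slice(h, None), slice(0, h)),
                 "lower-right": (slice(h, None), slice(h, None))}
    scores = {}
    for name, (rows, cols) in quadrants.items():
        contrast = np.full((grid, grid), background)
        contrast[rows, cols] = inclusion
        scores[name] = error_functional(contrast, measured)
    return min(scores, key=scores.get), scores

# Synthetic "measured" data from a true inclusion in the lower-right quadrant.
true = np.full((32, 32), 1.0)
true[16:, 16:] = 2.0
best, scores = quadrant_search(forward_model(true))
print(best)
```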
Abstract:
Immunological systems have been an abundant source of inspiration for contemporary computer scientists. Problem-solving strategies stemming from known immune system phenomena have been successfully applied to challenging problems of modern computing. Simulation systems and mathematical modeling are also beginning to be used to answer more complex immunological questions, such as the immune memory process and the duration of vaccines, whose regulation mechanisms are not yet sufficiently understood (Lundegaard, Lund, Kesmir, Brunak, Nielsen, 2007). In this article we study in machina an approach to simulating the process of antigenic mutation and its implications for the memory process. Our results suggest that the durability of immune memory is affected by the process of antigenic mutation and by the populations of soluble antibodies in the blood. The results also strongly suggest that a decrease in antibody production favors the global maintenance of immune memory.
Abstract:
We employ the recently installed near-infrared Multi-Conjugate Adaptive Optics Demonstrator (MAD) to determine the basic properties of a newly identified, old and distant Galactic open cluster (FSR 1415). The MAD facility remarkably approaches the diffraction limit, reaching a resolution of 0.07 arcsec (in K) that is also uniform over a field of ≈ 1.8 arcmin in diameter. The MAD facility provides photometry that is 50 per cent complete at K ≈ 19. This corresponds to about 2.5 mag below the cluster main-sequence turn-off. This high-quality data set allows us to derive an accurate heliocentric distance of 8.6 kpc, a metallicity close to solar, and an age of ≈ 2.5 Gyr. On the other hand, the depth of the data allows us to reconstruct (completeness-corrected) mass functions (MFs) indicating a relatively massive cluster with a flat core MF. The Very Large Telescope/MAD capabilities will therefore provide fundamental data for identifying and analysing other faint and distant open clusters in the third and fourth Galactic quadrants.
Abstract:
A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data by the use of data analysis techniques. Clustering plays an important role in data analysis by organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature; however, each algorithm has its own bias, being more adequate for particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data. Moreover, it presents a clustering algorithm to solve this formulation that uses GRASP (Greedy Randomized Adaptive Search Procedure). We compared the proposed algorithm with three other well-known algorithms, and the proposed algorithm presented the best clustering results, as confirmed statistically. (C) 2009 Elsevier Ltd. All rights reserved.
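A generic GRASP loop for center-based clustering is sketched below: a greedy randomized construction picks k centers from a restricted candidate list of the points farthest from the current centers, and a swap-based local search then refines them. This is a textbook GRASP illustration under assumed parameters (alpha, iteration count), not the paper's formulation or algorithm.

```python
import random

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def total_cost(points, centers):
    """Sum of squared distances from each point to its nearest chosen center."""
    return sum(min(sqdist(p, points[c]) for c in centers) for p in points)

def grasp_clustering(points, k, iterations=20, alpha=0.3, seed=0):
    random.seed(seed)
    n = len(points)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        # Greedy randomized construction: add centers drawn from a restricted
        # candidate list of the points farthest from the current centers.
        centers = [random.randrange(n)]
        while len(centers) < k:
            dists = [(min(sqdist(points[i], points[c]) for c in centers), i)
                     for i in range(n) if i not in centers]
            dists.sort(reverse=True)
            rcl = dists[:max(1, int(alpha * len(dists)))]
            centers.append(random.choice(rcl)[1])
        # Local search: swap a chosen center with a non-center point while it improves.
        cost = total_cost(points, centers)
        improved = True
        while improved:
            improved = False
            for i in range(n):
                if i in centers:
                    continue
                for j in range(k):
                    trial = centers[:j] + [i] + centers[j + 1:]
                    c = total_cost(points, trial)
                    if c < cost:
                        centers, cost, improved = trial, c, True
                        break
                if improved:
                    break
        if cost < best_cost:
            best, best_cost = centers, cost
    return best, best_cost

# Toy two-cluster dataset standing in for biological feature vectors.
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)] + \
      [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(30)]
print(grasp_clustering(pts, k=2))
```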