902 results for Inversion algorithms


Relevance: 20.00%

Abstract:

The problem of optimally designing multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems where the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations, while full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized. Standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations generally differ in size from their initial sizes; the process repeats, progressively increasing the size of the sub-populations of fitter solutions. Both techniques are applied to several MGADSM problems. They can determine, in an optimal sense, the number of swing-bys, the planets to swing by, the launch and arrival dates, and the number of deep space maneuvers as well as their locations, magnitudes, and directions. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the Global Trajectory Optimization Competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved. The J2 perturbation and zonal coverage are considered in designing repeated Sun-synchronous orbits. Finally, a new family of orbits, the repeated shadow track orbits (RSTO), is introduced, whose parameters are optimized such that the shadow of a spacecraft on the Earth revisits the same locations periodically, every desired number of days.
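The hidden-genes mechanism fits in a few lines. Below is a minimal, illustrative Python sketch, not the authors' implementation: every chromosome keeps the same fixed length, a mask marks which genes are effective, and hidden genes ride along through crossover but are skipped by the cost function. The toy cost function and population settings are assumptions made for illustration.

```python
# Hidden-genes idea: fixed-length chromosomes, variable number of
# effective genes, hidden genes excluded from cost evaluation.
import random

CHROM_LEN = 10  # fixed length shared by all solutions

def make_individual():
    genes = [random.uniform(-1.0, 1.0) for _ in range(CHROM_LEN)]
    n_eff = random.randint(1, CHROM_LEN)          # solution-dependent size
    mask = [i < n_eff for i in range(CHROM_LEN)]  # True = effective gene
    return genes, mask

def cost(genes, mask):
    # Only effective genes contribute; hidden genes are ignored, so
    # individuals with different effective sizes remain comparable.
    return sum(g * g for g, m in zip(genes, mask) if m)

def crossover(parent_a, parent_b):
    # Standard one-point crossover over the full-length chromosome;
    # hidden genes take part and may become effective in offspring.
    (ga, ma), (gb, mb) = parent_a, parent_b
    cut = random.randint(1, CHROM_LEN - 1)
    return ga[:cut] + gb[cut:], ma[:cut] + mb[cut:]

pop = [make_individual() for _ in range(20)]
best = min(pop, key=lambda ind: cost(*ind))
child_genes, child_mask = crossover(pop[0], pop[1])
print(cost(*best), cost(child_genes, child_mask))
```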

Relevance: 20.00%

Abstract:

In the memory antisaccade task, subjects are instructed to look at an imaginary point exactly opposite a peripheral visual stimulus presented a short time earlier. To perform this task accurately, the visual vector, i.e., the distance between a central fixation point and the peripheral stimulus, must be inverted from one visual hemifield to the other. Recent data in humans and monkeys suggest that the posterior parietal cortex (PPC) may be critically involved in this process of visual vector inversion. In the present study, we investigated the temporal dynamics of visual vector inversion in the human PPC using transcranial magnetic stimulation (TMS). In six healthy subjects, single-pulse TMS was applied over the right PPC during a memory antisaccade task at four different time intervals: 100 ms, 217 ms, 333 ms, or 450 ms after target onset. The results indicate that for rightward antisaccades, i.e., when the visual target was presented in the left half of the screen, TMS had a significant effect on saccade gain when applied 100 ms after target onset, but not later. For leftward antisaccades, i.e., when the visual target was presented in the right half of the screen, a significant TMS effect on gain was found for the 333 ms and 450 ms conditions, but not for the earlier ones. This double dissociation of saccade gain suggests that the initial process of vector inversion can be disrupted 100 ms after onset of the visual stimulus, and that at 333 ms and 450 ms after stimulus onset TMS interfered with motor saccade planning based on the inverted vector signal.
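For readers unfamiliar with the measure, a small sketch of how saccade gain relates to the inverted visual vector may help. The definitions below are standard conventions assumed here, not the study's analysis code.

```python
# Assumed definitions: the correct antisaccade endpoint is the mirror of
# the target about fixation, and gain is the ratio of the executed
# saccade amplitude to that required (inverted) amplitude.
def antisaccade_gain(fixation_x, target_x, endpoint_x):
    required = -(target_x - fixation_x)   # inverted visual vector
    executed = endpoint_x - fixation_x    # actual saccade vector
    return executed / required

# Target 10 deg left of fixation; saccade lands 8.5 deg right -> gain 0.85.
print(antisaccade_gain(0.0, -10.0, 8.5))
```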

Relevance: 20.00%

Abstract:

All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Military and civilian surveillance, gun-sighting, and target identification systems all depend on terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and is compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions; each set consists of 1000 simulated, turbulence-degraded images. The mean-square-error (MSE) performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the MSE performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic, turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each, for low, moderate, and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent, and 27 percent are achieved using speckle imaging methods with 25 input frames, and of 38 percent, 34 percent, and 33 percent, respectively, for the MFBD method with 150 input frames. The MFBD estimator is applied to three sets of field data and the results are presented. Finally, a combined bispectrum-MFBD hybrid estimator is proposed and investigated; this technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
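Under the Gaussian noise model, one common MFBD building block admits a closed-form object update once the PSF estimates are held fixed. The Python sketch below illustrates that idea under stated assumptions; it is not the dissertation's estimator, which parameterizes the PSFs with Zernike polynomial terms and alternates PSF and object updates.

```python
# Gaussian-model object update across frames, done in the Fourier domain.
# The ridge term `eps` and the toy demo at the bottom are assumptions.
import numpy as np

def update_object(frames, psfs, eps=1e-3):
    # frames, psfs: lists of equally sized 2-D arrays (one PSF per frame).
    num = np.zeros_like(np.fft.fft2(frames[0]))
    den = np.zeros(frames[0].shape)
    for d, h in zip(frames, psfs):
        H = np.fft.fft2(np.fft.ifftshift(h))
        num += np.conj(H) * np.fft.fft2(d)   # sum_k H_k^* D_k
        den += np.abs(H) ** 2                # sum_k |H_k|^2
    # Wiener-like closed form of the Gaussian ML object estimate.
    return np.real(np.fft.ifft2(num / (den + eps)))

# Toy usage: identical frames blurred by a small Gaussian PSF.
x, y = np.meshgrid(np.arange(64) - 32, np.arange(64) - 32)
psf = np.exp(-(x ** 2 + y ** 2) / 8.0)
psf /= psf.sum()
obj = np.zeros((64, 64))
obj[30:34, 30:34] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
est = update_object([blurred] * 5, [psf] * 5)
print(np.abs(est - obj).max())
```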

Relevance: 20.00%

Abstract:

Within Yellowstone National Park, Wyoming, the silicic Yellowstone volcanic field is one of the most active volcanic systems in the world. Although the last rhyolite eruption occurred around 70,000 years ago, Yellowstone is still believed to be volcanically active, owing to its high hydrothermal and seismic activity. The earthquake data used in this study cover the period from 1988 to 2010. Earthquake relocations and a set of 369 well-constrained, double-couple, focal mechanism solutions were computed. Events were grouped according to location and time to investigate trends in faulting. The majority of the events have oblique, normal-faulting solutions. The overall direction of extension throughout the 0.64 Ma Yellowstone caldera is roughly ENE, consistent with the alignment of volcanic vents within the caldera, but detailed study reveals spatial and temporal variations. Stress-field solutions for different areas and time periods were calculated from inversion of the earthquake focal mechanisms. A well-resolved rotation of σ3 was found, from NNE-SSW near the Hebgen Lake fault zone to ENE-WSW near Norris Junction. In particular, the σ3 direction in the Norris Junction area changed over the years, from ENE-WSW, as calculated by Waite and Smith (2004), to NNE-SSW, while the other σ3 directions remain largely unchanged over time. The Yellowstone caldera was subject to periods of net uplift and subsidence over the past century, explained in previous studies as being caused by expanding or contracting sills at different depths. Based on the models used to explain these deformation periods, we investigated the relationship between variability in aseismic deformation and seismic activity and faulting styles. Focal mechanisms and P and T axes were divided into temporal and depth intervals in order to identify spatial or temporal trends in deformation. The presence of “chocolate tablet” structures with composite dilational faults was identified in many stages of the deformation history, both in the Norris Geyser Basin area and inside the caldera. Movement with a strike-slip component was found in a depth interval below a contracting sill, indicating movement of magma towards the caldera.
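As a small illustration of the focal-mechanism machinery such a study builds on, the sketch below computes the P and T axes of a double-couple solution from strike, dip, and rake, using the standard Aki & Richards (north, east, down) conventions. It is illustrative only and is not the study's inversion code.

```python
# For a double couple, T = (n + s)/sqrt(2) and P = (n - s)/sqrt(2),
# where n is the fault normal and s the slip vector.
import numpy as np

def pt_axes(strike_deg, dip_deg, rake_deg):
    phi, delta, lam = np.radians([strike_deg, dip_deg, rake_deg])
    n = np.array([-np.sin(delta) * np.sin(phi),
                   np.sin(delta) * np.cos(phi),
                  -np.cos(delta)])                 # fault normal (NED)
    s = np.array([np.cos(lam) * np.cos(phi)
                  + np.cos(delta) * np.sin(lam) * np.sin(phi),
                  np.cos(lam) * np.sin(phi)
                  - np.cos(delta) * np.sin(lam) * np.cos(phi),
                  -np.sin(lam) * np.sin(delta)])   # slip vector (NED)
    t = (n + s) / np.linalg.norm(n + s)            # tension axis
    p = (n - s) / np.linalg.norm(n - s)            # pressure axis
    return p, t

# Example: a pure normal fault striking north (rake = -90) yields a
# steeply plunging P axis and a shallow E-W T axis.
p, t = pt_axes(0.0, 60.0, -90.0)
print(np.round(p, 3), np.round(t, 3))
```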

Relevance: 20.00%

Abstract:

Heuristic optimization algorithms are of great importance for solving various real-world problems, with a wide range of applications in areas such as cost reduction, artificial intelligence, and medicine. Here, the term cost refers to, for instance, the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of such a function in order to achieve an optimum, or to maximize another parameter that increases as the cost decreases. Heuristic cost-reduction algorithms work by finding the values of the independent variables for which the function value (the “cost”) is minimal. There is an abundance of heuristic cost-reduction algorithms to choose from. We start with a discussion of various optimization algorithms, such as memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we take up the workings of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the workings of three different heuristic optimization algorithms and to conclude with a comparative study of their performance when implemented in MATLAB. The three algorithms considered in this report are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random-restart hill climbing algorithm. These algorithms are heuristic in nature: the solutions they reach may not be the best of all possible solutions, but they provide a reasonably good solution quickly, without taking an indefinite amount of time.
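The report's implementations are in MATLAB; as a language-neutral illustration, here is a minimal non-adaptive simulated annealing sketch in Python. The cost function, neighborhood step, and geometric cooling schedule are illustrative assumptions, not the report's settings.

```python
# Non-adaptive simulated annealing: accept all improvements, accept
# uphill moves with a probability that shrinks as the temperature cools.
import math
import random

def anneal(cost, x0, t0=1.0, alpha=0.95, steps=5000):
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.5)   # neighbor of current state
        fc = cost(cand)
        # Metropolis acceptance rule: exp(-(fc - fx) / t) for uphill moves.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric (non-adaptive) cooling schedule
    return best, fbest

# Multimodal test function with many local minima near x = 0.
print(anneal(lambda x: x * x + 10 * math.sin(3 * x), x0=4.0))
```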

Relevance: 20.00%

Abstract:

Tracking a user’s visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces and dialogue-based communication with virtual and real agents greatly benefit from analyzing the user’s visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers; hence, they are restricted to interpreting two-dimensional fixations relative to a defined projection area. The study presented in this article compares the precision, accuracy, and application performance of two binocular eye tracking devices. Two algorithms are compared which derive the depth information required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used to disambiguate partly occluded objects.
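A common way to derive depth from binocular gaze data, plausibly related to (though not necessarily identical with) the compared algorithms, is to intersect the two gaze rays: since they rarely meet exactly, one takes the midpoint at their closest approach. Below is a sketch under assumed eye positions and gaze directions.

```python
# Closest-approach midpoint of the left and right gaze rays p + t*d.
import numpy as np

def gaze_depth_point(p_left, d_left, p_right, d_right):
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w = p_left - p_right
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # ~0 when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    # Midpoint between the closest points on the two rays.
    return 0.5 * ((p_left + t1 * d1) + (p_right + t2 * d2))

# Eyes 6.5 cm apart, both verging on a point 50 cm ahead (meters).
left, right = np.array([-0.0325, 0, 0]), np.array([0.0325, 0, 0])
target = np.array([0.0, 0.0, 0.5])
print(gaze_depth_point(left, target - left, right, target - right))
```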

Relevance: 20.00%

Abstract:

Two methods for registering laser scans of human heads and transforming them to a new, semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point (ICP) framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser scans using the first algorithm; it directly optimizes the pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split into regions each described by an individual subspace, is also addressed. Mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry in incomplete laser scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small geometry, which is known as “shape completion”. The importance of regularization in the case of extreme shape completion is shown.
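Both methods build on ICP, whose core iteration is compact: match each template vertex to its nearest scan point, then solve for the best rigid transform in closed form (Kabsch/SVD). The sketch below shows one such iteration under assumed inputs; the landmark re-weighting and thin-plate spline deformation of the first method are omitted.

```python
# One rigid ICP iteration: nearest-neighbor matching + closed-form
# (Kabsch) alignment of the template onto the matched scan points.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(template_pts, scan_pts):
    # Nearest-neighbor correspondences from template to scan.
    _, idx = cKDTree(scan_pts).query(template_pts)
    matched = scan_pts[idx]
    # Closed-form rigid alignment (SVD of the cross-covariance).
    mu_t, mu_m = template_pts.mean(0), matched.mean(0)
    H = (template_pts - mu_t).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T              # rotation, reflection-safe
    t = mu_m - R @ mu_t
    return template_pts @ R.T + t   # template moved toward the scan

# Toy usage: a slightly shifted point cloud is pulled back onto the scan.
rng = np.random.default_rng(0)
scan = rng.normal(size=(200, 3))
aligned = icp_step(scan + 0.05, scan)
```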

Relevance: 20.00%

Abstract:

This paper introduces a database of freely available stereo-3D content designed to facilitate research in stereo post-production. It describes the structure and content of the database and provides details about how the material was gathered. The database includes examples of many of the scenarios characteristic of broadcast footage. Material was gathered at different locations, including a studio with controlled lighting and both indoor and outdoor on-location sites with more restricted lighting control. The database also includes video sequences with accompanying 3D audio data recorded in an Ambisonics format. An intended consequence of gathering the material is that the database contains examples of degradations commonly present in real-world scenarios. This paper describes one such artefact, caused by uneven exposure across the stereo views, which results in saturation in the over-exposed view. An algorithm for restoring this artefact is proposed in order to highlight the usefulness of the database.
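The paper's restoration algorithm is not reproduced here. Purely as an illustration of the problem, the sketch below detects clipped pixels in the over-exposed view and fills them with exposure-matched values from the other view, ignoring the stereo disparity a real method would have to compensate for.

```python
# Naive saturation repair: fit a global exposure match on unsaturated
# pixels, then fill clipped pixels from the better-exposed view.
import numpy as np

def repair_saturation(over, under, thresh=0.98):
    # over, under: float images in [0, 1], assumed roughly aligned.
    mask = over >= thresh                      # clipped pixels
    ok = ~mask
    # Linear exposure match fitted on unsaturated pixels: over ~ a*under + b.
    a, b = np.polyfit(under[ok], over[ok], 1)
    repaired = over.copy()
    repaired[mask] = np.clip(a * under[mask] + b, 0.0, 1.0)
    return repaired
```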

Relevance: 20.00%

Abstract:

Several strategies relying on kriging have recently been proposed for adaptively estimating contour lines and excursion sets of functions under a severely limited evaluation budget. The recently released R package KrigInv 3 is presented; it offers a sound implementation of various sampling criteria for these kinds of inverse problems. KrigInv is based on the DiceKriging package and thus benefits from a number of options concerning the underlying kriging models. Six implemented sampling criteria are detailed in a tutorial and illustrated with graphical examples, and the different functionalities of KrigInv are explained step by step. Additionally, two recently proposed criteria for batch-sequential inversion are presented, enabling advanced users to distribute function evaluations in parallel on clusters or clouds of machines. Finally, auxiliary problems are discussed, including the fine-tuning of the numerical integration and optimization procedures used to compute and optimize the considered criteria.
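KrigInv itself is an R package; as a language-independent illustration of the simplest criterion family it implements, the Python sketch below samples where the kriging model is most likely to misclassify a point with respect to the excursion set {x : f(x) >= T}. The threshold, candidate grid, and toy function are assumptions for illustration.

```python
# Pointwise misclassification-probability criterion on a candidate grid.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

T = 0.5                                         # excursion threshold
f = lambda x: np.sin(3 * x[:, 0]) + 0.5         # expensive function (toy)

X = np.array([[0.1], [0.5], [0.9]])             # initial design
gp = GaussianProcessRegressor().fit(X, f(X))    # kriging surrogate

cand = np.linspace(0, 1, 201).reshape(-1, 1)    # candidate points
m, s = gp.predict(cand, return_std=True)
p = norm.cdf((m - T) / np.maximum(s, 1e-12))    # P(f(x) >= T | data)
crit = np.minimum(p, 1 - p)                     # misclassification prob.
x_next = cand[np.argmax(crit)]                  # next evaluation point
print(x_next)
```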