906 results for Multiple-scale processing


Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a straightforward method to asymptotically solve a variety of initial and boundary value problems for singularly perturbed ordinary differential equations whose solution structure can be anticipated. The approach is simpler than conventional methods, including those based on asymptotic matching or on eliminating secular terms. © 2010 by the Massachusetts Institute of Technology.
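
As an illustration of the class of problems addressed (a standard textbook example, not one taken from the paper), consider the boundary-layer problem

    \[
    \varepsilon y'' + y' + y = 0, \qquad y(0) = 0, \quad y(1) = 1, \qquad 0 < \varepsilon \ll 1,
    \]

whose solution structure can be anticipated as a slowly varying outer solution plus a boundary layer of width \(\varepsilon\) at \(x = 0\); the leading-order uniformly valid approximation is

    \[
    y(x) \sim e^{1 - x} - e^{1 - x/\varepsilon}, \qquad \varepsilon \to 0 .
    \]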

Relevance:

100.00%

Publisher:

Abstract:

A neural model is presented of how cortical areas V1, V2, and V4 interact to convert a textured 2D image into a representation of curved 3D shape. Two basic problems are solved to achieve this: (1) Patterns of spatially discrete 2D texture elements are transformed into a spatially smooth surface representation of 3D shape. (2) Changes in the statistical properties of texture elements across space induce the perceived 3D shape of this surface representation. This is achieved in the model through multiple-scale filtering of a 2D image, followed by a cooperative-competitive grouping network that coherently binds texture elements into boundary webs at the appropriate depths using a scale-to-depth map and a subsequent depth competition stage. These boundary webs then gate filling-in of surface lightness signals in order to form a smooth 3D surface percept. The model quantitatively simulates challenging psychophysical data about perception of prolate ellipsoids (Todd and Akerstrom, 1987, J. Exp. Psych., 13, 242). In particular, the model represents a high degree of 3D curvature for a certain class of images, all of whose texture elements have the same degree of optical compression, in accordance with percepts of human observers. Simulations of 3D percepts of an elliptical cylinder, a slanted plane, and a photo of a golf ball are also presented.
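
A minimal Python sketch of what a multiple-scale filtering front end could look like; the difference-of-Gaussians filters and scale values are illustrative assumptions, and the model's grouping network, scale-to-depth map, depth competition, and surface filling-in stages are not reproduced here.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_responses(image, sigmas=(1, 2, 4, 8)):
        # Difference-of-Gaussians responses at several spatial scales;
        # coarser scales respond to larger texture elements. The sigma
        # values and the 1.6 surround ratio are illustrative choices.
        responses = []
        for s in sigmas:
            dog = gaussian_filter(image, s) - gaussian_filter(image, 1.6 * s)
            responses.append(dog)
        return np.stack(responses)   # shape: (n_scales, H, W)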

Relevance:

100.00%

Publisher:

Abstract:

Difficulty understanding speech in the presence of background noise is a common report among cochlear implant recipients. The purpose of this research is to evaluate speech processing options currently available in the Cochlear Nucleus 5 sound processor to determine the best option for improving speech recognition in noise.

Relevance:

100.00%

Publisher:

Abstract:

How do humans rapidly recognize a scene? How can neural models capture this biological competence to achieve state-of-the-art scene classification? The ARTSCENE neural system classifies natural scene photographs by using multiple spatial scales to efficiently accumulate evidence for gist and texture. ARTSCENE embodies a coarse-to-fine Texture Size Ranking Principle whereby spatial attention processes multiple scales of scenic information, ranging from global gist to local properties of textures. The model can incrementally learn and predict scene identity by gist information alone and can improve performance through selective attention to scenic textures of progressively smaller size. ARTSCENE discriminates 4 landscape scene categories (coast, forest, mountain and countryside) with up to 91.58% correct on a test set, outperforms alternative models in the literature which use biologically implausible computations, and outperforms component systems that use either gist or texture information alone. Model simulations also show that adjacent textures form higher-order features that are also informative for scene recognition.
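
A schematic sketch of coarse-to-fine evidence accumulation across texture scales, using a simple nearest-mean rule in place of ARTSCENE's ART-based learning; all names and inputs below are hypothetical.

    import numpy as np

    def coarse_to_fine_predict(feature_maps, class_means):
        # feature_maps: feature vectors ordered from global gist to
        # progressively smaller texture scales (hypothetical inputs).
        # class_means: {scene_class: [per-scale mean vector]}, learned earlier.
        evidence = {c: 0.0 for c in class_means}
        running = []
        for k, f in enumerate(feature_maps):
            for c, means in class_means.items():
                evidence[c] -= np.linalg.norm(f - means[k])  # similarity as negative distance
            running.append(max(evidence, key=evidence.get))  # prediction available after every scale
        return running[-1], running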

Relevance:

90.00%

Publisher:

Abstract:

The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables. Thus pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely. This can be difficult to achieve in complex applications. In this paper we propose to take advantage of multiple central processing units (CPUs) that are readily available on most standard desktop computers. Here the likelihood is estimated independently on the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the multiple CPUs. The estimate remains unbiased, but the variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this idea over the standard approach is demonstrated on simulated data from a stochastic volatility model.
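
A minimal sketch of the averaging idea, assuming a toy latent-variable model (data_i | z_i ~ N(z_i, 1), z_i ~ N(theta, 1)) and a simple importance-sampling likelihood estimator rather than the paper's stochastic volatility model or either of the two technologies it compares.

    import numpy as np
    from multiprocessing import Pool

    def likelihood_estimate(args):
        # One unbiased importance-sampling estimate of p(data | theta) for the
        # toy model above, computed on a single CPU.
        theta, data, n_particles, seed = args
        rng = np.random.default_rng(seed)
        z = rng.normal(theta, 1.0, size=(n_particles, len(data)))   # latent draws from the prior
        w = np.exp(-0.5 * (data - z) ** 2).prod(axis=1)             # per-particle weights
        return w.mean() / (2 * np.pi) ** (len(data) / 2)

    def averaged_likelihood(theta, data, n_particles=200, n_cpus=4):
        # Independent estimates on several CPUs, then averaged: the result
        # stays unbiased while its variance is reduced by roughly 1/n_cpus.
        jobs = [(theta, np.asarray(data), n_particles, seed) for seed in range(n_cpus)]
        with Pool(n_cpus) as pool:
            estimates = pool.map(likelihood_estimate, jobs)
        return float(np.mean(estimates))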

Relevance:

90.00%

Publisher:

Abstract:

In this paper we introduce a new technique to obtain the slow-motion dynamics in nonequilibrium and singularly perturbed problems characterized by multiple scales. Our method is based on a straightforward asymptotic reduction of the order of the governing differential equation and leads to amplitude equations that describe the slowly varying envelope of a uniformly valid asymptotic expansion. This may constitute a simpler and, in certain cases, more general approach to the derivation of asymptotic expansions, compared to other mainstream methods such as the method of Multiple Scales or Matched Asymptotic Expansions, owing to its relation to the Renormalization Group. We illustrate our method with a number of singularly perturbed problems for ordinary and partial differential equations and recover certain results from the literature as special cases. © 2010 - IOS Press and the authors. All rights reserved.
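
A standard textbook example of the slow-envelope reduction meant here (not taken from the paper): for the weakly damped oscillator

    \[
    \ddot{u} + \varepsilon \dot{u} + u = 0, \qquad 0 < \varepsilon \ll 1,
    \]

a naive expansion produces the secular term \(-\tfrac{1}{2}\varepsilon A t \cos(t+\phi)\), whereas writing \(u \approx A(T)\cos(t+\phi)\) with the slow time \(T = \varepsilon t\) yields the amplitude equation

    \[
    \frac{dA}{dT} = -\frac{A}{2}, \qquad\text{so}\qquad u \sim A_0\, e^{-\varepsilon t/2}\cos(t+\phi),
    \]

which is uniformly valid for times of order \(1/\varepsilon\).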

Relevance:

90.00%

Publisher:

Abstract:

The primary visual cortex employs simple, complex and end-stopped cells to create a scale space of 1D singularities (lines and edges) and of 2D singularities (line and edge junctions and crossings, called keypoints). In this paper we present first results of a biological model that attributes local image structure information to keypoints at all scales, i.e. junction type (L, T, +) and the main line/edge orientations. Keypoint annotation in combination with coarse-to-fine scale processing facilitates various processes, such as image matching (stereo and optical flow), object segregation and object tracking.
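
A toy sketch of one piece of such keypoint annotation, labelling a junction from the directions of the segments that meet at a keypoint; the counting rule and input format are illustrative and not part of the biological model.

    import numpy as np

    def junction_type(directions, tol=np.pi / 8):
        # directions: angles (radians, 0..2*pi) of the line/edge segments
        # radiating from a keypoint, as delivered by earlier multi-scale
        # filtering (hypothetical input format). Two distinct segments -> L,
        # three -> T, four -> + (crossing).
        distinct = []
        for a in np.mod(directions, 2 * np.pi):
            if all(min(abs(a - b), 2 * np.pi - abs(a - b)) > tol for b in distinct):
                distinct.append(a)
        return {2: "L", 3: "T", 4: "+"}.get(len(distinct), "other")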

Relevance:

90.00%

Publisher:

Abstract:

A photochemical trajectory model has been used to simulate the chemical evolution of air masses arriving at the TORCH field campaign site in the southern UK during late July and August 2003, a period which included a widespread and prolonged photochemical pollution episode. The model incorporates speciated emissions of 124 nonmethane anthropogenic VOC and three representative biogenic VOC, coupled with a comprehensive description of the chemistry of their degradation. A representation of the gas/aerosol absorptive partitioning of ca. 2000 oxygenated organic species generated in the Master Chemical Mechanism (MCM v3.1) has been implemented, allowing simulation of the contribution to organic aerosol (OA) made by semi- and non-volatile products of VOC oxidation; emissions of primary organic aerosol (POA) and elemental carbon (EC) are also represented. Simulations of total OA mass concentrations in nine case study events (optimised by comparison with observed hourly-mean mass loadings derived from aerosol mass spectrometry measurements) imply that the OA can be ascribed to three general sources: (i) POA emissions; (ii) a "ubiquitous" background concentration of 0.7 μg m⁻³; and (iii) gas-to-aerosol transfer of lower volatility products of VOC oxidation generated by the regional scale processing of emitted VOC, but with all partitioning coefficients increased by a species-independent factor of 500. The requirement to scale the partitioning coefficients, and the implied background concentration, are both indicative of the occurrence of chemical processes within the aerosol which allow the oxidised organic species to react by association and/or accretion reactions which generate even lower volatility products, leading to a persistent, non-volatile secondary organic aerosol (SOA). The contribution of secondary organic material to the simulated OA results in significant elevations in the simulated ratio of organic carbon (OC) to EC, compared with the ratio of 1.1 assigned to the emitted components. For the selected case study events, [OC]/[EC] is calculated to lie in the range 2.7-9.8, values which are comparable with the high end of the range reported in the literature.
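
A schematic of the kind of equilibrium absorptive-partitioning calculation involved (Pankow-type partitioning, iterated to a self-consistent absorbing organic mass); the function and its inputs are illustrative and not the MCM v3.1 implementation.

    import numpy as np

    def partition(c_total, k_p, background=0.7, tol=1e-9):
        # c_total:    total (gas + aerosol) concentration per species, ug m-3
        # k_p:        absorptive partitioning coefficients, m3 ug-1 (any
        #             species-independent scaling factor applied beforehand)
        # background: non-volatile background OA, ug m-3
        m_o = background + 0.5 * float(c_total.sum())   # initial guess of absorbing mass
        while True:
            f_aer = c_total * k_p * m_o / (1.0 + k_p * m_o)   # aerosol-phase mass per species
            m_o_new = background + float(f_aer.sum())
            if abs(m_o_new - m_o) < tol:
                return f_aer, m_o_new
            m_o = m_o_new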

Relevance:

90.00%

Publisher:

Abstract:

A theoretical framework for the joint conservation of energy and momentum in the parameterization of subgrid-scale processes in climate models is presented. The framework couples a hydrostatic resolved (planetary scale) flow to a nonhydrostatic subgrid-scale (mesoscale) flow. The temporal and horizontal spatial scale separation between the planetary scale and mesoscale is imposed using multiple-scale asymptotics. Energy and momentum are exchanged through subgrid-scale flux convergences of heat, pressure, and momentum. The generation and dissipation of subgrid-scale energy and momentum is understood using wave-activity conservation laws that are derived by exploiting the (mesoscale) temporal and horizontal spatial homogeneities in the planetary-scale flow. The relations between these conservation laws and the planetary-scale dynamics represent generalized nonacceleration theorems. A derived relationship between the wave-activity fluxes, which represents a generalization of the second Eliassen-Palm theorem, is key to ensuring consistency between energy and momentum conservation. The framework includes a consistent formulation of heating and entropy production due to kinetic energy dissipation.
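
For orientation (standard notation, not the paper's specific derivation), the wave-activity conservation laws referred to here take the generic form

    \[
    \frac{\partial A}{\partial t} + \nabla \cdot \mathbf{F} = D,
    \]

where \(A\) is a wave-activity density, \(\mathbf{F}\) its flux, and \(D\) the nonconservative source or sink; a nonacceleration theorem then states that for steady, conservative (\(D = 0\)) subgrid-scale waves the flux convergence vanishes and no net momentum is deposited in the resolved flow.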

Relevance:

90.00%

Publisher:

Abstract:

By considering the long-wavelength limit of the regularized long wave (RLW) equation, we study its multiple-time higher-order evolution equations. As a first result, the equations of the Korteweg-de Vries hierarchy are shown to play a crucial role in providing a secularity-free perturbation theory in the specific case of a solitary-wave solution. Then, as a consequence, we show that the related perturbative series can be summed and gives exactly the solitary-wave solution of the RLW equation. Finally, some comments and considerations are made on the N-soliton solution, as well as on the limitations of applicability of the multiple-scale method in obtaining uniform perturbative series.
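
For reference (standard results in one common normalization, not derived here), the RLW (BBM) equation and its solitary-wave solution read

    \[
    u_t + u_x + u\,u_x - u_{xxt} = 0, \qquad
    u(x,t) = 3(v-1)\,\operatorname{sech}^2\!\left[\tfrac{1}{2}\sqrt{\tfrac{v-1}{v}}\,(x - vt)\right], \quad v > 1,
    \]

and the multiple-time analysis in the abstract perturbs around the long-wavelength, small-amplitude limit of this equation.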

Relevance:

90.00%

Publisher:

Abstract:

This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had been considered unfeasible before. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but also related GPU-based work. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to this acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
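
A minimal NumPy sketch of the blocked triangular-inversion idea that makes a dual-GPU split natural (the two diagonal blocks are independent sub-problems); it stands in for, and is not, the thesis's GPU kernels or its compressed storage scheme.

    import numpy as np

    def tri_inv(L):
        # Recursive blocked inversion of a lower-triangular matrix L
        # (square, nonzero diagonal). The two diagonal blocks can be
        # inverted independently, which is what makes a dual-GPU split
        # natural; plain NumPy stands in for the GPU kernels here.
        n = L.shape[0]
        if n == 1:
            return np.array([[1.0 / L[0, 0]]])
        m = n // 2
        A, B, C = L[:m, :m], L[m:, :m], L[m:, m:]
        A_inv = tri_inv(A)        # independent sub-problem (e.g. GPU 0)
        C_inv = tri_inv(C)        # independent sub-problem (e.g. GPU 1)
        off = -C_inv @ B @ A_inv  # coupling block
        top = np.hstack([A_inv, np.zeros((m, n - m))])
        bottom = np.hstack([off, C_inv])
        return np.vstack([top, bottom])

For a well-conditioned lower-triangular L, tri_inv(L) @ L recovers the identity to machine precision.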

Relevance:

90.00%

Publisher:

Abstract:

Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log-log) slopes from -2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels.
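
A small sketch of how test images with a prescribed amplitude-spectrum slope can be constructed; this is an illustrative reconstruction, not the authors' stimulus code, and the function name is hypothetical.

    import numpy as np

    def noise_with_slope(size, slope, seed=None):
        # 2-D Gaussian noise whose amplitude spectrum falls as f**slope:
        # slope = -1 gives a natural 1/f ("focused") spectrum, -2 looks
        # blurred, 0 looks sharpened. Illustrative stimulus sketch only.
        rng = np.random.default_rng(seed)
        white = rng.standard_normal((size, size))
        fy = np.fft.fftfreq(size)[:, None]
        fx = np.fft.fftfreq(size)[None, :]
        f = np.hypot(fx, fy)
        f[0, 0] = 1.0                       # avoid 0**negative at the DC term
        shaped = np.fft.ifft2(np.fft.fft2(white) * f ** slope).real
        return (shaped - shaped.mean()) / shaped.std()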