24 results for multi-scale modelling
in Aston University Research Archive
Abstract:
A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
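The pipeline described here (differentiate, half-wave rectify, differentiate twice more, then find peaks across space and scale) is concrete enough to sketch. Below is a minimal 1-D illustration in Python/SciPy; the scale-normalisation exponent, the reuse of one sigma across both filtering stages, and all stimulus parameters are simplifying assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of a multi-scale Gaussian-derivative edge coder:
# 1st derivative -> half-wave rectify -> two more derivatives, with the
# inverted, scale-normalised 3rd-derivative response peaking at the edge.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def edge_scale_space(luminance, scales):
    """Return a (scale, space) response map; its peak marks edge position/scale."""
    rows = []
    for sigma in scales:
        d1 = gaussian_filter1d(luminance, sigma, order=1)  # 1st derivative
        d1 = np.maximum(d1, 0.0)                           # half-wave rectification
        d3 = gaussian_filter1d(d1, sigma, order=2)         # differentiate twice more
        rows.append(-d3 * sigma**1.5)                      # invert + normalise (assumed exponent)
    return np.array(rows)

x = np.arange(256.0)
edge = 0.5 * (1 + erf((x - 128) / (4 * np.sqrt(2))))       # cumulative-Gaussian edge, blur 4 px
scales = np.arange(1, 17)
R = edge_scale_space(edge, scales)
s_idx, pos = np.unravel_index(np.argmax(R), R.shape)
print(f"edge found at x={pos}, peak at filter scale sigma={scales[s_idx]}")
```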
Abstract:
Perception of Mach bands may be explained by spatial filtering ('lateral inhibition') that can be approximated by 2nd derivative computation, and several alternative models have been proposed. To distinguish between them, we used a novel set of ‘generalised Gaussian’ images, in which the sharp ramp-plateau junction of the Mach ramp was replaced by smoother transitions. The images ranged from a slightly blurred Mach ramp to a Gaussian edge and beyond, and also included a sine-wave edge. The probability of seeing Mach bands increased with the (relative) sharpness of the junction, but was largely independent of absolute spatial scale. These data fitted neither the predictions of MIRAGE nor those of 2nd derivative computation at a single fine scale. In experiment 2, observers used a cursor to mark features on the same set of images. Data on the perceived position of Mach bands did not support the local energy model. The perceived width of Mach bands was poorly explained by a single-scale edge detection model, despite its previous success with Mach edges (Wallis & Georgeson, 2009, Vision Research, 49, 1886-1893). A more successful model used separate (odd and even) scale-space filtering for edges and bars, local peak detection to find candidate features, and the MAX operator to compare odd- and even-filter response maps (Georgeson, VSS 2006, Journal of Vision 6(6), 191a). Mach bands are seen when there is a local peak in the even-filter (bar) response map, AND that peak value exceeds the corresponding responses in the odd-filter (edge) maps.
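The winning model's final step, comparing even (bar) and odd (edge) response maps with a MAX rule, can be caricatured at a single scale. In this hedged sketch the Gaussian 1st/2nd derivatives stand in for the odd and even filters, and the even/odd calibration gain `k` is a free assumption, not a fitted model parameter:

```python
# Toy single-scale odd/even comparison: flag Mach-band candidates where the
# even (bar) response peaks locally AND exceeds the odd (edge) response.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mach_band_candidates(luminance, sigma=4.0, k=2.0):
    odd = np.abs(gaussian_filter1d(luminance, sigma, order=1)) * sigma          # edge map
    even = np.abs(gaussian_filter1d(luminance, sigma, order=2)) * sigma**2 * k  # bar map
    mid = even[1:-1]
    peaks = (mid > even[:-2]) & (mid > even[2:])     # local peaks in the bar map
    wins = mid > odd[1:-1]                           # MAX rule against the edge map
    return np.where(peaks & wins)[0] + 1

# Mach-ramp-like luminance profile: plateau, linear ramp, plateau.
x = np.linspace(0, 1, 512)
ramp = np.clip((x - 0.3) / 0.4, 0, 1)
print(mach_band_candidates(ramp))   # candidates near the ramp's two corners
```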
Abstract:
Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log-log) slopes from -2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels.
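The stimulus manipulation used here, reshaping an image's amplitude spectrum to a chosen log-log slope, is easy to sketch. In this minimal version the target spectrum is imposed on white noise; grid size and normalisation are arbitrary choices:

```python
# Generate 2-D noise whose amplitude spectrum falls off as f**slope:
# slope -2 ~ strongly blurred, -1 ~ natural (1/f), 0 ~ white/sharpened.
import numpy as np

def noise_with_slope(n=256, slope=-1.0, rng=np.random.default_rng(0)):
    white = rng.standard_normal((n, n))
    f = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n)))
    f[0, 0] = 1.0                             # avoid divide-by-zero at DC
    spec = np.fft.fft2(white) * f ** slope    # impose the target amplitude spectrum
    img = np.real(np.fft.ifft2(spec))
    return (img - img.mean()) / img.std()     # normalise mean and contrast

blurred, focused, sharpened = (noise_with_slope(slope=s) for s in (-2.0, -1.0, 0.0))
```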
Abstract:
Forests play a pivotal role in timber production, maintenance and development of biodiversity and in carbon sequestration and storage in the context of the Kyoto Protocol. Policy makers and forest experts therefore require reliable information on forest extent, type and change for management, planning and modelling purposes. It is becoming increasingly clear that such forest information is frequently inconsistent and unharmonised between countries and continents. This research paper presents a forest information portal that has been developed in line with the GEOSS and INSPIRE frameworks. The web portal provides access to forest resources data at a variety of spatial scales, from global through to regional and local, as well as providing analytical capabilities for monitoring and validating forest change. The system also allows for the utilisation of forest data and processing services within other thematic areas. The web portal has been developed using open standards to facilitate accessibility, interoperability and data transfer.
Abstract:
The absence of a definitive approach to the design of manufacturing systems signifies the importance of a control mechanism to ensure the timely application of relevant design techniques. To provide effective control, design development needs to be continually assessed in relation to the required system performance, which can only be achieved analytically through computer simulation: the only technique capable of accurately replicating the highly complex and dynamic interrelationships inherent within manufacturing facilities and realistically predicting system behaviour. Owing to these unique capabilities, the application of computer simulation should support and encourage a thorough investigation of all alternative designs, allowing attention to focus specifically on critical design areas and enabling continuous assessment of system evolution. To achieve this, system analysis needs to be efficient in terms of data requirements and both speed and accuracy of evaluation. To provide an effective control mechanism, a hierarchical or multi-level modelling procedure has therefore been developed, specifying the appropriate degree of evaluation support necessary at each phase of design; an underlying assumption of the proposal is that evaluation is quick, easy and allows models to expand in line with design developments. However, current approaches to computer simulation are wholly inappropriate for supporting such hierarchical evaluation. Implementation of computer simulation through traditional approaches is typically characterized by a requirement for very specialist expertise, a lengthy model development phase, and a correspondingly high expenditure, resulting in very little, and rather inappropriate, use of the technique. When simulation is used, it is generally only applied to check or verify a final design proposal; rarely is its full potential utilized to aid, support or complement the manufacturing system design procedure. To implement the proposed modelling procedure, the concept of a generic simulator was therefore adopted, as such systems require no specialist expertise, instead facilitating quick and easy model creation, execution and modification through simple data inputs. Generic simulators have previously tended to be too restricted, lacking the flexibility to be generally applicable to manufacturing systems. Development of the ATOMS manufacturing simulator, however, has proven that such systems can be relevant to a wide range of applications, besides verifying the benefits of multi-level modelling.
Abstract:
T-cell activation requires interaction of T-cell receptors (TCR) with peptide epitopes bound by major histocompatibility complex (MHC) proteins. This interaction occurs at a special cell-cell junction known as the immune or immunological synapse. Fluorescence microscopy has shown that the interplay among one agonist peptide-MHC (pMHC), one TCR and one CD4 provides the minimum complexity needed to trigger transient calcium signalling. We describe a computational approach to the study of the immune synapse. Using molecular dynamics simulation, we report here on a study of the smallest viable model, a TCR-pMHC-CD4 complex in a membrane environment. The computed structural and thermodynamic properties are in fair agreement with experiment. A number of biomolecules participate in the formation of the immunological synapse. Multi-scale molecular dynamics simulations may be the best opportunity we have to reach a full understanding of this remarkable supra-macromolecular event at a cell-cell junction.
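As a rough orientation for readers unfamiliar with such simulations, a generic molecular dynamics skeleton is sketched below using OpenMM. This is emphatically not the authors' protocol: the input file name is a placeholder, and a membrane-embedded TCR-pMHC-CD4 system requires lipid, solvent and ion setup far beyond this sketch.

```python
# Generic OpenMM MD skeleton (placeholder input; illustrative settings only).
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from openmm import LangevinMiddleIntegrator
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

pdb = PDBFile('tcr_pmhc_cd4_membrane.pdb')               # hypothetical input file
ff = ForceField('amber14-all.xml', 'amber14/tip3p.xml')  # assumed force-field choice
system = ff.createSystem(pdb.topology, nonbondedMethod=PME,
                         nonbondedCutoff=1.0 * nanometer, constraints=HBonds)
integrator = LangevinMiddleIntegrator(310 * kelvin, 1 / picosecond,
                                      0.002 * picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()                                     # relax clashes first
sim.step(500_000)                                        # ~1 ns at a 2 fs time step
```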
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
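The key model change described here, swapping the half-wave rectifier for a two-parameter smoothed threshold, can be illustrated with a stand-in transducer. The exact functional form fitted by May & Georgeson is not reproduced; this softplus-style curve (threshold t, smoothness s) merely shows the qualitative behaviour of suppressing shallow gradients smoothly:

```python
# Half-wave rectifier vs. an illustrative two-parameter smoothed threshold.
import numpy as np

def half_wave(g):
    return np.maximum(g, 0.0)

def smoothed_threshold(g, t=0.05, s=0.02):
    # ~0 well below threshold t, ~(g - t) well above; s controls the blend.
    return s * np.log1p(np.exp((g - t) / s))

g = np.linspace(-0.1, 0.3, 9)
print(np.round(half_wave(g), 3))
print(np.round(smoothed_threshold(g), 3))
```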
Abstract:
Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (first-derivative) filter, or as zero-crossings (ZCs) in the second-derivative. A variety of multi-scale models are based on this idea. We tested this approach by devising a stimulus that has no local peaks of gradient and no ZCs, at any scale. Our stimulus profile is analogous to the classic Mach-band stimulus, but it is the local luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux. The luminance profile is a smoothed triangle wave and is obtained by integrating the gradient profile. Subjects used a cursor to mark the position and polarity of perceived edges. For all the ramp-widths tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These new Mach edges correspond to peaks and troughs in the third-derivative. They are analogous to Mach bands - light and dark bars are seen where there are no luminance peaks but there are peaks in the second derivative. Here, peaks in the third derivative were seen as light-to-dark edges, troughs as dark-to-light edges. Thus Mach edges are inconsistent with many standard edge detectors, but are nicely predicted by a new model that uses a (nonlinear) third-derivative operator to find edge points.
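The stimulus construction and the proposed third-derivative rule are simple to sketch: build a gradient profile that ramps linearly between two plateaux, integrate it to obtain the luminance profile, and locate extrema of a Gaussian-regularised third derivative. Plateau levels, ramp width and smoothing scale below are arbitrary:

```python
# Mach-edge sketch: gradient ramps between plateaux; edges are predicted at
# peaks (light-to-dark) and troughs (dark-to-light) of the 3rd derivative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(1024)
gradient = np.interp(x, [0, 400, 624, 1023], [0.2, 0.2, 1.0, 1.0])  # ramp between plateaux
luminance = np.cumsum(gradient)                                     # integrate to luminance
d3 = gaussian_filter1d(luminance, sigma=8, order=3)                 # regularised 3rd derivative
mid = d3[1:-1]
big = np.abs(mid) > 1e-6 * np.abs(d3).max()                         # ignore numerical ripple
peaks = np.where(big & (mid > d3[:-2]) & (mid > d3[2:]))[0] + 1
troughs = np.where(big & (mid < d3[:-2]) & (mid < d3[2:]))[0] + 1
print("predicted edges near the gradient corners:", peaks, troughs)
```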
Abstract:
For the first time, stochastic calculations for a model of a real-world forward-pumped fibre Raman amplifier with randomly varying birefringence have been performed numerically, based on the Kloeden-Platen-Schurz algorithm. The results obtained for the averaged gain and gain fluctuations as a function of the polarization mode dispersion (PMD) parameter agree quantitatively with the results of a previously developed analytical model. At the same time, the direct numerical simulations demonstrate increased stochastisation (a maximum in the averaged gain variation) within the PMD-parameter region of 0.1-0.3 ps/km^(1/2). The results give insight into the margins of applicability of the generic multi-scale technique widely used to derive the coupled Manakov equations, and they allow the analytical model to be generalised to account for pump depletion, group-delay dispersion and Kerr nonlinearity, which is of great interest for the development of high-transmission-rate optical networks.
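For orientation, the flavour of such stochastic simulations can be conveyed by a drastically simplified toy: a birefringence angle driven by a Wiener process (stepped here with Euler-Maruyama, the simplest scheme in the Kloeden-Platen family) modulating a polarisation-dependent gain. All parameter values are invented for illustration, and this is not the paper's Kloeden-Platen-Schurz implementation:

```python
# Toy Monte-Carlo of polarisation-dependent Raman gain under random
# birefringence; reports mean and spread of the accumulated (log) gain.
import numpy as np

rng = np.random.default_rng(1)
L, steps, trials = 10.0, 500, 200   # fibre length (km), z-grid, realisations (assumed)
dz = L / steps
g_par, g_perp = 0.5, 0.1            # aligned / orthogonal gain, 1/km (assumed)
sigma = 2.0                         # birefringence diffusion strength (assumed)

gains = []
for _ in range(trials):
    theta, logG = 0.0, 0.0
    for _ in range(steps):
        theta += sigma * np.sqrt(dz) * rng.standard_normal()  # Euler-Maruyama step
        logG += (g_par * np.cos(theta)**2 + g_perp * np.sin(theta)**2) * dz
    gains.append(logG)

print(f"mean log-gain {np.mean(gains):.3f}, std {np.std(gains):.3f}")
```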
Abstract:
For a fibre Raman amplifier with randomly varying birefringence, we provide insight into the validity of previously explored multi-scale techniques leading to polarisation pulling of the signal state of polarisation towards the pump state of polarisation. Unlike previous studies, we demonstrate that, in addition to polarisation pulling, a new random-birefringence-mediated phenomenon that goes beyond existing multi-scale techniques can boost resonance-like gain fluctuations similar to the Stochastic Anti-Resonance. For mode-locked fibre lasers, we report on fast and slow polarisation dynamics of fundamental, bound-state, and multipulsing vector solitons, along with stretched pulses. We demonstrate that tuning cavity anisotropy and birefringence, along with the parameters of an injected signal with a randomly varying state of polarisation, provides access to a variety of previously unexplored vector waveforms.
Abstract:
Hospitals can experience difficulty in detecting and responding to early signs of patient deterioration, leading to late intensive care referrals, excess mortality and morbidity, and increased hospital costs. Our study aims to explore potential indicators of physiological deterioration by the analysis of vital signs. The dataset used comprises heart rate (HR) measurements from the MIMIC II waveform database, taken from six patients admitted to the Intensive Care Unit (ICU) and diagnosed with severe sepsis. Different indicators were considered: 1) generic early warning indicators used in ecosystems analysis (autocorrelation at lag 1 (ACF1), standard deviation (SD), skewness, kurtosis and heteroskedasticity) and 2) entropy analysis (kernel entropy and multi-scale entropy). Our preliminary findings suggest that when a critical transition is approaching, the equilibrium state changes; this is visible in the ACF1 and SD values, and also in the entropy analysis. Entropy makes it possible to characterize the complexity of the time series during the hospital stay and can be used as an indicator of regime shifts in a patient’s condition. One of the main problems is its dependence on the scale used. Our results demonstrate that different entropy scales should be used depending on the level of entropy observed.
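The windowed early-warning statistics named above are straightforward to compute; a minimal sketch for the lag-1 autocorrelation and standard deviation over a sliding window follows, with a synthetic heart-rate trace standing in for the MIMIC II data and the window length chosen arbitrarily:

```python
# Sliding-window ACF1 and SD, two of the generic early-warning indicators.
import numpy as np

def acf1(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)      # lag-1 autocorrelation

def sliding_indicators(hr, win=120):
    starts = range(0, len(hr) - win + 1, win // 2)   # 50% window overlap
    return [(i, acf1(hr[i:i + win]), hr[i:i + win].std()) for i in starts]

rng = np.random.default_rng(7)
hr = 80 + 0.3 * np.cumsum(rng.standard_normal(1200))  # toy HR trace (bpm)
for i, a, s in sliding_indicators(hr)[:5]:
    print(f"t={i:4d}  ACF1={a:+.3f}  SD={s:.2f}")
```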
Abstract:
A conventional neural network approach to regression problems approximates the conditional mean of the output vector. For mappings which are multi-valued this approach breaks down, since the average of two solutions is not necessarily a valid solution. In this article mixture density networks, a principled method to model conditional probability density functions, are applied to retrieving Cartesian wind vector components from satellite scatterometer data. A hybrid mixture density network is implemented to incorporate prior knowledge of the predominantly bimodal function branches. An advantage of a fully probabilistic model is that more sophisticated and principled methods can be used to resolve ambiguities.
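A minimal mixture density network can be sketched in a few lines; here in PyTorch, with the network emitting mixture weights, means and spherical spreads of K Gaussians and trained by negative log-likelihood. Layer sizes, K and the random stand-in data are arbitrary, and the hybrid prior-knowledge variant described above is not reproduced:

```python
# Minimal MDN: p(y|x) = sum_k pi_k(x) * N(y; mu_k(x), sigma_k(x)^2 I).
import math
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, d_in=2, d_out=2, K=4, hidden=32):
        super().__init__()
        self.K, self.d_out = K, d_out
        self.body = nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, K)            # mixing-coefficient logits
        self.mu = nn.Linear(hidden, K * d_out)    # component means
        self.log_sigma = nn.Linear(hidden, K)     # log spherical spreads

    def loss(self, x, y):
        h = self.body(x)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)        # (B, K)
        mu = self.mu(h).view(-1, self.K, self.d_out)          # (B, K, d)
        sigma = self.log_sigma(h).exp().unsqueeze(-1)         # (B, K, 1)
        sq = (((y.unsqueeze(1) - mu) / sigma) ** 2).sum(-1)   # (B, K)
        log_norm = self.d_out * (sigma.squeeze(-1).log() + 0.5 * math.log(2 * math.pi))
        return -torch.logsumexp(log_pi - 0.5 * sq - log_norm, dim=-1).mean()

net = MDN()
x, y = torch.randn(64, 2), torch.randn(64, 2)  # stand-ins for scatterometer data / wind vectors
print(net.loss(x, y).item())                   # minimise this NLL with any optimiser
```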
Abstract:
A technique is presented for the development of a high-precision, high-resolution Mean Sea Surface (MSS) model. The model utilises radar-altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier-domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.
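Of the processing steps listed, the spectral optimal interpolation is the easiest to sketch: each wavenumber component of a gridded field is damped by an assumed signal-to-noise ratio, Wiener-style. The power-law signal spectrum and flat noise level below are illustrative assumptions, not the paper's error models:

```python
# Fourier-domain optimal (Wiener-style) interpolation of a gridded field.
import numpy as np

def spectral_optimal_interp(grid, noise_power=0.05):
    n = grid.shape[0]
    f = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n)))
    f[0, 0] = f[0, 1]                                # avoid a singular DC term
    signal_power = f ** -3                           # assumed power-law signal spectrum
    W = signal_power / (signal_power + noise_power)  # per-wavenumber Wiener gain
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * W))

rng = np.random.default_rng(3)
noisy_mss = rng.standard_normal((128, 128))          # stand-in for a gridded MSS field
smoothed = spectral_optimal_interp(noisy_mss)
```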