29 results for Camera Network, Image Processing, Compression
Abstract:
Visually impaired people experience the world very differently: environments that seem simple to 'normally' sighted people can be difficult for people with visual impairments to access and move around. This problem can be hard to fully comprehend for people with 'normal vision', even when guidelines for inclusive design are available. This paper investigates ways in which image processing techniques can be used to simulate the characteristics of a number of common visual impairments, providing planners, designers and architects with a visual representation of how people with visual impairments view their environment. The aim is to promote greater understanding of the issues, the creation of more accessible buildings and public spaces, and increased accessibility for visually impaired people in everyday situations.
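As one hedged illustration of the general approach (not the paper's specific filters; function names and parameters are illustrative), reduced visual acuity can be approximated with a Gaussian blur, and a restricted visual field with a central-disc mask:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian blur: a crude stand-in for reduced visual acuity."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def tunnel_vision(image, radius):
    """Darken everything outside a central disc: a crude tunnel-vision mask."""
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius**2
    return image * mask
```

Real simulations would calibrate blur strength and field geometry against clinical data for each impairment; this sketch only shows the mechanics.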
Abstract:
Self-organizing neural networks have been implemented in a wide range of application areas such as speech processing, image processing, optimization and robotics. Recent variations to the basic model proposed by the authors enable it to order state space using a subset of the input vector and to apply a local adaptation procedure that does not rely on a predefined test duration limit. Both these variations have been incorporated into a new feature map architecture that forms an integral part of a Hybrid Learning System (HLS) based on a genetic-based classifier system. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset targets set against a subset of their features. The system's objective is to achieve these targets by evolving a behavioural repertoire that efficiently explores and exploits the problem environment. Feature maps encode two types of knowledge within HLS: long-term memory traces of useful regularities within the environment, and classifier performance data calibrated against an object's feature states and targets. Self-organization of these networks constitutes non-genetic-based (experience-driven) learning within HLS. This paper presents a description of the HLS architecture and an analysis of the modified feature map implementing associative memory. Initial results are presented that demonstrate the behaviour of the system on a simple control task.
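The self-organizing update underlying such feature maps can be sketched as follows (a generic one-dimensional SOM step, not the authors' modified architecture; the function name and parameters are assumptions): the best-matching unit and its lattice neighbours move toward the input, with influence decaying as a Gaussian of lattice distance.

```python
import numpy as np

def som_update(weights, x, bmu_sigma, lr):
    """One self-organizing-map step on a 1-D lattice of weight vectors.

    weights : (n_nodes, dim) array of node weight vectors.
    x       : (dim,) input vector.
    """
    # Best-matching unit (BMU): node whose weights are closest to the input.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Neighbourhood influence falls off with lattice distance from the BMU.
    dist = np.abs(np.arange(len(weights)) - bmu)
    h = np.exp(-dist**2 / (2 * bmu_sigma**2))
    # All nodes move toward x, scaled by learning rate and neighbourhood.
    return weights + lr * h[:, None] * (x - weights)
```

Repeated over many inputs with shrinking `lr` and `bmu_sigma`, this ordering of state space is what allows the map to act as the associative memory described above.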
Abstract:
The real-time hardware architecture of a deterministic video echo canceller (deghoster) system is presented. The deghoster is capable of calculating all the multipath channel distortion characteristics of terrestrial and cable television in a single pass while performing real-time in-line ghost cancellation of the video. Results from the actual system are also presented in this paper.
Abstract:
The real-time parallel computation of histograms using an array of pipelined cells is proposed and prototyped in this paper, with application to consumer imaging products. The array operates in two modes: histogram computation and histogram reading. The proposed parallel computation method does not use any memory blocks. The resulting histogram bins can be stored in an external memory block in a pipelined fashion for subsequent reading or streaming of the results. The array of cells can be tuned to accommodate the required data-path width in a VLSI image processing engine, as present in many consumer imaging devices. FPGA syntheses of the architectures presented in this paper are shown to compute the real-time histogram of images streamed at over 36 megapixels at 30 frames/s by processing 1, 2 or 4 pixels per clock cycle in parallel.
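The lane-parallel idea can be sketched in software (a hedged illustration, not the paper's VLSI design; the function name and lane-assignment scheme are assumptions): each parallel lane accumulates its own partial histogram, and a final merge corresponds to the array's "reading" mode.

```python
import numpy as np

def parallel_histogram(pixels, n_bins, lanes=4):
    """Software sketch of lane-parallel histogramming.

    Each of `lanes` lanes keeps a private partial histogram, so no two
    lanes ever contend for the same bin; the partials are merged at the end.
    """
    partial = np.zeros((lanes, n_bins), dtype=np.int64)
    for i, p in enumerate(pixels):
        partial[i % lanes, p] += 1   # lane i % lanes handles this pixel
    return partial.sum(axis=0)       # merge stage ("reading" mode)
```

Keeping one accumulator per lane is the standard way to let several pixels be binned per cycle without read-modify-write conflicts on a shared bin.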
Abstract:
Urban microclimates are greatly affected by urban form and texture and have a significant impact on building energy performance. The impact of urban form on energy consumption in buildings relates mainly to the availability of solar radiation, daylighting and natural ventilation. The urban heat island (UHI) effect increases the risk of overheating in buildings as well as the peak energy demand for cooling. A need has arisen for a robust first-cut calculation tool that enables planners, architects and environmental assessors to quickly and accurately compare the impact of different urban forms on local climate and on UHI mitigation strategies. This paper describes a tool for the simulation of urban microclimates, developed by integrating image processing with a coupled thermal and airflow model.
Abstract:
Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
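The contrast between the two penalties can be illustrated through their proximal operators (a standard construction in image processing, not code from the paper): the L2 prox shrinks every component uniformly, while the L1 prox (soft-thresholding) zeroes small components and leaves large ones, such as the gradient spike at a sharp front, nearly intact.

```python
import numpy as np

def prox_l2(v, lam):
    """Proximal map of (lam/2)*||x||^2: uniform shrinkage toward zero."""
    return v / (1.0 + lam)

def prox_l1(v, lam):
    """Proximal map of lam*||x||_1: soft-thresholding.

    Components below lam in magnitude are zeroed; larger ones keep their
    sign and lose only lam, so a sharp front's large gradient survives.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
```

Applied to the gradient of a noisy step signal, soft-thresholding removes the small noise-induced gradients while retaining the jump, which is the intuition behind the L1 regularization's better handling of fronts and shocks.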
Abstract:
Methods for producing nonuniform transformations, or regradings, of discrete data are discussed. The transformations are useful in image processing, principally for enhancement and normalization of scenes. Regradings which “equidistribute” the histogram of the data, that is, which transform it into a constant function, are determined. Techniques for smoothing the regrading, dependent upon a continuously variable parameter, are presented. Generalized methods for constructing regradings such that the histogram of the data is transformed into any prescribed function are also discussed. Numerical algorithms for implementing the procedures and applications to specific examples are described.
Abstract:
Very high-resolution Synthetic Aperture Radar sensors represent an alternative to aerial photography for delineating floods in built-up environments, where flood risk is highest. However, even with currently available SAR image resolutions of 3 m and higher, signal returns from man-made structures hamper the accurate mapping of flooded areas. Enhanced image processing algorithms and a better exploitation of image archives are required to facilitate the use of microwave remote sensing data for monitoring flood dynamics in urban areas. In this study a hybrid methodology combining radiometric thresholding, region growing and change detection is introduced as an approach enabling the automated, objective and reliable extraction of flood extent from very high-resolution urban SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. SAR images acquired during dry conditions enable the identification of areas i) that are not "visible" to the sensor (i.e. regions affected by 'layover' and 'shadow') and ii) that systematically behave as specular reflectors (e.g. smooth tarmac, permanent water bodies). Change detection with respect to a pre- or post-flood reference image thereby reduces over-detection of inundated areas. A case study of the July 2007 Severn River flood (UK), observed by the very high-resolution SAR sensor on board TerraSAR-X as well as by airborne photography, highlights advantages and limitations of the proposed method. We conclude that even though the fully automated SAR-based flood mapping technique overcomes some limitations of previous methods, further technological and methodological improvements are necessary for SAR-based flood detection in urban areas to match the flood mapping capability of high-quality aerial photography.
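The thresholding-plus-region-growing step can be sketched as follows (a toy illustration, not the authors' calibrated method; the function name and both thresholds are assumptions): pixels darker than a strict "open water" threshold seed the flood map, which then grows into 4-connected neighbours satisfying a laxer threshold.

```python
from collections import deque
import numpy as np

def grow_flood_region(backscatter, seed_thresh, grow_thresh):
    """Two-threshold flood mapping sketch on a 2-D backscatter image.

    Pixels below seed_thresh are certain water; the flooded region then
    grows breadth-first into neighbours below the laxer grow_thresh.
    """
    h, w = backscatter.shape
    flooded = backscatter < seed_thresh          # confident "open water" seeds
    queue = deque(zip(*np.nonzero(flooded)))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not flooded[ny, nx]
                    and backscatter[ny, nx] < grow_thresh):
                flooded[ny, nx] = True           # grow into plausible water
                queue.append((ny, nx))
    return flooded
```

In the actual method the thresholds come from a calibrated statistical distribution of open-water backscatter, and change detection against a dry-condition reference removes layover, shadow and permanently smooth surfaces from the result.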
Abstract:
Pulsed terahertz imaging is being developed as a technique to image obscured mural paintings. Due to significant advances in terahertz technology, portable systems are now capable of operating in unregulated environments and this has prompted their use on archaeological excavations. August 2011 saw the first use of pulsed terahertz imaging at the archaeological site of Çatalhöyük, Turkey, where mural paintings dating from the Neolithic period are continuously being uncovered by archaeologists. In these particular paintings the paint is applied onto an uneven surface, and then covered by an equally uneven surface. Traditional terahertz data analysis has proven unsuccessful at sub-surface imaging of these paintings due to the effect of these uneven surfaces. For the first time, an image processing technique is presented, based around Gaussian beam-mode coupling, which enables the visualization of the obscured painting.
Abstract:
In the past decade, the analysis of data has faced the challenge of dealing with very large and complex datasets and with the real-time generation of data. Technologies to store and access these complex and large datasets are in place. However, robust and scalable analysis technologies are needed to extract meaningful information from them. The research field of Information Visualization and Visual Data Analytics addresses this need. Information visualization and data mining are often used to complement each other: their common goal is the extraction of meaningful information from complex and possibly large data. However, whereas data mining relies on silicon hardware, visualization techniques also aim to harness the powerful image-processing capabilities of the human brain. This article highlights research on data visualization and visual analytics techniques. Furthermore, we highlight existing visual analytics techniques, systems and applications, including a perspective on the field from the chemical process industry.
Abstract:
Sea surface temperature has been an important application of remote sensing from space for three decades. This chapter first describes well-established methods that have delivered valuable routine observations of sea surface temperature for meteorology and oceanography. Increasingly demanding requirements, often related to climate science, have highlighted some limitations of these approaches. Practitioners have had to revisit techniques of estimation, of characterising uncertainty, and of validating observations, and even to reconsider the meaning(s) of "sea surface temperature". The current understanding of these issues is reviewed, drawing attention to ongoing questions. Lastly, the prospect for thermal remote sensing of sea surface temperature over coming years is discussed.