8 results for image and video annotation
in WestminsterResearch - UK
Abstract:
Data registration refers to a family of techniques for matching or bringing similar objects or datasets into alignment. These techniques are widely used in a broad range of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis and structure from motion. Registration methods are as numerous as their uses, ranging from pixel-level and block- or feature-based methods to Fourier-domain methods. This book focuses on algorithms and techniques for image and video registration, together with metrics for assessing registration quality and performance. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
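As a concrete illustration of the Fourier-domain family of methods mentioned above, the sketch below estimates a global translation between two images by phase correlation. It is a minimal example using only NumPy; the function name and the example data are our own and are not taken from the book.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the (row, col) translation of `moved` relative to `ref`
    using Fourier-domain phase correlation (translation-only registration)."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moved)
    # Normalised cross-power spectrum keeps only the phase difference
    cross = F_mov * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12
    # Its inverse transform peaks at the displacement between the images
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks beyond half the image size as negative shifts
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, ref.shape))

# Example: an image rolled by (3, 5) pixels should be recovered as (3, 5)
img = np.random.rand(64, 64)
print(phase_correlation_shift(img, np.roll(img, (3, 5), axis=(0, 1))))
```

Block-matching and feature-based approaches handle more general transformations, but this translation-only case shows the basic idea of aligning data in the frequency domain.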
Abstract:
Assessing the subjective quality of processed images through an objective quality metric is a key issue in multimedia processing and transmission. In some scenarios, it is also important to evaluate the quality of the received images with minimal reference to the transmitted ones. For instance, in closed-loop optimisation of image and video transmission, the quality measure can be evaluated at the receiver and fed back to the system controller. The original images - prior to compression and transmission - are not usually available at the receiver side, so the receiver must rely on an objective quality metric that needs no reference, or only minimal reference, to the original images. The observation that the human eye is very sensitive to the edge and contour information of an image underpins our proposed reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results show that the metric correlates well with subjective observations and compares favourably with commonly used full-reference metrics and with a state-of-the-art reduced-reference metric. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
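The abstract does not spell out the exact formulation, but the general idea of a reduced-reference edge-based metric can be sketched as follows: the sender transmits a compact edge descriptor of the original image, and the receiver compares it with the same descriptor computed on the decoded image. The gradient operator, histogram descriptor and similarity score below are illustrative assumptions, not the published metric.

```python
import numpy as np

def edge_histogram(img, bins=16):
    """Compact edge descriptor: a normalised histogram of gradient magnitudes.
    This small vector is the only 'reference' information the receiver needs."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    # Images are assumed normalised to [0, 1], so gradient magnitudes fit in [0, 1]
    hist, _ = np.histogram(magnitude, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def rr_edge_quality(ref_descriptor, distorted, bins=16):
    """Compare the distorted image's edge statistics with the reference descriptor.
    Returns a score in [0, 1]; 1 means identical edge statistics."""
    # Histogram intersection as a simple similarity measure
    return float(np.minimum(ref_descriptor, edge_histogram(distorted, bins)).sum())

# Sender side: compute and transmit only the compact descriptor of the original
original = np.random.rand(128, 128)
descriptor = edge_histogram(original)

# Receiver side: score the decoded image without access to the original pixels
decoded = np.clip(original + 0.05 * np.random.randn(128, 128), 0.0, 1.0)
print(rr_edge_quality(descriptor, decoded))
```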
Abstract:
In global engineering enterprises, information and knowledge sharing are critical factors that can determine a project’s success. This statement is widely acknowledged in published literature. However, according to some academics, tacit knowledge is derived from a person’s lifetime of experience, practice, perception and learning, which makes it hard to capture and document in order to be shared. This project investigates whether social media tools can be used to improve and enable tacit knowledge sharing within a global engineering enterprise. This paper first provides a brief background to the subject area, followed by an explanation of the industrial investigation, and then presents the knowledge framework proposed to improve tacit knowledge sharing. The project’s main focus is on improving collaboration and knowledge sharing amongst product development engineers in order to improve the whole product development cycle.
Abstract:
Rapid developments in display technologies, digital printing, imaging sensors, image processing and image transmission are providing new possibilities for creating and conveying visual content. In an age in which images and video are ubiquitous and where mobile, satellite, and three-dimensional (3-D) imaging have become ordinary experiences, quantification of the performance of modern imaging systems requires appropriate approaches. At the end of the imaging chain, a human observer must decide whether images and video are of a satisfactory visual quality. Hence the measurement and modeling of perceived image quality is of crucial importance, not only in visual arts and commercial applications but also in scientific and entertainment environments. Advances in our understanding of the human visual system offer new possibilities for creating visually superior imaging systems and promise more accurate modeling of image quality. As a result, there is a profusion of new research on imaging performance and perceived quality.
Abstract:
The police use both subjective (i.e. police staff) and automated (e.g. face recognition systems) methods for the completion of visual tasks (e.g. person identification). Image quality for police tasks has been defined as image usefulness, or the suitability of the visual material for satisfying a visual task. It is not necessarily affected by artefacts that degrade visual image quality (i.e. decrease fidelity), as long as these artefacts do not affect the information that is useful for the task. The capture of useful information will be affected by the unconstrained conditions commonly encountered by CCTV systems, such as variations in illumination and high compression levels. The main aim of this thesis is to investigate aspects of image quality and video compression that may affect the completion of police visual tasks/applications with respect to CCTV imagery. This is accomplished by investigating three specific police areas/tasks utilising: 1) the human visual system (HVS) for a face recognition task, 2) automated face recognition systems, and 3) automated human detection systems. These systems (HVS and automated) were assessed with defined scene content properties and with video compression, i.e. H.264/MPEG-4 AVC. The performance of imaging systems/processes (e.g. subjective investigations, performance of compression algorithms) is affected by scene content properties. No other investigation has been identified that takes scene content properties into consideration to the same extent. Results have shown that the HVS is more sensitive to compression effects than the automated systems. For the automated face recognition systems, 'mixed lightness' scenes were the most affected and 'low lightness' scenes the least affected by compression. In contrast, for the HVS in the face recognition task, 'low lightness' scenes were the most affected and 'medium lightness' scenes the least affected. For the automated human detection systems, 'close distance' and 'run approach' were among the most commonly affected scenes. The findings have the potential to broaden the methods used for testing imaging systems for security applications.
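The thesis evaluates H.264/MPEG-4 AVC video against both human observers and automated systems; as a much simplified illustration of the automated side of that kind of evaluation, the sketch below re-encodes a still image at several JPEG quality levels and counts how many faces an off-the-shelf detector still finds. OpenCV's Haar cascade stands in for the face recognition systems studied, and the file name and quality levels are hypothetical.

```python
import cv2

# Off-the-shelf Haar cascade detector standing in for the automated systems studied
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def faces_after_compression(gray_image, quality):
    """Re-encode the frame at the given JPEG quality, decode it,
    and count how many faces the detector still finds."""
    ok, buf = cv2.imencode(".jpg", gray_image, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    decoded = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
    return len(cascade.detectMultiScale(decoded, scaleFactor=1.1, minNeighbors=5))

# Illustrative sweep over compression levels for one (hypothetical) test scene
scene = cv2.imread("test_scene.png", cv2.IMREAD_GRAYSCALE)
for quality in (90, 50, 20, 5):
    print(quality, faces_after_compression(scene, quality))
```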
Abstract:
The institutionalization of Utopia Studies in the last decade is premised upon a specifically aesthetic reception of Ernst Bloch’s theory of the “utopian impulse” during the 1980s and 1990s. A postmodern uneasiness towards both left and right formulations of the "End of History" during this period imposes a resistance to concepts of historical and political closure or totality, resulting in a "Utopianism without Utopia". For all the attractiveness of this pan-utopianism, its failure to consider the relation between historical representation and fulfillment renders it consummate with liberalism as a merely inverted conservatism. In contrast to this specific recuperation of Bloch, the continuing importance of Walter Benjamin’s theory of the dialectical image, and of the speculative concept of historical experience which underlies it, becomes apparent. The intrusion of the historical Absolute is coded throughout Benjamin’s thought as the eruptive and mortuary figure of catastrophe, which stands as the dialectical counterpart to the utopian wish images of the collective dream. Indeed, the motto under which the Arcades Project was to be constructed derives from Adorno: “Each epoch dreams of itself as annihilated by catastrophe”.
Abstract:
Code without a message (taq), animation and stills