624 results for Multi-Modal Biometrics, User Authentication, Fingerprint Recognition, Palm Print Recognition
Abstract:
As multimedia-enabled mobile devices such as smartphones and tablets become the day-to-day computing devices of choice for users of all ages, everyone expects all mobile multimedia applications and services to be as smooth and as high-quality as the desktop experience. The grand challenge in delivering multimedia to mobile devices over the Internet is to ensure a quality of experience that meets users' expectations, within reasonable costs, while supporting heterogeneous platforms and wireless network conditions. This book aims to provide a holistic overview of the current and future technologies used for delivering high-quality mobile multimedia applications, while focusing on user experience as the key requirement. The book opens with a section dealing with the challenges in mobile video delivery, one of the most bandwidth-intensive media, which requires smooth streaming and a user-centric strategy to ensure quality of experience. The second section addresses this challenge by introducing some important concepts for future mobile multimedia coding and the network technologies to deliver quality services. The last section combines the user and technology perspectives by demonstrating how user experience can be measured, using case studies on urban community interfaces and Internet telephones.
Abstract:
User needs are a fundamental element of design. If the design process does not properly reflect user needs, the design will be severely compromised. Therefore, it is worthwhile to investigate how the user, and user needs, are understood in the design process. In this article, three accepted linear process models for web site and interactive media design are reviewed in terms of designer and user participation. The article then proposes a user-evolving collaborative design process built on co-creation activities between designer and user. Co-creation activities across the entire design process structurally and ontologically reposition the users, and user needs, centrally, which allows designers to approach user needs holistically by building a partnership with the users. Co-creation creates an equal, evolving participatory process between user and designer towards sharing values and knowledge and creating new domains of collective creativity.
Abstract:
Positive user experience (UX) has become a key factor in designing interactive products. It acts as a differentiator which can determine a product's success in a mature market. However, current UX frameworks and methods do not fully support the early stages of product design and development. During these phases, assessment of UX is challenging as no actual user-product interaction can be tested. This qualitative study investigated anticipated user experience (AUX) to address this problem. Using the co-discovery method, participants were asked to imagine a desired product, anticipate experiences with it, and discuss their views with another participant. Fourteen sub-categories emerged from the data, and relationships among them were defined through co-occurrence analysis. These data formed the basis of the AUX framework, which consists of two networks elucidating 1) how users imagine a desired product and 2) how they anticipate positive experiences with that product. Through this AUX framework, important factors in the process of imagining future products and experiences were learnt, including the way in which these factors interrelate. Focusing on and exploring each component of the two networks in the framework will allow designers to obtain a deeper understanding of the required pragmatic and hedonic qualities of the product, intended uses of the product, user characteristics, potential contexts of experience, and anticipated emotions embedded within the experience. This understanding, in turn, will help designers to better foresee users' underlying needs and to focus on the most important aspects of their positive experience. Therefore, the use of the AUX framework in the early stages of product development will contribute to the design for pleasurable UX.
Abstract:
Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world's biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal's proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.
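The abstract does not describe the recognizer toolbox itself, so the following is only a minimal sketch of the bootstrap idea it mentions: start from a handful of labelled call clips, score unlabelled audio, and fold high-confidence detections back into the training set. The classifier choice (a scikit-learn SVC) and the pre-extracted feature matrices are assumptions made purely for illustration.

```python
# Hedged sketch of bootstrap training from very little labelled data.
import numpy as np
from sklearn.svm import SVC

def bootstrap_recognizer(seed_X, seed_y, unlabelled_X, rounds=3, threshold=0.9):
    """Grow a small labelled set by folding in high-confidence detections."""
    X, y = np.asarray(seed_X), np.asarray(seed_y)
    clf = SVC(probability=True)
    for _ in range(rounds):
        if len(unlabelled_X) == 0:
            break
        clf.fit(X, y)
        proba = clf.predict_proba(unlabelled_X)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break                                  # nothing confident enough to add
        X = np.vstack([X, unlabelled_X[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        unlabelled_X = unlabelled_X[~confident]    # remaining unlabelled clips
    clf.fit(X, y)                                  # final refit on the grown set
    return clf
```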
Diurnal variations in axial length, choroidal thickness, intraocular pressure, and ocular biometrics
Abstract:
The chief challenge facing persistent robotic navigation using vision sensors is the recognition of previously visited locations under different lighting and illumination conditions. The majority of successful approaches to outdoor robot navigation use active sensors such as LIDAR, but the associated weight and power draw of these systems make them unsuitable for widespread deployment on mobile robots. In this paper we investigate methods to combine representations for visible and long-wave infrared (LWIR) thermal images with time information to combat the time-of-day-based limitations of each sensing modality. We calculate appearance-based match likelihoods using the state-of-the-art FAB-MAP [1] algorithm to analyse loop closure detection reliability across different times of day. We present preliminary results on a dataset of 10 successive traverses of a combined urban-parkland environment, recorded at 2-hour intervals from before dawn to after dusk. Improved location recognition throughout an entire day is demonstrated using the combined system compared with methods which use visible or thermal sensing alone.
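The paper's exact fusion rule is not reproduced here; the sketch below only illustrates one way per-place match likelihoods from a visible-light pass and an LWIR pass could be blended with time-of-day information. The daylight weighting function and all names are assumptions for illustration.

```python
# Hedged sketch: log-linear blending of visible and thermal match likelihoods,
# weighted by a crude time-of-day prior (trust the visible camera near midday,
# the thermal camera near dawn, dusk and night).
import numpy as np

def fuse_match_likelihoods(p_visible, p_thermal, hour_of_day):
    """p_visible, p_thermal: arrays of place-match likelihoods for one query."""
    w_vis = np.clip(np.cos((hour_of_day - 13.0) / 12.0 * np.pi) * 0.5 + 0.5, 0.1, 0.9)
    log_p = w_vis * np.log(p_visible + 1e-12) + (1.0 - w_vis) * np.log(p_thermal + 1e-12)
    fused = np.exp(log_p)
    return fused / fused.sum()      # renormalise over the candidate places
```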
Abstract:
This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. Firstly, the paper presents a sequential curve fitting technique that reduces the effect of noise on the calculation of the MSEDI more effectively than the two commonly used curve fitting techniques, namely polynomial and Fourier series. Secondly, a probability-based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection is enhanced with an increased number of modes and samples used with the GDLI.
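As a minimal illustration of the noise-simulation step described above (adding 1%, 3% and 5% random noise to the mode shape data), the sketch below perturbs each mode-shape ordinate with zero-mean random noise scaled to a chosen percentage. The specific noise model used in the paper is an assumption here.

```python
# Hedged sketch: contaminate mode shapes with a given percentage of random noise.
import numpy as np

def add_mode_shape_noise(mode_shapes, level=0.03, rng=None):
    """mode_shapes: (n_modes, n_points) array; level: e.g. 0.01, 0.03 or 0.05."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(mode_shapes.shape) * level * np.abs(mode_shapes)
    return mode_shapes + noise
```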
Abstract:
Audio-visual speech recognition, or the combination of visual lip-reading with traditional acoustic speech recognition, has previously been shown to provide a considerable improvement over acoustic-only approaches in noisy environments, such as that present in an automotive cabin. The research presented in this paper extends upon the established audio-visual speech recognition literature to show that further improvements in speech recognition accuracy can be obtained when multiple frontal or near-frontal views of a speaker's face are available. A series of visual speech recognition experiments using a four-stream visual synchronous hidden Markov model (SHMM) are conducted on the four-camera AVICAR automotive audio-visual speech database. We study the relative contribution of the side and centrally orientated cameras in improving visual speech recognition accuracy. Finally, combination of the four visual streams with a single audio stream in a five-stream SHMM demonstrates a relative improvement of over 56% in word recognition accuracy when compared to the acoustic-only approach in the noisiest conditions of the AVICAR database.
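A synchronous multi-stream HMM typically fuses streams by weighting each stream's per-state emission log-likelihood; the sketch below shows that combination step for four visual streams plus one audio stream. The weights and array shapes are illustrative assumptions, not the values used in the AVICAR experiments.

```python
# Hedged sketch: standard stream-weighted emission combination for a
# synchronous multi-stream HMM.
import numpy as np

def combined_emission_loglik(stream_logliks, stream_weights):
    """
    stream_logliks: (n_streams, n_states) per-stream log b_s(o_t) values.
    stream_weights: (n_streams,) non-negative stream exponents/weights.
    Returns the fused per-state log-likelihood log b(o_t).
    """
    stream_logliks = np.asarray(stream_logliks, dtype=float)
    w = np.asarray(stream_weights, dtype=float)
    return (w[:, None] * stream_logliks).sum(axis=0)

# Example: one audio stream weighted 0.4, four visual streams weighted 0.15 each.
fused = combined_emission_loglik(np.zeros((5, 3)), [0.4, 0.15, 0.15, 0.15, 0.15])
```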
Abstract:
As part of vital infrastructure and transportation networks, bridge structures must function safely at all times. Bridges are designed to have a long life span, yet at any point in time some bridges are aged. The ageing of bridge structures, given the rapidly growing demand for heavy and fast inter-city passage and the continuous increase in freight transportation, requires diligence from bridge owners to ensure that the infrastructure remains healthy at reasonable cost. In recent decades, a new technique, structural health monitoring (SHM), has emerged to meet this challenge. In this new engineering discipline, structural modal identification and damage detection form a vital component. As witnessed by an increasing number of publications, changes in vibration characteristics are widely and deeply investigated as a means of assessing structural damage. Although a number of publications have addressed the feasibility of various methods through experimental verification, few have focused on steel truss bridges. Finding a feasible vibration-based damage indicator for steel truss bridges, and solving the difficulties in practical modal identification to support damage detection, motivated this research project. This research set out to derive an innovative method to assess structural damage in steel truss bridges. First, it proposed a new damage indicator that relies on optimising the correlation between theoretical and measured modal strain energy. The optimisation is powered by a newly proposed multilayer genetic algorithm. In addition, a selection criterion for damage-sensitive modes has been studied to achieve more efficient and accurate damage detection results. Second, in order to support the proposed damage indicator, the research studied the application of two state-of-the-art modal identification techniques while considering some practical difficulties: limited instrumentation, the influence of environmental noise, the difficulties in finite element model updating, and the data selection problem in output-only modal identification methods. Numerical verification (using a planar truss model) and experimental verification (using a laboratory through-truss bridge) have proved the effectiveness and feasibility of the proposed damage detection scheme. The modal strain energy-based indicator was found to be sensitive to damage in steel truss bridges with incomplete measurement, demonstrating the indicator's potential in practical applications on steel truss bridges. Lastly, the achievements and limitations of this study, and lessons learnt from the modal analysis, have been summarised.
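The damage indicator above optimises the correlation between theoretical and measured modal strain energy; a minimal sketch of that kind of fitness function is shown below. The MAC-style correlation form and the existence of a model-based predict_mse routine are assumptions for illustration, and the multilayer genetic algorithm itself is not reproduced.

```python
# Hedged sketch: correlation-based fitness a GA could maximise over candidate
# element-damage vectors.
import numpy as np

def mse_correlation_fitness(damage_vector, measured_mse, predict_mse):
    """predict_mse(damage_vector) -> predicted modal strain energy vector."""
    predicted = predict_mse(damage_vector)
    num = np.dot(measured_mse, predicted) ** 2
    den = np.dot(measured_mse, measured_mse) * np.dot(predicted, predicted)
    return num / den        # MAC-style correlation in [0, 1]; 1 = best match
```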
Abstract:
Facial expression is an important channel of human social communication. Facial expression recognition (FER) aims to perceive and understand the emotional states of humans based on information in the face. Building robust, high-performance FER systems that can work on real-world video is still a challenging task, due to various unpredictable facial variations and complicated exterior environmental conditions, as well as the difficulty of choosing a suitable type of feature descriptor for extracting discriminative facial information. Facial variations caused by factors such as pose, age, gender, race and occlusion can exert a profound influence on robustness, while a suitable feature descriptor largely determines performance. Most attention in FER to date has been paid to addressing variations in pose and illumination. No approach has been reported on handling face localization errors, and relatively few on overcoming facial occlusions, although the significant impact of these two variations on performance has been proved and highlighted in many previous studies. Many texture and geometric features have been previously proposed for FER. However, few comparison studies have been conducted to explore the performance differences between different features and to examine the performance improvement arising from the fusion of texture and geometry, especially on data with spontaneous emotions. The majority of existing approaches are evaluated on databases with posed or induced facial expressions collected in laboratory environments, whereas little attention has been paid to recognizing naturalistic facial expressions in real-world data. This thesis investigates techniques for building robust, high-performance FER systems based on a number of established feature sets. It comprises contributions towards three main objectives: (1) Robustness to face localization errors and facial occlusions. An approach is proposed to handle face localization errors and facial occlusions using Gabor-based templates. Template extraction algorithms are designed to collect a pool of local template features, and template matching is then performed to convert these templates into distances, which are robust to localization errors and occlusions. (2) Improvement of performance through feature comparison, selection and fusion. A comparative framework is presented to compare the performance of different features and different feature selection algorithms, and to examine the performance improvement arising from the fusion of texture and geometry. The framework is evaluated for both discrete and dimensional expression recognition on spontaneous data. (3) Evaluation of performance in the context of real-world applications. A system is selected and applied to discriminating posed versus spontaneous expressions and to recognizing naturalistic facial expressions. A database is collected from real-world recordings and is used to explore feature differences between standard database images and real-world images, as well as between real-world images and real-world video frames. The performance evaluations are based on the JAFFE, CK, Feedtum, NVIE, Semaine and self-collected QUT databases. The results demonstrate high robustness of the proposed approach to the simulated localization errors and occlusions. Texture and geometry make different contributions to the performance of discrete and dimensional expression recognition, as well as to posed versus spontaneous emotion discrimination. These investigations provide useful insights into enhancing the robustness and performance of FER systems and putting them into real-world applications.
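A minimal sketch of the template-to-distance idea in contribution (1): each stored local template is matched over a small search window in the probe face, and the minimum distance found becomes one feature, which is what gives tolerance to localization errors and partial occlusion. Gabor filtering, the template selection step and the window size are assumptions here, not the thesis's exact algorithm.

```python
# Hedged sketch: convert local templates into distance features by searching
# a small window around each template's nominal position.
import numpy as np

def template_distance_features(image, templates, search=8):
    """image: 2-D array; templates: list of (row, col, patch) tuples."""
    feats = []
    for r, c, patch in templates:
        h, w = patch.shape
        best = np.inf
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                rr, cc = r + dr, c + dc
                if rr < 0 or cc < 0:
                    continue                      # stay inside the image
                window = image[rr:rr + h, cc:cc + w]
                if window.shape != patch.shape:
                    continue                      # clipped at the border
                best = min(best, np.linalg.norm(window - patch))
        feats.append(best)                        # minimum distance = one feature
    return np.array(feats)
```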
Abstract:
"The text is unique in the way it balances a "user" and "preparer" perspective and integrates real financial information to illustrate business decision choices and how decisions are made using accounting information. The pedagogical approach presented in the text has been tried and tested over many years, and provides a constructive framework for students to learn fundamental accounting concepts and processes. Through the use of real company information and financial statements students will quickly appreciate the use of accounting information. The textbook clearly outlines to students how to account for typical business transactions and prepare financial statements - such as a balance sheet, income statement, and statement of cash flows - that communicate the financing, operating, and investing activities of a business. Whether a student is required to study one accounting subject, as part of a wider business degree, or undertake a major study of accounting the text builds a strong conceptual understanding of accounting and will develop skills that can be applied to an accounting and business environment. The integral role of financial statements for decision making is also emphasised in this text and is reinforced throughout by the Decision Toolkit in each chapter. Students are provided with an extensive set of tools necessary to make business decisions based on financial information." -- publisher website
Abstract:
The most common software analysis tools available for measuring fluorescence images are designed for two-dimensional (2D) data; they rely on manual settings for inclusion and exclusion of data points, and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed from the approximation and assumptions of the original model-based stereology(1), even in complex tissue sections(2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements, to measure a line distance between two objects, or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (a tree-like structure). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells(3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous because, ideally, it builds cell surfaces without void spaces. To our knowledge, no user-modifiable automated approach has at present been developed that provides morphometric information from 3D fluorescence images and captures cellular spatial information for objects of undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.).
These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but no familiarity with computer applications, to perform quantification of morphological changes in cell dynamics.
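The platform above is built on Imaris XT interfaced to MATLAB; purely to illustrate the underlying idea of measuring amorphous-shaped objects in a 3D stack without a predefined cell model, here is a hedged Python sketch using thresholding and voxel connected-component labelling. It is not the authors' code, and the threshold and voxel size are assumptions.

```python
# Hedged sketch: measure per-object voxel volumes in a 3-D fluorescence stack
# without assuming any particular cell shape.
import numpy as np
from scipy import ndimage

def measure_3d_objects(stack, threshold, voxel_volume_um3=1.0):
    """stack: 3-D numpy array of fluorescence intensities."""
    mask = stack > threshold                       # no predefined cell model
    labels, n_objects = ndimage.label(mask)        # 3-D connected components
    sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
    return {i + 1: s * voxel_volume_um3 for i, s in enumerate(sizes)}
```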
Abstract:
When using a mobile device to control a cursor on a large shared display, the interaction must be carefully planned to match the environment and purpose of the system's use. We describe a 'democratic jukebox' system that revealed five recommendations that should be considered when designing this type of interaction: providing feedback to the user; how to represent users in a multi-cursor-based system; where people tend to look and their expectations of how to move their cursor; the orientation of screens and the social context; and the use of simulated users to give real users a sense that they are engaging with a greater audience.