823 results for Feature Documentary
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed for use by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
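The subband decomposition described above can be illustrated with a single analysis level of the 2-D Haar transform, the simplest wavelet. The thesis designs a fixed wavelet-packet structure with its own filter bank, so this is only an illustrative sketch of how an image splits into one approximation and three detail subbands:

```python
def haar2d_level(img):
    # One analysis level of the 2-D Haar transform: each 2x2 block of the
    # image contributes one coefficient to each of four subbands.
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]  # low-pass approximation
    LH = [[0.0] * w for _ in range(h)]  # difference across columns
    HL = [[0.0] * w for _ in range(h)]  # difference across rows
    HH = [[0.0] * w for _ in range(h)]  # diagonal difference
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 2.0
            LH[i][j] = (a - b + c - d) / 2.0
            HL[i][j] = (a + b - c - d) / 2.0
            HH[i][j] = (a - b - c + d) / 2.0
    return LL, LH, HL, HH
```

For a smooth image region, almost all energy lands in LL, which is the information-packing property the decomposition analysis exploits.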
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
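The generalized Gaussian model of the wavelet coefficients can be fitted in several ways. The abstract describes a least-squares formulation on a nonlinear function of the shape parameter; a simpler moment-matching (Mallat-style) estimator conveys the same idea. A minimal sketch, with arbitrarily chosen bisection bounds:

```python
import math
import random

def gg_ratio(beta):
    # E|X| / sqrt(E[X^2]) for a zero-mean generalized Gaussian with shape beta.
    # This ratio is monotonically increasing in beta, so it can be inverted.
    return math.gamma(2.0 / beta) / math.sqrt(math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_shape(samples, lo=0.1, hi=10.0, iters=60):
    # Moment-matching estimator: compute the sample absolute-mean / rms ratio
    # and invert gg_ratio by bisection to recover the shape parameter.
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 / math.sqrt(m2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gg_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A shape parameter of 2 recovers the Gaussian and 1 the Laplacian; wavelet detail coefficients typically fit shapes below 1, which is what motivates the subband-adaptive quantizer design.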
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
Abstract:
Social enterprises are diverse in their mission, business structures and industry orientations. Like all businesses, social enterprises face a range of strategic and operational challenges and utilize a range of strategies to access resources in support of their venture. This exploratory study examined the strategic management issues faced by Australian social enterprises and the ways in which they respond to these. The research was based on a comprehensive literature review and semi-structured interviews with 11 representatives of eight social enterprises based in Victoria and Queensland. The sample included mature social enterprises and those within two years of start-up. In addition to the research report, the outputs of the project include a series of six short documentaries, which are available on YouTube at http://www.youtube.com/user/SocialEnterpriseQUT#p/u. The research reported on here suggests that social enterprises are sophisticated in utilizing processes of network bricolage (Baker et al. 2003) to mobilize resources in support of their goals. Access to network resources can be both enabling and constraining as social enterprises mature. In terms of the use of formal business planning strategies, all participating social enterprises had utilized these either at the outset or the point of maturation of their business operations. These planning activities were used to support internal operations, to provide a mechanism for managing collective entrepreneurship, and to communicate to external stakeholders about the legitimacy and performance of the social enterprises. Further research is required to assess the impacts of such planning activities, and the ways in which they are used over time. Business structures and governance arrangements varied amongst participating enterprises according to: mission and values; capital needs; and the experiences and culture of founding organizations and individuals. 
In different ways, participants indicated that business structures and governance arrangements are important ways of conferring legitimacy on social enterprise, by signifying responsible business practice and strong social purpose to both external and internal stakeholders. Almost all participants in the study described ongoing tensions in balancing social purpose and business objectives. It is not clear, however, whether these tensions were problematic (in the sense of eroding mission or business opportunities) or productive (in the sense of strengthening mission and business practices through iterative processes of reflection and action). Longitudinal research on the ways in which social enterprises negotiate mission fulfillment and business sustainability would enhance our knowledge in this area. Finally, despite growing emphasis on measuring social impact amongst institutions, including governments and philanthropy, that influence the operating environment of social enterprise, relatively little priority was placed on this activity. The participants in our study noted the complexities of effectively measuring social impact, as well as the operational difficulties of undertaking such measurement within the day-to-day realities of running small to medium businesses. It is clear that impact measurement remains a vexed issue for a number of our respondents. This study suggests that both the value and practicality of social impact measurement require further debate and critically informed evidence, if impact measurement is to benefit social enterprises and the communities they serve.
Abstract:
A good object representation or object descriptor is one of the key issues in object-based image analysis. To effectively fuse color and texture as a unified descriptor at the object level, this paper presents a novel method for feature fusion. The color histogram and the uniform local binary patterns are extracted from arbitrary-shaped image-objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships of the extracted color and texture features. The maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatic selection of the optimal feature set from the fused feature. The proposed method is evaluated using SVM as the benchmark classifier and is applied to object-based vegetation species classification using high spatial resolution aerial imagery. Experimental results demonstrate that great improvement can be achieved by using the proposed feature fusion method.
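As a rough illustration of the texture half of the descriptor, the following sketch computes a 59-bin uniform-LBP histogram over a grey-level image stored as nested lists. The paper extracts these features from arbitrary-shaped image-objects; the rectangular interior used here is a simplification:

```python
def transitions(code):
    # Number of 0/1 changes around the 8-bit circular neighbour pattern.
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

def build_uniform_map():
    # "Uniform" patterns have at most two transitions; for 8 neighbours
    # there are 58 of them, each mapped to its own histogram bin.
    mapping, nbins = {}, 0
    for code in range(256):
        if transitions(code) <= 2:
            mapping[code] = nbins
            nbins += 1
    return mapping, nbins

def uniform_lbp_hist(img):
    # 59-bin uniform-LBP histogram over the interior pixels of a
    # grey-level image; all non-uniform codes share the last bin.
    mapping, nbins = build_uniform_map()
    hist = [0] * (nbins + 1)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            code = 0
            for k, (dr, dc) in enumerate(offs):
                if img[r + dr][c + dc] >= img[r][c]:
                    code |= 1 << k
            hist[mapping.get(code, nbins)] += 1
    return hist
```

In the fusion pipeline this histogram would be concatenated with the color histogram before the kernel PCA step.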
Abstract:
Trajectory design for Autonomous Underwater Vehicles (AUVs) is of great importance to the oceanographic research community. Intelligent planning is required to maneuver a vehicle to high-valued locations for data collection. We consider the use of ocean model predictions to determine the locations to be visited by an AUV, which then provides near-real-time, in situ measurements back to the model to increase the skill of future predictions. The motion planning problem of steering the vehicle between the computed waypoints is not considered here. Our focus is on the algorithm to determine relevant points of interest for a chosen oceanographic feature. This represents a first approach to an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. We design a sampling plan and present experimental results with AUV retasking in the Southern California Bight (SCB) off the coast of Los Angeles.
Abstract:
This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a nonlinear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
Abstract:
This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing nonlinear filtering algorithms. The resulting compactness of the representation is especially suitable to decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.
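Once such a likelihood model is expressed as a Gaussian mixture, instantiating a likelihood for an observed feature vector is a plain density evaluation. A minimal sketch with diagonal covariances; in the papers above the mixture parameters would come from the offline Isomap-plus-EM training phase, whereas the values passed in here are placeholders:

```python
import math

def gmm_likelihood(x, weights, means, variances):
    # Diagonal-covariance Gaussian mixture density evaluated at the
    # observed feature vector x: sum of weighted component densities.
    density = 0.0
    for w, mu, var in zip(weights, means, variances):
        quad = sum((xi - mi) ** 2 / (2.0 * vi) for xi, mi, vi in zip(x, mu, var))
        norm = math.prod(math.sqrt(2.0 * math.pi * vi) for vi in var)
        density += w * math.exp(-quad) / norm
    return density
```

Because the model is just a short list of weights, means and variances, it is compact to transmit, which is the property the second abstract highlights for decentralized sensor networks.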
Abstract:
To date, the majority of films that utilise or feature hip hop music and culture have either been in the realms of documentary, or in ‘show musicals’ (where the film musical’s device of characters bursting into song is justified by the narrative of a pursuit of a career in the entertainment industry). Thus, most films that feature hip hop expression have in some way been tied to the subject of hip hop. A research interest and enthusiasm was developed for utilising hip hop expression in film in a new way, which would extend the narrative possibilities of hip hop film to wider topics and themes. The creation of the thesis film Out of My Cloud, and the writing of this accompanying exegesis, investigate the potential for the use of hip hop expression in an ‘integrated musical’ film (where characters break into song without conceit or explanation). Context and rationale for Out of My Cloud (an Australian hip hop ‘integrated musical’ film) are provided in this writing. It is argued that hip hop is particularly suitable for use in a modern narrative film, and particularly in an ‘integrated musical’ film, due to its current vibrancy and popularity, rap (the vocal element of hip hop) music’s focus on lyrical message and meaning, and rap’s use as an everyday, non-performative method of communication. It is also argued that Australian hip hop deserves greater representation in film and literature due to its current popularity and its nature as a unique and distinct form of hip hop. To date, representation of Australian hip hop in film and television has almost solely been restricted to the documentary form.
Out of My Cloud borrows from elements of social realist cinema such as: contrasts with mainstream cinema, an exploration/recognition of the relationship between environment and development of character, use of non-actors, location-shooting, a political intent of the filmmaker, displaying sympathy for an underclass, representation of underrepresented character types and topics, and a loose narrative structure that does not offer solid resolution. A case is made that it may be appropriate to marry elements of social realist film with hip hop expression due to common characteristics, such as: representation of marginalised or underrepresented groups and issues in society, political objectives of the artist/s, and sympathy for an underclass. In developing and producing Out of My Cloud, a specific method of working with, and filming, actor improvisation was developed. This method was informed by the improvisation and associated camera techniques of filmmakers such as Charlie Chaplin, Mike Leigh, Khoa Do, Dogme 95 filmmakers, and Lars von Trier (post-Dogme 95). A review of the techniques used by these filmmakers is provided in this writing, as well as the impact they have made on my approach. The method utilised in Out of My Cloud was most influenced by Khoa Do’s technique of guiding actors to improvise fairly loosely, but with a predetermined endpoint in mind. A variation of this technique was developed for use in Out of My Cloud, which involved filming with two cameras to allow edits from multiple angles. Specific processes for creating Out of My Cloud are described and explained in this writing. Particular attention is given to the approaches regarding the story elements and the music elements.
Various significant aspects of the process are referred to, including the filming and recording of live musical performances, the recording of ‘freestyle’ performances (lyrics composed and performed spontaneously) and the creation of a scored musical scene involving a vocal performance without regular timing or rhythm. The documentation of processes in this writing serves to make the successful elements of this film transferable and replicable for other practitioners in the field, whilst flagging missteps so that fellow practitioners can avoid them in future projects. While Out of My Cloud is not without its shortcomings as a short film work (for example in the areas of story and camerawork), it provides a significant contribution to the field as a working example of how hip hop may be utilised in an ‘integrated musical’ film, as well as being a rare example of a narrative film that features Australian hip hop. This film and the accompanying exegesis provide insights that contribute to an understanding of techniques, theories and knowledge in the field of filmmaking practice.
Abstract:
Uncooperative iris identification systems at a distance suffer from poor resolution of the captured iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, all existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values. This paper considers transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain-specific information from iris models, improved recognition performance compared to pixel-domain super-resolution can be achieved. This is the first paper to investigate the possibility of feature-domain super-resolution for iris recognition, and experiments confirm the validity of the proposed approach.
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough in addressing this difficulty. This technique discovers both positive and negative patterns in text documents as higher-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine and pattern-based methods on precision, recall and F-measures.
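Okapi BM25, one of the term-based baselines mentioned above, scores a document by combining inverse document frequency with saturated, length-normalised term frequency. A minimal sketch over tokenised documents; the parameter values k1 and b are the common defaults, not taken from the paper:

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    # Okapi BM25: idf weights rare terms up, k1 saturates term frequency,
    # and b normalises for document length relative to the corpus average.
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        tf = doc.count(term)
        score += idf * tf * (k1 + 1.0) / (tf + k1 * (1.0 - b + b * len(doc) / avgdl))
    return score
```

The pattern-based approach in the paper goes beyond this by re-weighting such low-level term scores according to the higher-level patterns the terms appear in.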
Abstract:
The journalism revolution is upon us. In a world where we are constantly being told that everyone can be a publisher and challenges are emerging from bloggers, Twitterers and podcasters, journalism educators are inevitably reassessing what skills we now need to teach to keep our graduates ahead of the game. QUT this year tackled that question head-on as a curriculum review and program restructure resulted in a greater emphasis on online journalism. The author spent a week in the online newsrooms of each of two of the major players – ABC online news and thecouriermail.com – to watch, listen and interview some of the key players. This, in addition to interviews with industry leaders from Fairfax and news.com, led to the conclusion that while there are some new skills involved in new media, much of what the industry is demanding is in fact good old-fashioned journalism. Themes of good spelling, grammar, accuracy and writing skills and a nose for news recurred when industry players were asked what it was that they would like to see in new graduates. While speed was cited as one of the big attributes needed in online journalism, the conclusion of many of the players was that the skills of a good down-table sub or a journalist working for a wire service were not unlike those most used in online newsrooms.
Abstract:
Despite many arguments to the contrary, the three-act story structure, as propounded and refined by Hollywood, continues to dominate the blockbuster and independent film markets. Recent successes in post-modern cinema could indicate new directions and opportunities for low-budget national cinemas.