51 results for codebook
Abstract:
Detection of objects in video is a highly demanding area of research, and background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground ROI from the background. Codebooks store compressed information, requiring less memory and enabling faster processing. The hybrid method, which combines block-based and pixel-based codebooks, provides efficient detection results: the high-speed processing of block-based background subtraction and the high precision rate of pixel-based background subtraction are both exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D-DCT and FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity and F-measure for the hybrid system of 88.74%, 91.09%, 81.66% and 89.90% respectively, demonstrating the efficiency of the proposed system.
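A minimal sketch of the statistical measures the abstract reports, computed from hypothetical true/false positive and false negative pixel counts (the counts below are illustrative, not from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Foreground-detection quality measures from pixel counts.

    tp: foreground pixels correctly detected
    fp: background pixels wrongly marked foreground
    fn: foreground pixels missed
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    similarity = tp / (tp + fp + fn)          # Jaccard index
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, similarity, f_measure
```

As a sanity check on the reported figures, a precision of 88.74% and recall of 91.09% give 2(0.8874)(0.9109)/(0.8874 + 0.9109) ≈ 0.8990, matching the stated F-measure of 89.90%.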
Abstract:
The researcher presents the details, findings, and critique of a pre-pilot study conducted on a codebook created for a textbook comparison. She used Cohen’s alpha and percent agreement to determine inter-rater reliabilities for coding categories. These values revealed changes needed in the coding scheme and in the coder training process for the future comparison study.
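The abstract names "Cohen's alpha"; the standard chance-corrected agreement statistic of that family is Cohen's kappa, so the sketch below shows percent agreement alongside kappa. The two-coder labels in the example are illustrative, not from the study:

```python
from collections import Counter

def percent_agreement(a, b):
    # Fraction of items on which the two coders assigned the same category.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # Chance-corrected agreement: (p_o - p_e) / (1 - p_e), where p_e is the
    # agreement expected if the coders labelled independently according to
    # their observed marginal category frequencies.
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Percent agreement alone overstates reliability when one category dominates, which is why a chance-corrected statistic is reported alongside it.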
Abstract:
Principal Topic: For forward-thinking companies, the environment may represent the "biggest opportunity for enterprise and invention the industrial world has ever seen" (Cairncross 1990). Increasing media attention, including the promotion of Al Gore's "An Inconvenient Truth", has raised awareness of environmental and sustainability issues and increased demand for business processes that reduce the detrimental environmental impacts of global development (Dean & McMullen 2007). The increased demand for more environmentally sensitive products and services represents an opportunity for the development of ventures that seek to satisfy this demand through entrepreneurial action. As a consequence, recent market developments in renewable energy, carbon emissions, fuel cells, green building, and other sectors suggest that environmental entrepreneurship offers increasingly important opportunities (Dean and McMullen 2007) and constitutes an increasingly important area of business activity (Schaper 2005). In the last decade in particular, big business has sought to develop a more "sustainability-friendly" orientation in response to public pressure and increased government legislation and policy to improve environmental performance (Cohen and Winn 2007). Whilst the literature and media are littered with examples of the sustainability practices of large firms, nascent and young sustainability firms have only recently begun generating strong research and policy interest (Shepherd, Kuskova and Patzelt 2009): not only for their potential to generate above-average financial performance and returns, owing to the greater popularity of and demand for sustainability products and services, but also for their intent to lessen environmental impacts and to provide a more accurate reflection of the "true cost" of market offerings, taking into account carbon and environmental impacts.
More specifically, researchers have suggested that although the previous focus has been on large firms and their impact on the environment, the estimated collective impact of entries and exits of nascent and young firms is substantial and could outweigh the combined environmental impact of large companies (Hillary, 2000). Greater attention should therefore be paid to researching the sustainability practices of nascent and young firms, both for their role in reducing environmental impacts and for their potential for higher financial performance. Whilst this research uses only the first wave of a four-year longitudinal study of nascent and young firms, it can still provide an initial analysis on which to base further research. The aim of this paper is therefore to provide an overview of the emerging literature in sustainable entrepreneurship and to present selected preliminary results from the first wave of data collection, comparing, where appropriate, sustainable firms with firms that do not fulfil this criterion. "One of the key challenges in evaluating sustainability entrepreneurship is the lack of agreement in how it is defined" (Schaper, 2005: 10). Some treat sustainable entrepreneurs simply as one category of entrepreneurs, with little difference between them and traditional entrepreneurs (Dees, 1998). Other research recognises values-based sustainable enterprises as requiring a unique perspective (Parrish, 2005). Some see environmental or sustainable entrepreneurship as a subset of social entrepreneurship (Cohen & Winn, 2007; Dean & McMullen, 2007), whilst others see it as a separate, distinct theory (Archer 2009). Following one of the first definitions of sustainability, developed by the Brundtland Commission (1987), we define sustainable entrepreneurship as firms which "seek to meet the needs and aspirations of the present without compromising the ability to meet those of the future".
---------- Methodology/Key Propositions: In this exploratory paper we investigate sustainable entrepreneurship using Cohen et al.'s (2008) framework to identify strategies of nascent and young entrepreneurial firms. We use data from the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE). This study shares its general empirical approach with the PSED studies in the US (Reynolds et al 1994; Reynolds & Curtin 2008). The overall study uses samples of 727 nascent (not yet operational) firms and 674 young firms, the latter being operational but less than four years old. To generate the sub-sample of sustainability firms, we applied content analysis techniques to the firm titles, descriptions and product descriptions provided by respondents. Two independent coders used a predefined codebook, developed from our review of the sustainability entrepreneurship literature (Cohen et al. 2009), to evaluate the content based on terms such as "sustainable", "eco-friendly", "renewable energy" and "environment", amongst others. Inter-rater reliability was checked and the kappa coefficient was found to be within the acceptable range (0.746). 85 firms fulfilled the criteria for inclusion in the sustainability cohort. ---------- Results and Implications: The results for this paper are based on Wave One of the CAUSEE survey, which has been completed, and the data is available for analysis. It is expected that the findings will assist in beginning to develop an understanding of nascent and young firms that are driven to contribute to a society which is sustainable not just from an economic perspective (Cohen et al 2008), but from environmental and social perspectives as well.
The CAUSEE study provides an opportunity to compare, using a large-scale novel longitudinal dataset, the characteristics of sustainability entrepreneurs with those of entrepreneurial firms without a stated environmental purpose, which constitute the majority of the new firms created each year. The results have implications for government in designing better conditions for the creation of new businesses, for agencies assisting sustainability firms in developing advice programs informed by a better understanding of their needs and requirements, for individuals who may be considering becoming entrepreneurs in high-potential arenas, and for existing entrepreneurs seeking to make better decisions.
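The keyword screen described in the methodology can be sketched as follows. Note the assumptions: the study used two human coders with a full codebook, not an automatic filter, and the term list and substring-matching rule here are purely illustrative:

```python
# Hypothetical term list drawn from the examples quoted in the abstract.
SUSTAINABILITY_TERMS = ("sustainable", "eco-friendly", "renewable energy", "environment")

def flag_sustainability(description):
    # Flag a firm description if it mentions any codebook term
    # (case-insensitive substring match).
    text = description.lower()
    return any(term in text for term in SUSTAINABILITY_TERMS)
```

In practice such a screen would only shortlist candidates; human coders would still adjudicate borderline cases, which is where the inter-rater reliability check applies.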
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet transform in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
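The bit-allocation step can be illustrated with the classical log-variance rule, a simplified stand-in for the thesis's noise-shaping procedure (it ignores perceptual weights and non-negativity/integer constraints; the variances below are illustrative):

```python
import math

def allocate_bits(subband_variances, avg_rate):
    """High-variance subbands get more bits; the allocation averages to
    avg_rate bits per coefficient:
        b_i = avg_rate + 0.5 * log2(var_i / geometric_mean(var))
    """
    n = len(subband_variances)
    geo_mean = math.exp(sum(math.log(v) for v in subband_variances) / n)
    return [avg_rate + 0.5 * math.log2(v / geo_mean) for v in subband_variances]
```

A noise-shaping scheme like the one described would additionally weight each subband's variance by its visual importance before applying this rule.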
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
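Estimating a block's dominant ridge direction is a standard building block of such algorithms. A common least-squares estimator over the block's per-pixel gradients is sketched below; the thesis's exact method may differ, and the flat gradient lists are an illustrative simplification of a 2-D block:

```python
import math

def block_orientation(gx, gy):
    """Least-squares dominant orientation (radians) of a block's gradient
    field: theta = 0.5 * atan2(2*sum(gx*gy), sum(gx^2 - gy^2)).
    The ridge direction is perpendicular to this gradient orientation.
    """
    sxy = sum(x * y for x, y in zip(gx, gy))
    sxx_syy = sum(x * x - y * y for x, y in zip(gx, gy))
    return 0.5 * math.atan2(2 * sxy, sxx_syy)
```

Doubling the angle before averaging (the atan2 of summed products) is what makes the estimate insensitive to the 180-degree ambiguity of gradient directions.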
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback of VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product-code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit-rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has addressed the coding of images using VQ codebooks trained directly on the source samples, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification in which compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or cases where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, their starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on compressed speech are put forward.
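The two ideas at the heart of the lossy/lossless coupling can be sketched minimally: full-search VQ maps each vector to its nearest codeword, and if the resulting index stream has empirical entropy below log2(codebook size), lossless coding of the indices recovers extra rate. The codebook and vectors below are toy illustrations, not PCVQ speech data:

```python
import math

def quantize(vectors, codebook):
    # Full-search VQ: index of the nearest codeword by squared
    # Euclidean distance.
    def nearest(v):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
    return [nearest(v) for v in vectors]

def index_entropy(indices):
    # Empirical entropy (bits per index) of the codeword index stream.
    n = len(indices)
    counts = {}
    for i in indices:
        counts[i] = counts.get(i, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A skewed index distribution (some codewords used far more often than others) is exactly the situation where a statistical model of the indices, as described above, pays off.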