998 results for Global fingerprint
Abstract:
This paper describes the real-time global vision system for the RoboRoos robot soccer team. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition, and perspective and lens correction. A fast ‘paint’ colour calibration system can calibrate on any face of the YUV or HSI cube. The system also autonomously selects both an appropriate camera gain and colour gains from robot regions across the field to achieve colour uniformity. Camera geometry calibration is performed automatically from a selection of keypoints on the field. The system achieves position accuracy of better than 15 mm over a 4 m × 5.5 m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60 Hz on a 2.0 GHz Pentium 4 microprocessor.
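As a rough illustration of the pipeline's first stage, the sketch below shows lookup-table colour thresholding of the kind commonly used in such systems; the class labels, table layout, and paint-style calibration helper are assumptions for illustration, not the RoboRoos implementation.

```python
# Sketch of lookup-table colour thresholding (illustrative; not the
# RoboRoos code). A 256^3 table maps every YUV triple to a colour class,
# so classifying a whole frame is a single vectorised lookup.
import numpy as np

FIELD, BALL, TEAM, OPPONENT = 0, 1, 2, 3          # hypothetical class labels
lut = np.zeros((256, 256, 256), dtype=np.uint8)   # YUV cube -> class (~16 MB)

def paint(y_rng, u_rng, v_rng, label):
    """'Paint' a box of the YUV cube with a colour class during calibration."""
    lut[y_rng[0]:y_rng[1], u_rng[0]:u_rng[1], v_rng[0]:v_rng[1]] = label

def threshold(frame_yuv):
    """Classify an HxWx3 uint8 YUV frame; returns an HxW array of labels."""
    y, u, v = frame_yuv[..., 0], frame_yuv[..., 1], frame_yuv[..., 2]
    return lut[y, u, v]
```

Segmenting would then reduce the label image to connected regions, with perspective and lens correction mapping region centroids to field coordinates.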
Abstract:
Knowledge is about cultural power. Considering that it is both resource and product within the brave new world of fast capitalism, this collection argues for knowledge cultures that are mutually engaged and hence more culturally inclusive and socially productive. Globalized intellectual property regimes, the privatization of information, and their counterpoint, the information and creative commons movements, constitute productive sites for the exploration of epistemologies that talk with each other rather than at and past each other. Global Knowledge Cultures provides a collection of accessible essays by some of the world’s leading legal scholars, new media analysts, techno activists, library professionals, educators and philosophers. Issues canvassed by the authors include the ownership of knowledge, open content licensing, knowledge policy, the common-wealth of learning, transnational cultural governance, and information futures. Together, they call for sustained intercultural dialogue for more ethical knowledge cultures within contexts of fast knowledge capitalism.
Abstract:
Most of the research into ELT has focused on its linguistic and methodological aspects, which are based on Western scientific traditions. The contributions and experiences of English language teachers themselves, especially their work in overseas contexts, have frequently been overlooked. This volume aims to document the complexity of ELT as ‘work’ in new global economic and cultural conditions, and to explore how this complexity is realised in the everyday experiences of ELT teachers. The development of ELT from the colonial experience to its current status as a global commodity is explored; ELT is then situated in the discourses of globalisation, specifically within Appadurai’s theorisation of global flows of people, images, ideas, technology and money, or scapes. Within this framework, narratives are constructed from the experiences of Native-speaking English teachers. These reveal much about the personal, pedagogical and cultural dimensions of ELT work in non-Centre countries, and will contribute to a greater understanding of the intercultural dimensions of ELT for all those who work in it, and in related educational fields.
Abstract:
Previous investigations have shown that the modal strain energy correlation (MSEC) method can successfully identify damage in truss bridge structures. However, it must incorporate a sensitivity matrix to estimate damage and is not reliable in certain damage detection cases. This paper presents an improved MSEC method in which the modal strain energy change vector is instead predicted by running the eigensolutions online within the optimisation iterations. The trial damage treatment group that brings the fitness function closest to unity is identified as the detected damage location. The improved method is then compared with the original MSEC method, along with other typical correlation-based methods, on the finite element model of a simple truss bridge. The contribution of each considered mode to damage detection accuracy is also weighted and discussed. The iterative search is performed using a genetic algorithm. The results demonstrate that the improved MSEC method meets the demands of detecting damage in truss bridge structures, even when noisy measurements are considered.
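A minimal sketch of the correlation-type fitness such a scheme maximises is given below; predict_msec stands in for re-running the eigensolution on the finite element model with the trial stiffness reductions applied, and all names are assumptions for illustration.

```python
# Sketch (assumptions throughout): score a trial damage pattern by how well
# its predicted modal strain energy change (MSEC) vector correlates with the
# measured one; a genetic algorithm searches for the trial scoring nearest 1.
import numpy as np

def fitness(trial_damage, measured_msec, predict_msec):
    """Correlation coefficient between measured and predicted MSEC vectors."""
    predicted = predict_msec(trial_damage)  # eigensolution on the trial model
    a = measured_msec - measured_msec.mean()
    b = predicted - predicted.mean()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
```

A standard genetic algorithm would then evolve candidate element-wise stiffness reductions, retaining the population members whose fitness approaches unity.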
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet transform in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion, as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used; in this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and the scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed: no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
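As a small illustration of the distribution-modelling step, the sketch below estimates the generalized Gaussian shape parameter of one subband by moment matching, a simpler stand-in for the least-squares estimator described above; the function names are assumptions.

```python
# Sketch: moment-matching estimate of the generalized Gaussian (GGD) shape
# parameter for one wavelet subband (a simpler stand-in for the thesis's
# least-squares estimator). Assumes zero-mean coefficients.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_ratio(nu):
    """Theoretical (E|x|)^2 / E[x^2] for a GGD with shape parameter nu."""
    return gamma(2.0 / nu) ** 2 / (gamma(1.0 / nu) * gamma(3.0 / nu))

def estimate_shape(coeffs):
    """Solve the nonlinear moment equation for the shape parameter."""
    c = np.asarray(coeffs, dtype=float).ravel()
    r = np.mean(np.abs(c)) ** 2 / np.mean(c ** 2)
    return brentq(lambda nu: ggd_ratio(nu) - r, 0.05, 10.0)
```

For a Laplacian-like subband the estimate is near 1; for a Gaussian-like one, near 2. The fitted shape would then drive the bit allocation and quantizer design.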
Abstract:
Agriculture's contribution to radiative forcing is principally through its historical release of carbon in soil and vegetation to the atmosphere and through its contemporary release of nitrous oxide (N2O) and methane (CH4). The sequestration of carbon in soils now depleted of soil organic matter is a well-known strategy for mitigating the buildup of CO2 in the atmosphere. Less well recognized are other mitigation potentials. A full-cost accounting of the effects of agriculture on greenhouse gas emissions is necessary to quantify the relative importance of all mitigation options. Such an analysis shows nitrogen fertilizer, agricultural liming, fuel use, N2O emissions, and CH4 fluxes to have additional significant mitigation potential. By evaluating all sources in terms of their global warming potential, it becomes possible to directly compare greenhouse policy options for agriculture. A comparison of temperate and tropical systems illustrates some of these options.
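A toy version of such full-cost accounting is sketched below: each gas flux is converted to CO2-equivalents with a 100-year global warming potential (the IPCC AR4 values of 25 for CH4 and 298 for N2O are used purely for illustration; any analysis would substitute the factors its framework prescribes).

```python
# Sketch: net CO2-equivalent of an agricultural system's gas fluxes using
# 100-year global warming potentials (illustrative AR4 values; substitute
# the GWPs the accounting framework prescribes).
GWP_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def co2_equivalent(fluxes_kg):
    """Sum per-gas fluxes in kg (negative = sequestration) into kg CO2e."""
    return sum(GWP_100[gas] * kg for gas, kg in fluxes_kg.items())

# e.g. a field emitting 10 kg CH4 and 2 kg N2O while sequestering 500 kg CO2:
net = co2_equivalent({"CH4": 10.0, "N2O": 2.0, "CO2": -500.0})  # = 346 kg CO2e
```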
Abstract:
About this book: Over 100 authors present 25 contributions on the impacts of global change on terrestrial ecosystems, including: key processes of the earth system, such as the CO2 fertilization effect, shifts in disturbances and biome distribution, the saturation of the terrestrial carbon sink, and changes in functional biodiversity; ecosystem services, such as the production of wheat, pest control, and carbon storage in croplands; and sensitive regions of the world threatened by rapid changes in climate and land use, such as high-latitude ecosystems, tropical forests in Southeast Asia, and ecosystems dominated by monsoon climate. The book also explores new research developments on spatial thresholds and nonlinearities, the key role of urban development in global biogeochemical processes, and the integration of natural and social sciences to address complex problems of the human-environment system.
Abstract:
Extended wear has long been the ‘holy grail’ of contact lenses by virtue of the increased convenience and freedom of lifestyle that it accords; however, this modality enjoyed only limited market success during the last quarter of the 20th century. The introduction of silicone hydrogel materials into the market at the beginning of this century heralded the promise of successful extended wear due to the superior oxygen performance of this lens type. To assess patterns of contact lens fitting, including extended wear, over the past decade, up to 1000 survey forms were sent to contact lens fitters in Australia, Canada, Japan, the Netherlands, Norway, the UK and the USA each year between 2000 and 2009. Practitioners were asked to record data relating to the first 10 contact lens fits or refits performed after receiving the survey form. Analysis of the returned forms revealed that, averaged over this period, 9% of all soft lenses prescribed were for extended wear, with national figures ranging from 2% in Japan to 17% in Norway. The trend over the decade was an increase from about 5% of all soft lens fits in 2000 to a peak of between 9 and 12% between 2002 and 2007, followed by a decline to around 7% in 2009. A person receiving extended wear lenses is likely to be an older female being refitted with silicone hydrogel lenses for full-time wear. Although extended wear has yet again failed to fulfil the promise of being the dominant contact lens wearing modality, it remains a viable option for many people.