939 results for Minimum Criteria for Interview


Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria, as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
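The generalized Gaussian shape parameter mentioned above is commonly estimated from sample moments. The sketch below uses the standard moment-ratio/bisection estimator, which is an illustrative stand-in and not necessarily the thesis's exact least-squares formulation; the function names are hypothetical:

```python
import math
import random

def ggd_ratio(beta):
    """Moment ratio (E|x|)^2 / E[x^2] of a generalized Gaussian:
    Gamma(2/beta)^2 / (Gamma(1/beta) * Gamma(3/beta))."""
    lg = math.lgamma
    return math.exp(2 * lg(2 / beta) - lg(1 / beta) - lg(3 / beta))

def estimate_shape(coeffs, lo=0.1, hi=5.0, iters=60):
    """Estimate the GGD shape parameter of wavelet coefficients by
    matching the sample moment ratio; ggd_ratio is monotonically
    increasing in beta, so plain bisection suffices."""
    m1 = sum(abs(x) for x in coeffs) / len(coeffs)
    m2 = sum(x * x for x in coeffs) / len(coeffs)
    r = m1 * m1 / m2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < r:
            lo = mid  # sample ratio implies a larger shape parameter
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For Gaussian data the estimate converges to about beta = 2 and for Laplacian data to about beta = 1, which is why the generalized Gaussian is a convenient family for the sharply peaked histograms of wavelet coefficients.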

Relevance:

20.00%

Publisher:

Abstract:

Objective: To evaluate the fruit and vegetable intakes of Australian adults aged 19-64 years. Methods: Intake data were collected as part of the National Nutrition Survey 1995, representing all Australian States and Territories, including city, metropolitan, rural and remote areas. Dietary intake of 8,891 19- to 64-year-olds was assessed using a structured 24-hour recall. Intake frequency was assessed as the proportion of participants consuming fruit and vegetables on the day prior to interview, and variety was assessed as the number of subgroups of fruit and vegetables consumed. Intake levels were compared with the recommendations of the Australian Guide to Healthy Eating (AGHE). Results: Sixty-two per cent of participants consumed some fruit and 89% consumed some vegetables on the day surveyed. Males were less likely to consume fruit, and younger adults less likely to consume fruit and vegetables, compared with females and older adults respectively. Variety was primarily low (1 subcategory) for fruit and medium (3-4 subcategories) for vegetables. Thirty-two per cent of adults consumed the minimum two serves of fruit and 30% consumed the minimum five serves of vegetables as recommended by the AGHE. Eleven per cent of adults met the minimum recommendations for both fruit and vegetables. Conclusion: A large proportion of adults have fruit and vegetable intakes below the AGHE minimum recommendations. Implications: A nationally integrated, long-term campaign to increase fruit and vegetable consumption, supported by policy changes to address structural barriers to consumption, is vital to improve fruit and vegetable consumption among adults.
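Compliance figures of this kind are computed mechanically from per-participant serve counts. A minimal sketch, assuming one (fruit_serves, vegetable_serves) tuple per participant; the AGHE thresholds (two fruit serves, five vegetable serves) come from the abstract, while the function name and data layout are illustrative:

```python
def aghe_compliance(records):
    """records: list of (fruit_serves, veg_serves) from a 24-hour recall.
    Returns the proportions meeting the AGHE fruit minimum, the vegetable
    minimum, and both minimums simultaneously."""
    n = len(records)
    fruit = sum(f >= 2 for f, v in records) / n
    veg = sum(v >= 5 for f, v in records) / n
    both = sum(f >= 2 and v >= 5 for f, v in records) / n
    return fruit, veg, both
```

Note that the "both" proportion can be no larger than the smaller of the two individual proportions, which is why it (11% in the survey) sits well below the separate fruit (32%) and vegetable (30%) figures.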

Relevance:

20.00%

Publisher:

Abstract:

Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. 
In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
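The ranking-consistency idea above can be illustrated with a simple measure: the fraction of sites a method flags in one period that it flags again in the next. This is an illustrative sketch only, not the paper's exact test formulation:

```python
def site_consistency(scores_p1, scores_p2, top_n):
    """scores_p1, scores_p2: {site: hazard score} for two periods under
    one HSID method (e.g. accident frequency or an EB estimate).
    Returns the fraction of the top_n sites flagged in period 1 that
    are flagged again in period 2."""
    top1 = set(sorted(scores_p1, key=scores_p1.get, reverse=True)[:top_n])
    top2 = set(sorted(scores_p2, key=scores_p2.get, reverse=True)[:top_n])
    return len(top1 & top2) / top_n
```

A method that chases random fluctuation in accident counts will score poorly here, which is consistent with the abstract's finding that accident-rate ranking performs least well while empirical Bayes, which shrinks noisy counts toward an expected value, performs best.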

Relevance:

20.00%

Publisher:

Abstract:

High reliability of railway power systems is one of the essential criteria to ensure quality and cost-effectiveness of railway services. Evaluation of reliability at the system level is essential not only for scheduling maintenance activities, but also for identifying reliability-critical components. Various methods to compute reliability for individual components or regularly structured systems have been developed and proven effective. However, they are not adequate for evaluating complicated systems with numerous interconnected components, such as railway power systems, or for locating the reliability-critical components. Fault tree analysis (FTA) integrates the reliability of individual components into the overall system reliability through quantitative evaluation and identifies the critical components by minimal cut sets and sensitivity analysis. The paper presents the reliability evaluation of railway power systems by FTA and investigates the impact of maintenance activities on overall reliability. The applicability of the proposed methods is illustrated by case studies in AC railways.
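Given the minimal cut sets and independent basic-event failure probabilities, the quantitative step of FTA can be sketched as below. This uses the standard rare-event upper bound and the Fussell-Vesely importance measure as illustrations; the paper's exact formulation may differ:

```python
from math import prod

def cutset_prob(cut, p):
    """Probability that every basic event in one minimal cut set fails,
    assuming independent events with failure probabilities p[event]."""
    return prod(p[e] for e in cut)

def top_event_upper_bound(cut_sets, p):
    """Rare-event approximation: the top-event probability is bounded
    above by the sum of the minimal cut set probabilities."""
    return sum(cutset_prob(c, p) for c in cut_sets)

def fussell_vesely(cut_sets, p, component):
    """Fussell-Vesely importance: the share of top-event probability
    carried by cut sets containing the component, used to flag
    reliability-critical components."""
    total = top_event_upper_bound(cut_sets, p)
    return sum(cutset_prob(c, p) for c in cut_sets if component in c) / total
```

For example, with cut sets {A, B} and {C} and small failure probabilities, a single-event cut set like {C} typically dominates the importance ranking even when its probability is low, which is exactly the kind of insight used to prioritize maintenance.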

Relevance:

20.00%

Publisher:

Abstract:

The multi-criteria decision making methods, Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA), and the two-way Positive Matrix Factorization (PMF) receptor model were applied to airborne fine particle compositional data collected at three sites in Hong Kong during two monitoring campaigns held from November 2000 to October 2001 and November 2004 to October 2005. PROMETHEE/GAIA indicated that air quality at the three sites was worse during the later monitoring campaign, and that the order of the air quality at the sites during each campaign was: rural site > urban site > roadside site. The PMF analysis, on the other hand, identified 6 common sources at all of the sites (diesel vehicle, fresh sea salt, secondary sulphate, soil, aged sea salt and oil combustion), which accounted for approximately 68.8 ± 8.7% of the fine particle mass at the sites. In addition, road dust, gasoline vehicle, biomass burning, secondary nitrate, and metal processing were identified at some of the sites. Secondary sulphate was found to be the highest contributor to the fine particle mass at the rural and urban sites, with vehicle emission a high contributor at the roadside site. The PMF results are broadly similar to those obtained in a previous analysis by PCA/APCS. However, the PMF analysis resolved more factors at each site than the PCA/APCS. In addition, the study demonstrated that combined results from multi-criteria decision making analysis and receptor modelling can provide more detailed information that can be used to formulate the scientific basis for mitigating air pollution in the region.
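PROMETHEE ranks alternatives by pairwise outranking flows. The sketch below is a minimal PROMETHEE II net-flow calculation using the simplest ("usual") preference function; the toy data mimic the site ordering reported above, whereas the real study used many pollutant-concentration criteria and richer preference functions:

```python
def promethee_net_flows(alternatives, weights):
    """alternatives: {name: [criterion values]}, where higher is better
    on every criterion here; weights should sum to 1. Uses the 'usual'
    preference function: full preference if strictly better, else none."""
    names = list(alternatives)
    n = len(names)

    def pref(a, b):
        # Weighted share of criteria on which a strictly beats b.
        return sum(w for w, x, y in
                   zip(weights, alternatives[a], alternatives[b]) if x > y)

    flows = {}
    for a in names:
        plus = sum(pref(a, b) for b in names if b != a) / (n - 1)
        minus = sum(pref(b, a) for b in names if b != a) / (n - 1)
        flows[a] = plus - minus  # PROMETHEE II net outranking flow
    return flows
```

Ranking alternatives by descending net flow gives a complete order; GAIA then projects the same preference information onto a plane for visual inspection of conflicting criteria.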

Relevance:

20.00%

Publisher:

Abstract:

In Australia, rural research and development corporations and companies expended over $AUS500 million on agricultural research and development. A substantial proportion of this is invested in R&D in the beef industry. The Australian beef industry exports almost $AUS5 billion of product annually and invests heavily in new product development to improve beef quality and production efficiency. Review points are critical for effective new product development, yet many research and development bodies, particularly publicly funded ones, appear to ignore the importance of assessing products prior to their release. Significant sums of money are invested in developing technological innovations that have low levels and rates of adoption. The adoption rates could be improved if the developers were more focused on technology uptake and less focused on proving their technologies can be applied in practice. Several approaches have been put forward in an effort to improve rates of adoption into operational settings. This paper presents a study of key technological innovations in the Australian beef industry to assess the use of multiple criteria in evaluating the potential uptake of new technologies. Findings indicate that using multiple criteria to evaluate innovations before commercializing a technology enables researchers to better understand the issues that may inhibit adoption.

Relevance:

20.00%

Publisher:

Abstract:

This paper investigates how contemporary works of women’s travel writing are reworking canonical formations of environmental literature by presenting imaginative accounts of travel writing that are both literal and metaphorical. In this context, the paper considers how women who travel/write may intersect the spatial hybridities of travel writing and nature writing, and in doing so, create a new genre of environmental literature that is not only ecologically sensitive but gendered. As the role of female travel writers in generating this knowledge is immense but largely unexamined, this paper will investigate how a feminist geography can be applied, both critically and creatively, to local accounts of travel. It will draw on my own travels around Queensland in an attempt to explore how many female storytellers situate themselves, in and against, various discourses of mobility and morality.

Relevance:

20.00%

Publisher:

Abstract:

While externally moderated standards-based assessment has been practised in Queensland senior schooling for more than three decades, there has been no such practice in the middle years. With the introduction of standards at state and national levels in these years, teacher judgement as developed in moderation practices is now vital. This paper argues that, in this context of assessment reform, standards intended to inform teacher judgement and to build assessment capacity are necessary but not sufficient for maintaining teacher and public confidence in schooling. Teacher judgement is intrinsic to moderation, and to professional practice, and can no longer remain private. Moderation, too, is intrinsic to efforts by the profession to realise judgements that are defensible, dependable and open to scrutiny. Moderation can no longer be considered an optional extra and requires system-level support, especially if, as intended, the standards are linked to system-wide efforts to improve student learning. In presenting this argument, we draw on an Australian Research Council funded study with key industry partners (the Queensland Studies Authority and the National Council for Curriculum and Assessment of the Republic of Ireland). The data analysed included teacher interview data and additional teacher talk during moderation sessions. These were undertaken during the initial phase of policy development. The analysis identified those issues that emerge in moderation meetings that are designed to reach consistent, reliable judgements. Of interest are the different ways in which teachers talked through and interacted with one another to reach agreement about the quality of student work in the application of standards. There is evidence of differences in the way that teachers made compensations and trade-offs in their award of grades, dependent on the subject domain in which they teach.
This article concludes with some empirically derived insights into moderation practices as policy and social events.

Relevance:

20.00%

Publisher:

Abstract:

Purpose. To investigate evidence-based visual field size criteria for referral of low-vision (LV) patients for mobility rehabilitation. Methods. One hundred and nine participants with LV and 41 age-matched participants with normal sight (NS) were recruited. The LV group was heterogeneous with diverse causes of visual impairment. We measured binocular kinetic visual fields with the Humphrey Field Analyzer and mobility performance on an obstacle-rich, indoor course. Mobility was assessed as percent preferred walking speed (PPWS) and number of obstacle-contact errors. The weighted kappa coefficient of association (κr) was used to discriminate LV participants with both unsafe and inefficient mobility from those with adequate mobility on the basis of their visual field size for the full sample and for subgroups according to type of visual field loss and whether or not the participants had previously received orientation and mobility training. Results. LV participants with both PPWS <38% and errors >6 on our course were classified as having inadequate (inefficient and unsafe) mobility compared with NS participants. Mobility appeared to be first compromised when the visual field was less than about 1.2 steradians (sr; solid angle of a circular visual field of about 70° diameter). Visual fields <0.23 and 0.63 sr (31 to 52° diameter) discriminated patients with at-risk mobility for the full sample and across the two subgroups. A visual field of 0.05 sr (15° diameter) discriminated those with critical mobility. Conclusions. 
Our study suggests that: practitioners should be alert to potential mobility difficulties when the visual field is less than about 1.2 sr (70° diameter); assessment for mobility rehabilitation may be warranted when the visual field is constricted to about 0.23 to 0.63 sr (31 to 52° diameter) depending on the nature of their visual field loss and previous history (at risk); and mobility rehabilitation should be conducted before the visual field is constricted to 0.05 sr (15° diameter; critical).
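The steradian figures above follow from the solid angle of a circular field, omega = 2*pi*(1 - cos(theta)), where theta is the field's half-angle. A small sketch reproducing the paper's diameter-to-steradian conversions (the function name is illustrative):

```python
import math

def field_solid_angle(diameter_deg):
    """Solid angle in steradians subtended by a circular visual field
    of the given angular diameter, via omega = 2*pi*(1 - cos(theta))
    with theta = half the field diameter."""
    theta = math.radians(diameter_deg / 2)
    return 2 * math.pi * (1 - math.cos(theta))
```

This reproduces the correspondences quoted in the abstract: a 70-degree field is about 1.14 sr (the "about 1.2 sr" threshold), 31 and 52 degrees give roughly 0.23 and 0.63 sr (the at-risk band), and 15 degrees gives about 0.05 sr (the critical level).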

Relevance:

20.00%

Publisher:

Abstract:

David Held is the Graham Wallas Chair in Political Science, and co-director of LSE Global Governance, at the London School of Economics. He is the author of many works, such as Cosmopolitanism: Ideals and Realities (2010); The Cosmopolitanism Reader (2010), with Garrett Brown; Globalization/Anti-Globalization (2007), Models of Democracy (2006), Global Covenant (2004) and Global Transformations: Politics, Economics and Culture (1999). Professor Held is also the co-founder, alongside Professor Lord Anthony Giddens, of Polity Press. Professor Held is widely known for his work concerning cosmopolitan theory, democracy, and social, political and economic global improvement. His Global Policy journal endeavours to marry academic developments with practitioner realities, and contributes to the understanding and improvement of our governing systems.

Relevance:

20.00%

Publisher:

Abstract:

Dr. Isakhan is currently a research associate with the Centre for Dialogue at La Trobe University in Australia. His latest works include several forthcoming books: Democracy in Iraq is a monograph soon to be released, whilst The Edinburgh Companion to the History of Democracy and The Secret History of Democracy, both co-edited with Stephen Stockwell, are edited collections. His most recent articles include "Targeting the Symbolic Dimension of Baathist Iraq," "Measuring Islam in Australia" and "Manufacturing Consent in Iraq." For further information regarding Dr. Isakhan and his works, please visit his website, www.benjaminisakhan.com.

Relevance:

20.00%

Publisher:

Abstract:

Dr. Richard Shapcott is a senior lecturer in International Relations at the University of Queensland. His research interests concern international ethics, cosmopolitan political theory and cultural diversity. He is the author of the recently published book International Ethics: A Critical Introduction, and several other pieces, such as "Anti-Cosmopolitanism, the Cosmopolitan Harm Principle and Global Dialogue," in Michalis' and Petito's book Civilizational Dialogue and World Order. He is also the author of "Dialogue and International Ethics: Religion, Cultural Diversity and Universalism," in Patrick Hayden's The Ashgate Research Companion to Ethics and International Relations.