898 results for dichroic mirror


Relevance: 10.00%

Abstract:

Services, in the form of business services or IT-enabled (Web) services, have become a corporate asset of high interest in the drive towards the agile organisation. However, while the design and management of a single service is widely studied and well understood, little is known about how a set of services can be managed. This gap motivated this paper, in which we explore the concept of Service Portfolio Management. In particular, we propose a Service Portfolio Management Framework that explicates service portfolio goals, tasks, governance issues, methods and enablers. The Service Portfolio Management Framework is based upon a thorough analysis and consolidation of existing, well-established portfolio management approaches. From an academic point of view, the Service Portfolio Management Framework can be positioned as an extension of portfolio management conceptualisations in the area of service management. Based on the framework, possible directions for future research are provided. From a practical point of view, the Service Portfolio Management Framework provides an organisation with a novel approach to managing its emerging service portfolios.

Relevance: 10.00%

Abstract:

Information System (IS) success may be the most contested and important dependent variable in the IS field. The purpose of the present study is to address IS success by empirically assessing and comparing DeLone and McLean's (1992) and Gable et al.'s (2008) models of IS success in the context of Australian universities. The two models have some commonalities and several important distinctions. Both models integrate and interrelate multiple dimensions of IS success. Hence, it would be useful to compare the models to see which is superior, as it is not clear how IS researchers should respond to this controversy.

Relevance: 10.00%

Abstract:

In response to a range of contextual drivers, the worldwide adoption of ERP systems in Higher Education Institutions (HEIs) has increased substantially over the past decade. Though this demand continues to grow, with HEIs now a main target market for ERP vendors, little has been published on the topic. This paper reports a sub-study of a larger research effort that aims to contribute to understanding the phenomenon of ERP adoption and evaluation in HEIs in the Australasian region. It presents a descriptive case study conducted at Queensland University of Technology (QUT) in Australia, with an emphasis on the challenges of ERP adoption. The case study provides rich contextual details about ERP system selection, customisation, integration and evaluation, and insights into the role of consultants in the HE sector. Through this analysis, the paper (a) provides evidence of the dearth of ERP literature pertaining to the HE sector; (b) yields insights into differentiating factors in the HE sector that warrant specific research attention; and (c) offers evidence of how key ERP decisions such as system selection, customisation, integration, evaluation, and consultant engagement are influenced by the specificities of the HE sector.

Relevance: 10.00%

Abstract:

In a competitive environment, companies continuously innovate to offer superior services at lower costs. 'Shared services' have been extensively adopted in practice as one means of improving organisational performance. Shared services are considered most appropriate for support functions and are widely adopted in Human Resource Management and Finance and Accounting, and more recently have been employed across the Information Systems function. IS applications and infrastructure are an important enabler and driver of shared services in all functional areas. As computer-based corporate information systems have become de facto, and the internet pervasive and increasingly the backbone of administrative systems, the technical impediments to sharing have come down dramatically. As this trend continues, CIOs and IT professionals will need a deeper understanding of the shared services phenomenon and its implications. The advent of shared services also has consequential implications for the IS academic discipline. Yet archival analysis of the IS academic literature reveals that shared services, though mentioned in more than 100 articles, has received little in-depth attention. This paper is a first attempt to investigate and report on the current status of shared services in the IS literature. The paper presents a detailed review of literature from the main IS journals and conferences, with findings evidencing a lack of focus, and definitions and objectives lacking conceptual rigour. The paper concludes with a tentative operational definition, a list of the perceived main objectives of shared services, and an agenda for related future research.

Relevance: 10.00%

Abstract:

Although comparison phakometry has been used in a number of studies to measure posterior corneal shape, these studies have not calculated the size of the posterior corneal zones of reflection they assessed. This paper develops paraxial equations for calculating posterior corneal zones of reflection, based on standard keratometry equations and equivalent mirror theory. For the targets used in previous studies, posterior corneal reflection zone sizes were calculated using the paraxial equations and using exact ray tracing, assuming spherical and aspheric corneal surfaces. Paraxial and exact ray tracing methods give similar estimates for reflection zone sizes less than 2 mm, but for larger zone sizes ray tracing methods should be used.
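To make the paraxial-versus-exact comparison concrete, the sketch below is an illustrative reconstruction, not the paper's actual equations: it traces the reflection off a single spherical mirror surface (the simplest keratometry-style setup, with target and camera assumed coaxial and coplanar) and compares the ray-traced zone semi-diameter against a simple paraxial estimate, y ≈ hR/(2(d + R)). The corneal radius, target size and distance are assumed values.

```python
import numpy as np

def paraxial_zone_semidiameter(h, d, R):
    """Paraxial estimate of the reflection zone semi-diameter for a ring
    target of radius h at distance d from a mirror of radius R
    (target and camera assumed coaxial and coplanar)."""
    return h * R / (2.0 * (d + R))

def exact_zone_semidiameter(h, d, R, tol=1e-9):
    """Exact ray trace: find the height y on a spherical mirror (vertex at
    the origin, centre of curvature at z = -R) where a ray from the target
    point T = (h, d) reflects to the camera at O = (0, d). Uses the fact
    that the unit incident and reflected vectors sum to a vector parallel
    to the surface normal; solved by bisection."""
    T = np.array([h, d])
    O = np.array([0.0, d])

    def mismatch(y):
        z = -R + np.sqrt(R * R - y * y)       # sag of the spherical surface
        P = np.array([y, z])
        n = np.array([y, z + R]) / R          # outward unit normal
        a = (T - P) / np.linalg.norm(T - P)   # unit vector towards target
        b = (O - P) / np.linalg.norm(O - P)   # unit vector towards camera
        s = a + b                             # must be parallel to n
        return s[0] * n[1] - s[1] * n[0]      # 2-D cross product

    lo, hi = 0.0, 0.9 * R                     # bracket the reflection point
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mismatch(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed values: mirror radius 7.8 mm, ring target radius 10 mm at 100 mm
h, d, R = 10.0, 100.0, 7.8
y_par = paraxial_zone_semidiameter(h, d, R)
y_exact = exact_zone_semidiameter(h, d, R)
```

For this small zone (well under 2 mm) the two methods agree to well within 1%, consistent with the paper's conclusion that paraxial estimates suffice for small reflection zones.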

Relevance: 10.00%

Abstract:

Background: Incidence of and mortality from skin cancers, including melanoma, are highest among men 50 years or older. Thorough skin self-examination may be beneficial in improving skin cancer outcomes.

Objectives: To develop and conduct a randomized controlled trial of a video-based intervention to improve skin self-examination behavior among men 50 years or older.

Methods: Pilot work ascertained appropriate targeting of the 12-minute intervention video towards men 50 years or older. Overall, 968 men were recruited and 929 completed the baseline telephone assessment. Baseline analysis assessed randomization balance and the demographic, skin cancer risk and attitudinal factors associated with conducting a whole-body skin self-examination or receiving a whole-body clinical skin examination by a doctor during the past 12 months.

Results: Randomization resulted in well-balanced intervention and control groups. Overall, 13% of men reported conducting a thorough skin self-examination using a mirror or the help of another person to check difficult-to-see areas, while 39% reported having received a whole-body skin examination by a doctor within the past 12 months. Confidence in finding time for a skin self-examination and receiving advice or instructions from a doctor to perform one were among the factors associated with thorough skin self-examination at baseline.

Conclusions: Men 50 years or older can successfully be recruited to a video-based intervention trial with the aim of reducing their burden of skin cancer. Randomization by a computer-generated randomization list resulted in good balance between the control and intervention groups, and the baseline analysis determined factors associated with skin cancer early-detection behavior.

Relevance: 10.00%

Abstract:

The overrepresentation of novice drivers in crashes is alarming. Research indicates that one in five drivers crashes within their first year of driving. Driver training is one of the interventions aimed at decreasing the number of crashes involving young drivers. Currently, there is a need to develop a comprehensive driver evaluation system that benefits from the advances in Driver Assistance Systems. Since driving depends on fuzzy inputs from the driver (e.g. approximate estimation of distance from other vehicles, approximate assumptions about other vehicles' speeds), the evaluation system must be based on criteria and rules that handle the uncertain and fuzzy characteristics of driving. This paper presents a system that evaluates the data stream acquired from multiple in-vehicle sensors (the Driver Vehicle Environment, or DVE) using fuzzy rules, and classifies driving manoeuvres (i.e. overtaking, lane changes and turns) as low risk or high risk. The fuzzy rules use parameters such as following distance, frequency of mirror checks, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvre to assess risk. The fuzzy rules for estimating risk were designed after analysing the selected driving manoeuvres as performed by driver trainers. This paper focuses mainly on the difference in gaze pattern between experienced and novice drivers during the selected manoeuvres. Using this system, trainers of novice drivers can empirically evaluate novice drivers and give them feedback on their driving behaviour.
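A minimal sketch of the kind of fuzzy rule described above, using two of the named inputs (following distance and mirror-check frequency). The membership functions, thresholds, and the min-based AND are assumptions chosen for illustration, not the rule base used in the study.

```python
def ramp_down(x, a, b):
    """Membership 1 below a, falling linearly to 0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def ramp_up(x, a, b):
    """Complementary membership: 0 below a, rising linearly to 1 at b."""
    return 1.0 - ramp_down(x, a, b)

def manoeuvre_risk(following_distance_m, mirror_checks_per_min):
    """Toy fuzzy classifier (illustrative thresholds):
    IF distance is short AND mirror checks are infrequent THEN high risk;
    IF distance is long AND mirror checks are frequent THEN low risk."""
    short_dist = ramp_down(following_distance_m, 10.0, 30.0)
    few_checks = ramp_down(mirror_checks_per_min, 2.0, 6.0)
    long_dist = ramp_up(following_distance_m, 10.0, 30.0)
    frequent_checks = ramp_up(mirror_checks_per_min, 2.0, 6.0)
    high = min(short_dist, few_checks)      # fuzzy AND = min
    low = min(long_dist, frequent_checks)
    return "high risk" if high > low else "low risk"
```

A real system would defuzzify over many such rules and manoeuvre types; the point here is only how graded memberships absorb the driver's approximate, uncertain inputs.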

Relevance: 10.00%

Abstract:

The notion of pedagogy for anyone in the teaching profession is innocuous. The term itself is steeped in history, but the details of the practice can be elusive. What does it mean for an academic to be embracing pedagogy? The problem is not limited to academics; most teachers baulk at the introduction of a pedagogic agenda and resist attempts to have them reflect on their classroom teaching practice, wherever that classroom might be constituted. This paper explores the application of a pedagogic model (Education Queensland, 2001) which was developed in the context of primary and secondary teaching and was part of a schooling agenda to improve pedagogy. As a teacher educator I introduced the model to classroom teachers (Hill, 2002) using an Appreciative Inquiry (Cooperrider and Srivastva, 1987) model, and at the same time applied the model to my own pedagogy as an academic. Despite its being instigated as a model for classroom teachers, I found through my own practitioner investigation that the model was useful for exploring my own pedagogy as a university academic (Hill, 2007, 2008).

References:
Cooperrider, D.L. and Srivastva, S. (1987) Appreciative inquiry in organisational life, in Passmore, W. and Woodman, R. (Eds) Research in Organisational Changes and Development (Vol. 1). Greenwich, CT: JAI Press, pp. 129-69.
Education Queensland (2001) School Reform Longitudinal Study (QSRLS). Brisbane: Queensland Government.
Hill, G. (2002, December) Reflecting on professional practice with a cracked mirror: Productive Pedagogy experiences. Australian Association for Research in Education Conference, Brisbane, Australia.
Hill, G. (2007) Making the assessment criteria explicit through writing feedback: A pedagogical approach to developing academic writing. International Journal of Pedagogies and Learning, 3(1), 59-66.
Hill, G. (2008) Supervising practice based research. Studies in Learning, Evaluation, Innovation and Development, 5(4), 78-87.

Relevance: 10.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
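A standard, widely used way to fit a generalized Gaussian to wavelet coefficients is to match the moment ratio E|x| / sqrt(E[x^2]) to its closed form in the shape parameter; the sketch below shows that moment-matching route (an illustrative method, not necessarily the least-squares formulation used in this work), inverting the ratio by bisection since it is monotonically increasing in the shape parameter.

```python
import math
import random

def ggd_ratio(beta):
    """E|x| / sqrt(E[x^2]) for a zero-mean generalized Gaussian with
    shape parameter beta (beta = 1: Laplacian, beta = 2: Gaussian)."""
    g1 = math.gamma(1.0 / beta)
    g2 = math.gamma(2.0 / beta)
    g3 = math.gamma(3.0 / beta)
    return g2 / math.sqrt(g1 * g3)

def estimate_shape(samples, lo=0.2, hi=5.0, iters=60):
    """Estimate beta by matching the sample moment ratio to ggd_ratio;
    ggd_ratio is monotonically increasing in beta, so bisection works."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 / math.sqrt(m2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: Laplacian data, whose true shape parameter is beta = 1
rng = random.Random(0)
data = [rng.choice([-1, 1]) * rng.expovariate(1.0) for _ in range(20000)]
beta_hat = estimate_shape(data)
```

Wavelet detail subbands of natural and fingerprint images typically yield estimates well below 2, which is what motivates a sharply peaked source model for the quantizer design.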
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
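At the core of any LVQ scheme is a fast nearest-lattice-point search. As a concrete, minimal example (the well-known Conway-Sloane decoder for the D_n lattice, not the specific lattices or parameters chosen in this work), decoding to D_n, the set of integer vectors with even coordinate sum, needs only rounding plus one parity correction:

```python
import numpy as np

def nearest_dn_point(x):
    """Nearest point in the D_n lattice (integer vectors with even
    coordinate sum), via the Conway-Sloane rule: round every coordinate;
    if the resulting parity is odd, re-round the coordinate that was
    furthest from an integer in the other direction."""
    x = np.asarray(x, dtype=float)
    f = np.rint(x)                        # naive coordinate-wise rounding
    if int(f.sum()) % 2 == 0:
        return f
    err = x - f
    k = int(np.argmax(np.abs(err)))       # worst-rounded coordinate
    g = f.copy()
    g[k] += 1.0 if err[k] > 0 else -1.0   # round it the other way
    return g

q = nearest_dn_point([0.6, 0.4, 0.1, 0.2])
```

This O(n) decoding cost, versus an exhaustive codebook search, is the kind of fast implementation the text refers to; scaling the input before decoding and rescaling after plays the role of the quantizer's scaling factor.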
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.

Relevance: 10.00%

Abstract:

The legal power to declare war has traditionally been part of a prerogative to be exercised solely on advice that passed from the King to the Governor-General no later than 1942. In 2003, the Governor-General was not involved in the decision by the Prime Minister and Cabinet to commit Australian troops to the invasion of Iraq. The authors explore the alternative legal means by which Australia can go to war - means the government in fact used in 2003 - and the constitutional basis of those means. While the prerogative power can be regulated and/or devolved by legislation, and just possibly by practice, there does not seem to be a sound legal basis to assert that the power has been devolved to any other person. It appears that in 2003 the Defence Minister used his legal powers under the Defence Act 1903 (Cth) (as amended in 1975) to give instructions to the service head(s). A powerful argument could be made that the relevant sections of the Defence Act were not intended to be used for the decision to go to war, and that such instructions are for peacetime or in bello decisions. If so, the power to make war remains within the prerogative, to be exercised on advice. Interviews with the then Governor-General indicate that Prime Minister Howard had planned to take the matter to the Federal Executive Council 'for noting', but did not do so after the Governor-General sought the views of the then Attorney-General about relevant issues of international law. The exchange raises many issues, but those of interest concern the kinds of questions the Governor-General could and should ask about proposed international action, and whether they in any way mirror the assurances that are uncontroversially required for domestic action.

In 2003, the Governor-General's scrutiny was the only independent scrutiny available, because the legality of the decision to go to war was not a matter that could be determined in the High Court, and the federal government had taken action in March 2002 that effectively prevented the matter from coming before the International Court of Justice.

Relevance: 10.00%

Abstract:

Driver simulators allow driver behaviour to be assessed under safe, controlled and repeatable conditions, which makes them a promising research tool. They range widely, from laptops to advanced systems controlled by several computers, with a real car mounted on a platform with six degrees of freedom of movement. The applicability of simulator-based research to a particular study needs to be considered before the study starts, to determine whether the use of a simulator is actually appropriate for the research. Given the wide range of driver simulators and their uses, it is important to know beforehand how closely the results from a driver simulator match results found in the real world. Comparison between drivers' performance under real road conditions and in a particular simulator is a fundamental part of validation; the important question is whether the results obtained in a simulator mirror real-world results. In this paper, the results of the most recent research into the validity of simulators are presented.

Relevance: 10.00%

Abstract:

Purpose: To compare subjective blur limits for cylinder and defocus.

Method: Blur was induced with a deformable, adaptive-optics mirror, either when the subjects' own astigmatisms were corrected or when both astigmatisms and higher-order aberrations were corrected. Subjects were cyclopleged and had 5 mm artificial pupils. Black letter targets (0.1, 0.35 and 0.6 logMAR) were presented on white backgrounds.

Results: For ten subjects, blur limits were approximately 50% greater for cylinder than for defocus (in diopters). While there were considerable effects of axis for individuals, overall the effect was not strong, with the 0° (or 180°) axis having about 20% greater limits than oblique axes. In a second experiment with text (equivalent in angle to N10 print at a 40 cm distance), cylinder blur limits for 6 subjects were approximately 30% greater than those for defocus; this percentage was slightly smaller than for the three letters. Blur limits for the text were intermediate between those of the 0.35 logMAR and 0.6 logMAR letters. Extensive blur-limit measurements for one subject with single letters did not show the expected interactions between target detail orientation and cylinder axis.

Conclusion: Subjective blur limits for cylinder are 30%-50% greater than those for defocus, with the overall influence of cylinder axis being about 20%.

Relevance: 10.00%

Abstract:

While critical success factors (CSFs) of enterprise system (ES) implementation are mature concepts and have received considerable attention for over a decade, researchers have very often focused on only a specific aspect of the implementation process or a specific CSF. Consequently, there is (1) little documented research that encompasses all significant CSF considerations, and (2) little empirical research into the important factors of successful ES implementation. This paper is part of a larger research effort that aims to contribute to understanding the phenomenon of ES CSFs, and reports on preliminary findings from a case study conducted at Queensland University of Technology (QUT) in Australia. The paper presents an empirically derived CSF framework, built using a directed content analysis of 79 studies from top IS outlets, employing the characteristics of analytic theory, and drawing on six different projects implemented at QUT.

Relevance: 10.00%

Abstract:

While the Information Services Function's (ISF) service quality is not a new concept and has received considerable attention for over two decades, cross-cultural research on ISF service quality is not very mature. The author argues that the relationship between cultural dimensions and the ISF's service quality dimensions may provide useful insights into how organisations should deal with different cultural groups. This paper shows that ISF service quality dimensions vary from one culture to another. The study adopts Hofstede's (1980, 1991) typology of cultures and the "zones of tolerance" (ZOT) service quality measure reported by Kettinger & Lee (2005) as the primary theory base. The author hypothesised and tested the influences of culture on users' service quality perceptions and found strong empirical support for the study's hypotheses. The results indicate that, as a result of their cultural characteristics, users vary both in their overall service quality perceptions and in their perceptions of each of the four dimensions of ZOT service quality.