942 results for Regression To The Mean


Relevance: 100.00%

Abstract:

Purpose: The purpose of this paper is to explore the role of cross-functional teams in the alignment between system effectiveness and operational effectiveness after the implementation of enterprise information systems (EIS). In addition, it aims to explore the contribution of cross-functional teams to improvement in operational performance.

Design/methodology/approach: The research uses a combination of qualitative and quantitative methods, in a two-stage methodological approach, to investigate the influence of cross-functional teams on the alignment between system effectiveness and operational effectiveness, and the impact of that alignment on the improvement in operational performance.

Findings: Initial findings suggest that the factors stemming from system effectiveness and the performance objectives stemming from operational effectiveness are important and significantly correlated factors that promote alignment between the effectiveness of technological implementation and the effectiveness of operations. In addition, confirmatory factor analysis has been used to find the structural relationships and provide explanations for this alignment and for the contribution of cross-functional teams to the improvement in operational performance.

Research limitations/implications: The principal limitation of this study is its small sample size.

Practical implications: Cross-functional teams have been used by many organisations as a way of involving expertise from different functional areas in the implementation of innovative technologies. Appropriate use of the dimensions that emerged from this research, in the context of cross-functional teams, will assist organisations to utilise cross-functional teams properly with the aim of improving operational performance.

Originality/value: The paper presents a new approach to measuring the effectiveness of EIS implementation by adding new dimensions to the measurement.

Relevance: 100.00%

Abstract:

In 2009, Religious Education is a designated key learning area in Catholic schools in the Archdiocese of Brisbane and, indeed, across Australia. Over the years, though, different conceptualisations of the nature and purpose of religious education have led to the construction of different approaches to the classroom teaching of religion. By investigating the development of religious education policy in the Archdiocese of Brisbane from 1984 to 2003, the study seeks to trace the emergence of new discourses on religious education. The study understands religious education to refer to a lifelong process that occurs through a variety of forms (Moran, 1989). In Catholic schools, it refers both to co-curricular activities, such as retreats and school liturgies, and to the classroom teaching of religion. It is the policy framework for the classroom teaching of religion that this study explores.

The research was undertaken using a policy case study approach to gain a detailed understanding of how new conceptualisations of religious education emerged at a particular site of policy production, in this case the Archdiocese of Brisbane. The study draws upon Yeatman's (1998) description of policy as occurring "when social actors think about what they are doing and why in relation to different and alternative possible futures" (p. 19) and views policy as consisting of more than texts themselves. Policy texts result from struggles over meaning (Taylor, 2004) in which specific discourses are mobilised to support particular views. The study has a particular interest in the analysis of Brisbane religious education policy texts, the discursive practices that surrounded them, and the contexts in which they arose. Policy texts are conceptualised in the study as representing "temporary settlements" (Gale, 1999). Such settlements are asymmetrical, temporary and dependent on context: asymmetrical in that dominant actors are favoured; temporary because dominant actors are always under challenge by other actors in the policy arena; and context-dependent because new situations require new settlements.

To investigate the official policy documents, the study used Critical Discourse Analysis (hereafter referred to as CDA) as a research tool that affords researchers the opportunity to map and chart the emergence of new discourses within the policy arena. As developed by Fairclough (2001), CDA is a three-dimensional application of critical analysis to language. In the Brisbane religious education arena, policy texts formed a genre chain (Fairclough, 2004; Taylor, 2004), which was a focus of the study. There are two features of texts that form genre chains: the texts are systematically linked to one another, and systematic relations of recontextualisation exist between them. Fairclough's (2005) concepts of "imaginary space" and "frameworks for action" (p. 65) within the policy arena were applied to the Brisbane policy arena to investigate the relationship between policy statements and subsequent guidelines documents.

Five key findings emerged from the study. First, application of CDA to policy documents revealed that a fundamental reconceptualisation of the nature and purpose of classroom religious education in Catholic schools occurred in the Brisbane policy arena over the last twenty-five years. Second, a disjuncture existed between catechetical discourses that continued to shape religious education policy statements, and educational discourses that increasingly shaped guidelines documents. Third, recontextualisation between policy documents was evident and dependent on the particular context in which religious education occurred. Fourth, at subsequent links in the chain, actors created their own "imaginary space", thereby altering orders of discourse within the policy arena, with different actors being either foregrounded or marginalised. Fifth, intertextuality was more evident in the later links in the genre chain (i.e. the 1994 policy statement and the 1997 guidelines document) than in earlier documents.

On the basis of the findings of the study, six recommendations are made. First, the institutional Church should carefully consider the contribution that the Catholic school can make to the overall pastoral mission of the diocese in twenty-first century Australia. Second, policymakers should articulate a nuanced understanding of the relationship between catechesis and education with regard to the religion classroom. Third, there should be greater awareness of the connections among policies relating to Catholic schools, especially the connection between enrolment policy and religious education policy. Fourth, there should be greater consistency between policy documents. Fifth, policy documents should be helpful for those to whom they are directed (i.e. Catholic schools and teachers). Sixth, "imaginary space" (Fairclough, 2005) in policy documents needs to be constructed in a way that allows for multiple "frameworks for action" (Fairclough, 2005) through recontextualisation.

The findings of this study are significant in a number of ways. For religious educators, the study highlights the need to develop a shared understanding of the nature and purpose of classroom religious education. It argues that this understanding must take into account the multifaith nature of Australian society and the changing social composition of Catholic schools themselves. Greater recognition should be given to the contribution that religious studies courses such as Study of Religion make to the overall religious development of a person. In view of the social composition of Catholic schools, there is also an issue of ecclesiological significance concerning the conceptualisation of the relationship between the institutional Catholic Church and Catholic schools. Finally, the study is of significance because of its application of CDA to religious education policy documents. Use of CDA reveals the foregrounding, marginalising, or excluding of various actors in the policy arena.

Relevance: 100.00%

Abstract:

The obesity epidemic is a global trend and is of particular concern in children. Recent reports have highlighted the severity of obesity in children by suggesting: "today's generation of children will be the first for over a century for whom life expectancy falls." This review assesses the evidence identifying the important role of physical activity in the growth, development and physical health of young people, owing to its numerous physical and psychological health benefits. Key issues, such as "does a sedentary lifestyle automatically lead to obesity?" and "are levels of physical activity in today's children lower than those of children from previous generations?", are also discussed. Today's environment enforces an inactive lifestyle that is likely to contribute to a positive energy balance and childhood obesity. For children and adolescents alike, the evidence is conclusive that physical activity is conducive to a healthy lifestyle and the prevention of disease. Habitual physical activity established during the early years may provide the greatest likelihood of impact on mortality and longevity. It is evident that environmental factors need to change if physical activity strategies are to have a significant impact on increasing habitual physical activity levels in children and adolescents. There is also a need for more evidence-based physical activity guidelines for children of all ages. Efforts should be concentrated on facilitating an active lifestyle for children in an attempt to halt the increasing prevalence of childhood obesity.

Relevance: 100.00%

Abstract:

Technology and Nursing Practice explains and critically engages with the practice implications of technology for nursing. It takes a broad view of technology, covering not only health informatics, but also 'tele-nursing' and the use of equipment in clinical practice.

Relevance: 100.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the most suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion, as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands.

To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and the scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
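As a concrete illustration of the generalized Gaussian modelling step described above, the shape and scale parameters can be estimated directly from a set of wavelet coefficients. The sketch below uses moment matching via the generalized Gaussian ratio function, a standard technique, rather than the exact least-squares formulation of the thesis; the function names (`ggd_ratio`, `fit_ggd`) are illustrative, not from the original work.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_ratio(beta):
    # Generalized Gaussian ratio function: E[x^2] / (E|x|)^2 as a function
    # of the shape parameter beta (beta = 2 is Gaussian, beta = 1 Laplacian).
    return gamma(1.0 / beta) * gamma(3.0 / beta) / gamma(2.0 / beta) ** 2

def fit_ggd(coeffs):
    """Fit the shape (beta) and scale (alpha) of a zero-mean generalized
    Gaussian to wavelet coefficients by matching the first two absolute
    moments of the sample."""
    x = np.asarray(coeffs, dtype=float)
    m1 = np.mean(np.abs(x))
    m2 = np.mean(x ** 2)
    # Solve ggd_ratio(beta) = m2 / m1^2; the ratio decreases
    # monotonically in beta, so a bracketing root-finder suffices.
    beta = brentq(lambda b: ggd_ratio(b) - m2 / m1 ** 2, 0.1, 10.0)
    # Scale follows from E|x| = alpha * Gamma(2/beta) / Gamma(1/beta).
    alpha = m1 * gamma(1.0 / beta) / gamma(2.0 / beta)
    return beta, alpha
```

On Laplacian-distributed samples (a GGD with beta = 1) the fitted shape parameter converges to 1 as the sample grows, which is a quick sanity check of the estimator.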
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78].

For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
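The fast nearest-neighbour search that makes LVQ attractive can be illustrated for a non-cubic lattice, D_n (integer vectors whose coordinates sum to an even number), using the classic Conway–Sloane rounding rule. This is a generic sketch of lattice quantization with a scaling factor, not the thesis's piecewise-uniform pyramid algorithm; the function names are illustrative.

```python
import numpy as np

def nearest_Dn(x):
    """Nearest point of the D_n lattice (integer vectors with an even
    coordinate sum), via the Conway-Sloane rounding rule."""
    f = np.rint(x)
    if int(np.sum(f)) % 2 == 0:
        return f
    # Parity is odd: re-round the coordinate with the largest rounding
    # error the other way, which flips the parity of the sum while
    # adding the least possible extra distortion.
    k = int(np.argmax(np.abs(x - f)))
    f[k] += 1.0 if x[k] > f[k] else -1.0
    return f

def lvq_encode(vectors, scale):
    """Scale each input vector and quantize it to D_n; decoding is just
    multiplying the returned lattice points back by `scale`."""
    return np.array([nearest_Dn(v / scale) for v in vectors])
```

The same two-step structure (plain rounding plus a cheap parity repair) is what gives lattice quantizers their speed advantage over trained codebooks: no codebook search is needed at all.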
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification.

Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
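The "local ridge dominant directions" that drive ridge extraction are commonly estimated with the gradient-based, doubled-angle orientation estimator used throughout the fingerprint literature. The sketch below shows that standard technique under the assumption of a single dominant orientation per block; it is not necessarily the exact algorithm of the thesis.

```python
import numpy as np

def block_orientation(block):
    """Estimate the dominant ridge direction (in radians) of an image
    block using the standard gradient-based orientation estimator."""
    gy, gx = np.gradient(block.astype(float))
    # Averaging in the doubled-angle domain avoids the 0/180-degree
    # wrap-around of raw gradient angles.
    num = 2.0 * np.sum(gx * gy)
    den = np.sum(gx ** 2 - gy ** 2)
    theta = 0.5 * np.arctan2(num, den)
    # Ridges run perpendicular to the mean gradient direction.
    return theta + np.pi / 2.0
```

On a synthetic block of vertical stripes the estimator returns 90 degrees, and on its transpose (horizontal stripes) 0 degrees, matching the expected ridge directions.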