951 results for FIELD MEASUREMENT
Abstract:
The generic IS-success constructs first identified by DeLone and McLean (1992) continue to be widely employed in research. Yet recent work by Petter et al. (2007) has cast doubt on the validity of many mainstream constructs employed in IS research over the past three decades, critiquing the almost universal conceptualization and validation of these constructs as reflective when, in many studies, the measures appear to have been implicitly operationalized as formative. Cited examples of proper specification of the DeLone and McLean constructs are few, particularly in light of their extensive employment in IS research. This paper introduces a four-stage formative construct development framework: Conceive > Operationalize > Respond > Validate (CORV). Employing the CORV framework in an archival analysis of research published in top outlets from 1985 to 2007, the paper explores the extent of possible problems with past IS research due to potential misspecification of the four application-related success dimensions: Individual-Impact, Organizational-Impact, System-Quality and Information-Quality. Results suggest major concerns where there is a mismatch between the Respond and Validate stages. A general dearth of attention to the Operationalize and Respond stages in methodological writings is also observed.
Abstract:
Mentoring has been the focus of both research and writing across a range of professional fields including, for example, education, business, medicine, nursing and law for decades. Even so, researchers have argued that much confusion continues to surround its meaning and understanding. Part of this confusion lies in the fact that it has been described in many ways. Some writing in the field focuses on it as a workplace activity for men and women, a developmental process for novices and leaders alike, a career tool for enhancing promotion, an affirmative action strategy for members of minority groups, and a human resource development strategy used in organisations (Ehrich and Hansford, 1999).
Abstract:
The measurement of broadband ultrasonic attenuation (BUA) in cancellous bone at the calcaneus was first described in 1984. The assessment of osteoporosis by BUA has recently been recognized by Universities UK, within its EurekaUK book, as being one of the “100 discoveries and developments in UK Universities that have changed the world” over the past 50 years, covering the whole academic spectrum from the arts and humanities to science and technology. Indeed, the BUA technique has been clinically validated and is utilized worldwide, with at least seven commercial systems providing calcaneal BUA measurement. However, a fundamental understanding of the dependence of BUA upon the material and structural properties of cancellous bone is still lacking. This review aims to provide a science- and technology-orientated perspective on the application of BUA to osteoporosis.
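In general terms, the BUA index discussed above is conventionally reported as the slope, in dB/MHz, of a linear fit to the frequency-dependent attenuation across roughly the 0.2-0.6 MHz band. The sketch below illustrates that convention on synthetic data; the numbers are invented, not taken from the review.

```python
import numpy as np

def broadband_ultrasonic_attenuation(freq_mhz, attenuation_db):
    """Return the BUA index (dB/MHz): the slope of a linear fit to the
    attenuation spectrum over the analysed frequency band."""
    slope, _intercept = np.polyfit(freq_mhz, attenuation_db, 1)
    return slope

# Synthetic spectrum: attenuation rising linearly at 60 dB/MHz over 0.2-0.6 MHz.
f = np.linspace(0.2, 0.6, 9)
a = 5.0 + 60.0 * f
print(round(broadband_ultrasonic_attenuation(f, a), 1))  # 60.0
```

Real spectra are noisy and only approximately linear over the band, which is one reason the dependence of BUA on bone material and structure remains an open question.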
Abstract:
Evidence-based Practice (EBP) has recently emerged as a topic of discussion amongst professionals within the library and information services (LIS) industry. Simply stated, EBP is the process of using formal research skills and methods to assist in decision making and establishing best practice. The emerging interest in EBP within the library context serves to remind the library profession that research skills and methods can help ensure that the library industry remains current and relevant in changing times. The LIS sector faces ongoing challenges in terms of the expectation that financial and human resources will be managed efficiently, particularly if library budgets are reduced and accountability to the principal stakeholders is increased. Library managers are charged with the responsibility to deliver relevant and cost effective services, in an environment characterised by rapidly changing models of information provision, information access and user behaviours. Consequently they are called upon not only to justify the services they provide, or plan to introduce, but also to measure the effectiveness of these services and to evaluate the impact on the communities they serve. The imperative for innovation in and enhancements to library practice is accompanied by the need for a strong understanding of the processes of review, measurement, assessment and evaluation. In 2001 the Centre for Information Research was commissioned by the Chartered Institute of Library and Information Professionals (CILIP) in the UK to conduct an examination into the research landscape for library and information science. The examination concluded that research is “important for the LIS [library and information science] domain in a number of ways” (McNicol & Nankivell, 2001, p.77). 
At the professional level, research can inform practice, assist in the future planning of the profession, raise the profile of the discipline, and indeed the reputation and standing of the library and information service itself. At the personal level, research can “broaden horizons and offer individuals development opportunities” (McNicol & Nankivell, 2001, p.77). The study recommended that “research should be promoted as a valuable professional activity for practitioners to engage in” (McNicol & Nankivell, 2001, p.82). This chapter will consider the role of EBP within the library profession. A brief review of key literature in the area is provided. The review considers issues of definition and terminology, highlights the importance of research in professional practice and outlines the research approaches that underpin EBP. The chapter concludes with a consideration of the specific application of EBP within the dynamic and evolving field of information literacy (IL).
Abstract:
The advantages of using a balanced approach to the measurement of overall organisational performance are well known. We examined the effects of a balanced approach in the more specific domain of measuring innovation effectiveness in 144 small-to-medium-sized enterprises (SMEs) in Australia and Thailand. We found no differences in the metrics used by Australian and Thai companies. In line with our hypotheses, we found that SMEs that took a balanced approach were more likely to perceive benefits from implemented innovations than those that used only a financial approach to measurement. The perception of benefits then had a subsequent effect on overall attitudes towards innovation. The study shows the importance of measuring both financial and non-financial indicators of innovation effectiveness within SMEs and discusses ways in which this can be done with limited resources.
Abstract:
Practitioners and academics often assume that investments in innovation will lead to organizational improvements. However, previous research has often shown that implemented innovations fail to realise these potential improvements. Alternatively, an organization may in fact have grown and become more productive because of an innovation, while traditional measurements have failed to capture that growth. To help organizations capture their innovation performance effectively, this study examined organizations that employ different types of performance measurement and their perceptions of innovation effectiveness.
Abstract:
In recent years, practitioners and researchers alike have turned their attention to knowledge management (KM) in order to increase organisational performance (OP). As a result, many different approaches and strategies have been investigated and suggested for how knowledge should be managed to make organisations more effective and efficient. However, most research has been undertaken in the for-profit sector, with only a few studies focusing on the benefits nonprofit organisations might gain by managing knowledge. This study broadly investigates the impact of knowledge management on the organisational performance of nonprofit organisations. Organisational performance can be evaluated through either financial or non-financial measurements. In order to evaluate knowledge management and organisational performance, non-financial measurements are argued to be more suitable, given that knowledge is an intangible asset which often cannot be expressed through financial indicators. Non-financial measurement concepts of performance such as the balanced scorecard or the concept of Intellectual Capital (IC) are well accepted and used within the for-profit and nonprofit sectors to evaluate organisational performance. This study utilised the concept of IC as the method to evaluate KM and OP in the context of nonprofit organisations due to the close link between KM and IC. Indeed, KM is concerned with managing the KM processes of creating, storing, sharing and applying knowledge, and with the organisational KM infrastructure, such as organisational culture or organisational structure, that supports these processes. IC, on the other hand, measures the knowledge stocks at different ontological levels: the individual level (human capital), the group level (relational capital) and the organisational level (structural capital). In other words, IC measures the value of the knowledge which has been managed through KM.
As KM encompasses the different KM processes and the KM infrastructure facilitating these processes, previous research has investigated the relationship between KM infrastructure and KM processes. Organisational culture, organisational structure and the level of IT support have been identified as the main factors of the KM infrastructure influencing the KM processes of creating, storing, sharing and applying knowledge. Other research has focused on the link between KM and OP or organisational effectiveness. Based on existing literature, a theoretical model was developed to enable the investigation of the relationship between KM (encompassing KM infrastructure and KM processes) and IC. The model assumes an association between KM infrastructure and KM processes, as well as an association between KM processes and the various levels of IC (human capital, structural capital and relational capital). As a result, five research questions (RQ), concerning the various factors of the KM infrastructure and the relationship between KM infrastructure and IC, were raised and included in the research model:
RQ 1: Do nonprofit organisations which have a Hierarchy culture have stronger IT support than nonprofit organisations which have an Adhocracy culture?
RQ 2: Do nonprofit organisations which have a centralised organisational structure have stronger IT support than nonprofit organisations which have a decentralised organisational structure?
RQ 3: Do nonprofit organisations which have stronger IT support have a higher value of Human Capital than nonprofit organisations which have weaker IT support?
RQ 4: Do nonprofit organisations which have stronger IT support have a higher value of Structural Capital than nonprofit organisations which have weaker IT support?
RQ 5: Do nonprofit organisations which have stronger IT support have a higher value of Relational Capital than nonprofit organisations which have weaker IT support?
In order to investigate the research questions, measurements for IC were developed which were linked to the main KM processes. The final KM/IC model contained four items for evaluating human capital, five items for evaluating structural capital and four items for evaluating relational capital. The research questions were investigated through empirical research using a case study approach, with the focus on two nonprofit organisations providing trade promotion services through local offices worldwide. Data for the investigation were collected via qualitative as well as quantitative research methods. The qualitative study included interviews with representatives of the two participating organisations as well as in-depth document research. The purpose of the qualitative study was to investigate the factors of the KM infrastructure (organisational culture, organisational structure, IT support) of the organisations and how these factors were related to each other. The quantitative study was carried out through an online survey amongst staff of the various local offices. The purpose of the quantitative study was to investigate what impact the level of IT support, as the main instrument of the KM infrastructure, had on IC. Overall, several key themes emerged from the study:
• Knowledge Management and Intellectual Capital are complementary to each other, which should be expressed through measurements of IC based on KM processes.
• The various factors of the KM infrastructure (organisational culture, organisational structure and level of IT support) are interdependent.
• IT was a primary instrument through which the different KM processes (creating, storing, sharing and applying knowledge) were performed.
• A high level of IT support coincided with participants reporting higher levels of IC (human capital, structural capital and relational capital).
The study supported previous research in the field of KM and replicated the findings of other case studies in this area. The study also contributed to theory by placing KM research within the nonprofit context and analysing the linkage between KM and IC. From the managerial perspective, the findings gave clear indications that would allow interested parties, such as nonprofit managers or consultants, to understand more about the implications of KM for OP and to use this knowledge to implement efficient and effective KM strategies within their organisations.
Abstract:
Since the 1980s, industries and researchers have sought to better understand the quality of services due to the rise in their importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory bring the credibility of existing conceptions into question, and raise the question of whether it is possible, at some higher level, to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful re-visitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model, SERVQUAL, which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, failed to address what needs to be reliable, assured, tangible, empathetic and responsive.
This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives SQ as being multidimensional and multi-level; this hierarchical approach to SQ measurement better reflects human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content/nature of factors related to SQ and addresses the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision. Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level which is encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension as ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions but acknowledges that more work has to be done to better define them. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively. The latter statement suggests that customers evaluate each primary dimension (or each higher level of SQ classification), namely ‘interaction’, ‘physical environment’ and ‘outcome’, based on the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and that the choice of approach depends on the objective(s) of the study.
Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overheads in the process. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it has the ability to identify areas that need improvement.
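As a rough illustration of the two approaches contrasted above, the hypothetical sketch below scores one set of made-up ratings both ways: the disconfirmation approach takes perceptions minus expectations per dimension (negative gaps flag shortfalls), while the perceptions-only approach averages the perception ratings. The dimension names follow SERVQUAL; every number is invented.

```python
# SERVQUAL's five service-quality dimensions.
DIMENSIONS = ["reliability", "assurance", "tangibles", "empathy", "responsiveness"]

def disconfirmation_gaps(perceptions, expectations):
    """Gap score per dimension: perception minus expectation (P - E)."""
    return {d: perceptions[d] - expectations[d] for d in DIMENSIONS}

def perceptions_only_score(perceptions):
    """Overall SQ as the mean of perception ratings across dimensions."""
    return sum(perceptions.values()) / len(perceptions)

# Invented 7-point-scale ratings for one service encounter.
p = {"reliability": 5.1, "assurance": 6.0, "tangibles": 4.8,
     "empathy": 5.5, "responsiveness": 4.2}
e = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
     "empathy": 5.0, "responsiveness": 6.0}

gaps = disconfirmation_gaps(p, e)
shortfalls = [d for d, g in gaps.items() if g < 0]  # areas needing improvement
print(sorted(shortfalls))                 # ['reliability', 'responsiveness', 'tangibles']
print(round(perceptions_only_score(p), 2))  # 5.12
```

The gap scores localise shortfalls at the cost of administering two rating sets; the perceptions-only score needs half the items but yields only an overall judgment, mirroring the trade-off described above.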
Abstract:
In the study of complex neurobiological movement systems, measurement indeterminacy has typically been overcome by imposing artificial modelling constraints to reduce the number of unknowns (e.g., reducing all muscle, bone and ligament forces crossing a joint to a single vector). However, this approach prevents human movement scientists from investigating more fully the role, functionality and ubiquity of coordinative structures or functional motor synergies. Advancements in measurement methods and analysis techniques are required if the contribution of individual component parts or degrees of freedom of these task-specific structural units is to be established, thereby effectively solving the indeterminacy problem by reducing the number of unknowns. A further benefit of establishing more of the unknowns is that human movement scientists will be able to gain greater insight into ubiquitous processes of physical self-organising that underpin the formation of coordinative structures and the confluence of organismic, environmental and task constraints that determine the exact morphology of these special-purpose devices.
Abstract:
Two-stroke outboard boat engines using total-loss lubrication deposit a significant proportion of their lubricant and fuel directly into the water. The purpose of this work is to document the velocity and concentration field characteristics of a submerged swirling water jet emanating from a propeller, in order to provide information on its fundamental characteristics. Measurements of the velocity and concentration fields were performed in a turbulent jet generated by a model boat propeller (0.02 m diameter) operating at 1500 rpm and 3000 rpm. The measurements were carried out in the Zone of Established Flow, up to 50 propeller diameters downstream of the propeller. Both the mean axial velocity profile and the mean concentration profile showed self-similarity. Further, the standard deviation growth curve was linear. The effects of propeller speed and dye release location were also investigated.
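Self-similarity of the mean axial velocity profile, as reported above, means that profiles measured at different downstream stations collapse onto a single curve once velocity is scaled by the local centreline value and radius by the local jet width. The sketch below demonstrates the idea on synthetic Gaussian profiles (a common model form for jet mean velocity); the numbers are illustrative, not the study's measurements.

```python
import numpy as np

def normalise_profile(r, u):
    """Scale velocity by the centreline value and radius by the velocity
    half-width (radius where u falls to half the centreline value)."""
    u_c = u.max()
    half_width = r[np.argmin(np.abs(u - 0.5 * u_c))]
    return r / half_width, u / u_c

# Two synthetic stations: further downstream the jet is wider and slower.
r = np.linspace(0.0, 3.0, 301)
u1 = 2.0 * np.exp(-np.log(2) * (r / 0.5) ** 2)   # half-width 0.5, centreline 2.0
u2 = 1.2 * np.exp(-np.log(2) * (r / 0.9) ** 2)   # half-width 0.9, centreline 1.2

eta1, f1 = normalise_profile(r, u1)
eta2, f2 = normalise_profile(r, u2)

# Compare the two normalised profiles on a common abscissa: they collapse.
f2_on_eta1 = np.interp(eta1, eta2, f2)
print(bool(np.max(np.abs(f1 - f2_on_eta1)) < 0.02))  # True
```

With real data the collapse is only approximate, and the linear growth of the standard deviation noted above is what makes a single similarity variable (radius over a width growing linearly with downstream distance) work.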
Abstract:
As with the broader field of education research, most writing on the subject of school excursions and field trips has centred on progressive/humanist concerns for building pupils’ self-esteem and for the development of the ‘whole child’. Such research has also stressed the importance of a broad, grounded, and experiential curriculum - as exemplified by subjects containing these extra-school activities - as well as the possibility of strengthening the relationship between student and teacher. Arguing that this approach to the field trip is both exhausted of ideas and conceptually flawed, this paper proposes some alternative routes into the area for the prospective researcher. First, it is argued that, by historicising the subject matter, it can be seen that school excursions are not simply the product of the contemporary humanist desire for diverse and fulfilling educational experiences; rather, they can in part be traced to eighteenth-century beliefs among the English gentry that travel formed a crucial component of a good education, to the advent of an affordable public rail system, and to school tours associated with the Temperance movement. Second, field trips can be understood from within the associated framework of concerns over the governance of tourism and the organisation of disciplinary apparatuses for the production of an educated and regulated citizenry. Far from being a simple learning experience, museums and art galleries form part of a complex of disciplinary and power relations designed to produce a populace with very specific capacities, aspirations and styles of public conduct. Finally, rather than allowing children ‘freedom’ from the constraints of the classroom, through the medium of the field trip children can become accustomed to having their activities governed in the broader domain of the generalised community.
School excursions thereby constitute an effective tactic through which young people have their conduct managed, and their social and scholastic identities shaped and administered.
Abstract:
We report on an inter-comparison of six different hygroscopicity tandem differential mobility analysers (HTDMAs). These HTDMAs are used worldwide, in laboratories and in field campaigns, to measure the water uptake of aerosol particles, and had never previously been intercompared. After an investigation of the different designs of the instruments, with their advantages and drawbacks, the methods for calibration, validation and analysis are presented. Measurements of nebulised ammonium sulphate, as well as of secondary organic aerosol generated in a smog chamber, were performed. Agreement and discrepancies between the instruments and with theory are discussed, and final recommendations for a standard instrument are given as a benchmark for laboratory or field experiments, to ensure a high quality of HTDMA data.
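The core quantity an HTDMA reports is the hygroscopic growth factor: the ratio of the humidified mobility diameter to the dry diameter at a set relative humidity. A minimal sketch of that definition follows; the example diameters are illustrative values only, not data from the inter-comparison.

```python
def growth_factor(d_wet_nm, d_dry_nm):
    """Hygroscopic growth factor GF = D_wet / D_dry (diameters in the same unit)."""
    if d_dry_nm <= 0:
        raise ValueError("dry diameter must be positive")
    return d_wet_nm / d_dry_nm

# Illustrative example: a 100 nm dry particle measured at 148 nm after humidification.
print(round(growth_factor(148.0, 100.0), 2))  # 1.48
```

Inter-comparing instruments then amounts to checking that, for the same aerosol and the same relative humidity, each HTDMA reports consistent growth factors, and that salts such as ammonium sulphate agree with theoretical predictions.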
Abstract:
Much of what we know about lymphoedema is derived from studies involving cancer cohorts, in particular breast cancer. Yet even within this setting, and despite the known profound physical, social and psychological effects, our understanding of associated risk factors and effectiveness of prevention and treatment strategies is poorly studied with inconsistent results. The limitations of our current methods to detect and monitor lymphoedema contribute to our lack of understanding of this condition. Current measurement approaches applied in the clinical and research setting will be described during this presentation. The strengths, limitations and practical considerations relevant to measurement methods will also be addressed. Improving the way we detect and monitor lymphoedema is necessary and critical for advancing the lymphoedema field and is relevant for the detection and monitoring of lymphoedema in the clinic as well as in research.
Abstract:
Suggestions that peripheral imagery may affect the development of refractive error have led to interest in the variation in refraction and aberration across the visual field. It is shown that, if the optical system of the eye is rotationally symmetric about an optical axis which does not coincide with the visual axis, measurements of refraction and aberration made along the horizontal and vertical meridians of the visual field will show asymmetry about the visual axis. The departures from symmetry are modelled for second-order aberrations, refractive components and third-order coma. These theoretical results are compared with practical measurements from the literature. The experimental data support the concept that departures from symmetry about the visual axis in the measurements of crossed-cylinder astigmatism J45 and J180 are largely explicable in terms of a decentred optical axis. Measurements of the mean sphere M suggest, however, that the retinal curvature must differ in the horizontal and vertical meridians.
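The refraction quantities named above, the mean sphere M and the crossed-cylinder components J180 and J45, come from the standard power-vector decomposition of a sphere/cylinder/axis prescription. A minimal sketch of that conversion (the example prescription is arbitrary):

```python
import math

def power_vector(sphere, cylinder, axis_deg):
    """Convert a sphere/cylinder/axis refraction to power-vector components
    (M, J180, J45) in dioptres."""
    ax = math.radians(axis_deg)
    m = sphere + cylinder / 2.0              # mean sphere (spherical equivalent)
    j180 = -(cylinder / 2.0) * math.cos(2.0 * ax)  # with-/against-the-rule astigmatism
    j45 = -(cylinder / 2.0) * math.sin(2.0 * ax)   # oblique astigmatism
    return m, j180, j45

# Example: -2.00 DS / -1.00 DC x 90
m, j180, j45 = power_vector(-2.0, -1.0, 90.0)
print(round(m, 2), round(j180, 2), round(abs(j45), 2))  # -2.5 -0.5 0.0
```

Plotting J45 and J180 as functions of visual-field angle along the horizontal and vertical meridians is what reveals the asymmetry about the visual axis that the abstract attributes to a decentred optical axis.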
Abstract:
This thesis proposes that contemporary printmaking, at its most significant, marks the present through reconstructing pasts and anticipating futures. It argues this through examples in the field, occurring in contexts beyond the Euramerican (Europe and North America). The arguments revolve around how the practice of a number of significant artists in Japan, Australia and Thailand has generated conceptual and formal innovations in printmaking that transcend local histories and conventions, whilst paradoxically, also building upon them and creating new meanings. The arguments do not portray the relations between contemporary and traditional art as necessarily antagonistic but rather, as productively dialectical. Furthermore, the case studies demonstrate that, in the 1980s and 1990s particularly, the studio practice of these printmakers was informed by other visual arts disciplines and reflected postmodern concerns. Departures from convention witnessed in these countries within the Asia-Pacific region shifted the field of the print into a heterogeneous and hybrid realm. The practitioners concerned (especially in Thailand) produced work that was more readily equated with performance and installation art than with printmaking per se. In Japan, the incursion of photography interrupted the decorative cast of printmaking and delivered it from a straightforward, craft-based aesthetic. In Australia, fixed notions of national identity were challenged by print practitioners through deliberate cultural rapprochements and technical contradictions (speaking across old and new languages). However, time-honoured print methods were not jettisoned by any of the case study artists. Their re-alignment of the fundamental attributes of printmaking, in line with materialist formalism, is a core consideration of my arguments.
The artists selected for in-depth analysis from these three countries are all innovators whose geographical circumstances and creative praxis drew on local traditions whilst absorbing international trends. In their radical revisionism, they acknowledged the specificity of history and place, conditions of contingency and forces of globalisation. The transformational nature of their work during the late twentieth century connects it to the postmodern ethos and to a broader artistic and cultural nexus than has hitherto been recognised in literature on the print. Emerging from former guild-based practices, they ambitiously conceived their work to be part of a continually evolving visual arts vocabulary. I argue in this thesis that artists from the Asia-Pacific region have historically broken with the hermetic and Euramerican focus that has generally characterised the field. Inadequate documentation and access to print activity outside the dominant centres of critical discourse imply that readings of postmodernism have been too limited in their scope of inquiry. Other locations offer complexities of artistic practice where re-alignments of customary boundaries are often the norm. By addressing innovative activity in Japan, Australia and Thailand, this thesis exposes the need for a more inclusive theoretical framework and wider global reach than currently exists for ‘printmaking’.