976 results for measuring instruments


Relevance: 20.00%

Abstract:

This workshop focuses on research about the qualities of community in music, and of music in community, facilitated by technologically supported relationships. Generative media systems present an opportunity for users to leverage computational systems to form new relationships through interactive and collaborative experiences. Generative music and art are relatively new phenomena that use procedural invention as a creative technique to produce music and visual media. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences for users with little formal musical or artistic expertise. This workshop examines the relational affordances of these systems, evidenced by selected field data drawn from the Network Jamming Project. These generative performance systems enable users with very little musical knowledge or skill to take part in unique ensembles, and offer the possibility of interactive relationships with artists and musical knowledge through collaborative performance. In this workshop we will focus on data that highlights how these simulated experiences might lead to understandings that may be of social benefit. Conference participants will be invited to jam in real time using virtual interfaces and to evaluate purposively selected video artifacts that demonstrate different kinds of interactive relationships with artists, peers and community, and that enrich the sense of expressive self. Theoretical insights about meaningful engagement, drawn from longitudinal and cross-cultural experiences, will underpin the discussion and practical presentation.

Relevance: 20.00%

Abstract:

Australia is leading the way in establishing a national system (the Palliative Care Outcomes Collaboration – PCOC) to measure the outcomes and quality of specialist palliative care services and to benchmark services across the country. This article reports on analysis of data collected routinely at point-of-care on 5,939 patients treated by the first fifty-one services that voluntarily joined PCOC. By March 2009, 111 services had agreed to join PCOC, representing more than 70% of services and more than 80% of specialist palliative care patients nationally. All states and territories are involved in this unique process, which has entailed extensive consultation, infrastructure development and close collaboration between health services and researchers. The challenges of dealing with wide variation in outcomes and practice, and the progress achieved to date, are described. PCOC aims to improve understanding of the reasons for variations in clinical outcomes between specialist palliative care patients, and of differences in service outcomes, as a critical step in an ongoing process to improve both service quality and patient outcomes. What is known about the topic? Governments internationally are grappling with how best to provide care for people with life-limiting illnesses and how best to measure the outcomes and quality of that care. There is little international evidence on how to measure the quality and outcomes of palliative care on a routine basis. What does this paper add? The Palliative Care Outcomes Collaboration (PCOC) is the first effort internationally to measure the outcomes and quality of specialist palliative care services and to benchmark services on a national basis through an independent third party. What are the implications for practitioners? If outcomes and quality are to be measured on a consistent national basis, standard clinical assessment tools that are used as part of everyday clinical practice are necessary.

Relevance: 20.00%

Abstract:

Purpose: To analyze the repeatability of measuring nerve fiber length (NFL) from images of the human corneal subbasal nerve plexus using semiautomated software. Methods: Images were captured from the corneas of 50 subjects with type 2 diabetes mellitus who showed varying severity of neuropathy, using the Heidelberg Retina Tomograph 3 with Rostock Corneal Module. Semiautomated nerve analysis software was used independently by two observers to determine NFL from images of the subbasal nerve plexus. This procedure was undertaken on two occasions, 3 days apart. Results: The intraclass correlation coefficient values were 0.95 (95% confidence interval: 0.92–0.97) for individual subjects and 0.95 (95% confidence interval: 0.74–1.00) between observers. Bland-Altman plots of the NFL values indicated a reduced spread of data at lower NFL values. The overall spread of data was less for (a) the observer who was more experienced at analyzing nerve fiber images and (b) the second measurement occasion. Conclusions: Semiautomated measurement of NFL in the subbasal nerve fiber layer is highly repeatable. Repeatability can be enhanced by using more experienced observers. It may be possible to markedly improve repeatability when measuring this anatomic structure using fully automated image analysis software.
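The two-observer repeatability statistic reported above can be reproduced from first principles. Below is a minimal sketch of a two-way, absolute-agreement ICC(2,1) in NumPy; the NFL readings are invented for illustration and are not data from the study:

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement ICC(2,1).
    x: (n_subjects, k_observers) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    ss_total = np.sum((x - grand) ** 2)
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)  # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)  # between observers
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical NFL values from two observers on five subjects:
# near-identical readings should yield an ICC close to 1.
nfl = np.array([[1.0, 1.1], [2.0, 2.0], [3.0, 3.1], [4.0, 3.9], [5.0, 5.1]])
```

Calling `icc_2_1(nfl)` on these nearly identical columns gives a value very close to 1, mirroring the high repeatability the study reports.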

Relevance: 20.00%

Abstract:

Since the 1970s, the internationalisation process of firms has attracted wide research interest. One of the dominant explanations of firm internationalisation resulting from this research activity is the Uppsala stages model. In this paper, a pre-internationalisation phase is incorporated into the traditional Uppsala model to address the question: what are the antecedents of this model? Four concepts are proposed as the key components that define the experiential learning process underlying a firm’s pre-export phase: export stimuli, attitudinal/psychological commitment, resources and lateral rigidity. Through a survey of 290 Australian exporting and non-exporting small- to medium-sized firms, data relating to the four pre-internationalisation concepts are collected and an Export Readiness Index (ERI) is constructed through factor analysis. Using logistic regression, the ERI is tested as a tool for analysing export readiness among Australian SMEs.
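The analysis pipeline described above (an index built from the four concept scores, then tested against export status via logistic regression) can be sketched on synthetic data. Everything here is an assumption for illustration: the loadings, the noise level, and the simulated sample bear no relation to the paper's actual factor solution or coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 290  # same sample size as the survey, but simulated data

# Hypothetical standardized scores on the four pre-internationalisation concepts
stimuli = rng.normal(size=n)
commitment = rng.normal(size=n)
resources = rng.normal(size=n)
rigidity = rng.normal(size=n)

# Illustrative Export Readiness Index: lateral rigidity loads negatively
eri = 0.8 * stimuli + 0.7 * commitment + 0.6 * resources - 0.5 * rigidity

# Simulated exporter status: readiness plus noise determines exporting
exporter = (eri + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Logistic regression of exporter status on the ERI, fit by gradient descent
X = np.column_stack([np.ones(n), eri])
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - exporter) / n

pred = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
acc = float(np.mean(pred == exporter))
```

On this synthetic sample the fitted ERI coefficient is positive and classification accuracy is well above chance, which is the shape of result the paper's test of the ERI is after.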

Relevance: 20.00%

Abstract:

Background: Assessments of change in subjective patient-reported outcomes such as health-related quality of life (HRQoL) are a key component of many clinical and research evaluations. However, conventional longitudinal evaluation of change may not agree with patient-perceived change if patients' understanding of the subjective construct under evaluation changes over time (response shift) or if patients have inaccurate recollection (recall bias). This study examined whether older adults' perception of change agrees with conventional longitudinal evaluation of change in their HRQoL over the duration of their hospital stay. It also investigated this level of agreement after adjusting patient-perceived change for recall bias that patients may have experienced. Methods: A prospective longitudinal cohort design nested within a larger randomised controlled trial was implemented. 103 hospitalised older adults participated in this investigation at a tertiary hospital facility. The EQ-5D utility and Visual Analogue Scale (VAS) scores were used to evaluate HRQoL. Participants completed EQ-5D reports as soon as they were medically stable (within three days of admission), then again immediately prior to discharge. Three methods of change-score calculation were used (conventional change, patient-perceived change, and patient-perceived change adjusted for recall bias). Agreement was primarily investigated using intraclass correlation coefficients (ICC) and limits of agreement. Results: Overall, 101 (98%) participants completed both admission and discharge assessments. The mean (SD) age was 73.3 (11.2) years. The median (IQR) length of stay was 38 (20–60) days. For agreement between conventional longitudinal change and patient-perceived change, ICCs were 0.34 and 0.40 for EQ-5D utility and VAS respectively. For agreement between conventional longitudinal change and patient-perceived change adjusted for recall bias, ICCs were 0.98 and 0.90 respectively.
The discrepancy between conventional longitudinal change and patient-perceived change was considered clinically meaningful for 84 (83.2%) participants; after adjusting for recall bias, this reduced to 8 (7.9%). Conclusions: Agreement between conventional change and patient-perceived change was not strong. A large proportion of this disagreement could be attributed to recall bias. To overcome the invalidating effects of response shift (on conventional change) and recall bias (on patient-perceived change), a method of adjusting patient-perceived change for recall bias has been described.
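The limits-of-agreement analysis used alongside the ICCs is a standard Bland-Altman construction and can be sketched in a few lines. The change scores below are invented, not trial data:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two methods of
    scoring the same change (e.g. conventional vs patient-perceived)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # sample SD of the differences
    return bias - half_width, bias + half_width

# Hypothetical VAS change scores for four patients
conventional = [10, 12, 14, 16]
perceived = [11, 13, 13, 17]
lo, hi = limits_of_agreement(conventional, perceived)
```

Narrow limits around a bias near zero indicate good agreement between the two change-score methods; the wide disagreement the study found before recall-bias adjustment would show up as wide limits.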

Relevance: 20.00%

Abstract:

OBJECTIVES: To compare three different methods of falls reporting and examine the characteristics of the data missing from the hospital incident reporting system. DESIGN: Fourteen-month prospective observational study nested within a randomized controlled trial. SETTING: Rehabilitation, stroke, medical, surgical, and orthopedic wards in Perth and Brisbane, Australia. PARTICIPANTS: Fallers (n = 153) who were part of a larger trial (1,206 participants, mean age 75.1 ± 11.0). MEASUREMENTS: Three falls-event reporting measures: participants’ self-report of fall events, fall events reported in participants’ case notes, and fall events reported through the hospital reporting systems. RESULTS: The three reporting systems identified 245 falls events in total. Participants’ case notes captured 226 (92.2%) falls events, hospital incident reporting systems captured 185 (75.5%) falls events, and participant self-report captured 147 (60.2%) falls events. Falls events were significantly less likely to be recorded in hospital reporting systems when a participant sustained a subsequent fall (P = .01) or when the fall occurred in the morning shift (P = .01) or afternoon shift (P = .01). CONCLUSION: Falls data missing from hospital incident report systems are not missing completely at random and will therefore introduce bias in some analyses if the factor investigated is related to whether the data is missing. Multimodal approaches to collecting falls data are preferable to relying on a single source alone.
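The multimodal triangulation described above amounts to set arithmetic over event identifiers: the union of all sources approximates the true set of falls, and per-source capture rates follow from it. A toy sketch with invented event IDs (not the study's counts):

```python
# Invented fall-event IDs captured by each reporting source
self_report = {1, 2, 3, 5}
case_notes = {1, 2, 3, 4, 5, 6}
incident_system = {1, 2, 4, 6}

# Union across all three sources approximates the true set of fall events
all_events = self_report | case_notes | incident_system

capture_rate = {
    "self-report": len(self_report) / len(all_events),
    "case notes": len(case_notes) / len(all_events),
    "incident system": len(incident_system) / len(all_events),
}

# Events missing from the incident system: the study's question is whether
# these differ systematically from captured events (i.e. not missing
# completely at random)
missed = all_events - incident_system
```

Comparing the characteristics of `missed` against captured events is the kind of check that revealed the systematic missingness (subsequent falls, morning/afternoon shifts) reported in the results.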

Relevance: 20.00%

Abstract:

In the scope of this study, ‘performance measurement’ includes the collection and presentation of relevant information that reflects progress in achieving organisational strategic aims and meeting the needs of stakeholders such as merchants, importers, exporters and other clients. Evidence shows that utilising information technology (IT) in customs matters supports import and export practices and helps supply chain management flow seamlessly. This paper briefly reviews some practical techniques for measuring performance. Its aim is to recommend a model for measuring the performance of information systems (IS): in this case, the Customs Information System (CIS) used by the Royal Malaysian Customs Department (RMCD). The study evaluates the effectiveness of CIS implementation measures in Malaysia from an IT perspective. A model based on IS theories is used to assess the impact of CIS. The findings of this study recommend measures for evaluating the performance of CIS and its organisational impacts in Malaysia. It is also hoped that the results of the study will assist other Customs administrations in evaluating the performance of their information systems.

Relevance: 20.00%

Abstract:

This paper attempts to develop a theoretical acceptance model for measuring Web personalization success. Key factors impacting Web personalization acceptance are identified from a detailed literature review. The final model is then cast in a structural equation modeling (SEM) framework comprising nineteen manifest variables, which are grouped into three focal behaviors of Web users. These variables could provide a framework for better understanding the numerous factors that contribute to the success measures of Web personalization technology, especially those concerning the quality of personalized features and how personalized information can be delivered to the user through a personalized Website. The interrelationships between success constructs are also explained. Empirical validation of this theoretical model is expected in future research.

Relevance: 20.00%

Abstract:

Information Systems researchers have employed a diversity of sometimes inconsistent measures of IS success, seldom explicating the rationale, thereby complicating the choice for future researchers. In response to these and other issues, Gable, Sedera and Chan introduced the IS-Impact measurement model. This model represents “the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. Although the IS-Impact model was rigorously validated in previous research, there is a need to further generalise and validate it in different contexts. This paper reports the findings of an IS-Impact model revalidation study at four state governments in Malaysia, with 232 users of a financial system that is currently used in eleven state governments in Malaysia. Data were analysed following the guidelines for formative measurement validation using SmartPLS. Based on the PLS results, the data supported the IS-Impact dimensions and measures, confirming the validity of the IS-Impact model in Malaysia. This indicates that the IS-Impact model is robust and can be used across different contexts.

Relevance: 20.00%

Abstract:

This article outlines the contribution the ARC Centre of Excellence for Creative Industries and Innovation has made to the project of improving statistical parameters for defining the “creative” workforce. This is one approach that addresses the imprecision of official statistics in grasping the emergent nature of the creative industries. The article discusses the policy implications of the differences between emphasizing industry and occupation or workforce. It provides qualitative case studies that offer further perspectives on quantitative analysis of the creative workforce. It also outlines debates about the implications for the cultural disciplines of an evidence-based account of creative labour. The “creative trident” methodology is summarized: the total of creative occupations within the core creative industries (specialists), plus the creative occupations employed in other industries (embedded), plus the business and support occupations employed in creative industries, who are often responsible for managing, accounting for and technically supporting creative activity (support). The method is applied to the arts workforce in Australia. An industry-facing spin-off from the centre’s mapping work, Creative Business Benchmarker, is discussed. The implications of this approach to the creative workforce are raised and exemplified in case studies of design and of the health industry.
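The trident summation itself is simple arithmetic over the three occupational groups; a sketch with invented headcounts (the figures are illustrative only, not Australian statistics):

```python
def creative_trident(specialists, embedded, support):
    """Creative trident total: creative occupations within the core creative
    industries (specialists), plus creative occupations employed in other
    industries (embedded), plus business/support occupations employed in
    creative industries (support)."""
    return specialists + embedded + support

# Invented headcounts for illustration only
total = creative_trident(specialists=12_000, embedded=9_500, support=7_000)
```

The point of the method is less the addition than the classification: each worker is counted once, by the cell of the industry-by-occupation matrix they fall into.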

Relevance: 20.00%

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes in the reflected pattern and to extract a time-series estimate of TFSQ from the video recording. The routine extracts a maximized area of analysis from each frame of the video recording, and within this area a metric of TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, and receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, followed closely by HSV; DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity in quantifying the build-up (formation) phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistic was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract key clinical parameters (i.e., timing). Unfortunately, these techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter-extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could be a useful clinical tool for assessing tear film surface quality in the future.
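The Cartesian-to-polar transformation and block-statistics metric described in the abstract can be sketched in NumPy. This is a minimal illustration, not the thesis implementation: the grid sizes, block shape, nearest-neighbour resampling, and the synthetic ring image are all assumptions:

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=180):
    """Nearest-neighbour resampling of an image onto an (r, theta) grid
    centred on the image, so concentric Placido rings become
    quasi-straight horizontal rows."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0.0, min(cy, cx), n_r)
    t = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

def tfsq_block_metric(img, block=(8, 30)):
    """Mean within-block standard deviation of the polar image: a smooth,
    regular ring pattern scores low, a disturbed pattern scores high."""
    polar = to_polar(img)
    br, bt = block
    n_r, n_t = polar.shape
    trimmed = polar[: (n_r // br) * br, : (n_t // bt) * bt]
    blocks = trimmed.reshape(n_r // br, br, n_t // bt, bt)
    return float(blocks.std(axis=(1, 3)).mean())

# Synthetic smooth ring pattern vs a heavily disturbed one
yy, xx = np.mgrid[0:101, 0:101]
rings = np.cos(np.hypot(yy - 50.0, xx - 50.0) / 5.0)
noise = np.random.default_rng(1).uniform(-1.0, 1.0, rings.shape)
```

On these synthetic images the smooth ring pattern yields a much lower metric than the disturbed one, which is the direction of the effect the TFSQ metric exploits: tear film irregularity raises local variation in the unwrapped pattern.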