36 results for Tate, Kellyn
Abstract:
Australian research and technological solutions are now being applied throughout the world.
Abstract:
Objectives: Concentrations of troponin measured with high-sensitivity troponin assays are raised in a number of emergency department (ED) patients; however, many of these patients are not diagnosed with acute myocardial infarction (AMI). Clinical comparisons between the early use (2 h after presentation) of high-sensitivity cardiac troponin T (hs-cTnT) and I (hs-cTnI) assays for the diagnosis of AMI have not been reported. Design and methods: Early (0 h and 2 h) hs-cTnT and hs-cTnI assay results in 1571 ED patients with potential acute coronary syndrome (ACS) without ST elevation on the electrocardiogram (ECG) were evaluated. The primary outcome was diagnosis of index MI adjudicated by cardiologists using the local cTnI assay results taken ≥6 h after presentation, ECGs, and clinical information. Stored samples were later analysed with hs-cTnT and hs-cTnI assays. Results: The area under the ROC curve for AMI (204 patients; 13.0%) for hs-cTnT and hs-cTnI after 2 h was 0.95 (95% CI: 0.94–0.97) and 0.98 (95% CI: 0.97–0.99), respectively. The sensitivity, specificity, positive likelihood ratio (PLR), and negative likelihood ratio (NLR) of hs-cTnT and hs-cTnI for AMI after 2 h were 94.1% (95% CI: 90.0–96.6) and 95.6% (95% CI: 91.8–97.7), 79.0% (95% CI: 76.8–81.1) and 92.5% (95% CI: 90.9–93.7), 4.48 (95% CI: 4.02–5.00) and 12.86 (95% CI: 10.51–15.31), and 0.07 (95% CI: 0.04–0.13) and 0.05 (95% CI: 0.03–0.09), respectively. Conclusions: Exclusion of AMI 2 h after presentation in emergency patients with possible ACS can be achieved using either hs-cTnT or hs-cTnI assays. The significant difference in specificity between the assays is clinically relevant: if the hs-cTnT assay is used, a larger proportion of patients will require further clinical assessment.
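As a quick arithmetic aid (not part of the study), the reported likelihood ratios follow directly from the reported sensitivity and specificity. The short sketch below reproduces the point estimates; small discrepancies (e.g. 12.75 vs the reported 12.86) reflect rounding of the underlying patient counts:

```python
# Likelihood ratios from sensitivity and specificity:
#   PLR = sensitivity / (1 - specificity)
#   NLR = (1 - sensitivity) / specificity

def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Return (PLR, NLR); inputs are proportions, not percentages."""
    plr = sensitivity / (1.0 - specificity)
    nlr = (1.0 - sensitivity) / specificity
    return plr, nlr

# Point estimates reported in the abstract (2 h after presentation)
for assay, sens, spec in [("hs-cTnT", 0.941, 0.790), ("hs-cTnI", 0.956, 0.925)]:
    plr, nlr = likelihood_ratios(sens, spec)
    print(f"{assay}: PLR = {plr:.2f}, NLR = {nlr:.2f}")
# hs-cTnT: PLR = 4.48, NLR = 0.07
# hs-cTnI: PLR = 12.75, NLR = 0.05
```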
Abstract:
This article summarizes a panel held at the 15th Pacific Asia Conference on Information Systems (PACIS) in Brisbane, Australia, in 2011. The panelists proposed a new research agenda for information systems success research. The DeLone and McLean IS Success Model has been one of the most influential models in Information Systems research. However, the nature of information systems continues to change. Information systems are increasingly implemented across layers of infrastructure and application architecture. The diffusion of information systems into many spheres of life means that information systems success needs to be considered in multiple contexts. Services play a much more prominent role in the economies of countries, making the “service” context of information systems increasingly important. Further, improved understandings of theory and measurement offer new opportunities for novel approaches and new research questions about information systems success.
Abstract:
An increasing range of services are now offered via online applications and e-commerce websites. However, problems with online services still occur at times, even for the best service providers, due to technical failures, informational failures, or a lack of required website functionality. Moreover, the widespread and increasing implementation of web services means that service failures are both more likely to occur and more likely to have serious consequences. In this paper we first develop a digital service value chain framework based on existing service delivery models adapted for digital services. We then review the current literature on service failure prevention and provide a typology of technologies and approaches that can be used to prevent failures of different types (functional, informational, system) that can occur at different stages of web service delivery. This contributes to theory by relating specific technologies and technological approaches to the point in the value chain framework where they will have the maximum impact. Our typology can also be used to guide the planning, justification, and design of robust, reliable web services.
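To make the shape of such a typology concrete, here is a purely illustrative sketch; the value-chain stage names and prevention techniques below are hypothetical placeholders, not the framework from the paper:

```python
from enum import Enum

# Failure types are taken from the abstract; everything else is hypothetical.
class FailureType(Enum):
    FUNCTIONAL = "functional"
    INFORMATIONAL = "informational"
    SYSTEM = "system"

# Map (hypothetical value-chain stage, failure type) -> prevention approaches.
typology: dict[tuple[str, FailureType], list[str]] = {
    ("service design", FailureType.FUNCTIONAL): ["requirements validation", "usability testing"],
    ("service delivery", FailureType.SYSTEM): ["redundant infrastructure", "load testing"],
    ("after-sales support", FailureType.INFORMATIONAL): ["content review", "automated link checking"],
}

def preventions(stage: str, failure: FailureType) -> list[str]:
    """Look up candidate prevention approaches for a stage and failure type."""
    return typology.get((stage, failure), [])

print(preventions("service delivery", FailureType.SYSTEM))
```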
Abstract:
The nature of services and service delivery has been changing rapidly since the 1980s, when many seminal papers in services research were published. Services are increasingly digital, or have a digital component. Further, progress in digital services research is impeded by a large and heterogeneous literature with competing and overlapping definitions, many of which are dated and inappropriate for contemporary digital service offerings. In this conceptual paper, we offer a critical review of some existing conceptualizations of services and digital services. We argue that an inductive approach to understanding cognition about digital services is required to develop a taxonomy of digital services and a new vocabulary. We argue that this is a prerequisite to theorizing about digital services, including understanding quality drivers, value propositions, and quality determinants for different digital service types. We propose a research approach for reconceptualising digital services and service quality, and outline methodological approaches and outcomes.
Abstract:
Focus groups are a popular qualitative research method for information systems researchers. However, compared with the abundance of research articles and handbooks on planning and conducting focus groups, there is surprisingly little research on how to analyse focus group data. Moreover, the few articles that specifically address focus group analysis are all in fields other than information systems and offer little specific guidance for information systems researchers. Further, even the studies that exist in other fields do not provide a systematic and integrated procedure for analysing both focus group ‘content’ and ‘interaction’ data. As the focus group is a valuable method for answering the research questions of many IS studies (in the business, government, and society contexts), we believe that more attention should be paid to this method in IS research. This paper offers a systematic and integrated procedure for qualitative focus group data analysis in information systems research.
Abstract:
An increasing range of technology services are now offered on a self-service basis. However, problems with self-service technologies (SSTs) occur at times due to technical errors, staff errors, or consumers’ own mistakes. Considering the role of consumers as co-producers in the SST context, we aim to study consumers’ behaviours, strategies, and decision making in solving their problems with SSTs, and to identify the factors contributing to their persistence in solving the problem. This study contributes to information systems research, as it is the first study that aims to identify such a process and the factors affecting consumers’ persistence in solving their problems with SSTs. A focus group with user support staff has been conducted, yielding initial results that informed the next phases of the study. Next, using the Critical Incident Technique, data will be gathered through focus groups with users, a diary method, and a think-aloud method.
Abstract:
Aims: We assessed the diagnostic performance of z-scores to define a significant delta cardiac troponin (cTn) in a cohort of patients with well-defined clinical outcomes. Methods: We calculated z-scores, which depend on the analytical precision and biological variation, to report changes in cTn. We compared the diagnostic performances of a relative delta (%Δ), actual delta (Δ), and z-scores in 762 emergency department patients with symptoms of suspected acute coronary syndrome. cTn was measured with sensitive cTnI (Beckman Coulter), highly sensitive cTnI (Abbott), and highly sensitive cTnT (Roche) assays. Results: Receiver operating characteristic analysis showed no statistically significant differences in the areas under the curve (AUC) of z-scores and Δ, with both superior to %Δ for all three assays (p<0.001). The AUCs of z-scores measured with the Abbott hs-cTnI (0.955) and Roche hs-cTnT (0.922) assays were comparable to that of the Beckman Coulter cTnI assay (0.933) (p=0.272 and 0.640, respectively). The individualized Δ cut-off values required to emulate a z-score of 1.96 were: Beckman Coulter cTnI 30 ng/l, Abbott hs-cTnI 20 ng/l, and Roche hs-cTnT 7 ng/l. Conclusions: z-scores allow the use of a single cut-off value at all cTn levels, for both cTnI and cTnT and for sensitive and highly sensitive assays, with comparable diagnostic performances. This strategy of reporting significant changes as z-scores may obviate the need for the empirical development of assay-specific cut-off rules to define significant troponin changes.
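The abstract does not spell out the z-score calculation. A minimal sketch, assuming the standard reference-change-value formulation in which a serial change is scaled by the combined analytical and within-subject biological variation (the SD values below are hypothetical, not from the study):

```python
import math

def delta_z_score(delta: float, sd_analytical: float, sd_biological: float) -> float:
    """z-score for a serial change; all arguments in the same units (e.g. ng/l).

    The sqrt(2) factor reflects that the delta is the difference of two
    measurements, each subject to the combined variation.
    """
    sd_total = math.sqrt(sd_analytical**2 + sd_biological**2)
    return abs(delta) / (math.sqrt(2) * sd_total)

# Hypothetical example: a 9 ng/l rise with assumed SDs of 2.0 and 2.5 ng/l
z = delta_z_score(9.0, sd_analytical=2.0, sd_biological=2.5)
print(f"z = {z:.2f}")  # 1.99 -- exceeds 1.96, so flagged as significant
```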
Abstract:
Theories of individual attitudes toward IT include task-technology fit (TTF), the technology acceptance model (TAM), the unified theory of acceptance and use of technology (UTAUT), cognitive fit, expectation disconfirmation, and computer self-efficacy. Examination of these theories reveals three main concerns. First, the theories mostly “black box” (or omit) the IT artifact. Second, appropriate mid-range theory has not been developed to contribute to disciplinary progress and to serve the needs of our practitioner community. Third, the theories are overlapping but incommensurable. We propose a theoretical framework that harmonizes these attitudinal theories and shows how they can be specialized to include relevant IS phenomena.
Abstract:
Despite longstanding concern with the dimensionality of the service quality construct as measured by the ServQual and IS-ServQual instruments, variations on the IS-ServQual instrument have been enduringly prominent in both academic research and practice in the field of IS. We explain the continuing popularity of the instrument based on the salience of the item set for predicting overall customer satisfaction, suggesting that the preoccupation with the dimensions has been a distraction. The implicit mutual exclusivity of the items suggests a more appropriate conceptualization of IS-ServQual as a formative index. This conceptualization resolves the paradox in IS-ServQual research: how an instrument with such well-known and well-documented weaknesses continues to be influential and widely used by academics and practitioners. A formative conceptualization acknowledges and addresses the criticisms of IS-ServQual, while simultaneously explaining its enduring salience by focusing on the items rather than the “dimensions.” By employing an opportunistic sample and adopting the most recent IS-ServQual instrument published in a leading IS journal (virtually any valid IS-ServQual sample in combination with a previously tested instrument variant would suffice for study purposes), we demonstrate that when re-specified as both first-order and second-order formatives, IS-ServQual has good model quality metrics and high predictive power for customer satisfaction. We conclude that this formative specification has greater practical use and is more defensible theoretically.
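For intuition only (simulated data, not the paper's sample or instrument): in a formative specification the items form the index, so item weights can be chosen to maximize prediction of the outcome, roughly as in the toy sketch below.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 10

# Simulated responses to hypothetical IS-ServQual items
items = rng.normal(size=(n_respondents, n_items))
true_weights = rng.uniform(0.1, 1.0, size=n_items)
satisfaction = items @ true_weights + rng.normal(scale=0.5, size=n_respondents)

# Formative weighting: weights are estimated by regressing the outcome
# (satisfaction) on the items, akin to mode-B weighting in PLS. Contrast
# with a reflective model, where items are modelled as caused by a latent
# construct.
model = LinearRegression().fit(items, satisfaction)
formative_index = model.predict(items)
print(f"R^2 for customer satisfaction: {model.score(items, satisfaction):.2f}")
```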
Abstract:
This paper uses discourse analysis techniques associated with Foucauldian archaeology to examine a teacher education accreditation document from Australia to reveal how graduating teachers are constructed through the discourses presented. The findings reveal a discursive site of contestation within the document itself and a mismatch between the identified policy discourses and those from the academic archive. The authors suggest that rather than contradictory representations of what constitutes graduating teacher quality and professionalism, what is needed is an accreditation process that agrees on constructions of graduate identity and professional practice that enact an intellectual and reflexive form of professionalism.