891 results for benefit realization
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes the Organizational-Impact and Individual-Impact dimensions, and the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, yielding results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that 'says what is': base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?"
The study identifies the main attributes of analytic theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along two main dimensions: rigor and relevance. The Analytic Theory attributes associated with the 'rigor' of the IS-Impact model, namely completeness, mutual exclusivity, parsimony and appropriate hierarchy, have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), relevance has seldom been given the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory 'relevance' attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a design science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves), and instantiations in the form of management information (IS-Impact data organised and presented for management decision making). In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study also aims to evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?"
The study employs a longitudinal design entailing three surveys over four years (the first involving secondary data) of the Oracle-Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian Universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the second survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of Universities. Aligned with the track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions: "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach, motivating continuing research on the validity of IS-Impact, and motivating research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of analytic theory attributes.
Abstract:
The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provides a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia and (2) a formal theory, which further develops the core category of appropriation from anaesthesia-specific to a broader, more general perspective. These outcomes fulfill the objective of a grounded theory study, being the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization.
The contributions of this research are two-fold: (1) the substantive model contributes to practice by providing a model that describes and explains the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.
Abstract:
In contemporary Australian theatre there seems to be no precise, universally accepted methodology that defines the dramaturgical process. There is not even agreement as to how a playwright might benefit from dramaturgy. Nevertheless, those engaged in creating original works for the Australian professional theatre have, to varying degrees, come to accept dramaturgical process as something of a necessity. Increasingly, dramaturgical process is evident in the development of new plays by state, flagship and project-based professional theatre producers. Many small to medium theatre companies provide dramaturgical assistance to playwrights although this often occurs in an ad hoc fashion, prescribed by economic restraint rather than artistic sensibility. Through an exploration of the dramaturgical development of two of his plays in several professional play development contexts, the researcher examines issues influencing contemporary dramaturgy in Australia. These plays are presented here as examinable components (weighted 70%) of the research as a whole, and they function in symbiotic relationship with the exegetical enquiry (weighted 30%). The research also presents the findings of a small-scale experiment which tests the hypothesis that a holistic approach to developing new plays might challenge conventional views on dramaturgical process. In terms of its overall conclusions, this research finds that while many playwrights and theatre professionals in Australia consider dramaturgy a distinct and important component of the creative development process, there exist substantial inconsistencies in relation to facilitating dramaturgical models that provide quality artistic outcomes for playwrights and their plays. 
The study presents unique qualitative and quantitative data as a contribution to knowledge in this field of enquiry, and it is anticipated that the research as a whole will be of interest to a variety of readers, including playwrights, dramaturgs, other theatre practitioners, students and teachers.
Abstract:
In this paper, a static synchronous series compensator (SSSC), along with a fixed capacitor, is used to avoid torsional mode instability in a series compensated transmission system. A 48-step harmonic neutralized inverter is used for the realization of the SSSC. The system under consideration is the IEEE first benchmark model on subsynchronous resonance (SSR) analysis. The system stability is studied both through eigenvalue analysis and EMTDC/PSCAD simulation studies. It is shown that the combination of the SSSC and the fixed capacitor improves the synchronizing power coefficient. The presence of the fixed capacitor ensures increased damping of small signal oscillations. At higher levels of fixed capacitor compensation, a damping controller is required to stabilize the torsional modes of SSR.
Abstract:
The critical factor in determining students' interest and motivation to learn science is the quality of the teaching. However, science typically receives very little time in primary classrooms, with teachers often lacking the confidence to engage in inquiry-based learning because they do not have a sound understanding of science or its associated pedagogical approaches. Developing teacher knowledge in this area is a major challenge. Addressing these concerns with didactic "stand and deliver" modes of Professional Development (PD) has been shown to have little relevance or effectiveness, yet this is still the predominant approach used by schools and education authorities. In response to that issue, the constructivist-inspired Primary Connections professional learning program applies contemporary theory relating to the characteristics of effective primary science teaching, the changes required for teachers to use those pedagogies, and professional learning strategies that facilitate such change. This study investigated the nature of teachers' engagement with the various elements of the program. Summative assessments of such PD programs have been undertaken previously; however, there was an identified need for a detailed view of the changes in teachers' beliefs and practices during the intervention. This research was a case study of a Primary Connections implementation. PD workshops were presented to a primary school staff, then two teachers were observed as they worked in tandem to implement related curriculum units with their Year 4/5 classes over a six-month period. Data including interviews, classroom observations and written artefacts were analysed to identify common themes and develop a set of assertions related to how teachers changed their beliefs and practices for teaching science. When teachers implement Primary Connections, their students "are more frequently curious in science and more frequently learn interesting things in science" (Hackling & Prain, 2008).
This study has found that teachers who observe such changes in their students consequently change their beliefs and practices about teaching science. They enhance science learning by promoting student autonomy through open-ended inquiries, and they and their students enhance their scientific literacy by jointly constructing investigations and explaining their findings. The findings have implications for teachers and for designers of PD programs. Assertions related to teaching science within a pedagogical framework consistent with the Primary Connections model are that: (1) promoting student autonomy enhances science learning; (2) student autonomy presents perceived threats to teachers but these are counteracted by enhanced student engagement and learning; (3) the structured constructivism of Primary Connections resources provides appropriate scaffolding for teachers and students to transition from didactic to inquiry-based learning modes; and (4) authentic science investigations promote understanding of scientific literacy and the "nature of science". The key messages for designers of PD programs are that: (1) effective programs model the pedagogies being promoted; (2) teachers benefit from taking the role of student and engaging in the proposed learning experiences; (3) related curriculum resources foster long-term engagement with new concepts and strategies; (4) change in beliefs and practices occurs after teachers implement the program or strategy and see positive outcomes in their students; and (5) implementing this study's PD model is efficient in terms of resources. Identified topics for further investigation relate to the role of assessment in providing evidence to support change in teachers' beliefs and practices, and of teacher reflection in making such change more sustainable.
Abstract:
Introduction: The purpose of this study was to assess the capacity of a written intervention, in this case a patient information brochure, to improve patient satisfaction during an Emergency Department (ED) visit. For the purpose of measuring the effect of the intervention, the ED journey was conceptualised as a series of distinct areas of service comprising waiting time, service by the triage nurse, care from doctors and nurses, and information giving. Background of study: Research into patient satisfaction has become a widespread activity endorsed by both governments and hospital administrations. The literature on ED patient satisfaction has consistently indicated three primary areas of patient dissatisfaction: waiting time, nursing care and communication. Recent developments in the literature on patient satisfaction studies, however, have highlighted the relationship between patients' expectations of a service encounter and their consequent assessment of the experience as dissatisfying or satisfying. Disconfirmation theory posits that the degree to which expectations are confirmed will affect subsequent levels of satisfaction. The conceptual framework utilised in this study is Coye's (2004) model of disconfirmation. Coye, while reiterating that satisfaction is a consequence of the degree to which expectations are either confirmed or disconfirmed, also posits that expectations can be modified by interventions. Coye's work conceptualises these interventions as intra-encounter experiences (cues) which function to adjust expectations. Coye suggests some cues are unintended and may have a negative impact, which also reinforces the value of planned cues intended to meet or exceed consumer expectations. Consequently the brochure can be characterised as a potentially positive cue, encouraging patients to understand processes and orienting them in what can be a confronting environment. Only a limited number of studies have examined the effect of written interventions within an ED.
No studies could be located which have tested the effect of ED interventions using a conceptual framework that relates satisfaction with services to the degree to which expectations are confirmed or disconfirmed. Method: Two studies were conducted. Study One used qualitative methods to explore patients' expectations of the ED from the perspective of both patients and health care professionals. Study One was used in part to direct the development of the intervention (brochure) in Study Two. The brochure was an intervention designed to modify patients' expectations, thus increasing their satisfaction with the provision of ED services. As there were no existing tools to measure ED patients' expectations and satisfaction, a new tool was developed based on the findings of Study One and the literature. Study Two used a non-randomised, quasi-experimental approach with a non-equivalent, post-test-only comparison group design to investigate the effect of the patient education brochure (Stommel and Wills, 2004). The brochure was disseminated to one of two study groups (the intervention group). The effect of the brochure was assessed by comparing the data obtained from the intervention and control groups, which consisted of 150 participants each. It was expected that any differences in the relevant domains selected for examination would indicate the effect of the brochure on expectations and, potentially, satisfaction. Results: Study One revealed several areas of common ground between patients and nurses in terms of relevant content for the written intervention, including the need for information on the triage system and waiting times. Areas of difference were also found, with patients emphasising communication issues, whereas focus group members expressed concern that patients were often unable to assimilate verbal information.
The findings suggested the potential utility of written material to reinforce verbal communication, particularly in terms of the triage process and other ED protocols. This material was synthesised within the final version of the written intervention. Overall, the results of Study Two indicated no significant differences between the two groups. However, a significant number of participants in the intervention group viewed the brochure as having changed their expectations. The effect of the brochure may have been obscured by a lack of parity between the two groups, as the control group presented with statistically significantly higher levels of acuity and experienced significantly shorter waiting times. In terms of disconfirmation theory, this would suggest expectations that had been met or exceeded. The results confirmed the correlation of expectations with satisfaction. Several domains also indicated age as a significant predictor, with older patients tending to score higher satisfaction results. Other significant predictors of satisfaction established were waiting time and care from nurses, reinforcing the combination of efficient service and positive interpersonal experiences as being valued by patients. Conclusions: Information presented in written form appears to benefit a significant number of ED users in terms of orientation and explaining systems and procedures. The degree to which these effects may interact with other dimensions of satisfaction, however, is likely to be limited. Waiting time and interpersonal behaviours from staff also provide influential cues in determining satisfaction. Written material is likely to be one element in a series of coordinated strategies to improve patient satisfaction during periods of peak demand.
Abstract:
In Australia and many other countries worldwide, water used in the manufacture of concrete must be potable. It is currently thought that concrete properties are highly influenced by the water type used and its proportion in the concrete mix, but in fact there is little knowledge of the effects of alternative water sources on concrete mix design. Therefore, the identification of the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Recycled washout water is the water used onsite for a variety of purposes, including washing of truck agitator bowls, wetting down of aggregate, and run-off. This report presents current information on the quality of concrete mixing water in terms of mandatory limits and guidelines on impurities, as well as investigating the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. The complete recycling of washout water has been considered for use in concrete mixing plants because of its great benefit in reducing waste disposal costs and conserving the environment. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program at three representative sites across South East Queensland. The sample sites chosen represented a cross-section of plant recycling methods, from most effective to least effective. The washout water samples collected from each site were then analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998.
These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH, and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used in the production of concrete with acceptable strength and durability. These results were then interpreted using the chemometric techniques Principal Component Analysis and SIMCA, and the Multi-Criteria Decision Making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided water suitable for the manufacture of concrete from washout water. These results were compared to a series of alternative water sources. The water sources included treated effluent, sea water and dam water, and were subject to the same testing parameters as the reference set. Analysis of these results also found that despite having higher levels of both organic and inorganic impurities, the waters complied with the parameter thresholds given in ASTM C913-08. All of the alternative sources were found to be suitable sources of water for the manufacture of plain concrete.
Abstract:
When asking the question, "How can institutions design science policies for the benefit of decision makers?" Sarewitz and Pielke [Sarewitz, D., Pielke Jr., R.A., this issue. The neglected heart of science policy: reconciling supply of and demand for science. Environ. Sci. Policy 10] posit the idea of "reconciling supply and demand of science" as a conceptual tool for assessment of science programs. We apply the concept to the U.S. Department of Agriculture's (USDA) carbon cycle science program. By evaluating the information needs of decision makers, or the "demand", along with the supply of information by the USDA, we can ascertain where matches between supply and demand exist, and where science policies might miss opportunities. We report the results of contextual mapping and of interviews with scientists at the USDA to evaluate the production and use of current agricultural global change research, which has the stated goal of providing "optimal benefit" to decision makers on all levels. We conclude that the USDA possesses formal and informal mechanisms by which scientists evaluate the needs of users, ranging from individual producers to Congress and the President. National-level demands for carbon cycle science evolve as national and international policies are explored. Current carbon cycle science is largely derived from those discussions and thus anticipates the information needs of producers. However, without firm agricultural carbon policies, such information is currently unimportant to producers. (C) 2006 Elsevier Ltd. All rights reserved.
Abstract:
No-tillage (NT) management has been promoted as a practice capable of offsetting greenhouse gas (GHG) emissions because of its ability to sequester carbon in soils. However, true mitigation is only possible if the overall impact of NT adoption reduces the net global warming potential (GWP) determined by fluxes of the three major biogenic GHGs (i.e. CO2, N2O, and CH4). We compiled all available data of soil-derived GHG emission comparisons between conventional tilled (CT) and NT systems for humid and dry temperate climates. Newly converted NT systems increase GWP relative to CT practices, in both humid and dry climate regimes, and longer-term adoption (>10 years) only significantly reduces GWP in humid climates. Mean cumulative GWP over a 20-year period is also reduced under continuous NT in dry areas, but with a high degree of uncertainty. Emissions of N2O drive much of the trend in net GWP, suggesting improved nitrogen management is essential to realize the full benefit from carbon storage in the soil for purposes of global warming mitigation. Our results indicate a strong time dependency in the GHG mitigation potential of NT agriculture, demonstrating that GHG mitigation by adoption of NT is much more variable and complex than previously considered, and policy plans to reduce global warming through this land management practice need further scrutiny to ensure success.
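The net GWP comparison above hinges on converting each gas's flux to CO2 equivalents. A minimal sketch of that arithmetic, using the standard 100-year warming potentials from IPCC AR4 (the flux values below are invented for illustration, not drawn from the compiled data):

```python
# Convert soil GHG fluxes to a single net global warming potential (GWP).
# 100-year GWP factors (IPCC AR4): CO2 = 1, CH4 = 25, N2O = 298.
GWP_FACTORS = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def net_gwp(fluxes_kg_per_ha):
    """Sum fluxes (kg gas/ha/yr, negative = uptake) as kg CO2-eq/ha/yr."""
    return sum(GWP_FACTORS[gas] * flux for gas, flux in fluxes_kg_per_ha.items())

# Hypothetical fluxes for a newly converted no-till plot: the soil stores
# carbon (negative CO2 flux) but emits more N2O, which can offset the gain.
nt_fluxes = {"CO2": -1200.0, "N2O": 2.5, "CH4": -1.0}
print(net_gwp(nt_fluxes))  # -1200 + 298*2.5 - 25*1 = -480.0 kg CO2-eq/ha/yr
```

Because the N2O factor is so large, a small N2O increase can erase a sizeable carbon gain, which is why the paper stresses nitrogen management.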
Abstract:
Sustainability has been increasingly recognised as an integral part of highway infrastructure development. In practice, however, because financial return is still the top priority for many projects, environmental aspects tend to be overlooked or considered a burden, as they add to project costs. Sustainability and its implications have a far-reaching effect on each project over time. Therefore, given highway infrastructure's long-term life span and huge capital demand, the consideration of environmental cost/benefit issues is all the more crucial in life-cycle cost analysis (LCCA). To date, the existing literature offers little on viable estimation methods for environmental costs. This situation presents the potential for focused studies on environmental costs and issues in the context of life-cycle cost analysis. This paper discusses a research project which aims to integrate environmental cost elements and issues into a conceptual framework for life-cycle costing analysis for highway projects. Cost elements and issues concerning the environment were first identified through the literature. Through questionnaires, these environmental cost elements will be validated by practitioners before their consolidation into an extension of existing, proven models of life-cycle costing analysis (LCCA). A holistic decision support framework is being developed to assist highway infrastructure stakeholders in evaluating their investment decisions, generating financial returns while maximising environmental benefits and sustainability outcomes.
Abstract:
Safety interventions (e.g., median barriers, photo enforcement) and road features (e.g., median type and width) can influence crash severity, crash frequency, or both. Both dimensions—crash frequency and crash severity—are needed to obtain a full accounting of road safety. Extensive literature and common sense both dictate that crashes are not created equal, with fatalities costing society more than 1,000 times the cost of property damage crashes on average. Despite this glaring disparity, the profession has not unanimously embraced or successfully defended a nonarbitrary severity weighting approach for analyzing safety data and conducting safety analyses. It is argued here that the two dimensions (frequency and severity) can be integrated by intelligently and reliably weighting crash frequencies, converting all crashes to property-damage-only crash equivalents (PDOEs) by using comprehensive societal unit crash costs. This approach is analogous to calculating axle load equivalents in the prediction of pavement damage: for instance, a 40,000-lb truck causes 4,025 times more stress than does a 4,000-lb car, so simply counting axles is not sufficient. Calculating PDOEs using unit crash costs is the most defensible and nonarbitrary weighting scheme, allows for the simple incorporation of severity and frequency, and leads to crash models that are sensitive to factors that affect crash severity. Moreover, using PDOEs diminishes the errors introduced by underreporting of less severe crashes—an added benefit of the PDOE analysis approach. The method is illustrated with rural road segment data from South Korea (which in practice would develop PDOEs with Korean crash cost data).
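The PDOE conversion described above reduces to a cost-ratio weighting. A small sketch, with hypothetical unit crash costs standing in for the comprehensive societal figures the paper would use:

```python
# Convert a crash-frequency record to property-damage-only equivalents (PDOEs)
# by weighting each severity class by its societal unit crash cost relative
# to a PDO crash. Unit costs are illustrative assumptions, not the study's.
UNIT_COSTS = {          # dollars per crash (hypothetical)
    "fatal": 4_000_000,
    "injury": 80_000,
    "pdo": 4_000,       # property damage only
}

def pdo_equivalents(crash_counts):
    """Weight crash counts by unit cost relative to a PDO crash."""
    return sum(
        count * UNIT_COSTS[severity] / UNIT_COSTS["pdo"]
        for severity, count in crash_counts.items()
    )

# One road segment's crash history: each fatality counts as 1,000 PDOEs,
# each injury as 20, so severity dominates the frequency count of 26.
segment = {"fatal": 1, "injury": 5, "pdo": 20}
print(pdo_equivalents(segment))  # 1*1000 + 5*20 + 20*1 = 1120.0
```

A frequency-only model would score this segment as 26 crashes; the PDOE weighting makes the single fatality, not the 20 minor crashes, drive the safety ranking.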
Abstract:
It is important to examine the nature of the relationships between roadway, environmental, and traffic factors and motor vehicle crashes, with the aim to improve the collective understanding of causal mechanisms involved in crashes and to better predict their occurrence. Statistical models of motor vehicle crashes are one path of inquiry often used to gain these initial insights. Recent efforts have focused on the estimation of negative binomial and Poisson regression models (and related variants) due to their relatively good fit to crash data. Of course, analysts constantly seek methods that offer greater consistency with the data generating mechanism (motor vehicle crashes in this case), provide better statistical fit, and provide insight into data structure that was previously unavailable. One such opportunity exists with some types of crash data, in particular crash-level data that are collected across roadway segments, intersections, etc. It is argued in this paper that some crash data possess hierarchical structure that has not routinely been exploited. This paper describes the application of binomial multilevel models of crash types using 548 motor vehicle crashes collected from 91 two-lane rural intersections in the state of Georgia. Crash prediction models are estimated for angle, rear-end, and sideswipe (both same direction and opposite direction) crashes. The contributions of the paper are the realization of hierarchical data structure and the application of a theoretically appealing and suitable analysis approach for multilevel data, yielding insights into intersection-related crashes by crash type.
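The hierarchical structure the paper exploits can be made concrete with a small simulation: crashes (level 1) nested within intersections (level 2), where each intersection carries its own baseline propensity for a given crash type. All numbers below are synthetic and purely illustrative of the data structure, not the study's Georgia data or fitted model:

```python
import numpy as np

# Simulate crash-type outcomes nested within intersections. Each of the
# 91 hypothetical intersections gets a random intercept on the log-odds
# scale, so crashes at the same site are correlated, not independent.
rng = np.random.default_rng(42)

n_intersections = 91
crashes_per_site = rng.integers(2, 12, size=n_intersections)

# Level-2 random effect: intersection-specific log-odds of an angle crash.
site_intercepts = rng.normal(loc=-0.5, scale=0.8, size=n_intersections)

records = []
for site, (n, b0) in enumerate(zip(crashes_per_site, site_intercepts)):
    p = 1.0 / (1.0 + np.exp(-b0))             # inverse logit
    is_angle = rng.random(n) < p              # level-1 binary outcomes
    records.extend((site, int(a)) for a in is_angle)

# Ignoring the grouping treats all crashes as exchangeable; the spread in
# per-site angle-crash rates shows the between-site variation a multilevel
# model captures with its random intercepts.
sites = np.array([r[0] for r in records])
angle = np.array([r[1] for r in records])
site_rates = [angle[sites == s].mean() for s in range(n_intersections)]
print(f"overall rate {angle.mean():.2f}, between-site SD {np.std(site_rates):.2f}")
```

A pooled binomial model would report only the overall rate; the multilevel specification additionally estimates the between-site variance, which is the "insight into data structure" the abstract refers to.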
Abstract:
This paper describes the formalization and application of a methodology to evaluate the safety benefit of countermeasures in the face of uncertainty. To illustrate the methodology, 18 countermeasures for improving safety of at-grade railroad crossings (AGRXs) in the Republic of Korea are considered. Akin to “stated preference” methods in travel survey research, the methodology applies random selection and the laws of large numbers to derive accident modification factor (AMF) densities from expert opinions. In a full Bayesian analysis framework, the collective opinions in the form of AMF densities (data likelihood) are combined with prior knowledge (AMF density priors) for the 18 countermeasures to obtain 'best' estimates of AMFs (AMF posterior credible intervals). The countermeasures are then compared and recommended based on the largest safety returns with minimum risk (uncertainty). To the author's knowledge, the complete methodology is new and has not previously been applied or reported in the literature. The results demonstrate that the methodology is able to discern anticipated safety benefit differences across candidate countermeasures. For the 18 at-grade railroad crossings considered in this analysis, it was found that the top three performing countermeasures for reducing crashes are in-vehicle warning systems, obstacle detection systems, and constant warning time systems.
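The Bayesian combination step described above can be sketched minimally by assuming, hypothetically, that log-AMFs are modelled as normal, so the prior and the expert-opinion likelihood combine conjugately. The numbers are invented, not the study's elicited densities or its actual (density-based) formulation:

```python
import math

# Combine a prior log-AMF density with expert-opinion evidence for one
# countermeasure, assuming normal distributions on the log scale so the
# posterior has a closed form. All values are illustrative assumptions.
def posterior_log_amf(prior_mean, prior_var, expert_log_amfs):
    """Normal-normal update: posterior mean and variance of log(AMF)."""
    n = len(expert_log_amfs)
    sample_mean = sum(expert_log_amfs) / n
    sample_var = sum((x - sample_mean) ** 2 for x in expert_log_amfs) / (n - 1)
    post_var = 1.0 / (1.0 / prior_var + n / sample_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / sample_var)
    return post_mean, post_var

# Hypothetical expert opinions: elicited AMFs for an in-vehicle warning
# system (AMF < 1 means the countermeasure reduces crashes).
experts = [math.log(a) for a in (0.70, 0.80, 0.75, 0.65, 0.85)]
mean, var = posterior_log_amf(prior_mean=0.0, prior_var=0.5, expert_log_amfs=experts)

# 95% credible interval for the AMF itself.
lo = math.exp(mean - 1.96 * math.sqrt(var))
hi = math.exp(mean + 1.96 * math.sqrt(var))
print(f"AMF posterior median {math.exp(mean):.2f}, 95% CrI [{lo:.2f}, {hi:.2f}]")
```

Repeating this update for each of the 18 countermeasures and ranking them by posterior mean benefit against interval width mirrors the paper's "largest safety returns with minimum risk" comparison.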
Abstract:
Objective: To examine the prospective dose–response relationships between both leisure-time physical activity (LTPA) and walking with self-reported arthritis in older women. Design, setting and participants: Data came from women aged 73–78 years who completed mailed surveys in 1999, 2002 and 2005 for the Australian Longitudinal Study on Women’s Health. Women reported their weekly minutes of walking and moderate to vigorous physical activities. They also reported on whether they had been diagnosed with, or treated for, arthritis since the previous survey. Generalised estimating equation analyses were performed to examine the longitudinal relationship between LTPA and arthritis and, for women who reported walking as their only physical activity, the longitudinal relationship between walking and arthritis. Women who reported arthritis or a limited ability to walk in 1999 were excluded, resulting in data from 3613 women eligible for inclusion in these analyses. Main results: ORs for self-reported arthritis were lowest for women who reported “moderate” levels of LTPA (OR 0.78; 95% CI 0.67 to 0.92), equivalent to 75 to <150 minutes of moderate-intensity LTPA per week. Slightly higher odds ratios were found for women who reported “high” (OR 0.81; 95% CI 0.69 to 0.95) or “very high” (OR 0.84; 95% CI 0.72 to 0.98) LTPA levels, indicating no further benefit from increased activity. For women whose only activity was walking, an inverse dose–response relationship between walking and arthritis was seen. Conclusions: The results support an inverse association between both LTPA and walking with self-reported arthritis over 6 years in older women who are able to walk.
Abstract:
Background: The first sign of developing multiple sclerosis is a clinically isolated syndrome that resembles a multiple sclerosis relapse. Objective/methods: The objective was to review the clinical trials of two medicines in clinically isolated syndromes (interferon β and glatiramer acetate) to determine whether they prevent progression to definite multiple sclerosis. Results: In the BENEFIT trial, after 2 years, 45% of subjects in the placebo group developed clinically definite multiple sclerosis, and the rate was lower in the interferon β-1b group. Then all subjects were offered interferon β-1b, and the original interferon β-1b group became the early treatment group, and the placebo group became the delayed treatment group. After 5 years, the number of subjects with clinically definite multiple sclerosis remained lower in the early treatment group than in the delayed treatment group. In the PreCISe trial, after 2 years, the time for 25% of the subjects to convert to definite multiple sclerosis was prolonged in the glatiramer group. Conclusions: Interferon β-1b and glatiramer acetate slow the progression of clinically isolated syndromes to definite multiple sclerosis. However, it is not known whether this early treatment slows the progression to the physical disabilities experienced in multiple sclerosis.