922 results for operational
Abstract:
AIMS: To test a model that delineates advanced practice nursing from the practice profile of other nursing roles and titles. BACKGROUND: There is extensive literature on advanced practice reporting the importance of this level of nursing to contemporary health services and patient outcomes. The literature also reports confusion and ambiguity associated with advanced practice nursing. Several countries have regulation and delineation for the nurse practitioner, but there is less clarity in the definition and service focus of other advanced practice nursing roles. DESIGN: A statewide survey. METHODS: Using the modified Strong Model of Advanced Practice Role Delineation tool, a survey was conducted in 2009 with a random sample of registered nurses/midwives from government facilities in Queensland, Australia. Analysis of variance compared total and subscale scores across groups according to grade. Linear, stepwise multiple regression analysis examined factors influencing advanced practice nursing activities across all domains. RESULTS: There were important differences according to grade in mean scores for total activities in all domains of advanced practice nursing. Nurses working in advanced practice roles (excluding nurse practitioners) performed more activities across most advanced practice domains. Regression analysis indicated that working in a clinical advanced practice nursing role and holding a higher level of education were strong predictors of advanced practice activities overall. CONCLUSION: Essential and appropriate use of advanced practice nurses requires clarity in defining roles and practice levels. This research delineated nursing work according to grade and level of practice, further validating the tool for the Queensland context and providing operational information to support the assignment of innovative nursing services.
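A minimal sketch of the kind of analysis this abstract describes, assuming a hypothetical respondent-level data set with grade, role, education, and a total advanced practice activity score (the file and column names are illustrative, and the stepwise selection step is omitted for brevity):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical survey data: one row per registered nurse/midwife respondent.
df = pd.read_csv("apn_survey.csv")  # columns: grade, clinical_role, education, total_apn_score

# Analysis of variance: do total advanced practice activity scores differ by grade?
anova_fit = ols("total_apn_score ~ C(grade)", data=df).fit()
print(sm.stats.anova_lm(anova_fit, typ=2))

# Linear regression: role and education as predictors of overall advanced practice activity.
reg_fit = ols("total_apn_score ~ C(clinical_role) + C(education)", data=df).fit()
print(reg_fit.summary())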
Abstract:
Performance of urban transit systems may be quantified and assessed using transit capacity and productive capacity in planning, design and operational management activities. Bunker (4) defines important productive performance measures of an individual transit service and transit line, which are extended in this paper to quantify the efficiency and operating fashion of transit services and lines. Comparison of a hypothetical bus line's operation during a morning peak hour and a daytime hour demonstrates the usefulness to the operator of productiveness efficiency, passenger transmission efficiency, passenger churn, and average proportion of line length traveled in understanding services' and lines' productive performance, operating characteristics, and quality of service. Productiveness efficiency can flag potential pass-up activity under high load conditions, as well as ineffective resource deployment. Proportion of line length traveled can directly measure operating fashion. These measures can be used to compare between lines/routes and, within a given line, across various operating scenarios and time horizons to target improvements. The next research stage is investigating within-line variation using smart card passenger data and field observation of pass-ups. Insights will be used to further develop practical guidance for operators.
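To make these measures concrete: the companion abstract that follows defines transit work as passenger-kilometres and productiveness as passenger-kilometres per hour, while the definition of average proportion of line length traveled used here is a plausible assumption rather than the paper's exact formula:

# Hypothetical one-hour observation of a bus line (all figures illustrative).
line_length_km = 12.0
service_hours = 1.0
passenger_trip_lengths_km = [2.5, 4.0, 6.5, 3.0, 12.0, 5.5]  # one entry per passenger

transit_work_pkm = sum(passenger_trip_lengths_km)       # transit work (p-km)
productiveness = transit_work_pkm / service_hours       # productiveness (p-km/h)

# Average proportion of line length traveled (assumed definition):
avg_trip_km = transit_work_pkm / len(passenger_trip_lengths_km)
proportion_line_length = avg_trip_km / line_length_km

print(f"Transit work: {transit_work_pkm:.1f} p-km")
print(f"Productiveness: {productiveness:.1f} p-km/h")
print(f"Average proportion of line length traveled: {proportion_line_length:.2f}")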
Abstract:
Urban transit system performance may be quantified and assessed using transit capacity and productive capacity for planning, design and operational management. Bunker (4) defines important productive performance measures of an individual transit service and transit line. Transit work (p-km) captures the transit task performed over distance. Transit productiveness (p-km/h) captures transit work performed over time. This paper applies productive performance with risk assessment to quantify transit system reliability. Theory is developed to monetize transit segment reliability risk on the basis of demonstration Annual Reliability Event rates by transit facility type, segment productiveness, and unit-event severity. A comparative example of peak-hour performance of a transit sub-system containing bus-on-street, busway, and rail components in Brisbane, Australia demonstrates through practical application the importance of valuing reliability. The comparison reveals the highest-risk segments to be long, highly productive on-street bus segments, followed by busway (BRT) segments and then rail segments. A transit reliability risk reduction treatment example demonstrates that benefits can be significant and should be incorporated into project evaluation alongside those of regular travel time savings, reduced emissions and safety improvements. Reliability can be used to identify high-risk components of the transit system, to draw comparisons between modes in both planning and operations settings, and to value improvement scenarios in a project evaluation setting. The methodology can also be applied to inform daily transit system operational management.
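A sketch of how segment reliability risk might be monetized from the stated ingredients (annual reliability event rate, exposed passenger flow, and unit-event severity); the multiplicative form, rates, and dollar values below are illustrative assumptions, not the paper's calibrated model:

# Hedged sketch: annual risk cost ≈ events/year × person-hours of delay per event × value of time.
VALUE_OF_TIME = 15.0  # $ per passenger-hour (assumed)

def annual_reliability_risk(events_per_year, passengers_per_hour, delay_hours_per_event):
    """Expected annual cost of reliability events on one transit segment ($/year)."""
    person_hours_per_event = passengers_per_hour * delay_hours_per_event
    return events_per_year * person_hours_per_event * VALUE_OF_TIME

# Hypothetical peak-hour segments, ordered as in the Brisbane comparison.
segments = {
    "long on-street bus": (40, 3000, 0.30),  # (events/yr, pax/h, delay h/event)
    "busway (BRT)":       (12, 6000, 0.15),
    "rail":               (6,  9000, 0.10),
}
for name, args in segments.items():
    print(f"{name}: ${annual_reliability_risk(*args):,.0f} per year")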
Abstract:
Bomb technicians perform their work while encapsulated in explosive ordnance disposal (EOD) suits. Designed primarily for safety, these suits have the unintended consequence of impairing the body's natural mechanisms for heat dissipation. Purpose: To quantify the heat strain encountered during an EOD operational scenario in the tropical north of Australia. Methods: All active male police bomb technicians located in a tropical region of Australia (n=4, experience 7 ± 2.1 yrs, age 34 ± 2 yrs, height 182.3 ± 5.4 cm, body mass 95 ± 4 kg, VO2max 46 ± 5.7 mL·kg⁻¹·min⁻¹) undertook an operational scenario wearing the Med-Eng EOD 9 suit and helmet (~32 kg). The climatic conditions were 27.1–31.8°C ambient temperature, 66–88% relative humidity, and 30.7–34.3°C wet bulb globe temperature. The scenario involved searching a two-storey, non-air-conditioned building for a target; carrying and positioning equipment for taking an X-ray; carrying and positioning equipment to disrupt the target; and finally clearing the site. Core temperature and heart rate were continuously monitored and used to calculate a physiological strain index (PSI). Urine specific gravity (USG) was used to assess hydration status, and heat-associated symptoms were also recorded. Results: The scenario was completed in 121 ± 22 mins (23.4 ± 0.4% work, 76.5 ± 0.4% rest/recovery). Maximum core temperature (38.4 ± 0.2°C), heart rate (173 ± 5.4 bpm, 94 ± 3.3% of maximum), PSI (7.1 ± 0.4) and USG (1.031 ± 0.002) were all elevated after the simulated operation. Heat-associated symptom reports highlighted that moderate to severe levels of fatigue and thirst were universally experienced, muscle weakness and heat sensations were experienced by 75%, and one bomb technician reported confusion and light-headedness. Conclusion: All bomb technicians demonstrated moderate to high levels of heat strain, evidenced by elevated heart rate, core body temperature and PSI. Severe levels of dehydration and noteworthy heat-related symptoms further highlight the risks to health and safety faced by bomb technicians operating in tropical locations.
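The PSI reported above is commonly computed with the Moran et al. (1998) formulation from paired core-temperature and heart-rate readings; assuming the authors used this standard index, a minimal implementation with a check against the reported values (the resting baselines are assumed):

def physiological_strain_index(tc, tc0, hr, hr0):
    """Physiological strain index (Moran et al., 1998), 0-10 scale.

    tc, hr   -- current core temperature (°C) and heart rate (bpm)
    tc0, hr0 -- resting baseline values (assumed here)
    """
    return 5.0 * (tc - tc0) / (39.5 - tc0) + 5.0 * (hr - hr0) / (180.0 - hr0)

# Reported maxima (Tc 38.4 °C, HR 173 bpm) with assumed baselines of 37.0 °C and 70 bpm
# give a PSI of about 7.5, consistent with the 7.1 ± 0.4 reported above.
print(round(physiological_strain_index(38.4, 37.0, 173, 70), 1))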
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model, and subsequently understand, motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving account for a significant proportion of crashes, yet are rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence, yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, lane widths). The typical analytical approach, with a variety of statistical enhancements, has been to model crashes that occur at system locations as negative binomial (NB) distributed events arising from a single underlying crash-generating process. These models and their statistical kin dominate the literature; however, this paper argues that they fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. The paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting the current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and 'apparent' random influences that largely reflect the behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
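A toy simulation of the paper's central idea, that site-level crash counts can arise from a mixture of distinct processes rather than one NB process; the three components and all parameters below are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(42)
n_sites = 5000

# Process 1: observed network/engineering features (exposure such as AADT).
exposure = rng.lognormal(mean=1.0, sigma=0.5, size=n_sites)
engineering = rng.poisson(0.4 * exposure)

# Process 2: unobserved spatial effects (enforcement, bars, roadside distractions),
# modelled as a gamma frailty shared within hypothetical zones of 50 sites.
zone_effect = np.repeat(rng.gamma(shape=2.0, scale=0.25, size=n_sites // 50), 50)
spatial = rng.poisson(zone_effect)

# Process 3: 'apparent' random behavioral influences (distraction, impairment),
# present at only a fraction of sites.
behavioral = rng.poisson(1.5, size=n_sites) * (rng.random(n_sites) < 0.1)

crashes = engineering + spatial + behavioral
print("mean:", crashes.mean(), "variance:", crashes.var())  # overdispersed relative to Poisson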
Abstract:
To the Editor: It was with interest that I read the recent article by Zhang et al. published in Supportive Care in Cancer [1]. This paper highlighted the importance of radiodermatitis (RD) as an unresolved and distressing clinical issue in patients with cancer undergoing radiation therapy. However, I am concerned about a number of clinical and methodological issues within this paper: (i) the clinical and operational definition of prophylaxis and treatment of RD; (ii) the accuracy of the identification of trials; and (iii) the appropriateness of the conduct of the meta-analyses...
Abstract:
All civil and private aircraft are required to comply with the airworthiness standards set by their national airworthiness authority, and throughout their operational life must remain in a condition of safe operation. Aviation accident data show that over twenty percent of all fatal accidents in aviation are due to airworthiness issues, specifically aircraft mechanical failures. Ultimately it is the responsibility of each registered operator to ensure that their aircraft remain in a condition of safe operation, and this is done through both effective management of airworthiness activities and effective program governance of safety outcomes. Typically, the projects within these airworthiness management programs are focused on acquiring, modifying and maintaining the aircraft as a capability supporting the business. Program governance provides the structure through which the goals and objectives of airworthiness programs are set, along with the means of attaining them. Whilst the principal causes of failure in many programs can be traced to inadequate program governance, many of the failures in large-scale projects have their root causes in the organisational culture and, more specifically, in the organisational processes related to decision-making. This paper examines the primary theme of project- and program-based enterprises, and introduces a model for measuring organisational culture in airworthiness management programs using measures drawn from 211 respondents in Australian airline programs. The paper describes the theoretical perspectives applied in modifying an original model to focus it specifically on measuring the organisational culture of airworthiness management programs, identifies the most important factors needed to explain the relationships between the measures collected, and describes the nature of these factors. The paper concludes by identifying the model that best describes the organisational culture data collected from seven airworthiness management programs.
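The factor-extraction step described, reducing the culture survey measures to a small number of explanatory factors, can be sketched as follows (scikit-learn; the file name, item columns, and factor count are assumptions, not the study's actual specification):

import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical survey: 211 rows, one Likert-scale column per organisational-culture item.
items = pd.read_csv("culture_survey.csv")

X = StandardScaler().fit_transform(items)
fa = FactorAnalysis(n_components=4, rotation="varimax")  # factor count assumed
scores = fa.fit_transform(X)  # per-respondent factor scores

# Loadings show which survey items define each organisational-culture factor.
loadings = pd.DataFrame(fa.components_.T, index=items.columns)
print(loadings.round(2))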
Abstract:
Since 2007 the Kite Arts Education Program (KITE), based at the Queensland Performing Arts Centre (QPAC), has been engaged in delivering a series of theatre-based experiences for children in low socio-economic primary schools in Queensland. The artist-in-residence (AIR) project titled Yonder includes performances developed by the children, with the support and leadership of teacher artists from KITE, for their community and parents/carers, supported by a peak community cultural institution. In 2009, Queensland Performing Arts Centre partnered with the Queensland University of Technology (QUT) Creative Industries Faculty (Drama) to conduct a three-year evaluation of the Yonder project to understand the operational dynamics, artistic outputs and educational benefits of the project. This paper outlines the research findings for children engaged in the Yonder project in the interrelated areas of literacy development and social competencies. Findings are drawn from six iterations of the project in suburban locations on the edge of Brisbane city and in regional Queensland.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators and operating environment indicators, and their failure-generating mechanisms, using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed on the principal theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise all three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, is not required in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data for assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
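Schematically, and hedging on notation the abstract does not spell out, the contrast between PHM and EHM can be written with condition indicators C(t) entering the baseline hazard and operating environment indicators Z(t) entering the covariate function (the functional forms \( \psi \) and \( g \) are illustrative placeholders, not the thesis's exact specification):
\[
\text{PHM:} \quad h(t \mid Z) = h_0(t)\, \exp\!\big(\beta^{\top} Z(t)\big)
\]
\[
\text{EHM (schematic):} \quad h(t \mid C, Z) = h_0\big(t,\, C(t)\big)\, \psi\!\big(\gamma^{\top} Z(t)\big)
\]
In the semi-parametric form the baseline embeds a Weibull time component, e.g. \( h_0\big(t, C(t)\big) = \tfrac{\beta}{\eta}\big(\tfrac{t}{\eta}\big)^{\beta-1} g\big(C(t)\big) \), while the non-parametric form leaves \( h_0 \) unspecified.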
A qualitative think aloud study of the early Neo-Piagetian stages of reasoning in novice programmers
Abstract:
Recent research indicates that some of the difficulties faced by novice programmers are manifested very early in their learning. In this paper, we present data from think aloud studies that demonstrate the nature of those difficulties. In the think alouds, novices were required to complete short programming tasks which involved either hand executing ("tracing") a short piece of code, or writing a single sentence describing the purpose of the code. We interpret our think aloud data within a neo-Piagetian framework, demonstrating that some novices reason at the sensorimotor and preoperational stages, not at the higher concrete operational stage at which most instruction is implicitly targeted.
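An example of the kind of short task used in such think alouds (this snippet is illustrative, not drawn from the paper): novices are asked either to hand-execute the code and state its output, or to describe its purpose in a single sentence.

# Trace this code and give the printed values, or summarise its purpose in one sentence.
a = 3
b = 7
if a < b:
    temp = a
    a = b
    b = temp
print(a, b)  # tracing yields "7 3"; a concrete operational reader abstracts this as "swap a and b"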
Abstract:
Purpose – Integrated supplier management (ISM), new product development (NPD) and knowledge sharing (KS) practices are three primary business activities utilised to enhance manufacturers' business performance (BP). The purpose of this paper is to empirically investigate the relationships between these three business activities (i.e. ISM, NPD, KS) and BP in a Taiwanese electronics manufacturing context. Design/methodology/approach – A questionnaire survey is first administered to a sample of electronics manufacturing companies operating in Taiwan to elicit the opinions of technical and managerial professionals regarding business activities and BP within their companies. A total of 170 respondents from 83 companies respond to the survey. Factor, correlation and path analyses are undertaken on this quantitative data set to derive the key factors which leverage business outcomes in these companies. Following the empirical analysis, six semi-structured interviews are undertaken with manufacturing executives to provide qualitative insights into the underlying reasons why certain business activity factors are the strongest predictors of BP. Findings – The investigation shows that the ISM, NPD and KS constructs all play an important role in the success of company operations and in creating business outcomes. Specifically, the key factors within these constructs which influence BP are: supplier evaluation and selection; design simplification and modular design; information technology infrastructure and systems; and open communication. Accordingly, sufficient financial and human resources should be allocated to these important activities to derive accelerated rates of improved BP. These findings are supported by the qualitative interviews with manufacturing executives. Originality/value – The paper depicts the pathways to improved manufacturing BP through targeting efforts at the above-mentioned factors within the ISM, NPD and KS constructs. Based on the empirical path model, and the specific insights derived from the explanatory interviews with manufacturing executives, the paper also provides a number of practical implications for manufacturing companies seeking to enhance their BP through improved operational activities.
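The path-analysis step, estimating directed relationships from the ISM, NPD and KS constructs to BP, can be sketched as a pair of regressions (statsmodels; the file and variable names are illustrative, and a real path analysis would typically use a dedicated SEM tool):

import pandas as pd
from statsmodels.formula.api import ols

# Hypothetical standardized factor scores per respondent: ism, npd, ks, bp.
df = pd.read_csv("survey_factors.csv")

# Direct paths to business performance.
direct = ols("bp ~ ism + npd + ks", data=df).fit()
print(direct.params)

# Example indirect path: ISM -> NPD -> BP.
mediator = ols("npd ~ ism", data=df).fit()
indirect = mediator.params["ism"] * direct.params["npd"]
print("ISM -> NPD -> BP indirect effect:", indirect)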
Abstract:
The management and improvement of business processes is a core topic of the information systems discipline. The persistent demand in corporations across all industry sectors for increased operational efficiency and innovation, an emerging set of established and evaluated methods, tools, and techniques, and the quickly growing body of academic and professional knowledge are indicative of the standing that Business Process Management (BPM) enjoys today. During the last decades, intensive research has been conducted on the design, implementation, execution, and monitoring of business processes. Comparatively little attention, however, has been paid to questions related to organizational issues such as the adoption, usage, implications, and overall success of BPM approaches, technologies, and initiatives. This research gap motivated us to edit a corresponding special focus issue for the journal BISE/WIRTSCHAFTSINFORMATIK. We are pleased to present a selection of three research papers and a state-of-the-art paper in the scientific section of the issue at hand. As these papers differ in the topics they investigate, the research methods they apply, and the theoretical foundations they build on, the diversity within the BPM field becomes evident. The academic papers are complemented by an interview with Phil Gilbert, IBM's Vice President for Business Process and Decision Management, who reflects on the relationship between business processes and the data flowing through them, the need to establish a process context for decision making, and the calibration of BPM efforts toward executives who see processes as a means to an end, rather than a first-order concept in its own right.
Abstract:
Australian universities are currently engaging with new governmental policies and regulations that require them to demonstrate enhanced quality and accountability in teaching and research. The development of national academic standards for learning outcomes in higher education is one such instance of this drive for excellence. These discipline-specific standards articulate the minimum learning outcomes, known as Threshold Learning Outcomes, to be addressed by higher education institutions so that graduating students can demonstrate their achievement to their institutions, accreditation agencies, and industry recruiters. This impacts not only the design of Engineering courses (with particular emphasis on pedagogy and assessment), but also the preparation of academics to engage with these standards and implement them in their day-to-day teaching practice at a micro level. This imperative for enhanced quality and accountability in teaching is also significant at a meso level, for according to the Australian Bureau of Statistics, about 25 per cent of teachers in Australian universities are aged 55 and above and more than 54 per cent are aged 45 and above (ABS, 2006). A number of institutions have undertaken recruitment drives to regenerate and enrich their academic workforce by appointing capacity-building research professors and increasing the numbers of early- and mid-career academics. This nationally driven agenda for quality and accountability in teaching also permeates the micro level of engineering education, since the demand for enhanced academic standards and learning outcomes requires both strong advocacy for a shift to authentic, collaborative, outcomes-focused education and the mechanisms to support academics in transforming their professional thinking and practice. Outcomes-focused education means giving greater attention to the ways in which curriculum design, pedagogy, assessment approaches and teaching activities can most effectively make a positive, verifiable difference to students' learning. Such education is authentic when it is couched firmly in the realities of learning environments, student and academic staff characteristics, and trustworthy educational research. That education will be richer and more efficient when staff work collaboratively, contributing their knowledge, experience and skills to achieve learning outcomes based on agreed objectives. We know that the school or departmental levels of universities are the most effective loci of change in approaches to teaching and learning practices in higher education (Knight & Trowler, 2000). Heads of Schools are increasingly being entrusted with more responsibilities: in addition to setting strategic directions and managing the operational and sometimes financial aspects of their school, they are also expected to lead the development and delivery of teaching, research and other academic activities. Guiding and mentoring individuals and groups of academics is one critical aspect of the Head of School's role, yet they do not always have the resources or support to help them mentor staff, especially more junior academics. In summary, the international trend in undergraduate engineering course accreditation towards the demonstration of attainment of graduate attributes poses new challenges in addressing academic staff development needs and the assessment of learning.
This paper will give some insights into the conceptual design, implementation, and empirical effectiveness to date of a Fellow-In-Residence Engagement (FIRE) program. The program is proposed as a model for achieving better engagement of academics with contemporary issues and effectively enhancing their teaching and assessment practices. It will also report on the program's collaborative approach to working with Heads of Schools to better support academics, especially early-career ones, by utilising formal and informal mentoring. Further, the paper will discuss possible factors that may assist the achievement of the intended outcomes of such a model, and will examine its contributions to engendering outcomes-focused thinking in engineering education.