709 results for Process Instrumentation
Abstract:
In most art exhibitions, the creative part of the exhibition is assumed to be the artworks on display. But for the Capricornia Arts Mob’s first collective art exhibition in Rockhampton during NAIDOC Week in 2012, the process of developing the exhibition became the focus of creative action learning and action research. In working together to produce a multi-media exhibition, we learned about the collaborative processes and time required to develop a combined exhibition. We applied Indigenous ways of working – including yarning, cultural respect, cultural protocols, mentoring young people, providing a culturally safe working environment and sharing both time and food – to develop our first collective art exhibition. We developed a process that allowed us to ask deep questions, engage in a joint journey of learning, and develop our collective story. This paper explores the processes that the Capricornia Arts Mob used to develop the exhibition for NAIDOC 2012.
Abstract:
In this editorial letter, we provide the readers of Information Systems Management with a background on process design before we discuss the content of the special issue proper. By introducing and describing a so-called process design compass we aim to clarify what developments in the field are taking place and how the papers in this special issue expand on our current knowledge in this domain.
Abstract:
This study explored how the social context influences the stress-buffering effects of social support on employee adjustment. It was anticipated that the positive relationship between support from colleagues and employee adjustment would be more marked for those strongly identifying with their work team. Furthermore, as part of a three-way interactive effect, it was predicted that high identification would increase the efficacy of coworker support as a buffer of two role stressors (role overload and role ambiguity). One hundred and fifty-five employees recruited from first-year psychology courses at two Australian universities were surveyed. Hierarchical multiple regression analyses revealed that the negative main effect of role ambiguity on job satisfaction was significant for those employees with low levels of team identification, whereas high team identifiers were buffered from the deleterious effect of role ambiguity on job satisfaction. There was also a significant interaction between coworker support and team identification. The positive effect of coworker support on job satisfaction was significant for high team identifiers, whereas coworker support was not a source of satisfaction for those employees with low levels of team identification. A three-way interaction emerged among the focal variables in the prediction of psychological well-being, suggesting that the combined benefits of coworker support and team identification under conditions of high demand may be limited and are more likely to be observed when demands are low.
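As an illustration only, the moderated (hierarchical) regression structure described in this abstract can be sketched as follows; the variable names, synthetic data, and effect sizes are hypothetical assumptions, not the study's data or results.

```python
# A minimal sketch of the moderated regression structure described above,
# using synthetic data; all variable names and effect sizes are illustrative
# assumptions, not the study's data or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 155
df = pd.DataFrame({
    "coworker_support": rng.normal(size=n),
    "team_identification": rng.normal(size=n),
    "role_ambiguity": rng.normal(size=n),
})
# Outcomes with interaction effects built in purely for demonstration.
df["job_satisfaction"] = (
    0.3 * df.coworker_support
    + 0.2 * df.team_identification
    + 0.25 * df.coworker_support * df.team_identification
    - 0.3 * df.role_ambiguity
    + rng.normal(scale=0.5, size=n)
)
df["wellbeing"] = (
    0.2 * df.coworker_support * df.team_identification * df.role_ambiguity
    + rng.normal(scale=0.5, size=n)
)

# Two-way interaction: does team identification moderate the effect of
# coworker support on job satisfaction?
m2 = smf.ols(
    "job_satisfaction ~ coworker_support * team_identification + role_ambiguity",
    data=df,
).fit()

# Three-way interaction among the focal variables predicting well-being.
m3 = smf.ols(
    "wellbeing ~ coworker_support * team_identification * role_ambiguity",
    data=df,
).fit()

print(m2.summary().tables[1])
print(m3.summary().tables[1])
```

The `*` operator in the formulas expands to the main effects plus their product terms, which is the standard way of testing the two-way and three-way moderation effects the abstract reports.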
Abstract:
Background – Wearable monitors are increasingly being used to objectively monitor physical activity in research studies within the field of exercise science. Calibration and validation of these devices are vital to obtaining accurate data. This article is aimed primarily at the physical activity measurement specialist, although the end user who is conducting studies with these devices may also benefit from knowing about this topic. Best Practices – Initially, wearable physical activity monitors should undergo unit calibration to ensure interinstrument reliability. The next step is to simultaneously collect both raw signal data (e.g., acceleration) from the wearable monitors and rates of energy expenditure, so that algorithms can be developed to convert the direct signals into energy expenditure. This process should use multiple wearable monitors and a large and diverse subject group, and should include a wide range of physical activities commonly performed in daily life (from sedentary to vigorous). Future Directions – New methods of calibration now use "pattern recognition" approaches to train the algorithms on various activities, and they provide estimates of energy expenditure that are much better than those previously available with the single-regression approach. Once a method of predicting energy expenditure has been established, the next step is to examine its predictive accuracy by cross-validating it in other populations. In this article, we attempt to summarize the best practices for calibration and validation of wearable physical activity monitors. Finally, we conclude with some ideas for future research that will move the field of physical activity measurement forward.
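A minimal sketch of the calibration-then-cross-validation workflow outlined above, using synthetic stand-ins for accelerometer features and criterion energy expenditure; the feature set and random-forest learner are assumed choices, not the authors' algorithm.

```python
# A minimal sketch of the calibration-then-cross-validation workflow outlined
# above, using synthetic stand-ins for accelerometer features and criterion
# energy expenditure; the feature set and random-forest learner are assumed
# choices, not the authors' algorithm.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

# X: per-epoch features extracted from the raw acceleration signal
# (e.g. mean, variance, dominant frequency); y: criterion energy expenditure
# (e.g. METs from indirect calorimetry); groups: participant IDs.
n_subjects, epochs_per_subject = 30, 20
X = rng.normal(size=(n_subjects * epochs_per_subject, 6))
y = 3.0 + 1.2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=len(X))
groups = np.repeat(np.arange(n_subjects), epochs_per_subject)

# "Pattern recognition" style calibration: a flexible learner trained across a
# wide range of activities, then validated by leaving whole subjects out,
# which mimics cross-validating the model in a population it was not trained on.
model = RandomForestRegressor(n_estimators=200, random_state=0)
errors = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    model.fit(X[train_idx], y[train_idx])
    errors.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print("Leave-subjects-out MAE (METs):", round(float(np.mean(errors)), 2))
```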
Abstract:
Passenger experience has become a major factor that influences the success of an airport. In this context, passenger flow simulation has been used in designing and managing airports. However, most passenger flow simulations fail to consider group dynamics when developing passenger flow models. In this paper, an agent-based model is presented to simulate passenger behaviour during the airport check-in and evacuation processes. The simulation results show that passenger behaviour can have a significant influence on the performance and utilisation of services in airport terminals. The model was created using AnyLogic software and its parameters were initialised using recent research data published in the literature.
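The published model was built in AnyLogic; purely to illustrate the idea of group-aware agents in a check-in queue, here is a toy sketch in which every number is an invented assumption.

```python
# A toy sketch (not the AnyLogic model from the paper) of how group dynamics
# can be represented by agents in a check-in queue; every parameter value here
# is an invented assumption used only to make the example run.
import random

random.seed(1)

class PassengerGroup:
    """Passengers travelling together approach a counter as one unit."""
    def __init__(self, size):
        self.size = size
        # Larger groups occupy the counter once but take longer to process.
        self.service_time = max(30.0, 60 + 30 * size + random.gauss(0, 10))

def simulate_checkin(n_groups=200, n_counters=4, mean_group_size=1.8,
                     mean_interarrival=30.0):
    counters = [0.0] * n_counters              # time each counter becomes free
    arrival, waits = 0.0, []
    for _ in range(n_groups):
        group = PassengerGroup(max(1, round(random.expovariate(1 / mean_group_size))))
        arrival += random.expovariate(1 / mean_interarrival)
        k = min(range(n_counters), key=lambda i: counters[i])  # earliest-free counter
        start = max(arrival, counters[k])
        counters[k] = start + group.service_time
        waits.extend([start - arrival] * group.size)           # members wait together
    return sum(waits) / len(waits)

print("Mean wait per passenger (s):", round(simulate_checkin(), 1))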
Abstract:
There is a wide variety of drivers for business process modelling initiatives, ranging from business evolution and process optimisation through compliance checking and process certification to process enactment. That, in turn, results in models that differ in content because they serve different purposes. In particular, processes are modelled on different abstraction levels and assume different perspectives. Vertical alignment of process models aims at handling these deviations. While the advantages of such an alignment for inter-model analysis and change propagation are beyond question, a number of challenges still have to be addressed. In this paper, we discuss three main challenges for vertical alignment in detail. Against this background, the potential application of techniques from the field of process integration is critically assessed. On this basis, we identify specific research questions that guide the design of a framework for model alignment.
Abstract:
Purpose – The purpose of this paper is to foster a common understanding of business process management (BPM) by proposing a set of ten principles that characterize BPM as a research domain and guide its successful use in organizational practice. Design/methodology/approach – The identification and discussion of the principles reflect our viewpoint, which was informed by extant literature and focus groups, including 20 BPM experts from academia and practice. Findings – We identify ten principles which represent a set of capabilities essential for mastering contemporary and future challenges in BPM. Their antonyms signify potential roadblocks and bad practices in BPM. We also identify a set of open research questions that can guide future BPM research. Research limitations/implications – Our findings suggest several areas of research regarding each of the identified principles of good BPM. Also, the principles themselves should be systematically and empirically examined in future studies. Practical implications – Our findings allow practitioners to comprehensively scope their BPM initiatives and provide general guidance for BPM implementation. Moreover, the principles may also serve to tackle contemporary issues in other management areas. Originality/value – This is the first paper that distills principles of BPM in the sense of both good and bad practice recommendations. The value of the principles lies in providing normative advice to practitioners as well as in identifying open research areas for academia, thereby extending the reach and richness of BPM beyond its traditional frontiers.
Abstract:
Small Businesses account for a significant portion of the Australian business sector. With Business Process Management (BPM) gaining prominence in recent decades as a means of improving business performance, it would seem to be only a matter of time before it gains momentum within the Small Business sector. One may even question why it has not already achieved more traction within the sector. This case study involves a BPM initiative to develop process infrastructure in a Small Business in its establishment phase. It explores whether mainstream BPM tools, techniques and technologies can be applied in a Small Business setting. The chapter provides a background to the case organisation, outlines the activities undertaken in the BPM initiative and distils key observations drawn from participation in the initiative and consultation with stakeholders. Based on the case study experiences, a number of implications are identified for further consideration by the BPM discipline as it continues to address the question of how it can become more widely adopted amongst Small Businesses.
Abstract:
This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose a design that maximises the mean of a utility, where the utility is a function of the posterior distribution; its estimation therefore requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither analytically tractable nor computable in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well-established method in the literature, and the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface, with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models: a simple tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
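A minimal sketch of the indirect-inference mapping step described in the abstract: simulate from the generative model over a grid of parameter values, fit a tractable auxiliary model to each simulated data set, and estimate the map from generative to auxiliary parameters by regression. The pure-death simulator and Poisson-mean auxiliary model below are illustrative stand-ins, not the paper's exact models, and the MCMC and utility-estimation steps are omitted.

```python
# A minimal sketch of the indirect-inference mapping step described above:
# simulate from the generative model over a grid of parameter values, fit a
# tractable auxiliary model to each simulated data set, and estimate the map
# from generative to auxiliary parameters by regression. The pure-death
# simulator and Poisson-mean auxiliary model are illustrative stand-ins, not
# the paper's exact models; the MCMC and utility-estimation steps are omitted.
import numpy as np

rng = np.random.default_rng(42)

def simulate_death_process(death_rate, n0=50, t_obs=1.0, n_reps=20):
    """Survivors at time t_obs when each of n0 individuals dies independently."""
    p_survive = np.exp(-death_rate * t_obs)
    return rng.binomial(n0, p_survive, size=n_reps)

def auxiliary_mle(counts):
    """Poisson auxiliary model: its maximum-likelihood estimate is the mean."""
    return counts.mean()

# Build the map theta -> phi(theta) on a grid of generative parameter values.
thetas = np.linspace(0.1, 2.0, 40)
phis = np.array([auxiliary_mle(simulate_death_process(th)) for th in thetas])

# Smooth the map with a low-order polynomial fit in log space (survivor counts
# decay roughly exponentially in the death rate).
coeffs = np.polyfit(thetas, np.log(phis), deg=2)

def theta_to_phi(theta):
    """Estimated mapping from generative to auxiliary parameters."""
    return np.exp(np.polyval(coeffs, theta))

# The fitted map lets the tractable auxiliary likelihood stand in for the
# intractable one when forming an II posterior and estimating design utilities.
print("Predicted auxiliary mean at theta = 0.5:", round(float(theta_to_phi(0.5)), 2))
```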
Abstract:
Continuous monitoring of diesel engine performance is critical for early detection of fault developments in an engine before they materialize into a functional failure. Instantaneous crank angular speed (IAS) analysis is one of a few non-intrusive condition monitoring techniques that can be utilized for such a task. Furthermore, the technique is more suitable for mass industry deployment than other non-intrusive methods such as vibration and acoustic emission techniques, owing to its low instrumentation cost, smaller data size and robust signal clarity, since IAS is not affected by engine operation noise or noise from the surrounding environment. A combination of IAS and order analysis was employed in this experimental study, and the major order component of the IAS spectrum was used for engine loading estimation and fault diagnosis of a four-stroke four-cylinder diesel engine. It was shown that IAS analysis can provide useful information about engine speed variation caused by changing piston momentum and crankshaft acceleration during the engine combustion process. It was also found that the major order component of the IAS spectrum, directly associated with the engine firing frequency (at twice the mean shaft rotating speed), can be utilized to estimate the engine loading condition regardless of whether the engine is operating in a healthy condition or with faults. The amplitude of this order component follows a distinctive exponential curve as the loading condition changes. A mathematical relationship was then established in the paper to estimate the engine power output based on the amplitude of this order component of the IAS spectrum. It was further illustrated that the IAS technique can be employed for the detection of a simulated exhaust valve fault in this study.
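A minimal sketch of extracting the firing-order amplitude from an IAS signal and fitting an exponential amplitude-load relationship; the signal and all parameter values below are synthetic assumptions, not the paper's measurements.

```python
# A minimal sketch of extracting the firing-order amplitude from an IAS signal
# and fitting an exponential amplitude-load relationship; the signal and all
# parameter values below are synthetic assumptions, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
fs = 5000.0                        # sampling rate of the IAS signal, Hz
mean_speed_hz = 25.0               # mean shaft speed (1500 rpm)
firing_order = 2                   # firing frequency = 2 x shaft speed here
t = np.arange(0, 10, 1 / fs)

def order_amplitude(ias, order):
    """Amplitude of the IAS spectrum at a given shaft order."""
    spectrum = 2 * np.abs(np.fft.rfft(ias - ias.mean())) / len(ias)
    freqs = np.fft.rfftfreq(len(ias), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - order * mean_speed_hz))]

# Synthetic IAS traces whose firing-order amplitude grows with load.
loads = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # fraction of full load
amps = []
for load in loads:
    a2 = 0.2 * np.exp(2.0 * load)                  # assumed load dependence
    ias = (mean_speed_hz
           + a2 * np.sin(2 * np.pi * firing_order * mean_speed_hz * t)
           + 0.05 * rng.normal(size=t.size))
    amps.append(order_amplitude(ias, firing_order))

def expo(x, a, b):
    return a * np.exp(b * x)

# Fit the exponential curve; inverting it maps a measured order amplitude back
# to an estimated loading condition (and hence power output).
(a, b), _ = curve_fit(expo, loads, np.array(amps), p0=(0.2, 2.0))
print("Fitted amplitude-load curve: %.3f * exp(%.3f * load)" % (a, b))
```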
Abstract:
This paper introduces an integral approach to the study of plasma-surface interactions during the catalytic growth of selected nanostructures (NSs). This approach involves a basic understanding of the plasma-specific effects in NS nucleation and growth, theoretical modelling, numerical simulations, plasma diagnostics, and surface microanalysis. Using the example of plasma-assisted growth of surface-supported single-walled carbon nanotubes, we discuss how the combination of these techniques may help improve the outcomes of the growth process. A specific focus here is on the effects of nanoscale plasma-surface interactions on NS growth and on how the available techniques may be used, both in situ and ex situ, to optimize the growth process and the structural parameters of NSs.
Abstract:
Highly efficient solar cells (conversion efficiency 11.9%, fill factor 70%) based on vertically aligned single-crystalline nanostructures are fabricated without any pre-fabricated p-n junctions in a very simple, single-step process of Si nanoarray formation by etching p-type Si(100) wafers in low-temperature, environmentally friendly plasmas of argon and hydrogen mixtures.
Abstract:
An effective technique is proposed to improve the precision and throughput of energetic ion condensation through dielectric nanoporous templates and to reduce nanopore clogging by using a finely tuned pulsed bias. Multiscale numerical simulations of ion deposition show the possibility of controlling the dynamic charge balance on the template's upper surface to minimize ion deposition on nanopore sidewalls and to deposit ions selectively on the substrate surface in contact with the pore opening. In this way, the shapes of nanodots in template-assisted nanoarray fabrication can be effectively controlled. The results are applicable to various processes involving porous dielectric nanomaterials and dense nanoarrays.
Abstract:
This feature article introduces a deterministic approach for the rapid, single-step, direct synthesis of metal oxide nanowires. This approach is based on the exposure of thin metal samples to reactive oxygen plasmas and does not require any intervening processing or external substrate heating. The critical roles of the reactive oxygen plasmas, surface processes, and plasma-surface interactions that enable this growth are critically examined from a deterministic viewpoint. The essentials of the experimental procedures and reactor design are presented and related to the key process requirements. The nucleation and growth kinetics are discussed for the typical solid-liquid-solid and vapor-solid-solid mechanisms related to the synthesis of oxide nanowires of metals with low (Ga, Cd) and high (Fe) melting points, respectively. Numerical simulations focus on the possibility of predicting the nanowire nucleation points through the interaction of the plasma radicals and ions with nanoscale morphological features on the surface, as well as of controlling the localized 'hot spots' that in turn determine the nanowire size and shape. This generic approach can be applied to virtually any oxide nanoscale system and further confirms the applicability of plasma nanoscience approaches for deterministic nanoscale synthesis and processing.
Abstract:
The possibility of discriminating between the relative importance of the fluxes of energy and matter in plasma-surface interactions is demonstrated by energy flux measurements in low-temperature plasmas ignited by a radio-frequency discharge (power range 50-250 W, pressure range 8-11.5 Pa) in Ar, Ar + H2, and Ar + H2 + CH4 gas mixtures typically used in the nanoscale synthesis and processing of silicon- and carbon-based nanostructures. It is shown that by varying the gas composition and pressure, the discharge power, and the surface bias, one can effectively control the surface temperature and the matter supply rates. The experimental findings are explained in terms of plasma-specific reactions in the plasma bulk and on the surface.