893 results for Statistical process control
Abstract:
Process modeling grammars are used to create models of business processes. In this paper, we discuss how different routing symbol designs affect an individual's ability to comprehend process models. We conduct an experiment with 154 students to ascertain which visual design principles influence process model comprehension. Our findings suggest that design principles related to perceptual discriminability and pop out improve comprehension accuracy. Furthermore, semantic transparency and aesthetic design of symbols lower the perceived difficulty of comprehension. Our results inform important principles about notational design of process modeling grammars and the effective use of process modeling in practice.
Abstract:
Use of focus groups as a technique of inquiry is gaining attention in the area of health-care research. This paper will report on the technique of focus group interviewing to investigate the role of the infection control practitioner. Infection control is examined as a specialty area of health-care practice that has received little research attention to date. Additionally, it is an area of practice that is expanding in response to social, economic and microbiological forces. The focus group technique in this study helped a group of infection control practitioners from urban, regional and rural areas throughout Queensland identify and categorise their daily work activities. The outcomes of this process were then analysed to identify the growth in breadth and complexity of the role of the infection control practitioner in the contemporary health-care environment. Findings indicate that the role of the infection control practitioner in Australia has undergone changes consistent with and reflecting changing models of health-care delivery.
Abstract:
The following proposal is submitted by the AICA's credentialling and certification subcommittee for your consideration. It outlines a process and procedure for interim credentialling of infection control practitioners. In submitting this proposal, the subcommittee acknowledges that, while competency-based education is the preferred process for credentialling, there are clinicians who, in the absence of educational opportunities, have developed a specialist level of competency in infection control practice through self education and experience. Committee members also recognise the need for self-regulation of accrediting processes, to maintain standards in practice and support members in their clinical roles. We ask you to review the following proposal and invite your comments and critique, to be received by the last week in January 1998.
Abstract:
Background: Despite important implications for the budgets, statistical power and generalisability of research findings, detailed reports of recruitment and retention in randomised controlled trials (RCTs) are rare. The NOURISH RCT evaluated a community-based intervention for first-time mothers that promoted protective infant feeding practices as a primary prevention strategy for childhood obesity. The aim of this paper is to provide a detailed description and evaluation of the recruitment and retention strategies used. Methods: A two-stage recruitment process designed to provide a consecutive sampling framework was used. First-time mothers delivering healthy term infants were initially approached in postnatal wards of the major maternity services in two Australian cities for consent to later contact (Stage 1). When infants were about four months old, mothers were re-contacted by mail for enrolment (Stage 2), baseline measurements (Time 1) and subsequent random allocation to the intervention or control condition. Outcomes were assessed at infant ages 14 months (Time 2) and 24 months (Time 3). Results: At Stage 1, 86% of eligible mothers were approached and, of these women, 76% consented to later contact. At Stage 2, 3% had become ineligible and 76% could be re-contacted. Of the latter, 44% consented to full enrolment and were allocated. This represented 21% of mothers screened as eligible at Stage 1. Retention at Time 3 was 78%. Mothers who did not consent or discontinued the study were younger and less likely to have a university education. Conclusions: The consent and retention rates of our sample of first-time mothers are comparable with or better than those of other similar studies. The recruitment strategy used allowed detailed information on non-consenters to be collected; thus selection bias could be estimated.
Recommendations for future studies include being able to contact participants via mobile phone (particularly text messaging), offering home visits to reduce participant burden, and considering the use of financial incentives to support participant retention.
Abstract:
The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty in comparison to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework, allowing the automatic pooling of several linear algebra operations into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear-algebra-centred algorithms from R to C++ becomes straightforward. The algorithms retain their overall structure as well as readability, all while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
Abstract:
Widespread adoption by electricity utilities of Non-Conventional Instrument Transformers, such as optical or capacitive transducers, has been limited due to the lack of a standardised interface and multi-vendor interoperability. Low power analogue interfaces are being replaced by IEC 61850-9-2 and IEC 61869-9 digital interfaces that use Ethernet networks for communication. These ‘process bus’ connections achieve significant cost savings by simplifying connections between switchyard and control rooms; however, the in-service performance when these standards are employed is largely unknown. The performance of real-time Ethernet networks and time synchronisation was assessed using a scale model of a substation automation system. The test bed was constructed from commercially available timing and protection equipment supplied by a range of vendors. Test protocols were developed to thoroughly evaluate the performance of Ethernet networks and network-based time synchronisation. The suitability of IEEE Std 1588 Precision Time Protocol (PTP) as a synchronising system for sampled values was tested in the steady state and under transient conditions. Similarly, the performance of hardened Ethernet switches designed for substation use was assessed under a range of network operating conditions. This paper presents test methods that use a precision Ethernet capture card to accurately measure PTP and network performance. These methods can be used for product selection and to assess ongoing system performance as substations age. Key findings on the behaviour of multi-function process bus networks are presented. System-level tests were performed using a Real Time Digital Simulator and a transformer protection relay with sampled value and Generic Object Oriented Substation Events (GOOSE) capability. These include the interactions between sampled values, PTP and GOOSE messages.
Our research has demonstrated that several protocols can be used on a shared process bus, even with very high network loads. This should provide confidence that this technology is suitable for transmission substations.
Abstract:
The Beauty Leaf tree (Calophyllum inophyllum) is a potential source of non-edible vegetable oil for producing future generation biodiesel because of its ability to grow in a wide range of climate conditions, easy cultivation, high fruit production rate, and the high oil content in the seed. This plant naturally occurs in the coastal areas of Queensland and the Northern Territory in Australia, and is also widespread in south-east Asia, India and Sri Lanka. Although Beauty Leaf is traditionally used as a source of timber and as an ornamental plant, its potential as a source of second generation biodiesel is yet to be exploited. In this study, the extraction process from the Beauty Leaf oil seed has been optimised in terms of seed preparation, moisture content and oil extraction methods. The two methods that have been considered to extract oil from the seed kernel are mechanical oil extraction using an electric powered screw press, and chemical oil extraction using n-hexane as an oil solvent. The study found that seed preparation has a significant impact on oil yields, especially in the screw press extraction method. Kernels prepared to 15% moisture content provided the highest oil yields for both extraction methods. Mechanical extraction using the screw press can produce oil from correctly prepared product at a low cost; however, overall this method is ineffective, with relatively low oil yields. Chemical extraction was found to be a very effective method of oil extraction owing to its consistent performance and high oil yield, but the cost of production was relatively higher due to the high cost of solvent. However, a solvent recycle system can be implemented to reduce the production cost of Beauty Leaf biodiesel. The findings of this study are expected to serve as the basis for industrial-scale biodiesel production from Beauty Leaf.
Abstract:
Norms regulate the behaviour of their subjects and define what is legal and what is illegal. Norms typically describe the conditions under which they are applicable and the normative effects that result from their application. On the other hand, process models specify how a business operation or service is to be carried out to achieve a desired outcome. Norms can have a significant impact on how business operations are conducted, and they can apply to the whole or part of a business process. For example, they may impose conditions on different aspects of a process (e.g., perform tasks in a specific sequence (control-flow), at a specific time or within a certain time frame (temporal aspect), by specific people (resources)). We propose a framework that provides the formal semantics of the normative requirements for determining whether a business process complies with a normative document (where a normative document can be understood in a very broad sense, ranging from internal policies to best practice policies, to statutory acts). We also present a classification of normative requirements based on the notion of different types of obligations and the effects of violating these obligations.
Abstract:
We examine which capabilities technologies provide to support collaborative process modeling. We develop a model that explains how technology capabilities impact cognitive group processes, and how they lead to improved modeling outcomes and positive technology beliefs. We test this model through a free simulation experiment of collaborative process modelers structured around a set of modeling tasks. With our study, we provide an understanding of the process of collaborative process modeling, and detail implications for research and guidelines for the practical design of collaborative process modeling.
Abstract:
A novel in-cylinder pressure method for determining ignition delay has been proposed and demonstrated. This method proposes a new Bayesian statistical model to resolve the start of combustion, defined as the point at which the band-pass in-cylinder pressure deviates from background noise and the combustion resonance begins. Further, it is demonstrated that this method remains accurate in situations where noise is present. The start of combustion can be resolved for each cycle without the need for ad hoc methods such as cycle averaging. Therefore, this method allows for analysis of consecutive cycles and inter-cycle variability studies. Ignition delay obtained by this method and by the net rate of heat release have been shown to give good agreement. However, the use of combustion resonance to determine the start of combustion is preferable over the net rate of heat release method because it does not rely on knowledge of heat losses and will still function accurately in the presence of noise. Results for a six-cylinder turbo-charged common-rail diesel engine run with neat diesel fuel at full, three-quarter and half load have been presented. Under these conditions the ignition delay was shown to increase as the load was decreased, with a significant increase in ignition delay at half load when compared with the three-quarter and full loads.
Abstract:
Background: Previous attempts at costing infection control programmes have tended to focus on accounting costs rather than economic costs. For studies using economic costs, estimates tend to be quite crude and probably underestimate the true cost. One of the largest costs of any intervention is staff time, but this cost is difficult to quantify and has been largely ignored in previous attempts. Aim: To design and evaluate the costs of hospital-based infection control interventions or programmes. This article also discusses several issues to consider when costing interventions, and suggests strategies for overcoming these issues. Methods: Previous literature and techniques in both health economics and psychology are reviewed and synthesized. Findings: This article provides a set of generic, transferable costing guidelines. Key principles such as definition of study scope and focus on large costs, as well as pitfalls (e.g. overconfidence and uncertainty), are discussed. Conclusion: These new guidelines can be used by hospital staff and other researchers to cost their infection control programmes and interventions more accurately.
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series.
Abstract:
The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful for model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at reducing the repository’s complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.
Abstract:
When a community, already torn by an event such as a prolonged war, is then hit by a natural disaster, the negative impact of this subsequent disaster in the longer term can be extremely devastating. Natural disasters further damage already destabilised and demoralised communities, making it much harder for them to be resilient and recover. Communities often face enormous challenges during the immediate recovery and the subsequent long-term reconstruction periods, mainly due to the lack of a viable community involvement process. In post-war settings, affected communities, including those internally displaced, are often conceived as being completely disabled and are hardly ever consulted when reconstruction projects are being initiated. This lack of community involvement often leads to poor project planning, decreased community support, and an unsustainable completed project. The impact of war, coupled with the tensions created by uninhabitable and poor housing provision, often hinders the affected residents from integrating permanently into their home communities. This paper outlines a number of fundamental factors that act as barriers to community participation following natural disasters in post-war settings. The paper is based on a statistical analysis of, and findings from, a questionnaire survey administered in early 2012 in Afghanistan.
Abstract:
During the last several decades, the quality of natural resources and their services has been exposed to significant degradation from increased urban populations combined with the sprawl of settlements, development of transportation networks and industrial activities (Dorsey, 2003; Pauleit et al., 2005). As a result of this environmental degradation, a sustainable framework for urban development is required to provide the resilience of natural resources and ecosystems. Sustainable urban development refers to the management of cities with adequate infrastructure to support the needs of its population for the present and future generations as well as maintain the sustainability of its ecosystems (UNEP/IETC, 2002; Yigitcanlar, 2010). One of the important strategic approaches for planning sustainable cities is ‘ecological planning’. Ecological planning is a multi-dimensional concept that aims to preserve biodiversity richness and ecosystem productivity through the sustainable management of natural resources (Barnes et al., 2005). As stated by Baldwin (1985, p.4), ecological planning is the initiation and operation of activities to direct and control the acquisition, transformation, disruption and disposal of resources in a manner capable of sustaining human activities with a minimum disruption of ecosystem processes. Therefore, ecological planning is a powerful method for creating sustainable urban ecosystems. In order to explore the city as an ecosystem and investigate the interaction between the urban ecosystem and human activities, a holistic urban ecosystem sustainability assessment approach is required. Urban ecosystem sustainability assessment serves as a tool that helps policy and decision-makers in improving their actions towards sustainable urban development.
There are several methods used in urban ecosystem sustainability assessment, among which sustainability indicators and composite indices are the most commonly used tools for assessing progress towards sustainable land use and urban management. Currently, a variety of composite indices are available to measure sustainability at the local, national and international levels. However, the main conclusion drawn from the literature review is that they are too broad to be applied to assess local and micro-level sustainability, and no benchmark value exists for most of the indicators due to limited data availability and non-comparable data across countries. Mayer (2008, p. 280) supports this by stating that "as different as the indices may seem, many of them incorporate the same underlying data because of the small number of available sustainability datasets". Mori and Christodoulou (2011) also argue that this relative evaluation and comparison brings along biased assessments, as data only exist for some entities, which also means excluding many nations from evaluation and comparison. Thus, there is a need to develop an accurate and comprehensive micro-level urban ecosystem sustainability assessment method. In order to develop such a model, it is practical to adopt an approach that utilises indicators for collecting data, designates certain threshold values or ranges, performs a comparative sustainability assessment via indices at the micro-level, and aggregates these assessment findings to the local level. Through this approach and model, it is possible to produce sufficient and reliable data to enable comparison at the local level, and to provide useful results that inform local planning, conservation and development decision-making processes to secure sustainable ecosystems and urban futures.
To advance research in this area, this study investigated the environmental impacts of an existing urban context by using a composite index, with the aim of identifying the interaction between urban ecosystems and human activities in the context of environmental sustainability. In this respect, this study developed a new comprehensive urban ecosystem sustainability assessment tool entitled the ‘Micro-level Urban-ecosystem Sustainability IndeX’ (MUSIX). The MUSIX model is an indicator-based indexing model that investigates the factors affecting urban sustainability in a local context. The model outputs provide local and micro-level sustainability reporting guidance to help policy-making concerning environmental issues. A multi-method research approach, based on both quantitative and qualitative analysis, was employed in the construction of the MUSIX model. First, qualitative research was conducted through an interpretive and critical literature review to develop a theoretical framework and select indicators. Afterwards, quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. The MUSIX model was tested in four pilot study sites selected from the Gold Coast City, Queensland, Australia. The model results capture the sustainability performance of current urban settings with reference to six main issues of urban development: (1) hydrology, (2) ecology, (3) pollution, (4) location, (5) design, and (6) efficiency. For each category, a set of core indicators was assigned which are intended to: (1) benchmark the current situation, strengths and weaknesses; (2) evaluate the efficiency of implemented plans; and (3) measure the progress towards sustainable development.
While the indicator set of the model provided specific information about the environmental impacts in the area at the parcel scale, the composite index score provided general information about the sustainability of the area at the neighbourhood scale. Finally, in light of the model findings, integrated ecological planning strategies were developed to guide the preparation and assessment of development and local area plans in conjunction with the Gold Coast Planning Scheme, which establishes regulatory provisions to achieve ecological sustainability through the formulation of place codes, development codes, constraint codes and other assessment criteria that provide guidance for best practice development solutions. These strategies can be summarised as follows:
• Establishing hydrological conservation through sustainable stormwater management in order to preserve the Earth’s water cycle and aquatic ecosystems;
• Providing ecological conservation through sustainable ecosystem management in order to protect biological diversity and maintain the integrity of natural ecosystems;
• Improving environmental quality through developing pollution prevention regulations and policies in order to promote high quality water resources, clean air and enhanced ecosystem health;
• Creating sustainable mobility and accessibility through designing better local services and walkable neighbourhoods in order to promote safe environments and healthy communities;
• Sustainable design of urban environment through climate responsive design in order to increase the efficient use of solar energy to provide thermal comfort; and
• Use of renewable resources through creating efficient communities in order to provide long-term management of natural resources for the sustainability of future generations.