560 results for Process capability index
Abstract:
The present paper focuses on some interesting classes of process-control games, where winning essentially means successfully controlling the process. A master for one of these games is an agent who plays a winning strategy. In this paper we investigate situations in which even a complete model (given by a program) of a particular game does not provide enough information to synthesize—even incrementally—a winning strategy. However, if in addition to getting a program, a machine may also watch masters play winning strategies, then the machine is able to incrementally learn a winning strategy for the given game. Studied are successful learning from arbitrary masters and from pedagogically useful selected masters. It is shown that selected masters are strictly more helpful for learning than are arbitrary masters. Both for learning from arbitrary masters and for learning from selected masters, though, there are cases where one can learn programs for winning strategies from masters but not if one is required to learn a program for the master's strategy itself. Both for learning from arbitrary masters and for learning from selected masters, one can learn strictly more by watching m+1 masters than one can learn by watching only m. Last, a simulation result is presented where the presence of a selected master reduces the complexity from infinitely many semantic mind changes to finitely many syntactic ones.
Abstract:
This paper describes Electronic Blocks, a new robot construction element designed to allow children as young as age three to build and program robotic structures. The Electronic Blocks encapsulate input, output and logic concepts in tangible elements that young children can use to create a wide variety of physical agents. The children are able to determine the behavior of these agents by the choice of blocks and the manner in which they are connected. The Electronic Blocks allow children without any knowledge of mechanical design or computer programming to create and control physically embodied robots. They facilitate the development of technological capability by enabling children to design, construct, explore and evaluate dynamic robotics systems. A study of four- and five-year-old children using the Electronic Blocks has demonstrated that the interface is well suited to young children. The complexity of the implementation is hidden from the children, leaving the children free to autonomously explore the functionality of the blocks. As a consequence, children are free to move their focus beyond the technology. Instead they are free to focus on the construction process, and to work on goals related to the creation of robotic behaviors and interactions. As a resource for robot building, the blocks have proved to be effective in encouraging children to create robot structures, allowing children to design and program robot behaviors.
Abstract:
In this conversation, Kevin K. Kumashiro shares his reflections on challenges to publishing anti-oppressive research in educational journals. He then invites eight current and former editors of leading educational research journals--William F. Pinar, Elizabeth Graue, Carl A. Grant, Maenette K. P. Benham, Ronald H. Heck, James Joseph Scheurich, Allan Luke, and Carmen Luke--to critique and expand on his analysis. Kumashiro begins the conversation by describing his own experiences submitting manuscripts to educational research journals and receiving comments from anonymous reviewers and journal editors. He suggests three ways to rethink the collaborative potential of the peer-review process: as constructive, as multilensed, and as situated. The eight current and former editors of leading educational research journals then critique and expand Kumashiro's analysis. Kumashiro concludes the conversation with additional reflections on barriers and contradictions involved in advancing anti-oppressive educational research in educational journals.
Abstract:
Business process modeling is widely regarded as one of the most popular forms of conceptual modeling. However, little is known about the capabilities and deficiencies of process modeling grammars and how existing deficiencies impact actual process modeling practice. This paper is a first contribution towards a theory-driven, exploratory empirical investigation of the ontological deficiencies of process modeling with the industry standard Business Process Modeling Notation (BPMN). We perform an analysis of BPMN using a theory of ontological expressiveness. Through a series of semi-structured interviews with BPMN adopters we explore empirically the actual use of this grammar. Nine ontological deficiencies related to the practice of modeling with BPMN are identified, for example, the capture of business rules and the specification of process decompositions. We also uncover five contextual factors that impact on the use of process modeling grammars, such as tool support and modeling conventions. We discuss implications for research and practice, highlighting the need for consideration of representational issues and contextual factors in decisions relating to BPMN adoption in organizations.
Abstract:
The task addressed in this thesis is the automatic alignment of an ensemble of misaligned images in an unsupervised manner. This application is especially useful in computer vision applications where annotations of the shape of an object of interest present in a collection of images are required. Performing this task manually is a slow, tedious, expensive and error-prone process which hinders the progress of research laboratories and businesses. Most recently, the unsupervised removal of geometric variation present in a collection of images has been referred to as congealing, based on the seminal work of Learned-Miller [21]. The only assumptions made in congealing are that the parametric nature of the misalignment is known a priori (e.g. translation, similarity, affine, etc.) and that the object of interest is guaranteed to be present in each image. The capability to congeal an ensemble of misaligned images stemming from the same object class has numerous applications in object recognition, detection and tracking. This thesis concerns itself with the construction of a congealing algorithm titled least-squares congealing, which is inspired by the well-known image-to-image alignment algorithm developed by Lucas and Kanade [24]. The algorithm is shown to have superior performance characteristics when compared to previously established methods: canonical congealing by Learned-Miller [21] and stochastic congealing by Zöllei [39].
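Least-squares congealing itself operates on parametric warps of 2-D images; as a minimal sketch of the underlying idea only (align every ensemble member to the mean of the others by minimising a sum of squared differences), the following 1-D, translation-only toy uses invented names and is not the thesis's algorithm:

```python
import numpy as np

def congeal(signals, max_shift=5, iters=10):
    """Translation-only, 1-D congealing sketch: repeatedly re-align each
    signal to the mean of the others by exhaustive shift search."""
    shifts = [0] * len(signals)
    for _ in range(iters):
        for i in range(len(signals)):
            # Mean of all *other* signals at their current shifts.
            others = [np.roll(s, -sh)
                      for j, (s, sh) in enumerate(zip(signals, shifts))
                      if j != i]
            template = np.mean(others, axis=0)
            # Pick the shift minimising the sum of squared differences.
            shifts[i] = min(range(-max_shift, max_shift + 1),
                            key=lambda d: float(np.sum(
                                (np.roll(signals[i], -d) - template) ** 2)))
    return shifts
```

Alignment is recovered only up to a common global shift, so it is the pairwise differences between the returned shifts that matter; the thesis replaces the exhaustive search with a Lucas-Kanade-style least-squares update over warp parameters.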
Abstract:
Dental pulp cells (DPCs) are capable of differentiating into odontoblasts that secrete reparative dentin after pulp injury. The molecular mechanisms governing reparative dentinogenesis are yet to be fully understood. Here we investigated the differential protein profile of human DPCs undergoing odontogenic induction for 7 days. Using two-dimensional differential gel electrophoresis coupled with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, 23 protein spots related to early odontogenic differentiation were identified. These proteins included cytoskeleton proteins, nuclear proteins, cell membrane-bound molecules, proteins involved in matrix synthesis, and metabolic enzymes. The expression of four identified proteins, namely heterogeneous nuclear ribonucleoprotein C, annexin VI, collagen type VI, and matrilin-2, was confirmed by Western blot and real-time polymerase chain reaction analyses. This study generated a proteome reference map during odontoblast-like differentiation of human DPCs, which will be valuable for better understanding the underlying molecular mechanisms of odontoblast-like differentiation.
Abstract:
The Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 mandated the consideration of safety in the regional transportation planning process. As part of National Cooperative Highway Research Program Project 8-44, "Incorporating Safety into the Transportation Planning Process," we conducted a telephone survey to assess safety-related activities and expertise at Governors Highway Safety Associations (GHSAs), and GHSA relationships with metropolitan planning organizations (MPOs) and state departments of transportation (DOTs). The survey results were combined with statewide crash data to enable exploratory modeling of the relationship between GHSA policies and programs and statewide safety. The modeling objective was to illuminate current hurdles to ISTEA implementation, so that appropriate institutional, analytical, and personnel improvements can be made. The study revealed that coordination of transportation safety across DOTs, MPOs, GHSAs, and departments of public safety is generally beneficial to the implementation of safety. In addition, better coordination is characterized by more positive and constructive attitudes toward incorporating safety into planning.
Abstract:
This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
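As a loose illustration only (not the paper's algorithms, which operate on full process graphs with matched node labels and gateways), merging can be thought of as a frequency-annotated union of edges and a digest as the edges shared by many variants; all names below are invented:

```python
from collections import Counter

def merge_models(variants):
    """Union of the variants' edges, annotated with how many
    variants contain each edge."""
    edge_count = Counter()
    for edges in variants:          # each variant: a set of (source, target) edges
        edge_count.update(edges)
    return set(edge_count), edge_count

def digest(edge_count, min_variants):
    """Edges that occur in at least `min_variants` of the variants."""
    return {e for e, c in edge_count.items() if c >= min_variants}
```

A digest computed with a high threshold surfaces the most recurring fragments the abstract mentions, which are the candidates for optimization effort.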
Abstract:
This paper outlines a method of constructing narratives about an individual’s self-efficacy. Self-efficacy is defined as “people’s judgments of their capabilities to organise and execute courses of action required to attain designated types of performances” (Bandura, 1986, p. 391), and as such represents a useful construct for thinking about personal agency. Social cognitive theory provides the theoretical framework for understanding the sources of self-efficacy, that is, the elements that contribute to a sense of self-efficacy. The narrative approach adopted offers an alternative to traditional, positivist psychology, characterised by a preoccupation with measuring psychological constructs (like self-efficacy) by means of questionnaires and scales. It is argued that these instruments yield scores which are somewhat removed from the lived experience of the person—respondent or subject—associated with the score. The method involves a cyclical and iterative process using qualitative interviews to collect data from participants – four mature-aged university students. The method builds on a three-interview procedure designed for life history research (Dolbeare & Schuman, cited in Seidman, 1998). This is achieved by introducing reflective homework tasks, as well as written data generated by research participants, as they are guided in reflecting on those experiences (including behaviours, cognitions and emotions) that constitute a sense of self-efficacy, in narrative and by narrative. The method illustrates how narrative analysis is used “to produce stories as the outcome of the research” (Polkinghorne, 1995, p. 15), with detail and depth contributing to an appreciation of the ‘lived experience’ of the participants. The method is highly collaborative, with narratives co-constructed by researcher and research participants.
The research outcomes suggest an enhanced understanding of self-efficacy contributes to motivation, application of effort and persistence in overcoming difficulties. The paper concludes with an evaluation of the research process by the students who participated in the author’s doctoral study.
Abstract:
This paper explores models of teaching and learning music composition in higher education. It analyses the pedagogical approaches apparent in the literature on teaching and learning composition in schools and universities, and introduces a teaching model as: learning from the masters; mastery of techniques; exploring ideas; and developing voice. It then presents a learning model developed from a qualitative study into students’ experiences of learning composition at university as: craft, process and art. The relationship between the students’ experiences and the pedagogical model is examined. Finally, the implications for composition curricula in higher education are presented.
Abstract:
Diminished funding, demands for greater efficiency and higher public accountability have led to a rapid expansion of interest in the bibliometric assessment of the research performance of universities. A pilot study is conducted to provide a preliminary overview of the research performance of building and construction schools or departments through the analysis of bibliometric indicators, including the journal impact factor (JIF) published by the Institute for Scientific Information (ISI). The suitability of bibliometric evaluation approaches as a measure of research quality in the building and construction management research field is discussed.
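For reference, the two-year journal impact factor referred to here is the number of citations received in a year to a journal's items from the previous two years, divided by the number of citable items published in those two years; a minimal sketch with invented variable names:

```python
def journal_impact_factor(citations, items, year):
    """Two-year JIF for `year`.

    citations[y]: citations received in `year` to items the journal published in year y
    items[y]:     number of citable items the journal published in year y
    """
    cited = citations[year - 1] + citations[year - 2]
    published = items[year - 1] + items[year - 2]
    return cited / published
```

What counts as a "citable item" is itself an editorial decision of the indexing service, which is one source of the suitability concerns the abstract raises.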
Abstract:
Traffic control at road junctions is one of the major concerns in most metropolitan cities. Controllers of various approaches are available, and the required control action is the effective green time assigned to each traffic stream within a traffic-light cycle. The application of fuzzy logic provides the controller with the capability to handle uncertain aspects of the system, such as drivers’ behaviour and random arrivals of vehicles. When turning traffic is allowed at the junction, the number of phases in the traffic-light cycle increases. The additional input variables inevitably complicate the controller and hence slow down the decision-making process, which is critical in this real-time control problem. In this paper, a hierarchical fuzzy logic controller is proposed to tackle the traffic control problem at a two-way road junction with turning traffic. The two levels of fuzzy logic controllers devise the minimum effective green time and fine-tune it, respectively, at each phase of a traffic-light cycle. The complexity of the controller at each level is reduced by the smaller rule set. The performance of this hierarchical controller is examined by comparison with a fixed-time controller under various traffic conditions. Substantial delay reduction is achieved, and the performance and limitations of the controller are discussed.
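As a hedged sketch of the two-level idea only (the membership functions, rule consequents and names below are invented, not the paper's rule base), a level-1 controller can map queue length to a base green time and a level-2 controller can fine-tune it from the arrival rate, each needing only a small rule set:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def base_green(queue):
    """Level 1: coarse effective green time (s) from queue length (vehicles)."""
    rules = [(tri(queue, -1, 0, 10), 10.0),   # short queue  -> short green
             (tri(queue, 0, 10, 20), 25.0),   # medium queue -> medium green
             (tri(queue, 10, 20, 40), 40.0)]  # long queue   -> long green
    den = sum(w for w, _ in rules)
    return sum(w * g for w, g in rules) / den if den else 10.0

def fine_tune(green, arrival_rate):
    """Level 2: adjust the base green time by the arrival rate (veh/s)."""
    rules = [(tri(arrival_rate, -0.1, 0.0, 0.3), -5.0),  # light arrivals -> shorten
             (tri(arrival_rate, 0.0, 0.3, 0.6), 0.0),    # moderate       -> keep
             (tri(arrival_rate, 0.3, 0.6, 1.0), 5.0)]    # heavy          -> extend
    den = sum(w for w, _ in rules)
    return green + (sum(w * d for w, d in rules) / den if den else 0.0)
```

Splitting the decision this way keeps each rule set small: a flat controller over both inputs would need rules for every queue/arrival combination, which is the complexity growth the abstract describes.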
Abstract:
Many infrastructure and utility systems, such as electricity and telecommunications in Europe and North America, used to be operated as monopolies, if not state-owned enterprises. However, they have now been disintegrated into groups of smaller companies managed by different stakeholders. Railways are no exception. Since the early 1980s, there have been reforms in the shape of restructuring of the national railways in different parts of the world. Continuous refinements are still conducted to allow better utilisation of railway resources and quality of service. There has been growing interest in the industry in understanding the impacts of these reforms on operational efficiency and constraints. A number of post-evaluations have been conducted by analysing the performance of the stakeholders in terms of their profits (Crompton and Jupe 2003), quality of train service (Shaw 2001) and engineering operations (Watson 2001). Results from these studies are valuable for future improvement of the system, followed by a new cycle of post-evaluations. However, direct implementation of these changes is often costly and the consequences take a long period of time (e.g. years) to surface. With the advance of fast computing technologies, computer simulation is a cost-effective means to evaluate a hypothetical change in a system prior to actual implementation. For example, simulation suites have been developed to study a variety of traffic control strategies according to sophisticated models of train dynamics, traction and power systems (Goodman, Siu and Ho 1998, Ho and Yeung 2001). Unfortunately, under the restructured railway environment, it is by no means easy to model the complex behaviour of the stakeholders and the interactions between them. The multi-agent system (MAS) is a recently developed modelling technique which may be useful in assisting the railway industry to conduct simulations of the restructured railway system.
In a MAS, a real-world entity is modelled as a software agent that is autonomous, reactive to changes, and able to initiate proactive actions and social communicative acts. MAS have been applied in the areas of supply-chain management processes (García-Flores, Wang and Goltz 2000, Jennings et al. 2000a, b) and e-commerce activities (Au, Ngai and Parameswaran 2003, Liu and You 2003), in which the objectives and behaviour of the buyers and sellers are captured by software agents. It is therefore beneficial to investigate the suitability and feasibility of applying agent modelling in railways and the extent to which it might help in developing better resource management strategies. This paper sets out to examine the benefits of using MAS to model the resource management process in railways. Section 2 first describes the business environment after the railway reforms. The problems emerging from the restructuring process are then identified in section 3. Section 4 describes the realisation of a MAS for railway resource management under the restructured scheme and the feasibility studies expected from the model.
Abstract:
Extensive groundwater withdrawal has resulted in a severe seawater intrusion problem in the Gooburrum aquifers at Bundaberg, Queensland, Australia. Better management strategies can be implemented by understanding the seawater intrusion processes in those aquifers. To study the seawater intrusion process in the region, a two-dimensional density-dependent, saturated and unsaturated flow and transport computational model is used. The model consists of a coupled system of two non-linear partial differential equations. The first equation describes the flow of a variable-density fluid, and the second equation describes the transport of dissolved salt. A two-dimensional control volume finite element model is developed for simulating the seawater intrusion into the heterogeneous aquifer system at Gooburrum. The simulation results provide a realistic mechanism by which to study the convoluted transport phenomena evolving in this complex heterogeneous coastal aquifer.
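A typical variable-density flow and transport formulation of the kind described (the exact equations, coefficients and boundary conditions of the Gooburrum model may differ) couples Darcy flow of a variable-density fluid to advective-dispersive salt transport through a density-concentration relation:

```latex
% Variable-density Darcy flow (mass balance of the fluid):
\frac{\partial(\phi\rho)}{\partial t} + \nabla\cdot(\rho\,\mathbf{q}) = 0,
\qquad
\mathbf{q} = -\frac{k}{\mu}\,\bigl(\nabla p - \rho\,\mathbf{g}\bigr)

% Salt transport (advection--dispersion):
\frac{\partial(\phi\rho c)}{\partial t}
  + \nabla\cdot(\rho\,\mathbf{q}\,c)
  = \nabla\cdot\bigl(\phi\rho\,\mathbf{D}\,\nabla c\bigr)

% Coupling: fluid density increases with salt mass fraction c:
\rho = \rho_0\,(1 + \varepsilon\,c)
```

Here $\phi$ is porosity, $\rho$ fluid density, $\mathbf{q}$ the Darcy flux, $k$ permeability, $\mu$ viscosity, $p$ pressure, $c$ the dissolved-salt mass fraction, $\mathbf{D}$ the dispersion tensor and $\varepsilon$ a density-coupling coefficient; the non-linearity arises because $\rho$ appears in both equations through $c$.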