389 results for process analysis


Relevance: 30.00%

Abstract:

Quality has been an important factor for shopping centers under competitive conditions; however, there is no standard for measuring it. In this research, two regional shopping centers in Surabaya are measured. The objective is to assess the building quality of shopping centers using the Analytical Hierarchy Process (AHP) and to calculate a Building Quality Index (BQI). The AHP analysis produced an overall ranking of the quality criteria. According to respondents' perceptions of quality, Access and Circulation was the highest-priority criterion affecting shopping center building quality. The weighted values resulting from the comparison of the two shopping centers were 0.732 points for Tunjungan Plaza and 0.268 points for Surabaya Plaza, so the first shopping center received a higher weight than the second. The BQI is 66% for Tunjungan Plaza and 64% for Surabaya Plaza.
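The AHP priority derivation behind such a ranking can be sketched as follows. The 3×3 pairwise comparison matrix below is purely illustrative (not the study's survey data), and the geometric-mean method is a common approximation to the principal-eigenvector priorities:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix for three quality criteria
# (values are illustrative, not the study's actual judgements).
A = np.array([
    [1.0, 3.0, 5.0],   # e.g. Access and Circulation vs. the other criteria
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Geometric-mean approximation of the principal eigenvector (AHP priorities).
w = np.prod(A, axis=1) ** (1.0 / A.shape[1])
w /= w.sum()

# Consistency check: lambda_max and the consistency index CI.
lam_max = float((A @ w / w).mean())
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)

print(np.round(w, 3))   # priority weights, summing to 1
print(round(ci, 4))     # CI near 0 indicates consistent judgements
```

A weighted comparison of two alternatives (as in the study) would then score each alternative against every criterion and combine the scores with these priority weights.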

Relevance: 30.00%

Abstract:

Despite more than three decades of research, there is a limited understanding of the transactional processes of appraisal, stress and coping. This has led to calls for more focused research on the entire process that underlies these variables; to date, such research remains scarce. The present study examined Lazarus and Folkman's (1984) transactional model of stress and coping. One hundred and twenty-nine Australian participants in full-time employment (nurses and administration employees) were recruited: 49 males (age M = 34, SD = 10.51) and 80 females (age M = 36, SD = 10.31). The analysis of three path models indicated that in addition to the original paths of Lazarus and Folkman's transactional model (primary appraisal → secondary appraisal → stress → coping), there were also direct links between primary appraisal and stress level at time one, and between stress level at time one and stress level at time two. The study provides additional insight into the transactional process, extending our understanding of how individuals appraise, cope with and experience occupational stress.
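The logic of estimating such a path model can be illustrated with sequential regressions on synthetic data; the variable names and simulated path strengths below are hypothetical stand-ins, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 129  # matches the study's sample size; the data here are synthetic

# Simulate the hypothesised causal chain with unit-variance noise,
# including a direct primary-appraisal -> stress link.
primary = rng.normal(size=n)
secondary = 0.6 * primary + rng.normal(size=n)
stress = 0.5 * secondary + 0.3 * primary + rng.normal(size=n)
coping = 0.4 * stress + rng.normal(size=n)

def ols_slope(x, y):
    """Simple-regression path coefficient of y on x."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (x @ x))

# Each estimated slope should roughly recover the simulated path strength.
print(round(ols_slope(primary, secondary), 2))
print(round(ols_slope(stress, coping), 2))
```

In a real path analysis each endogenous variable is regressed on all of its hypothesised predictors at once; the simple slopes here just show the estimation idea.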

Relevance: 30.00%

Abstract:

Recent years have seen an increased uptake of business process management technology in industry. As a result, organizations must manage large collections of business process models. One challenge facing these organizations is the retrieval of models from large business process model repositories. For example, new process models may be derived from existing models, so finding and adapting these models may be more effective than developing them from scratch. As process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up query evaluation. Experiments demonstrate that our proposal achieves a significant reduction in query evaluation time.
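One simple indexing idea for such repositories is an inverted index from task labels to models, so a query only needs to examine a small set of candidate models rather than the whole collection. The repository contents below are toy data, not the paper's index structure:

```python
from collections import defaultdict

# Toy repository: model id -> set of task labels (illustrative data only).
models = {
    "m1": {"receive order", "check credit", "ship goods"},
    "m2": {"receive order", "send invoice"},
    "m3": {"check credit", "approve loan"},
}

# Build an inverted index: task label -> ids of models containing it.
index = defaultdict(set)
for mid, tasks in models.items():
    for t in tasks:
        index[t].add(mid)

def query(required_tasks):
    """Candidate models containing all required task labels.

    The index narrows the search so that only these candidates need a
    full (expensive) structural comparison afterwards.
    """
    sets = [index.get(t, set()) for t in required_tasks]
    return set.intersection(*sets) if sets else set()

print(sorted(query({"receive order", "check credit"})))  # ['m1']
```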

Relevance: 30.00%

Abstract:

This thesis critically analyses sperm donation practices from a child-centred perspective. It examines the effects, both personal and social, of disrupting the unity of biological and social relatedness in families affected by donor conception. It examines how this disruption is facilitated by a process of mediation, which is detailed using a model provided by Sunderland (2002). This model identifies mediating movements - alienation, translation, re-contextualisation and absorption - which help to explain the powerful and dominating material, social and political processes at work in biotechnology, and in reproductive technology in this case. The understanding of such movements and the mediation of meanings is inspired by the complementary work of Silverstone (1999) and Sunderland. This model allows for a more critical appreciation of the movement of meaning from previously inalienable aspects of life to alienable products through biotechnology (Sunderland, 2002). Once this mediation in donor conception is subjected to critical examination here, it is then approached from different angles of investigation. The thesis posits that two conflicting notions of the self are being applied to fertility-frustrated adults and the offspring of reproductive interventions. Adults using reproductive interventions receive support to maximise their genetic continuity, but in so doing they create and dismiss the corresponding genetic discontinuity produced for the offspring. The offspring's kinship and identity are then framed through an experimental postmodernist notion, presenting them as social rather than innate constructs. The adults using the reproductive intervention, on the other hand, have their identity and kinship continuity framed and supported as normative, innate, and based on genetic connection.
This use of shifting frameworks is presented as unjust and harmful, creating double standards and a corrosion of kinship values, connection and intelligibility between generations; indeed, it is put forward as adult-centric. The analysis of other forms of human kinship dislocation provided by this thesis explores an under-utilised resource which is used to counter the commonly held opinion that any disruption of social and genetic relatedness for donor offspring is insignificant. The experiences of adoption and the stolen generations are used to inform understanding of the personal and social effects of such kinship disruption and potential reunion for donor offspring. These examples, along with laws governing international human rights, further strengthen the appeal here for normative principles and protections based on collective knowledge and standards to be applied to children of reproductive technology. The thesis presents the argument that the framing and regulation of reproductive technology is excessively influenced by industry providers and users. The interests of these parties collide with and corrode any accurate assessments and protections afforded to the children of reproductive technology. The thesis seeks to counter such encroachments and concludes by presenting these protections, frameworks, and human experiences as resources which can help to address the problems created for the offspring of such reproductive interventions, thereby illustrating why these reproductive interventions should be discontinued.

Relevance: 30.00%

Abstract:

This project explored ways in which Adult and Community Education (ACE) could make a greater contribution to the human capital development outcome under the National Reform Agenda (NRA) and increase the number of skilled workers in Australia. Data on current vocational and non-vocational ACE programs were analysed, and strategies to improve ACE were collated for consideration by government authorities and ACE providers. There is much diversity in the perceived role and activities of ACE, and researchers have found it challenging to create a profile that depicts the whole sector, particularly in the absence of reliable, valid and comparable data on ACE activities and outcomes. However, there is evidence that ACE assists in re-engaging people with learning and training and in initiating pathways to further training or employment. The potential for ACE to make a bigger contribution to skilling Australia is recognised by governments across the nation (Senate Employment, Workplace Relations, Small Business and Education Committee, 1997). Yet policy changes to facilitate an increased role for ACE in the skilling process, and resourcing for ACE programs, continue to receive less attention. The project explored three research questions:
• What does the current profile of the ACE sector look like?
• How is ACE contributing to reducing the skills deficit?
• How can ACE enhance its contributions to reduce the skills deficit and achieve the human capital development outcome of the National Reform Agenda?

Relevance: 30.00%

Abstract:

Process models provide visual support for analyzing and improving complex organizational processes. In this paper, we discuss differences between process modeling languages using cognitive effectiveness considerations, to make statements about ease of use and quality of user experience. Aspects of cognitive effectiveness are important for learning a modeling language, creating models, and understanding models. We identify the criteria of representational clarity, perceptual discriminability, perceptual immediacy, visual expressiveness, and graphic parsimony to compare and assess the cognitive effectiveness of different modeling languages. We apply these criteria in an analysis of the routing elements of UML Activity Diagrams, YAWL, BPMN, and EPCs to uncover their relative strengths and weaknesses from a quality-of-user-experience perspective. We draw conclusions that are relevant to the usability of these languages in business process modeling projects.

Relevance: 30.00%

Abstract:

The aim of this paper is to advance understanding of the processes of cluster-building and evolution, or transformative and adaptive change, through the conscious design and reflective activities of private and public actors. A model of transformation is developed which illustrates the importance of actors being exposed to new ideas and visions for industrial change through political entrepreneurs and external networks. Further, actors must be guided in their decision-making and action by the new vision, and this requires that they are persuaded of its viability through the provision of test cases and supportive resources and institutions. For new ideas to become guiding models, actors must be convinced of their desirability through the portrayal of those models as a means of confronting competitive challenges and serving the economic interests of the city or region. Subsequent adaptive change is iterative and reflexive, involving a process of strategic learning amongst key industrial and political actors.

Relevance: 30.00%

Abstract:

Thermogravimetric analysis-mass spectrometry, X-ray diffraction and scanning electron microscopy (SEM) were used to characterize eight kaolinite samples from China. The results show that thermal decomposition occurs in three main steps: (a) desorption of water below 100 °C, (b) dehydration at about 225 °C, and (c) well-defined dehydroxylation at around 450 °C. Decarbonation also takes place at 710 °C, owing to the decomposition of a calcite impurity in the kaolin. The dehydroxylation temperature of kaolinite is influenced by the degree of disorder of the kaolinite structure, and the gases evolved during decomposition vary with the amount and kind of impurities. The mass spectra show that CO2 from the interlayer carbonate of the calcite impurity and from organic carbon is released at around 225, 350 and 710 °C in the kaolinite samples.
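As a small worked check, the theoretical dehydroxylation mass loss of ideal kaolinite, Al2Si2O5(OH)4 → Al2Si2O7 + 2 H2O, can be computed from standard atomic masses:

```python
# Theoretical dehydroxylation mass loss of ideal kaolinite,
# Al2Si2O5(OH)4 -> Al2Si2O7 + 2 H2O, from standard atomic masses (g/mol).
M = {"Al": 26.982, "Si": 28.086, "O": 15.999, "H": 1.008}

m_kaolinite = 2*M["Al"] + 2*M["Si"] + 9*M["O"] + 4*M["H"]  # formula mass
m_water_released = 2 * (2*M["H"] + M["O"])                 # 2 H2O

loss_pct = 100.0 * m_water_released / m_kaolinite
print(round(loss_pct, 1))  # ~14.0 % theoretical mass loss near 450 °C
```

This ~14 % figure is the textbook value against which the measured TGA mass-loss step of the dehydroxylation stage is usually compared.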

Relevance: 30.00%

Abstract:

Since the formal recognition of practice-led research in the 1990s, many higher research degree candidates in art, design and media have submitted creative works along with an accompanying written document or ‘exegesis’ for examination. Various models for the exegesis have been proposed in university guidelines and academic texts during the past decade, and students and supervisors have experimented with its contents and structure. With a substantial number of exegeses submitted and archived, it has now become possible to move beyond proposition to empirical analysis. In this article we present the findings of a content analysis of a large, local sample of submitted exegeses. We identify the emergence of a persistent pattern in the types of content included as well as overall structure. Besides an introduction and conclusion, this pattern includes three main parts, which can be summarized as situating concepts (conceptual definitions and theories); precedents of practice (traditions and exemplars in the field); and researcher’s creative practice (the creative process, the artifacts produced and their value as research). We argue that this model combines earlier approaches to the exegesis, which oscillated between academic objectivity, by providing a contextual framework for the practice, and personal reflexivity, by providing commentary on the creative practice. But this model is more than simply a hybrid: it provides a dual orientation, which allows the researcher to both situate their creative practice within a trajectory of research and do justice to its personally invested poetics. By performing the important function of connecting the practice and creative work to a wider emergent field, the model helps to support claims for a research contribution to the field. We call it a connective model of exegesis.

Relevance: 30.00%

Abstract:

In a resource-constrained business world, strategic choices must be made on process improvement and service delivery. There are calls for more agile forms of enterprise, and much effort is being directed at moving organizations from a complex landscape of disparate application systems to an integrated and flexible enterprise accessing complex systems landscapes through a service-oriented architecture (SOA). This paper describes the deconstruction of an enterprise into business services using value chain analysis, as each element in the value chain can be rendered as a business service in the SOA. These business services are explicitly linked to the attainment of specific organizational strategies, and their contribution to the attainment of strategy is assessed and recorded. This contribution is then used to produce a rank order of business services against strategy. This information facilitates executive decision-making on which business services to develop into the SOA. The paper describes an application of this Critical Service Identification Methodology (CSIM) to a case study.
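The ranking step of such a methodology can be sketched as a weighted scoring of business services against strategies. The service names, strategies, weights and scores below are invented for illustration and are not from the case study:

```python
# Hypothetical contribution scores (0-5) of business services to strategies;
# names and numbers are illustrative, not the paper's data.
contributions = {
    "Order Fulfilment":   {"cost leadership": 5, "customer intimacy": 3},
    "Customer Analytics": {"cost leadership": 1, "customer intimacy": 5},
    "Invoice Processing": {"cost leadership": 4, "customer intimacy": 1},
}

# Strategy weights reflecting (hypothetical) executive priorities.
weights = {"cost leadership": 0.6, "customer intimacy": 0.4}

def rank_services(contrib, w):
    """Rank order of business services by weighted strategy contribution."""
    scored = {s: sum(w[k] * v for k, v in c.items()) for s, c in contrib.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_services(contributions, weights):
    print(f"{name}: {score:.1f}")
```

The top-ranked services would be the first candidates to develop into the SOA.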

Relevance: 30.00%

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For an ideally operating power system, these transient responses would have a "ring-down" time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such "poorly damped" modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but "negatively damped", catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
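A minimal, deterministic sketch of the energy rationale: a lightly damped mode injects far more disturbance energy than a well-damped one, so a threshold calibrated on healthy operation flags the deterioration. The signals and the simple threshold rule below are illustrative, not the thesis's statistical model:

```python
import numpy as np

t = np.arange(0, 30, 0.1)  # 30 s record sampled at 10 Hz (illustrative)

def ringdown(sigma, f=0.5):
    """Impulse response of one oscillatory mode: a decaying sinusoid.
    sigma = 0.3 gives a ring-down (to ~5 %) of about 3/sigma = 10 s."""
    return np.exp(-sigma * t) * np.sin(2 * np.pi * f * t)

def disturbance_energy(y, dt=0.1):
    return float(np.sum(y**2) * dt)

e_healthy = disturbance_energy(ringdown(sigma=0.3))    # normal operation
e_degraded = disturbance_energy(ringdown(sigma=0.03))  # poorly damped mode

# Threshold calibrated on healthy operation (here simply 3x healthy energy;
# the thesis sets it from a statistical model of the random disturbances).
threshold = 3.0 * e_healthy
print(e_degraded > threshold)  # True
```

Note that, exactly as the abstract says, the energy alone flags *that* damping has deteriorated but cannot say *which* mode is responsible.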
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an "alarm" when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard.
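The innovation-whiteness idea behind the KID can be sketched with a one-step predictor standing in for a full Kalman filter: while the model matches the system, the prediction errors ("innovations") are white; after a change, they become correlated. The AR(1) dynamics, fixed seed and threshold below are illustrative assumptions, not the thesis's power-system models:

```python
import numpy as np

rng = np.random.default_rng(7)

def ar1(n, a):
    """Randomly driven first-order system (toy stand-in for a power system)."""
    y = np.zeros(n)
    e = rng.normal(size=n)
    for k in range(1, n):
        y[k] = a * y[k-1] + e[k]
    return y

a_model = 0.7                 # predictor tuned to the healthy dynamics
healthy = ar1(2000, a=0.7)
changed = ar1(2000, a=0.95)   # hypothetical post-failure dynamics

def innovation_lag1(y, a):
    """Lag-1 autocorrelation of the one-step prediction errors;
    near zero (white) only while the model matches the data."""
    e = y[1:] - a * y[:-1]
    e = e - e.mean()
    return float((e[1:] @ e[:-1]) / (e @ e))

thr = 3.0 / np.sqrt(2000)     # conservative ~3-sigma bound for whiteness

print(abs(innovation_lag1(healthy, a_model)) < thr)
print(abs(innovation_lag1(changed, a_model)) > thr)
```

The KID itself works with the innovation *spectrum*, so peaks also reveal which modal frequency caused the change; this scalar autocorrelation check only illustrates the whiteness test.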
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
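A toy version of polynomial phase estimation: fit a polynomial to the unwrapped phase of a noise-free FM signal and read off the frequency parameters. The signal parameters are invented for illustration, and a least-squares polyfit stands in for the thesis's polynomial phase functions:

```python
import numpy as np

fs = 100.0
t = np.arange(0, 2, 1/fs)

# Complex FM signal with a second-order polynomial phase:
# phi(t) = 2*pi*(f0*t + 0.5*alpha*t^2), instantaneous frequency f0 + alpha*t.
f0, alpha = 10.0, 3.0
z = np.exp(1j * 2*np.pi * (f0*t + 0.5*alpha*t**2))

# Estimate the polynomial phase coefficients from the unwrapped phase.
phase = np.unwrap(np.angle(z))
coeffs = np.polyfit(t, phase / (2*np.pi), 2)  # [0.5*alpha, f0, const]

alpha_hat = 2 * coeffs[0]
f0_hat = coeffs[1]
print(round(f0_hat, 2), round(alpha_hat, 2))  # 10.0 3.0
```

A sudden frequency shift in monitored data would show up as a change in these fitted coefficients, which is the information the PPM cross-references with the other detectors.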

Relevance: 30.00%

Abstract:

Reliability analysis has several important engineering applications. Designers and operators of equipment are often interested in the probability of the equipment operating successfully to a given age - this probability is known as the equipment's reliability at that age. Reliability information is also important to those charged with maintaining an item of equipment, as it enables them to model and evaluate alternative maintenance policies for the equipment. In each case, information on failures and survivals of a typical sample of items is used to estimate the required probabilities as a function of the item's age, this process being one of many applications of the statistical techniques known as distribution fitting. In most engineering applications, the estimation procedure must deal with samples containing survivors (suspensions or censorings); this thesis focuses on several graphical estimation methods that are widely used for analysing such samples. Although these methods have been current for many years, they share a common shortcoming: none of them is continuously sensitive to changes in the ages of the suspensions, and we show that the resulting reliability estimates are therefore more pessimistic than necessary. We use a simple example to show that the existing graphical methods take no account of any service recorded by suspensions beyond their respective previous failures, and that this behaviour is inconsistent with one's intuitive expectations. In the course of this thesis, we demonstrate that the existing methods are only justified under restricted conditions. We present several improved methods and demonstrate that each of them overcomes the problem described above, while reducing to one of the existing methods where this is justified. Each of the improved methods thus provides a realistic set of reliability estimates for general (unrestricted) censored samples. Several related variations on these improved methods are also presented and justified. 
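For context, the standard Kaplan-Meier estimator shows how suspensions conventionally enter a reliability estimate (each suspension leaves the risk set without contributing a failure). This is a common baseline, not one of the thesis's improved graphical methods, and the sample ages below are invented:

```python
# Kaplan-Meier reliability estimate from a censored sample.
# Each record: (age, event) with event=True for a failure, False for a suspension.
sample = [(120, True), (150, False), (200, True), (250, False), (300, True)]

def kaplan_meier(records):
    records = sorted(records)
    n_at_risk = len(records)
    r = 1.0
    estimates = []          # (failure age, reliability just after that age)
    for age, failed in records:
        if failed:
            r *= (n_at_risk - 1) / n_at_risk
            estimates.append((age, r))
        n_at_risk -= 1      # suspensions leave the risk set without a failure
    return estimates

for age, r in kaplan_meier(sample):
    print(age, round(r, 3))
```

Note that moving a suspension's age between two failure times does not change these estimates at all, which illustrates the insensitivity to suspension ages that the thesis identifies and remedies.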

Relevance: 30.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property of adaptive lattice filters: the polynomial-order reducing property. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that with this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that with this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios, in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signal, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
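The reflection-coefficient update at the heart of a stochastic gradient lattice stage can be sketched for a single stage. The AR(1) input, step size and LMS-style update below are a simplified illustration rather than the thesis's exact algorithms; for this input the optimal first-stage reflection coefficient is -a:

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) test input; the first-stage optimal reflection coefficient is -a.
a = 0.8
n = 20000
x = np.zeros(n)
w = rng.normal(size=n)
for i in range(1, n):
    x[i] = a * x[i-1] + w[i]

# Single-stage stochastic gradient lattice: forward/backward prediction
# errors and a gradient update of the reflection coefficient k that
# minimises f1^2 + b1^2.
k, mu = 0.0, 1e-4
b_prev = 0.0                        # delayed backward error b0(n-1)
for i in range(1, n):
    f0, b0 = x[i], b_prev           # stage-0 errors for the first stage
    f1 = f0 + k * b0                # forward prediction error
    b1 = b0 + k * f0                # backward prediction error
    k -= mu * (f1 * b0 + b1 * f0)   # gradient of f1^2 + b1^2 w.r.t. k
    b_prev = x[i]                   # b0(n) = x(n) at the first stage

print(round(k, 2))  # converges near -0.8 for this input
```

The thesis's least-mean P-norm variant replaces the squared-error criterion here with a fractional lower-order one so that the update also behaves well for infinite-variance stable inputs.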

Relevance: 30.00%

Abstract:

Experts in injection molding often refer to previous solutions to find a mold design similar to the current mold, and use previously successful molding process parameters, with intuitive adjustment and modification, as a start for the new molding application. This approach saves a substantial amount of time and cost in the experiment-based corrective actions required to reach optimum molding conditions. A Case-Based Reasoning (CBR) system can perform the same task by retrieving a similar case from the case library and using modification rules to adapt its solution to the new case. A CBR system can therefore simulate human expertise in injection molding process design. This research is aimed at developing an interactive Hybrid Expert System to reduce the expert dependency needed on the production floor. The Hybrid Expert System (HES) comprises CBR, flow analysis, post-processor and trouble-shooting subsystems. The HES can provide the first set of operating parameters needed to achieve moldable conditions and produce moldings free of stress cracks and warpage. In this work the C++ programming language is used to implement the expert system. The Case-Based Reasoning subsystem is constructed to derive the optimum magnitudes of the process parameters in the cavity. Toward this end, the Flow Analysis subsystem is employed to calculate the pressure drop and temperature difference in the feed system to determine the required magnitudes of the parameters at the nozzle. The Post-Processor is implemented to convert the molding parameters to machine setting parameters. The parameters designed by the HES are implemented using the injection molding machine. In the presence of any molding defect, a trouble-shooting subsystem can determine which combination of process parameters must be changed during the process to deal with possible variations. Constraints in relation to the application of this HES are as follows.
- flow length (L): 40 mm < L < 100 mm
- flow thickness (Th): 1 mm < Th < 4 mm
- flow type: unidirectional flow
- material types: High Impact Polystyrene (HIPS) and Acrylic

In order to test the HES, experiments were conducted and satisfactory results were obtained.
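The retrieval step of such a CBR system can be sketched as a weighted nearest-case lookup. The case base, similarity weights and parameter values below are invented for illustration; the thesis's actual cases, attributes and modification rules are not reproduced:

```python
# Minimal case-retrieval sketch for a CBR system (illustrative case base
# and similarity weights only).
case_base = [
    {"length": 60, "thickness": 2.0, "material": "HIPS",
     "melt_temp": 220, "inject_press": 90},
    {"length": 90, "thickness": 3.5, "material": "Acrylic",
     "melt_temp": 240, "inject_press": 110},
    {"length": 45, "thickness": 1.5, "material": "HIPS",
     "melt_temp": 215, "inject_press": 85},
]

def similarity(case, query):
    """Weighted similarity; numeric attributes are normalised by their
    constraint ranges (length 40-100 mm, thickness 1-4 mm)."""
    s_len = 1 - abs(case["length"] - query["length"]) / 60.0
    s_thk = 1 - abs(case["thickness"] - query["thickness"]) / 3.0
    s_mat = 1.0 if case["material"] == query["material"] else 0.0
    return 0.3 * s_len + 0.3 * s_thk + 0.4 * s_mat

def retrieve(query):
    """Most similar past case; its parameters seed the new molding setup."""
    return max(case_base, key=lambda c: similarity(c, query))

best = retrieve({"length": 55, "thickness": 1.8, "material": "HIPS"})
print(best["melt_temp"], best["inject_press"])  # 220 90
```

In the full HES the retrieved parameters would then pass through the flow-analysis and post-processor stages before reaching the machine.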