911 results for top-down approach


Relevance: 80.00%

Abstract:

The Biased Competition Model (BCM) suggests that both top-down and bottom-up biases operate on selective attention (e.g., Desimone & Duncan, 1995). It has been suggested that top-down control signals may arise from working memory. In support, Downing (2000) found faster responses to probes presented in the location of stimuli held vs. not held in working memory. Soto, Heinke, Humphreys, and Blanco (2005) showed the involuntary nature of this effect and that shared features between stimuli were sufficient to attract attention. Here we show that stimuli held in working memory influenced the deployment of attentional resources even when: (1) it was detrimental to the task, (2) there was equal prior exposure, and (3) there was no bottom-up priming. These results provide further support for involuntary top-down guidance of attention from working memory and for the basic tenets of the BCM, but further discredit the notion that bottom-up priming is necessary for the effect to occur.

Relevance: 80.00%

Abstract:

The rapid developments in computer technology have resulted in widespread use of discrete event dynamic systems (DEDSs). This type of system is complex because it exhibits properties such as concurrency, conflict and non-determinism. It is therefore important to model and analyse such systems before implementation to ensure safe, deadlock-free and optimal operation. This thesis investigates current modelling techniques and describes Petri net theory in more detail. It reviews top-down, bottom-up and hybrid Petri net synthesis techniques that are used to model large systems, and introduces an object-oriented methodology to enable modelling of larger and more complex systems. Designs obtained by this methodology are modular, easy to understand and allow re-use of designs. Control is the next logical step in the design process. This thesis reviews recent developments in the control of DEDSs and investigates the use of Petri nets in the design of supervisory controllers. The scheduling of exclusive use of resources is investigated, an efficient Petri-net-based scheduling algorithm is designed, and a re-configurable controller is proposed. To enable the analysis and control of large and complex DEDSs, an object-oriented C++ software toolkit was developed and used to implement a Petri net analysis tool and Petri net scheduling and control algorithms. Finally, the methodology was applied to two industrial DEDSs: a prototype can-sorting machine developed by Eurotherm Controls Ltd., and a semiconductor testing plant belonging to SGS Thomson Microelectronics Ltd.
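
To make concrete the Petri net mechanics the thesis builds on (places, transitions, token markings, and the enabling/firing rule), here is a minimal sketch in Python. It is illustrative only: the thesis's actual toolkit is object-oriented C++, and none of the names below are taken from it.

    # Minimal Petri net sketch: places hold tokens; a transition is enabled
    # when every input place holds at least the required tokens; firing
    # consumes input tokens and produces output tokens.
    class PetriNet:
        def __init__(self, marking):
            self.marking = dict(marking)      # place -> token count
            self.transitions = {}             # name -> (inputs, outputs)

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

        def fire(self, name):
            if not self.enabled(name):
                raise ValueError(f"transition {name!r} is not enabled")
            inputs, outputs = self.transitions[name]
            for p, w in inputs.items():
                self.marking[p] -= w
            for p, w in outputs.items():
                self.marking[p] = self.marking.get(p, 0) + w

    # Two machines competing for one shared resource (conflict / mutual exclusion):
    net = PetriNet({"idle1": 1, "idle2": 1, "resource": 1})
    net.add_transition("start1", {"idle1": 1, "resource": 1}, {"busy1": 1})
    net.add_transition("end1", {"busy1": 1}, {"idle1": 1, "resource": 1})
    net.fire("start1")
    print(net.enabled("start1"))  # False until "end1" fires and releases the resource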

Relevance: 80.00%

Abstract:

Enterprise Risk Management (ERM) and Knowledge Management (KM) both encompass top-down and bottom-up approaches to developing and embedding risk knowledge concepts and processes in strategy, policies, risk appetite definition, the decision-making process and business processes. The capacity to transfer risk knowledge affects all stakeholders, and understanding the risk knowledge about the enterprise's value is a key requirement for identifying protection strategies for business sustainability. Various factors affect this capacity for transfer and understanding. Previous work has established that there is a difference between the influence of KM variables on Risk Control and on the perceived value of ERM. Communication among groups appears as a significant variable in improving Risk Control, but only as a weak factor in improving the perceived value of ERM. However, the ERM mandate requires for its implementation a clear understanding of risk management (RM) policies, actions and results, and the use of an integral view of RM as a governance and compliance program to support the value-driven management of the organization. Furthermore, ERM implementation demands better capabilities for unifying the criteria of risk analysis and for aligning policies and protection guidelines across the organization. These capabilities can be affected by risk knowledge sharing between the RM group, the Board of Directors and other executives in the organization. This research presents an exploratory analysis of risk knowledge transfer variables used in risk management practice. A survey of risk management executives from 65 firms in various industries was undertaken, and 108 answers were analyzed. Potential relationships among the variables are investigated using descriptive statistics and multivariate statistical models. The level of understanding of risk management policies and reports by the board is related to the quality of the flow of communication in the firm and to the perceived level of integration of the risk policy in the business processes.

Relevance: 80.00%

Abstract:

Operators can become confused when diagnosing faults in a process plant during operation. This may prevent remedial actions from being taken before hazardous consequences occur. The work in this thesis proposes a method to help plant operators systematically find the causes of any fault in the process plant. A computer-aided fault diagnosis package has been developed for use on the widely available IBM PC compatible microcomputer. The program displays a coloured diagram of a fault tree on the VDU of the microcomputer, so that the operator can see the link between the fault and its causes. The consequences of the fault and the causes of the fault are also shown, to warn of what may happen if the fault is not remedied. The cause and effect data needed by the package are obtained from a hazard and operability (HAZOP) study on the process plant. The result of the HAZOP study is recorded as cause and symptom equations, which are translated into a data structure and stored in the computer as a file for the package to access. Probability values are assigned to the events that constitute the basic causes of any deviation. From these probability values, the a priori probabilities of occurrence of other events are evaluated. A top-down recursive algorithm, called TDRA, for evaluating the probability of every event in a fault tree has been developed. From the a priori probabilities, the conditional probabilities of the causes of the fault are then evaluated using Bayes' conditional probability theorem. The posterior probability values can then be used by the operators to check, in an orderly manner, the cause of the fault. The package has been tested using the results of a HAZOP study on a pilot distillation plant. The results from the test show how easy it is to trace the chain of events that leads to the primary cause of a fault. The method could be applied in a real process environment.
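
As a hedged illustration of the kind of computation TDRA performs, the sketch below evaluates event probabilities top-down over a fault tree, assuming independent basic events and standard AND/OR gate rules, and then applies Bayes' theorem to obtain the posterior probability of one cause. The tree, gate types and probability values are invented for the example; the thesis's actual algorithm and HAZOP data are not reproduced.

    # Top-down recursive evaluation of fault-tree event probabilities
    # (assumes independent basic events; AND = product of child probabilities,
    # OR = complement of the product of child complements).
    def event_probability(event, tree, basic_probs):
        if event in basic_probs:                 # basic cause: probability given
            return basic_probs[event]
        gate, children = tree[event]
        probs = [event_probability(c, tree, basic_probs) for c in children]
        if gate == "AND":
            p = 1.0
            for q in probs:
                p *= q
            return p
        if gate == "OR":
            p = 1.0
            for q in probs:
                p *= (1.0 - q)
            return 1.0 - p
        raise ValueError(f"unknown gate {gate!r}")

    # Bayes' theorem: P(cause | top) = P(top | cause) * P(cause) / P(top)
    tree = {"top": ("OR", ["pump_failure", "valve_blocked"])}
    basic = {"pump_failure": 0.05, "valve_blocked": 0.02}
    p_top = event_probability("top", tree, basic)
    # Under an OR gate the top event certainly occurs when a cause occurs,
    # so P(top | pump_failure) = 1:
    p_cause_given_top = 1.0 * basic["pump_failure"] / p_top
    print(round(p_top, 4), round(p_cause_given_top, 3))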

Relevance: 80.00%

Abstract:

The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1, vertical canonical targets were detected faster when they were tilted (incongruent) than when they were vertical (congruent). This search asymmetry was reversed for tilted canonical targets. The effect of canonical orientation was partially preserved when objects were high-pass filtered, but it was eliminated when they were low-pass filtered, rendering them as unfamiliar shapes (Experiment 2). The effect of canonical orientation was also eliminated by inverting the objects (Experiment 3) and in a patient with visual agnosia (Experiment 4). These results indicate that orientation search with familiar objects can be modulated by canonical orientation, indicating a top-down influence on orientation processing.

Relevance: 80.00%

Abstract:

This article deals with language contact between a dominant standard language - German - and a lesser-used variety - Low German - in a situation in which the minoritised language is threatened by language shift and language loss. It analyses the application of Low German in forms of public language display and in the self-presentation of the community in tourism brochures, focusing on bilingual linguistic practices on the one hand and on underlying discourses on the other. It reveals that top-down and bottom-up approaches to implementing Low German in public language display show a remarkable homogeneity, thus creating a regional 'brand'. The article asks whether a raised level of visibility will in itself guarantee better chances for linguistic maintenance and survival of the threatened language.

Relevance: 80.00%

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques in use were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). As current cost estimating methods do not take account of these developments, their non-universality means they cannot provide adequate estimates of effort and hence cost. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other method, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric which indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
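
The two estimation routes can be sketched as follows. This is a minimal Python illustration: the JSD-FPA counts, weights and calibration are defined in the thesis, so the size metric and past-project figures below are placeholders; only the basic COCOMO coefficients are standard published values (organic mode, Boehm 1981).

    # Route 1 (JSD-FPA style, top-down): size metric -> effort via the
    # productivity observed on past projects (size units per person-month).
    def fpa_style_effort(size_metric, past_sizes, past_efforts):
        productivity = sum(past_sizes) / sum(past_efforts)
        return size_metric / productivity

    # Route 2 (JSD-COCOMO style): an estimate of delivered source
    # instructions (in thousands, KDSI) fed into the basic COCOMO
    # effort equation: effort = a * KDSI ** b.
    def basic_cocomo_effort(kdsi, a=2.4, b=1.05):
        return a * kdsi ** b

    # e.g. a specification sized at 320 (hypothetical) metric units, with
    # three past projects as the productivity baseline:
    print(fpa_style_effort(320, [250, 400, 310], [20, 35, 27]))  # person-months
    print(basic_cocomo_effort(32))                               # effort for 32 KDSI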

Relevance: 80.00%

Abstract:

Computerised production control developments have concentrated on Manufacturing Resources Planning (MRP II) systems. The literature suggests, however, that despite the massive investment in hardware, software and management education, successful implementation of such systems in manufacturing industries has proved difficult. This thesis reviews the development of production planning and control systems and, in particular, investigates the causes of failures in implementing MRP/MRP II systems in industrial environments, arguing that the centralised and top-down planning structure, as well as the routine operational methodology of such systems, is inherently prone to failure. The thesis reviews the control benefits of cellular manufacturing systems but concludes that, in more dynamic manufacturing environments, techniques such as Kanban are inappropriate. The basic shortcomings of MRP II systems are highlighted and a new, enhanced operational methodology based on distributed planning and control principles is introduced. Distributed Manufacturing Resources Planning (DMRP) was developed as a capacity-sensitive production planning and control solution for cellular manufacturing environments. The system utilises cell-based, independently operated MRP II systems, integrated into a plant-wide control system through a Local Area Network. The potential benefits of adopting the system in industrial environments are discussed, and the results of computer simulation experiments comparing the performance of the DMRP system against conventional MRP II systems are presented. The DMRP methodology is shown to offer significant potential advantages, including ease of implementation, cost effectiveness, capacity sensitivity, shorter manufacturing lead times, lower work-in-progress levels and improved customer service.
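
At the heart of each cell-level MRP II system sits the standard MRP netting calculation; the sketch below shows that calculation only, as a hedged illustration. DMRP's actual cell integration over the Local Area Network is described in the thesis and is not reproduced here; all names and figures are invented.

    # Standard MRP netting, period by period: net requirements are the
    # demand not covered by on-hand stock or already-scheduled receipts.
    def net_requirements(gross, on_hand, scheduled_receipts):
        net = []
        available = on_hand
        for period, demand in enumerate(gross):
            available += scheduled_receipts[period]
            shortfall = max(0, demand - available)
            net.append(shortfall)
            available = max(0, available - demand)
        return net

    print(net_requirements(gross=[40, 30, 50], on_hand=60,
                           scheduled_receipts=[0, 20, 0]))
    # -> [0, 0, 40]: the cell must plan a new order of 40 in period 3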

Relevance: 80.00%

Abstract:

This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation was the lack of viable approaches to analysing High Throughput Screening datasets, which may include thousands of high-dimensional data points. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods of analysing relationships between measured activities and the structure of compounds, based on tables and graphical plots, are not feasible for a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those of high dimension. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at a time. We believe that a latent trait model (LTM) with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, the distribution of the data can be inferred from magnification factor and curvature plots. Rather than obtaining the useful information from a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down): the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-steps. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
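
To illustrate the alternating E- and M-steps only, here is a simplified sketch that fits a Bernoulli mixture to binary data. It is a toy stand-in: the actual LTMs in the hierarchy use a non-linear mapping from the latent space to the data space, which this version omits.

    import numpy as np

    def em_bernoulli_mixture(X, k, n_iter=50, seed=0):
        """EM for a k-component Bernoulli mixture over binary data X (n x d)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        theta = rng.uniform(0.25, 0.75, size=(k, d))  # per-component parameters
        pi = np.full(k, 1.0 / k)                       # mixing proportions
        for _ in range(n_iter):
            # E-step: responsibility of each component for each data point
            log_p = (X @ np.log(theta).T
                     + (1 - X) @ np.log(1 - theta).T
                     + np.log(pi))
            log_p -= log_p.max(axis=1, keepdims=True)
            r = np.exp(log_p)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters to maximise the expected likelihood
            nk = r.sum(axis=0)
            theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
            pi = nk / n
        return theta, pi, r

In the hierarchical scheme described above, each user-selected region would be modelled by another such EM fit at the next level down.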

Relevance: 80.00%

Abstract:

Background - Bipolar disorder is frequently misdiagnosed as major depressive disorder, delaying appropriate treatment and worsening outcome for many bipolar individuals. Emotion dysregulation is a core feature of bipolar disorder. Measures of dysfunction in neural systems supporting emotion regulation might therefore help discriminate bipolar from major depressive disorder. Methods - Thirty-one depressed individuals—15 bipolar depressed (BD) and 16 major depressed (MDD), DSM-IV diagnostic criteria, ages 18–55 years, matched for age, age of illness onset, illness duration, and depression severity—and 16 age- and gender-matched healthy control subjects performed two event-related paradigms: labeling the emotional intensity of happy and sad faces, respectively. We employed dynamic causal modeling to examine significant among-group alterations in effective connectivity (EC) between right- and left-sided neural regions supporting emotion regulation: amygdala and orbitomedial prefrontal cortex (OMPFC). Results - During classification of happy faces, we found profound and asymmetrical differences in EC between the OMPFC and amygdala. Left-sided differences involved top-down connections and discriminated between depressed and control subjects. Furthermore, greater medication load was associated with an amelioration of this abnormal top-down EC. Conversely, on the right side the abnormality was in bottom-up EC that was specific to bipolar disorder. These effects replicated when we considered only female subjects. Conclusions - Abnormal, left-sided, top-down OMPFC–amygdala and right-sided, bottom-up, amygdala–OMPFC EC during happy labeling distinguish BD and MDD, suggesting different pathophysiological mechanisms associated with the two types of depression.
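
For readers unfamiliar with dynamic causal modeling, its bilinear state equation is dx/dt = (A + sum_j u_j B_j) x + C u, where the A matrix carries the directed (effective) connections, such as the top-down OMPFC-to-amygdala and bottom-up amygdala-to-OMPFC couplings compared in this study. The toy integration below is a hedged sketch with invented values, not the study's fitted model.

    import numpy as np

    # Two regions: index 0 = OMPFC, index 1 = amygdala (illustrative values).
    A = np.array([[-1.0, 0.4],    # 0.4: bottom-up amygdala -> OMPFC coupling
                  [0.6, -1.0]])   # 0.6: top-down OMPFC -> amygdala coupling
    B = np.array([[0.0, 0.2],     # task input modulates the bottom-up coupling
                  [0.0, 0.0]])
    C = np.array([[0.0],
                  [1.0]])         # face stimuli drive the amygdala directly

    def simulate(u, dt=0.01):
        """Euler integration of dx/dt = (A + u*B) x + C u."""
        x = np.zeros(2)
        trace = []
        for u_t in u:
            dx = (A + u_t * B) @ x + C[:, 0] * u_t
            x = x + dt * dx
            trace.append(x.copy())
        return np.array(trace)

    states = simulate(u=np.ones(500))  # 5 s of sustained stimulation
    print(states[-1])                  # settled regional activity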

Relevance: 80.00%

Abstract:

We report two functional magnetic resonance imaging (fMRI) experiments which reveal a cortical network activated when perceiving coloured grids, and experiencing the McCollough effect (ME). Our results show that perception of red-black and green-black grids activate the right fusiform gyrus (area V4) plus the left and right lingual gyri, right striate cortex (V1) and left insula. The ME activated the left anterior fusiform gyrus as well as the ventrolateral prefrontal cortex, and in common with colour perception, the left insula. These data confirm the critical role of the fusiform gyrus in actual and illusory colour perception as well as revealing localized frontal cortical activation associated with the ME, which would suggest that a 'top-down' mechanism is implicated in this illusion.

Relevance: 80.00%

Abstract:

We address the important bioinformatics problem of predicting protein function from a protein's primary sequence. We consider the functional classification of G-Protein-Coupled Receptors (GPCRs), whose functions are specified in a class hierarchy. We tackle this task using a novel top-down hierarchical classification system where, for each node in the class hierarchy, the predictor attributes to be used in that node and the classifier to be applied to the selected attributes are chosen in a data-driven manner. Compared with a previous hierarchical classification system selecting classifiers only, our new system significantly reduced processing time without significantly sacrificing predictive accuracy.
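
A minimal sketch of the per-node, data-driven choice of attributes and classifier might look as follows, with scikit-learn stand-ins; the paper's actual candidate classifiers, selection criteria and GPCR data are not reproduced.

    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    def train_node(X, y, candidates=(GaussianNB(), DecisionTreeClassifier()), k=20):
        """At one node of the class hierarchy: select attributes for this node,
        then pick the candidate classifier that cross-validates best on them."""
        selector = SelectKBest(f_classif, k=min(k, X.shape[1])).fit(X, y)
        X_sel = selector.transform(X)
        best = max(candidates,
                   key=lambda m: cross_val_score(m, X_sel, y, cv=3).mean())
        return selector, best.fit(X_sel, y)

Prediction then proceeds top-down: classify at the root node, descend into the node of the predicted child class, and repeat until a leaf class is reached.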

Relevance: 80.00%

Abstract:

Drawing on evidence from several case studies, this paper outlines alternative forms of the manufacturing strategy process. Our investigation shows that the manufacturing strategy development practices of manufacturers are evolving in many directions; we found several alternatives to the formal top-down planning process. Manufacturers use one or more of the following alternatives, with or without the top-down manufacturing strategy process: a coherent pattern of actions; manufacturing/process improvement programs; or the pursuit of core manufacturing capabilities. It appears that the various manufacturing strategy development processes may be tied to the strategic role of manufacturing in a company. This paper offers a framework that captures the relationship between the strategic role of manufacturing and the process of manufacturing strategy development. An in-depth case from a UK company illustrates the evolving forms of manufacturing strategy development processes.

Relevance: 80.00%

Abstract:

There is a disconnection between the top-down, elite, nature of sports mega-events and the ostensible redistributive and participatory sustainable development agendas staked out by BINGOs (Business-based International Non-Governmental Organizations) such as the contemporary International Olympic Committee (IOC). Focusing specifically on the London 2012 Summer Olympic and Paralympic Games, we argue that, for all the environmental technology advances offered by sports mega-events, their dominant model remains one of a hollowed-out form of sustainable development. Despite significant technical and methodological innovations in environmental stewardship, the development model of the London Olympics remains predicated on the satisfaction of transnational investment flows. We discuss what this means for claims about the staging of a ‘green’ Olympic Games.

Relevance: 80.00%

Abstract:

Purpose of review: It has recently been argued that the future of intensive care medicine will rely on high quality management and teamwork. Therefore, this review takes an organizational psychology perspective to examine the most recent research on the relationship between teamwork, care processes, and patient outcomes in intensive care. Recent findings: Interdisciplinary communication within a team is crucial for the development of negotiated shared treatment goals and short-term patient outcomes. Interventions for maximizing team communication have received substantial interest in the recent literature. Intensive care coordination is not a linear process, and intensive care teams often fail to discuss how to implement goals, trigger and align activities, or reflect on their performance. Despite a move toward interdisciplinary team working, clinical decision-making is still problematic and continues to be perceived as a top-down and authoritative process. The topic of team leadership in intensive care is underexplored and requires further research. Summary: Based on findings from the most recent research evidence in medicine and management, four principles are identified for improving the effectiveness of team working in intensive care: engender professional efficacy, create stable teams and leaders, develop trust and participative safety, and enable frequent team reflexivity.