105 results for Linear decision rules

in University of Queensland eSpace - Australia


Relevance:

80.00%

Publisher:

Abstract:

Adopting a social identity perspective, the research was designed to examine the interplay between premerger group status and integration pattern in the prediction of responses to a merger. The research employed a 2 (status: high versus low) × 3 (integration pattern: assimilation versus integrational equality versus transformation) between-participants factorial design. We predicted that integration pattern and group status would interact such that the responses of members of high-status groups would be most positive under conditions of an assimilation pattern, whereas members of low-status groups were expected to favour an integration-equality pattern. After working on a task in small groups, group status was manipulated and the groups worked on a second task. The merger was then announced and the integration pattern was manipulated (e.g., in terms of the logo, location, and decision rules). The main dependent variables were assessed after the merged groups had worked together on a third task. As expected, there was evidence that the effects of group status on responses to the merger were moderated by integration pattern. Field data also indicated that both premerger status and perceived integration pattern influenced employee responses to an organisational merger.

Relevance:

80.00%

Publisher:

Abstract:

Learning processes are widely held to be the mechanism by which boundedly rational agents adapt to environmental changes. We argue that this same outcome might also be achieved by a different mechanism, namely specialisation and the division of knowledge, which we here extend to the consumer side of the economy. We distinguish between high-level preferences and low-level preferences as nested systems of rules used to solve particular choice problems. We argue that agents, while sovereign in high-level preferences, may often find it expedient to acquire, in a pseudo-market, the low-level preferences in order to make good choices when purchasing complex commodities about which they have little or no experience. A market for preferences arises when environmental complexity overwhelms learning possibilities and leads agents to make use of other people's specialised knowledge and decision rules.

Relevance:

80.00%

Publisher:

Abstract:

Magnitudes and patterns of energy expenditure in animal contests are seldom measured, but can be critical for predicting contest dynamics and understanding the evolution of ritualized fighting behaviour. In the sierra dome spider, males compete for sexual access to females and their webs. They show three distinct phases of fighting behaviour, escalating from ritualized noncontact display (phase 1) to cooperative wrestling (phase 2), and finally to unritualized, potentially fatal fighting (phase 3). Using CO2 respirometry, we estimated energetic costs of male-male combat in terms of mean and maximum metabolic rates and the rate of increase in energy expenditure. We also investigated the energetic consequences of age and body mass, and compared fighting metabolism to metabolism during courtship. All three phases involved mean energy expenditures well above resting metabolic rate (3.5×, 7.4× and 11.5×). Both mean and maximum energy expenditure became substantially greater as fights escalated through successive phases. The rates of increase in energy use during phases 2 and 3 were much higher than in phase 1. In addition, age and body mass affected contest energetics. These results are consistent with a basic prediction of evolutionarily stable strategy contest models, that sequences of agonistic behaviours should be organized into phases of escalating energetic costs. Finally, higher energetic costs of escalated fighting compared to courtship provide a rationale for first-male sperm precedence in this spider species.

Relevance:

80.00%

Publisher:

Abstract:

Scorpion toxins are common experimental tools for studies of biochemical and pharmacological properties of ion channels. The number of functionally annotated scorpion toxins is steadily growing, but the number of identified toxin sequences is increasing at a much faster pace. With an estimated 100,000 different variants, bioinformatic analysis of scorpion toxins is becoming a necessary tool for their systematic functional analysis. Here, we report a bioinformatics-driven system involving scorpion toxin structural classification, functional annotation, database technology, sequence comparison, nearest neighbour analysis, and decision rules, which together produce highly accurate predictions of scorpion toxin functional properties.
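
The combination of nearest neighbour analysis with decision rules described in this abstract can be illustrated with a small sketch. The k-mer featurisation, the toy sequences, and the 0.5 agreement threshold below are illustrative assumptions, not the published pipeline.

```python
# Sketch: nearest-neighbour functional annotation combined with a decision rule.
# Feature extraction, toy data, and thresholds are hypothetical.
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def kmer_features(seq, k=2):
    """Represent a toxin sequence as normalised k-mer counts."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return {kmer: counts.get(kmer, 0) / total
            for kmer in ("".join(p) for p in product(AMINO_ACIDS, repeat=k))}

def similarity(f1, f2):
    """Simple overlap similarity between two k-mer profiles."""
    return sum(min(f1[kmer], f2[kmer]) for kmer in f1)

def predict_function(query, annotated, k_neighbours=3, agreement=0.5):
    """Vote among the nearest annotated toxins; apply a decision rule on agreement."""
    q = kmer_features(query)
    nearest = sorted(annotated, key=lambda t: similarity(q, kmer_features(t[0])),
                     reverse=True)[:k_neighbours]
    votes = Counter(label for _, label in nearest)
    label, count = votes.most_common(1)[0]
    # Decision rule: only annotate when the neighbourhood is sufficiently consistent.
    return label if count / k_neighbours >= agreement else "unclassified"

# Toy usage with invented sequences and labels.
annotated_toxins = [("KCNQKLCQ", "K+ channel blocker"),
                    ("NACLKNAC", "Na+ channel modulator"),
                    ("KCNQRLCK", "K+ channel blocker")]
print(predict_function("KCNQKLCK", annotated_toxins))
```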

Relevance:

80.00%

Publisher:

Abstract:

Systematic protocols that use decision rules or scores are seen to improve consistency and transparency in classifying the conservation status of species. When applying these protocols, assessors are typically required to decide on estimates for attributes that are inherently uncertain. Input data and resulting classifications are usually treated as though they are exact and hence without operator error. We investigated the impact of data interpretation on the consistency of protocols of extinction risk classifications and diagnosed causes of discrepancies when they occurred. We tested three widely used systematic classification protocols employed by the World Conservation Union, NatureServe, and the Florida Fish and Wildlife Conservation Commission. We provided 18 assessors with identical information for 13 different species to infer estimates for each of the required parameters for the three protocols. The threat classification of several of the species varied from low risk to high risk, depending on who did the assessment. This occurred across the three protocols investigated. Assessors tended to agree on their placement of species in the highest (50-70%) and lowest risk categories (20-40%), but there was poor agreement on which species should be placed in the intermediate categories. Furthermore, the correspondence between the three classification methods was unpredictable, with large variation among assessors. These results highlight the importance of peer review and consensus among multiple assessors in species classifications and the need to be cautious with assessments carried out by a single assessor. Greater consistency among assessors requires wide use of training manuals and formal methods for estimating parameters that allow uncertainties to be represented, carried through chains of calculations, and reported transparently.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the construction of Australia-wide soil property predictions from a compiled national soils point database. The properties considered include pH, organic carbon, total phosphorus, total nitrogen, thickness, texture, and clay content. Many of these soil properties are used directly in environmental process modelling, including global climate change models. Models are constructed at the 250-m resolution using decision trees. These relate the soil property to the environment through a suite of environmental predictors at the locations where measurements are observed. These models are then used to extend predictions to the continental extent by applying the rules derived to the exhaustively available environmental predictors. The methodology and performance are described in detail for pH and summarized for other properties. Environmental variables are found to be important predictors, even at the 250-m resolution at which they are available here, as they can describe the broad changes in soil properties.
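
The workflow described above (fit a tree at observation sites, then apply the learned rules across a full predictor grid) can be sketched as follows. The synthetic data, predictor choices, and scikit-learn estimator are illustrative assumptions, not the original national model.

```python
# Sketch: decision-tree regression of a soil property on environmental predictors,
# then prediction over an exhaustively available grid of the same predictors.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Environmental predictors at soil observation sites (toy: rainfall, elevation,
# temperature), plus observed pH.
n_sites = 500
X_sites = rng.uniform([300, 0, 5], [2000, 1500, 30], size=(n_sites, 3))
ph_sites = 8.5 - 0.0015 * X_sites[:, 0] + rng.normal(0, 0.3, n_sites)

model = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20)
model.fit(X_sites, ph_sites)

# Exhaustively available predictors on a (toy) grid: apply the derived rules everywhere.
X_grid = rng.uniform([300, 0, 5], [2000, 1500, 30], size=(10_000, 3))
ph_map = model.predict(X_grid)
print("Predicted pH range across grid:", ph_map.min(), "-", ph_map.max())
```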

Relevance:

30.00%

Publisher:

Abstract:

The classification rules of linear discriminant analysis are defined by the true mean vectors and the common covariance matrix of the populations from which the data come. Because these true parameters are generally unknown, they are commonly estimated by the sample mean vector and covariance matrix of the data in a training sample randomly drawn from each population. However, these sample statistics are notoriously susceptible to contamination by outliers, a problem compounded by the fact that the outliers may be invisible to conventional diagnostics. High-breakdown estimation is a procedure designed to remove this cause for concern by producing estimates that are immune to serious distortion by a minority of outliers, regardless of their severity. In this article we motivate and develop a high-breakdown criterion for linear discriminant analysis and give an algorithm for its implementation. The procedure is intended to supplement rather than replace the usual sample-moment methodology of discriminant analysis either by providing indications that the dataset is not seriously affected by outliers (supporting the usual analysis) or by identifying apparently aberrant points and giving resistant estimators that are not affected by them.
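
The idea of plugging high-breakdown estimates into the usual discriminant rule can be sketched as below. This uses the minimum covariance determinant estimator from scikit-learn as one example of a high-breakdown estimator; it is not the specific criterion or algorithm developed in the article, and the toy data are invented.

```python
# Sketch: linear discriminant rules built from either classical sample moments or
# high-breakdown (minimum covariance determinant) estimates.
import numpy as np
from sklearn.covariance import MinCovDet

def discriminant_rule(X_train_by_class, robust=False):
    """Return a classifier x -> class index using pooled-covariance LDA scores."""
    means, covs, priors = [], [], []
    n_total = sum(len(X) for X in X_train_by_class)
    for X in X_train_by_class:
        if robust:
            est = MinCovDet(random_state=0).fit(X)
            means.append(est.location_)
            covs.append(est.covariance_)
        else:
            means.append(X.mean(axis=0))
            covs.append(np.cov(X, rowvar=False))
        priors.append(len(X) / n_total)
    pooled = sum(p * c for p, c in zip(priors, covs))  # simple prior-weighted pooling
    inv = np.linalg.inv(pooled)

    def classify(x):
        scores = [x @ inv @ m - 0.5 * m @ inv @ m + np.log(p)
                  for m, p in zip(means, priors)]
        return int(np.argmax(scores))
    return classify

# Toy data: two Gaussian classes, with a few gross outliers contaminating class 0.
rng = np.random.default_rng(1)
X0 = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(12, 1, (5, 2))])
X1 = rng.normal(3, 1, (100, 2))
clf = discriminant_rule([X0, X1], robust=True)
print(clf(np.array([0.2, -0.1])), clf(np.array([3.1, 2.8])))
```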

Relevance:

30.00%

Publisher:

Abstract:

Fundamental principles of precaution are legal maxims that ask for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information can contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: How can society manage potentially severe, irreversible or serious environmental outcomes when variability, uncertainty, and limited causal knowledge characterize its decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to causal reasoning and decision-making under risk, inherent to precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives (defined as a choice that makes preferred consequences more likely) requires accounting for costs, benefits and the change in risks associated with each candidate action. Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of expected value of information (VOI), to show the relevance of new information, relative to the initial (and smaller) set of data on which the decision was based. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults such as the linear, non-threshold models. This increase in the number of defaults is an important improvement because most of the variants of the precautionary principle require cost-benefit balancing. Specifically, increasing the set of causal defaults accounts for beneficial effects at very low doses. We also show and conclude that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
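
The value-of-information idea invoked above can be made concrete with a small worked example: pick the action with the best expected consequences under current beliefs, then compute the expected value of perfect information (EVPI) to quantify what resolving the uncertainty could be worth. The two states, two actions, probabilities, and payoffs below are invented toy numbers, not the BSE example from the paper.

```python
# Sketch: expected-value decision under current beliefs, and EVPI.
import numpy as np

# States of the world: the hazard is "severe" or "benign", with current belief P(severe).
p_severe = 0.2
probs = np.array([p_severe, 1 - p_severe])

# Net payoff (benefits minus costs) of each action under each state.
#                  severe   benign
payoffs = np.array([[-10.0,  -2.0],   # act now (precautionary interim measure)
                    [-50.0,   0.0]])  # wait (no interim measure)

# Expected payoff of each action under current information; choose the best.
expected = payoffs @ probs
best_now = expected.max()

# With perfect information we could pick the best action in each state first.
best_per_state = payoffs.max(axis=0)
value_with_perfect_info = best_per_state @ probs

evpi = value_with_perfect_info - best_now
print("Best expected payoff now:", best_now)
print("Expected value of perfect information:", evpi)
```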

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the problem of ensuring compliance of business processes, implemented within and across organisational boundaries, with the constraints stated in related business contracts. In order to deal with the complexity of this problem we propose two solutions that allow for systematic and increasingly automated support for addressing two specific compliance issues. One solution provides a set of guidelines for progressively transforming contract conditions into business processes that are consistent with contract conditions, thus avoiding violation of the rules in the contract. Another solution compares rules in business contracts and rules in business processes to check for possible inconsistencies. Both approaches rely on a computer-interpretable representation of contract conditions that embodies contract semantics. This semantics is described in terms of a logic-based formalism allowing for the description of obligations, prohibitions, permissions, and violation conditions in contracts. This semantics was based on an analysis of typical building blocks of many commercial, financial and government contracts. The study proved that our contract formalism provides a good foundation for describing key types of conditions in contracts, and has also given several insights into valuable transformation techniques and formalisms needed to establish better alignment between these two, traditionally separate areas of research and endeavour. The study also revealed a number of new areas of research, some of which we intend to address in the near future.
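
A very simplified sketch of checking process behaviour against contract rules expressed as obligations and prohibitions is shown below. The rule encoding, event names, and deadline handling are illustrative assumptions, far simpler than the logic-based contract formalism described in the abstract.

```python
# Sketch: compare a process trace against obligation/prohibition rules.
from dataclasses import dataclass

@dataclass
class Rule:
    kind: str        # "obligation" or "prohibition"
    action: str      # action the rule refers to
    deadline: int    # latest step by which an obligation must occur

def check_compliance(process_trace, rules):
    """Return a list of violations of the contract rules by the executed process."""
    violations = []
    for rule in rules:
        occurrences = [step for step, action in enumerate(process_trace)
                       if action == rule.action]
        if rule.kind == "obligation" and not any(s <= rule.deadline for s in occurrences):
            violations.append(f"obligation violated: '{rule.action}' not done by step {rule.deadline}")
        if rule.kind == "prohibition" and occurrences:
            violations.append(f"prohibition violated: '{rule.action}' at step {occurrences[0]}")
    return violations

# Toy contract and toy process trace.
contract = [Rule("obligation", "send_invoice", deadline=2),
            Rule("prohibition", "share_customer_data", deadline=0)]
trace = ["receive_order", "share_customer_data", "ship_goods", "send_invoice"]
print(check_compliance(trace, contract))
```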

Relevance:

20.00%

Publisher:

Abstract:

This paper reports on a system for automated agent negotiation, based on a formal and executable approach to capture the behavior of parties involved in a negotiation. It uses the JADE agent framework, and its major distinctive feature is the use of declarative negotiation strategies. The negotiation strategies are expressed in a declarative rules language, defeasible logic, and are applied using the implemented system DR-DEVICE. The key ideas and the overall system architecture are described, and a particular negotiation case is presented in detail.
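
The flavour of a declarative negotiation strategy expressed as defeasible rules with priorities can be sketched in a few lines. This is a toy evaluation loop in Python, not the DR-DEVICE engine or the JADE agents used in the reported system; the rule names and thresholds are invented.

```python
# Sketch: defeasible-style rules with priorities for a negotiation decision.
def decide(offer, rules):
    """Apply rules in increasing priority; the strongest firing rule's verdict wins."""
    conclusion = None
    for priority, condition, verdict in sorted(rules):   # lowest priority first
        if condition(offer):
            conclusion = verdict                          # higher-priority rules override
    return conclusion or "reject"

negotiation_rules = [
    (1, lambda o: True,                    "reject"),         # default conclusion
    (2, lambda o: o["price"] <= 100,       "accept"),         # affordable offers are acceptable
    (3, lambda o: o["delivery_days"] > 30, "counter-offer"),  # slow delivery defeats acceptance
]

print(decide({"price": 90, "delivery_days": 45}, negotiation_rules))  # counter-offer
print(decide({"price": 90, "delivery_days": 10}, negotiation_rules))  # accept
```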

Relevance:

20.00%

Publisher:

Abstract:

Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
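
The basic linear-optical elements mentioned above act on a single photon shared between two optical modes (a "dual-rail" qubit) as 2x2 unitaries on the mode amplitudes, as the sketch below illustrates. The matrix conventions are one common choice, and the measurement-induced nonlinearity that makes the full scheme universal is not shown.

```python
# Sketch: beam splitter and phase shifter acting on a single-photon, two-mode state.
import numpy as np

def beam_splitter(theta):
    """Beam splitter mixing two modes; theta = pi/4 gives a 50/50 splitter."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the second mode only."""
    return np.diag([1.0, np.exp(1j * phi)])

# Logical |0> = photon in mode 1, logical |1> = photon in mode 2.
ket0 = np.array([1.0, 0.0])

# A 50/50 beam splitter turns |0> into an equal superposition of the two modes.
state = beam_splitter(np.pi / 4) @ ket0
print("Amplitudes after 50/50 beam splitter:", state)

# A phase shifter then rotates the relative phase of the superposition.
state = phase_shifter(np.pi / 2) @ state
print("Detection probabilities:", np.abs(state) ** 2)
```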

Relevance:

20.00%

Publisher:

Abstract:

Faced with today’s ill-structured business environment of fast-paced change and rising uncertainty, organizations have been searching for management tools that will perform satisfactorily under such ambiguous conditions. In the arena of managerial decision making, one of the approaches being assessed is the use of intuition. Based on our definition of intuition as a non-sequential information-processing mode, which comprises both cognitive and affective elements and results in direct knowing without any use of conscious reasoning, we develop a testable model of integrated analytical and intuitive decision making and propose ways to measure the use of intuition.

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured. Data can be in text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision-tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); online methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVMs (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
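
Two of the classification approaches surveyed above can be illustrated with a short sketch: a decision-tree learner and a k-nearest-neighbour classifier. Note that scikit-learn's tree is CART rather than C4.5, and the Iris data set is just a convenient stand-in for an engineering application.

```python
# Sketch: train and evaluate a decision tree and a k-NN classifier on toy data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
                  ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_train, y_train)
    print(name, "test accuracy:", round(clf.score(X_test, y_test), 3))
```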