974 results for Linear decision rules


Relevance:

80.00%

Publisher:

Abstract:

Adopting a social identity perspective, the research was designed to examine the interplay between premerger group status and integration pattern in the prediction of responses to a merger. The research employed a 2 (status: high versus low) × 3 (integration pattern: assimilation versus integration-equality versus transformation) between-participants factorial design. We predicted that integration pattern and group status would interact, such that the responses of members of high-status groups would be most positive under an assimilation pattern, whereas members of low-status groups were expected to favour an integration-equality pattern. After working on a task in small groups, group status was manipulated and the groups worked on a second task. The merger was then announced and the integration pattern was manipulated (e.g., in terms of the logo, location, and decision rules). The main dependent variables were assessed after the merged groups had worked together on a third task. As expected, there was evidence that the effects of group status on responses to the merger were moderated by integration pattern. Field data also indicated that both premerger status and perceived integration pattern influenced employee responses to an organisational merger.

Relevance:

80.00%

Publisher:

Abstract:

Learning processes are widely held to be the mechanism by which boundedly rational agents adapt to environmental changes. We argue that this same outcome might also be achieved by a different mechanism, namely specialisation and the division of knowledge, which we here extend to the consumer side of the economy. We distinguish between high-level preferences and low-level preferences as nested systems of rules used to solve particular choice problems. We argue that agents, while sovereign in high-level preferences, may often find it expedient to acquire low-level preferences in a pseudo-market in order to make good choices when purchasing complex commodities about which they have little or no experience. A market for preferences arises when environmental complexity overwhelms learning possibilities and leads agents to make use of other people's specialised knowledge and decision rules.

Relevance:

80.00%

Publisher:

Abstract:

Magnitudes and patterns of energy expenditure in animal contests are seldom measured, but can be critical for predicting contest dynamics and understanding the evolution of ritualized fighting behaviour. In the sierra dome spider, males compete for sexual access to females and their webs. They show three distinct phases of fighting behaviour, escalating from ritualized noncontact display (phase 1) to cooperative wrestling (phase 2), and finally to unritualized, potentially fatal fighting (phase 3). Using CO2 respirometry, we estimated energetic costs of male-male combat in terms of mean and maximum metabolic rates and the rate of increase in energy expenditure. We also investigated the energetic consequences of age and body mass, and compared fighting metabolism to metabolism during courtship. All three phases involved mean energy expenditures well above resting metabolic rate (3.5×, 7.4× and 11.5×). Both mean and maximum energy expenditure became substantially greater as fights escalated through successive phases. The rates of increase in energy use during phases 2 and 3 were much higher than in phase 1. In addition, age and body mass affected contest energetics. These results are consistent with a basic prediction of evolutionarily stable strategy contest models, that sequences of agonistic behaviours should be organized into phases of escalating energetic costs. Finally, higher energetic costs of escalated fighting compared to courtship provide a rationale for first-male sperm precedence in this spider species.

Relevance:

80.00%

Publisher:

Abstract:

Scorpion toxins are common experimental tools for studies of the biochemical and pharmacological properties of ion channels. The number of functionally annotated scorpion toxins is steadily growing, but the number of identified toxin sequences is increasing at a much faster pace. With an estimated 100,000 different variants, bioinformatic analysis of scorpion toxins is becoming a necessary tool for their systematic functional analysis. Here, we report a bioinformatics-driven system involving scorpion toxin structural classification, functional annotation, database technology, sequence comparison, nearest-neighbour analysis, and decision rules, which produces highly accurate predictions of scorpion toxin functional properties.
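
The abstract gives no implementation details; as a rough sketch of the nearest-neighbour annotation step it describes, the toy example below transfers the functional class of the most similar reference sequence, subject to a simple decision rule on similarity. The reference sequences, similarity score and threshold are invented for illustration and are not those of the reported system.

```python
from difflib import SequenceMatcher

# Hypothetical reference set: (sequence, functional class) pairs.
REFERENCE = [
    ("GVEINVKCSGSPQCLKPCKDAGMRFGKCMNRKCHCTPK", "K+ channel blocker"),
    ("KEGYLVDKNTGCKYECLKLGDNDYCLRECKQQYGKGAGGYCYAFACWCTHLYEQAIVWPLPNKRCS", "Na+ channel modulator"),
]

def similarity(a: str, b: str) -> float:
    """Crude sequence-similarity score in [0, 1] (illustrative only)."""
    return SequenceMatcher(None, a, b).ratio()

def annotate(query: str, min_similarity: float = 0.5) -> str:
    """Transfer the annotation of the nearest reference sequence,
    provided it passes a simple decision rule on similarity."""
    best_class, best_score = "unclassified", 0.0
    for seq, toxin_class in REFERENCE:
        score = similarity(query, seq)
        if score > best_score:
            best_class, best_score = toxin_class, score
    return best_class if best_score >= min_similarity else "unclassified"

print(annotate("GVEINVKCSGSPQCLKPCKDAGMRFGKCMNRKCHCTP"))
```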

Relevance:

80.00%

Publisher:

Abstract:

Systematic protocols that use decision rules or scores are seen to improve consistency and transparency in classifying the conservation status of species. When applying these protocols, assessors are typically required to decide on estimates for attributes that are inherently uncertain. Input data and resulting classifications are usually treated as though they are exact and hence without operator error. We investigated the impact of data interpretation on the consistency of extinction risk classification protocols and diagnosed causes of discrepancies when they occurred. We tested three widely used systematic classification protocols employed by the World Conservation Union, NatureServe, and the Florida Fish and Wildlife Conservation Commission. We provided 18 assessors with identical information for 13 different species to infer estimates for each of the required parameters for the three protocols. The threat classification of several of the species varied from low risk to high risk, depending on who did the assessment. This occurred across the three protocols investigated. Assessors tended to agree on their placement of species in the highest (50-70%) and lowest (20-40%) risk categories, but there was poor agreement on which species should be placed in the intermediate categories. Furthermore, the correspondence between the three classification methods was unpredictable, with large variation among assessors. These results highlight the importance of peer review and consensus among multiple assessors in species classifications and the need to be cautious with assessments carried out by a single assessor. Greater consistency among assessors requires wide use of training manuals and formal methods for estimating parameters that allow uncertainties to be represented, carried through chains of calculations, and reported transparently.
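
To make the notion of a score- or rule-based classification protocol concrete, a minimal sketch follows; the thresholds and categories are hypothetical stand-ins, not the actual IUCN, NatureServe, or Florida rules evaluated in the study. The final two calls illustrate how two assessors interpreting the same evidence slightly differently can land in different categories, which is the inconsistency the study documents.

```python
def classify_risk(decline_pct: float, population: int) -> str:
    """Illustrative threshold rules only; real protocols combine several
    criteria and explicit handling of estimation uncertainty."""
    if decline_pct >= 80 or population < 250:
        return "critically endangered"
    if decline_pct >= 50 or population < 2500:
        return "endangered"
    if decline_pct >= 30 or population < 10000:
        return "vulnerable"
    return "lower risk"

print(classify_risk(decline_pct=45, population=3000))   # vulnerable
print(classify_risk(decline_pct=55, population=3000))   # endangered
```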

Relevance:

80.00%

Publisher:

Abstract:

Authors from Burrough (1992) to Heuvelink et al. (2007) have highlighted the importance of GIS frameworks which can handle incomplete knowledge in data inputs, in decision rules, and in the geometries and attributes modelled. It is particularly important for this uncertainty to be characterised and quantified when GI data is used for spatial decision making. Despite a substantial and valuable literature on means of representing and encoding uncertainty and its propagation in GI (e.g., Hunter and Goodchild 1993; Duckham et al. 2001; Couclelis 2003), no framework yet exists to describe and communicate uncertainty in an interoperable way. This limits the usability of the ever-increasing Internet resources of geospatial data, which are based on specifications that provide frameworks for the 'GeoWeb' (Botts and Robin 2007; Cox 2006). In this paper we present UncertML, an XML schema which provides a framework for describing uncertainty as it propagates through many applications, including online risk management chains. This uncertainty description ranges from simple summary statistics (e.g., mean and variance) to complex representations such as parametric, multivariate distributions at each point of a regular grid. The philosophy adopted in UncertML is that all data values are inherently uncertain (i.e., they are random variables, rather than values with defined quality metadata).
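
The following is not the UncertML schema itself, only a minimal Python sketch of the philosophy that every data value is a random variable whose uncertainty is carried through downstream computation; the quantities, numbers, and Gaussian summary form are invented for illustration.

```python
from dataclasses import dataclass
import math
import random

@dataclass
class UncertainValue:
    """A data value treated as a random variable (here: a Gaussian summary)."""
    mean: float
    variance: float

    def sample(self) -> float:
        return random.gauss(self.mean, math.sqrt(self.variance))

# Propagating uncertainty through a toy risk chain by Monte Carlo sampling:
rainfall = UncertainValue(mean=42.0, variance=9.0)       # hypothetical mm of rain
runoff_coeff = UncertainValue(mean=0.6, variance=0.01)   # hypothetical coefficient
runoff = [rainfall.sample() * runoff_coeff.sample() for _ in range(10_000)]
print(sum(runoff) / len(runoff))  # mean of the propagated (uncertain) runoff
```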

Relevance:

80.00%

Publisher:

Abstract:

This paper describes a methodology, 'decision rules for analyzing manufacturing activities', designed to be a practical system of enquiry linking strategic analysis to the design of production systems. The paper describes the development of the system, an industry-specific design methodology, into DRAMA II, a model that serves as an analytical tool for studying decision processes and the implementation of production systems.

Relevance:

80.00%

Publisher:

Abstract:

This dissertation studies the process of operations systems design within the context of the manufacturing organization. Using the DRAMA (Design Routine for Adopting Modular Assembly) model as developed by a team from the IDOM Research Unit at Aston University as a starting point, the research employed empirically based fieldwork and a survey to investigate the process of production systems design and implementation within four UK manufacturing industries: electronics assembly, electrical engineering, mechanical engineering and carpet manufacturing. The intention was to validate the basic DRAMA model as a framework for research enquiry within the electronics industry, where the initial IDOM work was conducted, and then to test its generic applicability, further developing the model where appropriate, within the other industries selected. The thesis contains a review of production systems design theory and practice prior to presenting thirteen industrial case studies of production systems design from the four industry sectors. The results and analysis of the postal survey into production systems design are then presented. The strategic decisions of manufacturing and their relationship to production systems design, and the detailed process of production systems design and operation are then discussed. These analyses are used to develop the generic model of production systems design entitled DRAMA II (Decision Rules for Analysing Manufacturing Activities). The model contains three main constituent parts: the basic DRAMA model, the extended DRAMA II model showing the imperatives and relationships within the design process, and a benchmark generic approach for the design and analysis of each component in the design process. DRAMA II is primarily intended for use by researchers as an analytical framework of enquiry, but is also seen as having application for manufacturing practitioners.

Relevance:

80.00%

Publisher:

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
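
As a minimal sketch of the dynamic-programming idea underlying such models, the following computes logsum (expected maximum utility) values and link-choice probabilities by recursion on a small acyclic network; the network, utilities and scale parameter are invented, and the thesis's actual models (recursive logit on general networks, MEV correlation structures) are considerably more general.

```python
import math

# Hypothetical acyclic road network: node -> list of (downstream node, deterministic utility).
ARCS = {
    "origin": [("a", -1.0), ("b", -1.5)],
    "a":      [("dest", -2.0)],
    "b":      [("dest", -1.0)],
    "dest":   [],
}

def expected_utility(node: str, mu: float = 1.0) -> float:
    """Logsum (Bellman) recursion of a dynamic discrete choice model:
    the value of a node is the expected maximum utility over outgoing arcs."""
    arcs = ARCS[node]
    if not arcs:
        return 0.0  # value at the destination
    return (1.0 / mu) * math.log(sum(
        math.exp(mu * (u + expected_utility(nxt, mu))) for nxt, u in arcs))

def choice_probabilities(node: str, mu: float = 1.0) -> dict:
    """Logit link-choice probabilities implied by the value function."""
    v = expected_utility(node, mu)
    return {nxt: math.exp(mu * (u + expected_utility(nxt, mu) - v))
            for nxt, u in ARCS[node]}

print(choice_probabilities("origin"))  # probabilities of the two links at the origin
```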

Relevance:

80.00%

Publisher:

Abstract:

This paper discusses how numerically imprecise information can be modelled and how a risk evaluation process can be elaborated by integrating procedures for numerically imprecise probabilities and utilities. More recently, representations and methods for stating and analysing probabilities and values (utilities) with belief distributions over them (second-order representations) have been suggested. In this paper, we discuss some shortcomings in the use of the principle of maximising the expected utility and of utility theory in general, and offer remedies through the introduction of supplementary decision rules based on a concept of risk constraints taking advantage of second-order distributions.
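
A minimal sketch of a supplementary risk-constraint rule follows, simplified here to interval (first-order) probabilities rather than the second-order belief distributions used in the paper; the outcomes, utilities, and thresholds are invented for illustration.

```python
# Hypothetical two-outcome alternative with an imprecise (interval) probability.
P_GOOD = (0.55, 0.75)                  # probability of the good outcome lies in this interval
UTILITY = {"good": 1.0, "bad": -3.0}   # utilities of the two outcomes

def expected_utility(p_good: float) -> float:
    return p_good * UTILITY["good"] + (1 - p_good) * UTILITY["bad"]

def acceptable(max_bad_prob: float = 0.35) -> bool:
    """Supplementary decision rule: accept only if, for every probability
    consistent with the interval, expected utility is non-negative AND the
    probability of the bad outcome stays below a risk threshold."""
    for p in P_GOOD:                    # interval endpoints suffice since EU is linear in p
        if expected_utility(p) < 0 or (1 - p) > max_bad_prob:
            return False
    return True

print(acceptable())   # False: the worst-case expected utility is negative
```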

Relevance:

80.00%

Publisher:

Abstract:

This work represents an original contribution to the methodology for ecosystem model development, as well as the first attempt at an end-to-end (E2E) model of the Northern Humboldt Current Ecosystem (NHCE). The main purpose of the developed model is to build a tool for ecosystem-based management and decision making, which is why the credibility of the model is essential; this can be assessed through confrontation with data. Additionally, the NHCE exhibits high climatic and oceanographic variability at several scales, the major source of interannual variability being the interruption of the upwelling seasonality by the El Niño Southern Oscillation, which has direct effects on larval survival and fish recruitment success. Fishing activity can also be highly variable, depending on the abundance and accessibility of the main fishery resources. This context brings the two main methodological questions addressed in this thesis, through the development of an end-to-end model coupling the high trophic level model OSMOSE to the hydrodynamic and biogeochemical model ROMS-PISCES: i) how to calibrate ecosystem models using time series data and ii) how to incorporate the impact of the interannual variability of the environment and fishing. First, this thesis highlights some issues related to the confrontation of complex ecosystem models with data and proposes a methodology for a sequential multi-phase calibration of ecosystem models. We propose two criteria to classify the parameters of a model: the model dependency and the time variability of the parameters. These criteria, along with the availability of approximate initial estimates, are then used as decision rules to determine which parameters need to be estimated and their precedence order in the sequential calibration process. Additionally, a new evolutionary algorithm designed for the calibration of stochastic models (e.g. individual-based models) and optimized for maximum likelihood estimation has been developed and applied to the calibration of the OSMOSE model to time series data. The environmental variability is explicit in the model: the ROMS-PISCES model forces the OSMOSE model and drives potential bottom-up effects up the food web through plankton and fish trophic interactions, as well as through changes in the spatial distribution of fish. The latter effect was taken into account using presence/absence species distribution models, which are traditionally assessed through a confusion matrix and the statistical metrics associated with it. However, when considering the prediction of the habitat against time, the variability in the spatial distribution of the habitat can be summarized and validated using the emerging patterns from the shape of the spatial distributions. We modeled the potential habitat of the main species of the Humboldt Current Ecosystem using several sources of information (fisheries, scientific surveys and satellite monitoring of vessels) jointly with environmental data from remote sensing and in situ observations, from 1992 to 2008. The potential habitat was predicted over the study period with monthly resolution, and the model was validated using quantitative and qualitative information on the system using a pattern-oriented approach. The final ROMS-PISCES-OSMOSE E2E ecosystem model for the NHCE was calibrated using our evolutionary algorithm and a likelihood approach to fit monthly time series data of landings, abundance indices and catch-at-length distributions from 1992 to 2008.
To conclude, some potential applications of the model for fishery management are presented, and their limitations and perspectives discussed.
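
As an illustration of the kind of decision rule described for the sequential calibration (classifying parameters by model dependency, time variability and availability of initial estimates), a hypothetical sketch follows; the parameter names and phase assignments are invented and do not reproduce the thesis's actual scheme.

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    name: str
    model_dependent: bool       # value only meaningful inside this particular model?
    time_varying: bool          # does it change over the simulated period?
    has_initial_estimate: bool  # rough value available from data or literature?

def calibration_phase(p: Parameter) -> str:
    """Illustrative decision rule assigning each parameter to a phase of a
    sequential calibration, or excluding it from estimation altogether."""
    if not p.model_dependent and p.has_initial_estimate:
        return "fixed from data (not estimated)"
    if p.model_dependent and not p.time_varying:
        return "phase 1: estimate against stable (time-averaged) data"
    return "phase 2: estimate against interannual time series"

for par in [Parameter("natural mortality", True, False, False),
            Parameter("larval survival", True, True, False),
            Parameter("plankton biomass", False, True, True)]:
    print(par.name, "->", calibration_phase(par))
```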

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

80.00%

Publisher:

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted using a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer as a subroutine executing a global trust update scheme. We utilize a set of pre-trusted nodes ('headers') to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is flexible and can be extended to incorporate metadata associated with questions; to demonstrate this, we further propose two extended models, one handling tasks with real-valued features and the other handling tasks with text features by incorporating topic models. Our models can effectively recover trust vectors of workers, which can be very useful for future task assignment that adapts to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than to the average crowd, and completion of a challenging task is more costly than a click-away question. We address the problem of optimally assigning heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as inputs the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates the budget, the trustworthiness of the crowd, and the costs of obtaining labels: a higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component and can therefore be combined with generic trust evaluation algorithms.
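
A minimal sketch in the spirit of the trust-based aggregation discussed above, alternating trust-weighted voting with a simple trust update; the worker answers are invented and this is not the thesis's actual consensus or inference algorithm.

```python
# Workers' answers to binary questions: question id -> {worker id: answer in {0, 1}}.
ANSWERS = {
    "q1": {"w1": 1, "w2": 1, "w3": 0},
    "q2": {"w1": 0, "w2": 0, "w3": 1},
    "q3": {"w1": 1, "w2": 0, "w3": 0},
}

def aggregate(n_iter: int = 10):
    """Alternate between trust-weighted voting on labels and trust re-estimation."""
    trust = {w: 1.0 for votes in ANSWERS.values() for w in votes}
    labels = {}
    for _ in range(n_iter):
        # Step 1: trust-weighted majority vote per question.
        for q, votes in ANSWERS.items():
            score = sum(trust[w] * (1 if a == 1 else -1) for w, a in votes.items())
            labels[q] = 1 if score >= 0 else 0
        # Step 2: a worker's trust becomes its agreement rate with the current labels.
        for w in trust:
            agree = [int(ANSWERS[q][w] == labels[q]) for q in ANSWERS if w in ANSWERS[q]]
            trust[w] = sum(agree) / len(agree)
    return labels, trust

labels, trust = aggregate()
print(labels, trust)  # inferred labels and per-worker trust scores
```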

Relevance:

80.00%

Publisher:

Abstract:

This dissertation describes two studies on macroeconomic trends and cycles. The first chapter studies the impact of Information Technology (IT) on the U.S. labor market. Over the past 30 years, employment and income shares of routine-intensive occupations have declined significantly relative to nonroutine occupations, and the overall U.S. labor income share has declined relative to capital. Furthermore, the decline of routine employment has been largely concentrated in recessions and ensuing recoveries. I build a model of unbalanced growth to assess the role of computerization and IT in driving these labor market trends and cycles. I augment a neoclassical growth model with exogenous IT progress as a form of Routine-Biased Technological Change (RBTC). I show analytically that RBTC causes the overall labor income share to follow a U-shaped time path, as the monotonic decline of the routine labor share is increasingly offset by the monotonic rise of the nonroutine labor share and the elasticity of substitution between overall labor and capital declines under IT progress. Quantitatively, the model explains nearly all the divergence between routine and nonroutine labor in the period 1986-2014, as well as the mild decline of the overall labor share between 1986 and the early 2000s. However, the model with IT progress alone cannot explain the accelerated decline of the labor income share after the early 2000s, suggesting that other factors, such as globalization, may have played a larger role in this period. Lastly, when nonconvex labor adjustment costs are present, the model generates a stepwise decline in routine labor hours, qualitatively consistent with the data. The timing of these trend adjustments can be significantly affected by aggregate productivity shocks and can be concentrated in recessions. The second chapter studies the implications of loss aversion for the business cycle dynamics of aggregate consumption and labor hours. Loss aversion refers to the fact that people are distinctly more sensitive to losses than to gains. Loss-averse agents are very risk averse around the reference point and exhibit asymmetric responses to positive and negative income shocks. In an otherwise standard Real Business Cycle (RBC) model, I study loss aversion over consumption alone and over consumption and leisure together. My results indicate that how loss aversion affects business cycle dynamics depends critically on the nature of the reference point. If, for example, the reference point is the status quo, loss aversion dramatically lowers the effective intertemporal rate of substitution and induces excessive consumption smoothing. In contrast, if the reference point is fixed at a constant level, loss aversion generates a flat region in the decision rules and asymmetric impulse responses to technology shocks. Under a reasonable parametrization, loss aversion has the potential to generate asymmetric business cycles with deeper and more prolonged recessions.
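
To make the loss-aversion mechanism concrete, a standard Kahneman-Tversky style piecewise gain-loss utility is sketched below; the functional form and the coefficient value are a common textbook choice and not necessarily the thesis's exact specification.

```python
def gain_loss_utility(c: float, reference: float, lam: float = 2.25) -> float:
    """Piecewise-linear gain-loss utility: losses relative to the reference
    point are weighted lam > 1 times more heavily than equal-sized gains."""
    gap = c - reference
    return gap if gap >= 0 else lam * gap

# Asymmetric response to equal-sized positive and negative consumption shocks:
print(gain_loss_utility(1.05, reference=1.0))   # +0.05
print(gain_loss_utility(0.95, reference=1.0))   # -0.1125
```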