938 results for: Adverse selection, contract theory, experiment, principal-agent problem


Relevance: 30.00%

Abstract:

Background: Feature selection is a pattern recognition approach to choose important variables according to some criteria in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). There are many genomic and proteomic applications that rely on feature selection to answer questions such as selecting signature genes which are informative about some biological state, e.g., normal tissues and several types of cancer; or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions and graphic visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions and graphic visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in different pattern recognition applications, although the main concern regards bioinformatics tasks.
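As a minimal illustration of the kind of search such an environment wraps (a sketch, not the paper's software: the criterion, data layout and function names below are invented for the example), an exhaustive subset search under a mean-conditional-entropy criterion can be written as:

```python
import itertools
import math

def conditional_entropy(feature_rows, labels):
    """H(Y | X): mean conditional entropy of the label given the
    selected feature tuple, estimated from joint frequencies."""
    counts = {}
    for x, y in zip(feature_rows, labels):
        counts.setdefault(x, {}).setdefault(y, 0)
        counts[x][y] += 1
    n = len(labels)
    h = 0.0
    for ys in counts.values():
        nx = sum(ys.values())
        for c in ys.values():
            p = c / nx
            h -= (nx / n) * p * math.log2(p)
    return h

def select_features(data, labels, k):
    """Exhaustive search for the k-feature subset minimizing H(Y | X)."""
    d = len(data[0])
    best, best_h = None, float("inf")
    for subset in itertools.combinations(range(d), k):
        rows = [tuple(row[i] for i in subset) for row in data]
        h = conditional_entropy(rows, labels)
        if h < best_h:
            best, best_h = subset, h
    return best, best_h
```

For small sample sizes (the recurrent problem the abstract mentions), the joint-frequency estimates above become unreliable, which is exactly why many alternative criterion functions exist.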

Relevance: 30.00%

Abstract:

The width of a closed convex subset of n-dimensional Euclidean space is the distance between two parallel supporting hyperplanes. The Blaschke-Lebesgue problem consists of minimizing the volume in the class of convex sets of fixed constant width and is still open in dimension n >= 3. In this paper we describe a necessary condition that the minimizer of the Blaschke-Lebesgue problem must satisfy in dimension n = 3: we prove that the smooth components of the boundary of the minimizer have their smaller principal curvature constant, and therefore are either spherical caps or pieces of tubes (canal surfaces).
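For reference, the width and the constant-width condition can be stated via the support function (standard notation, not taken from the paper):

```latex
h_K(u) = \sup_{x \in K} \langle x, u \rangle, \qquad
w(K, u) = h_K(u) + h_K(-u), \qquad
w(K, u) \equiv w \quad \text{for all } \|u\| = 1 .
```

In the plane the minimizer is the Reuleaux triangle (the classical Blaschke-Lebesgue theorem); the result described above concerns the still-open three-dimensional case.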

Relevance: 30.00%

Abstract:

Here, I investigate the use of Bayesian updating rules applied to modeling how social agents change their minds in the case of continuous opinion models. Given another agent's statement about the continuous value of a variable, we will see that interesting dynamics emerge when an agent assigns a likelihood to that value that is a mixture of a Gaussian and a uniform distribution. This represents the idea that the other agent might have no idea about what is being talked about. The effect of updating only the first moments of the distribution will be studied, and we will see that this generates results similar to those of the bounded confidence models. On also updating the second moment, several different opinions always survive in the long run, as agents become more stubborn with time. However, depending on the probability of error and initial uncertainty, those opinions might be clustered around a central value.
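A sketch of one such first-moment update (the parameter names and default values here are our own assumptions for illustration, not the paper's): the posterior mean is a mixture-weighted average between the standard Gaussian update and the unchanged prior mean, with the weight given by the posterior probability that the statement was informative.

```python
import math

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def update_opinion(mu, var, x, eps, noise_var=1.0, support=10.0):
    """One Bayesian update of the opinion mean for a Gaussian + uniform
    mixture likelihood: with probability (1 - eps) the statement x is the
    true value plus Gaussian noise; with probability eps it is
    uninformative (uniform over an interval of length `support`)."""
    # Posterior probability that the statement was informative:
    pred = gaussian_pdf(x, mu, var + noise_var)
    q = (1 - eps) * pred / ((1 - eps) * pred + eps / support)
    gain = var / (var + noise_var)
    mu_bayes = mu + gain * (x - mu)      # standard Gaussian update
    return q * mu_bayes + (1 - q) * mu   # mixture-weighted mean
```

Statements close to the current opinion move it, while distant statements are mostly attributed to the uniform component and barely move it, which is the bounded-confidence-like behavior the abstract describes.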

Relevance: 30.00%

Abstract:

Science is a fundamental human activity, and we trust its results because it has several error-correcting mechanisms. It is subject to experimental tests that are replicated by independent parties. Given the huge amount of information available and the information asymmetry between producers and users of knowledge, scientists have to rely on the reports of others. This makes it possible for social effects to influence the scientific community. Here, an Opinion Dynamics agent model is proposed to describe this situation. The influence of Nature through experiments is described as an external field that acts on the experimental agents. We will see that the retirement of old scientists can be fundamental to the acceptance of a new theory. We will also investigate the interplay between social influence and observations. This will allow us to gain insight into the problem of when social effects have a negligible impact on the conclusions of a scientific community and when we should worry about them.

Relevance: 30.00%

Abstract:

Background: The involvement of nephrotoxic agents in acute renal failure (ARF) has increased over the last few decades. Among the drugs associated with nephrotoxic ARF are the radiologic contrast media, whose nephrotoxic effects have grown with the increasing diagnostic use of these agents. Methods: We evaluated the effect of iodinated contrast (IC) medium, administered alone or in combination with hyperhydration or N-acetylcysteine (NAC), on creatinine clearance, production of urinary peroxides and renal histology in rats. Adult Wistar rats treated for 5 days were divided into the following groups: control (saline, 3 mL/kg/day, intraperitoneally [i.p.]), IC (sodium iothalamate meglumine, 3 mL/kg/day i.p.), IC + water (12 mL water, orally + IC, 3 mL/kg/day i.p. after 1 hour), IC + NAC (NAC, 150 mg/kg/day, orally + IC, 3 mL/kg/day i.p. after 1 hour) and IC + water + NAC. Results: IC medium reduced renal function, with maintenance of urinary flow. Hyperhydration did not reduce the nephrotoxic effect of the IC agent, whereas a reduction was observed in the IC + NAC group. The combination of hyperhydration and NAC had no superior protective effect compared with NAC alone. An increase in urinary peroxides was observed in the IC group, with NAC, water or the combination of both reducing this parameter. Histopathologic analysis revealed no significant alterations. Conclusions: In summary, when administered for 5 days beforehand, NAC was found to be more effective than hyperhydration alone in the prevention of contrast-induced acute renal failure.

Relevance: 30.00%

Abstract:

The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of the uncertainty for random beam height is much larger for Euler-Bernoulli beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
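The beam-height effect can be illustrated with a scalar Monte Carlo sketch (our own toy example with made-up dimensions and loads, not the paper's stochastic Galerkin formulation; the elasticity-modulus result, in particular, arises from modeling E as a random field and is not reproduced by a scalar sample). For an end-loaded cantilever, Euler-Bernoulli gives the bending deflection PL^3/(3EI), while Timoshenko adds the shear term PL/(kGA):

```python
import random
import statistics

def deflections(P, L, E, b, h, nu=0.3, kappa=5 / 6):
    """Tip deflection of an end-loaded cantilever: Euler-Bernoulli (bending
    only) vs. Timoshenko (bending + shear), rectangular section b x h."""
    I = b * h ** 3 / 12.0
    A = b * h
    G = E / (2.0 * (1.0 + nu))
    d_eb = P * L ** 3 / (3.0 * E * I)
    return d_eb, d_eb + P * L / (kappa * G * A)

def cv(samples):
    """Coefficient of variation."""
    return statistics.stdev(samples) / statistics.mean(samples)

rng = random.Random(0)
eb, ti = [], []
for _ in range(20000):
    h = rng.gauss(0.3, 0.015)  # uncertain beam height, 5% coefficient of variation
    e, t = deflections(P=1e3, L=1.0, E=200e9, b=0.1, h=h)
    eb.append(e)
    ti.append(t)
# The 1/h^3 bending term makes the Euler-Bernoulli response the more
# sensitive one to height uncertainty, while the Timoshenko shear term
# scales only as 1/h and dilutes the relative spread.
```

Here a short beam (L/h around 3) is chosen so the shear term is non-negligible; the relative spread of the Euler-Bernoulli deflection then exceeds that of the Timoshenko deflection, in line with the height result quoted above.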

Relevance: 30.00%

Abstract:

We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
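Schematically, writing F = I + \nabla u for the deformation gradient, the constrained problem and the two penalizations take the following generic form (notation and the specific penalty functions here are generic textbook choices and may differ from the authors'):

```latex
\min_{u \in V} \; E(u)
\quad \text{subject to} \quad \det F(u) > 0 \ \text{a.e. in } \Omega ,
\\[6pt]
\text{exterior penalty:} \quad
E_{\varepsilon}(u) = E(u)
  + \frac{1}{\varepsilon} \int_{\Omega}
    \bigl[\min\{\det F(u) - \delta,\, 0\}\bigr]^{2} \, dx ,
\\[6pt]
\text{interior (barrier) penalty:} \quad
E_{\varepsilon}(u) = E(u)
  - \varepsilon \int_{\Omega} \log \det F(u) \, dx ,
```

with \delta > 0 a small invertibility threshold. The exterior form permits (and charges for) constraint violations, while the barrier form keeps iterates strictly feasible; both recover the constrained minimizer as \varepsilon \to 0, which is consistent with the convergence of the two sequences of numerical solutions reported above.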

Relevance: 30.00%

Abstract:

This paper investigates how to improve action selection for online policy learning in robotic scenarios using reinforcement learning (RL) algorithms. Since finding control policies with any RL algorithm can be very time-consuming, we propose combining RL algorithms with heuristic functions for selecting promising actions during the learning process. With this aim, we investigate the use of heuristics for increasing the rate of convergence of RL algorithms and contribute a new learning algorithm, Heuristically Accelerated Q-learning (HAQL), which incorporates heuristics for action selection into the Q-learning algorithm. Experimental results on robot navigation show that the use of even very simple heuristic functions results in significant improvement of the learning rate.
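The core HAQL idea, biasing the greedy choice by Q(s, a) + xi * H(s, a) while leaving the Q-update untouched, can be sketched on a toy gridworld (an illustration under our own assumptions: grid size, rewards and the distance-based heuristic are invented, not the paper's robot-navigation setup):

```python
import random

def haql(episodes, n=5, alpha=0.3, gamma=0.95, eps=0.1, xi=1.0, seed=0):
    """Heuristically Accelerated Q-learning on an n x n gridworld:
    the greedy action maximizes Q(s, a) + xi * H(s, a)."""
    rng = random.Random(seed)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    goal = (n - 1, n - 1)
    clip = lambda v: min(max(v, 0), n - 1)
    step = lambda s, a: (clip(s[0] + a[0]), clip(s[1] + a[1]))
    dist = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    # Simple heuristic: bonus for moves that reduce Manhattan distance to goal.
    H = lambda s, a: 1.0 if dist(step(s, a)) < dist(s) else 0.0
    Q, lengths = {}, []
    for _ in range(episodes):
        s, t = (0, 0), 0
        while s != goal and t < 500:
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.choice(actions)
            else:                             # heuristic-biased greedy action
                a = max(actions, key=lambda a: Q.get((s, a), 0.0) + xi * H(s, a))
            s2 = step(s, a)
            r = 0.0 if s2 == goal else -1.0
            target = r + gamma * max(Q.get((s2, b), 0.0) for b in actions)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            s, t = s2, t + 1
        lengths.append(t)
    return lengths

lengths = haql(30)
```

Because the heuristic only shapes action selection, the standard Q-learning update (and its convergence behavior) is preserved; here even the first episode is close to the 8-step shortest path instead of a long random walk.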

Relevance: 30.00%

Abstract:

As many countries are moving toward water sector reforms, practical issues of how water management institutions can better effect allocation, regulation, and enforcement of water rights have emerged. The problem of nonavailability of water to tailenders on an irrigation system in developing countries, due to unlicensed upstream diversions, is well documented. The reliability of access, or equivalently the uncertainty associated with water availability at their diversion point, becomes a parameter that is likely to influence the application by users for water licenses, as well as their willingness to pay for licensed use. The ability of a water agency to reduce this uncertainty through effective water rights enforcement is related to the fiscal ability of the agency to monitor and enforce licensed use. In this paper, this interplay across the users and the agency is explored, considering the hydraulic structure or sequence of water use and parameters that define the users' and the agency's economics. The potential for free rider behavior by the users, as well as their proposals for licensed use, are derived conditional on this setting. The analyses presented are developed in the framework of the theory of "Law and Economics", with user interactions modeled as a game-theoretic enterprise. The state of Ceará, Brazil, is used loosely as an example setting, with parameter values for the experiments indexed to be approximately those relevant for current decisions. The potential for using the ideas in participatory decision making is discussed. This paper is an initial attempt to develop a conceptual framework for analyzing such situations, but with a focus on reservoir-canal system water rights enforcement.

Relevance: 30.00%

Abstract:

In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has divergence zero. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system for momentum and continuity equations coincides with the one obtained by using the Galerkin method for the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived based on the integral conservation of mechanical energy. The weak formulation for the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones, instead of fixed set points. One strategy to implement the zone control is by means of the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement a stable zone control is by means of the use of an infinite horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is devoted to maintain the outputs within their corresponding feasible zone, while reaching the desired optimal input target. Simulation of a process of the oil refining industry illustrates the performance of the proposed strategy.
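The "set point as a decision variable restricted to the zone" idea can be caricatured on a scalar plant (a crude stand-in under our own assumptions: the first-order model, gains and saturated deadbeat law are invented for illustration, far simpler than the paper's infinite-horizon robust MPC). For a quadratic output error, the optimal zone-constrained set point is simply the projection of the predicted output onto the zone:

```python
def simulate(y0, zone, steps=30, a=0.8, b=0.5, u_max=1.0):
    """Toy zone controller for the scalar plant y[k+1] = a*y[k] + b*u[k]:
    the set point is chosen as the projection of the current output onto
    the zone, then a saturated deadbeat move tracks it."""
    lo, hi = zone
    y, traj = y0, []
    for _ in range(steps):
        ysp = min(max(y, lo), hi)                      # set point restricted to the zone
        u = max(-u_max, min(u_max, (ysp - a * y) / b))  # saturated deadbeat input
        y = a * y + b * u
        traj.append(y)
    return traj

traj = simulate(y0=5.0, zone=(1.0, 2.0))   # output starts above the zone
```

Once the output enters the zone, the set point coincides with it and the controller stops pushing, which is the zone-control behavior (the real formulation additionally drives the input to its optimal target and handles multi-model uncertainty).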

Relevance: 30.00%

Abstract:

Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions to be made: the optimal number of hub nodes, their locations and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP) hub nodes have no capacity constraints and non-hub nodes must be assigned to only one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic as well as a two-stage integrated tabu search heuristic to solve this problem. With multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and allocational part of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances up to 100 nodes, as well as for the corresponding new generated AP instances with reduced fixed costs. Published by Elsevier Ltd.
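The tabu-search ingredients mentioned above (a single-flip neighborhood over the set of open hubs, a tabu tenure, and an aspiration criterion) can be sketched on a toy facility-location cost (our own miniature instance with nearest-hub allocation; the real USAHLP cost with inter-hub discount factors and the CAB/AP instances are much richer):

```python
import random

def total_cost(points, hubs, fixed_cost):
    """Toy cost: fixed cost per open hub plus each node's Manhattan
    distance to its nearest hub (single allocation by proximity)."""
    if not hubs:
        return float("inf")
    dist = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
    return fixed_cost * len(hubs) + sum(
        min(dist(p, points[h]) for h in hubs) for p in points)

def tabu_search(points, fixed_cost, iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    n = len(points)
    current = {rng.randrange(n)}
    best, best_cost = set(current), total_cost(points, current, fixed_cost)
    tabu = {}  # node -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for j in range(n):
            neigh = current ^ {j}  # flip node j's hub status
            c = total_cost(points, neigh, fixed_cost)
            # Admissible if not tabu, or by aspiration (improves the best).
            if tabu.get(j, -1) < it or c < best_cost:
                candidates.append((c, j, neigh))
        c, j, neigh = min(candidates)
        current = neigh
        tabu[j] = it + tenure
        if c < best_cost:
            best, best_cost = set(current), c
    return best, best_cost
```

On a two-cluster instance with a small fixed cost, the search opens one hub per cluster; a multi-start variant, as in the paper, would simply repeat this from several random initial hub sets and keep the best result.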

Relevance: 30.00%

Abstract:

Maize (Zea mays L.) is a cereal of great importance to the world economy, which also holds for Brazil, particularly in the South region. Grain yield and plant height have been chosen as important criteria by breeders and farmers from Santa Catarina State (SC), Brazil. The objective of this work was to estimate genetic-statistical parameters associated with genetic gain for grain yield and plant height, in the first cycle of convergent-divergent half-sib selection in a maize population (MPA1) cultivated by farmers within the municipality of Anchieta (SC). Three experiments were carried out in different small farms at Anchieta using low external agronomic inputs; each experiment represented independent samples of half-sib families, which were evaluated in randomized complete blocks with three replications per location. Significant differences among half-sib families were observed for both variables in all experiments. The expected responses to truncated selection of the 25% better families in each experiment were 5.1, 5.8 and 5.2% for reducing plant height and 3.9, 5.7 and 5.0% for increasing grain yield, respectively. The magnitudes of the genetic-statistical parameters estimated showed that the composite population MPA1 exhibits enough genetic variability to be used in a cyclical process of recurrent selection. There was evidence that the genetic structure of the base population MPA1, as indicated by its genetic variability, may lead to expressive changes in the traits under selection, even under low selection pressure.
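The expected response to truncation selection quoted above follows the breeder's equation R = h^2 * S; a tiny illustration with made-up family means (not the paper's data, and using a plain family-mean heritability rather than the full half-sib variance-component estimates):

```python
import statistics

def expected_response(family_means, selected_fraction, h2, maximize=True):
    """Breeder's equation R = h2 * S: S is the selection differential
    between the truncated top (or bottom, for traits to be reduced,
    such as plant height) fraction of family means and the overall mean."""
    ranked = sorted(family_means, reverse=maximize)
    k = max(1, round(selected_fraction * len(ranked)))
    s = statistics.mean(ranked[:k]) - statistics.mean(family_means)
    return h2 * s

# Made-up means for 20 half-sib families, selecting the best 25%:
means = list(range(1, 21))
gain = expected_response(means, 0.25, h2=0.2)
```

Dividing the response by the population mean gives the percentage gains the abstract reports; the sign flips when selecting the lowest families, as done for plant height.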

Relevance: 30.00%

Abstract:

This paper is concerned with the problem of argument-function mismatch observed in the apparent subject-object inversion in Chinese consumption verbs, e.g., chi 'eat' and he 'drink', and accommodation verbs, e.g., zhu 'live' and shui 'sleep'. These verbs seem to allow the linking of [agent-SUBJ theme-OBJ] as well as [agent-OBJ theme-SUBJ], but only when the agent is also the semantic role denoting the measure or extent of the action. The account offered is formulated within LFG's lexical mapping theory. Under the simplest and also the strictest interpretation of the one-to-one argument-function mapping principle (or the theta-criterion), a composite role such as ag-ext receives syntactic assignment via one composing role only. One-to-one linking thus entails the suppression of the other composing role. Apparent subject-object inversion occurs when the more prominent agent role is suppressed and thus allows the less prominent extent role to dictate the linking of the entire ag-ext composite role. This LMT account also potentially facilitates a natural explanation of markedness among the competing syntactic structures.

Relevance: 30.00%

Abstract:

In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
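The effect described above is easy to reproduce on pure-noise data: selecting genes on the full data set before cross-validation yields an optimistically low error, while re-selecting inside each training fold does not (an illustrative sketch with a nearest-centroid rule and a mean-difference gene score; these are our choices for the demo, not the paper's classifiers or data):

```python
import random

def select_top(X, y, k):
    """Rank features by |difference of class means|, keep the top k."""
    d = len(X[0])
    def score(j):
        a = [x[j] for x, t in zip(X, y) if t == 0]
        b = [x[j] for x, t in zip(X, y) if t == 1]
        return abs(sum(a) / len(a) - sum(b) / len(b))
    return sorted(range(d), key=score, reverse=True)[:k]

def centroid_error(X, y, feats, folds=10):
    """10-fold CV error of a nearest-centroid rule. `feats` may be a fixed
    list (selection done once, outside CV -> biased) or None (selection
    redone on each training fold -> external, honest)."""
    n, err = len(y), 0
    for f in range(folds):
        test = [i for i in range(n) if i % folds == f]
        train = [i for i in range(n) if i % folds != f]
        fs = feats if feats is not None else select_top(
            [X[i] for i in train], [y[i] for i in train], 5)
        cent = {}
        for c in (0, 1):
            rows = [X[i] for i in train if y[i] == c]
            cent[c] = [sum(r[j] for r in rows) / len(rows) for j in fs]
        for i in test:
            d0 = sum((X[i][j] - m) ** 2 for j, m in zip(fs, cent[0]))
            d1 = sum((X[i][j] - m) ** 2 for j, m in zip(fs, cent[1]))
            err += (d0 < d1) != (y[i] == 0)
    return err / n

random.seed(1)
n, d = 40, 500                      # few samples, many pure-noise "genes"
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [i % 2 for i in range(n)]
biased = centroid_error(X, y, select_top(X, y, 5))  # selection outside CV
honest = centroid_error(X, y, None)                 # selection inside each fold
```

Since the features carry no signal, the honest external estimate hovers near 50% error, whereas the internally selected genes look spuriously predictive, which is the selection bias the paper quantifies and corrects.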