936 results for Bounded Variables


Relevance:

70.00%

Publisher:

Abstract:

This research develops an econometric framework to analyze time series processes with bounds. The framework is general enough to incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely sampled observations. It applies to situations in which the process is known to remain within an interval between observations, either through a known constraint or through the observation of extreme realizations of the process. The main statistical technique employs the theory of maximum likelihood estimation, which leads to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis carry several implications for empirical research: efficiency gains, bias reduction, and greater flexibility in model specification. A bias arises when bounding information is ignored; within this framework it is mitigated. An efficiency gain arises because the statistical methods make use of the conditioning information revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of these improvements is that more precise model estimation exposes other sources of misspecification: some processes reveal themselves to be unlikely candidates for a given diffusion model once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model, and this approach is used to produce a set of algorithms that make the model computationally feasible and more widely applicable.
Finally, the modeling framework is applied to a series of interest rates, which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
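
A minimal sketch of the kind of conditioning the abstract describes, under assumed simplifications (a Gaussian random walk rather than a general diffusion, a known lower bound b, and a grid search instead of a numerical optimizer; `log_lik` and `fit_sigma` are invented names for illustration): each transition density is conditioned on the event of staying above the bound, which is how bounding information enters the likelihood.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_lik(x, sigma, b=0.0):
    """Log-likelihood with truncated-normal transitions: each step is
    conditioned on the process staying above the known bound b."""
    ll = 0.0
    for prev, cur in zip(x, x[1:]):
        z = (cur - prev) / sigma
        mass = 1.0 - norm_cdf((b - prev) / sigma)  # P(stay above b)
        ll += math.log(norm_pdf(z) / sigma) - math.log(mass)
    return ll

def fit_sigma(x, b=0.0):
    """Grid-search MLE for the diffusion scale sigma."""
    grid = [0.05 + 0.01 * i for i in range(200)]
    return max(grid, key=lambda s: log_lik(x, s, b))
```

Dropping the `- math.log(mass)` term recovers a bound-ignoring likelihood; comparing the two fits on data near the bound illustrates the bias the abstract refers to.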

Relevance:

60.00%

Publisher:

Abstract:

The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomials-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. Thus we propose to use a weighted linear regression approach, where all k-order polynomials are used as predictand variables and weights are proportional to the reference density. Finally, for the case of second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison, among different rocks, of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
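
The "numerically fragile" direct approach mentioned above can be made concrete with a small sketch (assumed setup: a log-density on the interval [-1, 1], an orthonormal Legendre basis, and midpoint-rule scalar products; function names are invented for illustration):

```python
import math

def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def coordinates(logf, order, m=2000):
    """Coordinates <logf, L_n> in the orthonormal basis
    L_n = sqrt((2n+1)/2) * P_n, by midpoint-rule integration."""
    h = 2.0 / m
    xs = [-1.0 + (i + 0.5) * h for i in range(m)]
    coords = []
    for n in range(order + 1):
        norm = math.sqrt((2 * n + 1) / 2.0)
        coords.append(h * sum(logf(x) * norm * legendre(n, x) for x in xs))
    return coords
```

For logf(x) = x (a density proportional to exp(x)), all coordinates vanish except the first-order one, which equals sqrt(2/3); the weighted-regression approach proposed in the abstract replaces exactly these discretized scalar products.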


Relevance:

60.00%

Publisher:

Abstract:

A neural network model for solving constrained nonlinear optimization problems with bounded variables is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. The network is shown to be completely stable and globally convergent to the solutions of constrained nonlinear optimization problems. A fuzzy logic controller is incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.
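
As a hedged illustration of the idea (not the paper's valid-subspace construction), Hopfield-style continuous dynamics for a box-constrained problem can be approximated by projected gradient descent on a quadratic energy function; `solve_box_qp` and its parameters are invented for this sketch:

```python
def solve_box_qp(Q, b, lo, hi, eta=0.05, steps=2000):
    """Minimize 0.5*x'Qx - b'x subject to lo <= x <= hi by iterating a
    gradient step and projecting the state back onto the bounds."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        # gradient of the energy function: Qx - b
        g = [sum(Q[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        # state update followed by projection onto the box
        x = [min(hi[i], max(lo[i], x[i] - eta * g[i])) for i in range(n)]
    return x
```

With Q = diag(2, 2), b = (10, -10) and the box [0, 1]^2, the unconstrained minimizer (5, -5) is infeasible and the iteration settles on the projected solution (1, 0).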

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

40.00%

Publisher:

Abstract:

We present a method for learning treewidth-bounded Bayesian networks from data sets containing thousands of variables. Bounding the treewidth of a Bayesian network greatly reduces the complexity of inferences. Yet, being a global property of the graph, it considerably increases the difficulty of the learning process. Our algorithm accomplishes this task, scaling both to large domains and to large treewidths, and consistently outperforms the state of the art in experiments with up to thousands of variables.
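
The link between treewidth and inference cost rests on a standard fact the abstract relies on: the induced width of an elimination ordering upper-bounds the treewidth, and exact inference is exponential in that width. A small sketch (illustrative only; `induced_width` is not the paper's algorithm):

```python
def induced_width(adj, order):
    """Width of an elimination ordering: eliminate vertices in order,
    connecting the neighbours of each eliminated vertex (fill-in edges);
    the largest neighbourhood met this way upper-bounds the treewidth."""
    adj = {v: set(ns) for v, ns in adj.items()}  # defensive copy
    width = 0
    for v in order:
        ns = adj[v]
        width = max(width, len(ns))
        for a in ns:
            for c in ns:
                if a != c:
                    adj[a].add(c)  # fill-in edge
        for a in ns:
            adj[a].discard(v)  # remove v from the remaining graph
        del adj[v]
    return width
```

For a 4-cycle any ordering yields width 2 (its treewidth); for a path the width is 1, which is why keeping this quantity small keeps inference tractable.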

Relevance:

30.00%

Publisher:

Abstract:

Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This shift was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function).
If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, such cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout.
Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all the possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller one may also win, provided its relative size is not too small; more self-delusional voters in the minority party decrease this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006), and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
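
The Chapter 2 Cournot mechanism described above can be sketched numerically under purely illustrative assumptions (linear inverse demand P = a - q1 - q2 with a = 12, zero production cost, and a default quantity qd; none of these values come from the thesis):

```python
def profit(qi, qj, a=12.0):
    """Cournot profit with inverse demand P = a - q1 - q2 and zero cost."""
    return qi * max(0.0, a - qi - qj)

def best_response(qj, a=12.0):
    return max(0.0, (a - qj) / 2.0)

def asymmetric_equilibrium(qd, kappa, a=12.0):
    """Is (firm 1 thinks and best-responds, firm 2 plays the default qd)
    a Nash equilibrium when thinking costs kappa?"""
    q1 = best_response(qd, a)
    # the thinker must prefer paying kappa over playing the default
    thinker_ok = profit(q1, qd, a) - kappa >= profit(qd, qd, a)
    # the defaulter must prefer the default over thinking and best-responding
    defaulter_ok = profit(qd, q1, a) >= profit(best_response(q1, a), q1, a) - kappa
    return thinker_ok and defaulter_ok
```

With qd = 5, the asymmetric profile is an equilibrium only for intermediate thinking costs (roughly kappa between 0.56 and 2.25 here): too cheap and the defaulter prefers to think, too expensive and the thinker prefers the default, mirroring the comparative statics described in the abstract.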

Relevance:

30.00%

Publisher:

Abstract:

When can a single variable be more accurate in binary choice than multiple sources of information? We derive analytically the probability that a single variable (SV) will correctly predict one of two choices when both criterion and predictor are continuous variables. We further provide analogous derivations for multiple regression (MR) and equal weighting (EW) and specify the conditions under which the models differ in expected predictive ability. Key factors include variability in cue validities, intercorrelation between predictors, and the ratio of predictors to observations in MR. Theory and simulations are used to illustrate the differential effects of these factors. Results directly address why and when one-reason decision making can be more effective than analyses that use more information. We thus provide analytical backing to intriguing empirical results that, to date, have lacked theoretical justification. There are predictable conditions for which one should expect less to be more.
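
A Monte Carlo sketch of the comparison (an assumed data-generating process, not the paper's analytic derivation): the criterion is a weighted sum of independent Gaussian cues plus noise, and SV uses only the first (most valid) cue while EW sums all cues unweighted.

```python
import random

def accuracy(weights, use_single, trials=20000, seed=1):
    """Fraction of paired comparisons in which the model picks the
    alternative with the higher criterion value."""
    rng = random.Random(seed)
    k = len(weights)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(k)]
        b = [rng.gauss(0, 1) for _ in range(k)]
        ya = sum(w * c for w, c in zip(weights, a)) + rng.gauss(0, 1)
        yb = sum(w * c for w, c in zip(weights, b)) + rng.gauss(0, 1)
        if use_single:
            pa, pb = a[0], b[0]      # SV: most valid cue only
        else:
            pa, pb = sum(a), sum(b)  # EW: unweighted sum of all cues
        hits += (pa > pb) == (ya > yb)
    return hits / trials
```

With highly dispersed validities such as (1, 0.1, 0.1), SV beats EW, while with equal validities EW dominates: exactly the "variability in cue validities" effect the abstract identifies.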

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we present new results on superintegrable systems separable in polar coordinates. First, we give a complete classification of all superintegrable systems separable in polar coordinates that admit a third-order integral of motion. Potentials expressed in terms of the sixth Painlevé transcendent and the Weierstrass elliptic function are presented. We then introduce an infinite family of classical and quantum systems, integrable and exactly solvable in polar coordinates, indexed by a parameter k. The energy spectrum and the wave functions of the quantum systems are presented. A conjecture postulating the superintegrability of these systems is formulated and verified for k=1,2,3,4. The proposed integrals of motion are of order 2k, with k ∈ ℕ. The algebraic structure of the family of quantum systems is formulated in terms of a hidden algebra in which the number of generators depends on the parameter k. A quasi-exactly solvable and integrable generalization of the family of potentials is proposed. Finally, the classical trajectories of the family of systems are computed for all rational cases k ∈ ℚ; they are expressed in terms of Chebyshev polynomials. The curves associated with the trajectories are presented for the first cases k=1, 2, 3, 4, 1/2, 1/3 and 3/2, and the bounded trajectories are closed and periodic in phase space. These results thus lend further support to the conjecture.

Relevance:

30.00%

Publisher:

Abstract:

The heat conduction problem, in the presence of a change of state, was solved for the case of an indefinitely long cylindrical layer cavity. As boundary conditions, it is imposed that the internal surface of the cavity is maintained below the fusion temperature of the infilling substance and the external surface is kept above it. The solution, obtained in nondimensional variables, consists of two closed-form heat conduction equation solutions, for the solidified and liquid regions, which formally depend on the initially unknown position of the phase-change front. The energy balance across the phase-change front furnishes the equation for the time dependence of the front position, which is solved numerically. Substituting the front position at a particular instant into the heat conduction equation solutions gives the temperature distribution inside the cavity at that moment. The solution is illustrated with numerical examples. [DOI: 10.1115/1.4003542]
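
The structure of the solution (closed-form temperature fields coupled to a numerically integrated front equation) can be illustrated on the simplest planar analogue, not the paper's cylindrical case: a Stefan-type energy balance ds/dt = k/s for the front position s(t), whose similarity solution is s(t) = sqrt(s0^2 + 2kt). All names and values here are illustrative assumptions.

```python
import math

def front_position(k, t_end, dt=1e-4, s0=0.1):
    """Forward-Euler integration of the front energy balance ds/dt = k/s."""
    s, t = s0, 0.0
    while t < t_end:
        s += dt * k / s  # energy balance at the phase-change front
        t += dt
    return s

def front_exact(k, t, s0=0.1):
    """Closed-form similarity solution of the same balance."""
    return math.sqrt(s0 * s0 + 2.0 * k * t)
```

The numerical front tracks the closed form closely, mirroring how, in the paper, substituting the computed front position into the conduction solutions yields the temperature field at any instant.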

Relevance:

30.00%

Publisher:

Abstract:

This paper is concerned with an overview of upwinding schemes, and further nonlinear applications of a recently introduced high resolution upwind differencing scheme, namely the ADBQUICKEST [V.G. Ferreira, F.A. Kurokawa, R.A.B. Queiroz, M.K. Kaibara, C.M. Oishi, J.A.Cuminato, A.F. Castelo, M.F. Tomé, S. McKee, assessment of a high-order finite difference upwind scheme for the simulation of convection-diffusion problems, International Journal for Numerical Methods in Fluids 60 (2009) 1-26]. The ADBQUICKEST scheme is a new TVD version of the QUICKEST [B.P. Leonard, A stable and accurate convective modeling procedure based on quadratic upstream interpolation, Computer Methods in Applied Mechanics and Engineering 19 (1979) 59-98] for solving nonlinear balance laws. The scheme is based on the concept of NV and TVD formalisms and satisfies a convective boundedness criterion. The accuracy of the scheme is compared with other popularly used convective upwinding schemes (see, for example, Roe (1985) [19], Van Leer (1974) [18] and Arora & Roe (1997) [17]) for solving nonlinear conservation laws (for example, Buckley-Leverett, shallow water and Euler equations). The ADBQUICKEST scheme is then used to solve six types of fluid flow problems of increasing complexity: namely, 2D aerosol filtration by fibrous filters; axisymmetric flow in a tubular membrane; 2D two-phase flow in a fluidized bed; 2D compressible Orszag-Tang MHD vortex; axisymmetric jet onto a flat surface at low Reynolds number and full 3D incompressible flows involving moving free surfaces. The numerical simulations indicate that this convective upwinding scheme is a good generic alternative for solving complex fluid dynamics problems. © 2012.
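
As a baseline for the schemes surveyed above, first-order upwind differencing for the linear advection equation u_t + a u_x = 0 (a > 0) on a periodic grid is sketched below; ADBQUICKEST itself is a higher-order, TVD-limited refinement of this idea, and this sketch is not that scheme.

```python
def upwind_step(u, c):
    """One explicit first-order upwind step on a periodic grid;
    c = a*dt/dx is the Courant number (stable for 0 < c <= 1)."""
    n = len(u)
    # backward difference: information comes from the upwind (left) side
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
```

At c = 1 the step shifts the profile exactly one cell, and for 0 < c <= 1 the total variation of the solution does not grow, which is the boundedness (TVD) property that the convective boundedness criterion in the abstract formalizes.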

Relevance:

30.00%

Publisher:

Abstract:

Integer-valued data envelopment analysis (DEA) with alternative returns to scale technology has been introduced and developed recently by Kuosmanen and Kazemi Matin. The proportionality assumption of their introduced "natural augmentability" axiom in constant and nondecreasing returns to scale technologies makes it possible to achieve feasible decision-making units (DMUs) of arbitrarily large size. In many real-world applications it is not possible to achieve such production plans, since some of the input and output variables are bounded above. In this paper, we extend the axiomatic foundation of integer-valued DEA models to include bounded output variables. Some model variants are achieved by introducing a new axiom of "boundedness" over the selected output variables. A mixed integer linear programming (MILP) formulation is also introduced for computing efficiency scores in the associated production set. © 2011 The Authors. International Transactions in Operational Research © 2011 International Federation of Operational Research Societies.
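
A toy sketch of the efficiency-score idea (single input, single output, constant returns to scale; deliberately far simpler than the paper's MILP formulation): under CRS, each DMU's score is its output-to-input ratio relative to the best observed ratio, and an upper bound on an output variable simply caps the targets a DMU can be compared against.

```python
def efficiency_scores(inputs, outputs):
    """CCR-style CRS efficiency for one input and one output:
    each DMU's ratio relative to the best observed output/input ratio."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

For inputs (2, 4, 5) and outputs (4, 6, 10), the scores are (1.0, 0.75, 1.0): units 1 and 3 are efficient, unit 2 is not.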

Relevance:

20.00%

Publisher:

Abstract:

This is an ecological, analytical and retrospective study comprising the 645 municipalities in the State of São Paulo, the scope of which was to determine the relationship between socioeconomic and demographic variables, the model of care, and infant mortality rates in the period from 1998 to 2008. The average annual rate of change of each indicator was calculated per coverage stratum. Infant mortality was analyzed with a model for repeated measures over time, adjusted for the following correction variables: the city's population, the proportion of Family Health Programs (PSFs) deployed, the proportion of Growth Acceleration Programs (PACs) deployed, per capita GDP, and the SPSRI (São Paulo social responsibility index). The analysis was performed with generalized linear models, assuming a gamma distribution. Multiple comparisons were performed with the likelihood ratio test, with approximate chi-square distribution, at a significance level of 5%. There was a decrease in infant mortality over the years (p < 0.05), with no significant difference from 2004 to 2008 (p > 0.05). The proportion of PSFs deployed (p < 0.0001) and per capita GDP (p < 0.0001) were significant in the model. The decline of infant mortality in this period was influenced by the growth of per capita GDP and of PSF coverage.