70 results for General Linear Methods
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Calculating explicit closed-form solutions of Cournot models where firms have private information about their costs is, in general, very cumbersome. Most authors therefore consider linear demands and constant marginal costs. However, within this framework, the nonnegativity constraint on prices (and quantities) has been ignored or not properly dealt with, and the correct calculation of all Bayesian Nash equilibria is more complicated than expected. Moreover, multiple symmetric and interior Bayesian equilibria may exist for an open set of parameters. The reason for this is that linear demand is not really linear, since there is a kink at zero price: the general "linear" inverse demand function is P(Q) = max{a - bQ, 0} rather than P(Q) = a - bQ.
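For concreteness, the kinked inverse demand quoted in the abstract can be written case by case (this is only a restatement of the formula above, not an addition to the authors' model):

```latex
% "Linear" inverse demand with the kink at zero price:
P(Q) \;=\; \max\{a - bQ,\, 0\} \;=\;
\begin{cases}
  a - bQ, & 0 \le Q \le a/b,\\[2pt]
  0,      & Q > a/b,
\end{cases}
\qquad a,\, b > 0 .
```

It is this kink at Q = a/b, rather than any genuine linearity of demand, that drives the equilibrium multiplicity described above.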
Abstract:
Background: Evidence of a role of brain-derived neurotrophic factor (BDNF) in the pathophysiology of eating disorders (ED) has been provided by association studies and by murine models. BDNF plasma levels have been found altered in ED and in psychiatric disorders that show comorbidity with ED. Aims: Since the role of BDNF levels in ED-related psychopathological symptoms has not been tested, we investigated the correlation of BDNF plasma levels with the Symptom Checklist 90 Revised (SCL-90R) questionnaire in a total of 78 ED patients. Methods: BDNF levels, measured by the enzyme-linked immunoassay system, and the SCL-90R questionnaire were assessed in a total of 78 ED patients. The relationship between BDNF levels and SCL-90R scales was calculated using a general linear model. Results: BDNF plasma levels correlated with the Global Severity Index and the Positive Symptom Distress Index global scales and five of the nine subscales in the anorexia nervosa patients. BDNF plasma levels were able to explain, in the case of the Psychoticism subscale, up to 17% of the variability (p = 0.006). Conclusion: Our data suggest that BDNF levels could be involved in the severity of the disease through the modulation of psychopathological traits that are associated with the ED phenotype.
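A rough sketch of the kind of general linear model analysis described above, on simulated placeholder data (the variable names and values below are illustrative, not the study's data):

```python
# Sketch of a general linear model relating BDNF plasma levels to an SCL-90R
# subscale, in the spirit of the analysis described above. All values are
# simulated placeholders; only the sample size (78) is taken from the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 78
df = pd.DataFrame({"bdnf": rng.normal(20, 5, n)})                      # placeholder BDNF levels
df["scl90r_psychoticism"] = 0.05 * df["bdnf"] + rng.normal(0, 0.6, n)  # placeholder scores

# Ordinary least squares: the simplest general linear model of a subscale on BDNF.
model = smf.ols("scl90r_psychoticism ~ bdnf", data=df).fit()
print(model.summary())
print("share of variability explained (R^2):", round(model.rsquared, 3))
```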
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
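For orientation, the classical Holm (1979) stepdown procedure mentioned above can be sketched in a few lines. The critical values alpha/(k - i + 1) here are the deterministic classical ones; the paper replaces them with resampling-based estimates subject to the same monotonicity requirement.

```python
# Holm's stepdown procedure for familywise error rate (FWE) control.
def holm_stepdown(p_values, alpha=0.05):
    k = len(p_values)
    order = sorted(range(k), key=lambda i: p_values[i])   # indices, smallest p first
    rejected = [False] * k
    for step, idx in enumerate(order):
        if p_values[idx] <= alpha / (k - step):            # alpha/k, then alpha/(k-1), ...
            rejected[idx] = True
        else:
            break                                          # stepdown stops at the first non-rejection
    return rejected

# Example with five hypotheses:
print(holm_stepdown([0.001, 0.2, 0.03, 0.004, 0.8]))       # -> [True, False, False, True, False]
```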
Abstract:
Maximal-length binary sequences have been known for a long time. They have many interesting properties, one of which is that, when taken in blocks of n consecutive positions, they form 2ⁿ - 1 different codes in a closed circular sequence. This property can be used for measuring absolute angular positions, as the circle can be divided into as many parts as different codes can be retrieved. This paper describes how a closed binary sequence of arbitrary length can be effectively designed with the minimal possible block length, using linear feedback shift registers (LFSR). Such sequences can be used for measuring a specified exact number of angular positions, using the minimal possible number of sensors that linear methods allow.
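A minimal sketch of the underlying construction, using a textbook 4-bit maximal-length LFSR (the paper designs sequences of arbitrary length; this example only illustrates the window property for length 2⁴ − 1 = 15):

```python
# Maximal-length (m-)sequence from a 4-bit Fibonacci LFSR.
# Recurrence s[t] = s[t-1] XOR s[t-4] (primitive polynomial x^4 + x^3 + 1),
# so the period is 2**4 - 1 = 15 and, read circularly, every non-zero 4-bit
# window appears exactly once -- the property exploited for absolute encoders.
def lfsr_sequence(n=4, lags=(1, 4), seed=None):
    reg = list(seed) if seed is not None else [1] + [0] * (n - 1)  # reg[0] = most recent bit
    out = []
    for _ in range(2**n - 1):
        out.append(reg[-1])                 # emit the oldest bit
        new = 0
        for lag in lags:                    # XOR the tapped (lagged) bits
            new ^= reg[lag - 1]
        reg = [new] + reg[:-1]              # shift the register
    return out

seq = lfsr_sequence()
windows = {tuple((seq * 2)[i:i + 4]) for i in range(len(seq))}  # circular 4-bit windows
print(len(seq), len(windows))               # -> 15 15 (all windows distinct)
```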
Abstract:
Epipolar geometry is a key point in computer vision and the fundamental matrix estimation is the only way to compute it. This article surveys several methods of fundamental matrix estimation which have been classified into linear methods, iterative methods and robust methods. All of these methods have been programmed and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
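As an illustration of the linear class of estimators surveyed, here is a bare-bones normalized eight-point sketch in NumPy (without the iterative or robust refinements the article also covers):

```python
# Minimal normalized eight-point estimate of the fundamental matrix F
# from point correspondences x1 <-> x2 (each an (N, 2) array, N >= 8).
import numpy as np

def normalize(pts):
    # Translate to the centroid and scale so the mean distance is sqrt(2).
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(x1, x2):
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 (the singularity constraint of the fundamental matrix).
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0
    F = U @ np.diag(S) @ Vt
    return T2.T @ F @ T1        # undo the normalization
```

The returned matrix follows the x2ᵀ F x1 = 0 convention for the input correspondences.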
Abstract:
T-cell mediated immune response (CMI) has been widely studied in relation to individual and fitness components in birds. However, few studies have simultaneously examined individual and social factors and habitat-mediated variance in the immunity of chicks and adults from the same population and in the same breeding season. We investigated ecological and physiological variance in CMI of male and female nestlings and adults in a breeding population of Cory's Shearwaters (Calonectris diomedea) in the Mediterranean Sea. Explanatory variables included individual traits (body condition, carbon and nitrogen stable isotope ratios, plasma total proteins, triglycerides, uric acid, osmolarity, β-hydroxy-butyrate, erythrocyte mean corpuscular diameter, hematocrit, and hemoglobin) and burrow traits (temperature, isolation, and physical structure). During incubation, immune response of adult males was significantly greater than that of females. Nestlings exhibited a lower immune response than adults. Ecological and physiological factors affecting immune response differed between adults and nestlings. General linear models showed that immune response in adult males was positively associated with burrow isolation, suggesting that males breeding at higher densities suffer immune system suppression. In contrast, immune response in chicks was positively associated with body condition and plasma triglyceride levels. Therefore, adult immune response appears to be associated with social stress, whereas a trade-off between immune function and fasting capability may exist for nestlings. Our results, and those from previous studies, provide support for an asymmetrical influence of ecological and physiological factors on the health of different age and sex groups within a population, and for the importance of simultaneously considering individual and population characteristics in intraspecific studies of immune response.
Abstract:
The problem of finding a feasible solution to a linear inequality system arises in numerous contexts. In [12] the authors proposed an algorithm, called the extended relaxation method, that solves the feasibility problem, and proved its convergence. In this paper, we consider a class of extended relaxation methods depending on a parameter and prove their convergence. Numerical experiments are provided as well.
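For orientation, a sketch of the classical relaxation iteration for A x ≤ b on which such methods build, with the relaxation parameter lam playing the role of the parameter in the family studied (this is the basic scheme, not the extended method of [12]):

```python
# Classical relaxation method for the feasibility problem A x <= b:
# at each step, project onto the most violated half-space, scaled by lam in (0, 2).
import numpy as np

def relaxation_feasibility(A, b, lam=1.5, tol=1e-9, max_iter=10_000):
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        residuals = A @ x - b                       # positive entries = violated constraints
        i = np.argmax(residuals)
        if residuals[i] <= tol:                     # all constraints (approximately) satisfied
            return x
        a = A[i]
        x = x - lam * residuals[i] / (a @ a) * a    # relaxed projection onto {a.x <= b_i}
    raise RuntimeError("no feasible point found within max_iter iterations")

# Example: x1 >= 1, x2 >= 1, x1 + x2 <= 4, written as A x <= b.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([-1.0, -1.0, 4.0])
print(relaxation_feasibility(A, b))                 # -> a feasible point, here [1.5 1.5]
```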
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression models with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ on the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equations models, such as LISREL, EQS, LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
Abstract:
The choice network revenue management (RM) model incorporates customer purchase behavior as customers purchasing products with certain probabilities that are a function of the offered assortment of products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The underlying stochastic dynamic program is intractable and even its certainty-equivalence approximation, in the form of a linear program called the Choice Deterministic Linear Program (CDLP), is difficult to solve in most cases. The separation problem for CDLP is NP-complete for MNL with just two segments when their consideration sets overlap; the affine approximation of the dynamic program is NP-complete for even a single-segment MNL. This is in contrast to the independent-class (perfect-segmentation) case, where even the piecewise-linear approximation has been shown to be tractable. In this paper we investigate the piecewise-linear approximation for network RM under a general discrete-choice model of demand. We show that the gap between the CDLP and the piecewise-linear bounds is within a factor of at most 2. We then show that the piecewise-linear approximation is polynomial-time solvable for a fixed consideration set size, bringing it into the realm of tractability for small consideration sets; small consideration sets are a reasonable modeling tradeoff in many practical applications. Our solution relies on showing that for any discrete-choice model the separation problem for the linear program of the piecewise-linear approximation can be solved exactly by a Lagrangian relaxation. We give modeling extensions and show by numerical experiments the improvements from using piecewise-linear approximation functions.
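For reference, the CDLP mentioned above is commonly stated in the following form in the choice-based network RM literature (a standard statement offered for context, not a reproduction of this paper's own development):

```latex
% Choice Deterministic Linear Program (CDLP): t(S) is the time offer set S is
% made available over a horizon of length T, lambda the arrival rate, P_j(S)
% the purchase probability of product j under S, r_j its revenue, A_j its
% capacity-usage column, and c the capacity vector.
\begin{aligned}
\max_{t(\cdot)\ge 0}\quad & \sum_{S} \lambda \Big(\sum_{j \in S} P_j(S)\, r_j\Big)\, t(S)\\
\text{s.t.}\quad & \sum_{S} \lambda \Big(\sum_{j \in S} P_j(S)\, A_j\Big)\, t(S) \le c,\\
& \sum_{S} t(S) \le T .
\end{aligned}
```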
Abstract:
We study markets where the characteristics or decisions of certain agents are relevant but not known to their trading partners. Assuming exclusive transactions, the environment is described as a continuum economy with indivisible commodities. We characterize incentive efficient allocations as solutions to linear programming problems and appeal to duality theory to demonstrate the generic existence of external effects in these markets. Because under certain conditions such effects may generate non-convexities, randomization emerges as a theoretical possibility. In characterizing market equilibria we show that, consistent with the personalized nature of transactions, prices are generally non-linear in the underlying consumption. On the other hand, external effects may have critical implications for market efficiency. With adverse selection, in fact, cross-subsidization across agents with different private information may be necessary for optimality, and so, the market need not even achieve an incentive efficient allocation. In contrast, for the case of a single commodity, we find that when informational asymmetries arise after the trading period (e.g. moral hazard; ex post hidden types) external effects are fully internalized at a market equilibrium.
Stabilized Petrov-Galerkin methods for the convection-diffusion-reaction and the Helmholtz equations
Abstract:
We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations, respectively. The work embarks upon an a priori analysis of some consistency recovery procedures for stabilization methods belonging to the Petrov–Galerkin framework. It was found that the use of some standard practices (e.g. M-matrix theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications on the formulation of the maximum principle, M-matrix theory, monotonicity and total variation diminishing (TVD) finite volume schemes. The current method follows earlier methods that may be viewed as an upwinding operator plus a discontinuity-capturing operator. Finally, some remarks are made on the extension of the HRPG method to multidimensions. Next, we present a new numerical scheme for the Helmholtz equation resulting in quasi-exact solutions. The focus is on the approximation of the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is to provide a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analysis in 1D and 2D illustrates the quasi-exact properties of this scheme. Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
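A small 1D check of the averaging idea, under the assumptions of linear elements on a uniform mesh (an illustrative derivation, not the paper's own analysis): averaging the Galerkin mass weights [1, 4, 1]/6 with the finite-difference weights [0, 1, 0] gives [1, 10, 1]/12, and the resulting numerical dispersion can be compared at the stated resolution of eight elements per wavelength.

```python
# 1D dispersion comparison for the Helmholtz equation u'' + k^2 u = 0 on a
# uniform mesh of size h. All three schemes share the stiffness row
# [-1, 2, -1]/h and differ only in the "mass" weights [a, b, a]:
#   classical FD:        a = 0,    b = 1
#   Galerkin FEM:        a = 1/6,  b = 2/3
#   averaged (FEM+FD)/2: a = 1/12, b = 5/6
# Inserting u_j = exp(i*kappa_h*j*h) into the stencil gives
#   cos(kappa_h * h) = (2 - b*(kh)^2) / (2 + 2*a*(kh)^2).
import numpy as np

def numerical_wavenumber(kh, a, b):
    return np.arccos((2 - b * kh**2) / (2 + 2 * a * kh**2))   # = kappa_h * h

kh = 2 * np.pi / 8   # eight elements per wavelength, as in the abstract
for name, a, b in [("FD", 0.0, 1.0), ("FEM", 1/6, 2/3), ("averaged", 1/12, 5/6)]:
    rel_phase_err = abs(numerical_wavenumber(kh, a, b) - kh) / kh
    print(f"{name:9s} relative phase error: {rel_phase_err:.2e}")
```

In this 1D setting the FD and FEM phase errors have opposite signs and largely cancel, so the averaged stencil's error comes out more than an order of magnitude smaller than either parent scheme's.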
Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension
Abstract:
In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), to a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
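To make the coarse-to-fine idea concrete, here is a minimal MR/OPT-style sketch: a linearized Horn-Schunck energy (an assumption of this sketch, as are the smoothness weight and the synthetic frames) is minimized at each pyramid level with SciPy's truncated-Newton (TNC) solver, and the coarse solution is prolonged to initialize the next finer level. This is not the authors' FMG/OPT implementation, which additionally uses the coarse-grid correction as a search direction.

```python
# Coarse-to-fine (MR/OPT-style) optical flow sketch with a truncated-Newton solver.
import numpy as np
from scipy import ndimage, optimize

def hs_energy_and_grad(flow, I1, I2, alpha):
    H, W = I1.shape
    u, v = flow.reshape(2, H, W)
    Iy, Ix = np.gradient(I1)                     # spatial derivatives
    It = I2 - I1                                 # temporal derivative
    r = Ix * u + Iy * v + It                     # linearized brightness-constancy residual
    dxu, dyu = np.diff(u, axis=1), np.diff(u, axis=0)
    dxv, dyv = np.diff(v, axis=1), np.diff(v, axis=0)
    energy = np.sum(r**2) + alpha * (np.sum(dxu**2) + np.sum(dyu**2)
                                     + np.sum(dxv**2) + np.sum(dyv**2))
    gu, gv = 2 * Ix * r, 2 * Iy * r              # gradient of the data term
    for g, dx, dy in [(gu, dxu, dyu), (gv, dxv, dyv)]:   # gradient of the smoothness term
        g[:, :-1] -= 2 * alpha * dx; g[:, 1:] += 2 * alpha * dx
        g[:-1, :] -= 2 * alpha * dy; g[1:, :] += 2 * alpha * dy
    return energy, np.concatenate([gu.ravel(), gv.ravel()])

def coarse_to_fine_flow(I1, I2, levels=3, alpha=10.0):
    pyramid = [(I1, I2)]
    for _ in range(levels - 1):                  # coarsen by a factor of 2 per level
        pyramid.append(tuple(ndimage.zoom(im, 0.5, order=1) for im in pyramid[-1]))
    flow = np.zeros((2,) + pyramid[-1][0].shape)
    for J1, J2 in reversed(pyramid):             # coarsest to finest
        if flow.shape[1:] != J1.shape:           # prolong the coarse solution
            s = np.array(J1.shape) / np.array(flow.shape[1:])
            flow = np.stack([ndimage.zoom(f, s, order=1) for f in flow]) * s.mean()
        res = optimize.minimize(hs_energy_and_grad, flow.ravel(), jac=True,
                                args=(J1, J2, alpha), method="TNC")
        flow = res.x.reshape((2,) + J1.shape)
    return flow                                  # flow[0] = u, flow[1] = v

# Tiny synthetic example: the second frame is the first shifted by one pixel.
rng = np.random.default_rng(0)
I1 = ndimage.gaussian_filter(rng.random((64, 64)), 3)
I2 = np.roll(I1, 1, axis=1)
u, v = coarse_to_fine_flow(I1, I2)
print("mean estimated horizontal displacement:", u.mean())
```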