43 results for "Linear and nonlinear methods"
Abstract:
Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
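As a concrete illustration of the distribution-free estimate of $\Gamma$ described above, the sketch below computes the empirical covariance of the per-observation vectorized second-order cross-products. The helper names (`vech`, `gamma_hat`) are mine, and the sketch assumes i.i.d. sampling rather than the multi-stage complex designs the paper also covers.

```python
import numpy as np

def vech(a):
    """Half-vectorization: stack the lower triangle of a symmetric matrix."""
    r, c = np.tril_indices(a.shape[0])
    return a[r, c]

def gamma_hat(x):
    """Distribution-free estimate of Gamma, the asymptotic covariance matrix
    of the vector of sample second-order (central) moments. x: (n, p) data."""
    xc = x - x.mean(axis=0)                       # center the data
    # d_i = vech((x_i - xbar)(x_i - xbar)') for each observation i
    d = np.array([vech(np.outer(row, row)) for row in xc])
    return np.cov(d, rowvar=False)                # empirical covariance of the d_i

# Example on simulated data
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 3))
print(gamma_hat(x).shape)   # (6, 6): p(p+1)/2 = 6 distinct second-order moments
```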
Abstract:
This paper extends multivariate Granger causality to take into account the subspaces along which Granger causality occurs, as well as long-run Granger causality. The properties of these new notions of Granger causality, along with the requisite restrictions, are derived and extensively studied for a wide variety of time series processes, including linear invertible processes and VARMA. Using the proposed extensions, the paper demonstrates that: (i) mean reversion in L2 is an instance of long-run Granger non-causality, (ii) cointegration is a special case of long-run Granger non-causality along a subspace, (iii) controllability is a special case of Granger causality, and finally (iv) linear rational expectations entail (possibly testable) Granger causality restrictions along subspaces.
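The subspace and long-run notions above extend classical Granger causality. As a baseline reference point, here is a minimal sketch of the classical bivariate test (an F test of zero lag restrictions) using statsmodels; the simulated coefficients are arbitrary.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulate a bivariate system in which x Granger-causes y
rng = np.random.default_rng(1)
n = 500
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t-1] + rng.normal()
    y[t] = 0.3 * y[t-1] + 0.8 * x[t-1] + rng.normal()

# H0: the second column (x) does NOT Granger-cause the first column (y)
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
print(res[1][0]["ssr_ftest"][1])   # p-value at lag 1: small, so H0 is rejected
```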
Abstract:
Price bubbles in an Arrow-Debreu valuation equilibrium in an infinite-time economy are a manifestation of the lack of countable additivity of the valuation of assets. In contrast, known examples of price bubbles in sequential equilibrium in infinite time cannot be attributed to the lack of countable additivity of valuation. In this paper we develop a theory of valuation of assets in sequential markets (with no uncertainty) and study the nature of price bubbles in light of this theory. We consider an operator, called the payoff pricing functional, that maps a sequence of payoffs to the minimum cost of an asset holding strategy that generates it. We show that the payoff pricing functional is linear and countably additive on the set of positive payoffs if and only if there is no Ponzi scheme, provided that there is no restriction on long positions in the assets. In the known examples of equilibrium price bubbles in sequential markets, valuation is linear and countably additive. The presence of a price bubble indicates that the asset's dividends can be purchased in sequential markets at a cost lower than the asset's price. We also present examples of equilibrium price bubbles in which valuation is nonlinear, or linear but not countably additive.
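The payoff pricing functional above is defined over infinite payoff sequences and trading strategies. A stylized one-period analogue conveys the idea: the value of a payoff is the minimum cost of a portfolio generating (here, dominating) it, computable as a linear program. The prices and payoff matrix below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Stylized one-period analogue of the payoff pricing functional:
# q(z) = min { p . h : X h >= z }, the minimum cost of a portfolio h whose
# payoff X h dominates the target payoff z (holdings unrestricted in sign).
p = np.array([1.0, 0.9])                 # asset prices (hypothetical)
X = np.array([[1.0, 2.0],                # payoffs: rows = states, cols = assets
              [1.0, 0.5]])
z = np.array([1.0, 1.0])                 # target payoff to generate

# linprog minimizes p.h subject to -X h <= -z, i.e. X h >= z
res = linprog(c=p, A_ub=-X, b_ub=-z, bounds=[(None, None)] * len(p))
print(res.x, res.fun)                    # cheapest generating portfolio, its cost
```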
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
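For reference, here is a minimal sketch of Holm's (1979) stepdown procedure, the non-resampling baseline the paper improves upon; it makes the monotone critical-value structure explicit.

```python
import numpy as np

def holm_stepdown(pvals, alpha=0.05):
    """Holm's (1979) stepdown procedure: working through the ordered p-values,
    reject while p_(i) <= alpha / (k - i + 1); controls the FWE under arbitrary
    dependence. Returns a boolean rejection vector in the original order."""
    p = np.asarray(pvals)
    k = len(p)
    order = np.argsort(p)
    reject = np.zeros(k, dtype=bool)
    for i, idx in enumerate(order):            # i = 0, ..., k-1
        if p[idx] <= alpha / (k - i):          # monotone critical value alpha/(k-i)
            reject[idx] = True
        else:
            break                              # stop at the first non-rejection
    return reject

print(holm_stepdown([0.001, 0.01, 0.04, 0.30]))  # [ True  True False False]
```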
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function where pure inertia and pure variety-seeking behaviors can be explained in the model as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to a variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine cracker, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
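A minimal dynamic-programming sketch of the two linear special cases mentioned above, with a hypothetical parametrization (not the paper's): the state is the last brand bought and per-period utility is u(j | i) = v[j] + gamma * (j == i), so gamma > 0 yields pure inertia and gamma < 0 pure variety seeking.

```python
import numpy as np

v = np.array([1.0, 0.95])      # intrinsic brand utilities (assumed)
gamma, beta = -0.3, 0.9        # state-dependence parameter and discount factor
n = len(v)

V = np.zeros(n)
for _ in range(500):           # value iteration on the Bellman equation
    # Q[i, j] = u(j | i) + beta * V[j], with state i = last brand bought
    Q = v[None, :] + gamma * np.eye(n) + beta * V[None, :]
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)      # optimal next brand given the last brand
print(policy)                  # gamma < 0 gives [1 0]: alternate brands (variety seeking)
```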
Abstract:
Principal curves were defined by Hastie and Stuetzle (JASA, 1989) as smooth curves passing through the middle of a multidimensional data set. They are nonlinear generalizations of the first principal component, a characterization of which is the basis for the principal curves definition. In this paper we propose an alternative approach based on a different property of principal components. Consider a point in the space where a multivariate normal is defined and, for each hyperplane containing that point, compute the total variance of the normal distribution conditioned to belong to that hyperplane. Choose now the hyperplane minimizing this conditional total variance and look for the corresponding conditional mean. The first principal component of the original distribution passes through this conditional mean and is orthogonal to that hyperplane. This property is easily generalized to data sets with nonlinear structure. Repeating the search from different starting points, many points analogous to conditional means are found. We call them principal oriented points. When a one-dimensional curve runs through the set of these special points, it is called a principal curve of oriented points. Successive principal curves are recursively defined from a generalization of the total variance.
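A sketch of the construction for two-dimensional data: conditioning on "belonging to the hyperplane" is approximated empirically by a slab of half-width h around each candidate line through a starting point. The function name and the slab approximation are my own devices, not the paper's algorithm.

```python
import numpy as np

def principal_oriented_point(X, x0, h=0.5, n_dirs=180):
    """Among lines through x0 (hyperplanes in 2-D), pick the one minimizing the
    total variance of the points lying near it, and return their mean."""
    best = (np.inf, None)
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        u = np.array([np.cos(theta), np.sin(theta)])   # unit normal of the line
        near = np.abs((X - x0) @ u) < h                # slab around the line
        if near.sum() < 5:
            continue
        tv = np.trace(np.cov(X[near].T))               # conditional total variance
        if tv < best[0]:
            best = (tv, X[near].mean(axis=0))          # conditional mean
    return best[1]

# Example: data with nonlinear (circular) structure
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 400)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(400, 2))
print(principal_oriented_point(X, np.array([1.1, 0.0])))  # lands near the curve
```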
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
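As a sketch of the "movie" idea: a Box-Cox-type power transformation tends to the log transform as its parameter goes to zero, so stepping the exponent in small increments morphs a PCA-style map of compositional data toward a logratio-style map. The double centering and the specific parametrization here are illustrative, not necessarily the presentation's exact scheme.

```python
import numpy as np

def power_frame(X, alpha):
    """One 'frame' of the movie: Box-Cox-style power transform, tending to the
    log transform as alpha -> 0, followed by double centering and an SVD map."""
    Z = np.log(X) if alpha == 0 else (X**alpha - 1.0) / alpha
    Z = Z - Z.mean(axis=0)                       # column-center
    Z = Z - Z.mean(axis=1, keepdims=True)        # row-center (double centering)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :2] * s[:2]                      # 2-D map coordinates

# Vary the linking parameter in small steps to watch one method morph into another
X = np.random.default_rng(3).dirichlet([4, 3, 2, 1], size=100)  # compositions
frames = [power_frame(X, a) for a in np.linspace(1.0, 0.0, 11)]
```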
Abstract:
A Lagrangian treatment of the quantization of first class Hamiltonian systems with constraints and Hamiltonian linear and quadratic in the momenta, respectively, is performed. The "first reduce and then quantize" and the "first quantize and then reduce" (Dirac's) methods are compared. A source of ambiguities in the latter approach is pointed out, and its relevance to issues concerning self-consistency and equivalence with the "first reduce" method is emphasized. One of the main results is the relation between the propagator obtained à la Dirac and the propagator in the full space. As an application of the formalism developed, quantization on coset spaces of compact Lie groups is presented. In this case it is shown that a natural selection of a Dirac quantization allows for full self-consistency and equivalence. Finally, the specific case of the propagator on a two-dimensional sphere S² viewed as the coset space SU(2)/U(1) is worked out. © 1995 American Institute of Physics.
Abstract:
The oxidation of solutions of glucose with methylene-blue as a catalyst in basic media can induce hydrodynamic overturning instabilities, termed chemoconvection in recognition of their similarity to convective instabilities. The phenomenon is due to gluconic acid, the marginally dense product of the reaction, which gradually builds an unstable density profile. Experiments indicate that dominant pattern wavenumbers initially increase before gradually decreasing or can even oscillate for long times. Here, we perform a weakly nonlinear analysis for an established model of the system with simple kinetics, and show that the resulting amplitude equation is analogous to that obtained in convection with insulating walls. We show that the amplitude description predicts that dominant pattern wavenumbers should decrease in the long term, but does not reproduce the aforementioned increasing wavenumber behavior in the initial stages of pattern development. We hypothesize that this is due to horizontally homogeneous steady states not being attained before pattern onset. We show that the behavior can be explained using a combination of pseudo-steady-state linear and steady-state weakly nonlinear theories. The results obtained are in qualitative agreement with the analysis of experiments.
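To illustrate the long-time decrease in dominant wavenumber that the amplitude description predicts, here is a sketch that integrates a generic long-wave amplitude equation of the kind arising for convection with insulating walls and tracks the dominant wavenumber. The equation and coefficients are an illustrative stand-in, not the paper's amplitude equation; differentiating it once in x gives a Cahn-Hilliard equation, whose coarsening drives the dominant wavenumber down.

```python
import numpy as np

# A_t = -r A_xx - A_xxxx + (A_x^3)_x on a periodic domain, pseudo-spectral in x,
# with a semi-implicit treatment of the stiff linear terms. Coefficients are
# illustrative only.
L, N, dt, r = 100.0, 256, 0.005, 1.0
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
A = 0.01 * np.random.default_rng(4).standard_normal(N)

def dominant_wavenumber(A):
    spec = np.abs(np.fft.rfft(A - A.mean()))
    return (2 * np.pi / L) * (spec[1:].argmax() + 1)   # skip the mean mode

for step in range(40000):
    Ah = np.fft.fft(A)
    Ax = np.fft.ifft(1j * k * Ah).real
    nonlin = np.fft.fft(Ax**3) * 1j * k                # (A_x^3)_x, explicit
    Ah = (Ah + dt * nonlin) / (1 - dt * (r * k**2 - k**4))  # implicit linear step
    A = np.fft.ifft(Ah).real
    if step % 10000 == 0:
        print(step, round(dominant_wavenumber(A), 3))  # decreases: coarsening
```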
Abstract:
Interfacial hydrodynamic instabilities arise in a range of chemical systems. One mechanism for instability is the occurrence of unstable density gradients due to the accumulation of reaction products. In this paper we conduct two-dimensional nonlinear numerical simulations for a member of this class of system: the methylene-blue-glucose reaction. The result of these reactions is the oxidation of glucose to a relatively, but marginally, dense product, gluconic acid, that accumulates at oxygen-permeable interfaces, such as the surface open to the atmosphere. The reaction is catalyzed by methylene-blue. We show that simulations help to disassemble the mechanisms responsible for the onset of instability and the evolution of patterns, and we demonstrate that some of the results are remarkably consistent with experiments. We probe the impact of the upper oxygen boundary condition, for fixed-flux, fixed-concentration, or mixed boundary conditions, and find significant qualitative differences in solution behavior; structures either attract or repel one another depending on the boundary condition imposed. We suggest that measurement of the form of the boundary condition is possible via observation of oxygen penetration, and improved product yields may be obtained via proper control of boundary conditions in an engineering setting. We also investigate the dependence on parameters such as the Rayleigh number and depth. Finally, we find that pseudo-steady linear and weakly nonlinear techniques described elsewhere are useful tools for predicting the behavior of instabilities beyond their formal range of validity, as good agreement is obtained with the simulations.
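As a sketch of how the three upper boundary conditions differ, consider a steady one-dimensional reaction-diffusion balance for oxygen, c'' = kappa*c, with a no-flux bottom; fixed concentration (Dirichlet), fixed flux (Neumann) and mixed (Robin) conditions at the surface give visibly different profiles. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Steady 1-D oxygen balance c'' = kappa*c on 0 <= z <= depth (z = 0 is the
# surface), discretized by finite differences; illustrative parameters.
N, depth, kappa = 101, 1.0, 50.0
z = np.linspace(0.0, depth, N); h = z[1] - z[0]

def solve(bc):
    A = np.zeros((N, N)); b = np.zeros(N)
    for i in range(1, N - 1):                       # interior nodes
        A[i, i - 1:i + 2] = [1.0, -2.0 - kappa * h**2, 1.0]
    A[-1, -1], A[-1, -2] = 1.0, -1.0                # no-flux bottom: c'(depth) = 0
    if bc == "fixed concentration":                 # Dirichlet: c(0) = 1
        A[0, 0], b[0] = 1.0, 1.0
    elif bc == "fixed flux":                        # Neumann: c'(0) = -1
        A[0, 0], A[0, 1], b[0] = -1.0, 1.0, -h
    else:                                           # Robin (mixed): c'(0) = c(0) - 1
        A[0, 0], A[0, 1], b[0] = -1.0 - h, 1.0, -h
    return np.linalg.solve(A, b)

for bc in ("fixed concentration", "fixed flux", "mixed"):
    c = solve(bc)
    print(f"{bc}: surface c = {c[0]:.3f}, c at depth = {c[-1]:.2e}")
```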
Abstract:
Canopy characterization is a key factor in improving pesticide application methods in tree crops and vineyards. The development of quick, easy and efficient methods to determine the fundamental parameters used to characterize canopy structure is thus an important need. In this research the use of ultrasonic and LIDAR sensors has been compared with the traditional manual and destructive canopy measurement procedure. For both methods the values of key parameters such as crop height, crop width, crop volume and leaf area have been compared. The results obtained indicate that an ultrasonic sensor is an appropriate tool to determine the average canopy characteristics, while a LIDAR sensor provides more accurate and detailed information about the canopy. Good correlations have been obtained between crop volume (CVU) values measured with ultrasonic sensors and the leaf area index, LAI (R² = 0.51). A good correlation has also been obtained between the canopy volume measured with ultrasonic and LIDAR sensors (R² = 0.52). Laser measurements of crop height (CHL) allow one to accurately predict the canopy volume. The proposed new technologies seem very appropriate as complementary tools for improving the efficiency of pesticide applications, although further improvements are still needed.
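The calibrations reported above (e.g. R² = 0.51 between CVU and LAI) are ordinary least-squares fits; the sketch below shows the computation on simulated placeholder data, not the paper's measurements.

```python
import numpy as np

# Regress leaf area index (LAI) on ultrasonically measured crop volume (CVU)
# and report R^2. Data are simulated placeholders with an assumed slope.
rng = np.random.default_rng(5)
cvu = rng.uniform(0.5, 3.0, 40)                 # crop volume per tree (assumed units)
lai = 0.8 * cvu + rng.normal(0, 0.6, 40)        # noisy linear relation (assumed)

slope, intercept = np.polyfit(cvu, lai, 1)      # ordinary least squares
pred = slope * cvu + intercept
r2 = 1 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
print(f"LAI = {slope:.2f} * CVU + {intercept:.2f},  R^2 = {r2:.2f}")
```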