951 results for two stage least square


Relevance: 100.00%

Abstract:

Ownership concentration as a governance mechanism has received considerable attention among academicians, practitioners, and policy makers because large-block shareholders are increasingly active in demanding that corporations adopt effective governance mechanisms to control managerial decisions, including corporate debt policy. Earlier studies on the agency model of the firm widely recognize that managerial ownership and external debt play an important role in mitigating agency conflicts and enhancing firm value. They also find that an increase in external monitors, for example institutional investors, can play a useful role in limiting agency problems in the firm. Using 100 Composite Index companies from Bursa Malaysia between 1998 and 2002, this paper explores the impact of institutional holdings on managerial ownership and debt policy in an integrated framework, using a simultaneous-equations estimation procedure (two-stage least squares, 2SLS). The findings show that institutional ownership, serving as an effective control mechanism, has a significant impact on managerial ownership and corporate debt policy, as hypothesized. Such evidence suggests that institutional holdings play an important role in managers' strategic decisions and reduce agency conflict. In addition, corporate debt policy is itself governed by managerial ownership and exhibits a negative relation.
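
As a rough illustration of the estimation strategy, the following is a minimal sketch of a 2SLS fit in Python with NumPy. The variable names and synthetic data (institutional holdings as the instrument, managerial ownership as the endogenous regressor, debt as the outcome) are illustrative assumptions, not the paper's dataset or specification, and a real analysis would use a simultaneous-equations package with proper 2SLS standard errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Illustrative synthetic data: institutional holdings act as the instrument,
# managerial ownership is the endogenous regressor, debt is the outcome.
inst = rng.normal(size=n)                       # instrument (exogenous)
u = rng.normal(size=n)                          # shared error -> endogeneity
mgr = 0.5 * inst + u + rng.normal(size=n)       # endogenous regressor
debt = -0.8 * mgr + u + rng.normal(size=n)      # structural outcome

def add_const(x):
    return np.column_stack([np.ones(len(x)), x])

# Stage 1: regress the endogenous regressor on the instrument.
Z = add_const(inst)
gamma, *_ = np.linalg.lstsq(Z, mgr, rcond=None)
mgr_hat = Z @ gamma                             # fitted (exogenous) part

# Stage 2: regress the outcome on the stage-1 fitted values.
X_hat = add_const(mgr_hat)
beta, *_ = np.linalg.lstsq(X_hat, debt, rcond=None)
print("2SLS estimate of the managerial-ownership coefficient:", beta[1])
```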

Relevance: 100.00%

Abstract:

This paper provides a root-n consistent, asymptotically normal weighted least squares estimator of the coefficients in a truncated regression model. The distribution of the errors is unknown and permits general forms of unknown heteroskedasticity. Also provided is an instrumental-variables-based two-stage least squares estimator for this model, which can be used when some regressors are endogenous, mismeasured, or otherwise correlated with the errors. A simulation study indicates that the new estimators perform well in finite samples. Our limiting distribution theory includes a new asymptotic trimming result addressing the boundary bias in first-stage density estimation without knowledge of the support boundary. © 2007 Cambridge University Press.
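
The weighted least squares step at the core of such an estimator can be sketched in closed form. This is a generic WLS solve; the paper's kernel-based weights and asymptotic trimming are not reproduced, so the weights below are arbitrary placeholders.

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i'b)^2.

    Solves the normal equations (X'WX) b = X'W y.
    """
    Xw = X * w[:, None]                 # row-scale X by the weights
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# Toy usage with arbitrary positive weights (placeholders for the
# estimator's data-driven weights).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
w = rng.uniform(0.5, 1.5, size=50)
print(wls(X, y, w))
```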

Relevance: 100.00%

Abstract:

This dissertation deals with the problem of making inference when there is weak identification in models of instrumental variables regression. More specifically, we are interested in one-sided hypothesis testing for the coefficient of the endogenous variable when the instruments are weak. The focus is on conditional tests based on likelihood ratio, score and Wald statistics. Theoretical and numerical work shows that the conditional t-test based on the two-stage least squares (2SLS) estimator performs well even when instruments are weakly correlated with the endogenous variable. The conditional approach corrects size uniformly, and when the population F-statistic is as small as two, its power is near the power envelopes for similar and non-similar tests. This finding is surprising considering the poor performance of the two-sided conditional t-tests found in Andrews, Moreira and Stock (2007). Given this counterintuitive result, we propose novel two-sided t-tests which are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test of Moreira (2003).
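
A standard companion diagnostic in the weak-instrument setting is the first-stage F-statistic; here is a minimal sketch (the informal reading that a small F signals weak instruments is a conventional rule of thumb, not a result of this dissertation).

```python
import numpy as np

def first_stage_F(endog, instruments):
    """F-statistic for the joint significance of the instruments in the
    first-stage regression (intercept-only model as the null)."""
    n = len(endog)
    Z = np.column_stack([np.ones(n), instruments])
    k = Z.shape[1] - 1                         # number of instruments
    beta, *_ = np.linalg.lstsq(Z, endog, rcond=None)
    rss = np.sum((endog - Z @ beta) ** 2)      # unrestricted residual SS
    tss = np.sum((endog - endog.mean()) ** 2)  # restricted (intercept only)
    return ((tss - rss) / k) / (rss / (n - k - 1))

rng = np.random.default_rng(2)
z = rng.normal(size=(200, 1))
x = 0.1 * z[:, 0] + rng.normal(size=200)       # weakly correlated instrument
print("first-stage F:", first_stage_F(x, z))   # small F signals weak instruments
```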

Relevance: 100.00%

Abstract:

A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be cast as a model selection problem in which a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast-recursive-based methods. In contrast to the TSFRA, this paper derives a new, simplified relationship between the forward and backward stages to avoid repetitive computations, exploiting the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
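
To make the forward stage concrete, here is a minimal sketch of forward subset selection under the error reduction ratio (ERR) criterion; the paper's backward refinement and term-exchanging scheme are not reproduced, and the candidate matrix is synthetic.

```python
import numpy as np

def ols_forward_select(P, y, n_terms):
    """Forward subset selection by orthogonal least squares: at each step
    pick the candidate column of P with the largest error reduction ratio
    (ERR), then deflate the remaining candidates against it."""
    selected = []
    yy = y @ y
    R = P.astype(float).copy()          # candidates, orthogonalized in place
    for _ in range(n_terms):
        num = (R.T @ y) ** 2
        den = np.einsum('ij,ij->j', R, R) * yy
        err = np.where(den > 1e-12, num / den, 0.0)
        err[selected] = -np.inf         # never re-pick a chosen column
        j = int(np.argmax(err))
        selected.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)         # deflate all candidates against q
    return selected

rng = np.random.default_rng(3)
P = rng.normal(size=(100, 20))                   # candidate regressor matrix
y = 2 * P[:, 3] - P[:, 7] + 0.1 * rng.normal(size=100)
print(ols_forward_select(P, y, 2))               # expected to pick columns 3 and 7
```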

Relevance: 100.00%

Abstract:

In this paper, we consider the non-central chi-square chart with two-stage sampling. During the first stage, one item of the sample is inspected and, depending on the result, the sampling is either interrupted or goes on to the second stage, where the remaining sample items are inspected and the non-central chi-square statistic is computed. The proposed chart is not only more sensitive than the joint X̄ and R charts, but operationally simpler too, particularly when appropriate devices, such as go/no-go gauges, can be used to decide whether the sampling should go on to the second stage. (c) 2004 Elsevier B.V. All rights reserved.
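
A minimal sketch of the two-stage decision rule follows, assuming the sign rule for ξ described in the fuller abstract below; the constants w, k and d are design parameters whose values here are placeholders rather than chart-design recommendations.

```python
import numpy as np

def tss_chart_signal(sample, mu0, sigma0, w, k, d):
    """Two-stage sampling decision for the non-central chi-square chart.

    Stage 1: inspect the first item; if it is within w*sigma0 of the target,
    stop sampling (no signal). Stage 2: compute the non-central chi-square
    statistic W on the remaining items and compare with the limit k*sigma0^2.
    """
    x1, rest = sample[0], np.asarray(sample[1:])
    if abs(x1 - mu0) <= w * sigma0:
        return False                       # interrupted at stage 1: in control
    xi = d if x1 > mu0 else -d             # shift direction from the first item
    W = np.sum((rest - mu0 + xi * sigma0) ** 2)
    return W > k * sigma0 ** 2             # signal if the limit is exceeded

# Illustrative constants (w, k, d would be chosen from chart-design tables).
rng = np.random.default_rng(4)
sample = rng.normal(loc=0.0, scale=1.0, size=5)
print(tss_chart_signal(sample, mu0=0.0, sigma0=1.0, w=1.0, k=12.0, d=1.0))
```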

Relevance: 100.00%

Abstract:

Throughout this article, it is assumed that the non-central chi-square chart with two-stage sampling (TSS chi-square chart) is employed to monitor a process where the observations of the quality characteristic of interest X are independent and identically normally distributed with mean μ and variance σ². The process starts with the mean and the variance on target (μ = μ0; σ² = σ0²), but at some random time in the future an assignable cause shifts the mean from μ0 to μ1 = μ0 ± δσ0, δ > 0, and/or increases the variance from σ0² to σ1² = γ²σ0², γ > 1. Before the assignable cause occurs, the process is considered to be in a state of statistical control (the in-control state). As with the Shewhart charts, samples of size n0 + 1 are taken from the process at regular time intervals. The sampling is performed in two stages. At the first stage, the first item of the i-th sample is inspected. If its X value, say Xi1, is close to the target value (|Xi1 − μ0| < w0σ0, w0 > 0), the sampling is interrupted. Otherwise, at the second stage, the remaining n0 items are inspected and the following statistic is computed:

Wi = Σ from j = 2 to n0+1 of (Xij − μ0 + ξi σ0)², i = 1, 2, …

Let d be a positive constant; then ξi = d if Xi1 > μ0, and ξi = −d otherwise. A signal is given at sample i if |Xi1 − μ0| > w0σ0 and Wi > kChi σ0², where kChi is the factor used in determining the upper control limit of the non-central chi-square chart. If devices such as go/no-go gauges can be used, measurements are not required except when the sampling goes to the second stage. Let P be the probability of deciding that the process is in control, and Pi, i = 1, 2, the probability of deciding that the process is in control at stage i of the sampling procedure. Thus

P = P1 + P2 − P1P2, with P1 = Pr[μ0 − w0σ0 ≤ X ≤ μ0 + w0σ0] and P2 = Pr[W ≤ kChi σ0²].

During the in-control period, W/σ0² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ0 = n0d². During the out-of-control period, W/σ1² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ1 = n0(δ + ξ)²/γ². The effectiveness of a control chart in detecting a process change can be measured by the average run length (ARL), that is, the expected number of samples until the chart signals. The ARL of the proposed chart is easily determined because the number of samples before a signal is a geometrically distributed random variable with parameter 1 − P, so ARL = 1/(1 − P). It is shown that the proposed chart performs better than the joint X̄ and R charts. Furthermore, if the TSS chi-square chart is used for monitoring diameters, volumes, weights, etc., appropriate devices such as go/no-go gauges can be used to decide whether the sampling should go to the second stage. When the process is stable and the joint X̄ and R charts are in use, the monitoring becomes monotonous because an X̄ or R value rarely falls outside the control limits. The natural consequence is that the user pays less and less attention to the steps required to obtain the X̄ and R values, and in some cases this lack of attention can result in serious mistakes. The TSS chi-square chart has the advantage that most samplings are interrupted, so most of the time the user is working with attributes. Our experience shows that inspecting one item by attribute is much less monotonous than measuring four or five items at each sampling.
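
Under these distributional assumptions, P and the ARL can be computed directly. A minimal sketch using SciPy follows; the design constants are illustrative, and for simplicity the out-of-control case fixes ξ = +d rather than weighting both signs by their probabilities.

```python
import numpy as np
from scipy.stats import norm, ncx2

def tss_arl(n0, d, w0, k_chi, delta=0.0, gamma=1.0):
    """In- or out-of-control ARL of the TSS chi-square chart.

    delta and gamma parameterize the shift: mean mu0 + delta*sigma0 and
    standard deviation gamma*sigma0 (delta=0, gamma=1 is in control).
    k_chi is expressed in units of sigma0^2.
    """
    # Stage 1: the first item falls inside the interruption interval.
    p1 = norm.cdf((w0 - delta) / gamma) - norm.cdf((-w0 - delta) / gamma)
    # Stage 2: W/sigma1^2 is non-central chi-square with n0 df.
    # Simplification: take xi = +d; a full analysis would weight both signs.
    lam = n0 * (delta + d) ** 2 / gamma ** 2
    p2 = ncx2.cdf(k_chi / gamma ** 2, df=n0, nc=lam)
    p = p1 + p2 - p1 * p2          # P(decide "in control") at a sample
    return 1.0 / (1.0 - p)         # geometric run length

print("in-control ARL:", tss_arl(n0=4, d=1.0, w0=1.5, k_chi=30.0))
print("ARL after a 1-sigma mean shift:",
      tss_arl(n0=4, d=1.0, w0=1.5, k_chi=30.0, delta=1.0))
```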

Relevance: 100.00%

Abstract:

Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values have been corrected (Nc) for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function Nc = Nc(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an Nc value, is approximated so that the Nc value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
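
For orientation, here is a minimal sketch of LSSVM regression (the ridge-regression-style kernel machine mentioned above): the standard KKT linear system is solved for the bias and the dual weights. The RBF kernel, hyperparameters and toy (X, Y, Z)-style data are assumptions, not the paper's calibrated site model.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, width=1.0):
    """Least-squares SVM regression: solve the KKT linear system
        [ 0    1^T         ] [b    ]   [0]
        [ 1    K + I/gamma ] [alpha] = [y]
    with an RBF kernel K_ij = exp(-||x_i - x_j||^2 / (2*width^2))."""
    n = len(y)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * width ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        sq_q = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq_q / (2 * width ** 2)) @ alpha + b
    return predict

# Toy usage: interpolate a corrected-N-style response over 3-D coordinates.
rng = np.random.default_rng(5)
X = rng.uniform(size=(60, 3))                    # stand-in for (X, Y, Z)
y = np.sin(3 * X[:, 0]) + X[:, 2] + 0.05 * rng.normal(size=60)
predict = lssvm_fit(X, y)
print(predict(X[:3]))                            # fitted values at known points
```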

Relevance: 100.00%

Abstract:

It is known that for the open shop scheduling problem to minimize the makespan there exists no polynomial-time heuristic algorithm that guarantees a worst-case performance ratio better than 5/4, unless P = NP. However, this result holds only if the instances of the problem may contain jobs consisting of at least three operations. This paper considers the open shop scheduling problem in which each job consists of at most two operations, one of which is to be processed on one of the m ⩾ 2 machines, while the other operation must be performed on the bottleneck machine, the same for all jobs. For this NP-hard problem we present a heuristic algorithm and show that its worst-case performance ratio is 5/4.

Relevance: 100.00%

Abstract:

The conventional radial basis function (RBF) network optimization methods, such as orthogonal least squares or the two-stage selection, can produce a sparse network with satisfactory generalization capability. However, the RBF width, as a nonlinear parameter in the network, is not easy to determine. In the aforementioned methods, the width is always pre-determined, either by trial and error or at random. Furthermore, all hidden nodes share the same RBF width. This inevitably reduces the network performance, and more RBF centres may then be needed to meet a desired modelling specification. In this paper we investigate a new two-stage construction algorithm for RBF networks. It utilizes the particle swarm optimization method to search for the optimal RBF centres and their associated widths. Although the new method needs more computation than conventional approaches, it can greatly reduce the model size and improve model generalization performance. The effectiveness of the proposed technique is confirmed by two numerical simulation examples.
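
As a toy illustration of the PSO-plus-least-squares idea, the sketch below optimizes only a single shared RBF width with fixed centres; the paper's algorithm searches over the centres and their individual widths jointly, so this is a deliberately simplified sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf_design(X, centres, width):
    sq = np.sum((X[:, None, :] - centres[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * width ** 2))

def fitness(width, X, y, centres):
    """Training MSE of an RBF model whose linear weights are fitted by
    least squares for the given width."""
    Phi = rbf_design(X, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((y - Phi @ w) ** 2)

# Toy data and fixed centres (a subset of the inputs).
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=80)
centres = X[rng.choice(80, size=10, replace=False)]

# Minimal PSO over a single shared RBF width.
n_particles, iters = 12, 40
pos = rng.uniform(0.05, 3.0, size=n_particles)      # candidate widths
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_f = np.array([fitness(p, X, y, centres) for p in pos])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(iters):
    r1, r2 = rng.uniform(size=n_particles), rng.uniform(size=n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 5.0)
    f = np.array([fitness(p, X, y, centres) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
print("PSO-selected shared RBF width:", gbest)
```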

Relevance: 100.00%

Abstract:

It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters (e.g. the width of a Gaussian function) of each model term need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them by a gradient-based technique (e.g. Newton's method). Unfortunately, all of these methods still require substantial computation. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine. This is achieved by effectively integrating the ELM and leave-one-out (LOO) cross-validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
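
A minimal sketch of the underlying ingredients: ELM random hidden layers scored by the closed-form (PRESS) leave-one-out error. This sketch merely picks a hidden-layer size, whereas the paper integrates LOO with a two-stage stepwise construction, so treat it as an illustration of the components only.

```python
import numpy as np

rng = np.random.default_rng(7)

def elm_hidden(X, W, b):
    """Random sigmoid hidden layer of an extreme learning machine."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def loo_mse(H, y):
    """Leave-one-out MSE via the PRESS statistic: for a linear-in-the-
    parameters model, e_loo_i = e_i / (1 - h_ii), where h_ii are the
    leverages (diagonal of the hat matrix) of the design matrix H."""
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    resid = y - H @ beta
    hat_diag = np.einsum('ij,ij->i', H @ np.linalg.pinv(H.T @ H), H)
    return np.mean((resid / (1.0 - hat_diag)) ** 2)

# Toy data; grow the hidden layer and keep the size with the best LOO error.
X = rng.uniform(-1, 1, size=(100, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=100)
best = None
for n_hidden in range(2, 30, 2):
    W = rng.normal(size=(2, n_hidden))
    b = rng.normal(size=n_hidden)
    score = loo_mse(elm_hidden(X, W, b), y)
    if best is None or score < best[1]:
        best = (n_hidden, score)
print("selected hidden-layer size:", best[0], "LOO MSE:", best[1])
```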

Relevance: 100.00%

Abstract:

A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not critically influence the model selection procedure, and it can be chosen with ease from a wide range of values.

Relevance: 100.00%

Abstract:

Following the recent success in quantitative analysis of essential fatty acid compositions in a commercial microencapsulated fish oil (µEFO) supplement, we extended the application of the portable attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopic technique and partial least squares regression (PLSR) analysis to the rapid determination of total protein content, the other major component in most commercial µEFO powders. In contrast to the traditional chromatographic methodology used in routine amino acid analysis (AAA), the ATR-FTIR spectra of the µEFO powder can be acquired directly from its original powder form with no sample preparation, making the technique exceptionally fast, noninvasive, and environmentally friendly as well as cost effective, and hence eminently suitable for routine use by industry. By optimizing the spectral region of interest and the number of latent factors through the developed PLSR strategy, a good linear calibration model was produced, as indicated by an excellent coefficient of determination (R² = 0.9975), using standard µEFO powders with total protein contents in the range of 140-450 mg/g. The prediction of the protein contents obtained from an independent validation set through the optimized PLSR model was highly accurate, as evidenced by (1) a good linear fit (R² = 0.9759) in the plot of predicted versus reference values, the latter obtained from a standard AAA method, (2) a low root mean square error of prediction (11.64 mg/g), and (3) a high residual predictive deviation (6.83), ranked at a very good level of predictive quality, indicating the robustness and good predictive performance of the achieved PLSR calibration model. The study therefore demonstrates the potential application of the portable ATR-FTIR technique, used together with PLSR analysis, for rapid online monitoring of the two major components (i.e., oil and protein contents) in finished µEFO powders in an actual manufacturing setting.
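
A minimal sketch of a PLSR calibration workflow using scikit-learn, with synthetic spectra standing in for the ATR-FTIR measurements; the number of latent factors is chosen by cross-validated RMSEP, while the paper's spectral-region optimization is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for ATR-FTIR spectra: rows are samples, columns are
# wavenumber absorbances; y is the reference protein content (mg/g).
rng = np.random.default_rng(8)
n_samples, n_wavenumbers = 40, 600
spectra = rng.normal(size=(n_samples, n_wavenumbers))
protein = 140 + 310 * rng.uniform(size=n_samples)       # 140-450 mg/g range
spectra[:, 100:160] += 0.01 * protein[:, None]          # protein-linked band

# Choose the number of latent factors by cross-validated RMSEP.
for n_comp in (2, 4, 6, 8):
    pred = cross_val_predict(PLSRegression(n_components=n_comp),
                             spectra, protein, cv=5).ravel()
    rmsep = np.sqrt(np.mean((pred - protein) ** 2))
    print(f"{n_comp} factors: RMSEP = {rmsep:.2f} mg/g")
```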

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

The visual system pools information from local samples to calculate textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical), with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ("noise checks") or were blank ("blank checks"). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would pool responses only from potential signal elements. Humans did this for medium check sizes, and for large check sizes when a signal was presented in the fovea. For small check sizes, however, the pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise-check and blank-check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage. © 2014 ARVO.