27 results for partial least-squares regression
Abstract:
Trace gases are important to our environment even though they are present only in 'traces', and their concentrations must be monitored so that any necessary interventions can be made at the right time. There are lower and upper boundaries that produce favourable conditions for our lives, so monitoring trace gases is nowadays an essential task, accomplished by many techniques. One of them is differential optical absorption spectroscopy (DOAS), which mathematically consists of a regression (the classical method uses least squares) to retrieve the trace gas concentrations. In order to achieve better results, many works have tried out different techniques instead of the classical approach. Some have preprocessed the signals to be analyzed with a denoising procedure, e.g. the discrete wavelet transform (DWT). This work presents a semi-empirical study to find the most suitable DWT family for this denoising. The search seeks, among many well-known families, the one that best removes the noise while keeping the original signal's main features; by decreasing the noise, the residual left after the regression also decreases. The analysis takes into account the wavelet decomposition level, the threshold to be applied to the detail coefficients, and how to apply it (hard or soft thresholding). The signals used come from an open, online database containing characteristic signals of some commonly studied trace gases.
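As a rough illustration of the denoising step described above, the sketch below uses the PyWavelets package to decompose a signal, threshold its detail coefficients, and reconstruct it. The wavelet family, decomposition level, and universal threshold are placeholder choices, not the ones selected by the study:

    import numpy as np
    import pywt

    def dwt_denoise(signal, wavelet="db4", level=4, mode="soft"):
        # Multilevel DWT decomposition
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Universal threshold, with the noise scale estimated from the finest details
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        # Shrink only the detail coefficients; keep the approximation intact
        coeffs[1:] = [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]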
Abstract:
When a company wants to invest in a project, it must obtain the resources needed to make the investment. The alternatives are using the firm's internal resources or obtaining external resources through debt contracts and the issuance of shares. Decisions involving the composition of internal resources, debt and shares in the total resources used to finance the activities of a company relate to the choice of its capital structure. Although there are studies in the area of finance on the debt determinants of firms, the issue of capital structure is still controversial. This work sought to identify the predominant factors that determine the capital structure of Brazilian publicly traded, non-financial firms. The work used a quantitative approach, applying the statistical technique of multiple linear regression to panel data. Estimates were made by the method of ordinary least squares with a fixed-effects model. A sample of 116 companies was selected to participate in this research, and the period considered runs from 2003 to 2007. The variables and hypotheses tested in this study were built based on capital structure theories and on empirical research. Results indicate that variables such as risk, size, asset composition and firm growth influence indebtedness. The profitability variable was not relevant to the composition of indebtedness of the companies analyzed. However, when analyzing only long-term debt, the conclusion is that the relevant variables are firm size and, especially, asset composition (tangibility). In this sense, the smaller the size of the firm, or the greater the share of fixed assets in total assets, the greater its propensity to long-term debt. Furthermore, this research could not identify a predominant theory that explains the capital structure of Brazilian firms.
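A minimal sketch of the estimation strategy described above (ordinary least squares with a fixed-effects model on panel data), using the within transformation with pandas and statsmodels; the variable and column names are illustrative placeholders, not the study's actual dataset:

    import pandas as pd
    import statsmodels.api as sm

    def within_ols(df, y, xs, entity="firm"):
        # Fixed-effects (within) estimator: demean every variable by firm,
        # then run plain OLS on the demeaned data
        cols = [y] + xs
        demeaned = df[cols] - df.groupby(entity)[cols].transform("mean")
        X = sm.add_constant(demeaned[xs])
        return sm.OLS(demeaned[y], X).fit()

    # res = within_ols(panel, "leverage", ["risk", "size", "tangibility", "growth"])
    # print(res.summary())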
Abstract:
There is a great deal of evidence showing that education is extremely important in many economic and social dimensions. In Brazil, education is a right guaranteed by the Federal Constitution; however, in Brazilian legislation the right to the three stages of basic education (kindergarten, elementary and high school) is better promoted and supported than the right to education at the college level. According to educational census data (INEP, 2009), 78% of all enrolments in college education are in private institutions, while the reverse is found in high school: 84% of all enrolments are in public schools, which reveals a contradiction in university admissions. In the Brazilian scenario, public universities mostly receive students who performed better and were prepared in private elementary and high schools, while private universities receive students who had their basic education in public schools, which are characterized as low quality. These facts have led researchers to investigate the possible determinants of student performance on standardized tests, such as the Brazilian Vestibular exam, in order to guide the development of policies aimed at equal access to college education. Seeking inspiration in North American models of affirmative action, some Brazilian public universities have adopted quota policies to enable and facilitate the entry of "minorities" (blacks, pardos, natives, low-income people and public school students) into free college education. At the Federal University of Rio Grande do Norte (UFRN), the first incentives for candidates from public schools emerged in 2006, being improved and extended over the last 7 years. This study aimed to analyse and discuss the Argument of Inclusion (AI), the affirmative action policy that provides additional scoring for students from public schools. Using an extensive database, the Ordinary Least Squares (OLS) technique was applied, as well as Quantile Regression, taking as controls variables describing the personal, socioeconomic and educational characteristics of the candidates of UFRN's 2010 Vestibular exam. The results demonstrate the importance of this incentive system, as well as the magnitude of the effects of other variables.
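A minimal sketch of the two estimators mentioned above, using statsmodels formulas on synthetic data; the column names (score, public_school, income) are hypothetical stand-ins for the study's actual controls:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "public_school": rng.integers(0, 2, n),
        "income": rng.lognormal(0.0, 0.5, n),
    })
    df["score"] = 50 + 5 * df["public_school"] + 2 * df["income"] + rng.normal(0, 10, n)

    # OLS estimate of the average effect
    ols = smf.ols("score ~ public_school + income", data=df).fit()
    print(ols.params["public_school"])

    # Quantile regressions trace the effect across the score distribution
    for q in (0.10, 0.25, 0.50, 0.75, 0.90):
        qr = smf.quantreg("score ~ public_school + income", data=df).fit(q=q)
        print(q, qr.params["public_school"])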
Abstract:
This study aimed to examine how students perceive the factors that may influence them to attend a training course offered in the distance virtual learning environment (VLE) of the National School of Public Administration (ENAP). As its theoretical basis, it used the Unified Theory of Acceptance and Use of Technology (UTAUT), the result of an integration of eight previous models that aimed to explain the same phenomenon (acceptance/use of information technology). The research approach was both quantitative and qualitative. To achieve the study objectives, five semi-structured interviews were conducted and an online questionnaire (web survey) was applied to a valid sample of 101 public employees scattered throughout the country. The technique used for the analysis of the quantitative data was structural equation modeling (SEM), by the method of Partial Least Squares Path Modeling (PLS-PM); for the qualitative data, thematic content analysis was used. Among the results, it was found that, in the context of the public service, the degree to which the individual believes that using a VLE will improve his or her performance at work (performance expectancy) was determinant for the intention to use it, which, in turn, influenced actual use. It was confirmed that, under voluntary use of technology, the general opinion of the student's social circle (social influence) has no effect on the intention to use the VLE. Effort expectancy and facilitating conditions were not directly related to intention to use and to use, respectively. However, it emerged from the students' statements that the opinions of their coworkers, the ease of using the VLE, the flexibility of time and place of the distance learning program, and the presence of a tutor are important to their intention to take a distance learning program. With these results, it is expected that the managers of ENAP's distance learning programs will direct their efforts to reducing the causes of non-use by those unwilling to adopt e-learning voluntarily, and to enhancing the potential of distance learning for those who are already users.
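The study's PLS-PM estimates relationships among latent constructs and is not the same as ordinary partial least-squares regression; still, for readers unfamiliar with the PLS family, a minimal PLS regression sketch with scikit-learn and synthetic data shows the core idea of projecting correlated indicators onto a few latent components:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(101, 6))    # e.g. 6 survey indicators, 101 respondents
    y = X @ np.array([0.8, 0.5, 0.0, 0.0, 0.3, 0.1]) + rng.normal(scale=0.5, size=101)

    pls = PLSRegression(n_components=2).fit(X, y)   # two latent components
    print(pls.score(X, y))                          # R^2 of the fitted model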
Abstract:
This paper presents a new multi-model identification technique based on ANFIS for nonlinear systems. In this technique, the structure used is the Takagi-Sugeno fuzzy system, whose consequents are local linear models representing the system at different operating points, and whose antecedents are membership functions adjusted during the learning phase of the neuro-fuzzy ANFIS technique. The models that represent the system at different operating points can be found with linearization techniques such as, for example, the least squares method, which is robust against noise and simple to apply. The fuzzy system is responsible for indicating, through the membership functions, the proportion of each model that should be used. The membership functions can be adjusted by ANFIS with neural network algorithms, such as error backpropagation, in such a way that the models found for each region are correctly interpolated, defining the contribution of each model for the possible system inputs. In multi-model approaches, the rule defining the contribution of each model is known as the metric and, since this work is based on ANFIS, it is here called the ANFIS metric. In this way, the ANFIS metric is used to interpolate the various models composing the system to be identified. Differing from traditional ANFIS, the proposed technique necessarily represents the system, in several well-defined regions, by unaltered models whose weighted activation follows the membership functions. The selection of the regions for applying the least squares method is done manually, from graphical analysis of the system behavior or from the physical characteristics of the plant. This selection serves as a basis to initialize the technique that defines the linear models and generates the initial configuration of the membership functions. The experiments are conducted in a teaching tank with multiple sections, designed and built to highlight the characteristics of the technique. The results obtained with this tank illustrate the performance achieved by the technique in the identification task using ANFIS configurations, comparing the developed technique with several simple-metric models and with the NNARX technique, also adapted for identification.
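A minimal sketch of the interpolation idea described above, for a single-input case: Gaussian membership functions weight a set of local affine models (each one fitted beforehand, e.g. by least squares on data from its operating region). The centers, widths, and model coefficients below are illustrative placeholders:

    import numpy as np

    centers = np.array([1.0, 3.0, 5.0])              # operating points of the regions
    sigmas = np.array([0.8, 0.8, 0.8])               # membership function widths
    models = [(0.5, 0.1), (1.2, -2.0), (0.3, 2.5)]   # (slope, intercept) per region

    def ts_predict(x):
        # Normalized Gaussian memberships play the role of the ANFIS metric
        w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)
        w /= w.sum()
        # Weighted combination of the local linear model outputs
        y_local = np.array([a * x + b for a, b in models])
        return w @ y_local

    print(ts_predict(2.0))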
Abstract:
Several mobile robots show nonlinear behavior, mainly due to friction phenomena between the mechanical parts of the robot or between the robot and the ground. Linear models are efficient in some cases, but it is necessary to take the robot's nonlinearity into consideration when precise displacement and positioning are desired. In this work, a parametric model identification procedure for a mobile robot with differential drive, which considers the dead zone in the robot's actuators, is proposed. The method consists in describing the system as Hammerstein systems and then using the key-term separation principle to obtain input-output relations that expose the parameters of both the linear and nonlinear blocks. The parameters are then estimated simultaneously through a recursive least squares algorithm. The results show that it is possible to identify the dead-zone thresholds together with the linear parameters.
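A minimal sketch of the recursive least squares update used for the simultaneous parameter estimation; the regressor vector phi would be built from the Hammerstein input-output relations after key-term separation, which is not reproduced here:

    import numpy as np

    class RLS:
        def __init__(self, n, lam=1.0, delta=1e3):
            self.theta = np.zeros(n)       # parameter estimates
            self.P = np.eye(n) * delta     # inverse-correlation (covariance) matrix
            self.lam = lam                 # forgetting factor (1.0 = none)

        def update(self, phi, y):
            # phi: regressor vector at this sample, y: measured output
            Pphi = self.P @ phi
            k = Pphi / (self.lam + phi @ Pphi)            # gain vector
            self.theta = self.theta + k * (y - phi @ self.theta)
            self.P = (self.P - np.outer(k, Pphi)) / self.lam
            return self.theta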
Abstract:
There are two main approaches to adaptive control. One is the so-called model reference adaptive control (MRAC), and the other is the so-called adaptive pole placement control (APPC). In MRAC, a reference model is chosen to generate the desired trajectory that the plant output has to follow, and it can require cancellation of the plant zeros. Due to its flexibility in choosing the controller design methodology (state feedback, compensator design, linear quadratic, etc.) and the adaptive law (least squares, gradient, etc.), APPC is the most general type of adaptive control. Traditionally, it has been developed in an indirect approach and, as an advantage, it may be applied to non-minimum-phase plants, because it does not involve plant zero-pole cancellations. Integration with variable structure systems makes it possible to add fast transients and robustness to parametric uncertainties and disturbances as well. In this work, a variable structure adaptive pole placement control (VS-APPC) is proposed, in which new switching laws are used instead of the traditional integral adaptive laws. Additionally, simulation results for an unstable first-order system, and simulation and practical results for a three-phase induction motor, are shown.
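For context, a toy sketch of the traditional indirect APPC loop on a discrete first-order unstable plant, with a certainty-equivalence pole-placement control law and an integral (gradient) adaptive law; the proposed VS-APPC replaces this adaptive law with switching laws, which are not reproduced here:

    import numpy as np

    # Plant: y[k+1] = a*y[k] + b*u[k], with a unknown and b assumed known
    a_true, b = 1.2, 1.0        # open-loop unstable (|a| > 1)
    p = 0.5                     # desired closed-loop pole
    a_hat, gamma, y = 0.0, 0.1, 1.0

    for k in range(200):
        u = (p - a_hat) / b * y              # certainty-equivalence pole placement
        y_next = a_true * y + b * u
        e = y_next - (a_hat * y + b * u)     # prediction error
        a_hat += gamma * e * y               # integral (gradient) adaptive law
        y = y_next

    print(a_hat, y)   # a_hat approaches a_true, y is regulated to zero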
Abstract:
Pattern classification is one of the machine learning subareas that has stood out the most. Among the various approaches for solving pattern classification problems, Support Vector Machines (SVM) receive great emphasis due to their ease of use and good generalization performance. The Least Squares formulation of the SVM (LS-SVM) finds the solution by solving a set of linear equations instead of the quadratic programming problem solved in the SVM. LS-SVMs have some free parameters that must be correctly chosen to achieve satisfactory results in a given task. Although LS-SVMs achieve high performance, many tools have been developed to improve them, mainly the development of new classification methods and the use of ensembles, in other words, combinations of several classifiers. In this work, our proposal is to use an ensemble and a Genetic Algorithm (GA), a search algorithm based on the evolution of species, to enhance LS-SVM classification. In the construction of this ensemble, we use a random selection of attributes of the original problem, which splits the original problem into smaller ones, on each of which a classifier will act. Then, we apply a genetic algorithm to find effective values of the LS-SVM parameters and also to find a weight vector measuring the importance of each machine in the final classification. Finally, the final classification is obtained by a linear combination of the decision values of the LS-SVMs with the weight vector. We used several classification problems, taken as benchmarks, to evaluate the performance of the algorithm, and compared the results with other classifiers.
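A minimal sketch of the LS-SVM training step described above: instead of quadratic programming, a single linear system is solved. This uses one common formulation (ridge regression on the ±1 labels with an RBF kernel); gamma and the kernel width s are the free parameters the text refers to:

    import numpy as np

    def rbf(A, B, s=1.0):
        # Gaussian (RBF) kernel matrix between row sets A and B
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2.0 * s ** 2))

    def lssvm_fit(X, y, gamma=10.0, s=1.0):
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma   # regularized kernel block
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]                         # bias b, dual weights alpha

    def lssvm_predict(Xnew, Xtr, b, alpha, s=1.0):
        return np.sign(rbf(Xnew, Xtr, s) @ alpha + b)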
Abstract:
This work proposes a new phasor estimation technique, applied in microprocessor-based numerical relays for distance protection of transmission lines, based on the recursive least squares method and called modified random walking least squares. The performance of phasor estimation methods is compromised mainly by the exponentially decaying DC component present in fault currents. In order to reduce the influence of the DC component, a morphological filter (MF) was added to the least squares method and applied prior to the phasor estimation process. The presented method is implemented in MATLAB and its performance is compared to the one-cycle Fourier technique and to a conventional phasor estimation method, also based on the least squares algorithm. The least-squares-based methods used for comparison with the proposed method were: recursive with forgetting factor, covariance resetting, and random walking. The performance analysis of the techniques was carried out by means of synthetic signals and signals provided by simulations in the Alternative Transients Program (ATP). When compared to the other phasor estimation methods, the proposed method showed satisfactory results regarding estimation speed, steady-state oscillation and overshoot. The performance of the presented method was then analyzed under variations of the fault parameters (resistance, distance, angle of incidence and type of fault); in this study, the results did not show significant variations in the method's performance. Besides, the apparent impedance trajectory and the estimated distance to the fault were analysed, and the presented method showed better results in comparison to the one-cycle Fourier algorithm.
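A minimal sketch of batch least-squares phasor estimation over one data window, with a constant-plus-ramp term as a crude stand-in for the decaying DC component (the proposed recursive method and the morphological filter are not reproduced here):

    import numpy as np

    def ls_phasor(samples, t, w0):
        # Regressors: fundamental cosine/sine plus a truncated Taylor
        # expansion (constant + ramp) approximating the decaying DC offset
        H = np.column_stack([np.cos(w0 * t), np.sin(w0 * t), np.ones_like(t), t])
        x, *_ = np.linalg.lstsq(H, samples, rcond=None)
        mag = np.hypot(x[0], x[1])
        phase = np.arctan2(x[1], x[0])     # samples ~ mag*cos(w0*t - phase)
        return mag, phase

    fs, f0 = 1920.0, 60.0                   # 32 samples per 60 Hz cycle
    t = np.arange(32) / fs
    y = 10 * np.cos(2 * np.pi * f0 * t - 0.3) + 4 * np.exp(-t / 0.02)
    print(ls_phasor(y, t, 2 * np.pi * f0))  # approximately (10, 0.3)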
Abstract:
Natural gas, although basically composed of light hydrocarbons, also presents contaminant gases in its composition, such as CO2 (carbon dioxide) and H2S (hydrogen sulfide). H2S, which commonly occurs in oil and gas exploration and production activities, causes damage to oil and natural gas pipelines. Consequently, the removal of hydrogen sulfide results in an important reduction in operating costs. It is also essential to consider the better quality of the oil to be processed in the refinery, resulting in economic, environmental and social benefits. All these facts demonstrate the need for the development and improvement of hydrogen sulfide scavengers. Currently, the oil industry uses several processes for hydrogen sulfide removal from natural gas. However, these processes produce amine derivatives which can damage distillation towers, can clog pipelines through the formation of insoluble precipitates, and also produce residues with great environmental impact. Therefore, it is of great importance to obtain a stable system, in inorganic or organic reaction media, able to remove hydrogen sulfide without forming by-products that could affect the quality and cost of the natural gas processing, transport and distribution steps. Seeking to study, evaluate and model the mass transfer and kinetics of hydrogen sulfide removal, this study used an absorption column packed with Raschig rings, in which the natural gas, with H2S as contaminant, passed through an aqueous solution of inorganic compounds as stagnant liquid, the contaminant gas being absorbed by the liquid phase. This absorption column was coupled to an H2S detection system interfaced with a computer. The model equations were fitted to the data by the least squares method, modified by Levenberg-Marquardt. In this study, in addition to water, the following solutions were used: sodium hydroxide, potassium permanganate, ferric chloride, copper sulfate, zinc chloride, potassium chromate, and manganese sulfate, all at low concentrations (≈10 ppm). These solutions were used to evaluate the interplay between the physical and chemical absorption parameters, or even to obtain a better mass transfer coefficient, as in mixing reactors and absorption columns operating in counterflow. In this context, the evaluation of H2S removal arises as a valuable procedure for the treatment of natural gas and the destination of process by-products. The study of the obtained absorption curves makes it possible to determine the predominant mass transfer stage in the processes involved, the volumetric mass transfer coefficients, and the equilibrium concentrations. A kinetic study was also performed. The obtained results showed that the H2S removal kinetics is fastest for NaOH. Considering that the study was performed at low concentrations of the chemical reagents, it was possible to check the effect of secondary reactions for the other chemicals, especially in the case of KMnO4, whose by-product, MnO2, takes part in the H2S absorption process. In addition, CuSO4 and FeCl3 also showed good efficiency in H2S removal.
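A minimal sketch of the fitting step described above, using SciPy's Levenberg-Marquardt solver on synthetic data; the first-order approach-to-equilibrium model and its parameters (equilibrium concentration c_eq and volumetric coefficient kLa) are plausible placeholders, not the study's actual model equations:

    import numpy as np
    from scipy.optimize import curve_fit

    def absorption(t, c_eq, kla):
        # First-order approach of the liquid-phase concentration to equilibrium
        return c_eq * (1.0 - np.exp(-kla * t))

    t = np.linspace(0.0, 600.0, 50)          # time, s
    rng = np.random.default_rng(1)
    c_meas = absorption(t, 8.0, 0.01) + rng.normal(0.0, 0.2, t.size)

    popt, pcov = curve_fit(absorption, t, c_meas, p0=[5.0, 0.005], method="lm")
    print(popt)                               # estimated [c_eq, kLa]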
Abstract:
This work has as its main objective to find mathematical models, based on linear parametric estimation techniques, for the problem of calculating the gas flow rate in oil wells. In particular, we focus on obtaining flow models for the case of wells that produce by the plunger-lift technique, in which case there are high peaks in the flow values that hinder their direct measurement by instruments. For this, we have developed estimators based on recursive least squares and performed an analysis of statistical measures such as the autocorrelation, the cross-correlation, the variogram and the cumulative periodogram, which are calculated recursively as data are obtained in real time from the plant in operation; the values obtained for these measures tell us how accurate the model used is and how it can be changed to better fit the measured values. The models have been tested in a pilot plant which emulates the gas production process in oil wells.
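A minimal batch sketch of two of the residual diagnostics mentioned above (the work computes them recursively, which is not reproduced here); for the residuals of an accurate model, the autocorrelation should be negligible beyond lag 0 and the cumulative periodogram should stay close to the diagonal:

    import numpy as np

    def autocorr(res, nlags=20):
        r = res - res.mean()
        c = np.correlate(r, r, "full")[len(r) - 1:]
        return c[: nlags + 1] / c[0]           # normalized autocorrelation

    def cumulative_periodogram(res):
        r = res - res.mean()
        I = np.abs(np.fft.rfft(r)) ** 2        # periodogram ordinates
        I = I[1:]                              # drop the zero-frequency term
        return np.cumsum(I) / I.sum()          # hugs the diagonal if white

    res = np.random.default_rng(2).normal(size=512)   # stand-in for model residuals
    print(autocorr(res, 5))
    print(cumulative_periodogram(res)[:5])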
Abstract:
The Tucunduba Dam is located west of Fortaleza, Ceará State. The seismic monitoring of the area, with one analogue station and seven digital stations, began on June 11, 1997. The digital stations operated from June to November 1997. The data collected in the period of digital monitoring were analyzed for the determination of hypocenters and focal mechanisms and for shear-wave anisotropy analysis. From the hypocenter determination, it was possible to find an active zone of nearly 1 km in length, with depths between 4.5 and 5.2 km. A fault plane with azimuth 60° and dip 88° SE was determined using the least-squares method and the hypocenters of a selected set of 16 recorded earthquakes. Focal mechanisms were determined; in the composite fault plane solution, a strike-slip fault trending nearly E-W was found. Single fault plane solutions obtained for some earthquakes presented mean values of 65° (azimuth) and 80° (dip). Shear-wave anisotropy was found in the data, and polarization directions and travel time delays between the split S-waves were determined. It was not possible to reach any conclusion on the cause of the observed anisotropy. It is not clear whether there is a correlation between the seismicity and the mapped faults in the area, although the directions obtained from the hypocenters and focal mechanisms are consistent with directions observed in the area in aerial photographs, topography and fractures.
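A minimal sketch of the least-squares fault-plane fit described above: a plane is fit to hypocenter coordinates (east, north, depth, in km) and the strike and dip are derived from the fitted gradient; the coordinates below are synthetic placeholders, not the actual hypocenters:

    import numpy as np

    def fit_fault_plane(e, n, z):
        # Least-squares plane z = a*e + b*n + c through the hypocenters
        H = np.column_stack([e, n, np.ones_like(e)])
        (a, b, c), *_ = np.linalg.lstsq(H, z, rcond=None)
        dip = np.degrees(np.arctan(np.hypot(a, b)))        # dip angle
        dip_dir = np.degrees(np.arctan2(a, b)) % 360.0     # downdip azimuth
        strike = (dip_dir - 90.0) % 360.0                  # right-hand-rule strike
        return strike, dip

    # Synthetic hypocenters on a plane striking 60 deg and dipping 88 deg SE
    rng = np.random.default_rng(3)
    s = rng.uniform(-0.5, 0.5, 16)                 # along-strike offsets, km
    d = rng.uniform(-0.35, 0.35, 16)               # downdip offsets, km
    st, dp = np.radians(60.0), np.radians(88.0)
    e = s * np.sin(st) + d * np.sin(st + np.pi / 2) * np.cos(dp)
    n = s * np.cos(st) + d * np.cos(st + np.pi / 2) * np.cos(dp)
    z = 4.85 + d * np.sin(dp)
    print(fit_fault_plane(e, n, z))                # ~ (60.0, 88.0)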