20 results for Generalized Least-squares
Influence of environmental conditions on the greenness of Caatinga vegetation in the face of climate change
Abstract:
The Caatinga biome, a semi-arid ecosystem found in northeast Brazil, has a low-rainfall regime and strong seasonality. It has the most alarming climate change projections within the country, with air temperature increases and rainfall reductions projected to be stronger than the global average. Climate change can have detrimental effects on this biome, reducing vegetation cover and changing its distribution, as well as altering ecosystem functioning and, ultimately, influencing species diversity. In this context, the purpose of this study is to model the environmental conditions (rainfall and temperature) that influence the productivity of the Caatinga biome and to predict the consequences of changing environmental conditions for vegetation dynamics under future climate change scenarios. The Enhanced Vegetation Index (EVI) was used to estimate vegetation greenness (presence and density) in the area. Considering the strong spatial and temporal autocorrelation, as well as the heterogeneity of the data, several generalized least squares (GLS) models were developed and compared to obtain the model that best reflects the influence of rainfall and temperature on vegetation greenness. When new climate change scenarios were applied to the model, changes in the environmental determinants (rainfall and temperature) negatively influenced vegetation greenness in the Caatinga biome. The model was used to create potential vegetation maps of current and future Caatinga cover, considering a 20% decrease in precipitation and a 1 °C increase in temperature until 2040, a 35% decrease in precipitation and a 2.5 °C increase in temperature in 2041-2070, and a 50% decrease in precipitation and a 4.5 °C increase in temperature in 2071-2100. The results suggest that ecosystem functioning will be affected under these scenarios, with vegetation greenness decreasing by 5.9% until 2040, 14.2% until 2070 and 24.3% by the end of the century.
The Caatinga vegetation in lower-altitude areas (most of the biome) will be more affected by climate change.
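The GLS estimator behind models like these can be sketched in a few lines. The example below is a minimal numpy illustration, not the thesis's model: the rainfall and temperature ranges, the coefficient values and the AR(1) error structure are all invented for demonstration.

```python
import numpy as np

def gls(X, y, Omega):
    """Generalized least squares: beta = (X' W X)^-1 X' W y with W = Omega^-1.
    Omega is the error covariance matrix (here modelling temporal
    autocorrelation); with Omega = I this reduces to ordinary least squares."""
    W = np.linalg.inv(Omega)
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)

# Toy example: an EVI-like response driven by rainfall and temperature,
# with AR(1)-correlated errors standing in for temporal autocorrelation.
rng = np.random.default_rng(0)
n = 200
rain = rng.uniform(20, 120, n)    # monthly rainfall, mm (illustrative)
temp = rng.uniform(24, 32, n)     # air temperature, deg C (illustrative)
X = np.column_stack([np.ones(n), rain, temp])
beta_true = np.array([0.5, 0.004, -0.01])  # greenness rises with rain, falls with heat

rho = 0.6
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + 0.02 * rng.standard_normal()
y = X @ beta_true + eps

# AR(1) covariance: Omega[i, j] proportional to rho^|i-j|
Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
beta_hat = gls(X, y, Omega)
```

With `Omega` correctly specified, the GLS estimate recovers the positive rainfall effect and the negative temperature effect despite the correlated noise.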
Abstract:
When a company wants to invest in a project, it must obtain the resources needed to make the investment. The alternatives are using the firm's internal resources or obtaining external resources through debt contracts and the issuance of shares. Decisions involving the composition of internal resources, debt and shares in the total resources used to finance the activities of a company are related to the choice of its capital structure. Although there are studies in finance on the debt determinants of firms, the issue of capital structure is still controversial. This work sought to identify the predominant factors that determine the capital structure of Brazilian publicly traded, non-financial firms. The work used a quantitative approach, applying multiple linear regression to panel data. Estimates were made by the method of ordinary least squares with a fixed effects model. About 116 companies were selected for this research, covering the period from 2003 to 2007. The variables and hypotheses tested were built from capital structure theories and empirical research. Results indicate that variables such as risk, size, asset composition and firm growth influence indebtedness. The profitability variable was not relevant to the composition of indebtedness of the companies analyzed. However, when analyzing only long-term debt, the conclusion is that the relevant variables are firm size and, especially, asset composition (tangibility). In this sense, the smaller the firm, or the greater the share of fixed assets in total assets, the greater its propensity for long-term debt. Furthermore, this research could not identify a predominant theory to explain the capital structure of Brazilian firms.
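The fixed-effects panel estimator described above can be sketched as a within-transformation (demeaning inside each firm) followed by OLS. Everything in this numpy example — firm count, coefficient values, and variable names such as `size` and `tang` — is invented for illustration and is not the study's data.

```python
import numpy as np

def fixed_effects_ols(X, y, firm_ids):
    """Within estimator: demean X and y inside each firm, then run OLS on the
    demeaned data. Demeaning wipes out time-invariant firm effects."""
    Xd, yd = X.astype(float).copy(), y.astype(float).copy()
    for g in np.unique(firm_ids):
        m = firm_ids == g
        Xd[m] -= Xd[m].mean(axis=0)
        yd[m] -= yd[m].mean()
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

# Toy panel: 50 firms x 5 years; leverage driven by size, tangibility,
# and an unobserved time-invariant firm effect alpha.
rng = np.random.default_rng(1)
firms = np.repeat(np.arange(50), 5)
size = rng.normal(10, 2, 250)
tang = rng.uniform(0, 1, 250)
alpha = rng.normal(0, 1, 50)[firms]
leverage = 0.3 * size + 0.5 * tang + alpha + 0.1 * rng.standard_normal(250)

X = np.column_stack([size, tang])
beta_hat = fixed_effects_ols(X, leverage, firms)
```

Because the firm effect `alpha` is constant within each firm, the within-transformation removes it and the slope estimates are unbiased.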
Abstract:
In recent decades the public sector has come under pressure to improve its performance, and Information Technology (IT) has increasingly been used as a tool to reach that goal. Thus, determining which factors influence the acceptance and use of technology, impacting the success of its implementation and the desired organizational results, has become an important issue in public organizations, particularly in institutions of higher education. The Technology Acceptance Model (TAM) was used as the basis for this study; it is built on the constructs of perceived usefulness and perceived ease of use. However, when it comes to integrated management systems, given the complexity of their implementation, organizational factors were added to better explain the acceptance of such systems. Thus, five constructs related to critical success factors in ERP implementation were added to the TAM: top management support, communication, training, cooperation, and technological complexity (BUENO and SALMERON, 2008). Based on the foregoing, the following research problem was posed: which factors influence the acceptance and use of the SIE / academic module at the Federal University of Pará, from the perception of teacher and technician users? The purpose of this study was to identify the influence of organizational factors and behavioral antecedents on the behavioral intention to use the SIE / academic module at UFPA from the perspective of teacher and technician users. This is applied research, exploratory and descriptive, with a quantitative approach implemented as a survey; data were collected through a structured questionnaire applied to a sample of 229 teachers and 30 technical-administrative staff. Data analysis was carried out through descriptive statistics and structural equation modeling with the partial least squares (PLS) technique.
First, the measurement model was assessed, and reliability, convergent validity and discriminant validity were verified for all indicators and constructs. Then the structural model was analyzed using the bootstrap resampling technique. In the assessment of statistical significance, all hypotheses were supported. The coefficient of determination (R²) was high or moderate in five of the six endogenous variables, and the model explains 47.3% of the variation in behavioral intention. It is noteworthy that, among the antecedents of behavioral intention (BI) analyzed in this study, perceived usefulness is the variable with the greatest effect on behavioral intention, followed by perceived ease of use (PEU) and attitude (AT). Among the organizational aspects (critical success factors) studied, technological complexity (TC) and training (ERT) had the greatest effect on behavioral intention to use, although these effects were lower than those produced by the behavioral factors (originating from the TAM). Top management support (TMS) showed, among all variables, the least effect on intention to use (BI), followed by communication (COM) and cooperation (CO), which exert a low effect on behavioral intention (BI). Therefore, as in other studies on the TAM, the constructs were adequate for the present research. Thus, the study contributed evidence that the Technology Acceptance Model can be applied to predict the acceptance of integrated management systems, even in public organizations. Keywords: Technology
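The bootstrap significance testing applied to the structural model can be illustrated with ordinary regression coefficients; full PLS-SEM path estimation is more involved, so the sketch below only demonstrates the resampling idea. The construct names (`pu`, `peu`, `bi`) echo TAM, but the data and effect sizes are simulated, not the study's.

```python
import numpy as np

def bootstrap_coef(X, y, n_boot=500, seed=0):
    """Resample cases with replacement, refit OLS each time, and return the
    bootstrap distribution of the coefficients. PLS-SEM software applies the
    same idea to the path coefficients of the structural model."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        draws[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return draws

# Simulated scores: behavioural intention driven by perceived usefulness (PU)
# and perceived ease of use (PEU); n matches the study's 229 + 30 respondents.
rng = np.random.default_rng(42)
n = 259
pu = rng.standard_normal(n)
peu = rng.standard_normal(n)
bi = 0.5 * pu + 0.3 * peu + 0.5 * rng.standard_normal(n)

X = np.column_stack([pu, peu])
draws = bootstrap_coef(X, bi)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
significant = (lo > 0) | (hi < 0)   # 95% percentile CI excludes zero
```

A path is declared significant when its bootstrap confidence interval does not contain zero, which is the criterion behind "all hypotheses were supported".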
Abstract:
The study aims to identify the factors that influence the behavioral intention to adopt an academic Information System (SIE), in an environment of mandatory use, applied to the procurement process at the Federal University of Pará (UFPA). For this, a model of innovation adoption and technology acceptance (TAM) was used, focused on attitudes and intentions regarding behavior. The research was a quantitative study conducted through a survey of a sample of 96 administrative staff of the institution. For data analysis, structural equation modeling (SEM) was used, with the partial least squares method (PLS-PM). As for the results, the constructs attitude and subjective norm were confirmed as strong predictors of behavioral intention in a pre-adoption stage. Although use of the SIE is mandatory, perceived voluntariness also predicts behavioral intention. Regarding attitude, classical TAM variables, such as ease of use and perceived usefulness, appear as the main influences on attitude toward the system. It is hoped that the results of this study may support more efficient management of the process of implementing information systems and technologies, particularly in public universities.
Abstract:
There is a great deal of evidence showing that education is extremely important in many economic and social dimensions. In Brazil, education is a right guaranteed by the Federal Constitution; however, in Brazilian legislation the right to the three stages of basic education (kindergarten, elementary and high school) is better promoted and supported than the right to education at the college level. According to educational census data (INEP, 2009), 78% of all enrolments in college education are in private institutions, while the reverse holds in high school, where 84% of all enrolments are in public schools, revealing a contradiction in university admission. In the Brazilian scenario, public universities mostly receive students who performed better, having completed their elementary and high school education in private schools, while private universities receive students who had their basic education in public schools, which are characterized by low quality. These facts have led researchers to investigate the possible determinants of student performance on standardized tests, such as the Brazilian Vestibular exam, to guide the development of policies aimed at equal access to college education. Inspired by North American affirmative action policies, some Brazilian public universities have adopted quota policies to enable and facilitate the entry of "minorities" (blacks, pardos, natives, low-income people and public school students) into free college education. At the Federal University of Rio Grande do Norte (UFRN), the first incentives for candidates from public schools emerged in 2006, and were improved and expanded over the following 7 years. This study aimed to analyse and discuss the Argument of Inclusion (AI), the affirmative action policy that provides additional scoring for students from public schools.
Using an extensive database, the Ordinary Least Squares (OLS) technique was applied, as well as quantile regression, controlling for personal, socioeconomic and educational characteristics of the candidates in the 2010 Vestibular exam of the Federal University of Rio Grande do Norte (UFRN). The results demonstrate the importance of this incentive system, as well as the magnitude of other variables.
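Quantile regression complements OLS by estimating conditional quantiles rather than the conditional mean, which is why it is useful across a score distribution. The sketch below minimises the pinball (check) loss by averaged subgradient descent on synthetic data; production analyses use linear-programming solvers, and the single covariate and coefficients here are invented.

```python
import numpy as np

def quantile_regression(X, y, tau, iters=5000):
    """Minimise the pinball loss mean(r * (tau - 1{r < 0})) by subgradient
    descent with a decaying step, averaging the tail iterates (a sketch only)."""
    n, p = X.shape
    beta = np.zeros(p)
    avg = np.zeros(p)
    for t in range(iters):
        r = y - X @ beta
        grad = -X.T @ (tau - (r < 0)) / n     # subgradient of the pinball loss
        beta -= grad / np.sqrt(t + 1.0)
        if t >= iters // 2:                   # Polyak averaging over the tail
            avg += beta
    return avg / (iters - iters // 2)

# Toy data: score vs. one socioeconomic covariate, Gaussian noise
rng = np.random.default_rng(3)
n = 2000
x = rng.standard_normal(n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

b50 = quantile_regression(X, y, 0.5)   # median regression
b90 = quantile_regression(X, y, 0.9)   # upper tail of the score distribution
```

With symmetric noise the median fit matches the OLS line, while the 0.9-quantile line is shifted upward — the kind of heterogeneity across quantiles the study exploits.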
Abstract:
This paper presents a new multi-model identification technique based on ANFIS for nonlinear systems. In this technique, the structure used is the Takagi-Sugeno fuzzy model, whose consequents are local linear models that represent the system at different operating points, and whose antecedents are membership functions adjusted during the learning phase of the neuro-fuzzy ANFIS technique. The models that represent the system at different operating points can be found with linearization techniques such as the least squares method, which is robust against noise and simple to apply. The fuzzy system is responsible for indicating, through the membership functions, the proportion of each model that should be used. The membership functions can be adjusted by ANFIS using neural network algorithms, such as error backpropagation, so that the models found for each region are correctly interpolated, defining the contribution of each model for the possible system inputs. In multi-model approaches, this definition of each model's contribution is known as the metric; since this work is based on ANFIS, it is here called the ANFIS metric. In this way, the ANFIS metric is used to interpolate the various models that compose the system to be identified. Unlike traditional ANFIS, the proposed technique necessarily represents the system in several well-defined regions by unaltered models, whose weighted activation is given by the membership functions. The selection of regions for applying the least squares method is done manually, from graphical analysis of the system behavior or from the physical characteristics of the plant. This selection serves as a basis for defining the linear models and for generating the initial configuration of the membership functions. The experiments are conducted in a teaching tank with multiple sections, designed and built to highlight the characteristics of the technique.
The results from this tank illustrate the performance reached by the technique in the identification task, using different ANFIS configurations, and compare the developed technique with several simple-metric models and with the NNARX technique, also adapted for identification.
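The interpolation of local linear models by membership functions can be sketched directly. In the toy example below the three local models, their operating points and the Gaussian membership width are all invented; it only illustrates the Takagi-Sugeno weighting that the ANFIS metric performs, not the tank experiment.

```python
import numpy as np

def ts_output(u, centers, sigma, models):
    """Takagi-Sugeno style interpolation: each local linear model y = a*u + b
    is weighted by a normalised Gaussian membership centred on its operating
    point, mimicking the role of the ANFIS metric."""
    w = np.exp(-0.5 * ((u - centers) / sigma) ** 2)
    w = w / w.sum()                               # normalised memberships
    local = models[:, 0] * u + models[:, 1]       # output of every local model
    return w @ local

# Three invented local linearisations of a nonlinear plant at u = 1, 5, 9
centers = np.array([1.0, 5.0, 9.0])
models = np.array([[2.0, 0.0],     # slope, intercept valid near u = 1
                   [1.0, 5.0],     # near u = 5
                   [0.5, 9.5]])    # near u = 9
sigma = 1.5

y_mid = ts_output(5.0, centers, sigma, models)
```

Near an operating point the corresponding local model dominates; between points the output blends smoothly, which is the interpolation behaviour described in the abstract.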
Abstract:
Several mobile robots show nonlinear behavior, mainly due to friction phenomena between the mechanical parts of the robot or between the robot and the ground. Linear models are efficient in some cases, but it is necessary to take the robot's nonlinearity into consideration when precise displacement and positioning are desired. In this work, a parametric model identification procedure for a mobile robot with differential drive that considers the dead zone in the robot's actuators is proposed. The method consists in dividing the system into Hammerstein subsystems and then using the key-term separation principle to present the input-output relations, which expose the parameters of both the linear and nonlinear blocks. The parameters are then estimated simultaneously through a recursive least squares algorithm. The results show that it is possible to identify the dead-zone thresholds together with the linear parameters.
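Recursive least squares updates the parameter estimate one sample at a time, which is what makes it suitable for online identification. The sketch below identifies a plain first-order ARX toy plant; the dead-zone and key-term separation parts of the actual method are omitted, and the plant coefficients are invented.

```python
import numpy as np

def rls(phi_stream, y_stream, n_params, lam=0.99):
    """Standard recursive least squares with forgetting factor lam.
    phi is the regressor vector at each step; lam = 1 gives plain RLS."""
    theta = np.zeros(n_params)
    P = 1e4 * np.eye(n_params)                  # large P0: uninformative start
    for phi, y in zip(phi_stream, y_stream):
        K = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + K * (y - phi @ theta)   # correct by prediction error
        P = (P - np.outer(K, phi @ P)) / lam    # covariance update
    return theta

# First-order ARX toy plant: y[k] = a*y[k-1] + b*u[k-1] + noise
rng = np.random.default_rng(7)
a_true, b_true = 0.8, 0.5
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

phis = [np.array([y[k - 1], u[k - 1]]) for k in range(1, N)]
theta = rls(phis, y[1:], 2)
```

In the thesis's setting the regressor would also carry the key-term entries for the dead-zone block, so its thresholds are estimated in the same recursion.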
Abstract:
There are two main approaches to adaptive control. One is the so-called model reference adaptive control (MRAC), and the other is the so-called adaptive pole placement control (APPC). In MRAC, a reference model is chosen to generate the desired trajectory that the plant output has to follow, which can require cancellation of the plant zeros. Due to its flexibility in choosing both the controller design methodology (state feedback, compensator design, linear quadratic, etc.) and the adaptive law (least squares, gradient, etc.), APPC is the most general type of adaptive control. Traditionally, it has been developed in an indirect approach and, as an advantage, it may be applied to non-minimum-phase plants, because it does not involve plant zero-pole cancellation. Integration with variable structure systems adds fast transient response and robustness to parametric uncertainties and disturbances as well. In this work, a variable structure adaptive pole placement control (VS-APPC) is proposed, in which new switching laws are used instead of the traditional integral adaptive laws. Additionally, simulation results for an unstable first-order system, and simulation and experimental results for a three-phase induction motor, are shown.
Abstract:
Pattern classification is one of the most prominent subareas of machine learning. Among the various approaches to pattern classification problems, Support Vector Machines (SVM) receive great emphasis due to their ease of use and good generalization performance. The least squares formulation of the SVM (LS-SVM) finds the solution by solving a set of linear equations instead of the quadratic programming problem solved in the SVM. LS-SVMs have some free parameters that must be correctly chosen to achieve satisfactory results in a given task. Although LS-SVMs perform well, many tools have been developed to improve them, mainly new classification methods and the use of ensembles, that is, combinations of several classifiers. In this work, our proposal is to use an ensemble and a Genetic Algorithm (GA), a search algorithm based on the evolution of species, to enhance LS-SVM classification. In the construction of this ensemble, we use a random selection of attributes of the original problem, which splits the original problem into smaller ones, on each of which a classifier acts. We then apply a genetic algorithm to find effective values of the LS-SVM parameters and also a weight vector measuring the importance of each machine in the final classification. Finally, the final classification is obtained by a linear combination of the LS-SVM decision values with the weight vector. We used several benchmark classification problems to evaluate the performance of the algorithm and compared the results with other classifiers.
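The linear system at the heart of the LS-SVM can be written down compactly. The sketch below trains an RBF-kernel LS-SVM in its function-estimation form on two invented Gaussian blobs; the kernel width and regularisation constant are arbitrary choices for the demo, not tuned values from the thesis (where the GA performs that tuning).

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between row-sample matrices A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def lssvm_train(X, y, C=10.0):
    """LS-SVM training: instead of SVM's quadratic programme, solve the single
    linear system [[0, 1'], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.eye(n) / C
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                    # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new):
    return np.sign(rbf(X_new, X_train) @ alpha + b)

# Two well-separated Gaussian blobs, labels -1 / +1
rng = np.random.default_rng(5)
X = np.vstack([rng.normal([-2, -2], 0.5, (40, 2)),
               rng.normal([2, 2], 0.5, (40, 2))])
y = np.concatenate([-np.ones(40), np.ones(40)])

b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

The free parameters the abstract mentions are exactly `C` and the kernel width `gamma`, which is why a search procedure such as a GA is useful for choosing them.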
Abstract:
This work proposes a new phasor estimation technique for microprocessor-based numerical relays used in transmission line distance protection, based on the recursive least squares method and called modified random-walking least squares. The performance of phasor estimation methods is compromised mainly by the exponentially decaying DC component present in fault currents. In order to reduce the influence of the DC component, a morphological filter (MF) was added to the least squares method and applied prior to the phasor estimation process. The presented method is implemented in MATLAB and its performance is compared to the one-cycle Fourier technique and to a conventional phasor estimation method, also based on the least squares algorithm. The least-squares-based methods used for comparison with the proposed method were: recursive with forgetting factor, covariance resetting, and random walking. The performance analysis was carried out by means of synthetic signals and signals obtained from simulations in the Alternative Transients Program (ATP). When compared to the other phasor estimation methods, the proposed method showed satisfactory results regarding estimation speed, steady-state oscillation and overshoot. The method's performance was then analyzed under variations of the fault parameters (resistance, distance, angle of incidence and type of fault), and the results did not show significant variations in performance. In addition, the apparent impedance trajectory and the estimated fault distance were analysed, and the presented method showed better results than the one-cycle Fourier algorithm.
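A one-window least-squares phasor estimate is easy to sketch: fit the samples to a cosine/sine pair at the nominal frequency and read the phasor magnitude and angle from the two coefficients. The example below uses an invented 60 Hz signal and ignores the decaying DC component that motivates the thesis's morphological filter.

```python
import numpy as np

def ls_phasor(samples, t, omega):
    """One-window least-squares phasor estimate: fit
    x(t) ~ c1*cos(wt) + c2*sin(wt), then magnitude = |(c1, c2)| and
    phase = atan2(-c2, c1), since x = Xm*cos(wt + phi) expands to
    Xm*cos(phi)*cos(wt) - Xm*sin(phi)*sin(wt)."""
    H = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    c, *_ = np.linalg.lstsq(H, samples, rcond=None)
    return np.hypot(c[0], c[1]), np.arctan2(-c[1], c[0])

# 60 Hz current, 16 samples per cycle, two cycles, 100 A at 30 degrees
f, fs = 60.0, 60.0 * 16
t = np.arange(32) / fs
x = 100.0 * np.cos(2 * np.pi * f * t + np.pi / 6)
x += 0.5 * np.random.default_rng(9).standard_normal(t.size)

mag, phase = ls_phasor(x, t, 2 * np.pi * f)
```

A decaying DC offset in `x` would bias this fit, which is precisely the error source the pre-filtering stage in the thesis is designed to suppress.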
Abstract:
Natural gas, although basically composed of light hydrocarbons, also presents contaminant gases in its composition, such as CO2 (carbon dioxide) and H2S (hydrogen sulfide). H2S, which commonly occurs in oil and gas exploration and production activities, causes damage to oil and natural gas pipelines; consequently, removing hydrogen sulfide results in an important reduction in operating costs. It is also essential to consider the better quality of the oil to be processed in the refinery, resulting in economic, environmental and social benefits. All these facts demonstrate the need for the development and improvement of hydrogen sulfide scavengers. Currently, the oil industry uses several processes for hydrogen sulfide removal from natural gas. However, these processes produce amine derivatives which can damage distillation towers, can clog pipelines through the formation of insoluble precipitates, and also generate residues with great environmental impact. Therefore, it is of great importance to obtain a stable system, in inorganic or organic reaction media, able to remove hydrogen sulfide without forming by-products that affect the quality and cost of the natural gas processing, transport and distribution steps. Seeking to study, evaluate and model the mass transfer and kinetics of hydrogen sulfide removal, this study used an absorption column packed with Raschig rings, in which natural gas containing H2S as contaminant passed through an aqueous solution of inorganic compounds as stagnant liquid, the contaminant gas being absorbed by the liquid phase. The absorption column was coupled to an H2S detection system interfaced with a computer. The data and the model equations were fitted by the least squares method, modified by Levenberg-Marquardt.
In this study, in addition to water, the following solutions were used: sodium hydroxide, potassium permanganate, ferric chloride, copper sulfate, zinc chloride, potassium chromate, and manganese sulfate, all at low concentrations (about 10 ppm). These solutions were chosen to evaluate the interplay between physical and chemical absorption parameters, or even to obtain a better mass transfer coefficient, as in mixing reactors and absorption columns operating in counterflow. In this context, the evaluation of H2S removal arises as a valuable procedure for the treatment of natural gas and the destination of process by-products. The study of the obtained absorption curves makes it possible to determine the predominant mass transfer stage in the processes involved, the volumetric mass transfer coefficients, and the equilibrium concentrations. A kinetic study was also performed. The results showed that the H2S removal kinetics is fastest for NaOH. Since the study was performed at low concentrations of chemical reagents, it was possible to check the effect of secondary reactions for the other chemicals, especially in the case of KMnO4, whose by-product, MnO2, acts in the H2S absorption process. In addition, CuSO4 and FeCl3 also demonstrated good efficiency in H2S removal.
Abstract:
Waste stabilization ponds (WSP) have been widely used for sewage treatment in hot climate regions because they are economical and environmentally sustainable. In the present study, a WSP complex comprising a primary facultative pond (PFP) followed by two maturation ponds (MP-1 and MP-2) was studied in the city of Natal-RN. The main objective was to study the biodegradability of organic matter through the determination of the kinetic constant k throughout the system. The work was carried out in two phases. In the first, the variability of BOD, COD and TOC concentrations, and the relations between these parameters, were analysed in the influent raw sewage, in the pond effluents and in specific areas inside the ponds. In the second, the decay rate for organic matter (k) was determined throughout the system based on BOD tests of the influent sewage, pond effluents and water column samples taken from fixed locations within the ponds, using the mathematical methods of least squares and the Thomas equation. Subsequently, k was estimated as a function of a hydrodynamic model determined from the dispersion number (d), using empirical methods and a Partial Hydrodynamic Evaluation (PHE) obtained from tracer studies in a section of the primary facultative pond corresponding to 10% of its total length. The concentrations of biodegradable organic matter, measured as BOD and COD, gradually decreased through the series of ponds, giving overall removal efficiencies of 71.95% for BOD and 52.45% for COD. Determining k in the influent and effluent samples of the ponds by the least squares method gave the following values, respectively: primary facultative pond (0.23 day-1 and 0.09 day-1), maturation pond 1 (0.04 day-1 and 0.03 day-1) and maturation pond 2 (0.03 day-1 and 0.08 day-1).
When using the Thomas method, the values of k in the influents and effluents of the ponds were: primary facultative pond (0.17 day-1 and 0.07 day-1), maturation pond 1 (0.02 day-1 and 0.01 day-1) and maturation pond 2 (0.01 day-1 and 0.02 day-1). From the Partial Hydrodynamic Evaluation in the first section of the facultative pond, corresponding to 10% of its total length, the dispersion number obtained, d = 0.04, indicates that the hydraulic regime is one of dispersed flow, with a kinetic constant of 0.20 day-1.
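The Thomas method linearises the first-order BOD curve y = L(1 − e^(−kt)) by noting that (t/y)^(1/3) is approximately linear in t, so k and the ultimate BOD L can be recovered from an ordinary least-squares line. The sketch below applies it to a synthetic BOD series; only the k = 0.23 day-1 value echoes the text (the facultative-pond influent), while L and the sampling times are invented.

```python
import numpy as np

def thomas_k(t, y):
    """Thomas slope method: fit (t/y)^(1/3) = a + b*t by least squares,
    then k = 6*b/a and ultimate BOD L = 1/(k*a^3)."""
    z = (t / y) ** (1.0 / 3.0)
    b, a = np.polyfit(t, z, 1)       # returns slope, intercept
    k = 6.0 * b / a
    L = 1.0 / (k * a ** 3)
    return k, L

# Synthetic 5-day BOD series generated from the first-order model itself
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k_true, L_true = 0.23, 300.0         # L in mg/L (invented)
y = L_true * (1.0 - np.exp(-k_true * t))

k, L = thomas_k(t, y)
```

Because the Thomas approximation agrees with the exact curve to third order in kt, the recovered k is within a few percent of the true value over a typical 5-day test.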
Abstract:
In this work, chemometric tools were used to classify and quantify the protein content in milk powder samples, applying NIR diffuse reflectance spectroscopy combined with multivariate techniques. First, an exploratory analysis of the samples was carried out by principal component analysis (PCA), followed by classification with soft independent modeling of class analogy (SIMCA). It thus became possible to classify the samples, which were grouped by similarities in their composition. Finally, partial least squares regression (PLS) and principal component regression (PCR) allowed the quantification of the protein content in milk powder samples, compared with the Kjeldahl reference method. A total of 53 samples of milk powder sold in the metropolitan areas of Natal, Salvador and Rio de Janeiro were acquired for analysis; after data pre-treatment, four models were built and employed for the classification and quantification of the samples. The methods, once assessed and validated, showed good performance, accuracy and reliability, showing that NIR can be a non-invasive technique, since it produces no waste and saves analysis time.
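Exploratory PCA of spectra reduces to an SVD of the mean-centred data matrix. The example below builds toy "spectra" dominated by one latent factor; the sample and wavelength counts are invented and the data are simulated, not NIR measurements.

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centred data: scores = U*S, loadings = rows
    of Vt, explained variance ratio from the squared singular values."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    explained = S ** 2 / (S ** 2).sum()
    return scores, Vt[:n_components], explained[:n_components]

# Toy "spectra": 30 samples x 100 wavelengths driven by one latent factor
rng = np.random.default_rng(11)
latent = rng.standard_normal(30)                 # e.g. a composition factor
profile = np.sin(np.linspace(0, np.pi, 100))     # its spectral shape
X = np.outer(latent, profile) + 0.05 * rng.standard_normal((30, 100))

scores, loadings, explained = pca(X, 2)
```

Plotting the first two score columns is the usual exploratory step before a class-wise model such as SIMCA is built on the groups that appear.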
Abstract:
In this work, calibration models were constructed to determine the total lipid and moisture content in powdered milk samples. For this, near-infrared diffuse reflectance spectroscopy was used, combined with multivariate calibration. Initially, the spectral data were subjected to multiplicative scatter correction (MSC) and Savitzky-Golay smoothing. Then, the samples were divided into subgroups by hierarchical cluster analysis (HCA) with the Ward linkage criterion. It thus became possible to build partial least squares (PLS) regression models for the calibration and prediction of total lipid and moisture content, based on the values obtained by the reference methods (Soxhlet extraction and oven drying at 105 °C, respectively). We therefore conclude that NIR performed well for the quantification of powdered milk samples, mainly by minimizing analysis time, not destroying the samples and generating no waste. The prediction model for total lipids showed a correlation (R) of 0.9955 and an RMSEP of 0.8952, with an average error between the Soxhlet and NIR results of ±0.70%, while the moisture prediction model showed a correlation (R) of 0.9184, an RMSEP of 0.3778 and an error of ±0.76%.
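MSC, the pre-processing step mentioned above, removes per-sample additive offsets and multiplicative gains by regressing each spectrum on a reference spectrum. The sketch below uses an invented Gaussian "band" as the true spectral shape; with purely linear distortions and the mean spectrum as reference, the correction collapses all rows onto the same curve.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress every spectrum on a
    reference (default: the mean spectrum) to estimate an additive offset b
    and multiplicative gain a, then return corrected = (x - b) / a."""
    ref = spectra.mean(axis=0) if reference is None else reference
    out = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        a, b = np.polyfit(ref, x, 1)   # slope (gain), intercept (offset)
        out[i] = (x - b) / a
    return out

# Toy spectra: one true band shape distorted by per-sample gain and offset
rng = np.random.default_rng(13)
shape = np.exp(-0.5 * ((np.linspace(0, 10, 80) - 5) / 1.5) ** 2)
gains = rng.uniform(0.7, 1.3, 20)
offsets = rng.uniform(-0.2, 0.2, 20)
spectra = gains[:, None] * shape + offsets[:, None]

corrected = msc(spectra)
```

After correction the scatter-induced variation is gone, so any remaining differences between spectra reflect chemistry rather than particle-size effects.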
Abstract:
This work combines the potential of near-infrared (NIR) spectroscopy with chemometrics to determine the content of diclofenac tablets without destroying the sample. Ultraviolet spectroscopy, one of the official methods, was used as the reference method. In the construction of the multivariate calibration models, several types of pre-processing of the NIR spectral data were studied, such as scatter correction and first derivative. The regression method used in the construction of the calibration models was PLS (partial least squares), applied to NIR spectroscopic data from a set of 90 tablets divided into two sets (calibration and prediction): 54 samples were used for calibration and 36 for prediction, since the calibration procedure used full cross-validation, which eliminates the need for a separate validation set. The models were evaluated by the correlation coefficient R², the mean square calibration error (RMSEC) and the prediction error (RMSEP). The values predicted for the remaining 36 samples were consistent with those obtained by UV spectroscopy.
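A PLS1 calibration model of the kind used here can be sketched with the NIPALS deflation scheme. The toy data below mimic a spectra-to-content calibration with two latent factors; the sample count, noise levels and loadings are all invented, not the tablet data.

```python
import numpy as np

def pls1(X, y, n_comp):
    """PLS1 regression via NIPALS deflation. Returns the regression vector B
    and intercept b0 so that predictions are X @ B + b0."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                      # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                         # scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        qk = yc @ t / tt                   # y loading
        Xc = Xc - np.outer(t, p)           # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, ym - Xm @ B

# Toy calibration: 40 "spectra" of 60 channels, 2 latent factors drive y
rng = np.random.default_rng(17)
T = rng.standard_normal((40, 2))
X = T @ rng.standard_normal((2, 60)) + 0.01 * rng.standard_normal((40, 60))
y = T @ np.array([1.0, -0.5]) + 0.01 * rng.standard_normal(40)

B, b0 = pls1(X, y, 2)
pred = X @ B + b0
rmsec = np.sqrt(np.mean((pred - y) ** 2))
```

RMSEC, as in the abstract, is the root mean square error of the calibration fit; RMSEP would be the same quantity computed on held-out prediction samples.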