54 results for Nonlinear Analysis
at Universidade Federal do Rio Grande do Norte (UFRN)
Resumo:
This work presents a positional geometrically nonlinear formulation for trusses using different strain measures. The positional formulation is an alternative approach to nonlinear problems: it takes nodal positions, rather than the displacements widely found in the literature, as the unknowns of the nonlinear system. The work also describes the arc-length method used to trace equilibrium paths exhibiting snap-through and snap-back. Numerical applications to trusses already established in the literature, and comparisons with other studies, are provided to demonstrate the accuracy of the proposed formulation.
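In a positional formulation, the strain of a truss bar follows directly from the nodal coordinates, with no displacement field. A minimal sketch of two common strain measures computed this way (an illustration of the idea, not the paper's implementation):

```python
import math

def bar_strains(x0, x1, X0, X1):
    """Strains of a truss bar from current nodes x0, x1 and
    reference nodes X0, X1, each given as an (x, y) tuple."""
    L0 = math.dist(X0, X1)  # reference length
    L = math.dist(x0, x1)   # current length
    engineering = (L - L0) / L0
    green = (L * L - L0 * L0) / (2.0 * L0 * L0)  # Green-Lagrange
    return engineering, green

# A horizontal unit bar stretched by 10%:
e, g = bar_strains((0.0, 0.0), (1.1, 0.0), (0.0, 0.0), (1.0, 0.0))
```

The two measures agree for small stretches and diverge for large ones, which is why the choice of strain measure matters in a geometrically nonlinear analysis.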
Resumo:
This study presents an analytical approach to determine the temperature field developed during DC TIG welding of a thin aluminum plate. The nonlinear characteristics of the phenomenon, such as the dependence of the thermophysical and mechanical properties on temperature, were considered. In addition to conductive heat transfer, exchanges by natural convection and radiation were taken into account. A transient analysis is performed to obtain the temperature field as a function of time, and a three-dimensional model of the heat source is also discussed. The results obtained from the analytical model were compared with experimental ones and with those available in the literature. The analytical results show good agreement with the experimental data, demonstrating the feasibility and efficiency of the analytical method for simulating the thermal cycle of this welding process.
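The transient, temperature-dependent model of the abstract is not reproduced here, but the classical steady-state Rosenthal thin-plate solution (constant properties, moving point source) gives a feel for the temperature field around a welding arc. A generic sketch, not the author's model; the power, travel speed, thickness and aluminum property values are illustrative assumptions:

```python
import math

def k0(x, n=4000, t_max=20.0):
    """Modified Bessel function K0 via its integral representation,
    K0(x) = integral of exp(-x*cosh(t)) dt from 0 to infinity."""
    h = t_max / n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0  # trapezoid weights
        s += w * math.exp(-x * math.cosh(i * h))
    return s * h

def rosenthal_thin_plate(xi, y, Q=1500.0, v=0.005, d=0.003,
                         k=167.0, alpha=6.9e-5, T0=298.0):
    """Quasi-steady temperature (K) in a thin plate of thickness d,
    at (xi, y) in the frame moving with the source: xi > 0 ahead
    of the arc. Q absorbed power (W), v travel speed (m/s),
    k conductivity, alpha diffusivity (values assumed for aluminum)."""
    r = math.hypot(xi, y)
    return T0 + Q / (2.0 * math.pi * k * d) \
        * math.exp(-v * xi / (2.0 * alpha)) * k0(v * r / (2.0 * alpha))
```

The exponential factor makes the field asymmetric: points behind the arc (xi < 0) stay hotter than points the same distance ahead, which is the qualitative thermal-cycle shape the analytical study reconstructs.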
Resumo:
This study investigates the chemical species in water produced from reservoir zones of oil production in the Monte Alegre field (onshore production), with the aim of developing a model for identifying the produced water of different zones or groups of zones. Using the concentrations of anions and cations in the produced water as input parameters for Linear Discriminant Analysis, it was possible to estimate models and compare their predictions, respecting the particularities of each method, in order to ascertain which would be most appropriate. The Resubstitution, Holdout and Lachenbruch methods were used for the adjustment and overall evaluation of the models built. Among the models estimated for wells producing water from a single production zone, the most suitable was the one evaluated by the Holdout method, with a hit rate of 90%. The discriminant functions (CV1, CV2 and CV3) estimated in this model were used to model new functions for samples of artificial mixtures of produced water (prepared in our laboratory) and samples of actual mixtures of produced water (collected in wells producing from more than one zone). The experiment with these mixtures was carried out according to a simplex-centroid mixture design, and the presence of water from steam injection was also simulated in part of the samples. Using two- and three-dimensional plots, it was possible to estimate the proportion of water from each production zone.
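The three validation schemes named above differ only in which samples are used for fitting and which for scoring. A toy sketch with invented two-zone ion concentrations and a nearest-centroid rule standing in for the discriminant functions (both are illustrative assumptions, not the study's data or model):

```python
import math
import random

random.seed(42)

# Hypothetical two-zone data: (chloride, sodium) concentrations in mg/L.
def sample(zone, n):
    mu = [(50.0, 30.0), (120.0, 80.0)][zone]
    return [((random.gauss(mu[0], 10.0), random.gauss(mu[1], 10.0)), zone)
            for _ in range(n)]

data = sample(0, 40) + sample(1, 40)

def fit(train):
    """Nearest-centroid classifier: one centroid per zone."""
    model = {}
    for z in (0, 1):
        pts = [x for x, lab in train if lab == z]
        model[z] = (sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts))
    return model

def predict(model, x):
    return min(model, key=lambda z: math.dist(x, model[z]))

def accuracy(model, subset):
    return sum(predict(model, x) == lab for x, lab in subset) / len(subset)

# Resubstitution: score on the same data used for fitting (optimistic).
resub = accuracy(fit(data), data)

# Holdout: fit on one split, score on the other.
random.shuffle(data)
train, test = data[:60], data[60:]
holdout = accuracy(fit(train), test)

# Lachenbruch (leave-one-out): refit excluding each sample in turn.
loo = sum(predict(fit(data[:i] + data[i + 1:]), data[i][0]) == data[i][1]
          for i in range(len(data))) / len(data)
```

Resubstitution tends to overstate the hit rate; Holdout and Lachenbruch give more honest estimates, which is why the study reports its 90% figure under the Holdout scheme.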
Resumo:
VARELA, M.L. et al. Otimização de uma metodologia para análise mineralógica racional de argilominerais. Cerâmica, São Paulo, n. 51, p. 387-391, 2005.
Resumo:
This dissertation examines organizational innovation as a nonlinear process that occurs in a social and political context and is, therefore, socially embedded. It examines the case of shrimp farming in the state of Rio Grande do Norte (RN), starting from the following problem: although RN is the largest producer of farmed shrimp in Brazil, the industry faces a series of bottlenecks in generating innovation, concerning the social and political relationships among the various actors in the network, whether private or public, and their consequences in terms of the opportunities and limits created for innovative dynamics. The objective of the research is to understand how the social embeddedness of the actors in the RN shrimp industry affects, within the context of structural and political relations, the generation of innovation throughout its technological trajectory. The social embeddedness approach balances the atomized, undersocialized and oversocialized perspectives of economic action, considering the human capacity to act as well as sources of constraint, analyzed here through structural and political mechanisms. In methodological terms, this is a case study analyzed through bibliographic, documentary and field research. Primary data were collected through semi-structured interviews and analyzed in depth using content analysis. A longitudinal approach was adopted, seeking to understand the phenomenon from the perspective of the subjects and describing it through an inductive process of investigation. After characterizing the sector and defining its technological trajectory, the analysis of the results followed its four stages: (1) Introduction of the Technology, 1973-1980; (2) Intensification of Research, 1981-1991; (3) Technological Adaptation, 1992-2003; and (4) Technological Crisis, 2004-2009.
A cross-sectional analysis along the evolutionary trajectory revealed the character of the structural and political changes over time and their implications for the innovation-generating process. The technological limit the sector has reached requires changes in its technology standards, but it is more likely that the RN shrimp industry is entering a new phase of its technological trajectory rather than a new technological paradigm.
Resumo:
Forecasting is the basis for strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. Many methods exist to assist in time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies on more advanced forecasting methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting that has attracted much interest in the financial community and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study examined whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional time series analysis. For this purpose, a quantitative study was developed using financial and economic indices, and two supervised-learning feedforward ANN models were built, whose structures consisted of 20 inputs, 90 neurons in a single hidden layer, and one output (the Ibovespa). These models used backpropagation, a hyperbolic tangent sigmoid activation function, and a linear output function.
To analyze the suitability of Artificial Neural Networks for forecasting the Ibovespa, we compared their results with those of a GARCH(1,1) time series model. Once both methods (ANN and GARCH) were applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U, and forecast encompassing tests. The models developed with ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and Theil's U indicated that all three models have smaller errors than a naïve forecast. Although the ANN based on returns had worse precision indicators than the ANN based on prices, the forecast encompassing test rejected the hypothesis that the latter is better than the former, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than traditional time series models, represented by the GARCH model.
Resumo:
The objective is to analyze the relationship between risk and the number of stocks in a portfolio for an individual investor whose stocks are chosen by the "naive strategy". For this, we carried out an experiment in which individuals selected stocks so as to reproduce this relationship. 126 participants were told that the risk of the first choice would be the average of the standard deviations of all single-asset portfolios, and that the same procedure would be applied to portfolios composed of two, three, and so on, up to 30 stocks. They selected the assets they wanted in their portfolios without the support of any financial analysis. For comparison, we also ran a hypothetical simulation of 126 investors who selected stocks from the same universe through a random number generator; thus, each real participant is matched with a random hypothetical investor facing the same opportunity set. Patterns were observed in the participants' individual portfolios, characterizing the risk curves of the sample components. Because such groupings are somewhat arbitrary, a more objective measure of behavior was used: a simple linear regression for each participant, predicting portfolio variance as a function of the number of assets. In addition, a pooled cross-sectional regression was run on all observations. The expected pattern holds on average but not for most individuals, many of whom effectively "de-diversify" when adding seemingly random securities. Furthermore, the results are slightly worse when using a random number generator. This finding challenges the belief that only a small number of securities is necessary for diversification and shows that it applies only to a large sample. The implications are important, since many individual investors hold few stocks in their portfolios.
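The risk curve the participants were asked to reproduce can be illustrated with the standard equicorrelation approximation for an equally weighted portfolio; the volatility and correlation values below are assumptions for illustration, not the experiment's data:

```python
import math

def naive_portfolio_sd(n, sigma=0.30, rho=0.30):
    """Standard deviation of an equally weighted portfolio of n assets,
    each with volatility sigma and pairwise correlation rho
    (equicorrelation model; parameter values are illustrative)."""
    variance = sigma ** 2 / n + (1 - 1 / n) * rho * sigma ** 2
    return math.sqrt(variance)

# Risk for 1..30 stocks: falls quickly at first, then flattens toward
# the undiversifiable floor sqrt(rho) * sigma.
curve = [naive_portfolio_sd(n) for n in range(1, 31)]
```

The curve is monotonically decreasing only under these idealized assumptions; the experiment's point is precisely that real investors picking stocks naively often fail to trace it.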
Resumo:
This research investigates the hedge effectiveness and the optimal hedge ratio for the futures markets of cattle, coffee, ethanol, corn and soybean. The optimal hedge ratio and hedge effectiveness are estimated through multivariate GARCH models with error correction, considering the possible phenomenon of an optimal hedge ratio differential between the crop and intercrop periods. The optimal hedge ratio is expected to be higher in the intercrop period due to the uncertainty related to a possible supply shock (LAZZARINI, 2010). Among the futures contracts studied, the coffee, ethanol and soybean contracts had not yet been examined for this phenomenon, and the corn and ethanol contracts had not been the object of research on dynamic hedging strategies. This study also distinguishes itself by including the GARCH model with error correction, which had never been considered in investigations of a possible optimal hedge ratio differential between the crop and intercrop periods. The commodity quotations on BM&FBOVESPA were used as futures prices and the CEPEA index as the spot market, with daily frequency, from May 2010 to June 2013 for cattle, coffee, ethanol and corn, and to August 2012 for soybean. Similar results were obtained for all commodities: there is a long-term relationship between the spot and futures markets, with bicausality between the spot and futures markets for cattle, coffee, ethanol and corn, and unicausality from the futures price of soybean to its spot price. The optimal hedge ratio was estimated through three different strategies: linear regression by OLS, the diagonal BEKK-GARCH model, and the diagonal BEKK-GARCH model with an intercrop dummy. The OLS regression model pointed to hedge inefficiency, considering that the optimal hedge ratio obtained was too low. The second model represents a dynamic hedging strategy, capturing time variation in the optimal hedge ratio.
The last hedging strategy did not detect an optimal hedge ratio differential between the crop and intercrop periods; therefore, contrary to what was expected, investors do not need to increase their position in the futures market during the intercrop period.
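The static OLS leg of the comparison can be sketched: the classical minimum-variance hedge ratio is cov(Δs, Δf) / var(Δf), and hedge effectiveness is the variance reduction the hedged position achieves. A generic illustration, not the study's BEKK-GARCH estimation:

```python
def cov(a, b):
    """Sample covariance of two equal-length series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def price_changes(series):
    return [b - a for a, b in zip(series, series[1:])]

def optimal_hedge_ratio(spot, fut):
    """Static OLS hedge ratio h* = cov(ds, df) / var(df)."""
    ds, df = price_changes(spot), price_changes(fut)
    return cov(ds, df) / cov(df, df)

def hedge_effectiveness(spot, fut):
    """Fraction of spot-change variance removed by the hedge
    (1.0 = perfect hedge, 0.0 = useless hedge)."""
    h = optimal_hedge_ratio(spot, fut)
    ds, df = price_changes(spot), price_changes(fut)
    hedged = [s - h * f for s, f in zip(ds, df)]
    return 1.0 - cov(hedged, hedged) / cov(ds, ds)
```

A dynamic strategy replaces the constant h* with a time-varying ratio from conditional (co)variances, which is what the BEKK-GARCH models in the study deliver.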
Resumo:
In this work we developed a computer simulation program for the physics of porous structures, based on the C++ programming language and a GeForce 9600 GT card with the PhysX chip, originally developed for video games. With this tool, the capacity for physical interaction between simulated objects is enlarged, making it possible to simulate porous structures, for example reservoir rocks, as well as high-density structures. The initial step of the simulation is the construction of a cubic porous structure composed of spheres, either of a single size or of varying sizes; structures with various volume fractions can also be simulated. The results are divided into two parts: in the first, the spheres are treated as solid grains, i.e., the matrix phase represents the porosity; in the second, the spheres are treated as pores, so the matrix phase represents the solid phase. The simulations are the same in both cases, but the simulated structures are intrinsically different. To validate the results produced by the program, simulations were performed varying the number of grains, the grain size distribution and the void fraction of the structure. All results proved statistically reliable and consistent with those reported in the literature. The mean values and distributions of the measured stereological parameters, such as linear intercept, perimeter of the sectional area, sectional area and mean free path, agree with the results reported in the literature for the simulated structures. The results may help in the understanding of real structures.
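The PhysX-based generator itself is not reproduced here, but the void fraction of a sphere structure of either kind can be checked by Monte Carlo point sampling, a common validation device. A minimal sketch with an illustrative sphere list (not the program's data structures):

```python
import math
import random

def porosity_mc(spheres, box=1.0, n=20000, seed=1):
    """Estimate the pore volume fraction of a unit box containing
    solid spheres, by sampling random points and counting those
    that fall outside every sphere. spheres: list of (cx, cy, cz, r)."""
    rng = random.Random(seed)
    void = 0
    for _ in range(n):
        p = (rng.uniform(0.0, box),
             rng.uniform(0.0, box),
             rng.uniform(0.0, box))
        if all(math.dist(p, (cx, cy, cz)) > r for cx, cy, cz, r in spheres):
            void += 1
    return void / n

# One solid sphere of radius 0.5 centered in the unit box:
phi = porosity_mc([(0.5, 0.5, 0.5, 0.5)])
```

Swapping the roles of spheres and matrix (spheres as pores) just means reading `1 - phi` instead of `phi`, mirroring the two cases in the abstract.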
Resumo:
This thesis presents a methodological proposal for the development of an intelligent system able to obtain the effective porosity of sedimentary layers automatically, from a database built with information from Ground Penetrating Radar (GPR). The intelligent system was built to model the relation between porosity (the response variable) and the electromagnetic attributes from the GPR (the explanatory variables). With it, porosity was estimated using an artificial neural network (a Multilayer Perceptron, MLP) and multiple linear regression. The data for the response and explanatory variables were obtained in the laboratory and from GPR surveys carried out over controlled sites, in the field and in the laboratory. The proposed intelligent system can estimate porosity from any available database containing the same variables used in this thesis, and the architecture of the neural network can be modified as needed to suit the available database. The multiple linear regression model allowed the identification and quantification of the influence (effect level) of each explanatory variable on the porosity estimate. The proposed methodology may transform the use of GPR, not only for imaging sedimentary geometry and facies, but mainly for the automatic determination of porosity, one of the most important parameters for the characterization of reservoir rocks (for petroleum or water).
Resumo:
Several equations exist for predicting VO2max from variables obtained during exercise testing on various ergometers; however, no comparable equation using the ventilatory thresholds from submaximal cardiopulmonary exercise testing on a cycle ergometer is available. The objective of the present study was to evaluate the accuracy of VO2max prediction models based on submaximal effort indicators. To this end, 7,877 volunteers (4,640 female and 3,147 male), all healthy non-athletes aged over 20 years, were tested with a maximal incremental protocol on a cycle ergometer and randomly divided into two groups: A, for estimation, and B, for validation. From the independent variables body mass (MC, in kg), workload at the second ventilatory threshold (WL2) and heart rate at the second threshold (FCL2), it was possible to build a multiple linear regression model for predicting VO2max. The results demonstrate that, in healthy non-athletes of both sexes, VO2max can be predicted with minimal error (SEE = 1.00%) from submaximal indicators obtained in an incremental test. The multidisciplinary character of the work is reflected in the techniques employed, which involved pulmonology, physical education, physiology and statistics.
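A multiple linear regression of the kind described can be sketched generically via the normal equations; the predictor values and coefficients below are invented for illustration and are not the study's equation:

```python
def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting.
    X: rows of predictors; an intercept column is added. Returns
    [intercept, coef_1, coef_2, ...]."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)]
         for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for c in range(p):                     # forward elimination
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    coef = [0.0] * p                       # back substitution
    for c in reversed(range(p)):
        coef[c] = (b[c] - sum(A[c][k] * coef[k]
                              for k in range(c + 1, p))) / A[c][c]
    return coef
```

With predictors such as body mass and threshold workload, the fitted coefficients quantify each variable's contribution to the VO2max estimate, which is exactly what the estimation group (A) provides and the validation group (B) checks.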
Resumo:
Introduction: Mouth cancer ranks among the ten highest cancer incidences in the world, and in Brazil the incidence and mortality rates of oral cancer are among the highest worldwide. For intraoral cancer (tongue, gum, floor of the mouth, and other non-specified parts of the mouth), the accumulated five-year survival rate is less than 50%. Objectives: To estimate the accumulated five-year survival probability and adjust the Cox regression model for mouth and oropharyngeal cancers, according to age range, sex, morphology and location, for the city of Natal; and to describe the mortality and incidence coefficients of oral and oropharyngeal cancer and their trends in Natal, between 1980 and 2001 and between 1997 and 2001, respectively. Methods: Survival data for patients registered between 1997 and 2001 were obtained from the Population-based Cancer Registry of Natal. Differences between survival curves were tested with the log-rank test; the Cox proportional hazards model was used to estimate hazard ratios; and the simple linear regression model was used for trend analyses of the mortality and incidence coefficients. Results: The five-year survival probability was 22.9%. Patients with undifferentiated malignant neoplasia were 4.7 times more at risk of dying than those with epidermoid carcinoma, and patients with oropharyngeal cancer were 2.0 times more at risk of dying than those with mouth cancer. The mouth cancer mortality and incidence coefficients for Natal were 4.3 and 2.9 per 100,000 inhabitants, respectively; for oropharyngeal cancer, they were 1.1 and 0.7 per 100,000 inhabitants. Conclusions: A low five-year survival rate was identified. Patients with oropharyngeal cancer had a greater risk of dying, independent of the factors considered in this study, and, also independent of other factors, undifferentiated malignant neoplasia posed a greater risk of death.
The magnitudes of the incidence coefficients found are not considered high, whereas the magnitudes of the mortality coefficients are.
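The survival curves that the log-rank test compares come from the Kaplan-Meier estimator, which can be sketched minimally; the follow-up times below are hypothetical, not the registry's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator. events[i] is 1 for a death,
    0 for a censored observation. Returns (time, S(t)) pairs at
    each distinct death time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]   # count deaths at time t
            removed += 1           # deaths and censorings leave the risk set
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            out.append((t, s))
        at_risk -= removed
    return out
```

Censored patients reduce the risk set without dropping the curve, which is what distinguishes this estimator from a raw proportion surviving; the Cox model then relates the corresponding hazard to covariates such as morphology and tumor location.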
Resumo:
The present work deals with the linear analysis of two-dimensional axisymmetric structures, through the development and implementation of a Finite Element Method code. The structures are initially studied in isolation and afterwards combined into coupled structures, that is, assemblages, including tanks and pressure vessels. Examples are analyzed and, to verify accuracy, the results are compared with those furnished by analytical solutions.
Resumo:
The aim of this work is the numerical simulation of the mechanical performance of concrete affected by the Alkali-Aggregate Reaction (AAR), first reported by Stanton in 1940. AAR has attracted attention in civil engineering since the early 1980s, when the consequences of its swelling effect on concrete structures, including cracking, failure and loss of serviceability, began to be reported. Despite the availability of experimental results, the formulation of the problem still lacks refinement, so its solution remains uncertain. Numerical simulation is an important resource for assessing the damage caused to structures by the reaction, and for planning their recovery. The supporting analyses of this work were performed by means of the finite element method, with an orthotropic nonlinear formulation and a thermodynamic model of AAR-induced deformation. The results revealed that the swelling effect of AAR degrades the mechanical performance of concrete by decreasing the margin of safety prior to material failure. They also showed that temperature influences only the kinetics of the reaction, so that failure occurred earlier the higher the temperature of the concrete mass.
Resumo:
Telecommunications is one of the most dynamic and strategic areas in the world. Many technological innovations have modified the way information is exchanged; information and knowledge are now shared in networks, and broadband Internet is the new way of sharing content. This dissertation deals with performance indicators related to the maintenance services of telecommunications networks and uses multivariate regression models to estimate churn, the loss of customers to other companies. In a competitive environment, telecommunications companies have devised strategies to minimize the loss of customers, since losing a customer costs more than acquiring a new one. Corporations have plenty of data stored in a diversity of databases, but the data are usually not properly explored. This work uses Knowledge Discovery in Databases (KDD) to establish rules and new models explaining how churn, as a dependent variable, is related to a diversity of service indicators, such as the time to deploy the service (in hours) and the time to repair (in hours). Extracting meaningful knowledge is, in many cases, a challenge. The models were tested and statistically analyzed, and the results allow the analysis and identification of which service quality indicators influence churn. Actions are also proposed to solve, at least in part, this problem.
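A churn model of this kind is often a logistic regression on the service indicators. A minimal sketch, assuming a single invented indicator (time to repair, in hours) and toy labels, not the dissertation's data or chosen model:

```python
import math

def _sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.01, epochs=2000):
    """Logistic regression fitted by stochastic gradient descent.
    X: rows of indicator values; y: 1 if the customer churned."""
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the intercept
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = _sigmoid(w[0] + sum(a * b for a, b in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def churn_probability(w, x):
    return _sigmoid(w[0] + sum(a * b for a, b in zip(w[1:], x)))

# Toy data: customers with long repair times churn.
X = [(2.0,), (3.0,), (4.0,), (5.0,), (20.0,), (25.0,), (28.0,), (30.0,)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w = fit_logistic(X, y)
```

The fitted weights show the direction and strength of each indicator's effect on churn, which is the kind of relationship the KDD process in the dissertation is meant to surface.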