24 results for "Cargas não-lineares" (nonlinear loads)

at the Universidade Federal do Rio Grande do Norte (UFRN)


Relevância:

20.00%

Publicador:

Resumo:

An unfolding method for linear intercept distributions and section area distributions was implemented for structures with spherical grains. Although the unfolding routine depends on the grain shape, structures with spheroidal grains can also be treated by it; grains of non-spheroidal shape can be treated only approximately. A two-part software package was developed: the first part calculates the probability matrix, and the second part uses this matrix to minimize the chi-square statistic. The results can be presented with any required number of size classes. The probability matrix was determined from linear intercept and section area distributions created by computer simulation; using curve fitting, the probability matrix could be determined for spheres of any size. Two kinds of tests were carried out to verify the efficiency of the technique. The theoretical tests represent ideal cases, in which the software recovered the proposed grain size distribution exactly. In the second test, a structure was simulated in the computer and images of its slices were used to produce the corresponding linear intercept and section area distributions, which were then unfolded. This test is closer to reality, and the results show deviations from the real size distribution caused by statistical fluctuation. The unfolding of the linear intercept distribution works perfectly, but the unfolding of the section area distribution fails because of a failure in the chi-square minimization: the minimization method uses a matrix inversion routine, and the matrix generated by this procedure cannot be inverted. Another minimization method must be used.
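In the ideal noise-free case, the chi-square minimization described above reduces to an ordinary least-squares problem. The toy sketch below (with an invented 4x4 probability matrix, not the one computed by the software) shows how a least-squares routine such as `numpy.linalg.lstsq` recovers the size distribution without the explicit matrix inversion that failed for the section area data:

```python
import numpy as np

# Hypothetical illustration of the unfolding step: column j of the
# probability matrix P gives the distribution of measured classes
# produced by grains of size class j. Given a measured distribution m,
# the grain size distribution g is found by chi-square (least-squares)
# minimization. P and g_true below are made up for a 4-class example.
P = np.array([
    [1.0, 0.3, 0.2, 0.1],
    [0.0, 0.7, 0.3, 0.2],
    [0.0, 0.0, 0.5, 0.3],
    [0.0, 0.0, 0.0, 0.4],
])
g_true = np.array([0.1, 0.2, 0.4, 0.3])   # assumed "real" distribution
m = P @ g_true                            # simulated measured distribution

# Solve min ||P g - m||^2 by lstsq, which works through a stable
# factorization instead of an explicit matrix inversion.
g_est, *_ = np.linalg.lstsq(P, m, rcond=None)
print(np.allclose(g_est, g_true))         # -> True in this ideal case
```

For noisy measured distributions the recovered `g_est` would deviate from `g_true`, mirroring the statistical fluctuations reported in the second test.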

Relevância:

20.00%

Publicador:

Resumo:

The preparation of cement slurries for offshore well cementing involves mixing all solid components to be added to the mixing water on the platform. The aim of this work was to study the formulation of pre-prepared dry mixtures, or grouts, for offshore oilwell cementing. The effect of mineral filler additions on the strength of lightweight grouts applied at depths down to 400 m under water depths of 500 m was investigated. Lightweight materials and fine aggregates were selected. For the choice of starting materials, the pozzolanic activity of low-cost fillers such as porcelain tile residue, microsilica and diatomaceous earth was studied by X-ray diffraction and mechanical strength tests. Hardened grouts containing porcelain tile residue and microsilica showed high strength at early ages. Based on this preliminary investigation, the mechanical strength of grouts with density 1.74 g/cm3 (14.5 lb/gal), cured initially at 27 °C, was studied using cement, microsilica, porcelain tile residue and an anti-foaming agent. The mixture containing 7% porcelain tile residue and 7% microsilica showed the highest compressive strength after curing for 24 hours. This composition was chosen to be studied further and adapted for offshore conditions based on tests performed at 4 °C. The grout containing cement, 7% porcelain tile residue, 7% active silica and admixtures (CaCl2, anti-foaming agent and dispersant) showed satisfactory rheology and mechanical strength after curing for 24 hours.

Relevância:

20.00%

Publicador:

Resumo:

In this dissertation, the theoretical principles governing molecular modeling were applied to the electronic characterization of the oligopeptide α3 and its variants (5Q, 7Q)-α3, as well as to the quantum description of the interaction between the aminoglycoside hygromycin B and the 30S subunit of the bacterial ribosome. In the first study, the linear, neutral dipeptides that make up the mentioned oligopeptides were modeled and then optimized toward structures of lower potential energy and appropriate dihedral angles. Three successive geometry optimization processes, based on classical Newtonian mechanics, semi-empirical methods and density functional theory (DFT), explored the energy landscape of each dipeptide in the search for ideal minimum-energy structures. Finally, the best conformers were characterized in terms of their electrostatic potential, ionization energy (amino acids), frontier molecular orbitals and hopping terms. From the hopping terms described in this study, it was possible in subsequent studies to characterize the charge transport properties of these model peptides. This envisions a new biosensor technology capable of diagnosing amyloid diseases, related to the accumulation of misfolded proteins, based on the conductivity displayed by the patient's proteins. In the second part of this dissertation, a quantum molecular modeling study of the interaction energy between a ribosomal aminoglycoside antibiotic and its receptor was carried out. Hygromycin B (hygB) is an aminoglycoside antibiotic that affects ribosomal translocation by direct interaction with the small subunit of the bacterial ribosome (30S), specifically with nucleotides in helix 44 of the 16S ribosomal RNA (16S rRNA).
Due to the strong electrostatic character of this binding, an energetic investigation of the binding mechanism of this complex was proposed using different values of the dielectric constant (ε = 0, 4, 10, 20 and 40), values widely used to study the electrostatic properties of biomolecules. For this, increasing radii centered on the hygB centroid were measured from the 30S-hygB crystal structure (1HNZ.pdb), and the individual interaction energy of each enclosed nucleotide was determined by quantum calculations using the molecular fractionation with conjugate caps (MFCC) strategy. It was noticed that larger dielectric constants attenuate the individual interaction energies, allowing the convergence state to be reached quickly. However, only for ε = 40 does the total binding energy of the drug-receptor interaction stabilize at r = 18 Å, which defines an appropriate binding pocket because it encompasses the main residues that interact most strongly with hygB - C1403, C1404, G1405, A1493, G1494, U1495, U1498 and C1496. Thus, a dielectric constant ≈ 40 is ideal for the treatment of systems with many electrical charges. By comparing the individual binding energies of the 16S rRNA nucleotides with experimental tests that determine the minimum inhibitory concentration (MIC) of hygB, it is believed that the residues with high binding energies generate bacterial resistance to the drug when mutated. By the same reasoning, since residues with low interaction energies do not effectively influence the affinity of hygB for its binding site, there would be no loss of effectiveness if they were replaced.
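As a purely numerical illustration of the convergence argument (the per-nucleotide energies, radii and screening model below are invented, not the MFCC results of the dissertation), one can accumulate individual interaction energies within a growing pocket radius and observe that stronger dielectric screening makes the running total stabilize at a smaller radius:

```python
import numpy as np

# Toy MFCC-style accumulation: each enclosed nucleotide contributes an
# individual interaction energy E_i (invented values, kcal/mol) at
# distance r_i from the hygB centroid. A dielectric constant eps is
# modeled here as a simple uniform screening factor.
r = np.array([4.0, 6.0, 8.0, 12.0, 18.0, 25.0, 30.0])        # radii (Å)
E = np.array([-22.0, -15.0, -8.0, -3.0, -1.0, -0.2, -0.05])  # energies

def total_binding_energy(radius, eps):
    """Sum of interactions of all residues inside `radius`, screened by eps."""
    inside = r <= radius
    return E[inside].sum() / eps

# With stronger screening (larger eps), the energy still unaccounted for
# beyond r = 18 Å becomes negligible, so the total converges earlier.
for eps in (1, 4, 40):
    print(eps, round(total_binding_energy(18.0, eps), 3))
```

The sketch only illustrates why a larger ε lets the pocket radius be truncated sooner; the actual convergence behavior comes from the quantum calculations themselves.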

Relevância:

20.00%

Publicador:

Resumo:

This paper presents a new ANFIS-based multi-model identification technique for nonlinear systems. The structure used is the Takagi-Sugeno fuzzy system, whose consequents are local linear models that represent the system at different operating points and whose antecedents are membership functions adjusted during the learning phase of the neuro-fuzzy ANFIS technique. The models representing the system at the different operating points can be found with linearization techniques such as the Least Squares method, which is robust against noise and simple to apply. The fuzzy system is responsible for informing, through the membership functions, the proportion of each model that should be used. The membership functions can be adjusted by ANFIS using neural network algorithms, such as error backpropagation, so that the models found for each region are correctly interpolated, defining the contribution of each model for the possible system inputs. In multi-model approaches, this definition of each model's contribution is known as the metric; since this paper is based on ANFIS, it is called the ANFIS metric here. The ANFIS metric is thus used to interpolate the various models composing the system to be identified. Unlike traditional ANFIS, the proposed technique represents the system in several well-defined regions by unaltered local models, whose weighted activation follows the membership functions. The regions for applying the Least Squares method are selected manually, from graphical analysis of the system behavior or from the physical characteristics of the plant. This selection serves as a basis to initialize the linear model identification and to generate the initial configuration of the membership functions. The experiments were conducted in a teaching tank with multiple sections, designed and built to highlight the characteristics of the technique.
The results obtained with this tank illustrate the performance achieved by the technique in the identification task, using several ANFIS configurations, comparing the developed technique with various simple-metric models and with the NNARX technique, also adapted for identification.
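The core interpolation step of the technique - local linear models blended by normalized membership functions - can be sketched as follows (the two first-order models and the Gaussian membership parameters are invented; the actual work trains the memberships with ANFIS):

```python
import numpy as np

# Takagi-Sugeno interpolation sketch: two local linear models
# y = a*x + b, identified for different operating regions, are blended
# by normalized Gaussian membership functions (firing strengths).

def gauss(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

models = [(2.0, 0.0), (0.5, 4.5)]      # (slope, intercept) per region
centers, sigma = [1.0, 5.0], 1.5       # membership parameters (assumed)

def ts_output(x):
    w = np.array([gauss(x, c, sigma) for c in centers])
    w = w / w.sum()                    # normalized firing strengths
    y_local = np.array([a * x + b for a, b in models])
    return float(w @ y_local)          # weighted interpolation of models

print(ts_output(1.0))   # dominated by the first local model
print(ts_output(5.0))   # dominated by the second local model
print(ts_output(3.0))   # smooth blend between the two regions
```

ANFIS learning would adjust `centers` and `sigma` (and, in the traditional variant, the consequent parameters) so that the interpolation matches the identification data.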

Relevância:

20.00%

Publicador:

Resumo:

Most state estimation algorithms based on the classical model are only adequate for transmission networks. Few algorithms were developed specifically for distribution systems, probably because of the small amount of data available in real time. Most overhead feeders have only current and voltage measurements at the medium voltage bus-bar of the substation. Classical algorithms are therefore difficult to implement, even considering off-line acquired data as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to load transfer maneuvers, is changing network planning policy. Equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and load pseudo-measurements built from typical power factors and utilization (demand) factors of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, going from the substation to the terminal nodes. For each section, a measurement model is built, resulting in a nonlinear overdetermined equation set whose solution is achieved by the Gaussian normal equation. The estimated variables of one section are used as pseudo-measurements for the next section.
In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section - or real-time measurements, if they exist - besides pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming that the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort. Moreover, the algorithm is very efficient with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms could be adapted to provide state estimation for medium voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is performed at the beginning of the feeder, and such a measurement is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium voltage bus-bar was also developed.
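The per-section solution via the Gaussian normal equation can be sketched for a linearized measurement model z = Hx + e (the matrix H, the weights and the state values below are placeholders, not an actual feeder section):

```python
import numpy as np

# Weighted least-squares solution of an overdetermined measurement
# model by the Gaussian normal equation x = (H' W H)^{-1} H' W z.
# Three measurements constrain two state variables (invented example).
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # measurement Jacobian (assumed)
W = np.diag([1.0, 1.0, 0.5])            # measurement weights (assumed)
x_true = np.array([0.98, 0.02])         # e.g. voltage (p.u.) and angle (rad)
z = H @ x_true                          # noise-free measurement vector

# Solve the normal equations without forming an explicit inverse.
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(np.allclose(x_hat, x_true))       # -> True for noise-free z
```

In the actual method the model is nonlinear, so this solve would be the inner step of an iterative (Gauss-Newton style) procedure, and the estimated x feeds the next section as pseudo-measurements.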

Relevância:

20.00%

Publicador:

Resumo:

The usual programs for load flow calculation were in general developed to simulate electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used in these formulations were mostly based on the characteristics of transmission systems, which were the main concern of engineers and researchers. The physical characteristics of transmission systems are quite different from those of distribution systems. In transmission systems, the voltage levels are high and the lines are generally very long, so the capacitive and inductive effects have considerable influence on the quantities of interest and must be taken into account. Also in transmission systems, the loads have a macro nature - cities, neighborhoods, or large industries. These loads are generally practically balanced, which reduces the need for three-phase load flow methods. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison to transmission, which nearly annuls the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries feed small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore assumes great importance. Besides, equipment like voltage regulators, which use simultaneously the concepts of phase and line voltage in their operation, needs a three-phase methodology to allow the simulation of its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in this work to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm has been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases; the earth effect is considered through the Carson reduction. It is important to point out that, although the loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows various configurations to be simulated according to their real operation. Finally, the possibility of representing switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow to support further optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another - in general, voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to some feeders are presented, giving insight into their performance and accuracy.
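The Power Summation Algorithm cited as the basis of the method follows a backward-forward sweep. A single-phase sketch on an invented 3-bus radial feeder (losses neglected in the backward sweep; the work itself is three-phase with magnetic coupling and Carson reduction) looks like this:

```python
import numpy as np

# Backward-forward sweep on a toy radial feeder:
#   bus 0 (slack/substation) -- branch 0 -- bus 1 -- branch 1 -- bus 2
# All impedances and loads are illustrative per-unit values.
z = np.array([0.01 + 0.02j, 0.02 + 0.04j])        # branch impedances
s_load = np.array([0.0, 0.5 + 0.2j, 0.3 + 0.1j])  # complex bus loads
v = np.ones(3, dtype=complex)                     # flat start, slack = 1 p.u.

for _ in range(20):
    # backward sweep: accumulate downstream load power per branch
    # (branch losses are neglected in this simplified sketch)
    s_branch = np.array([s_load[1] + s_load[2], s_load[2]])
    # forward sweep: update voltages from the substation outward
    for k in range(2):
        i_branch = np.conj(s_branch[k] / v[k + 1])
        v[k + 1] = v[k] - z[k] * i_branch

print(np.abs(v))   # voltage magnitudes decrease toward the feeder end
```

The full algorithm also updates branch losses in the backward sweep and, in this work, carries 3x3 phase impedance matrices instead of scalar impedances.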

Relevância:

20.00%

Publicador:

Resumo:

This work proposes a computational environment for teaching control systems, named ModSym. The software implements a graphical interface for modeling linear physical systems and shows, step by step, the processing needed to obtain mathematical models of these systems. A physical system can be represented in the software in three different ways: by a graphical diagram built from elements of the electrical, translational mechanical, rotational mechanical and hydraulic domains; by bond graphs; or by signal flow diagrams. Once the system is represented, ModSym can compute its transfer functions in symbolic form using Mason's rule. The software also computes transfer functions in numerical form, as well as parametric sensitivity functions. The work also proposes an algorithm for obtaining the signal flow diagram of a physical system from its bond graph. This algorithm, together with the system analysis methodology known as the Network Method, allowed Mason's rule to be used in the calculation of transfer functions of the systems modeled in the software.
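For a signal flow graph with a single forward path and a single loop, Mason's rule reduces to T = P1 / (1 - L1). The numeric sketch below applies it to a first-order RC low-pass filter (component values are illustrative; ModSym itself performs the general symbolic computation):

```python
# Mason's gain rule, minimal case: one forward path, one loop, no
# non-touching loops. For an RC low-pass filter drawn as a flow graph
# with integrator gain 1/(s*R*C), the forward path is P1 = 1/(s*R*C)
# and the feedback loop is L1 = -1/(s*R*C), giving
# T(s) = P1 / (1 - L1) = 1 / (1 + s*R*C).

R, C = 1e3, 1e-6               # 1 kOhm, 1 uF -> time constant 1 ms

def transfer(s):
    p1 = 1.0 / (s * R * C)     # gain of the single forward path
    l1 = -1.0 / (s * R * C)    # gain of the single loop
    return p1 / (1.0 - l1)     # Mason's rule

s = 1j * 1000.0                # evaluate at omega = 1000 rad/s
print(abs(transfer(s)))        # ~= 1/sqrt(2): the -3 dB cutoff point
```

The general rule sums over all forward paths and divides by the graph determinant built from all loop products, which is what a symbolic implementation like ModSym's must enumerate.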

Relevância:

20.00%

Publicador:

Resumo:

This work deals with an on-line control strategy based on the Robust Model Predictive Control (RMPC) technique applied to a real coupled-tanks system. The process consists of two coupled tanks and a pump that feeds liquid to the system. The control objective (a regulator problem) is to keep the tank levels at the chosen operation point even in the presence of disturbances. RMPC is a technique that allows explicit incorporation of plant uncertainty into the problem formulation. The goal is to design, at each time step, a state-feedback control law that minimizes a worst-case infinite-horizon objective function, subject to constraints on the control input. The existence of a feedback control law satisfying the input constraints is reduced to a convex optimization problem over linear matrix inequalities (LMIs). It is shown in this work that, for plant uncertainty described by a polytope, the feasible receding-horizon state-feedback control design is robustly stabilizing. The software implementation of the RMPC was made in Scilab, and its communication with the coupled-tanks system is done through the OLE for Process Control (OPC) industrial protocol.
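The regulator objective can be illustrated with a toy discrete-time simulation (the matrices A and B and the fixed gain K below are invented stand-ins; the actual RMPC recomputes the gain at every step from the LMI optimization and enforces the input constraints):

```python
import numpy as np

# State-feedback regulation of a linearized coupled-tanks model:
# x holds the two level deviations from the operation point, and the
# control u = K x drives them back to zero. Values are illustrative.
A = np.array([[0.95, 0.00],
              [0.04, 0.96]])       # discrete-time tank dynamics (assumed)
B = np.array([[0.10],
              [0.00]])             # pump acts on the upper tank (assumed)
K = np.array([[-2.0, -1.0]])       # a stabilizing gain (assumed fixed here)

x = np.array([[0.5], [0.3]])       # initial deviation from the setpoint
for _ in range(200):
    u = K @ x                      # state feedback (unconstrained sketch)
    x = A @ x + B @ u
print(np.linalg.norm(x))           # deviation driven near zero
```

In the RMPC scheme, K would instead be the minimizer of the worst-case objective over the uncertainty polytope, obtained by solving the LMI problem at each sampling instant.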

Relevância:

20.00%

Publicador:

Resumo:

A neuro-fuzzy system combines two or more control techniques in a single structure. The main characteristic of this structure is joining good aspects of each technique to build a hybrid controller, which can be based on fuzzy systems, artificial neural networks, genetic algorithms or reinforcement learning techniques. Neuro-fuzzy systems have been shown to be a promising technique in industrial applications. Two neuro-fuzzy models were developed, an ANFIS model and a NEFCON model. Both models were applied to control a ball and beam system, and their results and the required changes are discussed. The choice of controller inputs and the learning algorithms used, among other information about the hybrid systems, are also commented on. The results show the structural changes after learning and the conditions for using each controller based on its characteristics.

Relevância:

20.00%

Publicador:

Resumo:

Conventional methods for solving the nonlinear blind source separation problem generally use a series of restrictions to obtain the solution, often leading to imperfect separation of the original sources and high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve linear and, later, nonlinear blind source separation problems. In the linear model, genetic algorithms and Rényi's negentropy are applied as a measure of independence to find a separation matrix from linear mixtures of waveform, audio and image signals. A comparison with two widespread Independent Component Analysis algorithms from the literature is presented. Subsequently, the same measure of independence is used as the cost function in a genetic algorithm to recover source signals mixed by nonlinear functions from a radial basis function artificial neural network. Genetic algorithms are powerful tools for global search and are therefore well suited for blind source separation problems. Tests and analyses are carried out through computer simulations.
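A minimal sketch of the GA-based search for the linear two-source case follows (a kurtosis contrast stands in here for the Rényi negentropy used in the work, and all GA settings, mixing values and population sizes are assumed):

```python
import numpy as np

# Two independent uniform sources are mixed linearly. After whitening,
# the unmixing matrix reduces to a rotation, so a genetic algorithm
# searches the rotation angle that maximizes non-Gaussianity.
rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(2, 5000))          # independent sources
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s      # linear mixture

# whitening: decorrelate and normalize the mixtures
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

def fitness(theta):
    """Independence contrast: sum of |excess kurtosis| of rotated outputs."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    y = R @ z
    kurt = (y ** 4).mean(axis=1) - 3.0
    return np.abs(kurt).sum()

# a minimal genetic algorithm over the rotation angle
pop = rng.uniform(0, np.pi / 2, size=20)
for _ in range(40):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]       # keep the fittest half
    children = parents + rng.normal(0, 0.05, 10)  # mutated offspring
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(t) for t in pop])]
print(fitness(best))   # near the maximum contrast for uniform sources
```

The work's actual cost function replaces the kurtosis contrast with Rényi's negentropy, and the nonlinear stage replaces the rotation with the parameters of a radial basis function network.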

Relevância:

20.00%

Publicador:

Resumo:

The competitiveness of trade, generated by the greater availability of products with lower quality and cost, has promoted a new reality of industrial production with small tolerances. Deviations in production cannot be ruled out; uncertainties can statistically occur. Consumers worldwide, including Brazilian consumers, are supported by consumer protection codes in lawsuits against poor product quality. An automobile is composed of various systems and thousands of constituent parts, increasing the likelihood of failure. The dynamic and safety systems are critical with respect to the consequences of possible failures. The investigation of failures gives us the possibility of learning and contributing to various improvements. Our main purpose in this work is to develop a systematic, specific methodology for investigating the root cause of a failure that occurred in an axle end of the front suspension of an automobile, and to perform comparative data analyses between the fractured part and the design information. Our research was based on a failure generated in an automotive suspension system involved in a judicial case, resulting in property and personal damages. In investigations concerning the analysis of mechanical failures, knowledge of materials engineering plays a crucial role, since it enables the application of materials characterization techniques, relating the technical attributes required of a part to the structure of its manufacturing material, thus providing a greater scientific contribution to the work. The specific methodology developed follows its own flowchart. In the early phase, the data in the records and information on those involved were collected.
The following laboratory analyses were performed: macrography of the fracture; micrography of the initial and final fracture with SEM (Scanning Electron Microscopy); phase analysis with optical microscopy; Brinell hardness and Vickers microhardness analyses; quantitative and qualitative chemical analysis using X-ray fluorescence and optical spectroscopy for carbon analysis; and a qualitative study of the state of stress. Field data were also collected. In the data analyses, the values obtained from the fractured and stock parts were compared with the design values. After the investigation, it was concluded that: the developed methodology systematized the investigation and enabled cross-checking of data, thus minimizing the probability of diagnostic error; the morphology of the fracture indicates failure by the fatigue mechanism at a geometrically propitious location, a stress concentrator; the part was subjected to low stresses, as indicated by the sectional area of the final fracture; the manufacturing material of the fractured part has low ductility; the component fractured earlier than recommended by the manufacturer; the percentages of C, Si, Mn and Cr in the fractured part differ from the design values; the hardness value of the upper limit of the fractured part is higher than the design value; and there is no manufacturing uniformity between the stock and fractured parts. The work will contribute to optimizing the guidance of actions in mechanical engineering judicial expertise.

Relevância:

20.00%

Publicador:

Resumo:

Composite materials can be defined as materials formed from two or more constituents with different compositions, structures and properties, separated by an interface. The main objective in producing composites is to combine different materials into a single device with properties superior to those of the individual components. The present study used a composite consisting of plaster, cement, EPS, tire rubber, PET and water to build solar prototypes, in an attempt to reduce the manufacturing cost of such equipment. Two box-type solar cookers, a cooler to be cooled by solar energy, a solar dryer and a concentrating solar cooker were built. For these prototypes, the construction and assembly processes, the determination of thermal and mechanical properties, and the measured performance of the solar systems are discussed. The proportions of the constituents of the composite materials were also determined according to the specific performance required of each prototype. The composite proved feasible for the manufacture of such equipment, with low cost and easy manufacturing and assembly processes.

Relevância:

20.00%

Publicador:

Resumo:

With the current growth in the consumption of industrialized products and the resulting increase in garbage production, adequate waste disposal has become one of the greatest challenges of modern society. The use of industrial solid residues as fillers in composite materials is an idea that emerges from the search for alternatives for reusing these residues while, at the same time, developing materials with superior properties. In this work, the influence of the addition of sand, diatomite, and industrial residues of polyester and EVA (ethylene vinyl acetate) on the mechanical properties of polymer matrix composites was studied. The main objective was to evaluate the mechanical properties of the materials with recycled residue fillers and compare them to those of the pure polyester resin. Composite specimens were fabricated and tested for flexural properties and Charpy impact resistance. After the mechanical tests, the fracture surfaces of the specimens were analyzed by scanning electron microscopy (SEM). The results indicate that some of the filled composites presented greater Young's modulus than the pure resin, in particular the composites made with sand and diatomite, where the increase in modulus was about 168%. The composites with polyester and EVA presented Young's modulus lower than that of the resin. Both strength and maximum strain were reduced when fillers were added. The impact resistance was reduced in all filled composites compared to the pure resin, with the exception of the composites with EVA, where an increase of about 6% was observed. Based on the mechanical tests, microscopy analyses and the compatibility of the fillers with the polyester resin, the use of industrial solid residues in composites may be viable, considering that for each type of filler there will be a specific application.

Relevância:

20.00%

Publicador:

Resumo:

Conselho Nacional de Desenvolvimento Científico e Tecnológico

Relevância:

20.00%

Publicador:

Resumo:

In this work we elaborate a spline-based method for the solution of initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We implemented an efficient algorithm that uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined through a calculation of the least stable mode using a modified power method. Several variants of the method were compared by simulation. For general linear problems with a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
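The least-stable-mode computation mentioned above can be sketched with a shifted power iteration that uses only matrix-vector products (the system matrix and the shift below are illustrative, and the shift is assumed large enough to make the rightmost eigenvalue dominant in magnitude):

```python
import numpy as np

# For a stable linear system x' = A x, the least stable (slowest-
# decaying) mode is the eigenvalue with the largest real part. Shifting
# the spectrum by sigma makes that eigenvalue dominant in magnitude, so
# plain power iteration finds it using only matrix-vector operations.
A = np.array([[-1.0,  1.0],
              [ 0.0, -5.0]])      # eigenvalues -1 and -5 (illustrative)
sigma = 6.0
M = A + sigma * np.eye(2)         # shifted spectrum: 5 and 1

v = np.array([1.0, 1.0])
for _ in range(100):
    w = M @ v
    v = w / np.linalg.norm(w)     # normalized power iteration step

lam = v @ (A @ v)                 # Rayleigh quotient: least stable mode
print(lam)                        # close to -1
settling_time = 4.0 / abs(lam)    # common 2% settling-time estimate
print(settling_time)              # close to 4 time units
```

The settling-time estimate then fixes the working interval over which the spline solution is computed.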