979 results for model base


Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

This study was conducted to evaluate the performance of broiler breeder hens submitted to different feeding programs, established by applying models to predict energy requirements after the peak of lay. The experiment was carried out in the poultry sector of UNESP, Jaboticabal Campus, and lasted 84 days (three 28-day periods). A total of 740 Hubbard Hy-Yield broiler breeder hens and 80 Petterson males, 55 weeks of age, were used. The design was completely randomized, with four treatments and five replicates of 37 birds per replicate (pen), in a 4×3 factorial arrangement (four treatments × three periods). The feeding programs evaluated were: T1 - feed supplied according to the strain standard (428 kcal/bird/day from 55 to 66 weeks of age); T2 - weekly energy reduction (2 kcal of ME/bird each week); T3 - feed supplied according to the ME requirement model of UNESP (2000); and T4 - feed supplied according to the NRC (1994) model. The feeding program with weekly energy reduction was adequate to maintain the productive and reproductive performance of the birds, indicating that a reduction of 2 kcal/bird/day per week is possible in the feeding of broiler breeders after 55 weeks of age. The UNESP and NRC models gave higher estimates of energy requirements than the standard model, probably because the weight gain of the hens was above that recommended for the strain, leading to higher energy requirements for maintenance.

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Most state estimation algorithms based on the classical model are only adequate for use in transmission networks. Few algorithms have been developed specifically for distribution systems, probably because of the small amount of data available in real time. Most overhead feeders have only current and voltage measurements at the medium-voltage bus-bar of the substation. Classical algorithms are therefore difficult to implement, even when off-line acquired data are considered as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to the implementation of load transfer maneuvers, is changing network planning policy. Equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and load pseudo-measurements built from typical power factors and utilization (demand) factors of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, from the substation to the terminal nodes. For each section, a measurement model is built, resulting in a nonlinear overdetermined set of equations whose solution is obtained through the Gaussian normal equation. The estimated variables of one section are used as pseudo-measurements for the next.
In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, where they exist), plus pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming that the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort. Moreover, the algorithm is very efficient with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms can be adapted to provide state estimation of medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results supporting its validation. Since in most cases no power measurement is taken at the beginning of the feeder, yet this is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium-voltage bus-bar was also developed.
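The per-section estimation step described above, a nonlinear overdetermined measurement model solved through the normal equations, can be sketched generically. The solver below is a minimal Gauss-Newton iteration with a toy measurement function; the names and the example model are illustrative, not the thesis implementation:

```python
import numpy as np

def gauss_newton(h, z, x0, tol=1e-10, max_iter=50):
    """Least-squares solution of the overdetermined system z ~ h(x)
    via the Gaussian normal equations (H^T H) dx = H^T (z - h(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = z - h(x)
        # numerical Jacobian of the measurement function
        eps = 1e-7
        H = np.empty((len(z), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += eps
            H[:, j] = (h(xp) - h(x)) / eps
        dx = np.linalg.solve(H.T @ H, H.T @ r)  # normal equations
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# toy "section": three redundant measurements, two state variables
h = lambda v: np.array([v[0] + v[1], v[0] * v[1], v[0] - v[1]])
x_hat = gauss_newton(h, np.array([5.0, 6.0, -1.0]), [1.0, 1.0])
```

In the method described, the estimated state of one section would then enter the measurement set of the next section as pseudo-measurements.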

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The present work describes the use of a mathematical tool to solve problems arising from control theory, including identification, analysis of the phase portrait and stability, and the temporal evolution of the currents of the plant, an induction motor. System identification is an area of mathematical modeling whose objective is the study of techniques that can determine a dynamic model representing a real system. The tool used in the identification and analysis of the nonlinear dynamical system is the Radial Basis Function (RBF) network. The process, or plant, has an unknown mathematical model, but belongs to a particular class containing an internal dynamics that can be modeled. Contributions to the asymptotic stability analysis of the RBF network are presented. Identification using radial basis functions is demonstrated through computer simulations on a real data set obtained from the plant.
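As a minimal illustration of the identification idea (using a synthetic data set, not the induction motor records of the work), a Gaussian RBF model can be fitted to input-output samples by linear least squares on the basis activations:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Matrix of Gaussian radial basis activations exp(-|x_i - c_j|^2 / 2w^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# synthetic input-output data standing in for the real plant measurements
X = np.linspace(-1.0, 1.0, 50)[:, None]
y = np.sin(3.0 * X[:, 0])

centers = np.linspace(-1.0, 1.0, 9)[:, None]   # basis centers over the input range
Phi = rbf_design(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # output-layer weights
y_hat = Phi @ w                                # identified model output
```

The linear dependence of the network output on the weights is what makes RBF identification a simple least-squares problem once centers and widths are fixed.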

Relevance:

30.00%

Publisher:

Abstract:

An alternative nonlinear technique for decoupling and control is presented. The technique is based on an RBF (Radial Basis Function) neural network and is applied to the synchronous generator model. The synchronous generator is a coupled system: a change in one input variable of the system changes more than one output. The RBF network performs the decoupling, separating the control of the following output variables: the load angle and the flux linkage in the field winding. The technique does not require knowledge of the system parameters and, owing to the nature of radial basis functions, it remains stable under parametric uncertainties and disturbances, and is simpler when applied to control. The RBF decoupler is designed in this work to decouple a nonlinear MIMO system with two inputs and two outputs. The weights between the hidden and output layers are modified online, using an adaptive law in real time. The adaptive law is developed by Lyapunov's method. A decoupling adaptive controller uses the errors between the system outputs and the model outputs, together with filtered outputs of the system, to produce the control signals. The RBF network forces each output of the generator to behave like the reference model. When the RBF network adequately approximates the control signals, decoupling of the system is achieved. A mathematical proof and analysis are given. Simulations are presented to show the performance and robustness of the RBF network.
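The online weight adaptation can be sketched in discrete time with a gradient-type law W ← W + γ·e·φ(x)·Δt, which drives the RBF output toward an unknown target mapping. The target function, gains and basis below are illustrative stand-ins, not the generator model or the Lyapunov-derived law of the work:

```python
import numpy as np

centers = np.linspace(-np.pi, np.pi, 11)

def phi(x, width=0.8):
    """Gaussian basis activation vector for a scalar input."""
    return np.exp(-(x - centers) ** 2 / (2.0 * width ** 2))

target = np.sin                 # unknown mapping to be matched online
W = np.zeros_like(centers)      # output weights, adapted in real time
gamma, dt = 2.0, 0.05           # adaptation gain and sampling step
errors = []
for epoch in range(30):
    ep = 0.0
    for x in np.linspace(-np.pi, np.pi, 60):
        e = target(x) - W @ phi(x)            # tracking error
        W = W + gamma * e * phi(x) * dt       # gradient-type adaptive law
        ep += abs(e)
    errors.append(ep / 60)
```

The error sequence shrinks across epochs as the weights converge, mirroring how the adaptive law forces the outputs toward the reference model.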

Relevance:

30.00%

Publisher:

Abstract:

Ceramics with a porous cellular structure, called ceramic foams, have potential use in several applications, such as thermal insulation, catalyst supports and filters. Among the techniques for obtaining porous ceramics, the replication method is an important process. It consists of impregnating a sponge (usually polymeric) with ceramic slurry, followed by a heat treatment in which the organic material decomposes and the ceramic material sinters, resulting in a ceramic structure that is a replica of the impregnated sponge. Knowledge of the mechanical properties of these ceramics is important so that the materials can be used commercially. Gibson and Ashby developed a mathematical model to describe the mechanical behavior of cellular solids. The model was not conceived for ceramics produced by the replica method, because it does not consider the defects arising from this type of processing. In this study, the mechanical behavior of porous alumina ceramics obtained by the replica method was investigated, and modifications to the Gibson and Ashby model were proposed to accommodate this material. The polymer sponge used in the processing was characterized by thermogravimetric analysis and scanning electron microscopy. The materials obtained after sintering were characterized by 4-point bending and compression strength tests, density and porosity measurements, and scanning electron microscopy. From these results, the strength behavior was compared with the Gibson and Ashby model for cellular solids, and a correction of the model was proposed through a factor related to the degree of strut integrity, which accounts for the fissures present in the structure of these materials as well as the geometry of defects within the struts.
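For reference, the Gibson and Ashby scaling for the crushing strength of a brittle cellular solid is σ*/σs ≈ C·(ρ*/ρs)^(3/2), with C ≈ 0.2. A correction of the kind proposed here can be written as a multiplicative strut-integrity factor; the functional form and names below are a hypothetical sketch, not the fitted model of the study:

```python
def foam_crush_strength(sigma_s, rel_density, C=0.2, integrity=1.0):
    """Gibson-Ashby crushing strength of a brittle foam.
    sigma_s: strut material strength; rel_density: rho*/rho_s.
    `integrity` in (0, 1] is a hypothetical strut-integrity factor
    penalizing fissured or hollow struts (1.0 recovers the original model)."""
    return integrity * C * sigma_s * rel_density ** 1.5

ideal = foam_crush_strength(300.0, 0.10)                   # defect-free struts
flawed = foam_crush_strength(300.0, 0.10, integrity=0.4)   # fissured struts
```

The replica method leaves hollow, often cracked struts, so measured strengths fall below the ideal curve; a factor of this form shifts the prediction down without changing the density exponent.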

Relevance:

30.00%

Publisher:

Abstract:

The determination of the rheology of drilling fluids is of fundamental importance for selecting the best composition and the best treatment to be applied to these fluids. This work presents a study of the rheological behavior of some additives used as viscosifiers in water-based drilling fluids. The evaluated additives were carboxymethylcellulose (CMC), xanthan gum (XG) and bentonite. The main objective was to rheologically characterize suspensions composed of these additives by applying mathematical flow models, in order to determine the flow equation that best represents each system, as well as the model parameters. The mathematical models applied were the Bingham model, the Ostwald-de Waele model and the Herschel-Bulkley model. A preliminary study of the hydration time of each additive was carried out to evaluate the effect of polymer and clay hydration on the rheological behavior of the fluid. The rheological characterization was performed through typical rheology experiments, using a coaxial-cylinder viscometer, from which the flow curves and the thixotropic magnitude of each fluid were obtained. For each additive, the rheological behavior as a function of temperature was also evaluated, as well as fluid stability as a function of the concentration and kind of additive used. After these analyses, mixtures of polymer and clay were prepared to evaluate the rheological changes brought about by incorporating the polymer into the water + bentonite system. The results showed that the Ostwald-de Waele model provided the best fit for fluids prepared with CMC, while for fluids with xanthan gum and bentonite the best fit was given by the Herschel-Bulkley model.
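The three flow models are, in shear-stress form: τ = τ0 + μp·γ̇ (Bingham), τ = K·γ̇ⁿ (Ostwald-de Waele) and τ = τ0 + K·γ̇ⁿ (Herschel-Bulkley). A minimal sketch, using synthetic data rather than the thesis measurements, fits the power-law parameters by linear regression in log-log space:

```python
import numpy as np

def bingham(g, tau0, mu_p):
    return tau0 + mu_p * g                 # Bingham plastic

def power_law(g, K, n):
    return K * g ** n                      # Ostwald-de Waele

def herschel_bulkley(g, tau0, K, n):
    return tau0 + K * g ** n               # Herschel-Bulkley

# synthetic pseudoplastic flow curve: tau = 0.8 * gdot^0.6
gdot = np.array([10.0, 20.0, 50.0, 100.0, 200.0])   # shear rate, 1/s
tau = 0.8 * gdot ** 0.6                             # shear stress, Pa

# log(tau) = log(K) + n*log(gdot) is linear, so polyfit recovers K and n
n, logK = np.polyfit(np.log(gdot), np.log(tau), 1)
K = np.exp(logK)
```

For models with a yield stress (τ0 ≠ 0) the log-linearization no longer applies and a nonlinear fit is needed, which is why the Herschel-Bulkley fits in such studies are usually done by iterative regression.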

Relevance:

30.00%

Publisher:

Abstract:

Currently, research has been directed at new materials and methodologies that aim to minimize the environmental problems caused by discharges of industrial effluents contaminated with heavy metals. Adsorption has been used as an effective, economically viable and potentially important alternative technology for removing metals, especially when natural adsorbents such as certain types of clay are used. Chitosan, a polymer of natural origin present in the shells of crustaceans and insects, has also been used for this purpose. Among the clays, vermiculite is distinguished by its good ion-exchange capacity, and in its expanded form its properties are enhanced by a large increase in specific surface. This study aimed to evaluate the functionality of the hybrid material obtained by modifying expanded vermiculite with chitosan in the removal of lead(II) ions from aqueous solution. The material was characterized by infrared (IR) spectroscopy to evaluate the efficiency of the modification of the matrix (vermiculite) with the organic material (chitosan). The thermal stability of the material and the clay/polymer ratio were evaluated by thermogravimetry. The surface of the material was examined by scanning electron microscopy (SEM) and BET analysis. The BET analysis revealed a significant increase in the surface area of the vermiculite after interaction with chitosan, for which a value of 21.6156 m²/g was obtained. Adsorption tests were performed as a function of particle size, concentration and time. The results show that the removal capacity of the vermiculite averaged 88.4% for lead at concentrations ranging from 20 to 200 mg/L and 64.2% at a concentration of 1000 mg/L. Regarding particle size, adsorption increased with decreasing particle size. As a function of contact time, adsorption equilibrium was observed at 60 minutes.
The isotherm data were fitted to the Freundlich equation. The kinetic study showed that the pseudo-second-order model best describes the adsorption, with the following values found: K2 = 0.024 g·mg⁻¹·min⁻¹ and Qmax = 25.75 mg/g, very close to the calculated Qe = 26.31 mg/g. From these results it can be concluded that the material can be used in wastewater treatment systems as a metal-ion adsorbent, owing to its high adsorption capacity.
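The pseudo-second-order model integrates to q(t) = k₂·qe²·t / (1 + k₂·qe·t). Plugging in the constants reported above (k₂ = 0.024 g·mg⁻¹·min⁻¹, Qe = 26.31 mg/g) shows that the model indeed predicts near-equilibrium uptake at the observed 60 minutes:

```python
def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q(t) (mg/g) for the integrated pseudo-second-order model.
    t in minutes, qe in mg/g, k2 in g/(mg*min)."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

# uptake at the 60-minute equilibrium time, using the reported constants
q60 = pseudo_second_order(60.0, qe=26.31, k2=0.024)
```

At t = 60 min the predicted uptake is already within a few percent of qe, consistent with the equilibrium time observed experimentally.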

Relevance:

30.00%

Publisher:

Abstract:

Polyurethanes are very versatile macromolecular materials that can be used in the form of powders, adhesives and elastomers. As a consequence, they constitute an important subject for research as well as outstanding materials used in several manufacturing processes. In addition to the search for new polyurethanes, kinetics control during preparation is a very important topic, mainly when the polyurethane is obtained via bulk polymerization. The work in this thesis was directed toward this subject, particularly the synthesis of polyurethanes based on castor oil and isophorone diisocyanate. As a first step, castor oil was characterized using the following analytical methods: iodine index, saponification index, refraction index, humidity content and infrared absorption spectroscopy (FTIR). As a second step, test specimens of these polyurethanes were obtained via bulk polymerization and submitted to swelling experiments with different solvents; from these experiments, the Hildebrand parameter of the material was determined. Finally, bulk polymerization was carried out in a differential scanning calorimetry (DSC) instrument, at different heating rates, under two conditions: without catalyst and with dibutyltin dilaurate (DBTDL) as catalyst. The DSC curves were adjusted to a kinetic model using the isoconversional method, indicating the autocatalytic effect characteristic of this class of polymerization reactions.
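An autocatalytic rate law of the Sestak-Berggren type, dα/dt = k·α^m·(1-α)^n, produces the sigmoidal conversion curve typical of such cures. A minimal isothermal sketch (parameters are illustrative, not fitted to the thesis data):

```python
def autocatalytic_rate(alpha, k, m=0.5, n=1.5):
    """Sestak-Berggren type rate d(alpha)/dt = k * alpha^m * (1-alpha)^n,
    a common autocatalytic form for cure reactions (illustrative parameters)."""
    return k * alpha ** m * (1.0 - alpha) ** n

# isothermal conversion curve by explicit Euler integration
alpha, dt = 0.01, 0.1          # a small seed conversion starts the autocatalysis
history = [alpha]
for _ in range(2000):
    alpha = min(alpha + dt * autocatalytic_rate(alpha, k=0.1), 1.0)
    history.append(alpha)
```

The α^m term makes the rate accelerate as product forms, which is the signature the isoconversional analysis of the DSC curves detects.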

Relevance:

30.00%

Publisher:

Abstract:

Seeking greater valorization of cheese whey, a process was developed for the hydrogenation of lactose to produce lactitol, a polyol with high added value, using Ni/activated carbon catalysts (15% and 20% nickel), the nitride Mo2N, the bimetallic carbide Ni-Mo/activated carbon and the carbide Mo2C. After synthesis, the prepared catalysts were analyzed by SEM, XRD, laser granulometry and B.E.T. The reactor used in the catalytic hydrogenation of lactose was of the slurry-bed type, with pressure (68 atm), temperature (120 °C) and stirring speed (500 rpm) kept constant during the experiments. The system operated in batch mode for the solid and liquid phases and semi-continuously for the gas. Besides the nature of the catalyst, the influence of the pH of the reaction medium was studied for the Mo2C carbide, and the inhibiting character of proteins and chloride ions on the activity of the Ni (20%)/activated carbon and bimetallic Ni-Mo/activated carbon catalysts was evaluated. The reduction of protein levels was performed by coagulation with chitosan, and the removal of chloride ions was performed with ion-exchange resins. In the processes of protein and chloride-ion removal, the maximum percentages extracted were about 74% and 79%, respectively. The micrographs of the Mo2C and Mo2N powders showed homogeneous clusters, whereas the catalysts supported on activated carbon showed a microporous structure impregnated with small particles, indicating the presence of the metal. The results showed a high conversion of lactose to lactitol: 90% for the Ni (20%)/activated carbon catalyst at pH 6, and 46% for the Mo2C carbide at pH 8 (after addition of NH4OH), using commercial lactose. The evolution of the constituents present in the reaction medium was monitored by liquid chromatography.
A heterogeneous kinetic model of the Langmuir-Hinshelwood type was developed; the estimated constants showed that the carbide- and nitride-based catalysts promoted the adsorption, desorption and production of lactitol at characteristic rates.
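For reference, a Langmuir-Hinshelwood surface rate for two species competing for the same adsorption sites has the general form below (lactose L and hydrogen H, hydrogen treated molecularly for simplicity; this is an illustrative form, not the fitted thesis model):

```python
def lh_rate(k, K_L, K_H, C_L, C_H):
    """Langmuir-Hinshelwood rate for two co-adsorbed species on shared sites:
    r = k * (K_L*C_L) * (K_H*C_H) / (1 + K_L*C_L + K_H*C_H)^2.
    k: surface rate constant; K_i: adsorption constants; C_i: concentrations."""
    theta = 1.0 + K_L * C_L + K_H * C_H        # site-coverage denominator
    return k * (K_L * C_L) * (K_H * C_H) / theta ** 2

# illustrative numbers only
r = lh_rate(k=1.0, K_L=0.5, K_H=0.2, C_L=2.0, C_H=3.0)
```

The squared denominator captures site competition: strongly adsorbing inhibitors (such as proteins or chloride ions) raise the denominator and depress the rate, consistent with the inhibition effects evaluated in the work.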

Relevance:

30.00%

Publisher:

Abstract:

This research presents a methodology for predicting building shadows cast on existing urban roads in high-resolution aerial imagery. Shadow elements can be used in the modeling of contextual information, whose use has become more and more common in complex image analysis processes. The proposed methodology consists of three sequential steps. First, the building roof contours are manually extracted from an intensity image generated by transforming a digital elevation model (DEM) obtained from airborne laser scanning data; similarly, the roadside contours are extracted, now from the radiometric information of the laser scanning data. Second, the roof contour polygons are projected onto the adjacent roads along parallel projection straight lines, whose directions are computed from the solar ephemeris, which depends on the acquisition time of the aerial image. Finally, the parts of the shadow polygons that are free from building perspective obstructions are determined, giving rise to new shadow polygons. The results obtained in the experimental evaluation showed that the method works properly, since it allowed the prediction of shadows in high-resolution imagery with high accuracy and reliability.
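The geometric core of the second step can be sketched as follows: each roof vertex at height h is shifted horizontally by h/tan(solar elevation), opposite to the sun's horizontal direction. Flat terrain is assumed, and the function name and azimuth convention (clockwise from north) are illustrative assumptions, not the paper's exact formulation:

```python
import math

def shadow_polygon(roof_xy, roof_height, sun_azimuth_deg, sun_elev_deg):
    """Project roof-contour vertices onto the ground along the solar rays.
    Assumes flat terrain at z = 0; azimuth measured clockwise from north."""
    d = roof_height / math.tan(math.radians(sun_elev_deg))
    az = math.radians(sun_azimuth_deg)
    dx = -d * math.sin(az)   # shadow points away from the sun
    dy = -d * math.cos(az)
    return [(x + dx, y + dy) for x, y in roof_xy]

# sun due south at 45 deg elevation: a 10 m roof corner casts its shadow 10 m north
shadow = shadow_polygon([(0.0, 0.0)], 10.0, 180.0, 45.0)
```

In the full method the azimuth and elevation come from the solar ephemeris at the image acquisition time, and the projected polygons are then clipped against the building footprints.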

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work was to estimate the genetic gains of a rubber tree progeny test for dry rubber yield and, based on the largest effective population size and highest genetic gain, to select the best individuals. Thirty half-sib progenies were used, derived from seeds of mixed pollination - outcrossing and selfing - from clonal tests in the state of São Paulo. A randomized complete block design was used, with 30 treatments (progenies), 3 replicates and linear plots of 10 plants, at a spacing of 3x3 m, totaling 900 useful plants. At three years of age, the girth at 50 cm from the ground (PA50) and dry rubber yield (PBS) were evaluated by the Hamaker Morris-Mann (HMM) early yield test. The variables were analyzed by the linear mixed model method, via the REML/BLUP procedure, in progenies with a mixed reproductive system and a selfing rate of 22%. Selecting the 20 best individuals for PBS and PA50 provided genetic gains of 67.96 and 16.48%, respectively, with an inbreeding coefficient of approximately 2.82%. The progeny test yields seeds of better genetic value, great variability and low inbreeding.

Relevance:

30.00%

Publisher:

Abstract:

Java Card technology allows the development and execution of small applications embedded in smart cards. A Java Card application is composed of an external card client and of an application in the card that implements the services available to the client by means of an Application Programming Interface (API). Usually, these applications manipulate and store important information, such as cash and confidential data of their owners. Thus, it is necessary to adopt rigor when developing a smart card application, to improve its quality and trustworthiness. The use of formal methods in the development of these applications is one way to reach these quality requirements. The B method is one of many formal methods for system specification. Development in B starts with the functional specification of the system, continues with the application of optional refinements to the specification and, from the last refinement level, allows code to be generated for some programming language. The B formalism has good tool support, and its application to Java Card is fitting, since the specification and development of APIs is one of the major applications of B. The BSmart method proposed here aims to promote the rigorous development of Java Card applications up to the generation of their code, based on the refinement of a formal specification described in the B notation. This development is supported by the BSmart tool, composed of programs that automate each stage of the method, and by a library of B modules and Java Card classes that model primitive types, essential Java Card API classes and reusable data structures.