849 results for simulated gravitational loading


Relevância: 20.00%

Resumo:

This study investigated suitable loading densities for transporting juvenile matrinxã in a closed system with plastic bags. A 4 h transport was carried out with fish (23.5 ± 0.4 g; 11.6 ± 0.08 cm) fasted for 24 h, at densities of 83 g L⁻¹ (D1), 125 g L⁻¹ (D2), 168 g L⁻¹ (D3) and 206 g L⁻¹ (D4). Fish were sampled before transport (AT), immediately after transport, on arrival (DT), and 24 h later. Water quality was monitored before the fish were caught in the depuration tanks, after transport in the plastic bags, and in the recovery tanks. Water oxygen fell below 4 mg L⁻¹ in D2, D3 and D4; temperature was around 32 °C, pH 6.5-6.78, total ammonia 1.09-1.7 mg L⁻¹, un-ionized ammonia 3.58-9.33 × 10⁻³ mg L⁻¹, and alkalinity 134-165 mg CaCO3 L⁻¹. Plasma cortisol and blood glucose increased after transport at all densities tested, returning to control values 24 h later. Osmolarity did not change immediately after transport, but increased 24 h later, equally at all densities. Plasma chloride decreased on arrival, inversely proportional to loading density. Hematocrit decreased 24 h after the arrival of the fish at all densities tested, but there was no difference in the number of erythrocytes. No mortality occurred up to one week after transport. The matrinxã proved to be a species tolerant of high loading densities in transport bags, besides withstanding low oxygen levels in the water.
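The reported un-ionized ammonia range is consistent with the standard pH/temperature equilibrium calculation. The sketch below uses the Emerson et al. (1975) pKa correlation as an assumption; the study does not state which correlation was used.

```python
def nh3_fraction(ph: float, temp_c: float) -> float:
    """Fraction of total ammonia present as toxic un-ionized NH3,
    using the Emerson et al. (1975) pKa correlation (an assumption:
    the abstract does not state which correlation the authors used)."""
    temp_k = temp_c + 273.15
    pka = 0.09018 + 2729.92 / temp_k
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def nh3_mg_per_l(tan_mg_per_l: float, ph: float, temp_c: float) -> float:
    """Un-ionized ammonia (mg/L) from total ammonia at given pH and temperature."""
    return tan_mg_per_l * nh3_fraction(ph, temp_c)

# Mid-range transport conditions reported in the abstract:
print(round(nh3_mg_per_l(1.4, 6.6, 32.0), 4))  # ~0.005 mg/L, inside 3.58-9.33e-3
```

At pH 6.5-6.78 virtually all ammonia stays ionized, which is why the un-ionized values are three orders of magnitude below the total.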

Relevância: 20.00%

Resumo:

This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events in the execution of software. It arises primarily from high computational performance demands and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suitable for running on parallel architectures. The CSA algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together to refine the solution, each running on a single thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: the execution time; the speedup of the algorithm with respect to the number of processors; and the efficiency of processing-element use with respect to the size of the treated problem. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and its equivalent serial version. Both algorithms were analyzed on 14 benchmark functions; for each function, the CSA is evaluated using 2-24 optimizers. The results obtained are shown and discussed in light of these metrics. The conclusions characterize the CSA as a good parallel algorithm, both in the quality of its solutions and in its parallel scalability and efficiency.
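The coupling between the SA optimizers can be sketched in a few lines. This single-process sketch is an illustration only: the acceptance formula, temperature schedules, and search bounds are assumptions, and the paper's parallel version would run each optimizer on its own core rather than stepping them in a loop.

```python
import math
import random

def csa_minimize(f, dim, n_opt=6, iters=2000, t_gen=1.0, t_ac=1.0, seed=0):
    """Sketch of Coupled Simulated Annealing: n_opt SA optimizers share a
    coupled acceptance term computed from the energies of ALL optimizers.
    Schedules and the coupling form are illustrative assumptions."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_opt)]
    es = [f(x) for x in xs]
    for _ in range(iters):
        e_max = max(es)  # subtracted below for numerical stability
        gamma = sum(math.exp((e - e_max) / t_ac) for e in es)
        for i in range(n_opt):
            probe = [xi + rng.gauss(0.0, t_gen) for xi in xs[i]]
            e_probe = f(probe)
            if e_probe < es[i]:
                xs[i], es[i] = probe, e_probe
            else:
                # coupled acceptance: high-energy optimizers accept uphill
                # moves more readily than low-energy ones
                a = math.exp((es[i] - e_max) / t_ac) / gamma
                if rng.random() < a:
                    xs[i], es[i] = probe, e_probe
        t_gen *= 0.999  # generation-temperature schedule (assumed)
        t_ac *= 0.999   # acceptance-temperature schedule (assumed)
    best = min(range(n_opt), key=lambda i: es[i])
    return xs[best], es[best]

# Sphere benchmark: global minimum 0 at the origin.
x_best, e_best = csa_minimize(lambda x: sum(v * v for v in x), dim=3)
print(e_best)
```

The coupling term `gamma` is the only information the optimizers exchange, which is what keeps the communication cost of a parallel implementation low.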

Relevância: 20.00%

Resumo:

The growing use of composite materials demands a better understanding of their behavior under many loading and service conditions, as well as under the various types of connections involved in structural projects. Among these design conditions, two stand out: the presence of geometric discontinuities in the cross and longitudinal sections of structural elements, and environmental working conditions such as UV radiation, moisture and heat, which reduce the final mechanical response of the material. In this sense, this thesis develops detailed studies (experimental and semi-empirical models) of the effects caused by the presence of a geometric discontinuity, more specifically a central hole in the longitudinal section (with reduced cross section), and of the influence of accelerated environmental aging on the mechanical properties and fracture mechanisms of FGRP composite laminates under uniaxial tensile loads. The morphological behavior and structural degradation of the laminates are examined by macroscopic and microscopic analysis of the affected surfaces, in addition to evaluation by the mass-variation measurement technique (TMVM). Accelerated environmental aging is simulated in an aging chamber. To study the simultaneous influence of aging and geometric discontinuity on the mechanical properties of the laminates, a semi-empirical model, called the IE/FCPM model, is proposed. The stress concentration due to the central hole is analyzed with two failure criteria: the Average-Stress Criterion (ASC) and the Point-Stress Criterion (PSC). Two industrially manufactured polymer composite laminates were studied: the first reinforced only with short E-glass fiber mats (LM), and the second reinforced with E-glass fiber in the form of a bidirectional fabric (LT).
In the configuration of the laminates, anisotropy is crucial to their final mechanical response. Finally, a comparative study of all parameters was performed for a better understanding of the results. As a concluding study, the characteristics of the final fracture of the laminates under all tested conditions were analyzed, at the macroscopic level (scanner) and the microscopic level (optical and scanning electron microscopy). These analyses showed that the degradation process occurs similarly in each composite studied; however, compared with the LT composite (configurations LT 0/90° and LT ±45°), the LM composite proved more susceptible to loss of mechanical properties, both with respect to the central hole and to accelerated environmental aging.
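The ASC and PSC mentioned above have closed forms in the classic Whitney-Nuismer setting. The sketch below uses the infinite isotropic plate with a circular hole (stress concentration factor 3); the thesis laminates are orthotropic, so these expressions and the characteristic distances are a simplified illustration, not the thesis' actual fit.

```python
def psc_ratio(radius: float, d0: float) -> float:
    """Point-Stress Criterion (Whitney-Nuismer): notched/unnotched strength
    ratio for an infinite isotropic plate with a circular hole (K_T = 3).
    Failure occurs when the stress at distance d0 ahead of the hole edge
    reaches the unnotched strength."""
    xi = radius / (radius + d0)
    return 2.0 / (2.0 + xi**2 + 3.0 * xi**4)

def asc_ratio(radius: float, a0: float) -> float:
    """Average-Stress Criterion: the stress is averaged over a distance a0
    ahead of the hole edge instead of evaluated at a point."""
    xi = radius / (radius + a0)
    return 2.0 * (1.0 - xi) / (2.0 - xi**2 - xi**4)

# Both criteria capture the hole-size effect: for the same characteristic
# distance, larger holes retain a smaller fraction of the strength.
for r_mm in (1.0, 2.5, 5.0):
    print(r_mm, round(psc_ratio(r_mm, d0=1.0), 3), round(asc_ratio(r_mm, a0=3.8), 3))
```

In practice d0 and a0 are treated as material parameters fitted from tests on specimens with different hole sizes, which is where the semi-empirical character comes from.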

Relevância: 20.00%

Resumo:

Expanded Bed Adsorption (EBA) is an integrative process that combines concepts of chromatography and fluidization of solids. The many parameters involved and their synergistic effects complicate the optimization of the process. Fortunately, some mathematical tools have been developed to guide the investigation of the EBA system. In this work, the application of experimental design, phenomenological modeling and artificial neural networks (ANN) to understanding chitosanase adsorption on the ion exchange resin Streamline® DEAE was investigated. The strain Paenibacillus ehimensis NRRL B-23118 was used for chitosanase production. EBA experiments were carried out using a column of 2.6 cm inner diameter and 30.0 cm height, coupled to a peristaltic pump. At the bottom of the column there was a distributor of glass beads 3.0 cm in height. Residence time distribution (RTD) assays revealed a high degree of mixing; however, the Richardson-Zaki coefficients showed that the column was on the threshold of stability. Isotherm models fitted the adsorption equilibrium data in the presence of lyotropic salts. The results of the experimental design indicated that ionic strength and superficial velocity are important to the recovery and purity of the chitosanases. The molecular masses of the two chitosanases were approximately 23 kDa and 52 kDa, as estimated by SDS-PAGE. The phenomenological modeling aimed to describe the batch and column chromatography operations. The simulations were performed in Microsoft Visual Studio. The kinetic rate-constant model fitted the kinetic curves efficiently at initial enzyme activities of 0.232, 0.142 and 0.079 UA/mL. The simulated breakthrough curves showed some differences from the experimental data, especially regarding the slope. Sensitivity tests of the model with respect to superficial velocity, axial dispersion and initial concentration showed agreement with the literature.
The neural network was constructed in MATLAB with the Neural Network Toolbox. Cross-validation was used to improve the ability to generalize. The ANN parameters were tuned to the configurations 6-6 (enzyme activity) and 9-6 (total protein), with a tansig transfer function and the Levenberg-Marquardt training algorithm. The neural network simulations, including all the steps of the cycle, showed good agreement with the experimental data, with a correlation coefficient of approximately 0.974. The effects of the input variables on the profiles of the loading, washing and elution stages were consistent with the literature.
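The isotherm fitting mentioned above can be sketched with the Langmuir model via its linearized double-reciprocal form. This is an assumption for illustration: the abstract does not state which isotherm model or fitting method was actually used, and the data below are synthetic.

```python
def fit_langmuir(c, q):
    """Fit the Langmuir isotherm q = qm*C/(Kd + C) via the linearized
    double-reciprocal form 1/q = 1/qm + (Kd/qm)*(1/C), solved with
    ordinary least squares."""
    x = [1.0 / ci for ci in c]
    y = [1.0 / qi for qi in q]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    qm = 1.0 / intercept   # maximum binding capacity
    kd = slope * qm        # dissociation constant
    return qm, kd

# Synthetic equilibrium data generated from qm = 120, Kd = 0.5 (hypothetical
# values, not the thesis' measurements):
conc = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]
qeq = [120.0 * c / (0.5 + c) for c in conc]
qm, kd = fit_langmuir(conc, qeq)
print(round(qm, 1), round(kd, 3))  # recovers ~120 and ~0.5
```

With real, noisy data a direct nonlinear least-squares fit is usually preferred, since the double-reciprocal transform amplifies errors at low concentrations.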

Relevância: 20.00%

Resumo:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)


Relevância: 20.00%

Resumo:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


Relevância: 20.00%

Resumo:

The conventional Newton and fast decoupled power flow (FDPF) methods have been considered inadequate for obtaining the maximum loading point of power systems due to ill-conditioning problems at and near this critical point. It is well known that the P-V and Q-theta decoupling assumptions of the fast decoupled power flow formulation no longer hold in the vicinity of the critical point. Moreover, the Jacobian matrix of the Newton method becomes singular at this point. However, the maximum loading point can be efficiently computed through the parameterization techniques of continuation methods. In this paper it is shown that by using either theta or V as a parameter, the new fast decoupled power flow versions (XB and BX) become adequate for the computation of the maximum loading point with only a few small modifications. The possible use of the reactive power injection of a selected PV bus (Q(PV)) as the continuation parameter (mu) for the computation of the maximum loading point is also shown. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate for the next solution, is used in the predictor step. These new versions are compared to each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approach for the IEEE test systems (14, 30, 57 and 118 buses) are presented and discussed in the companion paper. The results show that the characteristics of the conventional method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that parameters can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. (C) 2003 Elsevier B.V. All rights reserved.
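The central claim, that taking V as the continuation parameter removes the singularity at the nose of the PV curve, can be illustrated on a two-bus system: with V fixed, (theta, lambda) become the unknowns and the 2x2 Jacobian stays nonsingular through the nose. Everything here (system data, load model, step size) is an illustrative assumption, not the paper's formulation.

```python
import math

def solve_point(v, theta0, lam0, p0, q0, e=1.0, x=0.1, tol=1e-10):
    """Newton solve of the two-bus power-flow mismatches with V fixed and
    (theta, lambda) as unknowns -- V as continuation parameter."""
    th, lam = theta0, lam0
    for _ in range(50):
        g1 = (e * v / x) * math.sin(th) + lam * p0            # P mismatch
        g2 = (v * v - e * v * math.cos(th)) / x + lam * q0    # Q mismatch
        if abs(g1) + abs(g2) < tol:
            break
        j11 = (e * v / x) * math.cos(th)   # dg1/dtheta
        j12 = p0                           # dg1/dlambda
        j21 = (e * v / x) * math.sin(th)   # dg2/dtheta
        j22 = q0                           # dg2/dlambda
        det = j11 * j22 - j12 * j21
        th -= (g1 * j22 - g2 * j12) / det
        lam -= (j11 * g2 - j21 * g1) / det
    return th, lam

# Trace the full PV curve, nose included, by stepping V downward.
# Trivial predictor: reuse the last solution as the estimate for the next.
p0, q0 = 1.0, 0.2
curve, th, lam = [], 0.0, 0.0
v = 1.0
while v > 0.2:
    th, lam = solve_point(v, th, lam, p0, q0)
    curve.append((v, lam))
    v -= 0.02
lam_max = max(l for _, l in curve)
print(round(lam_max, 3))  # maximum loading point of this toy system
```

A conventional power flow parameterized by lambda would fail near the nose, where d(lambda)/dV = 0; sweeping V instead turns the nose into an ordinary interior point of the sweep.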

Relevância: 20.00%

Resumo:

The parameterized fast decoupled power flow (PFDPF), versions XB and BX, using either theta or V as a parameter, was proposed by the authors in Part I of this paper. The use of the reactive power injection of a selected PV bus (Q(PV)) as the continuation parameter for the computation of the maximum loading point (MLP) was also investigated. In this paper, the proposed versions, obtained with only small modifications of the conventional one, are used for the computation of the MLP of the IEEE test systems (14, 30, 57 and 118 buses). These new versions are compared to each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approaches are presented and discussed. They show that the characteristics of the conventional FDPF method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that these versions can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate for the next solution, is used for the predictor step. (C) 2003 Elsevier B.V. All rights reserved.

Relevância: 20.00%

Resumo:

This letter presents an alternative approach for reducing total real power losses by using a continuation method. Results for two simple test systems and for the IEEE 57-bus system show that this procedure yields a larger voltage stability margin. Moreover, the reduction in real power losses obtained with this procedure leads to significant cost savings and, simultaneously, to voltage profile improvement. A comparison between the solution of an optimal power flow and the proposed method shows that the latter can provide near-optimal results, making it a reasonable alternative for power system voltage stability enhancement.

Relevância: 20.00%

Resumo:

Separation methods see limited application as a result of operational costs, low throughput, and the long time needed to separate the fluids. Yet these treatment methods are important because of the need to extract unwanted contaminants during oil production. The water content, and the concentration of oil in water, must be minimal (around 20 to 40 ppm) before discharge into the sea. Because of this need for primary treatment, the objective of this project is to study and implement algorithms for the identification of polynomial NARX (Nonlinear Auto-Regressive with Exogenous Input) models in closed loop, to implement structure identification, and to compare control strategies, using PI control and on-line-updated NARX predictive models, on a three-phase separator in series with three hydrocyclone batteries. The main goals of this project are: to obtain an optimized phase-separation process that regulates the system even in the presence of oil gushes; to show that it is possible to obtain optimized controller tunings by analyzing the loop as a whole; and to evaluate and compare the PI and predictive control strategies applied to the process. To accomplish these goals, a simulator representing the three-phase separator and hydrocyclones was used. Algorithms were developed for system identification (NARX) using RLS (Recursive Least Squares), along with methods for model structure detection. Predictive control algorithms were also implemented with the NARX model updated on-line, together with optimization algorithms using PSO (Particle Swarm Optimization). The project ends with a comparison of the results obtained with the PI and predictive controllers (both tuned through the particle swarm algorithm) in the simulated system.
It is concluded that the optimizations performed make the system less sensitive to external perturbations and that, once optimized, the two controllers show similar results, with the predictive controller somewhat less sensitive to disturbances.
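The RLS identification of a polynomial NARX model can be sketched as follows. The regressor set, forgetting factor, and synthetic plant below are illustrative assumptions, not the project's actual model of the separator.

```python
import random

def rls_identify(u, y, make_phi, n_params, forget=0.98):
    """Recursive Least Squares estimation of the parameters of a
    polynomial NARX model y(k) = theta' * phi(k)."""
    theta = [0.0] * n_params
    # P initialized large: little confidence in the initial estimate
    p = [[1e6 if i == j else 0.0 for j in range(n_params)] for i in range(n_params)]
    for k in range(2, len(y)):
        phi = make_phi(u, y, k)
        pphi = [sum(p[i][j] * phi[j] for j in range(n_params)) for i in range(n_params)]
        denom = forget + sum(phi[i] * pphi[i] for i in range(n_params))
        gain = [v / denom for v in pphi]
        err = y[k] - sum(t / 1.0 * f for t, f in zip(theta, phi))
        theta = [t + g * err for t, g in zip(theta, gain)]
        p = [[(p[i][j] - gain[i] * pphi[j]) / forget for j in range(n_params)]
             for i in range(n_params)]
    return theta

# Polynomial NARX regressors: y(k-1), u(k-1) and the cross term y(k-1)*u(k-1)
def make_phi(u, y, k):
    return [y[k - 1], u[k - 1], y[k - 1] * u[k - 1]]

# Synthetic plant with known parameters 0.5, 1.0, 0.1 (hypothetical values):
rng = random.Random(1)
u = [rng.uniform(-1.0, 1.0) for _ in range(400)]
y = [0.0, 0.0]
for k in range(2, 400):
    y.append(0.5 * y[k - 1] + 1.0 * u[k - 1] + 0.1 * y[k - 1] * u[k - 1])
theta = rls_identify(u, y, make_phi, 3)
print([round(t, 3) for t in theta])  # close to [0.5, 1.0, 0.1]
```

The forgetting factor below 1 is what allows the on-line model update mentioned in the abstract: older samples are discounted, so the estimate can track slow changes in the process.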

Relevância: 20.00%

Resumo:

Genetic Algorithms (GA) and Simulated Annealing (SA) are algorithms built to find the maximum or minimum of a function that represents some characteristic of the process being modeled. Both have mechanisms that let them escape local optima; however, they evolve over time in completely different ways. In its search process, SA works with a single point, always generating from it a new solution that is tested and may or may not be accepted, whereas GA works with a set of points, called a population, from which it generates another population that is always accepted. What the two algorithms have in common is that the way the next point or next population is generated obeys stochastic properties. In this work we show that the mathematical theory describing the evolution of these algorithms is the theory of Markov chains. The GA is described by a homogeneous Markov chain, while the SA is described by a non-homogeneous Markov chain. Finally, some computational examples comparing the performance of the two algorithms are presented.
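The non-homogeneous character of the SA chain is easy to see in code: the Metropolis transition kernel below changes at every step because the temperature decays. The cooling schedule, neighborhood, and test function are illustrative assumptions, not taken from the work.

```python
import math
import random

def simulated_annealing(f, x0, t0=5.0, alpha=0.995, iters=3000, seed=0):
    """SA as a non-homogeneous Markov chain: the acceptance probability
    depends on the step index through the decaying temperature t."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + rng.gauss(0.0, 1.0)  # neighborhood proposal
        fc = f(cand)
        # Metropolis rule: always accept downhill, uphill with prob e^(-dE/t)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
        t *= alpha  # this line is what makes the chain non-homogeneous
    return best, fbest

# Multimodal test function with global minimum 0 at x = 0:
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_best, f_best = simulated_annealing(f, x0=4.3)
print(round(f_best, 3))
```

A GA, by contrast, applies the same selection/crossover/mutation operators at every generation, so its transition kernel is fixed and the resulting chain is homogeneous.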