913 results for Load flow with step size optimization
Abstract:
Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering
Abstract:
In this work, the impact of distributed generation on transmission expansion planning will be simulated by running an optimization process for three scenarios: the first without distributed generation, the second with distributed generation equivalent to 1% of the load, and the third with 5% of the load. The expansion problem is modeled with the linearized load flow method, and genetic algorithms are used for the optimization. The test circuit is a simplified 46-bus representation of the south-eastern Brazilian electricity system.
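To make the optimization loop concrete, the following minimal sketch evaluates candidate expansion plans with a DC-type linearized load flow inside a simple genetic algorithm. The 3-bus network, candidate lines, investment costs and GA settings are invented for illustration; they are not the 46-bus test system or the thesis's actual formulation.

```python
# Illustrative sketch: a DC-type linearized load flow inside a GA fitness
# function for transmission expansion planning. The 3-bus data, candidate
# lines, costs and GA settings are invented for demonstration; this is not
# the 46-bus Brazilian test system.
import random
import numpy as np

# Existing lines: (from_bus, to_bus, susceptance, capacity_MW)
base_lines = [(0, 1, 10.0, 100.0), (1, 2, 10.0, 100.0)]
# Candidate reinforcements: (from_bus, to_bus, susceptance, capacity_MW, cost)
candidates = [(0, 2, 10.0, 100.0, 30.0), (0, 1, 10.0, 100.0, 25.0)]
injections = np.array([150.0, -50.0, -100.0])   # generation minus load per bus; bus 0 is the slack

def dc_flows(lines):
    """Solve the DC load flow and return (flow, capacity) for every line."""
    n = len(injections)
    B = np.zeros((n, n))
    for f, t, b, _ in lines:
        B[f, f] += b; B[t, t] += b; B[f, t] -= b; B[t, f] -= b
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])   # slack angle fixed to zero
    return [(b * (theta[f] - theta[t]), cap) for f, t, b, cap in lines]

def fitness(plan):
    """Investment cost plus a heavy penalty for overloaded corridors."""
    lines = base_lines + [c[:4] for c, bit in zip(candidates, plan) if bit]
    invest = sum(c[4] for c, bit in zip(candidates, plan) if bit)
    overload = sum(max(0.0, abs(flow) - cap) for flow, cap in dc_flows(lines))
    return invest + 1.0e3 * overload

random.seed(0)
population = [[random.randint(0, 1) for _ in candidates] for _ in range(20)]
for _ in range(30):                       # tiny GA: elitist selection + bit-flip mutation
    population.sort(key=fitness)
    parents = population[:10]
    population = parents + [[1 - g if random.random() < 0.1 else g
                             for g in random.choice(parents)] for _ in range(10)]
print("best plan:", population[0], "objective:", fitness(population[0]))
```

In a full planning study the fitness would also cover multiple load and distributed generation scenarios, but the structure of the loop stays the same.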
Abstract:
The purpose of this study was to determine the prognostic accuracy of perfusion computed tomography (CT), performed at the time of emergency room admission, in acute stroke patients. Accuracy was determined by comparison of perfusion CT with delayed magnetic resonance (MR) and by monitoring the evolution of each patient's clinical condition. Twenty-two acute stroke patients underwent perfusion CT covering four contiguous 10mm slices on admission, as well as delayed MR, performed after a median interval of 3 days after emergency room admission. Eight were treated with thrombolytic agents. Infarct size on the admission perfusion CT was compared with that on the delayed diffusion-weighted (DWI)-MR, chosen as the gold standard. Delayed magnetic resonance angiography and perfusion-weighted MR were used to detect recanalization. A potential recuperation ratio, defined as PRR = penumbra size/(penumbra size + infarct size) on the admission perfusion CT, was compared with the evolution in each patient's clinical condition, defined by the National Institutes of Health Stroke Scale (NIHSS). In the 8 cases with arterial recanalization, the size of the cerebral infarct on the delayed DWI-MR was larger than or equal to that of the infarct on the admission perfusion CT, but smaller than or equal to that of the ischemic lesion on the admission perfusion CT; and the observed improvement in the NIHSS correlated with the PRR (correlation coefficient = 0.833). In the 14 cases with persistent arterial occlusion, infarct size on the delayed DWI-MR correlated with ischemic lesion size on the admission perfusion CT (r = 0.958). In all 22 patients, the admission NIHSS correlated with the size of the ischemic area on the admission perfusion CT (r = 0.627). Based on these findings, we conclude that perfusion CT allows the accurate prediction of the final infarct size and the evaluation of clinical prognosis for acute stroke patients at the time of emergency evaluation. It may also provide information about the extent of the penumbra. Perfusion CT could therefore be a valuable tool in the early management of acute stroke patients.
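The potential recuperation ratio is the only formula in the abstract and is straightforward to compute. The sketch below evaluates PRR = penumbra size/(penumbra size + infarct size) and its Pearson correlation with NIHSS improvement for a few invented lesion volumes; the numbers are illustrative, not patient data from the study.

```python
# Illustrative calculation of the potential recuperation ratio (PRR) defined
# in the abstract. Lesion sizes (mL) and NIHSS scores are invented example
# values, not data from the study.
import numpy as np

penumbra = np.array([40.0, 25.0, 10.0, 60.0])   # penumbra size on admission perfusion CT
infarct = np.array([20.0, 50.0, 30.0, 15.0])    # infarct size on admission perfusion CT
prr = penumbra / (penumbra + infarct)           # PRR = penumbra / (penumbra + infarct)

nihss_admission = np.array([18, 16, 12, 20])
nihss_followup = np.array([6, 12, 9, 4])
improvement = nihss_admission - nihss_followup

r = np.corrcoef(prr, improvement)[0, 1]         # Pearson correlation, cf. r = 0.833 reported above
print(prr, round(r, 3))
```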
Abstract:
Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
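For readers unfamiliar with the baseline method being extended, here is a minimal Poisson τ-leap sketch for a toy reversible isomerization A ⇌ B with a fixed step size τ. The rate constants, τ and initial copy numbers are arbitrary choices, and the code implements the plain τ-leap update, not the Runge-Kutta extension proposed in the paper.

```python
# Minimal Poisson tau-leap sketch for a toy reversible isomerisation A <-> B.
# Fixed step size tau and rate constants are illustrative; this is the plain
# tau-leap method, not the Runge-Kutta extension proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
k1, k2 = 1.0, 0.5            # rate constants for A -> B and B -> A
x = np.array([1000, 0])      # copy numbers of A and B
# Stoichiometry: each column is the state change caused by one reaction channel
nu = np.array([[-1, +1],
               [+1, -1]])
tau, t, t_end = 0.01, 0.0, 5.0

while t < t_end:
    a = np.array([k1 * x[0], k2 * x[1]])   # propensities of the two channels
    k = rng.poisson(a * tau)               # number of firings in [t, t + tau)
    x = np.maximum(x + nu @ k, 0)          # update the state, clipping at zero
    t += tau

print("final state:", x)   # roughly [333, 667] at steady state for k1 = 1, k2 = 0.5
```

Shrinking τ recovers the exact SSA statistics at greater cost; the extension described in the paper aims to keep the variance well-behaved while allowing larger step sizes.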
Abstract:
BACKGROUND: This study validates the use of phycoerythrin (PE) and allophycocyanin (APC) for fluorescence resonance energy transfer (FRET) analyzed by flow cytometry. METHODS: FRET was detected when a pair of antibody conjugates directed against two noncompetitive epitopes on the same CD8α chain was used. FRET was also detected between antibody conjugate pairs specific for the two chains of the heterodimeric α4β1 integrin. Similarly, the association of the T-cell receptor (TCR) with a soluble antigen ligand was detected by FRET when anti-TCR antibody and MHC class I/peptide complexes ("tetramers") were used. RESULTS: FRET efficiency was always less than 10%, probably because of steric effects associated with the size and structure of PE and APC. Some suggestions are given for taking this and other effects (e.g., donor and acceptor concentrations) into account for a better interpretation of FRET results obtained with this pair of fluorochromes. CONCLUSIONS: We conclude that FRET assays can be carried out easily with commercially available antibodies and flow cytometers to study arrays of multimolecular complexes.
Abstract:
Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtaining an approximate solution is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance therefore depends on structural aspects of the search space, which in turn depend on the move operator used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a neighborhood relationship defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function; the concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, the energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. At first, we perform an exhaustive sampling of the basins of attraction of local optima, and define weighted transitions between basins by accounting for all the possible ways of crossing the basin frontier via one random move. Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape, and that we can characterize using the tools of complex network science. We argue that this network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performance of trajectory-based local search heuristics.
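To illustrate the escape-edge construction, the sketch below samples local optima of a small random NK-style landscape with a hill climber and connects two optima whenever a random two-bit kick followed by hill climbing leads from one to the other, weighting edges by how often each escape is observed. The landscape size, kick strength and sample counts are illustrative and do not reproduce the sampling protocols used in the thesis.

```python
# Minimal sketch of an "escape edge" local optima network for a small random
# NK-style landscape (N = 10, K = 2). The landscape, kick size and sample
# counts are illustrative placeholders.
import itertools
import random
import networkx as nx

random.seed(1)
N, K = 10, 2
# Random contribution tables: the fitness of bit i depends on itself and its K neighbours
tables = [{bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
          for _ in range(N)]

def fitness(s):
    return sum(tables[i][tuple(s[(i + j) % N] for j in range(K + 1))] for i in range(N)) / N

def hill_climb(s):
    """Accept any improving 1-bit flip until no flip improves the fitness."""
    s = list(s)
    improved = True
    while improved:
        improved = False
        for i in range(N):
            t = s[:]; t[i] ^= 1
            if fitness(t) > fitness(s):
                s, improved = t, True
    return tuple(s)

G = nx.DiGraph()
for _ in range(200):                              # sample local optima and escape edges
    lo = hill_climb([random.randint(0, 1) for _ in range(N)])
    kick = list(lo)
    for i in random.sample(range(N), 2):          # random 2-bit "kick" move
        kick[i] ^= 1
    target = hill_climb(kick)
    if lo != target:
        w = G.get_edge_data(lo, target, {"weight": 0})["weight"]
        G.add_edge(lo, target, weight=w + 1)

print(G.number_of_nodes(), "local optima,", G.number_of_edges(), "escape edges")
```

The resulting weighted directed graph can then be analysed with standard complex-network metrics (degree distributions, clustering, path lengths), which is the kind of characterization the thesis relates to problem hardness.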
Abstract:
This work studied the heat transfer of recovery boilers delivered by Andritz-Ahlstrom using the ANITA 2.20 design program together with feedback calculation. The input data were taken from values measured during the boilers' guarantee tests; the measurements were performed by Andritz-Ahlstrom personnel with the assistance of mill staff. Since the feedback calculation was based on measured results, a certain amount of error was naturally present. First, balances were calculated over each economizer separately, and over both together, with an Excel spreadsheet; this gave the assumed flue gas flow in the boiler. The heat transfer surfaces were then adjusted to match reality by modifying the overall fouling factor. The factors ranged between roughly 0.4 and 1.6, depending on the boiler type and on ANITA's default assumption for the fouling of the heat transfer surfaces. No single definitive cause was found for the deviation of the heat transfer surfaces from the expected behaviour; there were several contributing factors. Among other things, the size of the front cavity was found to have a considerable effect on the performance of the superheaters, especially the first superheater in the flue gas path. In general, the other superheaters performed as expected. The boiler bank and the economizers were examined somewhat less extensively and were found to operate considerably more stably than the superheaters. The fouling factors deviated from the expected values by about ±20%.
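The feedback step described above amounts to back-calculating the achieved heat transfer from measured data and expressing the deviation from the design assumption as an overall fouling factor. The sketch below shows this back-calculation for a single heat transfer surface; all temperatures, flows, the area and the design coefficient are invented example values.

```python
# Illustrative back-calculation of an overall fouling factor for one heat
# transfer surface, in the spirit of the feedback calculation described above.
# All temperatures, flows and design values are invented example numbers.
from math import log

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-current arrangement."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    return (dt1 - dt2) / log(dt1 / dt2)

# Measured flue-gas side duty: Q = m_dot * cp * dT  (kW)
m_gas, cp_gas = 120.0, 1.25                 # kg/s, kJ/(kg K)
q_measured = m_gas * cp_gas * (650.0 - 420.0)

area = 2500.0                                # m2 of heat transfer surface
dt_lm = lmtd(650.0, 420.0, 300.0, 380.0)
u_measured = q_measured / (area * dt_lm)     # achieved overall coefficient, kW/(m2 K)

u_design = 0.070                             # design-programme assumption, kW/(m2 K)
fouling_factor = u_measured / u_design       # cf. the 0.4 ... 1.6 range reported above
print(round(fouling_factor, 2))
```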
Abstract:
The objective of this thesis was to study the removal of gases from paper mill circulation waters experimentally and to provide data for CFD modeling. Flow and bubble size measurements were carried out in a laboratory-scale open gas separation channel. The Particle Image Velocimetry (PIV) technique was used to measure the gas and liquid flow fields, while bubble size measurements were conducted using a digital imaging technique with back-light illumination. Samples of paper machine waters as well as a model solution were used for the experiments. The PIV results show that gas bubbles near the feed position tend to escape from the circulation channel at a faster rate than bubbles further away from the feed position. This was attributed to an increased rate of bubble coalescence caused by the relatively larger bubbles near the feed position. Moreover, the measured slip velocities of the paper mill waters agreed closely with literature values. It was found that, due to the dilution of the paper mill waters, the observed average bubble size was considerably larger than the average bubble sizes in real industrial pulp suspensions and circulation waters. Among the studied solutions, the model solution had the highest average drag coefficient value due to its relatively high viscosity. The results were compared to a 2D steady-state CFD simulation model. A standard Euler-Euler k-ε turbulence model was used in the simulations, and the channel free surface was modeled as a degassing boundary. Of the drag models used in the simulations, the Grace drag model gave velocity fields closest to the experimental values. In general, the results obtained from the experiments and the CFD simulations are in good qualitative agreement.
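Comparing measured slip velocities with drag models typically goes through a single-bubble force balance. The sketch below estimates a drag coefficient from a measured slip velocity by equating buoyancy and drag; the bubble size, slip velocity and fluid properties are invented examples, not the measured data of the thesis.

```python
# Illustrative force-balance estimate of a bubble drag coefficient from a
# measured slip velocity, as used when comparing PIV data with drag models.
# Bubble size, slip velocity and fluid properties are invented examples.
g = 9.81            # m/s2
rho_l = 998.0       # liquid density, kg/m3
rho_g = 1.2         # gas density, kg/m3
d_b = 0.8e-3        # bubble diameter, m
u_slip = 0.09       # measured slip velocity, m/s

# Buoyancy balanced by drag: Cd = 4 g d (rho_l - rho_g) / (3 rho_l u_slip^2)
cd = 4.0 * g * d_b * (rho_l - rho_g) / (3.0 * rho_l * u_slip**2)
print(round(cd, 2))
```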
Abstract:
Airlift reactors are pneumatically agitated reactors that have been widely used in the chemical, petrochemical, and bioprocess industries, for example in fermentation and wastewater treatment. Computational Fluid Dynamics (CFD) has become an increasingly popular approach for the design, scale-up and performance evaluation of such reactors. In the present work, numerical simulations of internal-loop airlift reactors were performed using the transient Eulerian model in the CFD package ANSYS Fluent 12.1. The turbulence in the liquid phase is described using the κ-ε model. Global hydrodynamic parameters such as gas holdup, gas velocity and liquid velocity have been investigated for a range of superficial gas velocities, with both 2D and 3D simulations. Moreover, the influence of geometry and scale on the reactor has been considered. The results suggest that both geometry and scale have significant effects on the hydrodynamic parameters, which may have substantial effects on reactor performance. Grid refinement and time-step size effects are also discussed. Numerical calculations with a gas-liquid-solid three-phase flow system have been carried out to investigate the effect of solid loading, solid particle size and solid density on the hydrodynamic characteristics of the internal-loop airlift reactor at different superficial gas velocities. It was observed that the averaged gas holdup decreases significantly with increasing slurry concentration. The simulations show that the riser gas holdup decreases with increasing solid particle diameter. In addition, it was found that the averaged solid holdup in the riser section increases with increasing solid density. These results reveal that CFD has excellent potential to simulate two-phase and three-phase flow systems.
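As an example of how a global hydrodynamic parameter such as the averaged gas holdup is extracted from a simulated volume fraction field, consider the small sketch below; the 2D field and cell volumes are synthetic placeholders, not Fluent output.

```python
# Illustrative volume-averaged gas holdup from a 2D gas volume fraction field,
# the kind of global quantity extracted from airlift reactor simulations.
# The field below is synthetic, not CFD output.
import numpy as np

ny, nx = 200, 60
alpha_g = np.zeros((ny, nx))
alpha_g[:, 20:40] = 0.12                   # higher gas fraction in the riser region
alpha_g[:, :20] = 0.02                     # downcomer region
cell_volume = np.full((ny, nx), 1.0e-6)    # m3 per cell (uniform mesh here)

avg_holdup = (alpha_g * cell_volume).sum() / cell_volume.sum()
print(round(avg_holdup, 4))
```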
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with an Eulerian description of the phases. Such a description requires fine meshes and small time steps for a proper prediction of the hydrodynamics. This constraint on the mesh and time-step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large-scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results: the coarse mesh simulation fails to resolve the mesoscale structures and produces uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. There is therefore a need to formulate closure correlations which can accurately predict the hydrodynamics on coarse meshes. This thesis uses the space-averaging modeling approach to formulate closure models for coarse mesh simulations of the gas-solid flow in fluidized beds with Geldart group B particles. In the analysis, the main parameters of the space-averaged drag model were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified the modeling approach. Coarse mesh simulations using the corrected drag model resulted in lower values of the solids mass flux. Such an approach is a promising tool for formulating appropriate closure models that can be used in coarse mesh simulations of large-scale fluidized beds.
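The abstract does not give the closure correlation itself, so the sketch below only illustrates the general structure of a space-averaged drag correction: a microscopic drag coefficient (a Wen & Yu form is used here) is scaled by a heterogeneity factor that depends on the averaging (filter) size, the solid volume fraction and the distance from the wall. The functional form and constants of that factor are invented placeholders, not the model derived in the thesis.

```python
# Illustrative structure of a space-averaged ("filtered") drag correction: a
# microscopic drag coefficient is scaled by a heterogeneity factor H that
# depends on filter size, solids volume fraction and wall distance. The form
# and constants of H are invented placeholders, not the thesis's closure.
import math

def beta_wen_yu(alpha_s, rho_g, mu_g, d_p, u_slip):
    """Microscopic Wen & Yu gas-solid drag coefficient (per unit volume)."""
    alpha_g = 1.0 - alpha_s
    re = alpha_g * rho_g * abs(u_slip) * d_p / mu_g
    cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44
    return 0.75 * cd * alpha_s * alpha_g * rho_g * abs(u_slip) / d_p * alpha_g**-2.65

def heterogeneity_factor(filter_size, alpha_s, wall_distance):
    """Placeholder correction H in (0, 1]; coarser filters give more reduction."""
    return 1.0 / (1.0 + 5.0 * filter_size * alpha_s * (1.0 + math.exp(-wall_distance / 0.05)))

beta_micro = beta_wen_yu(alpha_s=0.05, rho_g=1.2, mu_g=1.8e-5, d_p=2.5e-4, u_slip=1.0)
beta_coarse = beta_micro * heterogeneity_factor(filter_size=0.02, alpha_s=0.05, wall_distance=0.1)
print(beta_micro, beta_coarse)
```

Reducing the drag on coarse cells in this way is what lowers the predicted solids mass flux relative to the uncorrected coarse mesh simulation.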
Abstract:
Globally, there have been a number of concerns about the development of genetically modified crops, many of which relate to the implications of gene flow at various levels. In Europe these concerns have led the European Union (EU) to promote the concept of 'coexistence', allowing the freedom to plant conventional and genetically modified (GM) varieties while minimising the presence of transgenic material within conventional crops. Should a premium for non-GM varieties emerge on the market, the presence of transgenes would generate a 'negative externality' for conventional growers. The establishment of a maximum tolerance level for the adventitious presence of GM material in conventional crops produces a threshold effect in the external costs. The existing literature suggests that, apart from the biological characteristics of the plant under consideration (e.g. self-pollination rates, entomophilous species, anemophilous species, etc.), gene flow at the landscape level is affected by the relative size of the source and sink populations and the spatial arrangement of the fields in the landscape. In this paper, we take genetically modified herbicide-tolerant oilseed rape (GM HT OSR) as a model crop. Starting from an individual pollen dispersal function, we develop a spatially explicit numerical model in order to assess the effect of the size of the source/sink populations and the degree of spatial aggregation on the extent of gene flow into conventional OSR varieties under two alternative settings. We find that when the transgene presence in conventional produce is detected at the field level, the external cost will increase with the size of the source area and with the level of spatial disaggregation. On the other hand, when the transgene presence is averaged among all conventional fields in the landscape (e.g. because of grain mixing before detection), the external cost will only depend on the relative size of the source area. The model could readily be incorporated into an economic evaluation of policies to regulate the adoption of GM HT OSR. (c) 2007 Elsevier B.V. All rights reserved.
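A toy version of such a spatially explicit model is sketched below: GM and conventional oilseed rape fields are placed on a grid, an exponential individual pollen dispersal kernel determines cross-pollination, and each conventional field's adventitious GM presence is compared against a tolerance threshold (0.9% is used here purely for illustration). The kernel, field layout and threshold handling are invented and do not reproduce the paper's model.

```python
# Toy spatially explicit gene-flow sketch: GM and conventional OSR fields on a
# grid, an exponential individual pollen dispersal kernel, and a check of the
# adventitious GM presence in each conventional field against a tolerance
# threshold. All parameters are invented for illustration.
import numpy as np

size = 20                                   # landscape of size x size fields, 100 m spacing
rng = np.random.default_rng(3)
gm = rng.random((size, size)) < 0.2         # roughly 20% of fields sown with the GM variety

xs, ys = np.meshgrid(np.arange(size) * 100.0, np.arange(size) * 100.0, indexing="ij")

def gm_presence(i, j, mean_dispersal=25.0):
    """Fraction of pollination in conventional field (i, j) coming from GM fields."""
    d = np.hypot(xs - xs[i, j], ys - ys[i, j])
    kernel = np.exp(-d / mean_dispersal)    # exponential individual dispersal function
    return kernel[gm].sum() / kernel.sum()

threshold = 0.009                           # 0.9% adventitious presence, for illustration
conv = [(i, j) for i in range(size) for j in range(size) if not gm[i, j]]
above = [ij for ij in conv if gm_presence(*ij) > threshold]
print(f"{len(above)} of {len(conv)} conventional fields exceed the threshold")
```

Varying the GM share and the degree of clustering of GM fields in such a toy model is the kind of experiment the paper performs to study how source size and spatial aggregation drive the external cost.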
Abstract:
Field experiments were carried out to assess the effects of nitrogen fertilization and seed rate on the Hagberg falling number (HFN) of commercial wheat hybrids and their parents. Applying nitrogen (200 kg N ha⁻¹) increased HFN in two successive years. The HFN of the hybrid Hyno Esta was lower than either of its parents (Estica and Audace), particularly when nitrogen was not applied. Treatment effects on HFN were negatively associated with α-amylase activity. Phadebas grain blotting suggested two populations of grains with different types of α-amylase activity: Estica appeared to have a high proportion of grains with low levels of late maturity endosperm α-amylase activity (LMEA); Audace had a few grains showing high levels of germination amylase; and the hybrid, Hyno Esta, combined the sources from both parents to show heterosis for α-amylase activity. Applying nitrogen reduced both apparent LMEA and germination amylase. The effects on LMEA were associated with the size and disruption of the grain cavity, which was greater in Hyno Esta and Estica and in zero-nitrogen treatments. External grain morphology failed to explain much of the variation in LMEA and cavity size, but there was a close negative correlation between cavity size and protein content. Applying nitrogen increased post-harvest dormancy of the grain. Dormancy was greatest in Estica and least in Audace. It is proposed that effects of seed rate, genotype and nitrogen fertilizer on HFN are mediated through factors affecting the size and disruption of the grain cavity and therefore LMEA, and through factors affecting dormancy and therefore germination amylase. (c) 2004 Society of Chemical Industry.
Abstract:
We estimate the body sizes of direct ancestors of extant carnivores, and examine selected aspects of life history as a function not only of species' current size, but also of recent changes in size. Carnivore species that have undergone marked recent evolutionary size change show life history characteristics typically associated with species closer to the ancestral body size. Thus, phyletic giants tend to mature earlier and have larger litters of smaller offspring at shorter intervals than do species of the same body size that are not phyletic giants. Phyletic dwarfs, by contrast, have slower life histories than nondwarf species of the same body size. We discuss two possible mechanisms for the legacy of recent size change: lag (in which life history variables cannot evolve as quickly as body size, leading to species having the 'wrong' life history for their body size) and body size optimization (in which life history and hence body size evolve in response to changes in energy availability); at present, we cannot distinguish between these alternatives. Our finding that recent body size changes help explain residual variation around life history allometries shows that a more dynamic view of character change enables comparative studies to make more precise predictions about species traits in the context of their evolutionary background.
Abstract:
Flowering time and seed size are traits related to domestication. However, the identification of domestication-related loci/genes controlling these traits in soybean has rarely been reported. In this study, we identified a total of 48 domestication-related loci based on RAD-seq genotyping of a natural population comprising 286 accessions. Among these, four loci on chromosome 12 and two more on chromosomes 11 and 15 were associated with flowering time, and four on chromosomes 11 and 16 were associated with seed size. Of the five genes associated with flowering time and the three genes associated with seed size, three (Glyma11g18720, Glyma11g15480 and Glyma15g35080) were homologous to Arabidopsis genes, while the other five genes were found for the first time to be associated with these two traits. Glyma11g18720 and Glyma05g28130 were co-expressed with five genes homologous to flowering time genes in Arabidopsis, and Glyma11g15480 was co-expressed with 24 genes homologous to seed development genes in Arabidopsis. This study indicates that integrating population divergence analysis, genome-wide association study and expression analysis is an efficient approach for identifying candidate domestication-related genes.
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computer cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
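The coupling strategy reduces, at each time step, to solving a small non-linear system R(x) = 0 in the interface flow rates and pressures, with each residual evaluation calling the sub-network solvers as black boxes. The sketch below shows a Broyden iteration for a single coupling point (M = 1); the two "sub-network" functions are trivial algebraic stand-ins for 1D solvers, and a dense inverse-Jacobian Broyden update is used for brevity rather than the matrix-free variants assessed in the paper.

```python
# Minimal Broyden iteration for the interface system R(x) = 0, where x stacks
# the flow rate Q and the pressure P at a single coupling point (M = 1). The
# "sub-network" functions are algebraic stand-ins for black-box 1D solvers.
import numpy as np

def subnet_1(p):
    """Stand-in: flow delivered by sub-network 1 when the interface pressure is p."""
    return 10.0 - 0.08 * p - 1.0e-4 * p**2

def subnet_2(q):
    """Stand-in: pressure returned by sub-network 2 when the interface flow is q."""
    return 12.0 * q + 0.05 * q**2

def residual(x):
    q, p = x
    return np.array([q - subnet_1(p), p - subnet_2(q)])

def fd_jacobian(f, x, eps=1.0e-6):
    """One-off finite-difference Jacobian used only to seed the Broyden iteration."""
    f0, J = f(x), np.zeros((x.size, x.size))
    for i in range(x.size):
        xp = x.copy(); xp[i] += eps
        J[:, i] = (f(xp) - f0) / eps
    return J

x = np.array([5.0, 50.0])                      # initial guess for (Q, P)
H = np.linalg.inv(fd_jacobian(residual, x))    # initial inverse-Jacobian approximation
r = residual(x)
for _ in range(50):
    if np.linalg.norm(r) < 1.0e-10:
        break
    dx = -H @ r
    x_new = x + dx
    r_new = residual(x_new)
    dr = r_new - r
    H += np.outer(dx - H @ dr, dr) / (dr @ dr)  # rank-one ("bad" Broyden) update of H
    x, r = x_new, r_new

print("interface (Q, P):", x, "residual norm:", np.linalg.norm(r))
```

If this iteration converges at each time step, the interface values are those of the monolithic problem, which is the strong-coupling property emphasized in the abstract.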