913 results for Load flow with step size optimization
Abstract:
Current design procedures for Subsurface Flow (SSF) wetlands are based on the simplifying assumptions of plug flow and first-order decay of pollutants. These design procedures do yield functional wetlands but result in over-design and inadequate descriptions of the pollutant removal mechanisms that occur within them. Even though these deficiencies are often noted, few authors have attempted to improve the modelling of either flow or pollutant removal in such systems. Consequently, the Oxley Creek Wetland, a pilot-scale SSF wetland designed to enable rigorous monitoring, has recently been constructed in Brisbane, Australia. Tracer studies have been carried out in order to determine the hydraulics of this wetland prior to commissioning it with settled sewage. The tracer studies will continue during the wetland's commissioning and operational phases. These studies will improve our understanding of the hydraulics of newly built SSF wetlands and of the changes brought on by operational factors such as biological films and wetland plant root structures. Results to date indicate that the flow through the gravel beds is not uniform and cannot be adequately modelled by a single-parameter plug flow with dispersion model. We have developed a multiparameter model, incorporating four plug flow reactors, which provides a better approximation of our experimental data. With further development this model will allow improvements to current SSF wetland design procedures and operational strategies, and will underpin investigations into the pollutant removal mechanisms at the Oxley Creek Wetland. (C) 1997 IAWQ. Published by Elsevier Science Ltd.
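A minimal sketch of how a multiparameter residence time distribution of the kind described could be fitted to tracer data, assuming four parallel plug-flow paths, each with a flow fraction, a transit time, and a small dispersion that turns an ideal plug-flow spike into a Gaussian pulse. The abstract does not specify the arrangement of the four reactors, so this layout and all values below are illustrative:

import numpy as np
from scipy.optimize import curve_fit

def rtd_model(t, *p):
    """E(t) as a weighted sum of four dispersed plug-flow pulses.
    p = (f1, tau1, sig1, ..., f4, tau4, sig4)."""
    e = np.zeros_like(t)
    for f, tau, sig in zip(p[0::3], p[1::3], p[2::3]):
        e += f * np.exp(-0.5 * ((t - tau) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return e

# t_data, e_data would come from the tracer study (concentration
# normalized to unit area); here synthetic data stands in for it.
t_data = np.linspace(0.0, 10.0, 200)
e_data = rtd_model(t_data, 0.4, 2.0, 0.3, 0.3, 3.0, 0.4,
                   0.2, 4.5, 0.5, 0.1, 6.0, 0.6)
p0 = [0.25, 1.5, 0.5, 0.25, 2.5, 0.5, 0.25, 4.0, 0.5, 0.25, 6.5, 0.5]
popt, _ = curve_fit(rtd_model, t_data, e_data, p0=p0)  # fitted fractions/times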
Abstract:
Objective: To investigate the effects of the rate of airway pressure increase and the duration of recruitment maneuvers on lung function and on the activation of inflammation, fibrogenesis, and apoptosis in experimental acute lung injury. Design: Prospective, randomized, controlled experimental study. Setting: University research laboratory. Subjects: Thirty-five Wistar rats submitted to acute lung injury induced by cecal ligation and puncture. Interventions: After 48 hrs, animals were randomly distributed into five groups (seven animals each): 1) nonrecruited (NR); 2) recruitment maneuvers (RMs) with continuous positive airway pressure (CPAP) for 15 secs (CPAP15); 3) RMs with CPAP for 30 secs (CPAP30); 4) RMs with stepwise increase in airway pressure (STEP) to the targeted maximum within 15 secs (STEP15); and 5) RMs with STEP within 30 secs (STEP30). To perform STEP RMs, the ventilator was switched to a CPAP mode and the positive end-expiratory pressure level was increased stepwise. At each step, airway pressure was held constant. RMs were targeted to 30 cm H2O. Animals were then ventilated for 1 hr with a tidal volume of 6 mL/kg and a positive end-expiratory pressure of 5 cm H2O. Measurements and Main Results: Blood gases, lung mechanics, histology (light and electron microscopy), and interleukin-6, caspase 3, and type 3 procollagen mRNA expressions in lung tissue were analyzed. All RMs improved oxygenation and lung static elastance and reduced alveolar collapse compared to NR. STEP30 resulted in optimal performance, with: 1) improved lung static elastance vs. NR, CPAP15, and STEP15; 2) reduced alveolar-capillary membrane detachment and type 2 epithelial and endothelial cell injury scores vs. CPAP15 (p < .05); and 3) reduced gene expression of interleukin-6, type 3 procollagen, and caspase 3 in lung tissue vs. the other RMs. Conclusions: Longer-duration RMs with a slower increase in airway pressure efficiently improved lung function while minimizing the biological impact on the lungs. (Crit Care Med 2011; 39:1074-1081)
Abstract:
This article examines the effects of the commercialisation of agriculture on land use and work patterns by means of a case study in the Nyeri district of Kenya. The study uses cross-sectional data collected from small-scale farmers in this district. We find that good quality land is allocated to non-food cash crops, which may lead to a reduction in non-cash food crops and expose some households to greater risks of possible famine. The proportion of land allocated to food crops declines as farm size increases, while the proportion of land allocated to non-food cash crops rises. Cash crops also do not bring in revenue commensurate with the amount of land allocated to them. With growing commercialisation, women still work more hours than men. They work not only on non-cash food crops but also on cash crops, including non-food cash crops. Evidence indicates that women living with husbands work longer hours than those married but living alone, and also longer than unmarried women. Married women seem to lose their decision-making ability with the growth of commercialisation, as husbands make most decisions to do with cash crops. Furthermore, husbands appropriate family cash income, and they are less likely than wives to use such income for the welfare of the family due to different expenditure patterns. Married women in Kenya also have little or no power to change the way land is allocated between food and non-food cash crops. Due to deteriorating terms of trade for non-food cash crops, men have started the cultivation of food cash crops, with the potential of crowding out women. It is found that in Central Kenya both the area under cash crops and the proportion of the farm area cash cropped tend to rise with farm size.
Abstract:
Introduction: This study assessed in vitro the physicochemical properties of 2 methacrylate resin-based sealers (Epiphany SE and Hybrid Root SEAL), comparing the results with a well-established epoxy resin-based sealer (AH Plus). Methods: Five samples of each material were used for each test (setting time, flow, radiopacity, dimensional change after setting, and solubility) according to American National Standards Institute/American Dental Association (ANSI/ADA) Specification 57. The samples were assigned to 3 groups: I, AH Plus; II, Epiphany SE; and III, Hybrid Root SEAL. The distilled and deionized water used in the solubility test was submitted to atomic absorption spectrometry to observe the presence of Ca2+, K+, Ni2+, and Zn2+ ions. In addition, the surface morphology of the specimens was analyzed by means of scanning electron microscopy (SEM). Statistical analysis was performed by using one-way analysis of variance and the Tukey-Kramer test (P < .05). Results: Flow, radiopacity, and solubility of all sealers were in accordance with ANSI/ADA requirements. The setting time of Hybrid Root SEAL did not comply with the ANSI/ADA requirements. The dimensional change of all sealers was greater than the values considered acceptable by ANSI/ADA. The spectrometry analysis showed significant Ca2+ ion release for AH Plus. In the SEM analysis, Hybrid Root SEAL presented spherical monomers smaller in size than those of AH Plus and Epiphany SE. Conclusions: It might be concluded that the physicochemical properties of the tested sealers conformed to the ANSI/ADA (2000) standard, except for the setting time of Hybrid Root SEAL and the dimensional change of all sealers, which did not fulfill the ANSI/ADA requirements. (J Endod 2010;36:1531-1536)
Abstract:
A small disturbance in an axisymmetric, bathtub-like flow with strong vorticity is considered and the asymptotic representation of the solution is found. It is shown that if the disturbance is smaller than a certain critical scale, the conventional strong vortex approximation cannot describe the field generated by the disturbance, not only in the vicinity of the disturbance but also at distances much larger than the critical scale. (C) 2001 American Institute of Physics.
Abstract:
This paper describes a rainfall simulator developed for field and laboratory studies that gives great flexibility in the plot size covered, that is highly portable and able to be used on steep slopes, and that is economical in its water use. The simulator uses Veejet 80100 nozzles mounted on a manifold, with the nozzles controlled to sweep to and fro across a plot width of 1.5 m. Effective rainfall intensity is controlled by the frequency with which the nozzles sweep. Spatial uniformity of rainfall on the plots is high, with coefficients of variation (CV) on the body of the plot being 8-10%. Use of the simulator for erosion and infiltration measurements is discussed.
Abstract:
A simple method is provided for calculating transport rates of not too fine (d_50 ≥ 0.20 mm) sand under sheet flow conditions. The method consists of a Meyer-Peter-type transport formula operating on a time-varying Shields parameter, which accounts for both acceleration-asymmetry and boundary layer streaming. While velocity moment formulae, e.g., ⟨Q_s⟩ = constant × ⟨u_∞³⟩, calibrated against U-tube measurements, fail spectacularly under some real waves (Ribberink, J.S., Dohmen-Janssen, C.M., Hanes, D.M., McLean, S.R., Vincent, C., 2000. Near-bed sand transport mechanisms under waves. Proc. 27th Int. Conf. Coastal Engineering, Sydney, ASCE, New York, pp. 3263-3276, Fig. 12), the new method predicts the real wave observations equally well. The reason that the velocity moment formulae fail under these waves is partly the presence of boundary layer streaming and partly the saw-tooth asymmetry, i.e., the front of the waves being steeper than the back. Waves with saw-tooth asymmetry may generate a net landward sediment transport even if ⟨u_∞³⟩ = 0, because of the more abrupt acceleration under the steep front. More abrupt accelerations are associated with thinner boundary layers and greater pressure gradients for a given velocity magnitude. The two real wave effects are incorporated in a model of the form Q_s(t) = Q_s[θ(t)] rather than Q_s(t) = Q_s[u_∞(t)], i.e., by expressing the transport rate in terms of an instantaneous Shields parameter rather than in terms of the free stream velocity, and accounting for both streaming and accelerations in the θ(t) calculations. The instantaneous friction velocities u_*(t) and subsequently θ(t) are calculated as follows. Firstly, a linear filter incorporating the grain roughness friction factor f_2.5 and a phase angle φ_τ is applied to u_∞(t). This delivers u_*(t), which is used to calculate an instantaneous grain roughness Shields parameter θ_2.5(t). Secondly, a constant bed shear stress is added which corresponds to the streaming-related bed shear stress −ρ⟨ũw̃⟩_∞. The method can be applied to any u_∞(t) time series, but further experimental validation is recommended before application to conditions that differ strongly from the ones considered below. The method is not recommended for rippled beds or for sheet flow with typical prototype wave periods and d_50 < 0.20 mm. In such scenarios, time lags related to vertical sediment movement become important, and these are not considered by the present model. (C) 2002 Elsevier Science B.V. All rights reserved.
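A sketch of the two-step θ(t) calculation, under stated assumptions: the linear filter is taken in the form u_*(t) = sqrt(f_2.5/2)·[cos(φ_τ)·u_∞(t) + sin(φ_τ)·(1/ω)·du_∞/dt], a common phase-shifting form for wave boundary layers that the abstract itself does not spell out; the Meyer-Peter coefficient, streaming stress value, and wave parameters are likewise illustrative:

import numpy as np

s, g, d50 = 2.65, 9.81, 0.00025       # relative density, gravity, grain size (m)
f25 = 0.01                             # grain roughness friction factor (assumed)
phi = np.deg2rad(45.0)                 # phase lead of bed shear stress (assumed)
rho = 1025.0                           # water density (kg/m^3)
T = 8.0                                # wave period (s)
omega = 2.0 * np.pi / T

t = np.linspace(0.0, T, 400, endpoint=False)
u_inf = 1.0 * np.cos(omega * t) + 0.3 * np.cos(2.0 * omega * t)  # skewed free stream

# Step 1: linear filter -> u_*(t), then grain roughness Shields parameter
du = np.gradient(u_inf, t)
u_star = np.sqrt(f25 / 2.0) * (np.cos(phi) * u_inf + np.sin(phi) * du / omega)
theta = u_star * np.abs(u_star) / ((s - 1.0) * g * d50)

# Step 2: add the constant streaming-related shear stress (value assumed,
# standing in for -rho*<u~w~>_inf)
tau_streaming = 0.5                    # N/m^2
theta += tau_streaming / (rho * (s - 1.0) * g * d50)

# Meyer-Peter-type formula on the time-varying Shields parameter
# (coefficient 8.0 is the classical MPM value, used here for illustration)
theta_cr = 0.05
excess = np.maximum(np.abs(theta) - theta_cr, 0.0)
q_s = 8.0 * np.sign(theta) * excess ** 1.5 * np.sqrt((s - 1.0) * g * d50 ** 3)
net_transport = q_s.mean()             # wave-averaged (net) transport rate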
Abstract:
The oscillatory baffled reactor (OBR) can be used to produce particles with controlled size and morphology, in batch or continuous flow. This is due to the effect of the superimposed oscillations that radially mix the fluid but still allow plug-flow (or close to plug-flow) behaviour in a continuous system. This mixing, combined with a nearly constant level of turbulence intensity in the reactor, leads to tight droplet and subsequent product particle size distributions. By applying population balance equations together with experimental droplet size distributions, breakage rates of droplets can be determined, and this is a useful tool for understanding product engineering in OBRs. (C) 2002 Elsevier Science B.V. All rights reserved.
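A minimal sketch of the population balance idea, assuming a discretized pure-breakage equation dN_i/dt = −g_i·N_i + Σ_{j>i} b_ij·g_j·N_j with binary breakage into the next class down; in practice the class-wise breakage rates g_i would be adjusted until the simulated size distributions match the measured ones (all values below are placeholders):

import numpy as np

n_bins = 5
g = np.array([0.0, 0.02, 0.05, 0.1, 0.2])   # breakage rate per class (1/s), assumed
b = np.zeros((n_bins, n_bins))               # b[i, j]: daughters in class i from class j
for j in range(1, n_bins):
    b[j - 1, j] = 2.0                        # binary breakage: two daughters, one class down

N = np.array([0.0, 0.0, 0.0, 0.0, 1000.0])   # initial droplet counts per class
dt, steps = 0.1, 600
for _ in range(steps):
    dN = -g * N + b @ (g * N)                # loss by breakage + gain from larger classes
    N = N + dt * dN                          # explicit Euler time step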
Abstract:
Difference equations which discretely approximate boundary value problems for second-order ordinary differential equations are analysed. It is well known that the existence of solutions to the continuous problem does not necessarily imply the existence of solutions to the discrete problem and, even if solutions to the discrete problem are guaranteed, they may be unrelated and inapplicable to the continuous problem. Analogues of theorems for the continuous problem regarding a priori bounds and existence of solutions are formulated for the discrete problem. Solutions to the discrete problem are shown to converge to solutions of the continuous problem in an aggregate sense. An example which arises in the study of the finite deflections of an elastic string under a transverse load is investigated. The earlier results are applied to show the existence of a solution, and sufficient estimates on the step size are presented. (C) 2003 Elsevier Science Ltd. All rights reserved.
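A brief sketch of the kind of discrete problem analysed, assuming a central-difference discretization of y'' = f(x, y, y') with homogeneous Dirichlet conditions solved as a nonlinear algebraic system; the load term below is purely illustrative, not the one from the elastic string example:

import numpy as np
from scipy.optimize import fsolve

n = 50                       # interior grid points; h is the step size
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

def f(x, y, yp):
    return -10.0 * np.sqrt(1.0 + yp ** 2)    # illustrative transverse-load term

def residual(y):
    ye = np.concatenate(([0.0], y, [0.0]))   # apply y(0) = y(1) = 0
    ypp = (ye[2:] - 2.0 * ye[1:-1] + ye[:-2]) / h ** 2   # central second difference
    yp = (ye[2:] - ye[:-2]) / (2.0 * h)                  # central first difference
    return ypp - f(x, ye[1:-1], yp)

y = fsolve(residual, np.zeros(n))            # solve the discrete problem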
Abstract:
A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during the surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, with n being the number of vertebrae involved. As such, we are faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. The fitness evaluation requires one computationally intensive Finite Element Analysis per candidate solution and, thus, a parallel implementation of the Genetic Algorithm is developed.
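A hypothetical sketch of such a permutation Genetic Algorithm, with a placeholder fitness standing in for the expensive geometrically non-linear finite element analysis; the operators (order crossover, swap mutation) and all parameters are assumptions, not the study's own choices:

import random

n = 10                                   # number of tightening steps (illustrative)

def fitness(perm):
    # Placeholder cost: in the study this would be the corrective force
    # returned by one finite element analysis of the tightening sequence.
    return sum(abs(a - b) for a, b in zip(perm, perm[1:]))

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(n), 2))
    hole = set(p1[a:b])
    rest = [gene for gene in p2 if gene not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [random.sample(range(n), n) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                     # these fitness calls are what would be parallelized
    children = [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]
    pop = elite + children
best = min(pop, key=fitness)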
Abstract:
This paper proposes a simulated annealing (SA) approach to address energy resources management from the point of view of a virtual power player (VPP) operating in a smart grid. Distributed generation, demand response, and gridable vehicles are intelligently managed on a multiperiod basis according to V2G users' profiles and requirements. Apart from using the aggregated resources, the VPP can also purchase additional energy from a set of external suppliers. The paper includes a case study for a 33-bus distribution network with 66 generators, 32 loads, and 1000 gridable vehicles. The results of the SA approach are compared with a methodology based on mixed-integer nonlinear programming. A variation of this method, using AC load flow, is also used, and the results are compared with the SA solution using network simulation. The proposed SA approach proved able to obtain good solutions in low execution times, providing VPPs with suitable decision support for the management of a large number of distributed resources.
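A minimal simulated annealing sketch for a scheduling-type cost function; the encoding (one dispatch level per resource and period) and the cost function are placeholders rather than the paper's VPP formulation, and all parameters are illustrative:

import math, random

periods, resources = 24, 8

def cost(x):
    # Placeholder: the real objective would price generation, demand
    # response, V2G and external supplier energy against requirements.
    return sum((sum(row) - 1.0) ** 2 for row in x)

def neighbour(x):
    """Perturb one resource in one period, clipped to [0, 1]."""
    y = [row[:] for row in x]
    t, r = random.randrange(periods), random.randrange(resources)
    y[t][r] = min(1.0, max(0.0, y[t][r] + random.uniform(-0.1, 0.1)))
    return y

x = [[random.random() for _ in range(resources)] for _ in range(periods)]
temp, alpha = 1.0, 0.995                 # initial temperature, geometric cooling
for _ in range(5000):
    y = neighbour(x)
    d = cost(y) - cost(x)
    if d < 0 or random.random() < math.exp(-d / temp):
        x = y                            # accept improvement or uphill move
    temp *= alpha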
Abstract:
The future scenarios for the operation of smart grids are likely to include a large diversity of players of different types and sizes. With control and decision making being decentralized over the network, intelligence should also be decentralized, so that every player is able to act in the market environment. In this new context, aggregator players, enabling medium, small, and even micro-size players to act in a competitive environment, will be very relevant. Virtual Power Players (VPP) and single players must optimize their energy resource management in order to accomplish their goals. This is relatively easy for larger players, which have the financial means to access adequate decision support tools for their optimal resource scheduling. Smaller players, however, have difficulty accessing this kind of tool, so they must be offered alternative methods to support their decisions. This paper presents a methodology, based on Artificial Neural Networks (ANN), intended to support smaller players' resource scheduling. The methodology uses a training set built from the energy resource scheduling solutions obtained with a reference optimization methodology, in this case mixed-integer non-linear programming (MINLP). The trained network is able to achieve good scheduling results while requiring modest computational means.
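A sketch of the training idea on synthetic data, using a small multilayer perceptron; the feature and output dimensions, and the linear stand-in for the MINLP-generated schedules, are illustrative assumptions:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_cases, n_inputs, n_outputs = 500, 12, 8   # e.g., forecasts in, dispatch out

X = rng.random((n_cases, n_inputs))          # load/price/availability features
Y = X @ rng.random((n_inputs, n_outputs))    # stand-in for MINLP schedule solutions

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
ann.fit(X[:400], Y[:400])                    # offline training on solved cases
schedule = ann.predict(X[400:401])           # near-instant schedule for a new case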
Abstract:
This paper presents a methodology to choose the distribution network reconfiguration that presents the lowest power losses. The proposed methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modeling for system component outage parameters. The proposed hybrid method, using fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models, allows capturing both the randomness and the fuzziness of component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a logic programming algorithm is applied to get all possible reconfigurations for each system state. To evaluate the line flows and bus voltages, and to identify any overloading and/or voltage violation, an AC load flow is applied, and the feasible reconfiguration with the lowest power losses is selected. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 115-bus distribution network.
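A schematic sketch of the sampling-and-selection loop, reducing each fuzzy failure rate to an interval for sampling and using a placeholder where a real AC load flow would return losses and feasibility flags; all names and numbers are illustrative:

import random

lines = ["L1", "L2", "L3", "L4"]
fuzzy_failure = {l: (0.01, 0.05) for l in lines}     # outage probability interval per line
reconfigs = [{"open": ["L1"]}, {"open": ["L2"]}, {"open": ["L3"]}]

def ac_load_flow(state, cfg):
    # Placeholder: a real implementation computes losses and checks for
    # overloads and voltage violations in this configuration and state.
    losses = random.uniform(0.5, 2.0)
    feasible = cfg["open"][0] not in state["failed"]
    return losses, feasible

for _ in range(1000):                                 # Monte Carlo system states
    failed = [l for l in lines
              if random.random() < random.uniform(*fuzzy_failure[l])]
    state = {"failed": failed}
    results = [(cfg, *ac_load_flow(state, cfg)) for cfg in reconfigs]
    ok = [(cfg, loss) for cfg, loss, feas in results if feas]
    if ok:
        best_cfg, best_loss = min(ok, key=lambda t: t[1])   # lowest-loss feasible option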
Abstract:
Modern real-time systems increasingly generate heavy and dynamic computational loads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the move from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply indicating potential parallel regions within their applications. All these annotations are treated by the system merely as hints, and they can be ignored and replaced by equivalent sequential constructs by the language itself. Thus, how the computation is actually subdivided and mapped onto the various processors is the responsibility of the compiler and of the underlying computing system. By removing this burden from the programmer, programming complexity is considerably reduced, which usually translates into increased productivity. However, unless the underlying scheduling mechanism is simple and fast, so as to keep the overall overhead low, the benefits of generating such fine-grained parallelism will be merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space, and communication requirements. However, these algorithms contemplate neither timing constraints nor any other form of task prioritization, which prevents them from being directly applied to real-time systems. Moreover, they are traditionally implemented in the language runtime, thus creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while keeping the fundamental principles that have produced such good results. Very briefly, the single conventional task management queue (deque) is replaced by a queue of deques, ordered by increasing task priority. On top of this we apply the well-known G-EDF dynamic scheduling algorithm, blend the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to evaluate in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, owing to the complexity of its internal functions and the strong interdependencies between its various subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.
Accordingly, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between various synchronization mechanisms. The experimental results show that RTWS, compared with other practical work on the dynamic scheduling of tasks with timing constraints, significantly reduces the scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic balancing of the system load, and does so at low cost. However, during the evaluation a flaw was detected in the RTWS implementation, in the way it too easily gives up stealing work, which causes periods of inactivity on the CPU in question when overall system utilization is low. Although the work focused on keeping the scheduling cost low and on achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadline in the experiments carried out. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not provide a clear picture of the impact of one versus the other. Overall, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
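An illustrative sketch, in Python rather than kernel C, of the core data structure described above: a priority-ordered queue of deques (earliest deadline first), with the owner popping from one end and thieves stealing from the opposite end of the most urgent non-empty deque; everything beyond that structure is an assumption, not the thesis's implementation:

import collections, bisect

class RTWSQueue:
    def __init__(self):
        self.deadlines = []                    # absolute deadlines, kept sorted
        self.deques = {}                       # deadline -> deque of ready tasks

    def push(self, task, deadline):
        if deadline not in self.deques:
            bisect.insort(self.deadlines, deadline)
            self.deques[deadline] = collections.deque()
        self.deques[deadline].append(task)

    def pop_local(self):
        """Owner takes a task from its own end of the most urgent deque."""
        for d in self.deadlines:
            if self.deques[d]:
                return self.deques[d].pop()
        return None

    def steal(self):
        """A thief takes from the opposite end of the most urgent deque,
        so owner and thieves contend as little as possible."""
        for d in self.deadlines:
            if self.deques[d]:
                return self.deques[d].popleft()
        return None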
Abstract:
We study the effects of product differentiation in a Stackelberg model with demand uncertainty for the first mover. We do an ex-ante and ex-post analysis of the profits of the leader and of the follower firms in terms of product differentiation and of the demand uncertainty. We show that even with small uncertainty about the demand, the follower firm can achieve greater profits than the leader, if their products are sufficiently differentiated. We also compute the probability of the second firm having higher profit than the leading firm, subsequently showing the advantages and disadvantages of being either the leader or the follower firm.
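A worked sketch of the comparison, assuming linear inverse demands p_i = a + ε − q_i − b·q_j with differentiation parameter b and an additive demand shock ε observed by the follower but not by the leader; the functional forms and parameters are illustrative, not the paper's:

import numpy as np

a, b, c = 10.0, 0.5, 1.0               # intercept, differentiation (0..1), unit cost
rng = np.random.default_rng(1)
eps = rng.normal(0.0, 1.0, 100000)     # demand shock, unknown to the first mover

# Leader commits using expected demand; follower best-responds knowing eps.
qL = (a - c) * (2.0 - b) / (2.0 * (2.0 - b * b))   # Stackelberg leader quantity
qF = (a + eps - c - b * qL) / 2.0                   # follower's best response

pL = a + eps - qL - b * qF             # realized prices
pF = a + eps - qF - b * qL
profitL = (pL - c) * qL
profitF = (pF - c) * qF
prob_follower_wins = np.mean(profitF > profitL)     # chance follower out-earns leader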