999 results for AC model
Abstract:
"21 January 1980."
Abstract:
"October 1968."
Abstract:
"February 1968."
Abstract:
"June 1977."
Abstract:
Includes index.
Abstract:
"November 1970."
Abstract:
"March 1979."
Abstract:
"May 1967."
Abstract:
"August 1963."
Abstract:
The presence of toxic cyanobacteria in drinking water reservoirs creates the need to develop treatment methods for the 'safe' removal of their associated toxins. Chlorine has been shown to successfully remove a range of cyanotoxins including microcystins, cylindrospermopsin and saxitoxins. Each cyanotoxin requires specific treatment parameters, particularly solution pH and free chlorine residual. However, there has so far been no investigation into the toxicological effect of solutions treated for the removal of these cyanotoxins by chlorine. Using the P53(def) transgenic mouse model, male and female C57BL/6J hybrid mice were used to investigate potential cancer-inducing effects of such oral dosing solutions. Both purified cyanotoxins and toxic cell-free cyanobacterial extract solutions were chlorinated and administered in drinking water over 90 and 170 days, respectively. No increase in cancer was found in any treatment. The parent cyanotoxins, microcystins, cylindrospermopsin and saxitoxins, were readily removed by chlorine. There was no significant increase in the disinfection byproducts trihalomethanes or haloacetic acids; the levels found were well below guideline values. Histological examination identified no effect of the treatment solutions except in male mice treated with chlorinated cylindrospermopsin (as a cell-free extract): in this instance, 40% of the males were found to have fatty vacuolation in their livers, of unknown cause. It is recommended that further toxicology be undertaken on chlorinated cyanobacterial solutions, particularly for non-genotoxic carcinogenic compounds, using for example the Tg.AC transgenic mouse model. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability-based studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of system component outage parameters. Using statistical records allows developing the fuzzy membership functions of the system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation, based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained, a network contingency analysis is performed to identify any overloading or voltage violations in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, which reschedules generation to alleviate constraint violations in the states identified by the contingency analysis, while avoiding load curtailment if possible or, otherwise, minimizing the total load curtailment. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the costs of achieving that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in the identification of critical components and in the planning of future investments in the transmission power system. A case study based on the IEEE 24-bus Reliability Test System (RTS-1996) is presented to illustrate the application of the proposed methodology in detail.
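The core of the approach above is combining fuzzy outage parameters with Monte Carlo state sampling. Below is a minimal Python sketch of that sampling step only, assuming triangular membership functions for component unavailability and an alpha-cut-based sampling heuristic; all names and numbers are illustrative, and a full FMC-TRel study would additionally run contingency analysis and OPF-based remedial actions on every sampled state.

```python
# Minimal sketch of fuzzy Monte Carlo state sampling for an
# FMC-TRel-style reliability study. Names, numbers, and the triangular
# membership functions are illustrative assumptions, not the thesis code.
import random

def sample_outage_rate(low, mode, high):
    """Draw an unavailability from a triangular fuzzy number by picking
    a random alpha-cut, then a point inside that cut's interval."""
    alpha = random.random()
    left = low + alpha * (mode - low)
    right = high - alpha * (high - mode)
    return random.uniform(left, right)

def simulate_system_state(components):
    """Sample an up/down (True = out) state for every component; each
    entry is a (low, mode, high) fuzzy unavailability."""
    return {name: random.random() < sample_outage_rate(*fuzzy)
            for name, fuzzy in components.items()}

# Two hypothetical transmission lines with fuzzy unavailabilities.
components = {"line_1": (0.01, 0.02, 0.04), "line_2": (0.02, 0.03, 0.05)}
n_trials = 10_000
# A real study would run contingency analysis + OPF per sampled state;
# here we only count states containing at least one outage.
outage_states = sum(any(simulate_system_state(components).values())
                    for _ in range(n_trials))
print(f"fraction of sampled states with an outage: {outage_states / n_trials:.3f}")
```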
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. Network cost allocation, traditionally used in transmission networks, should be adapted to distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology to distribute the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with a large penetration of DER is used to illustrate the application of the proposed model.
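As a concrete illustration of the third phase, the sketch below applies an MW-mile-style allocation, assuming the per-line flow contributions of each resource were already produced by phase two (e.g. by Bialek's tracing). The network data, resource names, and the proportional charging rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of MW-mile cost allocation given traced flows.
# All data and names are hypothetical.
line_cost = {"L1": 1200.0, "L2": 800.0}   # cost of each line per period
line_capacity = {"L1": 10.0, "L2": 6.0}   # MW rating of each line
# flow[resource][line]: MW of each line's flow traced to each resource
# (assumed output of a Kirschen/Bialek-style tracing step).
flow = {
    "DG_1":   {"L1": 4.0, "L2": 1.0},
    "EV_V2G": {"L1": 1.0, "L2": 2.0},
}

def mw_mile_charges(flow, line_cost, line_capacity):
    """Charge each resource a share of each line's cost proportional to
    its traced flow over that line's capacity."""
    charges = {r: 0.0 for r in flow}
    for r, per_line in flow.items():
        for line, mw in per_line.items():
            charges[r] += line_cost[line] * mw / line_capacity[line]
    return charges

print(mw_mile_charges(flow, line_cost, line_capacity))
# e.g. DG_1 pays 1200*4/10 + 800*1/6 ≈ 613.33
```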
Abstract:
This article describes the main approaches adopted in a study focused on planning industrial estates on a sub-regional scale. The study was supported by an agent-based model that uses firms as agents to assess the attractiveness of industrial estates. The simulation was implemented in the NetLogo toolkit, with an environment representing a geographical space. Three scenarios and four hypotheses were used in the simulation to test the impact of different policies on the attractiveness of industrial estates. Policies were distinguished by the level of municipal coordination at which they were implemented and by the type of intervention. In the model, the attractiveness of industrial estates was based on the level of facilities, amenities, and accessibility and on the price of land in each industrial estate. Firms are able to move and relocate whenever they find an attractive estate; relocating firms were selected by their size, location, and distance to an industrial estate. Results show that a coordinated policy among municipalities is the most efficient policy for promoting advanced-qualified estates: in these scenarios, more industrial estates became attractive, more firms relocated, and more vacant lots were occupied. Furthermore, the results also indicate that the promotion of widespread industrial estates with poor-quality infrastructure and amenities is an inefficient policy for attracting firms.
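To make the relocation mechanism concrete, here is a minimal sketch of an attractiveness score and move rule of the kind described above. The study's model runs in NetLogo; this Python version, its equal weights, and its data are illustrative assumptions only.

```python
# Minimal sketch of a firm's relocation decision: score each estate on
# facilities, amenities, accessibility and land price, and move when a
# candidate beats the current location. Weights and data are assumed.
from dataclasses import dataclass

@dataclass
class Estate:
    name: str
    facilities: float     # 0..1
    amenities: float      # 0..1
    accessibility: float  # 0..1
    land_price: float     # 0..1, higher = more expensive

def attractiveness(e: Estate) -> float:
    # Equal weights on the quality factors; land price enters negatively.
    return (e.facilities + e.amenities + e.accessibility) / 3 - 0.5 * e.land_price

estates = [
    Estate("advanced_estate", 0.9, 0.8, 0.7, 0.6),
    Estate("basic_estate", 0.3, 0.2, 0.5, 0.2),
]
current = estates[1]
best = max(estates, key=attractiveness)
if attractiveness(best) > attractiveness(current):
    print(f"firm relocates from {current.name} to {best.name}")
```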
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
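To illustrate the two-part idea, the sketch below fits a logistic part for the probability of any consumption on a recall day and a linear part for the (log) amount among consumers, then combines them into a calibrated intake. The simulated data, variable names, and the simple back-transformation are illustrative assumptions, not the EPIC analysis code.

```python
# Minimal sketch of a two-part calibration model on simulated data:
# part 1 models P(nonzero recall), part 2 models the consumed amount
# on the log scale; calibrated intake is their product.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 2000
ffq = rng.gamma(2.0, 50.0, n)  # hypothetical self-reported intake (FFQ)
# Did the subject consume the food on the 24-hour recall day?
consumed = rng.random(n) < 1 / (1 + np.exp(-(0.01 * ffq - 1)))
amount = np.where(consumed,
                  np.exp(0.8 * np.log(ffq + 1) + rng.normal(0, 0.3, n)),
                  0.0)

X = ffq.reshape(-1, 1)
# Part 1: probability of a nonzero 24-hour recall.
part1 = LogisticRegression().fit(X, consumed)
# Part 2: consumed amount (log scale) among consumers only.
part2 = LinearRegression().fit(X[consumed], np.log(amount[consumed]))

p = part1.predict_proba(X)[:, 1]
mu = np.exp(part2.predict(X))  # naive back-transform (ignores smearing)
calibrated = p * mu            # expected usual intake per subject
print(calibrated[:5].round(1))
```

In a full analysis, the calibrated intake would then replace the error-prone dietary variable in the disease model (e.g. a Cox regression for all-cause mortality).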