973 results for Distribution transformer modeling
Abstract:
We review and compare four broad categories of spatially-explicit modelling approaches currently used to understand and project changes in the distribution and productivity of living marine resources including: 1) statistical species distribution models, 2) physiology-based, biophysical models of single life stages or the whole life cycle of species, 3) food web models, and 4) end-to-end models. Single pressures are rare and, in the future, models must be able to examine multiple factors affecting living marine resources such as interactions between: i) climate-driven changes in temperature regimes and acidification, ii) reductions in water quality due to eutrophication, iii) the introduction of alien invasive species, and/or iv) (over-)exploitation by fisheries. Statistical (correlative) approaches can be used to detect historical patterns which may not be relevant in the future. Advancing predictive capacity of changes in distribution and productivity of living marine resources requires explicit modelling of biological and physical mechanisms. New formulations are needed which (depending on the question) will need to strive for more realism in ecophysiology and behaviour of individuals, life history strategies of species, as well as trophodynamic interactions occurring at different spatial scales. Coupling existing models (e.g. physical, biological, economic) is one avenue that has proven successful. However, fundamental advancements are needed to address key issues such as the adaptive capacity of species/groups and ecosystems. The continued development of end-to-end models (e.g., physics to fish to human sectors) will be critical if we hope to assess how multiple pressures may interact to cause changes in living marine resources including the ecological and economic costs and trade-offs of different spatial management strategies. 
Given the strengths and weaknesses of the various types of models reviewed here, confidence in projections of changes in the distribution and productivity of living marine resources will be increased by assessing model structural uncertainty through biological ensemble modelling.
Abstract:
The blast furnace is the main ironmaking production unit in the world; it converts iron ore, with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, due to the high temperatures and pressures, the hostile atmosphere, and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulation of the distribution of the burden material with a bell-less top charging system. The model is fast and can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified against findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace.
This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and its voidage estimated: it was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. This revision was designed using results from DEM simulations and charging experiments for several charging programs. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
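The optimization step described above pairs a non-differentiable objective with a genetic algorithm. The following is a minimal sketch of that idea, not the thesis model: the "furnace response" is a made-up linear toy function, and the target temperatures, parameter ranges, and GA settings are all illustrative assumptions.

```python
import random

# Toy stand-in for the thesis workflow: evolve a vector of charging
# parameters so that a simulated radial gas temperature profile matches
# a target profile. The "furnace model" below is a made-up response;
# the real objective came from the burden and gas distribution models.
random.seed(1)

TARGET = [120.0, 150.0, 200.0, 260.0, 300.0]  # target temperatures (arbitrary units)

def simulate(params):
    # hypothetical response: each radial position reacts linearly to the parameters
    return [sum(p * (i + 1) * w for i, p in enumerate(params))
            for w in (1, 1.25, 1.7, 2.2, 2.5)]

def fitness(params):
    # negative sum of squared deviations from the target profile
    return -sum((s - t) ** 2 for s, t in zip(simulate(params), TARGET))

def evolve(pop_size=40, generations=200, n_params=3):
    pop = [[random.uniform(0, 50) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            if random.random() < 0.3:                    # Gaussian mutation
                i = random.randrange(n_params)
                child[i] += random.gauss(0, 2)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

A population-based search like this tolerates the discontinuities that rule out gradient methods, which is the property the thesis exploits.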
Abstract:
Chiasma and crossover are two related biological processes of great importance in the understanding of genetic variation. The study of these processes is straightforward in organisms where all products of meiosis are recovered and can be observed. This is not the case in mammals. Our understanding of these processes depends on our ability to model them. In this study I describe the biological processes that underlie chiasma and crossover, as well as the two main inference problems associated with them: i) in mammals we only recover one of the four products of meiosis, and ii) in general, we do not observe where the crossovers actually happen, but only find an interval containing them (type-2 censored information). An NPML (nonparametric maximum likelihood) estimator was proposed in this work and used to compare chromosome length and chromosome expansion across the crosses.
Abstract:
The objective of this study was to estimate the spatial distribution of work accident risk in the informal work market in the urban zone of an industrialized city in southeast Brazil and to examine concomitant effects of age, gender, and type of occupation after controlling for spatial risk variation. The basic methodology adopted was that of a population-based case-control study with particular interest focused on the spatial location of work. Cases were all casual workers in the city suffering work accidents during a one-year period; controls were selected from the source population of casual laborers by systematic random sampling of urban homes. The spatial distribution of work accidents was estimated via a semiparametric generalized additive model with a nonparametric bidimensional spline of the geographical coordinates of cases and controls as the nonlinear spatial component, and including age, gender, and occupation as linear predictive variables in the parametric component. We analyzed 1,918 cases and 2,245 controls between 1/11/2003 and 31/10/2004 in Piracicaba, Brazil. Areas of significantly high and low accident risk were identified in relation to mean risk in the study region (p < 0.01). Work accident risk for informal workers varied significantly in the study area. Significant age, gender, and occupational group effects on accident risk were identified after correcting for this spatial variation. A good understanding of high-risk groups and high-risk regions underpins the formulation of hypotheses concerning accident causality and the development of effective public accident prevention policies.
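The study above estimates a spatial risk surface from case and control locations via a semiparametric GAM. The sketch below is a deliberately simpler substitute for that method, a kernel-based case-control relative-risk surface (case density divided by control density), on entirely synthetic coordinates; the city layout, bandwidth, and cluster location are all invented for illustration.

```python
import numpy as np

# Illustrative sketch, NOT the authors' semiparametric GAM: a Gaussian-kernel
# density of cases divided by that of controls gives a spatial relative-risk
# surface, evaluated on a grid over a synthetic study region.
rng = np.random.default_rng(0)

# synthetic city: controls uniform, half the cases clustered near a "high-risk" spot
controls = rng.uniform(0, 10, size=(500, 2))
cases = np.vstack([rng.normal([3, 7], 0.8, size=(150, 2)),
                   rng.uniform(0, 10, size=(150, 2))])

def kde(points, grid, bandwidth=1.0):
    # Gaussian kernel density estimate at each grid location
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(1)

xs, ys = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
grid = np.column_stack([xs.ravel(), ys.ravel()])

risk = kde(cases, grid) / (kde(controls, grid) + 1e-12)  # relative-risk surface
hotspot = grid[np.argmax(risk)]                          # highest estimated risk
```

The GAM approach in the paper additionally adjusts for age, gender, and occupation in a parametric component, which a plain density ratio cannot do.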
Abstract:
We used the results of the Spanish Otter Survey of 1994–1996, a Geographic Information System, and stepwise multiple logistic regression to model otter presence/absence data in the continental Spanish UTM 10 × 10 km squares. Geographic situation, indicators of human activity such as highways and major urban centers, and environmental variables related to productivity, water availability, altitude, and environmental energy were included in a logistic model that correctly classified about 73% of otter presences and absences. We extrapolated the model to the adjacent territory of Portugal, and increased the model’s spatial resolution by extrapolating it to 1 × 1 km squares in the whole Iberian Peninsula. The model turned out to be rather flexible, predicting, for instance, the species to be very restricted to the courses of rivers in some areas, and more widespread in others. This allowed us to determine areas where otter populations may be more vulnerable to habitat changes or harmful human interventions.
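The core fitting step in a presence/absence model of this kind is a logistic regression. The sketch below fits one by Newton–Raphson (IRLS) on synthetic data; the predictors, coefficients, and sample size are invented stand-ins, not the otter survey variables.

```python
import numpy as np

# Minimal logistic-regression sketch for presence/absence in grid squares
# (hypothetical data, not the Spanish Otter Survey).
rng = np.random.default_rng(42)

n = 2000
X = np.column_stack([np.ones(n),          # intercept
                     rng.normal(size=n),  # e.g. standardized altitude
                     rng.normal(size=n)]) # e.g. standardized water availability
true_beta = np.array([-0.5, 1.2, -0.8])
presence = rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))

beta = np.zeros(3)
for _ in range(25):                       # Newton-Raphson / IRLS iterations
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (presence - p)
    hess = (X * W[:, None]).T @ X
    beta += np.linalg.solve(hess, grad)

p = 1 / (1 + np.exp(-X @ beta))
accuracy = np.mean((p > 0.5) == presence) # fraction correctly classified
```

The "correctly classified about 73%" figure in the abstract is exactly this kind of in-sample classification rate at a 0.5 probability threshold.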
Abstract:
Onion (Allium cepa) is one of the most cultivated and consumed vegetables in Brazil, and its importance is due to the large labor force involved. One of the main pests that affect this crop is the onion thrips (Thrips tabaci), but the spatial distribution of this insect, although important, has not been considered in crop management recommendations, experimental planning or sampling procedures. Our purpose here is to consider statistical tools to detect and model spatial patterns of the occurrence of the onion thrips. In order to characterize its spatial distribution pattern, a survey was carried out to record the number of insects in each development phase on onion plant leaves, on different dates and sample locations, in four rural properties with neighboring farms under different infestation levels and planting methods. The Mantel randomization test proved to be a useful tool to test for spatial correlation which, when detected, was described by a mixed spatial Poisson model with a geostatistical random component and parameters allowing for a characterization of the spatial pattern, as well as the production of prediction maps of susceptibility to levels of infestation throughout the area.
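The Mantel randomization test mentioned above is straightforward to implement: correlate a geographic-distance matrix with a count-dissimilarity matrix, then compare the observed correlation against permutations of the sample locations. The data below are synthetic (counts increase along one axis, so spatial structure exists by construction), not the thrips survey.

```python
import numpy as np

# Mantel randomization test for spatial correlation on synthetic count data.
rng = np.random.default_rng(7)

coords = rng.uniform(0, 10, size=(30, 2))
counts = rng.poisson(2 + coords[:, 0])            # infestation grows with x

def dist_matrix(v):
    if v.ndim == 1:
        return np.abs(v[:, None] - v[None, :])    # count dissimilarity
    return np.sqrt(((v[:, None, :] - v[None, :, :]) ** 2).sum(-1))

D_geo = dist_matrix(coords)
D_cnt = dist_matrix(counts.astype(float))
iu = np.triu_indices(30, k=1)                     # upper triangle, no diagonal

def mantel_r(A, B):
    return np.corrcoef(A[iu], B[iu])[0, 1]

r_obs = mantel_r(D_geo, D_cnt)
perm_r = []
for _ in range(999):
    idx = rng.permutation(30)                     # permute sample locations
    perm_r.append(mantel_r(D_geo, D_cnt[np.ix_(idx, idx)]))
p_value = (1 + sum(r >= r_obs for r in perm_r)) / 1000
```

Permuting rows and columns of one matrix together preserves its internal structure while breaking any link to the other, which is what makes the null distribution valid.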
Abstract:
We describe an estimation technique for biomass burning emissions in South America based on a combination of remote-sensing fire products and field observations, the Brazilian Biomass Burning Emission Model (3BEM). For each fire pixel detected by remote sensing, the mass of the emitted tracer is calculated based on field observations of fire properties related to the type of vegetation burning. The burnt area is estimated from the instantaneous fire size retrieved by remote sensing, when available, or from statistical properties of the burn scars. The sources are then spatially and temporally distributed and assimilated daily by the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS) in order to perform the prognosis of related tracer concentrations. Three other biomass burning inventories, including GFEDv2 and EDGAR, are simultaneously used to compare the emission strength in terms of the resultant tracer distribution. We also assess the effect of using the daily time resolution of fire emissions by including runs with monthly-averaged emissions. We evaluate the performance of the model using the different emission estimation techniques by comparing the model results with direct measurements of carbon monoxide both near-surface and airborne, as well as remote sensing derived products. The model results obtained using the 3BEM methodology of estimation introduced in this paper show relatively good agreement with the direct measurements and MOPITT data product, suggesting the reliability of the model at local to regional scales.
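The per-pixel emission calculation described above follows the standard form: emitted mass equals burnt area times biomass density times a combustion factor times an emission factor. The numbers below are illustrative placeholders, not 3BEM's field-observation tables.

```python
# Illustrative per-fire-pixel emission estimate in the spirit of 3BEM
# (all numbers are made up for the example):
# tracer mass = burnt area * above-ground biomass density
#               * combustion factor * emission factor.

def emitted_mass(area_km2, biomass_kg_m2, combustion_factor, ef_g_per_kg):
    area_m2 = area_km2 * 1e6
    burned_biomass_kg = area_m2 * biomass_kg_m2 * combustion_factor
    return burned_biomass_kg * ef_g_per_kg / 1000.0   # kg of tracer emitted

# hypothetical tropical-forest fire pixel: 0.2 km2 burnt, 16 kg/m2 biomass,
# 40% combusted, CO emission factor of 100 g per kg of dry matter burned
co_kg = emitted_mass(0.2, 16.0, 0.4, 100.0)            # -> 128000.0 kg of CO
```

In the model proper, each factor is chosen per fire pixel according to the vegetation type detected by remote sensing, and the resulting sources are distributed in space and time before assimilation into CATT-BRAMS.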
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. We have measured and characterized CCN at water vapor supersaturations in the range of S = 0.10–0.82% in pristine tropical rainforest air during the AMAZE-08 campaign in central Amazonia. The effective hygroscopicity parameters describing the influence of chemical composition on the CCN activity of aerosol particles varied in the range of κ ≈ 0.1–0.4 (0.16 ± 0.06, arithmetic mean and standard deviation). The overall median value of κ ≈ 0.15 was a factor of two lower than the values typically observed for continental aerosols in other regions of the world. Aitken mode particles were less hygroscopic than accumulation mode particles (κ ≈ 0.1 at D ≈ 50 nm; κ ≈ 0.2 at D ≈ 200 nm), which is in agreement with earlier hygroscopicity tandem differential mobility analyzer (H-TDMA) studies. The CCN measurement results are consistent with aerosol mass spectrometry (AMS) data, showing that the organic mass fraction (f_org) was on average as high as ~90% in the Aitken mode (D ≤ 100 nm) and decreased with increasing particle diameter in the accumulation mode (~80% at D ≈ 200 nm). The κ values exhibited a negative linear correlation with f_org (R² = 0.81), and extrapolation yielded the following effective hygroscopicity parameters for organic and inorganic particle components: κ_org ≈ 0.1, which can be regarded as the effective hygroscopicity of biogenic secondary organic aerosol (SOA), and κ_inorg ≈ 0.6, which is characteristic of ammonium sulfate and related salts.
Both the size dependence and the temporal variability of effective particle hygroscopicity could be parameterized as a function of AMS-based organic and inorganic mass fractions (κ_p = κ_org · f_org + κ_inorg · f_inorg). The CCN number concentrations predicted with κ_p were in fair agreement with the measurement results (~20% average deviation). The median CCN number concentrations at S = 0.10–0.82% ranged from N_CCN,0.10 ≈ 35 cm⁻³ to N_CCN,0.82 ≈ 160 cm⁻³, the median concentration of aerosol particles larger than 30 nm was N_CN,30 ≈ 200 cm⁻³, and the corresponding integral CCN efficiencies were in the range of N_CCN,0.10/N_CN,30 ≈ 0.1 to N_CCN,0.82/N_CN,30 ≈ 0.8. Although the number concentrations and hygroscopicity parameters were much lower in pristine rainforest air, the integral CCN efficiencies observed were similar to those in highly polluted megacity air. Moreover, model calculations of N_CCN,S assuming an approximate global average value of κ ≈ 0.3 for continental aerosols led to systematic overpredictions, but the average deviations exceeded ~50% only at low water vapor supersaturation (0.10%) and low particle number concentrations (≤ 100 cm⁻³). Model calculations assuming a constant aerosol size distribution led to higher average deviations at all investigated levels of supersaturation: ~60% for the campaign average distribution and ~1600% for a generic remote continental size distribution. These findings confirm earlier studies suggesting that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators.
Depending on the required and applicable level of detail, the information and parameterizations presented in this paper should enable efficient description of the CCN properties of pristine tropical rainforest aerosols of Amazonia in detailed process models as well as in large-scale atmospheric and climate models.
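The mixing rule reported in this abstract (κ_p = κ_org·f_org + κ_inorg·f_inorg) is simple to apply directly; with the component values given above it reproduces the observed κ at both particle sizes.

```python
# Volume/mass-fraction mixing rule for the effective hygroscopicity parameter,
# using the component values reported in the abstract.
KAPPA_ORG = 0.1     # biogenic secondary organic aerosol
KAPPA_INORG = 0.6   # ammonium sulfate and related salts

def kappa_p(f_org):
    f_inorg = 1.0 - f_org
    return KAPPA_ORG * f_org + KAPPA_INORG * f_inorg

aitken = kappa_p(0.9)        # ~90% organic (D <= 100 nm)  -> 0.15
accumulation = kappa_p(0.8)  # ~80% organic (D ~ 200 nm)   -> 0.20
```

The Aitken-mode result matches the overall median κ ≈ 0.15 and the accumulation-mode result matches κ ≈ 0.2 at D ≈ 200 nm, which is why the AMS-based parameterization predicts CCN concentrations to within ~20%.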
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems for large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a running time that is sublinear in the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, the AME approach tries to insert or to remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments showed improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several lines of research are proposed as future work.
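The adaptive idea described above, trying to insert or remove one feature per iteration and keeping the change only if it helps, can be sketched in miniature. This is not the MaxEnt implementation: the "model" below is a toy least-squares fit and the score is negative validation error, chosen purely to keep the example self-contained.

```python
import numpy as np

# Schematic single-feature insert/remove loop (toy model, not MaxEnt).
rng = np.random.default_rng(3)

X = rng.normal(size=(200, 6))
y = X[:, 0] * 2.0 - X[:, 2] * 1.5 + rng.normal(scale=0.1, size=200)  # 2 useful features
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def score(features):
    # negative validation mean-squared error of a least-squares fit
    if not features:
        return -np.mean(y_va ** 2)
    cols = sorted(features)
    coef, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    return -np.mean((X_va[:, cols] @ coef - y_va) ** 2)

active = set()
for _ in range(20):
    # candidate moves: insert any inactive feature, or remove any active one
    candidates = [active | {j} for j in range(6) if j not in active] + \
                 [active - {j} for j in active]
    best = max(candidates, key=score)
    if score(best) > score(active):   # keep the move only if it improves the score
        active = best
```

The loop quickly settles on the two informative features; the same greedy structure is what lets AME reach convergence with fewer active features than retraining on a fixed full set.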
Abstract:
This paper describes the development of an optimization model for the management and operation of a large-scale, multireservoir water supply distribution system with preemptive priorities. The model considers multiple objectives and hedging rules. During periods of drought, when water supply is insufficient to meet the planned demand, appropriate rationing factors are applied to reduce water supply. In this paper, a water distribution system is formulated as a network and solved by the GAMS modeling system for mathematical programming and optimization. A user-friendly interface is developed to facilitate the manipulation of data and to generate graphs and tables for decision makers. The optimization model and its interface form a decision support system (DSS), which can be used to configure a water distribution system to facilitate capacity expansion and reliability studies. Several examples are presented to demonstrate the utility and versatility of the developed DSS under different supply and demand scenarios, including applications to one of the largest water supply systems in the world, the Sao Paulo Metropolitan Area Water Supply Distribution System in Brazil.
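The preemptive-priority behaviour described above can be illustrated with a minimal allocation sketch. This is not the paper's GAMS network formulation: the demand names, priorities, and flow values are invented, and real hedging rules would typically apply fractional rationing factors rather than the strict cutoff shown here.

```python
# Minimal preemptive-priority rationing sketch (illustrative values only).
# Each entry is (name, priority, planned demand in m3/s); priority 1 is served first.
demands = [("urban supply", 1, 40.0),
           ("industry", 2, 25.0),
           ("irrigation", 3, 20.0)]

def allocate(supply, demands):
    allocation = {}
    remaining = supply
    for name, _, demand in sorted(demands, key=lambda d: d[1]):
        served = min(demand, remaining)   # higher priorities preempt lower ones
        allocation[name] = served
        remaining -= served
    return allocation

# drought scenario: only 55 m3/s available against 85 m3/s of planned demand
plan = allocate(55.0, demands)
# urban fully served (40), industry partially (15), irrigation cut to 0
```

An optimization model replaces this greedy pass with constraints over the whole network and planning horizon, which is what allows trade-offs between objectives rather than a strict cascade.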
Abstract:
One-way master-slave (OWMS) chain networks are widely used in clock distribution systems due to their reliability and low cost. As the network nodes are phase-locked loops (PLLs), the double-frequency jitter (DFJ) caused by their phase detectors appears as an impairment to the performance of the clock recovery process found in communication systems and instrumentation applications. A nonlinear model for OWMS chain networks with (P + 1)-order PLLs as slave nodes is presented, considering the DFJ. Since higher-order filters are more effective in filtering DFJ, the synchronous state stability conditions for an OWMS chain network with third-order nodes are derived, relating the loop gain and the filter coefficients. Using these conditions, design examples are discussed.
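The origin of the double-frequency jitter discussed above can be shown numerically: a multiplying phase detector forms the product of two sinusoids at the loop frequency, which equals a useful low-frequency term plus a component at twice the input frequency that the loop filter must reject. The signal below is synthetic, with an arbitrary fixed phase error.

```python
import numpy as np

# Product of two sinusoids at frequency f: by the identity
# sin(a)cos(b) = 0.5*sin(a-b) + 0.5*sin(a+b), the phase-detector output
# contains a near-DC term plus a term at 2f (the double-frequency jitter).
f = 1.0                                  # loop frequency (normalized, Hz)
t = np.arange(0, 64, 1 / 256)            # 64 periods, 256 samples per period
phase_error = 0.3
pd_out = np.sin(2 * np.pi * f * t + phase_error) * np.cos(2 * np.pi * f * t)

spectrum = np.abs(np.fft.rfft(pd_out)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / 256)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # dominant nonzero frequency: 2f
```

The DC bin carries the phase-error information the loop needs (0.5·sin φ), while the 2f component is pure disturbance; this is why higher-order loop filters, which attenuate it more strongly, improve DFJ rejection.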
Abstract:
Double-frequency jitter is one of the main problems in clock distribution networks. In previous works, some analytical and numerical aspects of this phenomenon were studied and results were obtained for one-way master-slave (OWMS) architectures. Here, an experimental apparatus is implemented, allowing the power of the double-frequency signal to be measured and confirming the theoretical conjectures.
Abstract:
Background & Aims: An elevated transferrin saturation is the earliest phenotypic abnormality in hereditary hemochromatosis. Determination of transferrin saturation remains the most useful noninvasive screening test for affected individuals, but there is debate as to the appropriate screening level. The aims of this study were to estimate the mean transferrin saturation in hemochromatosis heterozygotes and normal individuals and to evaluate potential transferrin saturation screening levels. Methods: Statistical mixture modeling was applied to data from a survey of asymptomatic Australians to estimate the mean transferrin saturation in hemochromatosis heterozygotes and normal individuals. To evaluate potential transferrin saturation screening levels, modeling results were compared with data from identified hemochromatosis heterozygotes and homozygotes. Results: After removal of hemochromatosis homozygotes, two populations of transferrin saturation were identified in asymptomatic Australians (P < 0.01). In men, 88.2% of the truncated sample had a lower mean transferrin saturation of 24.1%, whereas 11.8% had an increased mean transferrin saturation of 37.3%. Similar results were found in women. A transferrin saturation threshold of 45% identified 98% of homozygotes without misidentifying any normal individuals. Conclusions: The results confirm that hemochromatosis heterozygotes form a distinct transferrin saturation subpopulation and support the use of transferrin saturation as an inexpensive screening test for hemochromatosis. In practice, a fasting transferrin saturation of greater than or equal to 45% identifies virtually all affected homozygous subjects without necessitating further investigation of unaffected normal individuals.
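The statistical mixture modeling step above amounts to fitting a two-component Gaussian mixture by EM. The sketch below does this on synthetic transferrin-saturation values whose component means roughly mimic the reported 24.1% and 37.3%; the real analysis used the survey sample, not simulated values.

```python
import numpy as np

# EM for a two-component 1-D Gaussian mixture on synthetic saturation values.
rng = np.random.default_rng(11)
data = np.concatenate([rng.normal(24.0, 5.0, 880),    # "normal" subpopulation
                       rng.normal(37.0, 5.0, 120)])   # "elevated" subpopulation

w, mu, sd = np.array([0.5, 0.5]), np.array([20.0, 40.0]), np.array([8.0, 8.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-(data[:, None] - mu) ** 2 / (2 * sd ** 2)) \
           / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(1, keepdims=True)
    # M-step: update weights, means, and standard deviations
    nk = resp.sum(0)
    w = nk / len(data)
    mu = (resp * data[:, None]).sum(0) / nk
    sd = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(0) / nk)

lower_mean, upper_mean = sorted(mu)
```

Once the two component distributions are estimated, a screening threshold such as 45% can be evaluated by computing the fraction of each fitted component that falls above it.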
Abstract:
In order to effectively suppress the noise radiated by large electrical power transformers, both the structure-borne and air-borne sound fields need to be characterised. The characterisation can be made either from theoretical predictions or by in-situ measurements. This paper presents a study of the sound radiation from a large power transformer in a substation. The radiation pattern can be predicted from the measured acceleration distribution, and the predicted value is not affected by other noise sources. Alternatively, the far-field sound pressure level can be predicted from the sound pressure level measured at NEMA locations. Both the near- and far-field power radiation can be measured in situ using the sound intensity technique. It is shown that both the vibration of a transformer tank wall and the radiated noise consist of a series of tonal components, mainly at the first few harmonic frequencies of 100 Hz. Also, neglecting the noise radiation from the transformer (top and bottom) lids does not affect the accuracy of the transformer radiation characterisation.
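The tonal character described above, energy concentrated at 100 Hz and its first few harmonics (twice the 50 Hz mains frequency, due to magnetostriction forces), is easy to see in a spectrum. The signal below is synthetic with assumed harmonic levels, not measured transformer data.

```python
import numpy as np

# Synthetic tank-wall noise: tones at 100 Hz and its harmonics plus broadband noise.
fs = 8000                                   # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)               # 2 s of signal
levels = {100: 1.0, 200: 0.5, 300: 0.25, 400: 0.1}   # assumed relative amplitudes
signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in levels.items())
signal += 0.05 * np.random.default_rng(5).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(spectrum)]       # strongest tonal component
```

Because the excitation is periodic at 100 Hz, nearly all radiated power sits in a handful of discrete bins, which is why tonal analysis (rather than broadband metrics) characterises transformer noise well.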