54 results for System modeling
Abstract:
Thixocasting requires the manufacture of billets with a non-dendritic microstructure. Aluminum alloy A356 billets were produced by rheocasting in a mould placed inside a linear electromagnetic stirrer. Subsequent heat treatment was used to produce a transition from a rosette to a globular microstructure. The current and the duration of stirring were explored as control parameters. Simultaneous induction heating of the billet during stirring was quantified using experimentally determined thermal profiles. The effect of the processing parameters on dendrite fragmentation was discussed. Corresponding computational modeling of the process was performed using phase-field modeling of alloy solidification, in order to gain insight into the morphological changes of the solid during processing. A non-isothermal alloy solidification model was used for the simulations. The morphological evolution under the imposed thermal cycles was simulated and compared with the experimentally determined one. Suitable scaling using the thermosolutal diffusion distances was used to overcome computational difficulties in quantitative comparison at the system scale. The results were interpreted in the light of existing theories of microstructure refinement and globularisation.
Abstract:
Representation and quantification of uncertainty in climate change impact studies are a difficult task. Several sources of uncertainty arise in studies of the hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios, and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated which uses a generalized uncertainty measure to combine GCM, scenario, and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory (stochastic) and epistemic (subjective) uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measure of belief and plausibility in the results. The D-S approach has been used in this work for information synthesis, using various evidence combination rules with different approaches to modeling conflict. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, and are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications.
These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
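The Dempster combination step described above can be sketched in a few lines. The drought-class frame and the two source mass functions below are hypothetical illustrations, not the paper's SSFI-4 bpa structures:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule, normalizing out the conflict mass K."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

def belief(m, hypothesis):
    """Bel(A): total mass committed to subsets of A."""
    return sum(v for a, v in m.items() if a <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(A): total mass not contradicting A."""
    return sum(v for a, v in m.items() if a & hypothesis)

# Illustrative frame of discernment and two hypothetical "GCM" sources.
frame = frozenset({"drought", "normal", "wet"})
m_gcm1 = {frozenset({"drought"}): 0.6, frame: 0.4}            # partial ignorance
m_gcm2 = {frozenset({"drought", "normal"}): 0.5, frame: 0.5}
m12 = dempster_combine(m_gcm1, m_gcm2)
d = frozenset({"drought"})
print(belief(m12, d), plausibility(m12, d))
```

The belief/plausibility pair brackets the probability of the hypothesis, which is what lets the framework express ignorance that a single probability cannot.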
Abstract:
The performance of the Advanced Regional Prediction System (ARPS) in simulating an extreme rainfall event is evaluated, and subsequently the physical mechanisms leading to its initiation and sustenance are explored. As a case study, the heavy precipitation event that led to 65 cm of rainfall accumulation in a span of around 6 h (1430-2030 LT) over Santacruz (Mumbai, India) on 26 July 2005 is selected. Three sets of numerical experiments were conducted. The first set (EXP1) consisted of a four-member ensemble and was carried out in an idealized mode with a model grid spacing of 1 km. In spite of the idealized framework, signatures of heavy rainfall were seen in two of the ensemble members. The second set (EXP2) consisted of a five-member ensemble, with four-level one-way nested integration and grid spacings of 54, 18, 6, and 1 km. The model was able to simulate a realistic spatial structure with the 54, 18, and 6 km grids; however, with the 1 km grid, the simulations were dominated by the prescribed boundary conditions. The third and final set (EXP3) consisted of a five-member ensemble, with four-level one-way nesting and grid spacings of 54, 18, 6, and 2 km. The Scaled Lagged Average Forecasting (SLAF) methodology was employed to construct the ensemble members. The model simulations in this case were closer to the observations than those of EXP2. Specifically, among all the experiments, EXP3 (at 6 km resolution) came closest to the observations in the timing of the maximum rainfall, the abrupt increase in rainfall intensity that was a major feature of this event, and the simulated rainfall intensities. Analysis of the physical mechanisms causing the initiation and sustenance of the event reveals some interesting aspects. Deep convection was found to be initiated by mid-tropospheric convergence that extended to lower levels during the later stage.
In addition, there was a strong negative vertical gradient of equivalent potential temperature, suggesting strong atmospheric instability prior to and during the occurrence of the event. Finally, the presence of a conducive vertical wind shear in the lower and mid-troposphere is thought to be one of the major factors influencing the longevity of the event.
Abstract:
A model of the precipitation process in reverse micelles has been developed to calculate the size of the fine particles obtained therein. While the method shares several features of particle nucleation and growth common to precipitation in large systems, complexities arise in describing nucleation, owing to the extremely small size of a micelle, and in describing particle growth, which is caused by fusion among the micelles. Occupancy of micelles by solubilized molecules is governed by Poisson statistics, implying that most of them are empty and cannot nucleate on their own. The model therefore specifies the minimum number of solubilized molecules required to form a nucleus, which is used to calculate the homogeneous nucleation rate. Interaction between micelles is assumed to occur by Brownian collision and instantaneous fusion. Analysis of the time scales of the various events shows the growth of particles to be very fast compared to the other phenomena occurring. This implies that non-empty micelles either are supersaturated or contain a single precipitated particle, and it allows the application of deterministic population balance equations to describe the evolution of the system with time. The model successfully predicts the experimental measurements of Kandori et al.(3) on the size of precipitated CaCO3 particles obtained by carbonation of reverse micelles containing aqueous Ca(OH)2 solution.
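The Poisson occupancy statistics invoked above are easy to illustrate. The mean occupancy and nucleation threshold used here are hypothetical values for demonstration, not quantities fitted from the paper:

```python
import math

def poisson_pmf(n, lam):
    """Probability that a micelle holds exactly n solubilized molecules."""
    return math.exp(-lam) * lam**n / math.factorial(n)

def nucleating_fraction(lam, n_min):
    """Fraction of micelles holding at least n_min molecules, i.e. those
    able to form a nucleus under a minimum-occupancy threshold."""
    return 1.0 - sum(poisson_pmf(n, lam) for n in range(n_min))

# Hypothetical mean occupancy: 0.5 molecules per micelle on average.
lam = 0.5
print(poisson_pmf(0, lam))          # fraction of empty micelles, e^-0.5 ~ 0.607
print(nucleating_fraction(lam, 4))  # fraction able to nucleate if n_min = 4
```

With such a low mean occupancy, the vast majority of micelles are empty and only a tiny fraction clears the nucleation threshold, which is exactly why the threshold matters for the computed nucleation rate.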
Abstract:
This paper deals with the system-oriented analysis, design, modeling, and implementation of an active clamp HF link three-phase converter. The main advantage of the topology is the reduced size, weight, and cost of the isolation transformer. However, violation of basic power conversion rules due to the presence of leakage inductance in the HF transformer causes overvoltage stresses across the cycloconverter devices, which makes the use of a snubber circuit necessary in such topologies. Conventional RCD snubbers are dissipative in nature and hence inefficient. The efficiency of the system is greatly improved by using a regenerative snubber, or active clamp circuit. It consists of an active switching device with an anti-parallel diode and one capacitor, which absorbs the energy stored in the leakage inductance of the isolation transformer and regenerates it without affecting circuit performance. The turn-on instant and conduction duration of the active device are selected so that its commutation requirements remain simple. The time-domain expressions for the circuit dynamics, the design criteria for the snubber capacitor under two conflicting constraints (the overvoltage stress across the devices and the resonating current duration), simulation results based on a generalized circuit model, and experimental results from a laboratory prototype are presented.
Abstract:
FACTS controllers are emerging as viable and economic solutions to the problems of large interconnected networks, which can endanger system security. These devices are characterized by their fast response, absence of inertia, and minimal maintenance requirements. Thyristor-controlled equipment such as the Thyristor Controlled Series Capacitor (TCSC), Static Var Compensator (SVC), and Thyristor Controlled Phase angle Regulator (TCPR), which involve passive elements, results in devices of large size with substantial cost and significant installation labour. An all solid-state device using GTOs leads to a reduction in equipment size and has improved performance. The Unified Power Flow Controller (UPFC) is a versatile controller which can be used to control the active and reactive power in the line independently. The concept of the UPFC makes it possible to handle practically all power flow control and transmission line compensation problems using solid-state controllers, which provide functional flexibility generally not attainable with conventional thyristor-controlled systems. In this paper, we present the development of a control scheme for the series injected voltage of the UPFC to damp power oscillations and improve transient stability in a power system. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
In a detailed model for reservoir irrigation taking into account the soil moisture dynamics in the root zone of the crops, the data sets for reservoir inflow and rainfall in the command area will usually be of sufficient length to enable their variations to be described by probability distributions. However, the potential evapotranspiration of the crop depends on the characteristics of the crop and the reference evaporation, the quantification of both being associated with a high degree of uncertainty. The main purpose of this paper is to propose a mathematical programming model to determine the annual relative yield of crops and its reliability, for a single reservoir meant for irrigation of multiple crops, incorporating variations in inflow, rainfall in the command area, and crop consumptive use. The inflow to the reservoir and the rainfall in the reservoir command area are treated as random variables, whereas potential evapotranspiration is modeled as a fuzzy set. The model's application is illustrated with reference to an existing single-reservoir system in Southern India.
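The abstract does not state which membership function is used; a common choice for representing an uncertain quantity such as potential evapotranspiration as a fuzzy set is a triangular fuzzy number, sketched below with hypothetical parameter values:

```python
def triangular_membership(x, a, b, c):
    """Membership of x in a triangular fuzzy number (a, b, c):
    0 outside [a, c], 1 at the peak b, linear in between."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def alpha_cut(alpha, a, b, c):
    """Interval of values with membership >= alpha; these intervals are
    what a fuzzy mathematical programming model typically works with."""
    return (a + alpha * (b - a), c - alpha * (c - b))

# Hypothetical PET fuzzy number (mm/day): support [4, 7], peak at 5.5.
print(triangular_membership(5.5, 4.0, 5.5, 7.0))  # 1.0 at the peak
print(alpha_cut(0.5, 4.0, 5.5, 7.0))              # (4.75, 6.25)
```

At increasing alpha levels the cut intervals shrink toward the peak value, so solving the programming model over a range of alpha-cuts yields the reliability band for the annual relative yield.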
Abstract:
An integrated reservoir operation model is presented for developing effective operational policies for irrigation water management. In arid and semi-arid climates, owing to dynamic changes in the hydroclimatic conditions within a season, a fixed cropping pattern with conventional operating policies may have a considerable impact on the performance of the irrigation system and may affect the economics of the farming community. For optimal allocation of irrigation water in a season, effective mathematical models may guide water managers in proper decision making and consequently help in reducing the adverse effects of water shortage and crop failure. This paper presents a multi-objective integrated reservoir operation model for a multi-crop irrigation system. To solve the multi-objective model, a recent swarm intelligence technique, namely elitist-mutated multi-objective particle swarm optimisation (EM-MOPSO), has been used and applied to a case study in India. The method evolves effective strategies for irrigation crop planning and reservoir operation policies, and thereby helps the farming community to improve crop benefits and water resource usage in the reservoir command area.
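EM-MOPSO builds on the canonical particle swarm update rule. A minimal single-objective sketch of that underlying scheme is shown below; the Pareto archiving and elitist-mutation steps that distinguish EM-MOPSO are not reproduced, and all parameter values and the test function are illustrative:

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def pso_minimize(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle is pulled toward its personal best
    and the swarm's global best, with inertia weight w."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:             # update personal best
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:            # and global best
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso_minimize(sphere)
print(best_f)  # should be close to 0
```

A multi-objective variant replaces the single global best with a leader drawn from an archive of non-dominated solutions, which is where the elitist mutation of EM-MOPSO is applied.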
Abstract:
The prevalent virtualization technologies provide QoS support within the software layers of the virtual machine monitor (VMM) or the operating system of the virtual machine (VM). The QoS features are mostly provided as extensions to the existing software used for accessing the I/O device, because of which the applications sharing the I/O device experience a loss of performance due to crosstalk effects or reduced usable bandwidth. In this paper, we examine the NIC sharing effects across VMs on a Xen virtualized server and present an alternate paradigm that improves the shared bandwidth and reduces the crosstalk effect on the VMs. We implement the proposed hardware-software changes in a layered queuing network (LQN) model and use simulation techniques to evaluate the architecture. We find that simple changes in the device architecture and associated system software lead to application throughput improvements of up to 60%. The architecture also enables finer QoS controls at the device level and increases the scalability of device sharing across multiple virtual machines. We find that the performance improvement derived using the LQN model is comparable to that reported by similar but real implementations.
Abstract:
In this paper we discuss recent progress in spectral finite element modeling of complex structures and its application in a real-time structural health monitoring (SHM) system based on a sensor-actuator network and near real-time computation of the Damage Force Indicator (DFI) vector. A waveguide network formalism is developed by mapping the original variational problem into a variational problem involving product spaces of 1D waveguides. Numerical convergence is studied using an h(λ)-refinement scheme, where λ is the wavelength of interest. Computational issues in the successful implementation of this method within an SHM system are discussed.
Abstract:
A two-dimensional finite difference model is presented which solves the mixed form of Richards' equation, handling the non-linearity with the modified Picard iteration and solving the resulting equations with the strongly implicit procedure. Modeling of seepage flow through heterogeneous soils, which is common in the field, is addressed in the present study. The model can be applied to both unsaturated and saturated soils and can handle very dry initial conditions and steep wetting fronts. The model is validated by comparison with experimental results reported in the literature. The novelty of this two-dimensional model is its application to layered soils with transient seepage face development, which has not been reported in the literature. The application of the model to the study of unconfined drainage due to a sudden drop of the water table at the seepage face in layered soils is demonstrated. In the present work, rectangular flow domains of different sizes with different types of layering are chosen. The sensitivity of the seepage height to the dimensions of the layered system is studied. The effect of the aspect ratio on seepage face development in the case of flow through layered soil media is demonstrated. The model is also applied to random heterogeneous soils, in which the randomness of the model parameters is generated using the turning band technique. The results are discussed in terms of the phreatic surface, the development of the seepage height, and the flux across the seepage face. Such accurate modeling of seepage face development and quantification of the flux moving across the seepage face become important when modeling transport problems in variably saturated media.
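The Picard idea of freezing the non-linear coefficient, solving the resulting linear system, and repeating until the iterates settle can be illustrated on a 1D non-linear diffusion problem. This is only a sketch under stated assumptions (simple Picard on a 1D grid, a hypothetical diffusivity, a dense linear solve), not the paper's 2D mixed-form Richards solver with the strongly implicit procedure:

```python
import numpy as np

def picard_implicit_step(u, dt, dx, D, tol=1e-8, max_iter=50):
    """One backward-Euler step of u_t = (D(u) u_x)_x on a 1D grid with
    fixed (Dirichlet) ends: freeze D at the last iterate, solve the
    resulting linear tridiagonal system, repeat until converged."""
    u_new = u.copy()
    n = len(u)
    r = dt / dx**2
    for _ in range(max_iter):
        u_prev = u_new.copy()
        # Interface diffusivities from the current iterate (arithmetic mean).
        Dm = 0.5 * (D(u_new[:-1]) + D(u_new[1:]))
        A = np.zeros((n, n))
        b = u.copy()
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = u[0], u[-1]          # Dirichlet boundaries
        for i in range(1, n - 1):
            A[i, i - 1] = -r * Dm[i - 1]
            A[i, i] = 1.0 + r * (Dm[i - 1] + Dm[i])
            A[i, i + 1] = -r * Dm[i]
        u_new = np.linalg.solve(A, b)
        if np.max(np.abs(u_new - u_prev)) < tol:
            break
    return u_new

# Demo: a sharp wetting front relaxing under a moisture-dependent diffusivity.
D = lambda u: 0.1 + u**2                 # hypothetical D(theta)
x = np.linspace(0.0, 1.0, 51)
u0 = np.where(x < 0.3, 1.0, 0.1)         # wet zone above a dry column
u1 = picard_implicit_step(u0, dt=1e-3, dx=x[1] - x[0], D=D)
```

Because the step is fully implicit, it tolerates the steep front that would destabilize an explicit scheme; the Picard loop merely updates the frozen diffusivities between linear solves.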
Abstract:
Joint experimental and theoretical work is presented on two quadrupolar D-pi-A-pi-D chromophores characterized by the same bulky donor (D) group and two different central cores. The first chromophore, a newly synthesized species with a malononitrile-based acceptor (A) group, has a V-shaped structure that makes its absorption spectrum very broad, covering most of the visible region. The second chromophore has a squaraine-based core and therefore a linear structure, as also evinced by its absorption spectra. Both chromophores show an anomalous red shift of the absorption band upon increasing solvent polarity, a feature that is ascribed to the large, bulky structure of the molecules. For these molecules, the basic description of polar solvation in terms of a uniform reaction field fails. Indeed, a simple extension of the model to account for two independent reaction fields associated with the two molecular arms quantitatively reproduces the observed linear absorption and fluorescence as well as the fluorescence anisotropy spectra, fully rationalizing their nontrivial dependence on solvent polarity. The model derived from the analysis of the linear spectra is adopted to predict nonlinear spectra, specifically hyper-Rayleigh scattering (HRS) and two-photon absorption spectra. In polar solvents, the V-shaped chromophore is predicted to have a large HRS response in a wide spectral region (approximately 600-1300 nm). The anomalously large and strongly solvent-dependent HRS response of the linear chromophore is ascribed to symmetry lowering induced by polar solvation and amplified in this bulky system by the presence of two reaction fields.
Abstract:
We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol, as standardized in the IEEE 802.11 Distributed Coordination Function (DCF). The approximation is that, when n of the M queues are non-empty, the (transmission) attempt probability of each of the n non-empty nodes is given by the long-term (transmission) attempt probability of n saturated nodes. With packets arriving at the M queues according to independent Poisson processes, the SDAR approximation reduces a single cell with non-saturated nodes to a Markovian coupled queueing system. We provide a sufficient condition under which the joint queue length Markov chain is positive recurrent. For the symmetric case of equal arrival rates and finite, equal buffers, we develop an iterative method which leads to accurate predictions for important performance measures such as collision probability, throughput, and mean packet delay. We replace the MAC layer with the SDAR model of contention by modifying the NS-2 source code pertaining to the MAC layer, keeping all other layers unchanged. With this model-based simulation technique at the MAC layer, we achieve speed-ups (w.r.t. MAC layer operations) of up to 5.4. Through extensive model-based simulations and numerical results, we show that the SDAR model is an accurate model for the DCF MAC protocol in single cells. (C) 2012 Elsevier B.V. All rights reserved.
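The long-term attempt probability of n saturated nodes, which the SDAR approximation plugs in, is typically obtained from a Bianchi-style saturation fixed point. The sketch below uses the mean-backoff form of that fixed point with illustrative backoff parameters, not the exact analysis of the paper:

```python
def saturated_attempt_probability(n, cw_min=16, m=6, tol=1e-12, max_iter=10000):
    """Damped fixed-point iteration for the per-slot attempt probability
    beta and conditional collision probability p of n saturated nodes:
    beta = E[attempts per packet] / E[backoff slots per packet], with
    windows doubling over m retry stages. (cw_min, m) are illustrative."""
    beta = 0.1
    p = 0.0
    for _ in range(max_iter):
        p = 1.0 - (1.0 - beta) ** (n - 1)        # collision seen by a node
        num = sum(p ** k for k in range(m + 1))  # expected attempts
        den = sum(p ** k * ((2 ** k) * cw_min + 1) / 2.0 for k in range(m + 1))
        beta_new = num / den
        if abs(beta_new - beta) < tol:
            beta = beta_new
            break
        beta = 0.5 * (beta + beta_new)           # damping for robustness
    return beta, p

beta10, p10 = saturated_attempt_probability(10)
print(beta10, p10)
```

As n grows, collisions become more likely, backoff windows grow, and the per-node attempt probability falls; it is this n-dependent attempt rate that the SDAR model applies to the currently non-empty queues.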
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) is a very effective tool for designing trade-offs between energy and performance. In this paper, we use a formal Petri-net-based program performance model that directly captures both the application and system properties to find energy-efficient DVFS settings for CMP systems that satisfy a given performance constraint for SPMD multithreaded programs. Experimental evaluation shows that we achieve significant energy savings while meeting the performance constraints.
Abstract:
There have been several studies on the performance of TCP-controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP-controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis proceeds by combining two models: the first is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, where the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterative calculation between these models yields the head-of-the-line probabilities, from which performance measures such as the throughputs and packet failure probabilities can be derived. We find that, due to MAC layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings, we are also able to model tail-drop loss at the AP. Although it involves many approximations, the model captures the system behavior quite accurately, as compared with simulations.