87 results for Multi-model inference
Abstract:
Anthropogenic aerosols play a crucial role in our environment, climate, and health. Assessing the spatial and temporal variation of anthropogenic aerosols is essential to determine their impact. Aerosols are of both natural and anthropogenic origin and together constitute a composite aerosol system; isolating either component requires eliminating the other from the composite signal. In the present work we estimated the anthropogenic aerosol fraction (AF) over the Indian region following two different approaches and inter-compared the estimates. We employed multi-satellite data analysis and model simulations (using the CHIMERE chemical transport model) to derive the natural aerosol distribution, which was subsequently used to estimate AF over the Indian subcontinent. The two approaches differ significantly: the satellite-derived natural aerosol information was extracted in terms of optical depth, while the model simulations yielded mass concentration. The AF distribution was studied over two periods in 2008, pre-monsoon (March-May) and winter (November-February), in view of the known distinct seasonality in aerosol loading and type over the Indian region. Although both techniques derive the same property, considerable differences were noted in the temporal and spatial distributions. Satellite retrieval of AF showed maximum values during the pre-monsoon and summer months and lowest values in winter; model simulations, on the other hand, showed the highest AF in winter and the lowest during the pre-monsoon and summer months. Both techniques provided an annual average AF of comparable magnitude (~0.43 ± 0.06 from the satellite and ~0.48 ± 0.19 from the model).
For the winter months the model-estimated AF was ~0.62 ± 0.09, significantly higher than the satellite estimate (0.39 ± 0.05), while during the pre-monsoon months the satellite-estimated AF was ~0.46 ± 0.06 against a model estimate of ~0.53 ± 0.14. Preliminary results indicate that, given the general seasonal variation in aerosol concentrations, the model-simulated results are closer to the actual variation than the satellite estimates.
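Conceptually, the satellite approach derives AF by removing the natural component from the total aerosol optical depth (AOD). A minimal sketch of that fraction, assuming AF is defined as the anthropogenic share of total AOD (the actual retrieval is more involved), with illustrative input values:

```python
def anthropogenic_fraction(aod_total, aod_natural):
    """Anthropogenic fraction of aerosol optical depth (AOD).

    Assumes AF is the anthropogenic share of the total column
    optical depth; the study's retrieval chain is more involved.
    """
    if aod_total <= 0:
        raise ValueError("total AOD must be positive")
    af = (aod_total - aod_natural) / aod_total
    return max(0.0, min(1.0, af))  # clamp to a physical fraction

# Illustrative values only (not from the study)
af = anthropogenic_fraction(0.55, 0.30)
```
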
Abstract:
This paper presents stylized models for performance analysis of a manufacturing supply chain network (SCN) in a stochastic setting with batch ordering. We use queueing models to capture the behavior of the SCN, coupled with an inventory optimization model that can be used for designing inventory policies. In the first case, we model one manufacturer with one warehouse that supplies various retailers. We determine the optimal inventory level at the warehouse that minimizes the total expected cost of carrying inventory, the backorder cost associated with serving orders in the backlog queue, and the ordering cost. In the second model we impose a service level constraint in terms of fill rate (the probability that an order is filled from stock at the warehouse), assuming that customers do not balk from the system. We present several numerical examples to illustrate the model and its various features. In the third case, we extend the model to a three-echelon inventory model that explicitly considers the logistics process.
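The inventory-optimization step in the first model can be illustrated with a textbook single-echelon sketch: choose the smallest base-stock level whose demand-coverage probability reaches the newsvendor critical ratio. Poisson lead-time demand and the cost values below are illustrative assumptions; the paper's queueing model also accounts for ordering cost and the backlog queue, which this omits:

```python
import math

def poisson_cdf(k, lam):
    """P(D <= k) for Poisson demand with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def optimal_base_stock(lam, holding_cost, backorder_cost):
    """Smallest base-stock S with F(S) >= b/(b+h), the newsvendor ratio.

    A textbook single-echelon sketch, not the paper's full model.
    """
    critical_ratio = backorder_cost / (backorder_cost + holding_cost)
    s = 0
    while poisson_cdf(s, lam) < critical_ratio:
        s += 1
    return s

# Illustrative parameters: mean demand 20, h = 1, b = 9
s_opt = optimal_base_stock(lam=20, holding_cost=1.0, backorder_cost=9.0)
```
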
Abstract:
We present a new, generic method/model for multi-objective design optimization of laminated composite components using a novel multi-objective optimization algorithm developed on the basis of the Quantum-behaved Particle Swarm Optimization (QPSO) paradigm. QPSO is a variant of the popular Particle Swarm Optimization (PSO) and has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. The performance of QPSO is also compared with that of conventional PSO.
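The quantum-behaved position update that distinguishes QPSO from standard PSO can be sketched for the single-objective case (Sun et al.'s formulation, on which multi-objective variants build); the search bounds, contraction coefficient and sphere test function below are illustrative assumptions, not the paper's composite design problem:

```python
import math
import random

def qpso_minimize(f, dim, n_particles=20, iters=100, beta=0.75, seed=1):
    """Single-objective QPSO core: particles move around a local
    attractor with a step scaled by distance to the mean best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # personal bests
    gbest = min(pbest, key=f)[:]            # global best
    for _ in range(iters):
        # mean of all personal bests ("mainstream thought" point)
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, x in enumerate(xs):
            for d in range(dim):
                phi = rng.random()
                # local attractor between personal and global best
                p = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = max(rng.random(), 1e-12)
                step = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                x[d] = p + step if rng.random() < 0.5 else p - step
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest

# Smoke test on the sphere function
best = qpso_minimize(lambda v: sum(t * t for t in v), dim=2)
```
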
Abstract:
In this paper, we propose an extension to the I/O device architecture recommended in the PCI-SIG IOV specification for virtualizing network I/O devices. The aim is to give a virtual machine fine-grained control over the I/O path of a shared device. The architecture allows virtual machines native access to I/O devices and provides device-level QoS hooks for controlling VM-specific device usage. To evaluate the architecture we use layered queueing network (LQN) models. We implement the architecture and evaluate it through simulation on the LQN model to demonstrate the benefits. With the proposed architecture, the network I/O benefit is 60% higher than what can be expected from the existing architecture. The proposed architecture also improves scalability in terms of the number of virtual machines intending to share the I/O device.
Abstract:
In this paper a modified Heffron-Phillips (K-constant) model is derived for the design of power system stabilizers. Designing a conventional power system stabilizer requires knowledge of external system parameters, such as the equivalent infinite bus voltage and external impedances, or their estimated values. In the proposed method, information available at the secondary bus of the step-up transformer is used to set up a modified Heffron-Phillips (ModHP) model. The PSS design based on this model uses only signals available within the generating station. The efficacy of the proposed design technique and the performance of the stabilizer have been evaluated over a range of operating and system conditions. The simulation results show that the performance of the proposed stabilizer is comparable to that obtained by conventional design, but without the need to estimate or compute external system parameters. The proposed design is thus well suited for practical power system stabilization, including multi-machine applications where accurate system information is not readily available.
Abstract:
In this work, we evaluate the benefits of using Grids with multiple batch systems to improve the performance of multi-component and parameter sweep parallel applications through reduced queue waiting times. Using job traces with different loads, job distributions and queue waiting times corresponding to three queuing policies (FCFS, conservative backfilling and EASY backfilling), we conducted a large number of experiments with simulators of two important classes of applications. The first simulator models the Community Climate System Model (CCSM), a prominent multi-component application; the second models parameter sweep applications. We compare the performance of the applications when executed on multiple batch systems and on a single batch system for different system and application configurations, and show that for a large number of configurations, execution on multiple batch systems gives better performance than execution on a single system.
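A toy FCFS simulator conveys how queue waiting times are computed when comparing a single batch system against multiple smaller ones; real schedulers handle parallel jobs and backfilling policies, which this stdlib-only sketch (with an illustrative trace) omits:

```python
import heapq

def fcfs_wait_times(jobs, n_procs):
    """Average queue wait under FCFS on n_procs processors.

    jobs: list of (arrival_time, runtime); each job uses one processor.
    A toy model of the queue-wait computation, not the paper's simulators.
    """
    free_at = [0.0] * n_procs          # heap of processor-free times
    heapq.heapify(free_at)
    waits = []
    for arrival, runtime in sorted(jobs):
        t_free = heapq.heappop(free_at)
        start = max(arrival, t_free)
        waits.append(start - arrival)
        heapq.heappush(free_at, start + runtime)
    return sum(waits) / len(waits)

# Illustrative trace: one 2-processor system vs two 1-processor systems
jobs = [(0, 10), (1, 10), (2, 10), (3, 10)]
single = fcfs_wait_times(jobs, 2)
split = (fcfs_wait_times(jobs[::2], 1) + fcfs_wait_times(jobs[1::2], 1)) / 2
```
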
Abstract:
The induction motor is a typical example of a multi-domain, non-linear, high-order dynamic system. For speed control, a three-phase induction motor is modelled as a d-q model in which linearity is assumed and non-idealities are ignored. Approximating the physical characteristics in this way yields simulated behaviour that departs from the natural behaviour. This paper proposes a bond graph model of an induction motor that incorporates the non-linearities and non-idealities, thereby resembling the physical system more closely. The model is validated by applying the linearity and ideality constraints, which shows that the conventional 'abc' model is a special case of the proposed generalised model.
Abstract:
Optimal allocation of water resources among various stakeholders often involves considerable complexity and several conflicting goals, which leads naturally to multi-objective optimization. To aid water managers in effective decision-making, there is a need not only for effective multi-objective mathematical models but also for efficient Pareto-optimal solutions to real-world problems. This study proposes a swarm-intelligence-based multi-objective technique, the elitist-mutated multi-objective particle swarm optimization (EM-MOPSO), for arriving at efficient Pareto-optimal solutions to multi-objective water resource management problems. The EM-MOPSO technique is applied to a case study of a multi-objective reservoir operation problem. Model performance is evaluated by comparison with a non-dominated sorting genetic algorithm (NSGA-II) model, and the EM-MOPSO method is found to perform better. The developed method can be used as an effective aid for multi-objective decision-making in integrated water resource management.
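Multi-objective methods such as EM-MOPSO and NSGA-II both rest on the notion of Pareto dominance. A minimal sketch of the dominance test and non-dominated filtering for a minimization problem (illustrative objective vectors, not the paper's algorithm):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(pts)  # → [(1, 5), (2, 3), (4, 1)]
```
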
Abstract:
The paper presents a novel slicing-based method for computing volume fractions in multi-material solids given as a B-rep whose faces are triangulated and shared by either one or two materials. Such objects occur naturally in geoscience applications, where this computation is necessary for property estimation problems and iterative forward modeling. Each facet in the model is cut by the planes delineating the given grid structure or grid cells. Instead of classifying points or cells with respect to the solid, the method exploits the convexity of triangles and the simple axis-oriented disposition of the cutting surfaces to construct a novel intermediate space enumeration representation, called the slice-representation, from which both the cell containment test and the volume-fraction computation are done easily. Cartesian and cylindrical grids with uniform and non-uniform spacings are dealt with. After slicing, each triangle contributes polygonal facets, with potentially elliptical edges, to the grid cells through which it passes. The volume fractions of the different materials in a grid cell intersected by material interfaces are obtained by accumulating the volume contributions computed from each facet in the cell. The method is fast, accurate, robust and memory-efficient. Examples illustrating the method and its performance are included.
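The per-facet slicing step amounts to clipping each convex facet against axis-aligned planes. A minimal Sutherland-Hodgman-style half-space clip illustrates the idea for planar cuts; the paper's slice-representation bookkeeping and the elliptical edges arising from cylindrical grids are not reproduced here:

```python
def clip_polygon_halfspace(poly, axis, value, keep_below=True):
    """Clip a convex 3-D polygon against an axis-aligned plane.

    Keeps the part with coordinate <= value (or >= if keep_below is
    False). poly is a list of (x, y, z) tuples in order.
    """
    def inside(p):
        return p[axis] <= value if keep_below else p[axis] >= value

    def intersect(p, q):
        # Linear interpolation to the crossing point on edge p-q
        t = (value - p[axis]) / (q[axis] - p[axis])
        return tuple(p[i] + t * (q[i] - p[i]) for i in range(3))

    out = []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        if inside(p):
            out.append(p)
            if not inside(q):
                out.append(intersect(p, q))
        elif inside(q):
            out.append(intersect(p, q))
    return out

tri = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
piece = clip_polygon_halfspace(tri, axis=0, value=1.0)  # cut at x = 1
```
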
Abstract:
Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of adaptive genetic algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on user feedback, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
Abstract:
We present a generic method/model for multi-objective design optimization of laminated composite components based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work a modified version of VEPSO for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented.
Abstract:
The hybrid approach introduced by the authors for at-site modeling of annual and periodic streamflows in earlier works is extended to simulate multi-site multi-season streamflows, which is significant for integrated river basin planning studies. The hybrid model involves: (i) partial pre-whitening of standardized multi-season streamflows at each site using a parsimonious linear periodic model; (ii) contemporaneous resampling of the resulting residuals with an appropriate block size, using the moving block bootstrap (non-parametric, NP) technique; and (iii) post-blackening the bootstrapped innovation series at each site, by adding the corresponding parametric model component for that site, to obtain generated streamflows at each of the sites. The model benefits significantly from combining the merits of parametric and NP models. It is able to reproduce various statistics, including the dependence relationships at both spatial and temporal levels, without using any normalizing transformations or adjustment procedures. The potential of the hybrid model for reproducing a wide variety of statistics, including run characteristics, is demonstrated through an application to multi-site streamflow generation in the Upper Cauvery river basin, Southern India.
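Step (ii), the moving block bootstrap, can be sketched for a single site's residual series; resampling overlapping blocks rather than individual values preserves short-range temporal dependence. The residual values and block size below are illustrative:

```python
import random

def moving_block_bootstrap(series, block_size, length=None, seed=0):
    """Resample a series by concatenating randomly chosen
    overlapping blocks, preserving within-block dependence."""
    rng = random.Random(seed)
    if length is None:
        length = len(series)
    # All overlapping blocks of the given size
    blocks = [series[i:i + block_size]
              for i in range(len(series) - block_size + 1)]
    out = []
    while len(out) < length:
        out.extend(rng.choice(blocks))
    return out[:length]

# Illustrative residual series (not from the study)
residuals = [0.3, -1.2, 0.8, 0.1, -0.5, 1.4, -0.9, 0.2]
sample = moving_block_bootstrap(residuals, block_size=3)
```
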
Abstract:
Masonry strength depends on the characteristics of the masonry unit, the mortar and the bond between them. Empirical formulae as well as analytical and finite element (FE) models have been developed to predict the structural behaviour of masonry. This paper focuses on developing a three-dimensional non-linear FE model, based on a micro-modelling approach, to predict masonry prism compressive strength and crack pattern. The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behaviour of the solid masonry unit and the mortar. Willam-Warnke's five-parameter failure theory, developed for modelling the tri-axial behaviour of concrete, has been adopted to model the failure of the masonry materials. The post-failure regime has been modelled with orthotropic constitutive equations based on the smeared crack approach. The compressive strength of the masonry prism predicted by the proposed FE model has been compared with experimental values as well as with values predicted by other failure theories and the Eurocode formula. The crack pattern predicted by the FE model shows vertical splitting cracks in the prism, and the model predicts an ultimate failure compressive stress close to 85% of the mean experimental compressive strength.
Abstract:
Different seismic hazard components pertaining to Bangalore city, namely soil overburden thickness, effective shear-wave velocity, factor of safety against liquefaction potential, peak ground acceleration at the seismic bedrock, site response in terms of amplification factor, and predominant frequency, have been individually evaluated. The overburden thickness distribution, predominantly in the range of 5-10 m across the city, has been estimated through a sub-surface model built from geotechnical bore-log data. The effective shear-wave velocity distribution, established through a Multi-channel Analysis of Surface Waves (MASW) survey and subsequent dispersion analysis, exhibits site class D (180-360 m/s), site class C (360-760 m/s), and site class B (760-1500 m/s) in compliance with the National Earthquake Hazard Reduction Program (NEHRP) nomenclature. The peak ground acceleration has been estimated through a deterministic approach, based on a maximum credible earthquake of M-W = 5.1 assumed to nucleate from the closest active seismic source (the Mandya-Channapatna-Bangalore Lineament). The 1-D site response factor, computed at each borehole through geotechnical analysis across the study region, ranges from an amplification of about one to as high as four. Correspondingly, the predominant frequency estimated from the Fourier spectrum is found to lie mostly in the range of 3.5-5.0 Hz. The soil liquefaction hazard has been assessed in terms of the factor of safety against liquefaction potential using standard penetration test data and the underlying soil properties, which indicate that 90% of the study region is non-liquefiable. The spatial distributions of the different hazard entities are placed on a GIS platform and subsequently integrated through the analytic hierarchy process. The resulting deterministic hazard map shows high hazard coverage in the western areas.
The microzonation thus achieved is envisaged as a first-cut assessment of site-specific hazard, laying out a framework for higher-order seismic microzonation as well as serving as a useful decision support tool in overall land-use planning and hazard management.
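The GIS integration step combines normalized hazard layers into a single index using weights; in the paper the weights come from analytic-hierarchy-process pairwise comparisons, whereas in this sketch they are supplied directly and the layer values are purely illustrative:

```python
def hazard_index(layers, weights):
    """Weighted overlay of normalized (0-1) hazard layers at one site.

    In an AHP workflow the weights are derived from pairwise
    comparisons; here they are given directly (illustrative only).
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * v for w, v in zip(weights, layers))

# Illustrative normalized values, e.g. amplification, liquefaction, PGA
idx = hazard_index([0.8, 0.1, 0.6], [0.5, 0.2, 0.3])  # close to 0.6
```
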