55 results for: ES-SAGD, pressure drop, heavy oil, reservoir modeling and simulation
Abstract:
In contemporary wideband orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE) and WiMAX, different subcarriers over which a codeword is transmitted may experience different signal-to-noise ratios (SNRs). Thus, adaptive modulation and coding (AMC) in these systems is driven by a vector of subcarrier SNRs experienced by the codeword, and is more involved. Exponential effective SNR mapping (EESM) simplifies the problem by mapping this vector into a single equivalent flat-fading SNR. Analysis of AMC using EESM is challenging owing to its non-linear nature and its dependence on the modulation and coding scheme. We first propose a novel statistical model for the EESM, which is based on the Beta distribution. It is motivated by the central limit approximation for random variables with finite support. It is simpler than, and as accurate as, the more involved ad hoc models proposed earlier. Using it, we develop novel expressions for the throughput of a point-to-point OFDM link with multi-antenna diversity that uses EESM for AMC. We then analyze a general multi-cell OFDM deployment with co-channel interference for various frequency-domain schedulers. Extensive results based on LTE and WiMAX are presented to verify the model and analysis, and to gain new insights.
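The EESM referred to above has a standard closed form, SNR_eff = -beta * ln((1/N) * sum_n exp(-SNR_n / beta)), where beta is calibrated per modulation and coding scheme. A minimal sketch (the value beta = 4.0 is purely illustrative, not taken from the abstract):

```python
import math

def eesm(snrs_db, beta):
    """Map a vector of per-subcarrier SNRs (in dB) to a single
    equivalent flat-fading SNR (in dB) via exponential effective
    SNR mapping (EESM)."""
    snrs = [10 ** (s / 10.0) for s in snrs_db]          # dB -> linear
    avg = sum(math.exp(-s / beta) for s in snrs) / len(snrs)
    eff = -beta * math.log(avg)                          # effective linear SNR
    return 10.0 * math.log10(eff)                        # back to dB

# Sanity check: a flat channel maps to itself, and a frequency-selective
# channel maps below its arithmetic-mean SNR.
print(eesm([10.0, 10.0, 10.0], beta=4.0))  # ~10.0
print(eesm([4.0, 10.0, 16.0], beta=4.0))   # below 10.0
```

The exponential weighting means weak subcarriers dominate the effective SNR, which is what makes the mapping a better predictor of codeword error than a plain average.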
Abstract:
Stability of a fracture toughness testing geometry is important in determining the crack trajectory and R-curve behavior of the specimen. Few configurations provide inherent geometric stability, especially when the specimen being tested is brittle. We propose a new geometrical construction called the single edge notched clamped bend specimen (SENCB), a modified form of three-point bending that yields stable cracking under load control. It is shown to be particularly suitable for small-scale structures which cannot be made free-standing (e.g., thin films, coatings). The SENCB is elastically clamped at its two ends to the parent material. A notch is inserted at the bottom center and loaded in bending to fracture. Numerical simulations are carried out using the extended finite element method to derive the geometrical factor f(a/W) for different beam dimensions. Experimental corroborations of the FEM results are carried out on both micro-scale and macro-scale brittle specimens. A plot of f(a/W) vs a/W is shown to rise initially and then fall off beyond a critical a/W ratio. The difference between the conventional SENB and the SENCB is highlighted in terms of f(a/W) and FEM-simulated stress contours across the beam cross-section. The experimentally determined fracture toughness values of bulk NiAl and Si are shown to match closely with literature values. Crack stability and the R-curve effect are demonstrated in a PtNiAl bond coat sample and compared with predicted crack trajectories from the simulations. The stability of the SENCB is shown for a critical range of a/W ratios, proving that it can be used to obtain controlled crack growth even in brittle samples under load control.
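The role of the geometrical factor can be made concrete with the generic LEFM relation K = f(a/W) * sigma * sqrt(pi * a). The numbers below are hypothetical (not from the abstract) and only illustrate the stability mechanism: if f(a/W) falls off fast enough as the crack grows, K decreases at fixed load, so the crack arrests rather than runs:

```python
import math

def stress_intensity(sigma, a, f_aW):
    """Mode-I stress intensity K = f(a/W) * sigma * sqrt(pi * a),
    the generic LEFM form; the geometry factor f(a/W) is what the
    FEM calibration described in the abstract supplies."""
    return f_aW * sigma * math.sqrt(math.pi * a)

# Hypothetical values: a short crack with a high f(a/W) vs a slightly
# longer crack after f(a/W) has fallen off past the critical ratio.
K_short = stress_intensity(100e6, 1.0e-3, 1.2)
K_long = stress_intensity(100e6, 1.2e-3, 1.0)
print(K_long < K_short)  # True: K drops as the crack grows -> stable growth
```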
Abstract:
Prime movers and refrigerators based on thermoacoustics have gained considerable importance for practical applications in view of the absence of moving components, reasonable efficiency, use of environmentally friendly working fluids, etc. Devices such as the twin Standing Wave ThermoAcoustic Prime Mover (SWTAPM), the Traveling Wave ThermoAcoustic Prime Mover (TWTAPM) and the thermoacoustically driven Standing Wave ThermoAcoustic Refrigerator (SWTAR) have been studied by researchers. Numerical modeling and simulation play a vital role in their development. In our efforts to build the above thermoacoustic systems, we have carried out numerical analysis of them using CFD procedures. The results of the analysis are compared with those of DeltaEC (freeware from LANL, USA) simulations and with experimental results wherever possible. For the CFD analysis, the commercial code Fluent 6.3.26 has been used along with the necessary boundary conditions for different working fluids at various average pressures. The simulation results indicate that the choice of working fluid and the average pressure are critical to the performance of these thermoacoustic devices. It is also observed that the predictions of the CFD analysis are closer to the experimental results in most cases than those of the DeltaEC simulations. (C) 2015 Elsevier Ltd. All rights reserved.
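As a first-cut illustration of why the working fluid matters (not a substitute for the CFD or DeltaEC models above), the fundamental frequency of a standing-wave resonator closed at one end follows the quarter-wavelength relation f = c / (4L), so swapping the gas shifts the operating point in proportion to its sound speed. The sound speeds below are approximate round numbers:

```python
def quarter_wave_frequency(sound_speed, length):
    """Fundamental acoustic frequency of a tube closed at one end
    (quarter-wavelength resonance) -- a first-cut estimate used when
    sizing standing-wave thermoacoustic resonators."""
    return sound_speed / (4.0 * length)

# Helium near room temperature (c ~ 1000 m/s) vs nitrogen (c ~ 353 m/s)
# in the same 1 m resonator: the operating frequency roughly triples.
print(quarter_wave_frequency(1000.0, 1.0))  # 250.0 Hz
print(quarter_wave_frequency(353.0, 1.0))   # 88.25 Hz
```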
Abstract:
The problem of estimating the time-variant reliability of actively controlled structural dynamical systems under stochastic excitations is considered. Monte Carlo simulations, reinforced with Girsanov-transformation-based sampling variance reduction, are used to tackle the problem. In this approach, the external excitations are biased by an additional artificial control force. The two control forces have conflicting objectives: one is designed to reduce structural responses, while the other promotes limit-state violations (so as to reduce sampling variance). The control for variance reduction is fashioned after design-point oscillations based on a first-order reliability method. It is shown that for structures amenable to laboratory testing, the reliability can be estimated experimentally with reduced testing times by devising a procedure based on the ideas of the Girsanov transformation. Illustrative examples include studies on a building frame with a magnetorheological-damper-based isolation system subject to nonstationary random earthquake excitations. (C) 2014 American Society of Civil Engineers.
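The variance-reduction idea can be sketched in its simplest static form: sample from a density biased toward the failure region (here, shifted to the design point) and correct each sample by the likelihood ratio. This is only the static analogue of the Girsanov change of measure the paper applies to stochastic dynamical systems; the Gaussian setup and numbers are illustrative:

```python
import math
import random

def failure_prob_is(threshold, shift, n=200_000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from the
    shifted density N(shift, 1) and correcting each hit with the
    likelihood ratio phi(x) / phi(x - shift) = exp(-shift*x + shift^2/2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            acc += math.exp(-shift * x + 0.5 * shift * shift)
    return acc / n

# Shifting the sampling mean to the "design point" (the threshold) makes
# roughly half the samples hit the rare event instead of ~3 in 100,000.
print(failure_prob_is(4.0, 4.0))  # close to the true value 3.17e-5
```

The same trade-off the abstract notes appears here: the bias increases the hit rate, while the likelihood-ratio weights keep the estimator unbiased.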
Abstract:
The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. In this paper we discuss approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on applying Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov-transformation-based Monte Carlo simulations can be extended to laboratory testing, allowing the system reliability of engineering structures to be assessed with fewer samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations of the road load response of an automotive system tested on a four-post test rig. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
Structural Health Monitoring (SHM) systems require the integration of non-destructive evaluation (NDE) technologies into structural design and operational processes. Modeling and simulation of complex NDE inspection processes are important aspects of the development and deployment of SHM technologies. Ray tracing techniques are vital simulation tools for visualizing the wave path inside a material. These techniques also help in optimizing the location of transducers and their orientation with respect to the zone of interrogation, increasing the chances of detecting and identifying a flaw in that zone. While current state-of-the-art techniques such as ray tracing based on geometric principles help in such visualization, other information, such as signal losses due to the spherical or cylindrical shape of the wave front, is rarely taken into consideration. The problem becomes more complicated in the case of dispersive guided wave propagation and near-field defect scattering. We review the existing models and tools for ultrasonic NDE simulation in structural components. As an initial step, we develop a ray-tracing approach in which phase and spectral information are preserved. This enables one to study wave scattering beyond simple time-of-flight calculation of rays. Challenges in the theory and modelling of defects of various kinds are discussed. Additional considerations such as signal decay and the physics of scattering are reviewed, and the challenges involved in realistic computational implementation are discussed. The potential application of this approach to SHM system design is highlighted: by applying it to complex structural components such as airframe structures, SHM is demonstrated to provide additional value in terms of lighter weight and/or longevity enhancement, resulting from an extension of the damage-tolerance design principle without compromising safety and reliability.
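The geometric core of such a ray tracer reduces, at each interface, to Snell's law plus a per-segment time-of-flight sum. A minimal sketch (the water/steel wave speeds are illustrative round numbers, not values from the text):

```python
import math

def refracted_angle(theta1_deg, c1, c2):
    """Snell's law for a ray crossing an interface between media with
    wave speeds c1 and c2: sin(t2)/c2 = sin(t1)/c1. Returns the
    refracted angle in degrees, or None beyond the critical angle."""
    s = math.sin(math.radians(theta1_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None  # total internal reflection: no transmitted ray
    return math.degrees(math.asin(s))

def time_of_flight(path_lengths, speeds):
    """Total traversal time of a ray path made of straight segments."""
    return sum(length / c for length, c in zip(path_lengths, speeds))

# Water (~1480 m/s) into steel, longitudinal wave (~5900 m/s): a
# 10-degree incident ray refracts steeply; a 30-degree ray is cut off.
print(refracted_angle(10.0, 1480.0, 5900.0))  # about 43.8 degrees
print(refracted_angle(30.0, 1480.0, 5900.0))  # None
```

The phase- and spectrum-preserving approach the abstract describes would attach amplitude and phase bookkeeping to each such segment; the skeleton above is only the geometric part.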
Abstract:
Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. The field is currently witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combinations, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion on levels of abstraction of biological systems and describes different modeling methodologies that are available for this purpose. The review then focuses on how such modeling and simulations can be applied for drug target discovery. Finally, it discusses methods for studying other important issues such as understanding targetability, identifying target combinations and predicting drug resistance, and considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches and perspectives for future development will also be obtained.
Take home message: Systems thinking has now come of age, enabling a `bird's eye view' of the biological systems under study, while at the same time allowing us to `zoom in', where necessary, for a detailed description of individual components. A number of different methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.
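The abstract does not spell out any one modeling formalism; as a concrete instance of the kinetic (ODE) level of abstraction it mentions, a hypothetical one-species synthesis/degradation model can be simulated in a few lines. The rate constants below are invented for illustration:

```python
def simulate_expression(k_syn, k_deg, x0=0.0, dt=0.01, t_end=50.0):
    """Forward-Euler integration of dx/dt = k_syn - k_deg * x, a
    minimal kinetic model of the kind used in systems-level studies;
    the analytical steady state is k_syn / k_deg."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (k_syn - k_deg * x)
        t += dt
    return x

# With synthesis rate 2.0 and degradation rate 0.5, the concentration
# relaxes toward the steady state 2.0 / 0.5 = 4.0.
print(simulate_expression(2.0, 0.5))  # ~4.0
```

Perturbing k_syn or k_deg and watching the steady state move is the in-silico analogue of knocking down a candidate target, which is the basic use of such models in target identification.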
Abstract:
REDEFINE is a reconfigurable SoC architecture that provides a unique platform for high-performance and low-power computing by exploiting the synergistic interaction between a coarse-grain dynamic dataflow model of computation (to expose abundant parallelism in applications) and runtime composition of efficient compute structures (on the reconfigurable computation resources). We propose and study the throttling of execution in REDEFINE to maximize the architecture's efficiency. A feature-specific, fast, hybrid (mixed-level) simulation framework for early design-phase studies is developed and implemented to make the huge design-space exploration practical. We perform performance modeling in terms of selecting important performance criteria and ranking the explored throttling schemes, and we investigate the effectiveness of the design-space exploration using statistical hypothesis testing. We find throttling schemes that simultaneously give an appreciable (24.8%) overall performance gain in the architecture and a 37% resource-usage gain in the throttling unit.
Abstract:
Flexible constraint-length channel decoders are required for software-defined radios. This paper presents a novel scalable scheme for realizing flexible constraint-length Viterbi decoders on a de Bruijn interconnection network. Architectures for flexible decoders using the flattened butterfly and shuffle-exchange networks are also described. It is shown that these networks provide favourable substrates for realizing flexible convolutional decoders. Synthesis results for the three networks are provided and compared. An architecture based on a 2D mesh, a topology with a nominally smaller silicon-area requirement, is also considered as a fourth point of comparison. It is found that, of all the networks considered, the de Bruijn network offers the best tradeoff between area and throughput.
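The fit between the de Bruijn network and Viterbi decoding comes from the graph's shift-register structure: each node's successors are reached by shifting in one new bit, exactly mirroring the state transitions of a convolutional encoder. A small sketch for a binary de Bruijn graph on 2^k nodes (the mapping to a particular decoder is an assumption for illustration):

```python
def de_bruijn_successors(state, k):
    """Successor states of `state` in a binary de Bruijn graph with
    2**k nodes: shift left and append a 0 or a 1. This is the same
    transition pattern as the state register of a convolutional
    encoder with k memory bits, i.e. constraint length k + 1."""
    n = 1 << k
    return ((2 * state) % n, (2 * state + 1) % n)

# In an 8-node graph (k = 3), state 0b101 transitions to 0b010 and 0b011.
print(de_bruijn_successors(5, 3))  # (2, 3)
```

Because the interconnect edges coincide with the trellis edges, a Viterbi butterfly maps onto physical links with no routing detour, which is one intuition for the area/throughput result reported above.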
Abstract:
Extra-high-voltage AC transmission has been developing since the end of the Second World War. The distances between generating and load centres, as well as the amount of power to be handled, have increased tremendously over the last 50 years. The highest commercial voltage has increased to 765 kV in India and 1,200 kV in many other countries. Bulk power transmission has mostly been performed by overhead transmission lines. The dual task of mechanically supporting the live phase conductors and electrically isolating them from the support tower is performed by string insulators. Whether in clean or polluted conditions, the electrical stress distribution along the insulators governs possible flashover, which is quite detrimental to the system. Hence the present investigation aims to accurately study the field distribution for various types of porcelain/ceramic insulators (normal and antifog discs) used for high-voltage transmission. The surface charge simulation method is employed for the field computation. A comparison of normalised surface resistance, an indicator of stress concentration under polluted conditions, is also attempted.
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial: workload varies widely, and the large number of system parameters rules out a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest.
From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable for adaptive labor staffing in real-life service systems.
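The SPSA gradient estimate at the core of such algorithms needs only two cost evaluations per iteration, regardless of the number of staffing parameters. A minimal sketch of the estimator itself, divorced from the queueing model (which the abstract does not specify); the step size c = 0.1 is illustrative:

```python
import random

def spsa_gradient(f, theta, c=0.1, seed=0):
    """Two-measurement SPSA gradient estimate: perturb ALL coordinates
    at once along a random +/-1 (Bernoulli) direction delta, so only
    two evaluations of f are needed in any dimension."""
    rng = random.Random(seed)
    delta = [rng.choice((-1.0, 1.0)) for _ in theta]
    plus = f([t + c * d for t, d in zip(theta, delta)])
    minus = f([t - c * d for t, d in zip(theta, delta)])
    return [(plus - minus) / (2.0 * c * d) for d in delta]

# On the 1-D quadratic f(t) = t^2 at theta = 1, the estimate recovers
# the true gradient 2.0 regardless of the random direction drawn.
print(spsa_gradient(lambda th: sum(t * t for t in th), [1.0]))
```

In higher dimensions the estimate is noisy per step but unbiased to first order, which is what makes it usable inside the online primal-descent/dual-ascent loop the abstract describes.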
Abstract:
Coarse Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for Exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high-performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block Matrix Multiplication and block LU Decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on the communication requirements and memory sizes for a CE. Support for the high-performance custom computations common to NLA kernels is provided through custom function units (CFUs) in the CEs. We present results to justify the merits of such CFUs.
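The abstract does not reproduce the block algorithms themselves; as a reference point, the loop structure of a textbook blocked (tiled) matrix multiply, whose per-tile traffic is what such communication bounds count, can be sketched as follows (the block size b stands in for the tile that must fit in a CE's local memory):

```python
def block_matmul(A, B, n, b):
    """Blocked n x n matrix multiply with block size b. A and B are
    row-major lists of lists. Each (ii, jj, kk) tile touches three
    b x b blocks, which is the working set a compute element's local
    memory must hold."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, b):
        for jj in range(0, n, b):
            for kk in range(0, n, b):
                # Multiply-accumulate one b x b tile of C.
                for i in range(ii, min(ii + b, n)):
                    for k in range(kk, min(kk + b, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + b, n)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(block_matmul(A, B, 2, 1))  # [[19.0, 22.0], [43.0, 50.0]]
```

Each tile performs O(b^3) arithmetic on O(b^2) data, so larger local memories amortize NoC traffic better, which is the trade-off the memory-size and communication bounds formalize.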
Abstract:
In this paper we present HyperCell, a reconfigurable datapath for Instruction Extensions (IEs). HyperCell comprises an array of compute units laid over a switch network. We present an IE synthesis methodology that enables post-silicon realization of IE datapaths on HyperCell. The synthesis methodology optimally exploits the hardware resources in HyperCell to enable software-pipelined execution of IEs. Exploiting temporal reuse of data in HyperCell significantly reduces its input/output bandwidth requirements.