927 results for ES-SAGD. Heavy oil. Recovery factor. Reservoir modeling and simulation


Relevance:

100.00%

Publisher:

Abstract:

Many organizations realize that increasing amounts of data (“Big Data”) need to be dealt with intelligently in order to compete with other organizations in terms of efficiency, speed and services. The goal is not to collect as much data as possible, but to turn event data into valuable insights that can be used to improve business processes. However, data-oriented analysis approaches fail to relate event data to process models. At the same time, large organizations are generating piles of process models that are disconnected from the real processes and information systems. In this chapter we propose to manage large collections of process models and event data in an integrated manner. Observed and modeled behavior need to be continuously compared and aligned. This results in a “liquid” business process model collection, i.e. a collection of process models that is in sync with the actual organizational behavior. The collection should self-adapt to evolving organizational behavior and incorporate relevant execution data (e.g. process performance and resource utilization) extracted from the logs, thereby allowing insightful reports to be produced from factual organizational data.
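The continuous comparison of modeled and observed behavior described here is essentially conformance checking. As a minimal illustrative sketch (not the chapter's actual alignment-based technique), one can represent a model as the set of traces it allows and score an event log by the fraction of observed traces the model accepts; the order-handling activities below are invented:

```python
# A "model" here is just the set of activity sequences it allows; a real
# process model (Petri net / BPMN) would be checked via alignments instead.
def fitness(model_traces, event_log):
    """Fraction of observed traces accepted by the model."""
    if not event_log:
        return 1.0
    accepted = sum(1 for trace in event_log if tuple(trace) in model_traces)
    return accepted / len(event_log)

# Hypothetical order-handling model: register -> check -> (ship | reject).
model = {("register", "check", "ship"),
         ("register", "check", "reject")}

log = [("register", "check", "ship"),
       ("register", "check", "ship"),
       ("register", "ship"),             # deviating trace: "check" was skipped
       ("register", "check", "reject")]

print(fitness(model, log))  # 3 of 4 observed traces conform -> 0.75
```

A "liquid" collection would recompute such scores continuously as new events arrive and flag models whose fitness drifts downward.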

Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. Currently, the field is witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets (individually and in combination), and the design of specific and safer drugs. Computational modeling and simulation are important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion of the levels of abstraction of biological systems and describes the different modeling methodologies available for this purpose. It then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and for considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed, together with an idea of the promise and limitations of each and perspectives for future development.
Take home message: Systems thinking has now come of age, enabling a "bird's-eye view" of the biological systems under study while still allowing us to "zoom in", where necessary, for a detailed description of individual components. A number of the methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.

REDEFINE is a reconfigurable SoC architecture that provides a unique platform for high-performance, low-power computing by exploiting the synergistic interaction between a coarse-grain dynamic dataflow model of computation (to expose abundant parallelism in applications) and runtime composition of efficient compute structures (on the reconfigurable computation resources). We propose and study throttling of execution in REDEFINE to maximize architecture efficiency. A feature-specific, fast hybrid (mixed-level) simulation framework for early design-phase studies is developed and implemented to make the huge design-space exploration practical. We perform performance modeling in terms of selecting important performance criteria and ranking the explored throttling schemes, and we investigate the effectiveness of the design-space exploration using statistical hypothesis testing. We find throttling schemes that simultaneously give an appreciable (24.8%) overall performance gain in the architecture and a 37% resource-usage gain in the throttling unit.
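The ranking step can be illustrated with a generic hypothesis test. The sketch below (all numbers are invented; the paper's actual criteria and schemes are not reproduced here) compares simulated throughput samples from two hypothetical throttling schemes using Welch's t-statistic:

```python
import math, random

def welch_t(xs, ys):
    """Welch's t-statistic for two independent samples."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

random.seed(0)
# Hypothetical throughput samples (ops/cycle) from repeated simulation
# runs of two throttling schemes; means and spreads are made up.
scheme_a = [random.gauss(1.00, 0.05) for _ in range(30)]
scheme_b = [random.gauss(1.25, 0.05) for _ in range(30)]

t = welch_t(scheme_b, scheme_a)
# |t| well above ~2 (the usual critical value at these sample sizes) means
# the difference is unlikely to be simulation noise, so scheme B can be
# ranked above scheme A with statistical confidence.
print(round(t, 1))
```

The same test applied pairwise across a set of explored schemes yields a ranking in which only statistically significant gaps count.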

Flexible constraint-length channel decoders are required for software-defined radios. This paper presents a novel scalable scheme for realizing flexible constraint-length Viterbi decoders on a de Bruijn interconnection network. Architectures for flexible decoders using the flattened-butterfly and shuffle-exchange networks are also described. It is shown that these networks provide favourable substrates for realizing flexible convolutional decoders. Synthesis results for the three networks are provided and compared. An architecture based on a 2D mesh, a topology with a nominally smaller silicon-area requirement, is also considered as a fourth point of comparison. Of all the networks considered, the de Bruijn network is found to offer the best trade-off between area and throughput.
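Why the de Bruijn network is such a natural substrate can be seen directly: the trellis of a constraint-length-K convolutional code, whose 2^(K-1) states are shift-register contents, has exactly the edges of the binary de Bruijn graph on 2^(K-1) nodes. A small sketch verifying this (the choice K = 7 is illustrative):

```python
def debruijn_successors(s, m):
    """Successors of node s in the binary de Bruijn graph B(2, m):
    2s mod 2**m and (2s + 1) mod 2**m."""
    n = 1 << m
    return {(2 * s) % n, (2 * s + 1) % n}

def encoder_next_states(state, K):
    """Next states of a constraint-length-K convolutional encoder
    (2**(K-1) shift-register states): shift in the new input bit,
    drop the oldest bit."""
    mask = (1 << (K - 1)) - 1
    return {((state << 1) | bit) & mask for bit in (0, 1)}

K = 7  # 64-state trellis; a flexible decoder would vary K at runtime
for s in range(1 << (K - 1)):
    assert debruijn_successors(s, K - 1) == encoder_next_states(s, K)
```

Because the identity holds for every K, one physical de Bruijn interconnect can host the trellis of any constraint length up to its size, which is what makes the decoder flexible.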

Extra-high-voltage AC transmission has been developing since the end of the Second World War. Over the last 50 years, the distances between generating and load centres, as well as the amount of power to be handled, have increased tremendously. The highest commercial voltage has risen to 765 kV in India and 1,200 kV in other countries. Bulk power transmission has mostly been performed over overhead transmission lines. The dual task of mechanically supporting the live phase conductors and electrically isolating them from the support tower is performed by string insulators. Whether in clean or polluted conditions, the electrical stress distribution along the insulators governs possible flashover, which is quite detrimental to the system. Hence the present investigation aims to accurately compute the field distribution for various types of porcelain/ceramic insulators (normal and antifog discs) used for high-voltage transmission. The surface charge simulation method is employed for the field computation. A comparison of normalised surface resistance, an indicator of stress concentration under polluted conditions, is also attempted.
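The idea behind charge-simulation-style field computation can be shown in miniature: replace the electrode surface by a few fictitious point charges and solve a small linear system so that the computed potential matches the boundary potential at selected contour points. The geometry and numbers below are invented for illustration and are far simpler than an actual insulator model:

```python
import math

def solve(A, b):
    """Tiny dense solver: Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Invented geometry: a round electrode contour of radius 1 held at unit
# potential; 8 fictitious charges on an inner ring of radius 0.3.
# The kernel 1/r absorbs the 1/(4*pi*eps0) constant.
N, R, a, V0 = 8, 1.0, 0.3, 1.0

def ring(radius):
    return [(radius * math.cos(2 * math.pi * k / N),
             radius * math.sin(2 * math.pi * k / N)) for k in range(N)]

charges, contour = ring(a), ring(R)

def kernel(p, q):
    return 1.0 / math.hypot(p[0] - q[0], p[1] - q[1])

P = [[kernel(ci, qj) for qj in charges] for ci in contour]
q = solve(P, [V0] * N)   # fictitious charge magnitudes

# Validate at a contour point *between* the matching points:
mid = (R * math.cos(math.pi / N), R * math.sin(math.pi / N))
v_mid = sum(qk * kernel(mid, ck) for qk, ck in zip(q, charges))
print(abs(v_mid - V0))   # small residual: the surrogate charges reproduce V0
```

Once the surrogate charges are known, the field anywhere off the surface follows by superposition; production codes use many more charges and realistic insulator contours.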

Prime movers and refrigerators based on thermoacoustics have gained considerable importance for practical applications in view of their absence of moving components, reasonable efficiency, use of environmentally friendly working fluids, etc. Devices such as the twin Standing Wave ThermoAcoustic Prime Mover (SWTAPM), the Traveling Wave ThermoAcoustic Prime Mover (TWTAPM) and the thermoacoustically driven Standing Wave ThermoAcoustic Refrigerator (SWTAR) have been studied by researchers, and numerical modeling and simulation play a vital role in their development. In our efforts to build these thermoacoustic systems, we have carried out numerical analysis of them using CFD procedures. The results of the analysis are compared with those of DeltaEC (freeware from LANL, USA) simulations and with experimental results wherever possible. For the CFD analysis, the commercial code Fluent 6.3.26 has been used, along with the necessary boundary conditions, for different working fluids at various average pressures. The simulation results indicate that the choice of working fluid and the average pressure are critical to the performance of these thermoacoustic devices. It is also observed that the CFD predictions are closer to the experimental results in most cases than those of the DeltaEC simulations. (C) 2015 Elsevier Ltd. All rights reserved.

We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial due to large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimal staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving it. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels incrementally. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest.
From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable for adaptive labor staffing in real-life service systems.
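The core SPSA idea, estimating a gradient from only two cost evaluations per step regardless of dimension, can be sketched in a few lines. The toy below omits the paper's constraint handling and smooth discrete projection and minimizes an invented quadratic staffing cost:

```python
import random

def spsa_minimize(f, theta, iters=2000, a=0.2, c=0.1, seed=1):
    """Plain SPSA: estimate the gradient from two cost evaluations per
    step using a random simultaneous perturbation. (The paper's
    algorithms add Lagrangian dual ascent and a smooth projection onto
    discrete staffing levels, both omitted here.)"""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)    # two evaluations, any dimension
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Toy single-stage cost: quadratic penalty around hypothetical ideal
# staffing levels (5, 3, 8) -- invented numbers, purely illustrative.
ideal = (5.0, 3.0, 8.0)
cost = lambda th: sum((t - i) ** 2 for t, i in zip(th, ideal))

result = spsa_minimize(cost, [0.0, 0.0, 0.0])
print([round(t, 1) for t in result])
```

The two-evaluation property is what keeps the weekly re-optimization cheap even when the staffing vector is high-dimensional.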

Coarse-Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high-performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block matrix multiplication and block LU decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on the communication requirements and memory sizes for a CE. Support for the high-performance custom computations common to NLA kernels is provided through custom function units (CFUs) in the CEs. We present results that justify the merits of such CFUs.
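The communication benefit of block algorithms can be illustrated with a tiled matrix multiply: keeping b x b tiles in CE-local memory means tiles cross the NoC (n/b)^3 times per operand instead of once per scalar operation. The tile-fetch counter below is a simplified proxy, not the paper's bound:

```python
def blocked_matmul(A, B, b):
    """Block matrix multiply: C[I][J] += A[I][K] * B[K][J] over b x b tiles.
    Counts tile fetches as a proxy for NoC communication volume."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    fetches = 0
    for I in range(0, n, b):
        for J in range(0, n, b):
            for K in range(0, n, b):
                fetches += 2          # one A tile + one B tile into the CE
                for i in range(I, min(I + b, n)):
                    for k in range(K, min(K + b, n)):
                        aik = A[i][k]
                        for j in range(J, min(J + b, n)):
                            C[i][j] += aik * B[k][j]
    return C, fetches

n = 8
A = [[float(i + j) for j in range(n)] for i in range(n)]
B = [[float(i * j % 5) for j in range(n)] for i in range(n)]

C1, f1 = blocked_matmul(A, B, 1)   # scalar "tiles": (8/1)**3 * 2 = 1024 fetches
C4, f4 = blocked_matmul(A, B, 4)   # 4x4 tiles:      (8/4)**3 * 2 = 16 fetches
assert C1 == C4 and f4 < f1
```

The same result, 64x fewer transfers at the cost of 2 b^2 words of CE-local storage, is the trade-off the CE memory-sizing bounds formalize.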

In this paper we present HyperCell, a reconfigurable datapath for Instruction Extensions (IEs). HyperCell comprises an array of compute units laid over a switch network. We present an IE synthesis methodology that enables post-silicon realization of IE datapaths on HyperCell. The synthesis methodology optimally exploits the hardware resources in HyperCell to enable software-pipelined execution of IEs. Exploiting temporal reuse of data in HyperCell yields a significant reduction in its input/output bandwidth requirements.
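The effect of temporal reuse on I/O bandwidth can be illustrated with a 3-tap sliding-window filter: without reuse every output re-fetches its whole window, while a small register window lets each input be fetched exactly once. The filter and read-counting below are purely illustrative, not HyperCell's actual datapath:

```python
def fir_no_reuse(x, h):
    """Each output re-fetches its whole input window: len(h) reads/output."""
    reads, y = 0, []
    for i in range(len(x) - len(h) + 1):
        window = x[i:i + len(h)]
        reads += len(h)
        y.append(sum(w * c for w, c in zip(window, h)))
    return y, reads

def fir_with_reuse(x, h):
    """Keep the window in local registers; fetch each input exactly once."""
    reads, window, y = 0, [], []
    for v in x:
        reads += 1
        window = (window + [v])[-len(h):]     # shift the reuse registers
        if len(window) == len(h):
            y.append(sum(w * c for w, c in zip(window, h)))
    return y, reads

x = list(range(100))
h = [1, 2, 1]
y1, r1 = fir_no_reuse(x, h)
y2, r2 = fir_with_reuse(x, h)
assert y1 == y2
print(r1, r2)   # 294 vs 100 input reads: ~3x less bandwidth with reuse
```

In a spatial fabric the "registers" are the compute-unit-local storage, so the saved reads translate directly into reduced input bandwidth at the array boundary.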

We present 3D thermo-electro-mechanical device simulations of a novel, fully CMOS-compatible MOSFET gas sensor operating on an SOI membrane. A comprehensive stress analysis of a Si-SiO2-based multilayer membrane has been performed to ensure a high degree of mechanical reliability at high operating temperatures (up to 400°C). Moreover, the layout dimensions of the SOI membrane, in particular the aspect ratio between membrane length and membrane thickness, have been optimised to find the best trade-off between minimal device power consumption and acceptable mechanical stress.

Multiscale coupling attracts broad interest across mechanics, physics, chemistry and biology. The diversity of physics at different scales, and the coupling between scales, are two essential features of multiscale problems in far-from-equilibrium systems. These two features present fundamental difficulties and great challenges for multiscale modeling and simulation. The theory of dynamical systems and statistical mechanics provide fundamental tools for multiscale coupling problems. This paper presents some closed multiscale formulations, e.g. the mapping closure approximation, multiscale large-eddy simulation and statistical mesoscopic damage mechanics, for two typical multiscale coupling problems in mechanics: turbulence in fluids and failure in solids. It is pointed out that developing a tractable, closed nonequilibrium statistical theory may be an effective approach to the multiscale coupling problems. Some common characteristics of such statistical theories are discussed.

Spallation in heterogeneous media is a complex, dynamic process. Generally speaking, the spallation process involves multiple scales, and the diversity and coupling of physics at different scales present two fundamental difficulties for spallation modeling and simulation. More importantly, these difficulties can be greatly amplified by disordered heterogeneity across scales. In this paper, a driven nonlinear threshold model for damage evolution in heterogeneous materials is presented and a trans-scale formulation of damage evolution is obtained. The damage evolution in spallation is analyzed with this formulation. Scaling of the formulation reveals that a few dimensionless numbers govern the whole process of deformation and damage evolution. The effects of heterogeneity, characterized by the Weibull modulus, on damage evolution during spallation are also investigated.
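A minimal sample-level version of such a threshold model: give each mesoscopic element a failure threshold drawn from a Weibull law with modulus m, so damage at stress s is the fraction of thresholds exceeded. A higher modulus (less disorder) produces a sharper damage transition. All parameters below are invented:

```python
import math, random

def damage_fraction(sigma, thresholds):
    """Damage D = fraction of elements whose failure threshold is exceeded."""
    return sum(t <= sigma for t in thresholds) / len(thresholds)

def weibull_thresholds(n, m, sigma0, rng):
    """Sample failure thresholds from a Weibull law with modulus m:
    P(threshold <= s) = 1 - exp(-(s/sigma0)**m)."""
    return [sigma0 * (-math.log(1.0 - rng.random())) ** (1.0 / m)
            for _ in range(n)]

rng = random.Random(42)
low_m = weibull_thresholds(100_000, m=2.0, sigma0=1.0, rng=rng)    # disordered
high_m = weibull_thresholds(100_000, m=20.0, sigma0=1.0, rng=rng)  # near-uniform

def transition_width(thresholds):
    """Stress interval over which damage grows from 0.1 to 0.9."""
    s = sorted(thresholds)
    return s[int(0.9 * len(s))] - s[int(0.1 * len(s))]

print(transition_width(low_m), transition_width(high_m))
```

The narrower transition at high modulus mirrors the model's prediction that weak disorder concentrates failure into a sudden, near-deterministic spall event, while strong disorder spreads damage accumulation over a wide stress range.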