88 results for Real Plan
Abstract:
Fiber Bragg grating (FBG) sensors have been widely used for a number of sensing applications, such as temperature, pressure, acousto-ultrasonic, static and dynamic strain, and refractive-index-change measurements. The present work demonstrates the use of FBG sensors for in-situ monitoring of a vacuum process, with simultaneous leak-detection capability. Experiments were conducted in a bell-jar vacuum chamber equipped with a conventional Pirani gauge for vacuum measurement. Three different experiments were conducted to validate the performance of the FBG sensor in monitoring the evacuation process and air bleeding. The preliminary results of the FBG sensors in vacuum monitoring are compared with those of a commercial Pirani gauge. This novel technique offers a simple alternative to the conventional method for real-time monitoring of the evacuation process. The proposed FBG-based vacuum sensor has potential applications in vacuum systems in hazardous environments, such as chemical and gas plants, automobile industries, aeronautical establishments, and leak monitoring in process industries, where electrical or MEMS-based sensors are prone to explosion and corrosion.
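The abstract does not state the transduction relations, but the textbook FBG equations make the sensing principle concrete; the idea that the vacuum level reaches the grating as strain (e.g., via a diaphragm-mounted fiber) is our assumption, not a detail taken from the paper.

```latex
% Textbook FBG relations (not taken from the abstract).
% Bragg reflection wavelength for grating period \Lambda and effective index n_eff:
\[ \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda \]
% Relative wavelength shift under strain \varepsilon and temperature change \Delta T,
% with photo-elastic coefficient p_e, thermal expansion \alpha, thermo-optic coefficient \xi:
\[ \frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon + (\alpha + \xi)\,\Delta T \]
```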
Abstract:
We consider the following question: Let $S_1$ and $S_2$ be two smooth, totally-real surfaces in $\mathbb{C}^2$ that contain the origin. If the union of their tangent planes is locally polynomially convex at the origin, then is $S_1 \cup S_2$ locally polynomially convex at the origin? If $T_0S_1 \cap T_0S_2 = \{0\}$, then it is a folk result that the answer is yes. We discuss an obstruction to the presumed proof, and provide a different approach. When $\dim_{\mathbb{R}}(T_0S_1 \cap T_0S_2) = 1$, we present a geometric condition under which no consistent answer to the above question exists. We then discuss conditions under which we can expect local polynomial convexity.
Abstract:
A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places, and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called "load index" and "balance index," which characterize the resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy, so that system utilization is maximized and the workload distribution among the computing resources is balanced.
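The two indices are defined only informally in the abstract. Below is a hypothetical formalization, a minimal sketch assuming load index = mean resource utilization and balance index = one minus the coefficient of variation of per-resource workload; the paper's actual definitions may differ.

```python
# Hypothetical stand-ins for the "load index" and "balance index" of the
# abstract; the paper's exact definitions are not given here.
from statistics import mean, pstdev

def load_index(utilizations):
    """Average fraction of time each resource is busy (0..1)."""
    return mean(utilizations)

def balance_index(workloads):
    """1.0 when the workload is perfectly even across resources."""
    mu = mean(workloads)
    if mu == 0:
        return 1.0
    return 1.0 - pstdev(workloads) / mu

# Example: three processors with measured busy fractions and task counts.
print(load_index([0.80, 0.65, 0.75]))   # ~0.733
print(balance_index([40, 35, 45]))      # ~0.898 -> fairly well balanced
```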
Abstract:
Biomedical engineering solutions like surgical simulators need High Performance Computing (HPC) to achieve real-time performance. Graphics Processing Units (GPUs) offer HPC capabilities at low cost and low power consumption. In this work, it is demonstrated that a liver discretized by about 2500 finite element nodes can be graphically simulated in real time by making use of a GPU. The present work takes into consideration the time needed for data transfer from CPU to GPU and back from GPU to CPU. Although the behaviour of the liver is very complicated, the present computer simulation assumes linear elastostatics. The commercial software ANSYS is used to obtain the global stiffness matrix of the liver. Results show that GPUs are useful for real-time graphical simulation of the liver, which in turn is needed in simulators used for training surgeons in laparoscopic surgery. Although the computer simulation should also involve rendering, neither rendering nor the time needed for rendering and displaying the liver on a screen is considered in the present work. The present work is just a demonstration of a concept; the concept is not fully implemented and validated here. Future work is to develop software that can accomplish real-time and very realistic graphical simulation of the liver, with the rendered image of the liver on the screen changing in real time according to the position of the surgical tool tip, approximated as the mouse cursor in 3D.
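A minimal sketch of the described pipeline, assuming CuPy as the GPU library (the paper names only ANSYS, which is used offline to export the global stiffness matrix K). Since the model is linear elastostatic, K can be inverted once; each frame then reduces to a matrix-vector product bracketed by the CPU-GPU transfers whose time the work accounts for.

```python
# Sketch only: library choice (CuPy) and the stand-in stiffness matrix are
# assumptions; the paper exports K from ANSYS.
import numpy as np
import cupy as cp

n_dof = 3 * 2500                        # ~2500 FE nodes, 3 DOF each
# Stand-in for the ANSYS-exported stiffness matrix, kept runnable with a
# random symmetric positive-definite matrix:
A = np.random.rand(n_dof, n_dof)
K = A @ A.T + n_dof * np.eye(n_dof)

K_inv_gpu = cp.linalg.inv(cp.asarray(K))    # one-time precomputation on GPU

def solve_displacements(f_cpu: np.ndarray) -> np.ndarray:
    """Per-frame solve: forces up to the GPU, mat-vec, displacements back."""
    f_gpu = cp.asarray(f_cpu)               # CPU -> GPU transfer
    u_gpu = K_inv_gpu @ f_gpu               # dense mat-vec on the GPU
    return cp.asnumpy(u_gpu)                # GPU -> CPU transfer

f = np.zeros(n_dof)
f[123] = -1.0                               # a point load on one DOF
u = solve_displacements(f)
print(u.shape)                              # (7500,)
```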
Abstract:
Real-time simulation of deformable solids is essential for applications such as biological organ simulation in surgical simulators. In this work, deformable solids are approximated as linear elastic, and an easy and straightforward numerical technique, the Finite Point Method (FPM), is used to model three-dimensional linear elastostatics. A Graphics Processing Unit (GPU) is used to accelerate the computations. Results show that the Finite Point Method, together with a GPU, can compute the three-dimensional linear elastostatic responses of solids at rates suitable for real-time graphics, for solids represented by a reasonable number of points.
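The abstract leaves the FPM machinery implicit. The kernel of the method is a local weighted-least-squares fit around each point; the sketch below computes the resulting shape-function weights for one star point, using a Gaussian weight and a linear basis (common choices, not necessarily the paper's).

```python
# Local weighted-least-squares fit at the heart of the Finite Point Method:
# around a star point x_star, the field u(x) ~ p(x)^T a is fitted over
# neighbouring points with a distance-based weight.
import numpy as np

def fpm_shape_functions(x_star, neighbors, h):
    """Return weights phi such that u(x_star) ~ phi @ u_neighbors."""
    p = lambda x: np.array([1.0, *(x - x_star)])     # linear basis, shifted
    P = np.array([p(x) for x in neighbors])          # (m, 4) basis matrix
    r2 = np.sum((neighbors - x_star) ** 2, axis=1)
    W = np.diag(np.exp(-r2 / h**2))                  # Gaussian weights
    A = P.T @ W @ P                                  # moment matrix (4, 4)
    C = np.linalg.solve(A, P.T @ W)                  # (4, m)
    return p(x_star) @ C                             # evaluate at the star

pts = np.random.rand(10, 3)
phi = fpm_shape_functions(pts[0], pts, h=0.5)
print(phi.sum())   # ~1.0: the weights reproduce constant fields
```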
Abstract:
Let $\mathbb{D}$ denote the open unit disk in $\mathbb{C}$ centered at $0$. Let $H^\infty_{\mathbb{R}}$ denote the set of all bounded holomorphic functions $f$ defined on $\mathbb{D}$ that also satisfy $f(z) = \overline{f(\bar{z})}$ for all $z \in \mathbb{D}$. It is shown that $H^\infty_{\mathbb{R}}$ is a coherent ring (that is, every finitely generated ideal of $H^\infty_{\mathbb{R}}$ is finitely presented).
Abstract:
A scheme to apply rate-1 real orthogonal designs (RODs) in relay networks, with single real-symbol decodability of the symbols at the destination for an arbitrary number of relays, is proposed. In the case where the relays do not have any information about the channel gains from the source to themselves, the best known distributed space-time block codes (DSTBCs) for k relays with single real-symbol decodability offer an overall rate of complex symbols per channel use. The scheme proposed in this paper offers an overall rate of 2/(2+k) complex symbols per channel use, which is independent of the number of relays. Furthermore, in the scenario where the relays have partial channel information in the form of channel-phase knowledge, the best known DSTBCs with single real-symbol decodability offer an overall rate of 1/3 complex symbols per channel use. In this paper, making use of RODs, a scheme is presented which achieves the same overall rate of 1/3 complex symbols per channel use, but with a decoding delay that is 50 percent of that of the best known DSTBCs. Simulation results of the symbol error rate performance for 10 relays are provided, showing the superiority of the proposed scheme over the best known DSTBC for 10 relays with single real-symbol decodability.
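The abstract does not reproduce the designs themselves; for concreteness, here is the classical rate-1 ROD for four real symbols (the quaternionic construction), together with a check of the defining property X^T X = (sum_i x_i^2) I. How the scheme maps such a design onto k relays is the paper's contribution and is not shown here.

```python
# Classical 4x4 rate-1 real orthogonal design and a check of its
# defining orthogonality property.
import numpy as np

def rod4(x1, x2, x3, x4):
    return np.array([
        [ x1,  x2,  x3,  x4],
        [-x2,  x1, -x4,  x3],
        [-x3,  x4,  x1, -x2],
        [-x4, -x3,  x2,  x1],
    ])

x = np.array([0.3, -1.2, 0.7, 2.0])
X = rod4(*x)
print(np.allclose(X.T @ X, np.sum(x**2) * np.eye(4)))   # True
```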
Abstract:
A framework based on the notion of "conflict-tolerance" was proposed in earlier work as a compositional methodology for developing and reasoning about systems that comprise multiple independent controllers. A central notion in this framework is that of a "conflict-tolerant" specification for a controller. In this work we propose a way of defining conflict-tolerant real-time specifications in Metric Interval Temporal Logic (MITL). We call our logic CT-MITL, for Conflict-Tolerant MITL. We then give a clock-optimal "delay-then-extend" construction for building a timed transition system for monitoring past-MITL formulas. We show how this monitoring transition system can be used to solve the associated verification and synthesis problems for CT-MITL.
Abstract:
Real-time kinetics of ligand-ligate interactions has predominantly been studied by either fluorescence- or surface plasmon resonance-based methods. Almost all such studies are based on association between the ligand and the ligate. This paper reports our analysis of dissociation data for a monoclonal antibody-antigen (hCG) system, using radio-iodinated hCG as a probe and nitrocellulose as a solid support to immobilize the mAb. The data were analyzed quantitatively against a one-step and a two-step model, and fit the two-step model well. We also found that a fraction of what is bound is non-dissociable; we call this the tight-binding portion (TBP). The TBP was neither an artifact of immobilization, nor does it interfere with the analysis; it was also present when the reaction was carried out in homogeneous solution in the liquid phase. The rate constants obtained from the two methods were comparable. The work reported here shows that real-time kinetics of other ligand-ligate interactions can be studied using nitrocellulose as a solid support. (C) 2002 Elsevier Science B.V. All rights reserved.
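One plausible quantitative form of the two-step analysis, sketched below with synthetic data: bound radioactivity decays as a double exponential on top of a non-dissociable plateau (the TBP). The model parameterization and parameter values are assumptions for illustration, not the paper's.

```python
# Fit of a two-step dissociation model with a non-dissociable plateau (TBP)
# to synthetic data; model form and values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def two_step(t, a1, k1, a2, k2, tbp):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + tbp

t = np.linspace(0, 120, 60)                      # minutes
true = two_step(t, 0.5, 0.15, 0.3, 0.01, 0.2)
data = true + np.random.normal(0, 0.01, t.size)  # synthetic noisy counts

popt, _ = curve_fit(two_step, t, data,
                    p0=[0.5, 0.1, 0.3, 0.01, 0.1],
                    bounds=(0, [1, 1, 1, 1, 1]))
print(dict(zip(["a1", "k1", "a2", "k2", "tbp"], popt.round(3))))
```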
Abstract:
Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria for deciding when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
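As a rough illustration of two of the ingredients named above, grid sampling plus nearest-neighbour inference, the following toy sketch fills in a plan diagram from a sparse sample. The optimizer_plan_choice stub is hypothetical; a real implementation would call the optimizer's API at each sampled point.

```python
# Toy approximation of a 2-D plan diagram: sample a coarse grid, then
# classify every unsampled point by its nearest sampled neighbour.
import numpy as np
from scipy.spatial import cKDTree

def optimizer_plan_choice(sel_x, sel_y):
    """Hypothetical stub returning an integer plan id for a selectivity pair."""
    return int(sel_x * 3) * 4 + int(sel_y * 4)

res = 100                                   # target diagram resolution
grid = np.linspace(0, 0.999, res)
xs, ys = np.meshgrid(grid, grid)

step = 10                                   # sample only every 10th point
sample_pts = [(x, y) for x in grid[::step] for y in grid[::step]]
sample_ids = [optimizer_plan_choice(x, y) for x, y in sample_pts]

tree = cKDTree(sample_pts)                  # NN classifier over the samples
all_pts = np.column_stack([xs.ravel(), ys.ravel()])
_, idx = tree.query(all_pts)
diagram = np.array(sample_ids)[idx].reshape(res, res)
print(f"sampled {len(sample_pts)} of {res*res} points")
```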
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 K samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering, and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
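As a flavour of the channel-parallel processing described above, here is a toy sketch that farms the 64 electrode channels out to a pool of 8 worker processes (mirroring the 8-processor setup), each running a simple amplitude-threshold spike detector. The detection rule and the synthetic data are our assumptions; the actual pipeline (noise removal, pattern matching, clustering) is far richer.

```python
# Channel-parallel threshold detection over 64 channels x 200 ms of data.
import numpy as np
from multiprocessing import Pool

FS = 20_000          # 20 K samples per second, per the abstract

def detect_spikes(channel):
    """Return sample indices where the signal crosses 5x the noise estimate."""
    noise = np.median(np.abs(channel)) / 0.6745     # robust sigma estimate
    return np.flatnonzero(channel < -5.0 * noise)

if __name__ == "__main__":
    data = np.random.randn(64, FS * 200 // 1000)    # 64 channels x 200 ms
    with Pool(8) as pool:                           # 8 workers, as in the paper
        spikes = pool.map(detect_spikes, data)
    print(sum(len(s) for s in spikes), "threshold crossings")
```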
Abstract:
A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing useability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPCH-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels incurring only marginal increases in the estimated query processing costs.