7 results for TPC
at Indian Institute of Science - Bangalore - India
Abstract:
X-ray polarimeters based on Time Projection Chamber (TPC) geometry are currently being studied and developed to make sensitive measurements of polarization in the 2-10 keV energy range. TPC soft X-ray polarimeters exploit the fact that the emission direction of the photoelectron ejected via the photoelectric effect in a gas proportional counter carries information about the polarization of the incident X-ray photon. Operating parameters such as pressure, drift field and drift gap affect the performance of a TPC polarimeter. The simulations presented here showcase the effect of these operating parameters on the modulation factor of the TPC polarimeter. Garfield models are used to study the photoelectron interaction in the gas and the drift of the electron cloud towards the Gas Electron Multiplier (GEM). The emission direction is reconstructed from the image, and the modulation factor is computed. Our study has shown that Ne/DME (50/50) at lower pressure and drift field can be used for a TPC polarimeter with a modulation factor of 50-65%.
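The modulation-factor computation the abstract refers to can be illustrated with a short sketch (this is not the authors' code; the emission angles below are synthetic, drawn from an assumed modulation curve N(φ) ∝ 1 + μ·cos 2(φ − φ₀)):

```python
import numpy as np

# Assumed "true" modulation and polarization angle for the synthetic data.
rng = np.random.default_rng(42)
mu_true, phi0 = 0.5, 0.3

def sample_angles(n):
    """Rejection-sample photoelectron emission angles from the modulated
    distribution p(phi) proportional to 1 + mu*cos(2*(phi - phi0))."""
    out = []
    while len(out) < n:
        phi = rng.uniform(0, np.pi)
        if rng.uniform(0, 1 + mu_true) < 1 + mu_true * np.cos(2 * (phi - phi0)):
            out.append(phi)
    return np.array(out)

angles = sample_angles(20000)

# Histogram the reconstructed angles and fit a0 + a1*cos(2phi) + a2*sin(2phi)
# by linear least squares; the modulation factor is sqrt(a1^2 + a2^2) / a0,
# equivalent to (Nmax - Nmin) / (Nmax + Nmin) of the modulation curve.
counts, edges = np.histogram(angles, bins=36, range=(0, np.pi))
centers = 0.5 * (edges[:-1] + edges[1:])
basis = np.column_stack([np.ones_like(centers),
                         np.cos(2 * centers), np.sin(2 * centers)])
a0, a1, a2 = np.linalg.lstsq(basis, counts, rcond=None)[0]
mu_est = np.hypot(a1, a2) / a0
print(f"estimated modulation factor: {mu_est:.2f}")
```

With enough events the fitted value converges to the modulation of the underlying distribution; in the simulations above, the same fit is applied to angles reconstructed from GEM readout images rather than sampled analytically.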
Abstract:
Time Projection Chamber (TPC) based X-ray polarimeters using Gas Electron Multipliers (GEMs) are currently being developed to make sensitive measurements of polarization in the 2-10 keV energy range. The emission direction of the photoelectron ejected via the photoelectric effect carries information about the polarization of the incident X-ray photon. The performance of a gas-based polarimeter is affected by operating drift parameters such as gas pressure, drift field and drift gap. We present simulation studies carried out in order to understand the effect of these operating parameters on the modulation factor of a TPC polarimeter. Garfield models are used to study the photoelectron interaction in the gas and the drift of the electron cloud towards the GEM. Our study is aimed at achieving higher modulation factors by optimizing the drift parameters. The study has shown that Ne/DME (50/50) at lower pressure and drift field can lead to the desired performance of a TPC polarimeter.
Abstract:
An understanding of application I/O access patterns is useful in several situations. First, gaining insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that mimic realistic application behavior closely. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in the traces, and to mark out regions in a long trace that encapsulate sets of primitive operations that represent higher-level application actions. It is robust enough that it can work around discrepancies between training and target traces, such as in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting.
We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications such as compilations, as well as on industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
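The recognition step the abstract describes (scoring an operation trace against per-application models and picking the best match) can be sketched with a plain discrete HMM scored by the forward algorithm; a true Profile HMM adds match/insert/delete states, but the scoring idea is the same. All operation names, state counts and probabilities below are invented for illustration:

```python
import numpy as np

# Hypothetical NFS primitive operations forming the observation alphabet.
OPS = ["lookup", "read", "write", "getattr"]
op_idx = {o: i for i, o in enumerate(OPS)}

def log_forward(seq, start, trans, emit):
    """Log-likelihood of an observation sequence under an HMM (forward
    algorithm in log space to avoid underflow on long traces)."""
    alpha = np.log(start) + np.log(emit[:, op_idx[seq[0]]])
    for op in seq[1:]:
        alpha = (np.logaddexp.reduce(alpha[:, None] + np.log(trans), axis=0)
                 + np.log(emit[:, op_idx[op]]))
    return np.logaddexp.reduce(alpha)

# Two toy two-state "application profiles" (parameters invented; in the
# paper these would be learned from training traces).
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.2, 0.8]])
emit_compile = np.array([[0.5, 0.3, 0.1, 0.1],   # lookup-heavy phase
                         [0.1, 0.6, 0.2, 0.1]])  # read-heavy phase
emit_oltp    = np.array([[0.1, 0.2, 0.6, 0.1],   # write-heavy phases
                         [0.2, 0.2, 0.5, 0.1]])

trace = ["lookup", "lookup", "read", "read", "read", "getattr"]
scores = {"compile": log_forward(trace, start, trans, emit_compile),
          "oltp":    log_forward(trace, start, trans, emit_oltp)}
print(max(scores, key=scores.get))  # the model that best explains the trace
```

The same per-window scoring, slid along a long trace, is what lets the method mark out regions corresponding to higher-level application actions.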
Abstract:
Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria of when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
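The "anorexic" reduction idea can be illustrated with a toy sketch (not the Picasso implementation; the cost matrix is random and the greedy policy is a simplification): a point's optimal plan may be swapped for one of a few retained plans, provided the replacement's cost at that point stays within a cost-increase threshold λ:

```python
import numpy as np

# Invented data: cost of each of 6 candidate plans at 400 points of the
# selectivity space. costs[p, plan] = estimated cost of `plan` at point p.
rng = np.random.default_rng(1)
n_points, n_plans = 400, 6
costs = rng.uniform(1.0, 2.0, size=(n_points, n_plans))
optimal = costs.argmin(axis=1)          # the optimizer's plan diagram
lam = 0.2                               # allow up to 20% cost increase

# Greedy simplification: keep the two most frequent plans, then remap every
# other point to the cheapest kept plan, but only when that plan's cost at
# the point is within (1 + lam) times the optimal cost there.
keep = set(np.bincount(optimal, minlength=n_plans).argsort()[::-1][:2])
reduced = optimal.copy()
for p in range(n_points):
    if optimal[p] in keep:
        continue
    best_kept = min(keep, key=lambda pl: costs[p, pl])
    if costs[p, best_kept] <= (1 + lam) * costs[p, optimal[p]]:
        reduced[p] = best_kept
print(f"plans before: {len(set(optimal))}, after: {len(set(reduced))}")
```

The abstract's point is precisely that such a swap can be safe or severely counter-productive depending on how plan costs behave across the parameter space, which is what the proposed characterization decides.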
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
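The sampling-plus-inference approach for generic optimizers can be sketched as follows (a toy stand-in, not the Picasso code: the "optimizer" below is an invented black box mapping a 2-D selectivity point to a plan id, and the inference is brute-force 1-nearest-neighbor):

```python
import numpy as np

def stub_optimizer(x, y):
    """Invented stand-in for an optimizer API call; returns a plan id for
    the query instance at selectivities (x, y)."""
    if x + y < 0.5:
        return 0            # e.g. a nested-loops plan in one region
    return 1 if x > y else 2

res = 50                    # grid resolution per dimension
grid = [(i / res, j / res) for i in range(res) for j in range(res)]
truth = np.array([stub_optimizer(x, y) for x, y in grid])  # exhaustive diagram

# Sample only 10% of the grid points, then infer every remaining point's
# plan from its nearest sampled neighbor (brute force here; a spatial index
# would be used at scale).
rng = np.random.default_rng(7)
sample_idx = rng.choice(len(grid), size=len(grid) // 10, replace=False)
pts = np.array(grid)
sampled, labels = pts[sample_idx], truth[sample_idx]
d2 = ((pts[:, None, :] - sampled[None, :, :]) ** 2).sum(axis=2)
pred = labels[d2.argmin(axis=1)]
accuracy = (pred == truth).mean()
print(f"diagram accuracy with 10% optimizer calls: {accuracy:.2%}")
```

Errors concentrate along plan boundaries, which is why the paper's techniques add boundary-aware inference (parametric query optimization, plan cost monotonicity) on top of plain sampling.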
Abstract:
Workstation clusters equipped with high-performance interconnects having programmable network processors offer interesting opportunities to enhance the performance of parallel applications run on them. In this paper, we propose schemes in which certain application-level processing in parallel database query execution is performed on the network processor. We evaluate the performance of TPC-H queries executing on a high-end cluster where all tuple processing is done on the host processor, using a timed Petri net model, and find that tuple processing costs on the host processor dominate the execution time. These results are validated using a small cluster. We therefore propose four schemes in which certain tuple processing activity is offloaded to the network processor. The first two schemes offload the tuple splitting activity - the computation to identify the node on which to process the tuples - resulting in an execution time speedup of 1.09 relative to the base scheme, but with the I/O bus becoming the bottleneck resource. In the third scheme, in addition to offloading tuple processing activity, the disk and network interface are combined to avoid the I/O bus bottleneck, which results in speedups of up to 1.16, but with high host processor utilization. Our fourth scheme, where the network processor also performs a part of the join operation along with the host processor, gives a speedup of 1.47 along with balanced system resource utilization. Further, we observe that the proposed schemes perform equally well even in a scaled architecture, i.e., when the number of processors is increased from 2 to 64.
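The bottleneck-shifting reasoning behind these schemes can be shown with a back-of-envelope throughput model (a much cruder stand-in than the paper's timed Petri net model; all per-tuple service demands below are invented):

```python
def throughput(demands):
    """Sustainable tuples/sec under a simple bottleneck model: the resource
    with the largest per-tuple service demand saturates first."""
    return 1.0 / max(demands.values())

# Invented per-tuple demands (seconds/tuple) on each resource.
base    = {"host_cpu": 10e-6, "nic_cpu": 2e-6, "io_bus": 6e-6}
# Offloading tuple splitting moves work from the host CPU to the NIC's
# network processor; the bottleneck shifts away from the host CPU.
offload = {"host_cpu": 5e-6, "nic_cpu": 6e-6, "io_bus": 6e-6}

speedup = throughput(offload) / throughput(base)
print(f"modeled speedup: {speedup:.2f}")
```

Even this crude model reproduces the qualitative finding: offloading helps only until another resource (here the I/O bus and NIC, tied at 6 µs/tuple) becomes the new bottleneck, which is what motivates the later schemes that rebalance the disk and network paths.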
Abstract:
Lateral appendages often show allometric growth with a specific growth polarity along the proximo-distal axis. Studies on leaf growth in model plants have identified a basipetal growth direction with the highest growth rate at the proximal end and progressively lower rates toward the distal end. Although the molecular mechanisms governing such a growth pattern have been studied recently, variation in leaf growth polarity and, therefore, its evolutionary origin remain unknown. By surveying 75 eudicot species, here we report that leaf growth polarity is divergent. Leaf growth in the proximo-distal axis is polar, with more growth arising from either the proximal or the distal end; dispersed with no apparent polarity; or bidirectional, with more growth contributed by the central region and less growth at either end. We further demonstrate that the expression gradient of the miR396-GROWTH-REGULATING FACTOR module strongly correlates with the polarity of leaf growth. Altering the endogenous pattern of miR396 expression in transgenic Arabidopsis thaliana leaves only partially modified the spatial pattern of cell expansion, suggesting that the diverse growth polarities might have evolved via concerted changes in multiple gene regulatory networks.