166 results for execution traces
Abstract:
This paper sets out the motivation for carrying out an observational experiment on the atmospheric boundary layer along the monsoon trough, in the light of earlier studies of the atmospheric boundary layer in India and elsewhere, and the significant role that the trough has been shown to play as a key semi-permanent feature of the southwest monsoon. The scientific objectives of the experiment are set out, and its planning and execution are touched upon. Some of the gains resulting from the experiment are mentioned, and lessons for the future about the conduct of such programmes are drawn.
Abstract:
Tower platforms, with instrumentation at six levels above the surface up to a height of 30 m, were used to record various atmospheric parameters in the surface layer. Sensors for measuring both mean and fluctuating quantities were used, the majority of them indigenously built. Soil temperature sensors, installed to a depth of 30 cm below the surface, were also connected to the mean data logger. A PC-based data acquisition system built at the Centre for Atmospheric Sciences, IISc, was used to acquire the data from the fast-response sensors. This paper reports the various components of a typical MONTBLEX tower observatory and describes the actual experiments carried out in the surface layer at four sites over the monsoon trough region as a part of the MONTBLEX programme. It also describes and discusses several checks made on randomly selected tower data sets acquired during the experiment. The checks include visual inspection of time traces from various sensors, comparative plots of sensors measuring the same variable, wind and temperature profile plots, calculation of roughness lengths, statistical and stability parameters, diurnal variation of stability parameters, and plots of probability density and energy spectra for the different sensors. Results from these checks are very encouraging and reveal the potential for further detailed analysis to understand more about surface-layer characteristics.
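One of the profile checks mentioned above, the roughness length calculation, typically comes from fitting the neutral logarithmic wind profile u(z) = (u*/k) ln(z/z0) to the tower winds. The Python sketch below illustrates such a fit; the level heights and wind speeds are made-up illustrative numbers, not MONTBLEX data.

    import numpy as np

    # Heights of the six tower levels (m) and hypothetical mean winds (m/s).
    z = np.array([1.0, 2.0, 4.0, 8.0, 15.0, 30.0])
    u = np.array([2.1, 2.6, 3.0, 3.5, 3.9, 4.4])

    # Neutral log law: u(z) = (u*/k)*ln(z) - (u*/k)*ln(z0), i.e. linear in ln(z).
    k = 0.4                                  # von Karman constant
    slope, intercept = np.polyfit(np.log(z), u, 1)
    u_star = k * slope                       # friction velocity u*
    z0 = np.exp(-intercept / slope)          # roughness length z0 (m)
    print(f"u* = {u_star:.2f} m/s, z0 = {z0:.4f} m")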
Abstract:
Trace processors rely on hierarchy, replication, and prediction to dramatically increase the execution speed of ordinary sequential programs. The authors describe how these processors will meet future technology demands.
Abstract:
Detailed Fourier line shape analysis has been performed on three different compositions of the composite matrix of Al-Si-Mg and SiC. The alloy composition in wt% is Al-7%Si, 0.35%Mg, 0.14%Fe and traces of copper and titanium (~0.01%), with SiC varying from 0 to 30 wt% in three steps, i.e., 0, 10 and 30 wt%. The line shift analysis has been performed by considering the 111, 200, 220, 311 and 222 reflections after estimating their relative shift. Peak asymmetry analysis has been performed considering the neighbouring 111 and 200 reflections, and Fourier line shape analysis has been performed considering the multiple orders 111 and 222, and 200 and 400 reflections. Combining these three analyses, it has been found that deformation stacking faults, both intrinsic (α′) and extrinsic (α″), are absent in this alloy system, whereas the deformation twin fault (β) is found to be positive and increases with increasing SiC concentration. Thus, like other Al-based alloys, this ternary alloy shows high stacking fault energy, and the addition of SiC introduces deformation twins that increase with its concentration in the deformed lattices.
Abstract:
Simulation is an important means of evaluating new microarchitectures. With the advent of multi-core (CMP) platforms, simulators are becoming larger and more complex. However, with the availability of CMPs with larger caches and higher operating frequencies, the wall clock time required for simulating an application has become comparatively shorter. Reducing this simulation time further is a great challenge, especially in the case of multi-threaded workloads, due to the indeterminacy introduced by the simultaneously executing threads. In this paper, we propose a technique for speeding up multi-core simulation. The models of the processor core and cache are replaced with functional models to achieve speedup. A timed Petri net model is used to estimate the execution time of the processor, and the memory access latencies are estimated using hit/miss information obtained from the functional model of the cache. This model can be used to predict the performance of data-parallel applications or multiprogramming workloads on CMP platforms with various cache hierarchies and a shared bus interconnect. The error in estimating the execution time of an application is within 6%. The speedup achieved averages between 2x and 4x over the cycle-accurate simulator.
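As a rough illustration of how a functional cache model can drive timing estimation, the Python sketch below uses a toy direct-mapped, tag-only cache and charges fixed (illustrative) hit and miss latencies on top of a fixed core CPI; the paper's timed Petri net model of the processor is not reproduced here.

    # Toy functional direct-mapped cache: tracks only tags, no data.
    class FunctionalCache:
        def __init__(self, num_sets=256, line_bytes=64):
            self.num_sets, self.line_bytes = num_sets, line_bytes
            self.tags = [None] * num_sets

        def access(self, addr):
            line = addr // self.line_bytes
            idx, tag = line % self.num_sets, line // self.num_sets
            hit = self.tags[idx] == tag
            self.tags[idx] = tag               # fill the line on a miss
            return hit

    HIT_LATENCY, MISS_LATENCY, CPI_CORE = 2, 100, 1    # illustrative cycles

    def estimate_cycles(address_trace, instructions):
        """Charge a fixed core CPI plus a latency per memory access."""
        cache = FunctionalCache()
        mem_cycles = sum(HIT_LATENCY if cache.access(a) else MISS_LATENCY
                         for a in address_trace)
        return instructions * CPI_CORE + mem_cycles

    # Tiny made-up trace: two misses, two hits, one miss -> 1000 + 304 cycles.
    print(estimate_cycles([0, 64, 0, 64, 4096], instructions=1000))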
Abstract:
Parallel execution of computational mechanics codes requires efficient mesh-partitioning techniques. These techniques divide the mesh into a specified number of submeshes of approximately the same size while minimising the number of interface nodes between the submeshes. This paper describes a new mesh-partitioning technique employing Genetic Algorithms. The proposed algorithm operates on the deduced graph (dual or nodal graph) of the given finite element mesh rather than directly on the mesh itself. The algorithm works by first constructing a coarse approximation of the graph using an automatic graph coarsening method. The coarse graph is partitioned and the results are interpolated onto the original graph to initialise an optimisation of the graph partition problem. In practice, a hierarchy of (usually more than two) graphs is used to obtain the final graph partition. The proposed partitioning algorithm is applied to graphs derived from unstructured finite element meshes describing practical engineering problems, and also to several example graphs related to finite element meshes given in the literature. The test results indicate that the proposed GA-based graph partitioning algorithm generates high-quality partitions that are superior to those produced by spectral and multilevel graph partitioning algorithms.
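The flavour of the GA optimisation step can be suggested by the toy Python sketch below, which bisects a small hand-made graph by minimising the edge cut plus an imbalance penalty; the coarsening/interpolation hierarchy described above is omitted, and the graph, population size and mutation rate are illustrative choices rather than the paper's settings.

    import random

    # Toy graph given as an edge list (a small mesh-like graph, invented here).
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (3, 4), (4, 5), (5, 6), (6, 3)]
    n = 7

    def fitness(part):
        """Edge cut plus a penalty for imbalance between the two submeshes."""
        cut = sum(1 for a, b in edges if part[a] != part[b])
        imbalance = abs(sum(part) - n / 2)
        return cut + 2 * imbalance

    def ga_bisect(pop_size=30, generations=200, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]             # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cx = random.randrange(1, n)            # one-point crossover
                child = a[:cx] + b[cx:]
                child = [1 - g if random.random() < p_mut else g for g in child]
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    print(ga_bisect())          # e.g. [0, 0, 0, 1, 1, 1, 1] with a small edge cut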
Abstract:
We have investigated the local electronic properties and the spatially resolved magnetoresistance of a nanostructured film of a colossal magnetoresistive (CMR) material by local conductance mapping (LCMAP) using a variable-temperature Scanning Tunneling Microscope (STM) operating in a magnetic field. The nanostructured thin films (thickness ≈500 nm) of the CMR material La0.67Sr0.33MnO3 (LSMO) on quartz substrates were prepared using a chemical solution deposition (CSD) process. The CSD-grown films were imaged by both STM and atomic force microscopy (AFM). Due to the presence of a large number of grain boundaries (GBs), these films show low-field magnetoresistance (LFMR) which increases at lower temperatures. The measurement of spatially resolved electronic properties reveals the extent of variation of the density of states (DOS) at and close to the Fermi level (EF) across the grain boundaries and its role in the electrical resistance of the GBs. Measurement of the local conductance maps as a function of magnetic field as well as temperature reveals that the LFMR occurs at the GBs. While it was known that LFMR in CMR films originates from the GBs, this is the first investigation that maps the local electronic properties at a GB in a magnetic field and traces the origin of the LFMR to the GB.
Abstract:
Very Long Instruction Word (VLIW) architectures exploit instruction-level parallelism (ILP) with the help of the compiler to achieve higher instruction throughput with minimal hardware. However, control and data dependencies between operations limit the available ILP, which not only hinders the scalability of VLIW architectures but also results in code size expansion. Although speculation and predicated execution mitigate the ILP limitations due to control dependencies to a certain extent, they increase hardware cost and exacerbate code size expansion. Simultaneous multistreaming (SMS) can significantly improve operation throughput by allowing interleaved execution of operations from multiple instruction streams. In this paper we study SMS for VLIW architectures and quantify the benefits associated with it using a case study of the MPEG-2 video decoder. We also propose the notion of virtual resources for VLIW architectures, which decouple architectural resources (resources exposed to the compiler) from microarchitectural resources, to limit code size expansion. Our results for a VLIW architecture demonstrate that: (1) SMS delivers much higher throughput than that achieved by speculation and predicated execution; (2) the increase in performance due to the addition of speculation and predicated execution support over SMS averages around 12%, a minor increase that might not warrant the additional hardware complexity involved; and (3) the notion of virtual resources is very effective in reducing no-operations (NOPs) and consequently reduces code size with little or no impact on performance.
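A toy Python sketch of the basic SMS idea follows: operations from two instruction streams are interleaved into shared issue slots, and the resulting NOP count is compared against packing each stream alone. The issue width, the streams and their issue groups are hypothetical, and details such as cross-stream resource conflicts and the virtual-resource mapping are not modelled.

    # Each stream is a list of "issue groups": operations that are mutually
    # independent and ready in the same cycle (all values are illustrative).
    stream_a = [["ld", "add"], ["mul"], ["add", "st"]]
    stream_b = [["add"], ["ld", "add", "mul"], ["st"]]
    ISSUE_WIDTH = 4

    def pack_single(stream):
        """One instruction word per group; unused slots become NOPs."""
        return [group + ["nop"] * (ISSUE_WIDTH - len(group)) for group in stream]

    def pack_sms(*streams):
        """Interleave ready groups from several streams into shared issue slots."""
        words, queues = [], [list(s) for s in streams]
        while any(queues):
            word = []
            for q in queues:                   # at most one group per stream/word
                if q and len(word) + len(q[0]) <= ISSUE_WIDTH:
                    word += q.pop(0)
            words.append(word + ["nop"] * (ISSUE_WIDTH - len(word)))
        return words

    def nops(words):
        return sum(w.count("nop") for w in words)

    single = pack_single(stream_a) + pack_single(stream_b)
    print(nops(single), nops(pack_sms(stream_a, stream_b)))   # 14 vs 2 NOPs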
Abstract:
Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. A sensor network operating system is a kind of embedded operating system, but unlike a typical embedded operating system, it may not be real-time, and it is constrained by limited memory and energy. Most sensor network operating systems are based on an event-driven approach, which is efficient in terms of time and space. Also, this approach does not require a separate stack for each execution context. However, with this model it is difficult to implement long-running tasks, such as cryptographic operations. A thread-based computation requires a separate stack for each execution context and is less efficient in terms of time and space. In this paper, we propose a thread-based execution model that uses only a fixed number of stacks; the number of stacks at each priority level is fixed. It minimises the stack requirement for a multi-threading environment and at the same time provides ease of programming. We give an implementation of this model in Contiki OS by completely separating the thread implementation from the protothread implementation. We have tested our OS by implementing a clock synchronization protocol using it.
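The per-priority stack constraint can be sketched as follows (in Python, purely as a model; the actual implementation is in C within Contiki OS): a thread is dispatched only when a stack at its priority level is free, and the stack returns to the pool when the thread finishes. The stack counts and thread names are hypothetical.

    from collections import deque

    class FixedStackScheduler:
        def __init__(self, stacks_per_priority):
            self.free = dict(stacks_per_priority)      # free stacks per priority
            self.ready = {p: deque() for p in stacks_per_priority}
            self.running = []

        def spawn(self, prio, name):
            self.ready[prio].append(name)

        def dispatch(self):
            """Assign free stacks to ready threads, highest priority first."""
            for prio in sorted(self.ready):
                while self.ready[prio] and self.free[prio] > 0:
                    self.free[prio] -= 1               # thread now owns a stack
                    self.running.append((prio, self.ready[prio].popleft()))

        def finish(self, prio, name):
            self.running.remove((prio, name))
            self.free[prio] += 1                       # stack returns to the pool
            self.dispatch()

    s = FixedStackScheduler({0: 1, 1: 2})
    s.spawn(1, "aes_encrypt"); s.spawn(1, "route_update"); s.spawn(1, "log_flush")
    s.dispatch()
    print(s.running)            # only two low-priority threads run; the third waits
    s.finish(1, "aes_encrypt")
    print(s.running)            # the waiting thread now gets the freed stack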
Abstract:
As computational Grids are increasingly used for executing long-running multi-phase parallel applications, it is important to develop efficient rescheduling frameworks that adapt application execution in response to resource and application dynamics. In this paper, three strategies or algorithms have been developed for deciding when and where to reschedule parallel applications that execute on multi-cluster Grids. The algorithms derive rescheduling plans that consist of potential points in the application execution at which to reschedule, together with the schedules of resources for application execution between two consecutive rescheduling points. Using a large number of simulations, it is shown that the rescheduling plans developed by the algorithms can lead to large decreases in application execution times when compared to executions without rescheduling on dynamic Grid resources. The rescheduling plans generated by the algorithms are also shown to be competitive with the near-optimal plans generated by brute-force methods. Of the algorithms, the genetic algorithm yielded the most efficient rescheduling plans, with 9-12% smaller average execution times than the other algorithms.
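A minimal sketch of how a rescheduling plan might be evaluated is given below (Python): a plan is a list of segments, each covering some application phases on a cluster with a given processing rate, and every rescheduling point is charged a fixed migration overhead. The phase sizes, rates and overhead are invented numbers, and the paper's actual cost models and algorithms are not reproduced.

    # Hypothetical work (arbitrary units) for each application phase, and a fixed
    # checkpoint/restart overhead charged at every rescheduling point.
    phases = [100.0, 250.0, 80.0]
    RESCHEDULE_COST = 15.0

    def plan_time(plan):
        """plan: list of (phases_covered, processing_rate) segments."""
        total, first = 0.0, True
        for covered, rate in plan:
            if not first:
                total += RESCHEDULE_COST       # pay the migration overhead
            total += sum(phases[i] for i in covered) / rate
            first = False
        return total

    # Stay on one slow cluster vs. rescheduling onto a faster one after phase 0.
    stay = [((0, 1, 2), 10.0)]
    move = [((0,), 10.0), ((1, 2), 25.0)]
    print(plan_time(stay), plan_time(move))    # 43.0 vs 38.2 time units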
Abstract:
The Java Memory Model (JMM) provides a semantics of Java multithreading for any implementation platform. The JMM is defined in a declarative fashion, with an allowed program execution being defined in terms of the existence of "commit sequences" (roughly, the order in which the actions in the execution are committed). In this work, we develop OpMM, an operational under-approximation of the JMM. The immediate motivation of this work lies in integrating a formal specification of the JMM with software model checkers. We show how our operational memory model description can be integrated into a Java PathFinder (JPF) style model checker for Java programs.
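OpMM itself is not reproduced here, but the flavour of an operational, explicit-state exploration can be conveyed by the toy Python sketch below, which enumerates the sequentially consistent interleavings of two threads' actions; every such interleaving is a legal JMM execution, so this is a (much coarser) under-approximation of the behaviours the JMM permits.

    # Two toy threads acting on shared variables x and y (both initially 0):
    #   T1: x = 1; r1 = y          T2: y = 1; r2 = x
    T1 = [("write", "x", 1), ("read", "y", "r1")]
    T2 = [("write", "y", 1), ("read", "x", "r2")]

    def interleavings(a, b):
        """All orderings of a and b that preserve each thread's program order."""
        if not a:
            yield list(b); return
        if not b:
            yield list(a); return
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

    outcomes = set()
    for trace in interleavings(T1, T2):
        mem, regs = {"x": 0, "y": 0}, {}
        for op, var, arg in trace:
            if op == "write":
                mem[var] = arg
            else:
                regs[arg] = mem[var]           # read: copy shared value into register
        outcomes.add((regs["r1"], regs["r2"]))

    # Prints [(0, 1), (1, 0), (1, 1)]: sequential consistency never yields (0, 0),
    # although the JMM additionally allows (0, 0) for this racy program -- hence
    # an under-approximation.
    print(sorted(outcomes))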
Abstract:
Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity-error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria for when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
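In the spirit of the "anorexic" reduction mentioned above, the Python sketch below recolours each point of a tiny hypothetical plan diagram with a replacement plan whenever that plan's cost at the point stays within a 20% threshold of the optimal cost; the costs, plan names and candidate set are invented, and the paper's actual reduction and robustness criteria are not reproduced.

    # For each sampled point of the parameter space: the optimizer's chosen plan
    # and the costs of all plans at that point (all values are illustrative).
    points = [
        {"optimal": "P1", "costs": {"P1": 10.0, "P2": 11.5, "P3": 30.0}},
        {"optimal": "P1", "costs": {"P1": 12.0, "P2": 12.5, "P3": 25.0}},
        {"optimal": "P3", "costs": {"P1": 21.0, "P2": 15.5, "P3": 16.0}},
    ]
    LAMBDA = 0.20      # allowed cost increase when swallowing a plan

    def reduce_diagram(points, candidates=("P2",)):
        """Recolour a point with a candidate plan if its cost there stays within
        (1 + LAMBDA) of the optimal plan's cost; otherwise keep the original."""
        reduced = []
        for pt in points:
            best = pt["optimal"]
            for cand in candidates:
                if pt["costs"][cand] <= (1 + LAMBDA) * pt["costs"][best]:
                    best = cand
                    break
            reduced.append(best)
        return reduced

    print(reduce_diagram(points))    # ['P2', 'P2', 'P2']: the diagram collapses to one plan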
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
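A minimal Python sketch of the grid-sampling plus nearest-neighbour inference idea follows; the "optimizer" here is a stand-in function returning a plan label, whereas a real implementation would call the optimizer's API, and the resolution, step size and plan boundaries are all made up.

    import numpy as np

    def true_plan(sel_x, sel_y):
        """Stand-in for an optimizer call returning a plan label for one query
        point; a real implementation would invoke the optimizer's API instead."""
        return 0 if sel_x + sel_y < 0.8 else (1 if sel_x < 0.5 else 2)

    RES, STEP = 100, 10                       # full resolution vs. sampling grid
    grid = np.linspace(0.0, 1.0, RES)

    # Optimize only every STEP-th point in each dimension (a grid sample) ...
    sampled = {(i, j): true_plan(grid[i], grid[j])
               for i in range(0, RES, STEP) for j in range(0, RES, STEP)}

    # ... and infer the rest by nearest-neighbour classification over the samples.
    sample_keys = list(sampled)
    def nearest(i, j):
        return sampled[min(sample_keys,
                           key=lambda k: (k[0] - i) ** 2 + (k[1] - j) ** 2)]

    approx = [[nearest(i, j) for j in range(RES)] for i in range(RES)]
    exact = [[true_plan(grid[i], grid[j]) for j in range(RES)] for i in range(RES)]
    errors = sum(a != e for ra, re in zip(approx, exact) for a, e in zip(ra, re))
    print(f"optimizer calls: {len(sampled)} / {RES * RES}, misclassified points: {errors}")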
Abstract:
Computational grids are increasingly being used for executing large multi-component scientific applications. The most widely reported advantages of application execution on grids are the performance benefits, in terms of speed, problem size or quality of solution, due to the increased number of processors. We explore the possibility of improved performance on grids without increasing the application’s processor space. For this, we consider grids with multiple batch systems. We explore the challenges involved in, and the advantages of, executing long-running multi-component applications on multiple batch sites, with the popular multi-component climate simulation application CCSM as the motivation. We have performed extensive simulation studies to estimate the single- and multi-site execution rates of the applications for different system characteristics. Our experiments show that, in many cases, executions over multiple batch sites can have better execution rates than a single-site execution.
Abstract:
Soft errors have become one of the major areas of attention with device scaling and large-scale integration. Many variants of the superscalar architecture have been proposed, focusing on program re-execution, thread re-execution and instruction re-execution. In this paper we propose a fault-tolerant micro-architecture of a pipelined RISC processor. The proposed architecture, Floating Resources Extended Pipeline (FREP), re-executes instructions using extended pipeline stages. The instructions are re-executed by a hybrid architecture with a suitable combination of space and time redundancy.
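A generic illustration of time-redundant instruction re-execution (not the FREP microarchitecture itself) is sketched below in Python: each instruction's result is computed twice, the results are compared, and a mismatch triggers a retry. The fault model, fault rate and retry limit are hypothetical.

    import random

    def alu(op, a, b, fault_rate=0.0):
        """Toy ALU; with probability fault_rate a transient fault flips one bit."""
        r = {"add": a + b, "sub": a - b, "mul": a * b}[op]
        if random.random() < fault_rate:
            r ^= 1 << random.randrange(8)
        return r

    def execute_with_reexecution(op, a, b, fault_rate=0.05, max_retries=5):
        for _ in range(max_retries):
            r1 = alu(op, a, b, fault_rate)     # primary execution
            r2 = alu(op, a, b, fault_rate)     # re-execution in a later slot
            if r1 == r2:                       # results agree: accept the value
                return r1
        raise RuntimeError("persistent mismatch: possible permanent fault")

    print(execute_with_reexecution("add", 40, 2))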