27 results for Year 2000 date conversion (Computer systems)


Relevance: 100.00%

Abstract:

In the world of high performance computing, considerable effort has been devoted to accelerating Numerical Linear Algebra (NLA) kernels such as QR Decomposition (QRD), with the added advantages of reconfigurability and scalability. While the popular custom hardware solution, the systolic array, can deliver high performance, it is not scalable and hence not commercially viable. In this paper, we show how systolic solutions of QRD can be realized efficiently on REDEFINE, a scalable runtime-reconfigurable hardware platform. We propose various enhancements to REDEFINE to meet the custom needs of accelerating NLA kernels. We further perform a design space exploration of the proposed solution for an arbitrary problem of size n × n, and determine the right size of the sub-array in accordance with the optimal pipeline depth of the core execution units and the number of such units to be used per sub-array.
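
Systolic QR arrays classically implement the decomposition with Givens rotations, each of which zeroes one sub-diagonal entry and maps onto one processing element of a triangular array. The following is a minimal sketch of that standard Givens-based QRD kernel, not of the REDEFINE realization described in the paper:

```python
import numpy as np

def givens_qr(A):
    """QR decomposition via Givens rotations, the scheme classically
    mapped onto triangular systolic arrays (illustrative only)."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):                  # sweep the columns left to right
        for i in range(m - 1, j, -1):   # zero column j from the bottom up
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r         # rotation annihilating R[i, j]
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
    return Q, R

A = np.random.rand(4, 4)
Q, R = givens_qr(A)
assert np.allclose(Q @ R, A)            # R is upper triangular, Q orthogonal
```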

Relevance: 100.00%

Abstract:

In this paper we report on the outcomes of a research and demonstration project on human intrusion detection in a large secure space using an ad hoc wireless sensor network. The project has been a unique experience in collaborative research, involving ten investigators (with expertise in areas such as sensors, circuits, computer systems, communication and networking, signal processing, and security) executing a large funded project that spanned three to four years. We report on the specific engineering solution that was developed: the various architectural choices and the associated designs. In addition to yielding a demonstrable system, the problems that arose have given rise to a large amount of basic research in areas such as geographical packet routing, distributed statistical detection, sensors and associated circuits, a low-power adaptive micro-radio, and power-optimising embedded systems software. We provide an overview of the research results obtained.
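
As a purely illustrative aside on the distributed statistical detection theme, a textbook k-out-of-n fusion rule (not the project's actual detection algorithm) combines binary local decisions as follows:

```python
# Illustrative k-out-of-n decision fusion: each sensor node makes a
# local binary detection; the fusion centre declares an intrusion
# when at least k of the n nodes agree.
def fuse_decisions(local_decisions, k):
    return sum(local_decisions) >= k

# Example: 10 nodes, alarm when any 3 report a detection.
print(fuse_decisions([0, 1, 0, 1, 0, 0, 1, 0, 0, 0], k=3))  # True
```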

Relevance: 100.00%

Abstract:

Various logical formalisms with the freeze quantifier have recently been considered to model computer systems, even though this powerful mechanism often leads to undecidability. In this article, we study a linear-time temporal logic with past-time operators in which the freeze operator is used only to express that some value from an infinite set is repeated in the future or in the past. This restriction is inspired by recent work on spatio-temporal logics that suggests such a restricted use of the freeze operator. We show decidability of finitary and infinitary satisfiability by proposing a symbolic representation of models and reducing to the verification of temporal properties in Petri nets. This is a surprising result in view of the expressive power of the logic: it is closed under negation, contains future-time and past-time temporal operators, and can express the nonce property and its negation; these ingredients are known to lead to undecidability with a more liberal use of the freeze quantifier. The article also develops the relationships between temporal logics with the freeze operator and counter automata, as well as reductions into first-order logics over data words.
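
For illustration, in the standard freeze-quantifier notation from the literature (assumed here rather than quoted from the article), where \downarrow_1 stores the current data value in register 1 and \uparrow_1 tests equality against it, the nonce property, i.e. that no data value is ever repeated, reads:

```latex
% Nonce property: once seen, a data value never occurs again.
\mathrm{G}\,\bigl(\downarrow_1 \mathrm{X}\,\mathrm{G}\,\neg{\uparrow_1}\bigr)
```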

Relevance: 100.00%

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, that is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism, but also control-flow-dominated scalar regions that affect the program's execution time. Today's computer systems offer tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). An approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can therefore significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs for synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input to identify data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels; the problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the two devices proceeds synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and an edge-splitting strategy. Our compiler thus automatically handles the composition of kernels, the mapping of kernels to CPU and GPU, scheduling, and the insertion of required data transfers. Experimental evaluation on a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native MATLAB execution.
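
The CPU/GPU mapping problem can be pictured with a toy cost model: place each kernel on the device where its estimated execution time plus the data-transfer cost induced by its placement is smallest. This greedy sketch is our own illustration, with hypothetical names and costs; the paper's clustering formulation and mapping heuristics are more sophisticated:

```python
# Toy greedy mapping of kernels to CPU or GPU, illustrating the
# execution-time vs. data-transfer trade-off the mapping must balance.
def map_kernels(kernels, edges, cpu_t, gpu_t, xfer):
    """kernels: ids in topological order; edges: (producer, consumer)
    data dependences; cpu_t/gpu_t: per-kernel time estimates; xfer:
    per-edge transfer cost paid when the endpoints land on different
    devices."""
    placement = {}
    for k in kernels:
        cost = {}
        for dev, t in (("cpu", cpu_t[k]), ("gpu", gpu_t[k])):
            moves = sum(xfer[(p, c)] for p, c in edges
                        if c == k and placement.get(p) not in (None, dev))
            cost[dev] = t + moves
        placement[k] = min(cost, key=cost.get)
    return placement

placement = map_kernels(
    kernels=["k1", "k2", "k3"],
    edges=[("k1", "k2"), ("k2", "k3")],
    cpu_t={"k1": 5, "k2": 50, "k3": 4},
    gpu_t={"k1": 6, "k2": 5, "k3": 6},
    xfer={("k1", "k2"): 2, ("k2", "k3"): 2})
print(placement)  # e.g. {'k1': 'cpu', 'k2': 'gpu', 'k3': 'cpu'}
```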

Relevance: 100.00%

Abstract:

The presence of software bloat in large, flexible software systems can hurt energy efficiency, yet identifying and mitigating bloat is fairly effort-intensive. To direct such efforts where the potential for energy savings is substantial, we investigate the impact of bloat on power consumption under different conditions. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECPower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak-power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy-efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and the non-bloated resources have low energy proportionality; equi-performance power savings are highest when the bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between the resource pressure caused by bloat and its energy-efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact, and instead provides a full-systems approach for reasoning about its implications.
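
A simple linear power model illustrates why energy proportionality governs equi-performance savings. In the sketch below (our own toy model, not the paper's analytical model), removing bloat lowers the utilisation a resource needs at fixed throughput, and the power saved scales with how energy-proportional that resource is:

```python
# Toy linear power model for equi-performance power reduction.
def power(util, p_peak, proportionality):
    """proportionality = 1.0 means fully energy-proportional (zero
    idle power); 0.0 means power is flat regardless of load."""
    p_idle = p_peak * (1.0 - proportionality)
    return p_idle + (p_peak - p_idle) * util

def equi_perf_saving(util_bloated, util_lean, p_peak, proportionality):
    return (power(util_bloated, p_peak, proportionality)
            - power(util_lean, p_peak, proportionality))

# Same throughput needs 80% utilisation with bloat, 50% without.
for ep in (0.2, 0.9):
    print(ep, equi_perf_saving(0.8, 0.5, p_peak=200.0, proportionality=ep))
# Savings are larger for the highly proportional resource (ep = 0.9),
# matching the trend reported in the abstract.
```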

Relevance: 100.00%

Abstract:

The seismic hazard value of any region depends on three components: probable earthquake locations, maximum earthquake magnitude, and the attenuation equation. This paper presents a representative way of estimating these three components from region-specific seismotectonic features. Rupture Based Seismic Hazard Analysis (RBSHA), given by Anbazhagan et al. (2011), is used to determine probable future earthquake locations. The approach is verified on earthquake data for the Bhuj region: probable earthquake locations identified from data up to the year 2000 match well with the locations reported after 2000. Coimbatore City is then selected as the study area, to develop a representative seismic hazard map using the RBSHA approach and to compare it with a deterministic seismic hazard analysis. Probable future earthquake zones for Coimbatore are located by considering the rupture phenomenon as per the energy release theory discussed by Anbazhagan et al. (2011). The rupture character of the region is established by estimating the subsurface rupture length of each source and normalizing it by the length of that source; this average rupture length relative to total source length is found to be similar for most sources in the region, and is termed the rupture character of the region. Maximum magnitudes of probable zones are estimated from nearby seismic sources and the established regional rupture character. Representative ground motion prediction equations (GMPEs) for the study area are selected through an efficacy test using the average log-likelihood value (LLH) as a ranking estimator and by considering the isoseismal map. A new seismic hazard map of Coimbatore is developed using these regional representative parameters: probable earthquake locations, maximum earthquake magnitudes, and the best-suited GMPEs. The new hazard map gives acceleration values at bedrock for the maximum possible earthquakes. These results are compared with a deterministic seismic hazard map and with recently published probabilistic seismic hazard values.
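
As an illustration of the normalization step, the sketch below computes a per-source ratio of estimated subsurface rupture length to total source length. The Wells and Coppersmith (1994) all-slip-type relation is used here as an assumed magnitude-to-rupture-length model, and the fault values are hypothetical; the paper's exact relation and source data may differ:

```python
# Illustrative "rupture character" computation: subsurface rupture
# length expected for each source, normalised by the source length.
def rupture_ratio(magnitude, source_length_km):
    # Wells & Coppersmith (1994), all slip types: log10(RLD) = -2.44 + 0.59 M
    rld_km = 10 ** (-2.44 + 0.59 * magnitude)
    return rld_km / source_length_km

# Hypothetical sources: (assigned magnitude, mapped fault length in km).
sources = {"F1": (6.0, 80.0), "F2": (5.8, 60.0), "F3": (6.1, 95.0)}
ratios = {name: round(rupture_ratio(m, L), 3)
          for name, (m, L) in sources.items()}
print(ratios)
# If the ratios cluster around a common value, that value serves as
# the regional rupture character.
```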

Relevance: 100.00%

Abstract:

The impact of future climate change on the glaciers of the Karakoram and Himalaya (KH) is investigated using CMIP5 multi-model temperature and precipitation projections, together with a relationship between glacial accumulation-area ratio (AAR) and mass balance developed for the region from the last 30 to 40 years of observational data. We estimate the current (year 2000) glacial mass balance for the entire KH region to be -6.6 ± 1 Gt a⁻¹, which decreases about sixfold to -35 ± 2 Gt a⁻¹ by the 2080s under the high-emission scenario RCP8.5. Under the low-emission scenario RCP2.6, however, the glacial mass loss only doubles, to -12 ± 2 Gt a⁻¹ by the 2080s. We also find that 10.6 % and 27 % of the glaciers could face 'eventual disappearance' by the end of the century under RCP2.6 and RCP8.5 respectively, underscoring the threat to water resources under high-emission scenarios.
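
A toy version of the projection machinery, assuming a linear AAR-mass-balance relation fitted to past observations; all numbers below are invented for illustration, and the paper's fitted relation and CMIP5-driven AAR projections are not reproduced:

```python
import numpy as np

# Fit a linear AAR -> specific mass balance relation to (made-up)
# observations, then drive it with a hypothetical projected AAR.
aar_obs = np.array([0.60, 0.55, 0.50, 0.44])       # observed AAR
mb_obs = np.array([0.1, -0.2, -0.5, -0.9])         # balance, m w.e. a^-1
slope, intercept = np.polyfit(aar_obs, mb_obs, 1)  # linear AAR-MB relation

aar_2080s = 0.30                                   # hypothetical projected AAR
print(slope * aar_2080s + intercept)               # projected mass balance
```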

Relevance: 100.00%

Abstract:

Coarse-Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block Matrix Multiplication and block LU Decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on the communication requirements and memory sizes of a CE. High-performance custom computations common to NLA kernels are supported through custom function units (CFUs) in the CEs. We present results that justify the merits of such CFUs.
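
The flavour of such a communication analysis can be seen in a short sketch: blocked matrix multiplication moves on the order of 2n³/b words for block size b, so larger CE-local blocks trade on-chip memory for NoC traffic. This is a generic illustration, not the paper's CGRA-specific bounds:

```python
import numpy as np

# Blocked matrix multiplication with a count of the words a compute
# element must fetch, illustrating the communication/memory trade-off.
def blocked_matmul(A, B, b):
    n = A.shape[0]
    C = np.zeros((n, n))
    words_moved = 0
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                words_moved += 2 * b * b   # fetch one b x b block of A and of B
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C, words_moved

n, b = 8, 2
A, B = np.random.rand(n, n), np.random.rand(n, n)
C, moved = blocked_matmul(A, B, b)
assert np.allclose(C, A @ B)
print(moved)  # 2*b^2 * (n/b)^3 = 2*n^3/b words: larger blocks, less traffic
```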

Relevance: 100.00%

Abstract:

In this paper we present HyperCell, a reconfigurable datapath for Instruction Extensions (IEs). HyperCell comprises an array of compute units laid over a switch network. We present an IE synthesis methodology that enables post-silicon realization of IE datapaths on HyperCell, optimally exploiting HyperCell's hardware resources to enable software-pipelined execution of IEs. Exploiting temporal reuse of data within HyperCell significantly reduces its input/output bandwidth requirements.

Relevance: 40.00%

Abstract:

Motivated by situations arising in manufacturing systems and communication networks, we study the problem of maximizing profit in a queueing system with a linear reward and cost structure and a choice of selecting streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize profit by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem: the first (the PUT problem) maximizes the profit per unit time, whereas the second (the PAC problem) maximizes the profit per accepted customer. In each formulation we explore three separate problems: in the first, the constraints come from bounding the utilization of an infinite-capacity server; in the second, from bounding the mean queue length of the same queue; and in the third, the finite capacity of the buffer is reflected as a set of constraints. The problems bounding the utilization factor reduce to essentially linear programs, while the problems with mean-queue-length constraints are linear programs when service times are exponentially distributed. The problems modeling the finite-capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems; in particular, the PUT solutions always make the server work at a utilization factor no less than that of the PAC solutions.
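
For the utilization-bounded PUT problem, which the abstract notes is essentially a linear program, one plausible rendering is sketched below. This is our reading, with hypothetical rates and costs; the paper's exact reward and cost structure is not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

# Choose stationary probabilities pi of the modulating chain to
# maximise profit per unit time subject to a utilisation bound.
lam = np.array([2.0, 5.0, 9.0])    # Poisson rate of each arrival stream
reward = 1.0                       # reward per accepted customer
cost = np.array([0.5, 2.0, 7.0])   # per-unit-time cost of using stream i
ES = 0.1                           # mean service time E[S]
rho_max = 0.7                      # utilisation bound

c = -(reward * lam - cost)         # linprog minimises, so negate profit
res = linprog(c,
              A_ub=[lam * ES], b_ub=[rho_max],  # sum_i pi_i lam_i E[S] <= rho_max
              A_eq=[np.ones(3)], b_eq=[1.0],    # pi is a probability vector
              bounds=[(0, 1)] * 3)
print(res.x, -res.fun)             # optimal pi and profit per unit time
```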

Relevance: 40.00%

Abstract:

Recently, Brownian networks have emerged as an effective stochastic model for approximating multiclass queueing networks with dynamic scheduling capability under conditions of balanced heavy loading. This paper is a tutorial introduction to dynamic scheduling in manufacturing systems using Brownian networks. The article starts with motivational examples, then reviews the relevant weak-convergence concepts and describes the limiting behaviour of queueing systems under heavy traffic. The Brownian approximation procedure is discussed in detail, and generic case studies illustrate the procedure and demonstrate its effectiveness. The paper emphasizes results rather than proofs, and aspires to give the reader an up-to-date understanding of dynamic scheduling based on Brownian approximations.
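
The heavy-traffic idea underlying the approximation can be illustrated in a few lines: the scaled queue-length process of a heavily loaded station behaves like a one-dimensional reflected Brownian motion, simulated here with a discrete reflection (Lindley-type) recursion. Parameters are illustrative, not taken from the tutorial:

```python
import numpy as np

# Simulate reflected Brownian motion with negative drift, the
# heavy-traffic limit of the queue-length process at a single station.
rng = np.random.default_rng(0)
drift, sigma, dt, T = -0.5, 1.0, 0.01, 200.0
n = int(T / dt)

x = 0.0
path = np.empty(n)
for i in range(n):
    x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x = max(x, 0.0)   # reflection at zero keeps the queue non-negative
    path[i] = x
print(path.mean())    # long-run mean near sigma**2 / (2*abs(drift)) = 1
```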