226 results for size estimation

at Indian Institute of Science - Bangalore - India


Relevance: 100.00%

Publisher:

Abstract:

It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning cache among virtual machines or reducing leakage power dissipated in an over-allocated cache by switching it OFF. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application due to 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the tagged WSS (TWSS) estimation method. We demonstrate the use of TWSS to switch OFF the over-allocated cache ways in Static and Dynamic Non-Uniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, this approach scales better with the number of cores present on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings compared to the AMAL and CMR heuristics on SNUCA, respectively.
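As a toy illustration of the idea behind tag-based working-set estimation (this is not the paper's hardware mechanism; the block size, trace, and window are all assumptions), the WSS of an address trace can be approximated by counting distinct cache-block tags touched per sampling window:

```python
# Hypothetical sketch: estimate working set size from an address trace
# by counting distinct cache-block tags per sampling window.

BLOCK_SIZE = 64  # bytes per cache block (assumed)

def wss_blocks(trace, window):
    """Return per-window working set sizes, in cache blocks."""
    sizes = []
    seen = set()
    for i, addr in enumerate(trace, 1):
        seen.add(addr // BLOCK_SIZE)   # block tag
        if i % window == 0:
            sizes.append(len(seen))
            seen.clear()
    if seen:
        sizes.append(len(seen))
    return sizes

# Example: a loop that repeatedly sweeps a 4 KiB array touches
# 4096 / 64 = 64 distinct blocks, however many sweeps it makes.
trace = [base for _ in range(8) for base in range(0, 4096, 8)]
print(wss_blocks(trace, window=len(trace)))  # -> [64]
```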

Relevance: 40.00%

Publisher:

Abstract:

The size of the shear transformation zone (STZ) that initiates the elastic-to-plastic transition in a Zr-based bulk metallic glass was estimated by conducting a statistical analysis of the first pop-in event during spherical nanoindentation. A series of experiments led to a successful description of the distribution of shear strength for the transition and its dependence on the loading rate. From the activation volume determined by the statistical analysis, the STZ size was estimated based on a cooperative shearing model. (C) 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
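For illustration only, the statistical route from rate-dependent pop-in strength to an STZ size estimate can be sketched as below; the apparent-activation-volume relation V* = kT · Δln(rate)/Δτ, the atomic volume, and all numerical values are assumptions, not the paper's data:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def activation_volume(rates, strengths, T=300.0):
    """Apparent activation volume V* = kB*T * d(ln rate)/d(tau),
    estimated from two (loading rate, mean shear strength) points.
    Illustrative two-point finite difference only."""
    (r1, t1), (r2, t2) = zip(rates, strengths)
    return kB * T * math.log(r2 / r1) / (t2 - t1)

# Hypothetical numbers: strength rises 0.1 GPa per decade of loading rate.
V = activation_volume(rates=(1e-2, 1e-1), strengths=(2.0e9, 2.1e9))
atoms = V / 1.7e-29  # assumed atomic volume ~0.017 nm^3 per atom
```

With these made-up inputs the sketch yields an activation volume of order 1e-28 m^3, i.e. a handful of atoms, the kind of scale such analyses report.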

Relevance: 40.00%

Publisher:

Abstract:

This article presents the details of the estimation of fracture parameters for high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization of the ingredients of HSC, HSC1 and UHSC are provided. Experiments have been carried out on beams made of HSC, HSC1 and UHSC considering various sizes and notch depths. Fracture characteristics such as size-independent fracture energy (G_f), size of the fracture process zone (C_f), fracture toughness (K_IC) and crack tip opening displacement (CTODc) have been estimated from the experimental observations. From the studies, it is observed that (i) UHSC has high fracture energy and ductility in spite of having a very low value of C_f; (ii) UHSC is relatively much more homogeneous than the other concretes because of the absence of coarse aggregates and the presence of well-graded smaller-size particles; (iii) the critical SIF (K_IC) values increase with beam depth and decrease with notch depth; there is a significant increase in fracture toughness and CTODc, about 7 times in HSC1 and about 10 times in UHSC compared to those in HSC; (iv) for a notch-to-depth ratio of 0.1, Bazant's size effect model slightly overestimates the maximum failure loads compared to the experimental observations, while Karihaloo's model slightly underestimates them. For notch-to-depth ratios from 0.2 to 0.4 in the case of UHSC, both size effect models predict maximum failure loads similar to the corresponding experimental values.
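Bazant's size effect law mentioned in (iv) predicts a nominal strength sigma_N = B*f_t / sqrt(1 + D/D0); a minimal sketch with hypothetical fitted parameters (B*f_t and D0 are not values from this study):

```python
import math

def bazant_nominal_strength(D, B_ft, D0):
    """Bazant's size effect law: sigma_N = B*f_t / sqrt(1 + D/D0).
    D: beam depth; B_ft and D0: parameters fitted to test data
    (hypothetical values here)."""
    return B_ft / math.sqrt(1.0 + D / D0)

B_ft, D0 = 10.0, 100.0   # MPa, mm (assumed)
for D in (100, 200, 400):
    print(D, round(bazant_nominal_strength(D, B_ft, D0), 2))
# -> 100 7.07
#    200 5.77
#    400 4.47  (strength falls as beam depth grows)
```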

Relevance: 30.00%

Publisher:

Abstract:

This paper presents the site classification of the Bangalore Mahanagar Palike (BMP) area using geophysical data and the evaluation of spectral acceleration at ground level using a probabilistic approach. Site classification has been carried out using experimental data from the shallow geophysical method of Multichannel Analysis of Surface Waves (MASW). One-dimensional (1-D) MASW surveys have been carried out at 58 locations and the respective velocity profiles obtained. The average shear wave velocity over 30 m depth (Vs(30)) has been calculated and used for the site classification of the BMP area as per NEHRP (National Earthquake Hazards Reduction Program). Based on the Vs(30) values, the major part of the BMP area can be classified as site class D and site class C. A smaller portion of the study area, in and around Lalbagh Park, is classified as site class B. Further, probabilistic seismic hazard analysis has been carried out to map the seismic hazard in terms of spectral acceleration (S-a) at rock and ground level, considering the site classes and the six identified seismogenic sources. The mean annual rate of exceedance and the cumulative probability hazard curve for S-a have been generated. The quantified hazard values in terms of spectral acceleration for short and long periods are mapped for rock and site classes C and D with 10% probability of exceedance in 50 years on a grid size of 0.5 km. In addition, the Uniform Hazard Response Spectrum (UHRS) at surface level has been developed for 5% damping and 10% probability of exceedance in 50 years for rock and site classes C and D. These spectral accelerations and uniform hazard spectra can be used to assess the design force for important structures and to develop the design spectrum.
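The Vs(30) computation and NEHRP classification described above can be sketched as follows; the layer profile is hypothetical, and the class boundaries are the standard NEHRP Vs30 limits:

```python
def vs30(thicknesses, velocities):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / Vs_i), layers truncated at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        h = min(h, 30.0 - depth)
        travel_time += h / v
        depth += h
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def nehrp_class(v):
    """NEHRP site class from Vs30 (m/s)."""
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"

# Hypothetical profile: 5 m at 200 m/s, 10 m at 350 m/s, rest at 600 m/s.
v = vs30([5, 10, 30], [200, 350, 600])
print(round(v), nehrp_class(v))  # -> 382 C
```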

Relevance: 30.00%

Publisher:

Abstract:

In this work an attempt has been made to evaluate the seismic hazard of South India (8.0 degrees N-20 degrees N; 72 degrees E-88 degrees E) based on probabilistic seismic hazard analysis (PSHA). The earthquake data obtained from different sources were declustered to remove dependent events. A total of 598 earthquakes of moment magnitude 4 and above were obtained from the study area after declustering and were considered for further hazard analysis. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones in the study area that are associated with earthquakes of magnitude 4 and above. For assessing the seismic hazard, the study area was divided into small grids of size 0.1 degrees x 0.1 degrees, and the hazard parameters were calculated at the centre of each of these grid cells by considering all the seismic sources within a radius of 300 km. Rock-level peak horizontal acceleration (PHA) and spectral acceleration (SA) values at a period of 1 s, corresponding to 10% and 2% probabilities of exceedance in 50 years, have been calculated for all the grid points. Contour maps showing the spatial variation of these values are presented here. The uniform hazard response spectrum (UHRS) at rock level for 5% damping and 10% and 2% probabilities of exceedance in 50 years was also developed for all the grid points. The peak ground acceleration (PGA) at surface level was calculated for the whole of South India for four different site classes. These values can be used to find the PGA value at any site in South India based on the site class at that location. Thus, this method can be viewed as a simplified method to evaluate the PGA values at any site in the study area.
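Under the usual Poisson assumption in PSHA, the quoted probabilities of exceedance map to annual rates (and return periods) as sketched below:

```python
import math

def annual_rate(p_exceed, years):
    """Annual exceedance rate for a Poisson occurrence model:
    P = 1 - exp(-lambda * T)  =>  lambda = -ln(1 - P) / T."""
    return -math.log(1.0 - p_exceed) / years

for p in (0.10, 0.02):
    lam = annual_rate(p, 50)
    print(f"{p:.0%} in 50 yr: return period ~ {1/lam:.0f} years")
# -> 10% in 50 yr: return period ~ 475 years
#    2% in 50 yr: return period ~ 2475 years
```

These are the familiar 475-year and 2475-year hazard levels used in design codes.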

Relevance: 30.00%

Publisher:

Abstract:

A numerical solution for the transient temperature distribution in a cylindrical disc heated on its top surface by a circular source is presented. A finite difference form of the governing equations is solved by the Alternating Direction Implicit (ADI) time-marching scheme. This solution has direct applications in analyzing transient electron beam heating of target materials, as encountered in the prebreakdown current enhancement and consequent breakdown in high voltage vacuum gaps stressed by alternating and pulsed voltages. The solution provides an estimate of the temperature for pulsed electron beam heating and of the size of thermally activated microparticles originating from anode hot spots. The calculated results for a typical 45 kV (a.c.) electron beam of radius 2.5 µm indicate that the temperature of such spots can reach the melting point and could give rise to microparticles that could initiate breakdown.
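As a simplified analogue of the transient heating computation (a plain explicit finite-difference scheme on a 2D Cartesian plate, rather than the paper's ADI scheme on a cylindrical disc; the grid size, hot-spot width, and step count are assumptions):

```python
def step(u, r):
    """One explicit finite-difference step of u_t = alpha*(u_xx + u_yy)
    on a uniform square grid, r = alpha*dt/h^2 (<= 0.25 for stability).
    Boundary values are held fixed (Dirichlet)."""
    n = len(u)
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = u[i][j] + r * (u[i+1][j] + u[i-1][j]
                                       + u[i][j+1] + u[i][j-1] - 4*u[i][j])
    return new

# 21x21 plate at 0, edges held at 0 except a hot segment on the top edge
# mimicking the circular beam spot; march 200 time steps.
n = 21
u = [[0.0] * n for _ in range(n)]
for j in range(8, 13):
    u[0][j] = 1.0          # heated spot (normalized temperature)
for _ in range(200):
    u = step(u, r=0.25)
```

Temperature stays bounded by the source value and decays with distance from the spot, which is the qualitative behaviour the disc solution quantifies.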

Relevance: 30.00%

Publisher:

Abstract:

Lifetime calculations for large, dense sensor networks with fixed energy resources and remaining residual energy have shown that, for a constant energy resource in a sensor network, the fault rate at the cluster head is network-size invariant when using the network layer with no MAC losses. Even after increasing the battery capacities of the nodes, the total lifetime does not increase beyond a limit of about 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adapting to the specific connectivity, traffic and channel-polling needs of sensor networks. Many MAC protocols allow controlling the channel polling of the new radios available to sensor nodes for communication. This further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at the single hop; (2a) estimating cluster head errors using the Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; and (2b) estimating the upper bound of routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities omega1 and omega2 and expected error P*, the error rate is bounded by a maximum of P = 2P* for the single hop.
We study the effects of energy losses using cross-layer simulation of a large sensor network MAC setup, and the error rate, which affects finding sufficient node densities for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior probability of error is close to or higher than the bound 2P*.
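The single-hop bound P = 2P* quoted above can be illustrated for two equal-prior Gaussian class densities, where the Bayes error P* has a closed form (an illustration of the bound, not the paper's network model):

```python
import math

def bayes_error(mu1, mu2, sigma):
    """Bayes error for two equal-prior Gaussian classes with equal
    variance sigma^2: P* = Phi(-|mu2 - mu1| / (2*sigma)),
    Phi being the standard normal CDF."""
    z = -abs(mu2 - mu1) / (2.0 * sigma)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_star = bayes_error(0.0, 2.0, 1.0)   # classes N(0,1) and N(2,1)
bound = 2.0 * p_star                  # the 2P* bound cited in the text
print(round(p_star, 4), round(bound, 4))  # -> 0.1587 0.3173
```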

Relevance: 30.00%

Publisher:

Abstract:

Prior work on modeling interconnects has focused on optimizing the wire and repeater design for trading off energy and delay, and is largely based on low-level circuit parameters. Hence these models are hard to use directly to make high-level microarchitectural trade-offs in the initial exploration phase of a design. In this paper, we propose INTACTE, a tool that can be used by architects to get reasonably accurate interconnect area, delay, and power estimates based on a few architecture-level parameters for the interconnect, such as length, width (in number of bits), frequency, and latency for a specified technology and voltage. The tool uses well-known models of interconnect delay and energy, taking into account the wire pitch, repeater size, and spacing for a range of voltages and technologies. It then solves an optimization problem of finding the lowest-energy interconnect design in terms of the low-level circuit parameters which meets the architectural constraints given as inputs. In addition, the tool also provides the area, energy, and delay for a range of supply voltages and degrees of pipelining, which can be used for microarchitectural exploration of a chip. The delay and energy models used by the tool have been validated against low-level circuit simulations. We discuss several potential applications of the tool and present an example of optimizing interconnect design in the context of clustered VLIW architectures. Copyright 2007 ACM.
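The kind of interconnect model such a tool builds on can be sketched with the classic repeated-wire Elmore delay (Bakoglu-style), where the optimal repeater count and size have closed forms; this is a generic textbook sketch, not INTACTE's actual model, and the technology numbers are hypothetical:

```python
import math

def repeated_wire_delay(k, h, Rw, Cw, R0, C0):
    """Elmore-style delay of a wire with total resistance Rw and
    capacitance Cw, driven by k repeaters each h times minimum size
    (R0, C0: min-size repeater resistance and capacitance)."""
    return k * (0.7 * (R0 / h) * (Cw / k + h * C0)
                + (Rw / k) * (0.4 * Cw / k + 0.7 * h * C0))

def optimal_repeaters(Rw, Cw, R0, C0):
    """Closed-form optimum number (k) and size (h) of repeaters,
    obtained by setting the partial derivatives of the delay to zero."""
    k = math.sqrt(0.4 * Rw * Cw / (0.7 * R0 * C0))
    h = math.sqrt(R0 * Cw / (Rw * C0))
    return k, h

# Hypothetical technology numbers for illustration.
Rw, Cw = 2000.0, 2e-12     # 2 kOhm, 2 pF total wire parasitics
R0, C0 = 10e3, 1e-15       # min-size repeater: 10 kOhm, 1 fF
k, h = optimal_repeaters(Rw, Cw, R0, C0)
d = repeated_wire_delay(k, h, Rw, Cw, R0, C0)
# k is non-integer here; a real tool would round and re-evaluate.
```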

Relevance: 30.00%

Publisher:

Abstract:

A Monte Carlo model of ultrasound modulation of multiply scattered coherent light in a highly scattering medium has been developed to estimate the phase shift experienced by a photon beam in transit through the ultrasound-insonified region. The phase shift is related to the tissue stiffness, thereby opening an avenue for possible breast tumor detection. When the scattering centers in the tissue medium are exposed to deterministic forcing by a focused ultrasound (US) beam, the US-induced oscillation is almost along one particular direction, defined by the transducer axis; the scattering events increase, thereby increasing the phase shift experienced by light that traverses the medium. The phase shift is found to increase with the anisotropy g of the medium. However, as the size of the focused region of interest (ROI) increases, a large number of scattering events take place within the ROI, and the ensemble average of the phase shift (Delta phi) becomes very close to zero. The phase of an individual photon is randomly distributed over 2 pi when the scattered photon path crosses a large number of ultrasound wavelengths in the focused region. This is true at high ultrasound frequency (1 MHz), when the mean free path length of a photon l(s) is comparable to the wavelength of the US beam. However, at much lower US frequencies (100 Hz), the wavelength of sound is orders of magnitude larger than l(s), and with a high value of g (g = 0.9) there is a distinct, measurable phase difference for a photon that traverses the insonified region. Experiments are carried out to validate the simulation results.

Relevance: 30.00%

Publisher:

Abstract:

Bubble size in a gas-liquid ejector has been measured using an imaging technique and analysed to estimate the Sauter mean diameter. The individual bubble diameter is estimated by considering the two-dimensional contour of the ellipse, for the actual three-dimensional ellipsoid in the system, by equating the volume of the ellipsoid to that of a sphere. It is observed that the bubbles are oblate- and prolate-shaped ellipsoids in this air-water system. The bubble diameter is calculated based on this concept and the Sauter mean diameter is estimated; the error between these two considerations is reported. The bubble size at different locations from the nozzle of the ejector is presented along with the percentage error, which is around 18%.
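The volume-equivalent diameter conversion and the Sauter mean described above can be sketched as follows; the bubble semi-axes are made-up example values, not measurements from the study:

```python
def equivalent_diameter(a, b):
    """Diameter of the sphere with the same volume as an ellipsoid of
    revolution with semi-axes (a, a, b):
    (4/3)*pi*a^2*b = (4/3)*pi*(d/2)^3  =>  d = 2*(a^2*b)^(1/3)."""
    return 2.0 * (a * a * b) ** (1.0 / 3.0)

def sauter_mean_diameter(diameters):
    """Sauter mean diameter: d32 = sum(d^3) / sum(d^2)."""
    return sum(d**3 for d in diameters) / sum(d**2 for d in diameters)

# Hypothetical semi-axes (mm) read off 2-D bubble contours.
bubbles = [(1.0, 0.8), (1.2, 0.9), (0.9, 1.1)]  # oblate and prolate
d_eq = [equivalent_diameter(a, b) for a, b in bubbles]
print(round(sauter_mean_diameter(d_eq), 3))  # -> 2.007
```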

Relevance: 30.00%

Publisher:

Abstract:

Event-triggered sampling (ETS) is a new approach towards efficient signal analysis. The goal of ETS need not be only signal reconstruction, but also direct estimation of desired information in the signal by skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, when compared to existing uniform Nyquist-rate-sampling-based signal processing. We examine samples drawn using ETS, with zero-crossing (ZC), level-crossing (LC), and extrema events, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust and also facilitate instantaneous amplitude (IA) and instantaneous frequency (IF) estimation in a time-varying sinusoid. The estimation is proposed using extrema samples alone and a local-polynomial-regression-based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic signal, energy separation, and ZC-based approaches (which rely on uniform Nyquist-rate data acquisition and processing). Further, extrema-based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size captured with extrema-based ETS and uniform sampling, the former gives much better IA and IF estimation. (C) 2015 Elsevier B.V. All rights reserved.
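A crude illustration of extrema-based estimation (a simple half-period rule on a clean constant-frequency sinusoid, not the paper's local polynomial regression or its noisy time-varying setting):

```python
import math

def extrema(samples):
    """Indices of local extrema in a uniformly sampled signal."""
    return [i for i in range(1, len(samples) - 1)
            if (samples[i] - samples[i-1]) * (samples[i+1] - samples[i]) < 0]

# Test signal: sinusoid with IA = 1 and IF = 5 Hz, sampled at fs = 1 kHz.
fs, f0 = 1000.0, 5.0
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(1000)]
idx = extrema(x)

# Crude estimates from the extrema alone: IA from |x| at the extrema,
# IF from the half-period between consecutive extrema, f = 1 / (2*dt).
ia = sum(abs(x[i]) for i in idx) / len(idx)
if_hz = [fs / (2.0 * (j - i)) for i, j in zip(idx, idx[1:])]
```

Note the sub-sampling the abstract mentions: 1000 uniform samples reduce to 10 extrema, which still recover IA = 1 and IF = 5 Hz here.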

Relevance: 20.00%

Publisher:

Abstract:

The formation of the helical morphology in monolayers and bilayers of chiral amphiphilic assemblies is believed to be driven, at least partly, by the interactions at the chiral centers of the amphiphiles. However, a detailed microscopic understanding of these interactions and their relation to helix formation is still lacking. In this article a study of the molecular origin of chirality-driven helix formation is presented by calculating, for the first time, the effective pair potential between a pair of chiral molecules. This effective potential depends on the relative sizes of the groups attached to the two chiral centers, on the orientation of the amphiphile molecules, and on the distance between them. We find that for the mirror-image isomers (in the racemic modification) the minimum-energy conformation is a nearly parallel alignment of the molecules. On the other hand, the same for a pair of molecules of one kind of enantiomer favors a tilt angle between them, thus leading to the formation of a helical morphology of the aggregate. The tilt angle is determined by the size of the groups attached to the chiral centers of the pair of molecules considered and in many cases is predicted to be close to 45 degrees. The present study therefore provides a molecular origin of the intrinsic bending force suggested by Helfrich (J. Chem. Phys. 1986, 85, 1085-1087) to be responsible for the formation of the helical structure. This effective potential may explain many of the existing experimental results, such as the size and concentration dependence of the formation of helical morphology. It is further found that the elastic forces can significantly modify the pitch predicted by the chiral interactions alone and that the modified real pitch is close to the experimentally observed value. The present study is expected to provide a starting point for future microscopic studies.

Relevance: 20.00%

Publisher:

Abstract:

Numerical analysis of cracked structures often involves numerical estimation of stress intensity factors (SIFs) at a crack tip/front. A newly developed formulation called the universal crack closure integral (UCCI) for the evaluation of potential energy release rates (PERRs) and the corresponding SIFs is presented in this paper. Unlike the existing element-dedicated forms of crack closure integrals (MCCI, VCCI), whose application is limited to finite element analysis, this new numerical SIF/PERR estimation technique is independent of the basic stress analysis procedure, making it universally applicable. A second merit of this procedure is that it avoids the generally error-producing zones close to the crack tip/front singularity. The UCCI procedure, based on Irwin's original CCI, is formulated and explored using a simple 2D problem of a straight crack in an infinite sheet. It is then applied to some three-dimensional crack geometries with the stresses and displacements obtained from a boundary element program.
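Once a PERR G is available from a crack closure integral, the corresponding mode-I SIF follows from Irwin's relation G = K_I^2 / E'; a minimal sketch (the material values are hypothetical, and only the uncoupled mode-I case is shown):

```python
import math

def sif_from_err(G, E, nu, plane_strain=True):
    """Mode-I stress intensity factor from the energy release rate,
    via Irwin's relation K_I = sqrt(G * E'), where E' = E/(1 - nu^2)
    in plane strain and E' = E in plane stress."""
    E_eff = E / (1.0 - nu**2) if plane_strain else E
    return math.sqrt(G * E_eff)

# Hypothetical values: G = 100 J/m^2, steel-like E = 200 GPa, nu = 0.3.
K = sif_from_err(100.0, 200e9, 0.3)
print(round(K / 1e6, 2), "MPa*sqrt(m)")  # -> 4.69 MPa*sqrt(m)
```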