15 results for sensitive data
at Indian Institute of Science - Bangalore - India
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler, and instantiated for the specific problem of Constant Propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
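To illustrate the precision loss at control-flow merges that motivates this restructuring, here is a minimal, hypothetical constant-propagation sketch (the lattice encoding and variable names are illustrative, not taken from the paper):

# Hypothetical sketch: precision loss at a control-flow merge in constant
# propagation. Lattice values: None = unknown, an int = constant, "NAC" = not a constant.

def meet(a, b):
    """Combine the values reaching a merge point from two predecessors."""
    if a is None:
        return b
    if b is None:
        return a
    return a if a == b else "NAC"

x_from_then = 1   # x = 1 on the then-path
x_from_else = 2   # x = 2 on the else-path

# At the merge both paths join, so x is no longer a known constant and a later
# use such as y = x + 1 cannot be folded.
print(meet(x_from_then, x_from_else))   # -> NAC

# Restructuring duplicates the code after the merge so that each path keeps its
# own copy, where x stays constant; that is the precision gained at the cost of
# the code-size increase the framework trades off.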
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics, and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines, and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
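As a hedged illustration of the kind of empirical model described above, the following sketch fits a plain least-squares model relating synthetic binary flag settings to runtimes and then ranks unmeasured configurations; the flag count, data, and exhaustive search are made up for illustration and are not the paper's setup:

import itertools
import numpy as np

rng = np.random.default_rng(0)

n_flags = 4                                    # hypothetical on/off optimization flags
X = rng.integers(0, 2, size=(12, n_flags))     # 12 measured flag configurations
true_effect = np.array([0.8, 0.3, 0.5, 0.1])
y = 5.0 - X @ true_effect + rng.normal(0.0, 0.05, size=12)   # synthetic runtimes (s)

# Least-squares fit with an intercept column.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict runtime for every flag setting and pick the predicted-best one.
configs = np.array(list(itertools.product([0, 1], repeat=n_flags)))
pred = np.hstack([np.ones((len(configs), 1)), configs]) @ coef
print("predicted-best flags:", configs[np.argmin(pred)])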
Abstract:
We revise and extend the extreme value statistic, introduced in Gupta et al., to study direction dependence in the high-redshift supernova data, arising either from departures from the cosmological principle or from direction-dependent statistical systematics in the data. We introduce a likelihood function that analytically marginalizes over the Hubble constant and use it to extend our previous statistic. We also introduce a new statistic that is sensitive to direction dependence arising from living off-centre inside a large void, as well as from the previously mentioned sources of anisotropy. We show that for large data sets this statistic has a limiting form that can be computed analytically. We apply our statistics to the gold data sets from Riess et al., as in our previous work. Our revision and extension of the previous statistic show that marginalizing over the Hubble constant, instead of using its best-fitting value, has only a marginal effect on our results. However, correction of errors in our previous work reduces the level of non-Gaussianity that was found in the 2004 gold data in our earlier work. The revised results for the 2007 gold data show that the data are consistent with isotropy and Gaussianity. Our second statistic confirms these results.
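For reference, a commonly used form of analytic marginalization over the Hubble constant in supernova fits absorbs H0 (together with the absolute magnitude) into a constant offset δμ in the distance modulus and integrates it out; whether the paper's likelihood takes exactly this form is not stated in the abstract:

\[
\chi^2(\delta\mu) = \sum_i \frac{\left[\Delta\mu_i - \delta\mu\right]^2}{\sigma_i^2},
\qquad
\tilde\chi^2 \equiv \min_{\delta\mu}\chi^2 = A - \frac{B^2}{C},
\]
with
\[
\Delta\mu_i = \mu_i^{\mathrm{obs}} - \mu_i^{\mathrm{th}}(z_i),\quad
A = \sum_i \frac{\Delta\mu_i^2}{\sigma_i^2},\quad
B = \sum_i \frac{\Delta\mu_i}{\sigma_i^2},\quad
C = \sum_i \frac{1}{\sigma_i^2};
\]
marginalizing over δμ with a flat prior gives the same expression up to an additive constant, so the result no longer depends on the assumed value of H0.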
Abstract:
Sensitive soils, in general, are prone to mechanical disturbance during sampling, handling, and testing. This necessitates prediction of the true field behavior. The compressibility response of such soils typically has three zones, mechanistically explained as nonparticulate, transitional, and particulate. Such zoning has enabled the development of a simple method to predict the field compressibility response of the sample. The field compression curve, with σct acting as the most probable yield stress, is considered to reflect 0% disturbance. By comparing the experimentally determined σc with σct, it is possible to estimate the degree of sample disturbance. When the value of σc is closer to σct, the sampling disturbance approaches zero; as the value of σc reduces, the degree of sampling disturbance increases. The possibility of using this degree of sample disturbance from compressibility data to obtain other true properties from laboratory results of the sampled specimens has been examined.
Abstract:
A graphical display of the frequency content of background electroencephalogram (EEG) activity is obtained by calculating spectral estimates using the autocorrelation (autoregressive) method and the classical Fourier transform method. The spectral content of consecutive data segments is displayed using a hidden-line suppression technique so as to obtain a spectral array. The autoregressive spectral array (ASA) is found to be sensitive to baseline drift. Following baseline correction, the autoregressive technique is found to be superior to the Fourier-based compressed spectral array (CSA) in detecting transitions in the frequencies of the signal. The smoothed ASA gives a better picture of transitions and changes in the background activity. The ASA can be made to adapt to specific changes of dominant frequencies while eliminating unnecessary peaks in the spectrum. The utility of the ASA for background EEG analysis is discussed.
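A minimal sketch of the per-segment autoregressive spectrum that such an array stacks (the segment handling, model order, and sampling rate below are assumptions, not the paper's values):

import numpy as np
from scipy.linalg import toeplitz, solve

def ar_spectrum(x, order=8, n_freq=256, fs=128.0):
    """AR (Yule-Walker) power spectrum of one EEG data segment."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                                  # crude baseline (DC) removal
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Yule-Walker equations: R a = r[1:], with R the Toeplitz autocorrelation matrix.
    a = solve(toeplitz(r[:order]), r[1:order + 1])
    sigma2 = r[0] - np.dot(a, r[1:order + 1])         # driving-noise variance
    # AR spectrum: sigma2 / |1 - sum_k a_k exp(-j w k)|^2
    freqs = np.linspace(0.0, fs / 2.0, n_freq)
    w = 2.0 * np.pi * freqs / fs
    denom = np.abs(1.0 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a) ** 2
    return freqs, sigma2 / denom

# Computing one spectrum per consecutive segment and stacking the results (with
# hidden-line suppression when plotting) yields the autoregressive spectral array.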
Abstract:
Observations from moored buoys during spring of 1998-2000 suggest that the warming of the mixed layer (~20 m deep) of the north Indian Ocean warm pool is a response to the net surface heat flux Q(net) (~100 W m⁻²) minus the penetrative solar radiation Q(pen) (~45 W m⁻²). A residual cooling due to vertical mixing and advection is indirectly estimated to be about 25 W m⁻². The rate of warming due to typical values of Q(net) minus Q(pen) is not very sensitive to the depth of the mixed layer if it lies between 10 m and 30 m.
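In sketch form, the mixed-layer heat budget implied by these numbers (the notation and the values of ρ and c_p are mine, used only as a rough check):

\[
\rho\, c_p\, h\, \frac{\partial T}{\partial t} \;\approx\; Q_{\mathrm{net}} - Q_{\mathrm{pen}} - Q_{\mathrm{res}}
\;\approx\; 100 - 45 - 25 \;=\; 30\ \mathrm{W\,m^{-2}},
\]
so with ρ ≈ 1025 kg m⁻³, c_p ≈ 4000 J kg⁻¹ K⁻¹ and h ≈ 20 m, the warming rate is roughly 30 / (1025 × 4000 × 20) ≈ 3.7 × 10⁻⁷ K s⁻¹, i.e. on the order of 1 K per month.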
Abstract:
An attempt has been made to study the film-substrate interface using a sensitive, non-conventional tool. Because of the prospective use of gate oxides in MOSFET devices, we have chosen to study alumina films grown on silicon. The film-substrate interface of alumina grown by MOCVD on Si(100) was studied systematically using spectroscopic ellipsometry in the range 1.5-5.0 eV, supported by cross-sectional SEM and SIMS. The (ε1,ε2) versus energy data obtained for films grown at 600°C, 700°C, and 750°C were modeled to fit a substrate/interface/film “sandwich”. The experimental results reveal (as may be expected) that the nature of the substrate-film interface depends strongly on the growth temperature. The simulated (ε1,ε2) patterns are in excellent agreement with the observed ellipsometric data. The MOCVD precursors result in the presence of carbon in the films. Theoretical simulation was able to account for the ellipsometry data by invoking the presence of “free” carbon in the alumina films.
Abstract:
Two new statistics, namely Δχ² and Δχ, based on extreme value theory, were derived by Gupta et al. We use these statistics to study direction dependence in the HST Key Project data, which provides one of the most precise measurements of the Hubble constant. We also study the non-Gaussianity in this data set using these statistics. Our results for Δχ² show that the significance of direction-dependent systematics is restricted to well below the 1σ confidence limit; however, the presence of non-Gaussian features is subtle. On the other hand, the Δχ statistic, which is more sensitive to direction dependence, shows direction-dependent systematics at a slightly higher confidence level, and the presence of non-Gaussian features at a level similar to the Δχ² statistic.
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlations between genes that have similar temporal profiles. Often, these methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of the interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation, by providing valuable insight into identifying time-sensitive interactions as well as permitting studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets relative to the number of interactions. The model is amenable to a linear-time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners, and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
Abstract:
As the gap between processor and memory speeds continues to grow, memory performance becomes a key bottleneck for many applications. Compilers therefore increasingly seek to modify an application’s data layout to improve cache locality and cache reuse. Whole Program Structure Layout (WPSL) transformations can significantly increase the spatial locality of data and reduce the runtime of programs that use link-based data structures by increasing cache-line utilization. However, in production compilers WPSL transformations do not realize their entire performance potential due to a number of factors. Structure layout decisions made on the basis of whole-program aggregated affinity/hotness of structure fields can be suboptimal for local code regions. WPSL is also restricted in applicability in production compilers for type-unsafe languages like C/C++ due to the extensive legality checks and field-sensitive pointer analysis required over the entire application. In order to overcome the issues associated with WPSL, we propose a Region-Based Structure Layout (RBSL) optimization framework that uses selective data copying. We describe our RBSL framework, implemented in the production compiler for C/C++ on HP-UX IA-64. We show that, acting in complement to the existing and mature WPSL transformation framework in our compiler, RBSL improves application performance in pointer-intensive SPEC benchmarks by 3% to 28% over WPSL.
Abstract:
Urban population is growing at around 2.3 percent per annum in India. This is leading to urbanisation and is often fuelling dispersed development on the outskirts of urban and village centres, with impacts such as loss of agricultural land, open space, and ecologically sensitive habitats. This type of upsurge is prevalent and persistent in most places and is often referred to as sprawl. The direct implication of such urban sprawl is a change in the land use and land cover of the region and a lack of basic amenities, since planners are unable to visualise this type of growth pattern. This growth is normally left out of government surveys (even the national population census), as it cannot be grouped under either an urban or a rural centre. The investigation of growth patterns is crucial from a regional planning point of view in order to provide basic amenities in the region. The growth patterns of urban sprawl can be analysed and understood with the availability of temporal multi-sensor, multi-resolution spatial data. In order to optimise these spectral and spatial resolutions, image fusion techniques are required. These aid in integrating a lower spatial resolution multispectral (MSS) image (for example, IKONOS MSS bands of 4 m spatial resolution) with a higher spatial resolution panchromatic (PAN) image (the IKONOS PAN band of 1 m spatial resolution) based on a simple spectral-preservation fusion technique, the Smoothing Filter-based Intensity Modulation (SFIM). Spatial details are modulated onto a co-registered lower resolution MSS image without altering its spectral properties and contrast by using the ratio between a higher resolution image and its low-pass filtered (smoothing filter) image. Visual evaluation and statistical analysis confirm that SFIM is a superior fusion technique for improving the spatial detail of MSS images while preserving their spectral properties.
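A hedged sketch of the SFIM fusion rule described above (window size, resampling method, and array sizes are illustrative assumptions, not the paper's parameters):

import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sfim_fuse(ms_band, pan, ratio=4, win=7):
    """Smoothing Filter-based Intensity Modulation for one MSS band."""
    ms_up = zoom(ms_band.astype(float), ratio, order=1)   # resample MSS to the PAN grid
    pan = pan.astype(float)
    pan_smooth = uniform_filter(pan, size=win)            # PAN degraded to MSS-like resolution
    # Spatial detail enters through the ratio PAN / lowpass(PAN), leaving the
    # local spectral balance of the MSS band unchanged.
    return ms_up * pan / np.maximum(pan_smooth, 1e-6)

# Example with synthetic data: a 4 m MSS band (64x64) fused with a 1 m PAN band (256x256).
rng = np.random.default_rng(1)
ms = rng.random((64, 64))
pan_hr = rng.random((256, 256))
print(sfim_fuse(ms, pan_hr, ratio=4).shape)   # (256, 256)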
Abstract:
The Reeb graph of a scalar function represents the evolution of the topology of its level sets. This paper describes a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds or non-manifolds in any dimension. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Critical points correspond to nodes in the Reeb graph. Arcs connecting the nodes are computed in the second step by a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The paper also describes a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. Finally, the Reeb graph is employed in four different applications: surface segmentation, spatially-aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.
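As a simplified, hypothetical sketch of the first step (locating critical points), the following classifies vertices of a piecewise-linear function on a triangle mesh by the connectivity of their lower and upper links; the mesh representation and tie-breaking are my own choices, not the paper's:

from collections import defaultdict

def classify_vertices(triangles, values):
    """Label each mesh vertex as minimum, maximum, regular, or saddle."""
    link = defaultdict(set)                    # link[v] = edges opposite v in some triangle
    for a, b, c in triangles:
        link[a].add((b, c)); link[b].add((a, c)); link[c].add((a, b))

    def below(u, v):                           # total order (value, index) to break ties
        return (values[u], u) < (values[v], v)

    def n_components(verts, edges):            # connected components via union-find
        parent = {u: u for u in verts}
        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]
                u = parent[u]
            return u
        for u, w in edges:
            parent[find(u)] = find(w)
        return len({find(u) for u in verts})

    labels = {}
    for v, edges in link.items():
        nbrs = {u for e in edges for u in e}
        lower = {u for u in nbrs if below(u, v)}
        upper = nbrs - lower
        lo = [(u, w) for u, w in edges if u in lower and w in lower]
        up = [(u, w) for u, w in edges if u in upper and w in upper]
        if not lower:
            labels[v] = "minimum"
        elif not upper:
            labels[v] = "maximum"
        elif n_components(lower, lo) == 1 and n_components(upper, up) == 1:
            labels[v] = "regular"
        else:
            labels[v] = "saddle"
    return labels

# Tiny example: two triangles sharing an edge, with function values per vertex.
print(classify_vertices([(0, 1, 2), (1, 2, 3)], {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}))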
Abstract:
The paper outlines a technique for sensitive measurement of conduction phenomena in liquid dielectrics. The special features of this technique are the simplicity of the electrical system, the inexpensive instrumentation, and the high accuracy. Detection, separation, and analysis of a random function of current that is superimposed on the prebreakdown direct current form the basis of this investigation. In this case, the prebreakdown direct current is the output of a test cell with large electrodes immersed in a liquid medium subjected to high direct voltages. Measurement of the probability-distribution function of the randomly fluctuating component of current provides a method that gives insight into the mechanism of conduction in a liquid medium subjected to high voltages and the processes that are responsible for the existence of the fluctuating component of the current.
Abstract:
The sensing of relative humidity (RH) at room temperature has potential applications in several areas, ranging from biomedical uses to the horticulture, paper, and textile industries. In this paper, a highly sensitive humidity sensor based on carbon nanotubes (CNTs) coated on the surface of an etched fiber Bragg grating (EFBG) has been demonstrated for detecting RH over a wide range of 20%-90% at room temperature. When water molecules interact with the CNT-coated EFBG, the effective refractive index of the fiber core changes, resulting in a shift in the Bragg wavelength. It has been possible to achieve a high sensitivity of ~31 pm/%RH, which is higher than that of many existing FBG-based humidity sensors. The limit of detection of the CNT-coated EFBG has been found to be ~0.03 RH. The experimental data show a linear response of the Bragg wavelength shift with increasing humidity. This novel method of incorporating CNTs onto the FBG sensor for humidity sensing has not been reported before.
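The wavelength shift being tracked follows the standard Bragg condition (a general FBG relation, not a result specific to this paper):

\[
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda
\quad\Rightarrow\quad
\Delta\lambda_B \approx 2\,\Lambda\,\Delta n_{\mathrm{eff}},
\]
where Λ is the grating period; water adsorbed on the CNT coating changes the effective index n_eff of the etched core and thereby shifts λ_B. At the quoted sensitivity of ~31 pm per %RH, the full 20%-90% RH range corresponds to a total shift of roughly 70 × 31 pm ≈ 2.2 nm.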
Abstract:
An accurate and highly sensitive sensor platform has been demonstrated for the detection of C-reactive protein (CRP) using optical fiber Bragg gratings (FBGs). CRP detection has been carried out by monitoring the shift in the Bragg wavelength (Δλ_B) of an etched FBG (eFBG) coated with an anti-CRP antibody (aCRP)-graphene oxide (GO) complex. The complex is characterized by Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and atomic force microscopy. A limit of detection of 0.01 mg/L has been achieved, with a linear range of detection from 0.01 mg/L to 100 mg/L, which includes the clinical range of CRP. The eFBG sensor coated with only aCRP (without GO) shows much lower sensitivity than the aCRP-GO complex-coated eFBG. The eFBG sensors show high specificity to CRP even in the presence of other interfering factors such as urea, creatinine, and glucose. An affinity constant of ~1.1 × 10¹⁰ M⁻¹ has been extracted from the data of normalized shift (Δλ_B/λ_B) as a function of CRP concentration. (C) 2014 Elsevier B.V. All rights reserved.
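One common way to extract an affinity constant from such binding data is to fit a Langmuir-type isotherm to the normalized shift; whether the paper uses exactly this functional form is not stated in the abstract:

\[
\frac{\Delta\lambda_B}{\lambda_B}(C) \;=\; \left(\frac{\Delta\lambda_B}{\lambda_B}\right)_{\!\max}\,
\frac{K_a\, C}{1 + K_a\, C},
\]
where C is the CRP concentration and K_a the association (affinity) constant; fitting the measured normalized shifts to a curve of this kind yields an affinity constant of the order quoted above.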