13 results for Microsoft Excel®
at Indian Institute of Science - Bangalore - India
Abstract:
A user-friendly interactive computer program, CIRDIC, has been developed that calculates molar ellipticity and molar circular dichroic absorption coefficients from a CD spectrum. In combination with a LOTUS 1-2-3 spreadsheet, it produces plots of these parameters versus wavelength. The code is implemented in Microsoft FORTRAN 77 and runs on any IBM-compatible PC under MS-DOS.
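The abstract does not give CIRDIC's equations, but the conversions it describes are the standard CD ones. A minimal Python sketch (not the FORTRAN original), assuming observed ellipticity in millidegrees, molar concentration in mol/L and path length in cm:

```python
# Standard CD conversions, sketched for illustration (not CIRDIC's code).
# theta_mdeg : observed ellipticity in millidegrees
# c_molar    : molar concentration in mol/L
# path_cm    : cuvette path length in cm

def molar_ellipticity(theta_mdeg: float, c_molar: float, path_cm: float) -> float:
    """[theta] in deg·cm²/dmol: 100 * theta(deg) / (c * l)."""
    return 100.0 * (theta_mdeg / 1000.0) / (c_molar * path_cm)

def molar_cd_absorption(theta_molar: float) -> float:
    """Delta-epsilon in M⁻¹·cm⁻¹, via the usual factor [theta]/3298.2."""
    return theta_molar / 3298.2

# Example: one point of a spectrum
theta = molar_ellipticity(theta_mdeg=15.0, c_molar=1e-4, path_cm=1.0)
print(theta, molar_cd_absorption(theta))
```

Applied per wavelength point, this yields exactly the parameter-versus-wavelength spectra the abstract describes.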
Abstract:
Titanium(III) tetrahydroborate, formed by the reaction of titanium tetrachloride with benzyltriethylammonium borohydride (1:4), reacts very readily with alkenes in dichloromethane (-20 °C) to yield the corresponding alcohols directly, in excellent yields, after a simple aqueous work-up.
Abstract:
Over the last few decades, there has been significant land cover (LC) change across the globe, driven by the demands of a burgeoning population and urban sprawl. Accounting for this change requires accurate and up-to-date LC maps. Mapping and monitoring of LC in India is carried out at the national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat-TM/ETM+, IRS-1C/D LISS-III/IV, AWiFS and SPOT-5 have adequate spatial resolution (~1 m to 56 m) for LC mapping at 1:50,000 scale. However, for developing countries and those with large geographical extent, seasonal LC mapping with data from commercial sensors of limited spatial coverage is prohibitive. Superspectral data from the MODIS sensor are freely available and offer better temporal (8-day composites) and spectral information. MODIS pixels typically contain a mixture of LC types (due to the coarse spatial resolution of 250, 500 and 1000 m), especially in fragmented landscapes. In this context, linear spectral unmixing is useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping with MODIS data, using end-members extracted through the Pixel Purity Index (PPI), scatter plots and n-dimensional visualisation. Abundance maps were generated for agriculture, built-up land, forest, plantations, waste land/others and water bodies. Assessment of the results against ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale application of the technique with superspectral data in natural resource planning and inventory applications.
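The abstract names the unmixing approach but not its equations. A common formulation is fully constrained linear unmixing: each pixel is a non-negative, sum-to-one mixture of end-member spectra. A minimal sketch with a made-up end-member matrix, using NNLS with the usual weighted sum-to-one row (an illustration, not the authors' pipeline):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel: np.ndarray, endmembers: np.ndarray, delta: float = 1e3) -> np.ndarray:
    """Fully constrained linear unmixing of one pixel.

    pixel      : (bands,) reflectance vector
    endmembers : (bands, n_classes) matrix, one column per end-member
    Sum-to-one is enforced by appending a heavily weighted row of ones
    (a common practical trick); non-negativity comes from NNLS itself.
    """
    bands, n = endmembers.shape
    A = np.vstack([endmembers, delta * np.ones((1, n))])
    b = np.append(pixel, delta)
    abundances, _ = nnls(A, b)
    return abundances

# Toy example: 4 bands, 3 end-members, pixel = 60% class 0 + 40% class 2
E = np.random.rand(4, 3)
true_a = np.array([0.6, 0.0, 0.4])
print(unmix_pixel(E @ true_a, E))
```

Running this per MODIS pixel, with one end-member column per class, yields the per-class abundance maps the abstract describes.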
Abstract:
This paper presents a new application of two-dimensional Principal Component Analysis (2DPCA) to online character recognition for the Tamil script. A novel set of features employing polynomial fits and quartiles, in combination with conventional features, is derived for each sample point of a Tamil character after smoothing and resampling. These are stacked to form a matrix, from which a covariance matrix is constructed. A subset of the eigenvectors of the covariance matrix is used to obtain features in the reduced subspace. Each character is modeled as a separate subspace, and a modified form of the Mahalanobis distance is derived to classify a given test character. Results indicate that recognition accuracy with the 2DPCA scheme improves by approximately 3% over the conventional PCA technique.
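A sketch of the core 2DPCA step as commonly formulated: the covariance matrix is built directly from the stacked feature matrices (no vectorisation) and each sample is projected onto its leading eigenvectors. Matrix sizes here are invented, and the paper's modified Mahalanobis classifier is not reproduced:

```python
import numpy as np

def fit_2dpca(samples: list, k: int) -> np.ndarray:
    """Top-k eigenvectors (columns) of the 2DPCA covariance matrix.

    samples : list of (rows, cols) feature matrices, one per character,
              e.g. per-sample-point features stacked row-wise.
    """
    mean = np.mean(samples, axis=0)
    G = sum((A - mean).T @ (A - mean) for A in samples) / len(samples)
    eigvals, eigvecs = np.linalg.eigh(G)   # eigh returns ascending order
    return eigvecs[:, ::-1][:, :k]         # keep the top-k columns

def project(A: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Reduced feature matrix Y = A X."""
    return A @ X

# Toy usage: 100 "characters", each a 64x8 feature matrix, reduced to 3 columns
data = [np.random.randn(64, 8) for _ in range(100)]
X = fit_2dpca(data, k=3)
Y = project(data[0], X)   # 64x3 reduced representation
```

Modeling each character class as its own subspace then amounts to fitting one such projection per class and classifying by distance in the reduced space.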
Abstract:
Displacement-amplifying compliant mechanisms (DaCMs) reported in the literature are mostly used in actuator applications. This paper considers them for sensor applications that rely on displacement measurement, and evaluates them objectively. The main goal is to increase sensitivity under several secondary requirements and practical constraints. A spring-mass-lever model that effectively captures the addition of a DaCM to a sensor is used to compare eight DaCMs. We observe that they differ significantly in performance criteria such as geometric advantage, stiffness, natural frequency, mode amplification, factor of safety against failure, and cross-axis stiffness, but none excels in all of them. Thus, a combined figure of merit is proposed with which the most suitable DaCM can be selected for a given sensor application. Case studies of a micromachined capacitive accelerometer and a vision-based force sensor illustrate the evaluation and selection procedure for specific applications. The analysis also yields insights into the optimum size-scale of a DaCM, the effect on its natural frequency, limits on its stiffness, and the working range of the sensor.
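The abstract does not define the combined figure of merit, so the following is purely a generic multi-criteria selection sketch, with invented criteria values and weights, using a weighted geometric mean; the paper's actual formula may well differ:

```python
import numpy as np

# Hypothetical criteria for three candidate DaCMs (rows) over four
# criteria (cols): geometric advantage, natural frequency (kHz),
# factor of safety, cross-axis stiffness ratio. All numbers invented.
criteria = np.array([
    [12.0, 8.5, 2.1, 40.0],   # DaCM A
    [20.0, 3.2, 1.4, 55.0],   # DaCM B
    [ 7.5, 9.8, 3.0, 25.0],   # DaCM C
])
weights = np.array([0.4, 0.3, 0.2, 0.1])   # assumed relative importance

normalized = criteria / criteria.max(axis=0)   # scale each criterion to [0, 1]
fom = np.prod(normalized ** weights, axis=1)   # weighted geometric mean
print("best DaCM:", ["A", "B", "C"][int(np.argmax(fom))], fom)
```

A geometric mean is a natural choice here because a mechanism that fails badly on any one criterion is penalised rather than averaged away.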
Abstract:
Microsoft Windows uses the notion of a registry to store all configuration information. Registry entries have associations and dependencies; for example, paths to executables may be relative to some home directory. Because the registry is designed with fast access as one of its objectives, it does not explicitly capture these relations. In this paper, we explore a representation that captures the dependencies more explicitly using shared and unifying variables. This representation, called mRegistry, exploits the tree-structured hierarchical nature of the registry; it is concept-based and obtained in multiple stages. mRegistry captures intra-block, inter-block and ancestor-children dependencies (all leaf entries of a parent key in the registry, taken together as an entity, constitute a block, making the block the only child of the parent). In addition, it learns generalized concepts of dependencies in the form of rules. We show that mRegistry has several applications: fault diagnosis, prediction, comparison, compression, etc.
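A toy sketch of the block idea described above, with invented keys and values: leaf entries under one parent key are grouped into a block, and a path prefix shared across the block's values is exposed as a candidate unifying variable. This illustrates the concept only; it is not the mRegistry system:

```python
from collections import defaultdict

# Invented registry entries for illustration.
registry = {
    r"HKLM\Software\Foo\Path": r"C:\Program Files\Foo\foo.exe",
    r"HKLM\Software\Foo\Help": r"C:\Program Files\Foo\help.chm",
    r"HKLM\Software\Bar\Path": r"C:\Tools\bar.exe",
}

def shared_prefix(paths):
    """Longest backslash-separated prefix common to all paths."""
    split = [p.split("\\") for p in paths]
    common = []
    for parts in zip(*split):
        if len(set(parts)) != 1:
            break
        common.append(parts[0])
    return "\\".join(common)

blocks = defaultdict(dict)
for key, value in registry.items():
    parent, leaf = key.rsplit("\\", 1)
    blocks[parent][leaf] = value          # block = all leaves of one parent key

for parent, leaves in blocks.items():
    prefix = shared_prefix(list(leaves.values())) if len(leaves) > 1 else ""
    print(parent, "-> unifying variable:", prefix or "<none>")
```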
Abstract:
In this paper we consider the process of discovering frequent episodes in event sequences. The most computationally intensive part of this process is counting the frequencies of a set of candidate episodes. We present two new frequency-counting algorithms to speed up this part. These, referred to as non-overlapping and non-interleaved frequency counts, are based on directly counting suitable subsets of the occurrences of an episode. Hence they differ from the frequency counts of Mannila et al. [1], which count the number of windows in which an episode occurs. Our new frequency counts offer a speed-up factor of 7 or more on real and synthetic datasets. We also show how the new frequency counts can be used when the events in episodes have time durations.
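For a single serial episode, the non-overlapping count can be illustrated with a greedy left-to-right scan; the paper's algorithms count a whole candidate set at once, so this one-episode sketch is only illustrative:

```python
def count_nonoverlapping(sequence, episode):
    """Count non-overlapping occurrences of a serial episode.

    sequence : list of event types in time order, e.g. ['A','C','B','A','B']
    episode  : tuple of event types that must occur in order, e.g. ('A','B')

    Greedy scan: track the next event type the episode expects; each
    completed occurrence increments the count and the scan restarts
    after it, so counted occurrences never share events.
    """
    count, i = 0, 0
    for event in sequence:
        if event == episode[i]:
            i += 1
            if i == len(episode):
                count += 1
                i = 0
    return count

print(count_nonoverlapping(list("ACBABB"), ("A", "B")))   # -> 2
```

Note the contrast with windows-based counting: here each event contributes to at most one counted occurrence, which is what makes the count cheap to maintain in one pass.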
Abstract:
Discovering patterns in temporal data is an important task in data mining. A successful method for this was proposed by Mannila et al. [1] in 1997. In their framework, mining for temporal patterns in a database of event sequences is done by discovering so-called frequent episodes. These episodes characterize interesting collections of events occurring relatively close to each other in some partial order. However, in this framework (and in many others for finding patterns in event sequences), the ordering of events in an event sequence is the only temporal information allowed. There are many applications where events are not instantaneous: they have time durations, and the interesting episodes we want to discover may need to carry information about those durations. In this paper we extend Mannila et al.'s framework to tackle such issues. In our generalized formulation, episodes are defined so that much more temporal information about events can be incorporated into the structure of an episode. This significantly enhances the expressive capability of the rules that can be discovered in the frequent episode framework. We also present algorithms for discovering such generalized frequent episodes.
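Extending the counting sketch given after the previous abstract, one simple way durations can enter the episode structure is as per-event duration intervals. This toy version (not the paper's formulation, which is richer) accepts an event only if its duration lies in the prescribed interval:

```python
def count_nonoverlapping_durations(sequence, episode):
    """Non-overlapping count where events carry durations.

    sequence : list of (event_type, duration) pairs in time order
    episode  : tuple of (event_type, (min_dur, max_dur)) pairs; an event
               matches a position only if its duration lies in the interval.
    """
    count, i = 0, 0
    for etype, dur in sequence:
        want, (lo, hi) = episode[i]
        if etype == want and lo <= dur <= hi:
            i += 1
            if i == len(episode):
                count += 1
                i = 0
    return count

seq = [("A", 2.0), ("B", 0.5), ("A", 1.0), ("B", 3.0)]
ep = (("A", (0.5, 3.0)), ("B", (1.0, 5.0)))
print(count_nonoverlapping_durations(seq, ep))   # -> 1: the 0.5-long B is rejected
```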
Abstract:
India's energy challenge is three-pronged: a majority of the population is energy-poor and lacks access to modern energy; the energy system must expand to bridge this access gap and to meet the requirements of a fast-growing economy; and India seeks to partner with global economies in mitigating the threat of climate change. That 364 million people lack access to electricity and 726 million rely on biomass for cooking, out of a total rural population of 809 million, indicates the seriousness of the challenge. In this paper, we discuss an innovative approach to addressing this challenge that takes advantage of recent global developments and untapped capabilities in India. The intention is to use the climate-change-mitigation imperative as a stimulus and adopt a public-private-partnership-driven 'business model' with innovative institutional, regulatory, financing, and delivery mechanisms. Among the innovations: creation of rural energy access authorities within the government system as leadership institutions; establishment of energy access funds to enable the transition from "investment/fuel subsidies" to "incentive-linked" delivery of energy services; integration of business principles to facilitate affordable and equitable energy sales and carbon trade; and treatment of entrepreneurs as implementation targets. The proposal targets 100% access to modern energy carriers by 2030 through a judicious mix of conventional and biomass energy systems, with an investment of US$35 billion over 20 years. The estimated annual cost of universal energy access is about US$9 billion, for a GHG mitigation potential of 213 Tg CO2e at an abatement cost of US$41/tCO2e. This is a win-win for all stakeholders: households obtain modern energy carriers at affordable cost; entrepreneurs run profitable energy enterprises; carbon markets gain access to CERs; the government secures energy access for rural people; and, globally, climate change mitigation benefits.
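(As a rough consistency check on the figures quoted above: 213 Tg CO2e per year at US$41/tCO2e gives 213,000,000 t × US$41/t ≈ US$8.7 billion per year, in line with the stated annual cost of about US$9 billion.)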
Abstract:
With rising demand for GPS positioning, low-cost receivers are becoming widely available, but their energy demands remain too high. For energy-efficient GPS sensing in delay-tolerant applications, offloading a few milliseconds of raw signal samples and leveraging the greater processing power of the cloud to obtain a position fix is being actively investigated. To reduce the energy cost of this offloading operation, we propose Sparse-GPS: a new computing framework for GPS acquisition via sparse approximation. Within the framework, GPS signals can be efficiently compressed by random ensembles. The sparse acquisition information pertaining to the visible satellites embedded in these limited measurements can subsequently be recovered with our proposed representation dictionary. Through extensive empirical evaluation, we demonstrate the acquisition quality and energy gains of Sparse-GPS: it is twice as energy-efficient as offloading uncompressed data and has 5-10 times lower energy cost than standalone GPS, with a median positioning accuracy of 40 m.
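Sparse-GPS's representation dictionary is specific to GPS signal structure and is not described in the abstract. The generic compressed-sensing pipeline it builds on can be sketched as follows, with random Gaussian measurements standing in for the paper's random ensembles and Orthogonal Matching Pursuit as a stock sparse-recovery routine:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x.

    Phi : (m, n) sensing matrix, m << n
    y   : (m,) compressed measurements
    k   : assumed sparsity (loosely, the number of visible satellites)
    """
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0                    # don't re-select chosen atoms
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Toy demo: 3-sparse signal in n=256 dims from m=64 random measurements
rng = np.random.default_rng(0)
n, m, k = 256, 64, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(Phi, Phi @ x_true, k)
print("max recovery error:", np.max(np.abs(x_true - x_hat)))  # near zero here
```

The energy argument follows from the sizes involved: only the m compressed samples are offloaded, while the recovery work runs in the cloud.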
Abstract:
We show here a 2^{Ω(√d · log N)} size lower bound for homogeneous depth-four arithmetic formulas. That is, we give an explicit family of polynomials of degree d in N variables (with N = d^3 in our case) with 0/1 coefficients such that for any representation of a polynomial f in this family of the form f = Σ_i Π_j Q_{ij}, where the Q_{ij} are homogeneous polynomials (recall that a polynomial is said to be homogeneous if all its monomials have the same degree), it must hold that Σ_{i,j} (number of monomials of Q_{ij}) ≥ 2^{Ω(√d · log N)}. The above-mentioned family, which we refer to as the Nisan-Wigderson design-based family of polynomials, is in the complexity class VNP. Our work builds on the recent lower-bound results [1], [2], [3], [4], [5] and yields an improved quantitative bound compared to the quasi-polynomial lower bound of [6] and the N^{Ω(log log N)} lower bound in the independent work of [7].