11 results for Disclosure of Information
at Indian Institute of Science - Bangalore - India
Abstract:
Autonomous mission control, unlike automatic mission control, which is generally pre-programmed to execute an intended mission, is guided by the philosophy of carrying out a complete mission on its own through online sensing, information processing, and control reconfiguration. A crucial cornerstone of this philosophy is the capability for intelligence and information sharing between unmanned aerial vehicles (UAVs) or with a central controller through secured communication links. Though several mission control algorithms, for single and multiple UAVs, have been discussed in the literature, they lack a clear definition of the various autonomous mission control levels. In the conventional system, the ground pilot issues flight and mission control commands to a UAV through a command data link, and the UAV transmits intelligence information back to the ground pilot through a communication link. Thus, the success of the mission depends entirely on the information flow through a secured communication link between the ground pilot and the UAV. In the past, mission success depended on the continuous interaction of the ground pilot with a single UAV, while present-day applications attempt to define mission success through efficient interaction of the ground pilot with multiple UAVs. The current trend in UAV applications is expected to lead to a futuristic scenario where mission success would depend only on interaction among UAV groups, with no interaction with any ground entity. However, to reach this capability level, it is necessary to first understand the various levels of autonomy and the crucial role that information and communication play in making these autonomy levels possible. This article presents a detailed framework of UAV autonomous mission control levels in the context of information flow and communication between UAVs and UAV groups for each level of autonomy.
Abstract:
One influential image that is popular among scientists is the view that mathematics is the language of nature. The present article discusses another possible way to approach the relation between mathematics and nature: using the idea of information and the conceptual vocabulary of cryptography. This approach allows us to understand the possibility that the secrets of nature need not be written in mathematics and yet mathematics is necessary as a cryptographic key to unlock these secrets. Various advantages of such a view are described in this article.
Abstract:
An attempt is made to present some challenging problems (mainly to the technically minded researchers) in the development of computational models for certain (visual) processes which are executed with, apparently, deceptive ease by the human visual system. However, in the interest of simplicity (and with a nonmathematical audience in mind), the presentation is almost completely devoid of mathematical formalism. Some of the findings in biological vision are presented in order to provoke some approaches to their computational models. The development of ideas is not complete, and the vast literature on biological and computational vision cannot be reviewed here. A related but rather specific aspect of computational vision (namely, detection of edges) has been discussed by Zucker, who brings out some of the difficulties experienced in the classical approaches. Space limitations here preclude any detailed analysis of even the elementary aspects of information processing in biological vision. However, the main purpose of the present paper is to highlight some of the fascinating problems in the frontier area of mathematically modelling the human visual system.
Abstract:
We model the spread of information in a homogeneously mixed population using the Maki-Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation, adapted to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion, and crowdfunding campaigns.
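To make the numerical procedure named above concrete, the Python sketch below runs a bare-bones forward-backward sweep on a controlled Maki-Thompson-type model. It is only an illustration under simplifying assumptions, not the paper's formulation: the control here converts ignorants into spreaders only, the cost is quadratic with a fixed weight `lam` standing in for the budget (the paper instead enforces an isoperimetric constraint), and all parameter values are arbitrary.

```python
import numpy as np

# Forward-backward sweep sketch for a controlled Maki-Thompson-type rumor model.
# Assumptions (not from the paper): control u converts ignorants to spreaders,
# quadratic running cost lam*u^2 replaces the isoperimetric budget constraint,
# terminal cost is the fraction of ignorants left (so minimizing it maximizes spread).

beta = 0.5                     # spreading rate (constant here; time-varying in the paper)
lam = 0.1                      # illustrative cost weight
T, N = 10.0, 1000
dt = T / N
x0, y0, z0 = 0.99, 0.01, 0.0   # initial ignorants, spreaders, stiflers

def rhs(state, u):
    x, y, z = state
    dx = -beta * x * y - u * x
    dy = beta * x * y - beta * y * (y + z) + u * x
    dz = beta * y * (y + z)
    return np.array([dx, dy, dz])

u = np.zeros(N + 1)            # initial guess for the control
for sweep in range(50):
    # forward sweep: integrate the state with the current control (explicit Euler)
    X = np.zeros((N + 1, 3))
    X[0] = [x0, y0, z0]
    for k in range(N):
        X[k + 1] = X[k] + dt * rhs(X[k], u[k])

    # backward sweep: integrate the costates of H = lam*u^2 + p·f, terminal cost x(T)
    P = np.zeros((N + 1, 3))
    P[N] = [1.0, 0.0, 0.0]     # gradient of the terminal cost w.r.t. the state
    for k in range(N, 0, -1):
        x, y, z = X[k]
        p1, p2, p3 = P[k]
        dp1 = -(-p1 * (beta * y + u[k]) + p2 * (beta * y + u[k]))
        dp2 = -(-p1 * beta * x + p2 * (beta * x - beta * (2 * y + z)) + p3 * beta * (2 * y + z))
        dp3 = -(-p2 * beta * y + p3 * beta * y)
        P[k - 1] = P[k] - dt * np.array([dp1, dp2, dp3])

    # stationarity: dH/du = 2*lam*u + (p2 - p1)*x = 0, with u clipped to [0, 1]
    u_new = np.clip((P[:, 0] - P[:, 1]) * X[:, 0] / (2.0 * lam), 0.0, 1.0)
    u = 0.5 * u + 0.5 * u_new  # damped update for stability

print("final ignorant fraction:", X[-1, 0])
```

Each iteration integrates the state forward with the current control, integrates the adjoint backward from the terminal condition, and then updates the control from the stationarity condition of the Hamiltonian, with damping to help convergence.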
Abstract:
The disclosure of information and its misuse in Privacy Preserving Data Mining (PPDM) systems is a concern to the parties involved. In PPDM systems, data is available amongst multiple parties collaborating to achieve cumulative mining accuracy. The vertically partitioned data available with the parties involved cannot provide accurate mining results when compared to the collaborative mining results. To overcome the privacy issue in data disclosure, this paper describes a Key Distribution-Less Privacy Preserving Data Mining (KDLPPDM) system in which the local association rules generated by the parties are published. The association rules are securely combined to form the combined rule set using the Commutative RSA algorithm. The combined rule sets established are used to classify or mine the data. The results discussed in this paper compare the accuracy of the rules generated using the C4.5-based KDLPPDM system and the C5.0-based KDLPPDM system using receiver operating characteristic (ROC) curves.
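The key primitive, commutative encryption, can be illustrated with a short Python sketch. The toy primes, the SHA-256 encoding of rules, and the matching procedure below are illustrative assumptions rather than the paper's exact protocol; the point is only that parties sharing a modulus but holding distinct exponents can encrypt in either order and compare doubly-encrypted rules without revealing them.

```python
from hashlib import sha256

# Illustrative commutative RSA: both parties share the modulus n but hold their
# own exponents, so E_a(E_b(x)) == E_b(E_a(x)). Toy parameters for demonstration
# only; real deployments use large primes (requires Python 3.8+ for pow(e, -1, phi)).

p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)

e_a, e_b = 7, 11               # party A's and party B's encryption exponents
d_a = pow(e_a, -1, phi)        # corresponding private exponents
d_b = pow(e_b, -1, phi)

def encode(rule):
    """Map an association rule (string) to an integer modulo n."""
    return int.from_bytes(sha256(rule.encode()).digest(), "big") % n

def enc(m, e):
    return pow(m, e, n)

rule = "bread -> butter"
m = encode(rule)

# Each party encrypts with its own exponent; the order of encryption does not matter.
ab = enc(enc(m, e_a), e_b)
ba = enc(enc(m, e_b), e_a)
assert ab == ba                # commutativity

# Layers can also be removed in either order using the private exponents.
assert pow(pow(ab, d_a, n), d_b, n) == m

print("doubly encrypted rule:", ab)
```

Because doubly-encrypted values are equal exactly when the underlying rules are equal, two parties can detect common rules by exchanging singly-encrypted values, re-encrypting each other's values, and comparing the results.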
Abstract:
In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors which can collectively sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Set of Information Covers (DSIC), which achieves longer network lifetime compared to the set of covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
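As a rough illustration of the underlying idea (not the DSIC or EGEH algorithms themselves), the Python sketch below greedily builds disjoint information covers under a simple 1/d^2 information model; activating one cover per time slot then extends network lifetime in proportion to the number of covers obtained. The threshold, sensor layout, and information model are all assumptions made for the example.

```python
import numpy as np

# Illustrative construction of disjoint information covers for point-targets.
# "Information" contributed by a sensor is modelled as 1/d^2 (a common
# estimation-theoretic proxy), and a set of sensors covers a target when the
# summed information exceeds a threshold. This greedy sketch only illustrates
# the notion of disjoint covers; it is not the DSIC algorithm from the paper.

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 100, size=(60, 2))   # sensor positions
targets = rng.uniform(0, 100, size=(3, 2))    # point-target positions
THRESH = 0.02                                  # required information per target

def info(sensor, target):
    d2 = np.sum((sensor - target) ** 2) + 1e-9
    return 1.0 / d2

unused = set(range(len(sensors)))
covers = []
while True:
    cover, feasible = set(), True
    for tgt in targets:
        acc = 0.0
        # rank the still-unused sensors by their contribution to this target
        ranked = sorted(unused, key=lambda i: -info(sensors[i], tgt))
        for i in ranked:
            if acc >= THRESH:
                break
            cover.add(i)                       # a sensor may serve several targets
            acc += info(sensors[i], tgt)
        if acc < THRESH:
            feasible = False                   # cannot cover this target anymore
            break
    if not feasible:
        break
    covers.append(cover)
    unused -= cover                            # keep the covers disjoint

print(f"formed {len(covers)} disjoint information covers")
```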
Abstract:
Seismic hazard analysis and microzonation of cities enable characterization of the potential seismic areas that need to be taken into account when designing new structures or retrofitting existing ones. The study of seismic hazard and the preparation of geotechnical microzonation maps have been attempted using a Geographical Information System (GIS). GIS provides an effective solution for integrating different layers of information, thus providing a useful input for city planning and, in particular, for earthquake-resistant design of structures in an area. Seismic hazard is the study of expected earthquake ground motions at any point on the earth. Microzonation is the process of subdividing a region into a number of zones based on earthquake effects at the local scale. Seismic microzonation is the process of estimating the response of soil layers under earthquake excitation and thus the variation of ground motion characteristics at the ground surface. For seismic microzonation, geotechnical site characterization needs to be assessed at the local scale (micro level), which is further used to assess the site response and liquefaction susceptibility of the sites. A seismotectonic atlas of the area within a radius of 350 km around Bangalore has been prepared with all the seismogenic sources and historic earthquake events (a catalogue of about 1400 events since 1906). We have attempted to carry out the site characterization of Bangalore by collating conventional geotechnical borehole data (about 900 boreholes with depth) and integrating them in GIS. A 3-D subsurface model of Bangalore prepared using GIS is shown in Figure 1. Further, a shear wave velocity survey based on geophysical methods has been carried out at about 60 locations in the city over a 220 sq. km area. Site response and local site effects have been evaluated using 1-dimensional ground response analysis. Spatial variability of soil overburden depths, ground-surface peak ground acceleration (PGA), spectral acceleration for different frequencies, and liquefaction susceptibility have been mapped over the 220 sq. km area using GIS. ArcInfo software has been used for this purpose. These maps can be used for city planning and risk and vulnerability studies. Figure 2 shows a map of peak ground acceleration at rock level for Bangalore city. Microtremor experiments were jointly carried out with NGRI scientists at about 55 locations in the city, and the predominant frequencies of the overburden soil columns were evaluated.
Abstract:
Sensory receptors determine the type and the quantity of information available for perception. Here, we quantified and characterized the information transferred by primary afferents in the rat whisker system using neural system identification. Quantification of "how much" information is conveyed by primary afferents, using the direct method (DM), a classical information theoretic tool, revealed that primary afferents transfer huge amounts of information (up to 529 bits/s). Information theoretic analysis of instantaneous spike-triggered kinematic stimulus features was used to gain functional insight on "what" is coded by primary afferents. Amongst the kinematic variables tested (position, velocity, and acceleration), primary afferent spikes encoded velocity best. The other two variables contributed to information transfer, but only if combined with velocity. We further revealed three additional characteristics that play a role in information transfer by primary afferents. Firstly, primary afferent spikes show a preference for well-separated multiple stimuli (i.e., well-separated sets of combinations of the three instantaneous kinematic variables). Secondly, neurons are sensitive to short strips of the stimulus trajectory (up to 10 ms pre-spike time), and thirdly, they show spike patterns (precise doublet and triplet spiking). In order to deal with these complexities, we used a flexible probabilistic neuron model fitting mixtures of Gaussians to the spike-triggered stimulus distributions, which quantitatively captured the contribution of the mentioned features and allowed us to achieve a full functional analysis of the total information rate indicated by the DM. We found that instantaneous position, velocity, and acceleration explained about 50% of the total information rate. Adding a 10 ms pre-spike interval of the stimulus trajectory achieved 80-90%. The final 10-20% were found to be due to non-linear coding by spike bursts.
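For readers unfamiliar with the direct method, the Python sketch below shows the basic total-entropy-minus-noise-entropy computation on synthetic spike trains. The bin width, word length, and synthetic rate profile are illustrative assumptions, and the word-length and data-size extrapolations of the full method are omitted, so the number it prints is not comparable to the rates reported above.

```python
import numpy as np
from collections import Counter

# Minimal sketch of the direct method for estimating the information rate of a
# spike train from a repeated (frozen-stimulus) experiment. Synthetic data; no
# extrapolation corrections, so this only illustrates the core computation.

rng = np.random.default_rng(1)
dt = 0.002                         # bin width in seconds
word_len = 5                       # bins per word
n_trials, n_bins = 200, 1000
n_words = n_bins // word_len       # non-overlapping words per trial

# synthetic experiment: a time-varying rate profile, identical across trials
rate = 20 + 80 * (rng.random(n_bins) < 0.1)           # Hz
spikes = rng.random((n_trials, n_bins)) < rate * dt    # binary spike rasters

def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def to_words(row):
    return [tuple(row[k * word_len:(k + 1) * word_len]) for k in range(n_words)]

# total entropy: word distribution pooled over all times and trials
H_total = entropy(Counter(w for trial in spikes for w in to_words(trial)))

# noise entropy: word distribution across trials at each time slot, averaged over time
H_noise = np.mean([
    entropy(Counter(map(tuple, spikes[:, k * word_len:(k + 1) * word_len])))
    for k in range(n_words)
])

info_rate = (H_total - H_noise) / (word_len * dt)      # bits per second
print(f"estimated information rate: {info_rate:.1f} bits/s")
```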
Abstract:
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a 'footprint' in the generator potential that obscures incoming signals. These three processes reduce information rates by ~50% in generator potentials, to ~3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.