12 results for Motor Vehicle Information and Cost Savings Acts.
at Indian Institute of Science - Bangalore - India
Abstract:
Autonomous mission control, unlike automatic mission control, which is generally pre-programmed to execute an intended mission, is guided by the philosophy of carrying out a complete mission on its own through online sensing, information processing, and control reconfiguration. A crucial cornerstone of this philosophy is the capability for intelligence gathering and information sharing among unmanned aerial vehicles (UAVs), or with a central controller, through secured communication links. Although several mission control algorithms for single and multiple UAVs have been discussed in the literature, they lack a clear definition of the various autonomous mission control levels. In the conventional system, the ground pilot issues flight and mission control commands to a UAV through a command data link, and the UAV transmits intelligence information back to the ground pilot through a communication link. Thus, the success of the mission depends entirely on the information flow through a secured communication link between the ground pilot and the UAV. In the past, mission success depended on the continuous interaction of the ground pilot with a single UAV, while present-day applications attempt to define mission success through efficient interaction of the ground pilot with multiple UAVs. The current trend in UAV applications, however, is expected to lead to a futuristic scenario in which mission success would depend only on interaction among UAV groups, with no interaction with any ground entity. To reach this capability level, it is necessary to first understand the various levels of autonomy and the crucial role that information and communication play in making these autonomy levels possible. This article presents a detailed framework of UAV autonomous mission control levels in the context of information flow and communication between UAVs and UAV groups for each level of autonomy.
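The abstract itself traces a progression from pilot-plus-single-UAV, to pilot-plus-multiple-UAVs, to UAV groups operating with no ground entity. As a purely illustrative encoding of that progression (the level names and the three-way split are ours, not the paper's formal taxonomy), a minimal sketch:

```python
# Illustrative only: the autonomy progression described in the abstract,
# encoded as an enum. These are NOT the paper's formally defined levels.

from enum import Enum, auto

class AutonomyLevel(Enum):
    PILOT_SINGLE_UAV = auto()   # conventional: ground pilot commands one UAV
    PILOT_MULTI_UAV = auto()    # ground pilot coordinates several UAVs
    UAV_GROUP_ONLY = auto()     # UAV groups interact with no ground entity

def requires_ground_link(level: AutonomyLevel) -> bool:
    """Per the abstract, mission success depends on a secured pilot-UAV
    link at every level except fully autonomous group operation."""
    return level is not AutonomyLevel.UAV_GROUP_ONLY

for level in AutonomyLevel:
    print(level.name, "-> ground link required:", requires_ground_link(level))
```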
Abstract:
The questions that one should answer in engineering computations, whether deterministic, probabilistic/randomized, or heuristic, are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations performed in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their input quantities, are exact, the computations we perform on a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in providing information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems posed by nature, is completely nonexistent, whereas in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever it is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
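As a toy illustration of the kind of error accounting described here (not an example from the talk), the sketch below propagates the 0.005 per cent relative error bound of two measured inputs through a product and a sum, using the standard first-order rules that relative bounds add under multiplication and absolute bounds add under addition. The input values are made up.

```python
# Hypothetical illustration: first-order propagation of relative error bounds.
# Assumes each measured input carries the 0.005% relative error bound the
# abstract mentions; the values of x and y are invented for the example.

MEASUREMENT_REL_BOUND = 5e-5  # 0.005 per cent, expressed as a fraction

def mul_rel_bound(rx, ry):
    """Relative bounds add (to first order) under multiplication/division."""
    return rx + ry

def add_rel_bound(x, rx, y, ry):
    """Absolute bounds add under addition; convert back to a relative bound."""
    abs_bound = abs(x) * rx + abs(y) * ry
    return abs_bound / abs(x + y)

x, y = 12.3, 4.56  # hypothetical measured inputs
rx = ry = MEASUREMENT_REL_BOUND

print(f"relative bound for x*y : {mul_rel_bound(rx, ry):.2e}")
print(f"relative bound for x+y : {add_rel_bound(x, rx, y, ry):.2e}")
```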
Abstract:
Production scheduling in a flexible manufacturing system (FMS) is a real-time combinatorial optimization problem that has been proved to be NP-complete. Solving this problem requires on-line monitoring of plan execution and real-time decision-making in selecting alternative routings, assigning required resources, and rescheduling when failures occur in the system. Expert systems provide a natural framework for solving this kind of NP-complete problem. In this paper, an expert system with a novel parallel heuristic approach is implemented for automatic short-term dynamic scheduling of an FMS. The principal features of the expert system presented in this paper include easy rescheduling, on-line plan execution, load balancing, an on-line garbage collection process, and the use of advanced knowledge representation schemes. Its effectiveness is demonstrated with two examples.
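The abstract does not spell out the heuristics, but the flavour of on-line rescheduling it describes can be sketched as follows: each operation carries alternative routings, and when a machine fails its operations are reassigned to the least-loaded alternative. The routing table, the names, and the load-balancing rule below are illustrative assumptions, not the paper's knowledge base.

```python
# Illustrative sketch only: dynamic rerouting with load balancing when a
# machine fails. Data and rule are assumptions, not the paper's expert system.

from collections import defaultdict

# each operation lists the machines that can perform it (alternative routings)
routings = {
    "op1": ["M1", "M2"],
    "op2": ["M2", "M3"],
    "op3": ["M1", "M3"],
}
assignment = {"op1": "M1", "op2": "M2", "op3": "M1"}

def reschedule_on_failure(failed, assignment, routings):
    """Reassign operations on the failed machine to the least-loaded alternative."""
    load = defaultdict(int)
    for op, m in assignment.items():
        load[m] += 1
    for op, m in list(assignment.items()):
        if m != failed:
            continue
        alternatives = [alt for alt in routings[op] if alt != failed]
        if not alternatives:
            raise RuntimeError(f"no alternative routing for {op}")
        best = min(alternatives, key=lambda alt: load[alt])
        load[m] -= 1
        load[best] += 1
        assignment[op] = best
    return assignment

print(reschedule_on_failure("M1", dict(assignment), routings))
# -> {'op1': 'M2', 'op2': 'M2', 'op3': 'M3'}
```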
Abstract:
Amorphous SiO2 thin films were prepared on glass and silicon substrates by the cost-effective sol-gel method. Tetraethyl orthosilicate (TEOS) was used as the precursor material, ethanol as the solvent, and concentrated HCl as the catalyst. The films were characterized at different annealing temperatures. The optical transmittance increased slightly with increasing annealing temperature. The refractive index was found to be 1.484 at 550 nm. The formation of the SiO2 film was confirmed from FT-IR spectra. MOS capacitors were fabricated on silicon (1 0 0) substrates. Current-voltage (I-V), capacitance-voltage (C-V), and dissipation-voltage (D-V) measurements were taken for all the annealed films deposited on Si (1 0 0). The variation of the current density, resistivity, and dielectric constant of the SiO2 films with annealing temperature was investigated and discussed with a view to applications such as MOS capacitors. The results revealed a decrease in the dielectric constant and an increase in the resistivity of the SiO2 films with increasing annealing temperature. (C) 2010 Elsevier B.V. All rights reserved.
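The abstract does not state how the dielectric constant was extracted; the standard parallel-plate relation for a MOS capacitor in accumulation, which is the usual route from C-V data to the dielectric constant, is:

```latex
% Standard parallel-plate relation (not quoted from the paper): the oxide
% dielectric constant from the accumulation capacitance C_acc of a MOS
% capacitor of gate area A and oxide thickness t.
\[
  \varepsilon_r \;=\; \frac{C_{\mathrm{acc}}\, t}{\varepsilon_0\, A},
  \qquad \varepsilon_0 = 8.854 \times 10^{-12}\ \mathrm{F\,m^{-1}}
\]
```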
Abstract:
Assembly is an important part of the product development process. To avoid potential issues during assembly in specialized domains such as aircraft assembly, expert knowledge that can predict such issues is helpful. Knowledge-based systems can act as virtual experts to provide this assistance. Knowledge acquisition for such systems, however, is a challenge, and this paper describes one part of ongoing research on acquiring knowledge through a dialog between an expert and a knowledge acquisition system. In particular, this paper discusses the use of a situation model for assemblies to present experts with a virtual assembly and help them locate the specific context of the knowledge they provide to the system.
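The paper's situation model is not detailed in the abstract; as a purely hypothetical sketch of what a minimal situation model for an assembly might record (all field names and the context tag are our assumptions, not the paper's schema):

```python
# Purely hypothetical sketch of a minimal assembly situation model; the
# fields and the context-tag scheme are assumptions, not the paper's design.

from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    installed: bool = False

@dataclass
class AssemblySituation:
    """A snapshot of the assembly used to anchor an expert's advice."""
    parts: dict = field(default_factory=dict)        # name -> Part
    completed_steps: list = field(default_factory=list)

    def context_tag(self) -> str:
        """Identifier an acquisition dialog could attach to a new rule."""
        done = ",".join(sorted(p.name for p in self.parts.values() if p.installed))
        return f"after[{done}]"

s = AssemblySituation(parts={"rib": Part("rib", True), "skin": Part("skin")})
print(s.context_tag())  # -> after[rib]
```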
Abstract:
In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components (genetic circuits, biochemical cascades, and ion channels, among others) enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode, with roughly 20-60% of the total energy allocated to the brain used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link the information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.
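The complexity minimisation referred to here can be made concrete via the standard decomposition of variational free energy into complexity and accuracy terms (a textbook identity from the free-energy literature, not a formula quoted from the paper):

```latex
% Standard decomposition of variational free energy for an approximate
% posterior q over hidden states \vartheta and observations y; a textbook
% identity, not quoted from the paper.
\[
  F \;=\;
  \underbrace{D_{\mathrm{KL}}\!\left[\, q(\vartheta) \,\|\, p(\vartheta) \,\right]}_{\text{complexity}}
  \;-\;
  \underbrace{\mathbb{E}_{q(\vartheta)}\!\left[\ln p(y \mid \vartheta)\right]}_{\text{accuracy}}
\]
```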
Abstract:
In this paper, we present a machine learning approach for subject-independent human action recognition using a depth camera, emphasizing the importance of depth in the recognition of actions. The proposed approach uses the flow information in all three dimensions to classify an action. In our approach, we obtain the 2-D optical flow and use it along with the depth image to obtain the depth flow (Z motion vectors). The obtained flow captures the dynamics of the actions in space-time. Feature vectors are obtained by averaging the 3-D motion over a grid laid over the silhouette in a hierarchical fashion. These hierarchical fine-to-coarse windows capture the motion dynamics of the object at various scales. The extracted features are used to train a Meta-cognitive Radial Basis Function Network (McRBFN) that uses a Projection Based Learning (PBL) algorithm, referred to henceforth as PBL-McRBFN. PBL-McRBFN begins with zero hidden neurons and builds the network based on the best human learning strategy, namely, self-regulated learning in a meta-cognitive environment. When a sample is used for learning, PBL-McRBFN uses the sample overlapping conditions and a projection based learning algorithm to estimate the parameters of the network. The performance of PBL-McRBFN is compared to that of Support Vector Machine (SVM) and Extreme Learning Machine (ELM) classifiers, with every person and action represented in both the training and testing datasets. The performance study shows that PBL-McRBFN outperforms these classifiers in recognizing actions in 3-D. Further, a subject-independent study is conducted using a leave-one-subject-out strategy, and the generalization performance is tested. It is observed from the subject-independent study that McRBFN is capable of generalizing actions accurately. The performance of the proposed approach is benchmarked on the Video Analytics Lab (VAL) dataset and the Berkeley Multimodal Human Action Database (MHAD). (C) 2013 Elsevier Ltd. All rights reserved.
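A minimal sketch of the depth-flow idea described above, assuming OpenCV's Farneback optical flow and consecutive depth frames; the sampling scheme and parameter values are our assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's code): derive a Z motion component by
# combining 2-D optical flow with consecutive depth images.

import cv2
import numpy as np

def depth_flow(gray_prev, gray_next, depth_prev, depth_next):
    """Return (u, v, z): 2-D optical flow plus a per-pixel depth change."""
    # Farneback parameters: pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags (values chosen for illustration)
    flow = cv2.calcOpticalFlowFarneback(
        gray_prev, gray_next, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]

    h, w = gray_prev.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # sample the next depth map where each pixel is estimated to have moved
    x2 = np.clip(xs + u, 0, w - 1).astype(np.int32)
    y2 = np.clip(ys + v, 0, h - 1).astype(np.int32)
    z = depth_next[y2, x2].astype(np.float32) - depth_prev.astype(np.float32)
    return u, v, z
```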
Abstract:
An exciting application of crowdsourcing is the use of social networks in complex task execution. In this paper, we address the problem of a planner who needs to incentivize agents within a network in order to seek their help in executing an atomic task as well as in recruiting other agents to execute the task. We study this mechanism design problem under two natural resource optimization settings: (1) cost critical tasks, where the planner's goal is to minimize the total cost, and (2) time critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We identify a set of desirable properties that should ideally be satisfied by a crowdsourcing mechanism. In particular, sybil-proofness and collapse-proofness are two complementary properties in our desiderata. We prove that no mechanism can satisfy all the desirable properties simultaneously. This leads us naturally to explore approximate versions of the critical properties. We focus our attention on approximate sybil-proofness, and our exploration leads to a parametrized family of payment mechanisms that satisfy collapse-proofness. We characterize the approximate versions of the desirable properties in the cost critical and time critical domains.
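The paper's parametrized family is not specified in the abstract; as a purely illustrative stand-in, the sketch below pays each node in a referral chain a geometrically decaying share, in the spirit of recursive incentive schemes such as the one used in the DARPA Network Challenge. It is not the mechanism characterized in the paper.

```python
# Purely illustrative: a geometric referral-payment rule in the spirit of
# recursive incentive mechanisms; NOT the paper's parametrized family.

def referral_payments(chain, reward, decay=0.5):
    """Pay the executing agent `reward`; each recruiter up the chain
    receives a `decay` fraction of the payment below it."""
    payments = {}
    p = reward
    for agent in reversed(chain):  # chain[-1] executed the atomic task
        payments[agent] = p
        p *= decay
    return payments

# root recruited a, who recruited b; b executes the task
print(referral_payments(["root", "a", "b"], reward=100.0))
# -> {'b': 100.0, 'a': 50.0, 'root': 25.0}
```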
Abstract:
We consider a discrete-time partially observable zero-sum stochastic game with the average payoff criterion. We study the game using an equivalent completely observable game. We show that the game has a value, and we present a pair of optimal strategies for the two players.
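For reference, the average payoff criterion for a zero-sum stochastic game typically takes the following form (a standard textbook definition, not quoted from the paper), where r is the one-stage payoff and A_t, B_t are the actions of the two players at time t:

```latex
% Standard average payoff criterion for strategies (\pi, \sigma) of the two
% players from initial state x; a textbook definition, not the paper's text.
\[
  \rho(x, \pi, \sigma) \;=\;
  \liminf_{n \to \infty} \frac{1}{n}\,
  \mathbb{E}^{\pi,\sigma}_{x}\!\left[\sum_{t=0}^{n-1} r(X_t, A_t, B_t)\right]
\]
```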
Abstract:
Zinc oxide nanorods (ZnO NRs) have been synthesized on flexible substrates by adopting a novel three-step process. The as-grown ZnO NRs are vertically aligned and show excellent chemical stoichiometry between their constituents. Transmission electron microscopy studies show that these NR structures are single crystalline and grown along the <001> direction. Optical studies show that these nanostructures have a direct optical band gap of about 3.34 eV. The proposed methodology for the synthesis of vertically aligned NRs on flexible sheets therefore opens a new route to the development of low-cost flexible devices. (C) 2014 Elsevier B.V. All rights reserved.
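The abstract does not say how the band gap was extracted; for a direct allowed transition, the standard Tauc analysis, which is the usual route from optical absorption data to a direct gap, uses:

```latex
% Standard Tauc relation for a direct allowed transition (the usual way a
% direct optical gap is extracted; the paper's exact procedure is not
% stated in the abstract). \alpha is the absorption coefficient and h\nu
% the photon energy:
\[
  (\alpha h\nu)^{2} \;=\; A\,(h\nu - E_g)
\]
% Extrapolating the linear region of (\alpha h\nu)^2 versus h\nu to zero
% yields E_g (about 3.34 eV here).
```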
Abstract:
Identification of residue-residue contacts from the primary sequence can be used to guide protein structure prediction. Using Escherichia coli CcdB as the test case, we describe an experimental method, termed saturation-suppressor mutagenesis, to acquire residue contact information. In this methodology, exhaustive screens for suppressors were performed for each of five inactive CcdB mutants. Proximal suppressors were accurately discriminated from distal suppressors based on their phenotypes when present as single mutants. The experimentally identified putative proximal pairs provided spatial constraints that recovered >98% of native-like models of CcdB from a decoy dataset. The suppressor methodology was also applied to the integral membrane protein diacylglycerol kinase A, for which the structures determined by X-ray crystallography and NMR differ significantly. Suppressor data as well as sequence co-variation data clearly point to the X-ray structure being the functional one adopted in vivo. The methodology is applicable to any macromolecular system for which a convenient phenotypic assay exists.
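A minimal sketch of how inferred proximal pairs can serve as spatial constraints to rank decoy models, assuming C-alpha coordinates and a hypothetical 8 Angstrom contact cutoff (the cutoff and all names are our assumptions, not the paper's protocol):

```python
# Illustrative sketch only: score decoy models by how many experimentally
# inferred proximal residue pairs they satisfy. The 8 A cutoff and all
# names are assumptions, not the paper's protocol.

import numpy as np

def satisfied_contacts(ca_coords, proximal_pairs, cutoff=8.0):
    """Count inferred-contact pairs whose C-alpha distance is below `cutoff`."""
    count = 0
    for i, j in proximal_pairs:
        if np.linalg.norm(ca_coords[i] - ca_coords[j]) < cutoff:
            count += 1
    return count

def rank_decoys(decoys, proximal_pairs):
    """Rank decoys (dict: name -> C-alpha coordinate array) by contacts satisfied."""
    scores = {name: satisfied_contacts(xyz, proximal_pairs)
              for name, xyz in decoys.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```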