64 results for limited resources

at Indian Institute of Science - Bangalore - India


Relevance:

60.00%

Publisher:

Abstract:

Multiple UAVs are deployed to carry out a search-and-destroy mission in a bounded region. The UAVs have limited sensor range and carry limited resources that deplete with use. The UAVs perform a search task to detect targets. When a detected target requires different types and quantities of resources to be completely destroyed, a team of UAVs, called a coalition, is formed to attack it. The coalition members have to modify their routes to attack the target; in the process the search task is affected, since the search and destroy tasks are coupled. The performance of the mission is a function of both the search and the task allocation strategies. Therefore, for a given task allocation strategy, we need to devise efficient search strategies. In this paper, we propose three search strategies, namely a random search strategy, a lanes-based search strategy, and a grid-based search strategy, and analyze their performance through Monte Carlo simulations. The results show that the grid-based search strategy performs best, but with high information overhead.
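
As an illustration of how such strategy comparisons can be set up, the sketch below pits a random search against an exhaustive grid sweep on a toy cell grid using Monte Carlo runs. The region size, mission length, target count, and single-searcher setting are assumptions for illustration, not the paper's simulation.

```python
import random

GRID, STEPS, TRIALS = 20, 400, 200  # assumed region size, mission length, runs

def random_search(targets):
    """Visit a uniformly random cell each step; count detected targets."""
    found = set()
    for _ in range(STEPS):
        cell = (random.randrange(GRID), random.randrange(GRID))
        if cell in targets:
            found.add(cell)
    return len(found)

def grid_search(targets):
    """Sweep cells in raster order: full coverage with no revisits."""
    found, step = set(), 0
    for x in range(GRID):
        for y in range(GRID):
            if step >= STEPS:
                return len(found)
            if (x, y) in targets:
                found.add((x, y))
            step += 1
    return len(found)

def trial(strategy):
    targets = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(10)}
    return strategy(targets)

for s in (random_search, grid_search):
    print(s.__name__, sum(trial(s) for _ in range(TRIALS)) / TRIALS)
```

The grid sweep detects every target here because it never revisits a cell, mirroring the finding that structured coverage outperforms random search (at the cost, in the multi-UAV case, of coordination overhead).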

Relevance:

60.00%

Publisher:

Abstract:

Unmanned aerial vehicles (UAVs) have the potential to carry resources in support of search-and-prosecute operations. Often, to completely prosecute a target, UAVs may have to attack it simultaneously with various resources of different capacities. However, since each UAV can carry only limited resources in small quantities, a group of UAVs (a coalition) needs to be assigned that satisfies the target's resource requirement. The assigned coalition must minimize both the target prosecution delay and the size of the coalition. Forming coalitions is computationally intensive due to the combinatorial nature of the problem, but real-time applications require computationally cheap solutions. In this paper, we propose decentralized sub-optimal (polynomial-time) and decentralized optimal coalition formation algorithms that generate coalitions for a single target with low computational complexity. We compare the performance of the proposed algorithms to that of a global optimal solution, for which we need to solve a centralized combinatorial optimization problem. This problem is computationally intensive because the solution has to (a) provide a coalition for each target, (b) design a sequence in which the targets are to be prosecuted, and (c) take into account the reduction of UAV resources with usage. To solve this problem we use the Particle Swarm Optimization (PSO) technique. Through simulations, we study the proposed algorithms in terms of mission performance, algorithmic complexity, and the time taken to form a coalition. The simulation results show that the solution provided by the proposed algorithms is close to the global optimal solution while requiring far fewer computational resources.
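
A toy version of a polynomial-time coalition heuristic follows: it greedily recruits the best-stocked UAVs until a single target's requirement is met. The single resource type, the greedy rule, and the form_coalition name are illustrative assumptions; the paper's decentralized algorithms and PSO formulation are more involved.

```python
def form_coalition(target_need, uavs):
    """Greedily pick UAVs with the largest remaining stock until the target's
    requirement is met; returns None if it cannot be met. `uavs` maps UAV id
    to available quantity of a single resource type (the paper handles
    multiple resource types and also minimizes prosecution delay)."""
    coalition, remaining = [], target_need
    for uav, stock in sorted(uavs.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        coalition.append(uav)
        remaining -= stock
    return coalition if remaining <= 0 else None

print(form_coalition(10, {"A": 4, "B": 7, "C": 2}))  # ['B', 'A']: a size-2 coalition
```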

Relevance:

60.00%

Publisher:

Abstract:

A team of unmanned aerial vehicles (UAVs) with limited communication ranges and limited resources is deployed in a region to search for and destroy stationary and moving targets. When a UAV detects a target, depending on the target's resource requirement, it is tasked to form a coalition over the dynamic network formed by the UAVs. In this paper, we develop a mechanism to find potential coalition members over the network using principles from Internet protocols, and we introduce an algorithm based on Particle Swarm Optimization to generate a coalition that destroys the target in minimum time. Monte Carlo simulations are carried out to study how coalitions are formed and the effects of coalition-formation delays.
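
The member-discovery step can be pictured as hop-limited flooding, loosely analogous to TTL-based mechanisms in Internet protocols. The sketch below is an assumption-laden illustration (toy topology, discover helper, fixed TTL), not the paper's mechanism.

```python
from collections import deque

def discover(network, source, ttl):
    """Breadth-first flooding with a hop limit (TTL): returns the nodes the
    source can reach over the dynamic UAV network, i.e. potential coalition
    members. `network` maps each node to its neighbours in communication range."""
    seen, frontier = {source}, deque([(source, ttl)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == 0:
            continue
        for nbr in network[node] - seen:
            seen.add(nbr)
            frontier.append((nbr, hops - 1))
    return seen - {source}

net = {"u1": {"u2"}, "u2": {"u1", "u3"}, "u3": {"u2", "u4"}, "u4": {"u3"}}
print(discover(net, "u1", ttl=2))  # {'u2', 'u3'}
```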

Relevance:

60.00%

Publisher:

Abstract:

Information spreading in a population can be modeled as an epidemic. Campaigners (e.g., election campaign managers, or companies marketing products or movies) are interested in spreading a message by a given deadline using limited resources. In this paper, we formulate this situation as an optimal control problem, whose solution (obtained using Pontryagin's Maximum Principle) prescribes an optimal resource allocation over the duration of the campaign. We consider two scenarios. In the first, the campaigner can adjust a direct control (over time) that allows her to recruit individuals from the population (at some cost) to act as spreaders in the Susceptible-Infected-Susceptible (SIS) epidemic model. In the second, we allow the campaigner to adjust the effective spreading rate by incentivizing the infected in the Susceptible-Infected-Recovered (SIR) model, in addition to direct recruitment. Our formulation uses a time-varying information spreading rate to model the changing interest level of individuals in the campaign as the deadline approaches. In both cases, we show the existence of a solution and its uniqueness for sufficiently small campaign deadlines. For a fixed spreading rate, we show the effectiveness of the optimal control strategy against a constant control strategy, a heuristic control strategy, and no control. We also show the sensitivity of the optimal control to the spreading rate profile when it is time varying.
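
To show how a Pontryagin-style prescription is computed numerically, the sketch below runs a standard forward-backward sweep on a toy controlled SIS-type spreading model with a quadratic recruitment cost and a terminal payoff on the informed fraction. The dynamics, objective, and every parameter value are assumptions for illustration; the paper's two models and cost structure differ.

```python
import numpy as np

beta, c, T, N, umax = 0.4, 0.5, 10.0, 1000, 1.0  # assumed rates, cost, horizon
dt = T / (N - 1)
u = np.zeros(N)  # recruitment control over time

for _ in range(50):  # forward-backward sweep iterations
    # forward pass: i(t) is the informed fraction under the current control
    i = np.empty(N); i[0] = 0.01
    for k in range(N - 1):
        i[k + 1] = i[k] + dt * (beta * i[k] * (1 - i[k]) + u[k] * (1 - i[k]))
    # backward pass: costate of the terminal payoff i(T), so lam(T) = 1
    lam = np.empty(N); lam[-1] = 1.0
    for k in range(N - 1, 0, -1):
        lam[k - 1] = lam[k] + dt * lam[k] * (beta * (1 - 2 * i[k]) - u[k])
    # optimality condition: dH/du = lam*(1 - i) - 2*c*u = 0, clipped to [0, umax]
    u = 0.5 * (u + np.clip(lam * (1 - i) / (2 * c), 0, umax))

print("final informed fraction:", round(float(i[-1]), 3))
```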

Relevance:

60.00%

Publisher:

Abstract:

During their early stages of operation, high-tech startups need to overcome the liability of newness and manage a high degree of uncertainty. Many high-tech startups fail due to an inability to deal with skeptical customers, underdeveloped markets, and limited resources while selling an offering that has no precedent. This paper leverages the principles of effectuation (a logic of entrepreneurial decision making under uncertainty) to explain the journey from creation to survival of high-tech startups in an emerging economy. Based on the 99tests.com case study, this paper suggests that early-stage high-tech startups in emerging economies can increase their probability of survival by adopting the principles of effectuation.

Relevance:

60.00%

Publisher:

Abstract:

The cybernetic modeling framework for the growth of microorganisms provides an elegant methodology to account for unknown regulatory phenomena through the use of cybernetic variables for enzyme induction and activity. In this paper, we revisit the assumption of limited resources for enzyme induction (Σ u_i = 1) used in the cybernetic modeling framework by presenting a methodology for inferring the individual cybernetic variables u_i from experimental data. We use this methodology to infer u_i during the simultaneous consumption of glycerol and lactose by Escherichia coli and then model the fitness trade-offs involved in the recently discovered predictive regulation strategy of microorganisms.
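
A common closure in the cybernetic framework is the matching law, under which each induction variable is its pathway's fractional return, so that Σ u_i = 1 holds automatically. The snippet below illustrates that closure with made-up rates; the paper's contribution is precisely to infer the u_i from data rather than assume such a law.

```python
def cybernetic_u(rates):
    """Matching-law cybernetic variables: u_i is the fractional 'return'
    of pathway i, so the u_i sum to 1 by construction (illustrative)."""
    total = sum(rates.values())
    return {pathway: r / total for pathway, r in rates.items()}

# hypothetical specific uptake rates for the two substrates in the paper
print(cybernetic_u({"glycerol": 0.30, "lactose": 0.12}))
```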

Relevance:

60.00%

Publisher:

Abstract:

Node mobility and management without a central administrator are key features of mobile ad hoc networks (MANETs). The increasing resource requirements of nodes running different applications, the scarcity of resources, and node mobility are important issues to be considered in the allocation of resources in MANETs. Moreover, managing limited resources for optimal allocation is a crucial task. In this work we present the design of a resource allocation protocol and its performance evaluation. The proposed protocol uses both static and mobile agents. It distributes and parallelizes message propagation (a mobile agent carrying information) efficiently to achieve scalability and to speed up message delivery to the nodes in the sectors of the zones of a MANET. The protocol's functionality has been simulated using the Java Agent Development Environment (JADE) framework for agent generation, migration, and communication. A mobile agent migrates from a resource-rich central node carrying the message and navigates autonomously within its zone of the network up to the boundary node. The performance evaluation shows that the proposed protocol takes much less time to allocate the required resources to the requesting nodes, utilizes fewer network resources, and improves network scalability.
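
The claimed speed-up from per-zone parallel dispatch can be illustrated with a toy timing model; the node counts, unit hop cost, and delivery_time helper are assumptions, and the paper's JADE-based protocol is far richer.

```python
def delivery_time(zones, hop_cost=1.0):
    """Compare sequential dispatch (one agent visits every node) against
    per-zone parallel dispatch (one agent clone per zone, as in the design
    described above). Each zone is a list of per-sector node counts; an
    agent spends hop_cost per node visited on its way to the zone boundary."""
    per_zone = [hop_cost * sum(sectors) for sectors in zones]
    return sum(per_zone), max(per_zone)

seq, par = delivery_time([[3, 2], [4], [2, 2, 1]])
print(f"sequential: {seq}, parallel per zone: {par}")  # 14.0 vs 5.0
```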

Relevance:

30.00%

Publisher:

Abstract:

The high level of public accountability attached to Public Sector Enterprises as a result of public ownership makes them socially responsible. The Committee on Public Undertakings examined, in 1992, the issue of the social obligations of Central Public Sector Enterprises and observed that "being part of the 'State', every Public Sector enterprise has a moral responsibility to play an active role in discharging the social obligations endowed on a welfare state, subject to the financial health of the enterprise". It issued the Corporate Social Responsibility Guidelines in 2010, under which all Central Public Enterprises are mandated, through a Board Resolution, to create a CSR budget as a specified percentage of the net profit of the previous year. This paper examines the CSR activities of the biggest engineering public sector organization in India, Bharat Heavy Electricals Limited. The objectives are twofold: first, to develop a case study of the organization covering the funds allocated and utilized for various CSR activities; and second, to examine its status with regard to other organizations, the 2010 guidelines, and local socio-economic development. Secondary data analysis reveals three interesting trends. One, organizational social orientation has increased since the formal guidelines came into place. Two, firms can no longer exploit environmental resources and escape their responsibilities by acting as separate entities, regardless of the interests of society. Three, the thrust of CSR in the public sector is on inclusive growth, sustainable development, and capacity building, with due attention to the socio-economic needs of the neglected and marginalized sections of society.

Relevance:

20.00%

Publisher:

Abstract:

Nanoporous structures with high active surface areas are critical for a variety of applications. Here, we present a general templateless strategy to produce such porous structures by the controlled aggregation of nanostructured subunits, and we apply these principles to synthesize nanoporous Pt for the electrocatalytic oxidation of methanol. The nature of the aggregate produced is controlled by tuning the electrostatic interaction between surfactant-free nanoparticles in the solution phase. When the repulsive force between the particles is very large, the particles are stabilized in solution, while instantaneous aggregation leading to fractal-like structures results when the repulsive force is very low. Controlling the repulsive interaction to an optimum, intermediate value results in the formation of compact structures with very large surface areas. In the case of Pt, nanoporous clusters with an extremely high specific surface area (39 m²/g) and high activity for methanol oxidation have been produced. Preliminary investigations indicate that the method is general and can easily be extended to produce nanoporous structures of many inorganic materials.

Relevance:

20.00%

Publisher:

Abstract:

There are several areas of the plywood industry where Operations Research techniques have greatly assisted better decision-making. These have resulted in improved profits, reduced wood losses, and better utilization of resources. Recognizing this, some plywood manufacturing firms in developed countries have established separate Operations Research departments or divisions. In the face of limited raw-material resources, rising costs, and a competitive environment, the benefits attributable to these techniques are becoming more and more significant.

Relevance:

20.00%

Publisher:

Abstract:

The coalescence of nearly rigid liquid droplets in a turbulent flow field is viewed as the drainage of a thin film of liquid under the action of a stochastic force representing the effect of turbulence. The force squeezing the drop pair is modelled as a correlated random function of time. The drops are assumed to coalesce once the film thickness becomes smaller than a critical thickness, while they are regarded as separated if their distance of separation is larger than a prescribed distance. A semi-analytical solution is derived to determine the coalescence efficiency, and its veracity is established via a Monte Carlo solution scheme. The model predicts a reversing trend in the dependence of the coalescence efficiency on the drop radii, the film liquid viscosity, and the turbulence energy dissipation per unit mass as the relative fluctuation increases. However, the dependence on physical parameters is weak (especially at high relative fluctuation), so that for the smallest droplets (which are nearly rigid) the coalescence efficiency may be treated as an empirical constant. The predictions of this model are compared with those of a white-noise force model. The results of this paper and those in Muralidhar and Ramkrishna (1986, Ind. Engng Chem. Fundam. 25, 554-56) suggest that dynamic drop deformation is the key factor influencing the coalescence efficiency.
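
A minimal Monte Carlo version of this picture is sketched below, with an Ornstein-Uhlenbeck process standing in for the correlated squeezing force and a schematic drainage law dh/dt = -F*h; every parameter is illustrative rather than taken from the paper.

```python
import math, random

def coalescence_efficiency(trials=300, dt=1e-3, tau=0.1, sigma=1.0):
    """Fraction of drop-pair encounters ending in coalescence. The film of
    thickness h drains as dh/dt = -F(t)*h under a correlated (OU) random
    force F; coalescence if h < h_crit, separation if h > h_sep."""
    h0, h_crit, h_sep = 1.0, 0.1, 2.0
    wins = 0
    for _ in range(trials):
        h, F = h0, random.gauss(0, sigma)  # start F from its stationary law
        while h_crit < h < h_sep:
            F += (-F / tau) * dt + sigma * math.sqrt(2 * dt / tau) * random.gauss(0, 1)
            h += -F * h * dt
        wins += h <= h_crit
    return wins / trials

print(coalescence_efficiency())
```

The efficiency here is simply the probability of exiting through the lower (coalescence) boundary; tau controls how long the force stays correlated.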

Relevance:

20.00%

Publisher:

Abstract:

A new clustering technique, based on the concept of the immediate neighbourhood, with a novel capability to self-learn the number of clusters expected in an unsupervised environment, has been developed. The method compares favourably with other clustering schemes based on distance measures, both in terms of conceptual innovation and computational economy. Test implementation of the scheme using C-1 flight line training sample data in a simulated unsupervised mode has brought out the efficacy of the technique. The technique can easily be implemented as a front end to established pattern classification systems with supervised learning capabilities, yielding unified learning systems capable of operating in both supervised and unsupervised environments. This makes the technique an attractive proposition in the context of remotely sensed earth resources data analysis, where it is essential to have such a unified learning capability.
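
To make the self-learned cluster count concrete, here is a simple neighbourhood-radius sketch in which the number of clusters emerges from the data rather than being fixed in advance; the radius rule and merging logic are illustrative assumptions, not the paper's immediate-neighbourhood criterion.

```python
def neighbourhood_clusters(points, radius):
    """Assign each point to any cluster containing a neighbour within
    `radius` (merging clusters the point bridges); otherwise start a new
    cluster. The cluster count is thus learned from the data."""
    clusters = []
    for p in points:
        home = None
        for c in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2 for q in c):
                if home is None:
                    home = c
                    c.append(p)
                else:
                    home.extend(c)  # p bridges two clusters: merge them
                    c.clear()
        if home is None:
            clusters.append([p])
    return [c for c in clusters if c]

pts = [(0, 0), (0.5, 0), (5, 5), (5.2, 5.1), (10, 0)]
print(neighbourhood_clusters(pts, radius=1.0))  # three clusters emerge
```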

Relevance:

20.00%

Publisher:

Abstract:

Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
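
The maximum a posteriori decoding step can be sketched with a textbook Viterbi pass over the two wet/dry states; the unary and transition scores below are random stand-ins for the learned CRF feature weights.

```python
import numpy as np

def viterbi(unary, trans):
    """Most likely state sequence for a linear-chain CRF. `unary[t, s]` is
    the log-score of state s on day t (in the paper, a function of the
    large-scale atmospheric variables); `trans[s0, s1]` is the log
    transition score between consecutive days."""
    T, S = unary.shape
    delta, back = unary[0].copy(), np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + trans + unary[t]  # (prev state) x (cur state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]  # 0 = dry day, 1 = wet day

rng = np.random.default_rng(0)
unary = rng.normal(size=(7, 2))               # one toy week of daily scores
trans = np.array([[0.5, -0.5], [-0.5, 0.5]])  # persistence-favouring weights
print(viterbi(unary, trans))
```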

Relevance:

20.00%

Publisher:

Abstract:

Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected at runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that the bound CEs are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantages of an ASIC together with the hardware reconfigurability and programmability of an FPGA or an instruction set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set-extensible processors determine a sequence of instructions that repeatedly occurs within an application to create custom instructions at design time, speeding up the execution of that sequence. We extend this scheme further: a kernel is compiled into custom instructions that bear a strong producer-consumer relationship (not limited to frequently occurring instruction sequences). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing. To reduce the overhead of data transfer between custom instructions, direct communication paths are employed among them. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by a factor of log(n) for an n-point FFT when compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area when compared to an ASIC. A REDEFINE implementation consumes 0.1x the power of an FPGA implementation. In addition, the configuration overhead of the FPGA implementation is 1,000x more than that of REDEFINE.
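
The producer-consumer grouping can be caricatured as a pass over a dataflow graph that fuses single-consumer chains into candidate custom instructions; the graph encoding and the producer_consumer_chains helper below are hypothetical, and REDEFINE's real compiler works on application substructures with NoC-aware scheduling.

```python
def producer_consumer_chains(dataflow):
    """Fuse operations whose result feeds exactly one consumer into chains,
    each a candidate custom instruction. `dataflow` maps an op to the list
    of ops producing its inputs (a DAG)."""
    consumers = {}
    for op, srcs in dataflow.items():
        for s in srcs:
            consumers.setdefault(s, []).append(op)
    chains, seen = [], set()
    for op in dataflow:
        if op in seen:
            continue
        chain = [op]
        while len(consumers.get(chain[-1], [])) == 1:
            nxt = consumers[chain[-1]][0]
            if nxt in seen or nxt in chain:
                break
            chain.append(nxt)
        seen.update(chain)
        chains.append(chain)
    return chains

# a -> mul -> add -> sto fuses into one chain; b gets its own group here
flow = {"a": [], "b": [], "mul": ["a", "b"], "add": ["mul"], "sto": ["add"]}
print(producer_consumer_chains(flow))
```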