974 results for Multi-agent systems, TuCSoN, ReSpecT, semantic coordination


Relevance:

20.00%

Publisher:

Abstract:

We consider single-source single-sink (ss-ss) multi-hop relay networks with slow-fading links and single-antenna half-duplex relay nodes. While two-hop cooperative relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and layered networks. KPP networks can be viewed as the union of K node-disjoint parallel relaying paths, each of length greater than one. KPP networks are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the DMT of these families of networks completely for K > 3. Layered networks comprise layers of relays, with edges existing only between adjacent layers and more than one relay in each layer. We prove that a linear DMT between the maximum diversity dmax and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks. This is shown to equal the optimal DMT if the number of relaying layers is less than 4. For multiple-antenna KPP and layered networks, we provide an achievable DMT that is significantly better than known lower bounds for half-duplex networks. For arbitrary multi-terminal wireless networks with multiple source-sink pairs, the maximum achievable diversity is shown to equal the min-cut between the corresponding source and sink, irrespective of whether the network has half-duplex or full-duplex relays.
For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable. Along the way, we derive the optimal DMT of a generalized parallel channel and derive lower bounds for the DMT of triangular channel matrices, which are useful in DMT computations for various protocols. We also give alternative and often simpler proofs of several existing results, and show that codes achieving full diversity on a MIMO Rayleigh fading channel achieve full diversity on arbitrary fading channels. All protocols in this paper are explicit and use only amplify-and-forward (AF) relaying. We also construct codes with short block-lengths based on cyclic division algebras that achieve the optimal DMT for all the proposed schemes. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks, and that simple AF protocols are often sufficient to attain the optimal DMT.
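In standard DMT notation, the linear tradeoff referred to above is the straight line joining the maximum-diversity point and the maximum-multiplexing point (a sketch of the usual formulation, not a restatement of anything beyond what the abstract says):

```latex
% Linear DMT between the operating points (r, d) = (0, d_max) and (1, 0),
% where r is the multiplexing gain and d(r) the achievable diversity order:
d(r) = d_{\max}\,(1 - r), \qquad 0 \le r \le 1
```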

The poor performance of TCP over multi-hop wireless networks is well known. In this paper we explore to what extent network coding can help to improve the throughput of TCP-controlled bulk transfers over a chain-topology multi-hop wireless network. The nodes use a CSMA/CA mechanism, such as IEEE 802.11's DCF, to perform distributed packet scheduling. The reverse-flowing TCP ACKs are sought to be XORed with forward-flowing TCP data packets. We find that, without any modification to the MAC protocol, the gain from network coding is negligible. The inherent coordination problem of carrier-sensing-based random access in multi-hop wireless networks dominates the performance. We provide a theoretical analysis that yields a throughput bound with network coding. We then propose a distributed modification of the IEEE 802.11 DCF, based on tuning the back-off mechanism using a feedback approach. Simulation studies show that the proposed mechanism, when combined with network coding, improves the performance of a TCP session by more than 100%.
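The XOR coding step at the relay can be illustrated with a minimal sketch (function names are mine; the actual relay implementation, MAC interaction and header handling are not shown):

```python
def xor_encode(data_pkt: bytes, ack_pkt: bytes) -> bytes:
    """XOR two packets together, zero-padding the shorter one."""
    n = max(len(data_pkt), len(ack_pkt))
    a = data_pkt.ljust(n, b"\x00")
    b = ack_pkt.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

def xor_decode(coded: bytes, known_pkt: bytes, orig_len: int) -> bytes:
    """A node that already holds one packet recovers the other,
    since (a ^ b) ^ b == a."""
    return xor_encode(coded, known_pkt)[:orig_len]
```

A relay broadcasts one coded packet instead of two uncoded ones; each neighbour decodes using the packet it already overheard, which is where the potential throughput gain comes from.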

An integrated reservoir operation model is presented for developing effective operational policies for irrigation water management. In arid and semi-arid climates, owing to dynamic changes in hydroclimatic conditions within a season, a fixed cropping pattern with conventional operating policies may have a considerable impact on the performance of the irrigation system and may affect the economics of the farming community. For optimal allocation of irrigation water in a season, effective mathematical models may guide water managers in decision making and consequently help reduce the adverse effects of water shortage and crop failure. This paper presents a multi-objective integrated reservoir operation model for a multi-crop irrigation system. To solve the multi-objective model, a recent swarm intelligence technique, namely elitist-mutated multi-objective particle swarm optimisation (EM-MOPSO), has been used and applied to a case study in India. The method evolves effective strategies for irrigation crop planning and operation policies for a reservoir system, and thereby helps the farming community improve crop benefits and water-resource usage in the reservoir command area.
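The elitist-mutation and multi-objective machinery of EM-MOPSO goes beyond what the abstract states; as background, here is a minimal single-step sketch of the canonical particle-swarm update that such methods build on (all names and parameter values are illustrative, not the paper's settings):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO velocity/position update.

    x, v      : current positions and velocities (n_particles x n_dims)
    pbest     : each particle's best-known position
    gbest     : swarm's best-known position (broadcast over particles)
    """
    rng = rng or np.random.default_rng(0)
    r1 = rng.random(x.shape)            # stochastic cognitive weights
    r2 = rng.random(x.shape)            # stochastic social weights
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

In a multi-objective setting like EM-MOPSO, `gbest` is drawn from an archive of non-dominated solutions rather than being a single point, and an elitist-mutation operator perturbs selected particles.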

Estimation of creep and shrinkage is critical for computing the loss of prestress with time, which in turn is needed to assess leak tightness and the safety margins available in containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted using in-house test facilities developed specifically for the present research programme, on 35 and 45 MPa normal concrete and 25 MPa heavy-density concrete. The extensive creep programme subjects cylinders to sustained load levels, typically for several days (until the strain increase with time in the creep specimen becomes negligible), to provide total creep strain versus time curves for the two normal-density concrete grades and one heavy-density concrete grade at different load levels, different ages at loading and different relative humidities. Shrinkage studies on prism specimens of the same mix grades are also being carried out. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the creep and shrinkage levels in subsequent experimental data with acceptable accuracy. Macro-scale short-term experiments, together with the development of analytical models to estimate time-dependent deformation under sustained loads over the long term, form one part of the study; these models account for the composite rheology through parameters such as characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water) and volume-to-surface ratio, along with the associated uncertainties in these variables. At the same time, it is widely believed that strength, early-age rheology, creep and shrinkage are affected by material properties at the nano-scale that are not yet well established.
In order to understand and improve cement and concrete properties, an investigation of the nanostructure of the composite and how it relates to local mechanical properties is being undertaken. While the creep and shrinkage results obtained at macro-scale and their prediction through rheological modelling are satisfactory, the nano- and micro-indentation experimental and analytical studies are presently underway. Computational mechanics based models for creep and shrinkage in concrete must account for the numerous parameters that affect short- and long-term response. A Kelvin-type model with several elements, each representing the influence of a factor that affects the behaviour, is under development. The immediate short-term (elastic) deformation, the effects of relative humidity and temperature, volume-to-surface ratio, water-cement ratio and aggregate-cement ratio, load level and age of concrete at loading are the parameters accounted for in this model. Inputs to this model, such as the pore structure and the mechanical properties at micro/nano scale, have been taken from scanning electron microscopy and micro/nano-indentation of sample specimens.
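For reference, the creep response of a single Kelvin (Kelvin-Voigt) element under constant stress, the building block of a Kelvin-chain model like the one mentioned above, takes the standard textbook form (this is the generic relation, not the authors' calibrated model):

```latex
% Single Kelvin element (spring E and dashpot \eta in parallel)
% under constant stress \sigma applied at t = 0:
\varepsilon(t) = \frac{\sigma}{E}\left(1 - e^{-t/\tau}\right),
\qquad \tau = \frac{\eta}{E}
% A Kelvin chain adds an instantaneous term and a sum of such elements:
% \varepsilon(t) = \sigma\Big[\tfrac{1}{E_0}
%   + \textstyle\sum_i \tfrac{1}{E_i}\big(1 - e^{-t/\tau_i}\big)\Big]
```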

This paper is concerned with the dynamic analysis of flexible, non-linear multi-body beam systems. The focus is on problems where the strains within each elastic body (beam) remain small. Based on geometrically non-linear elasticity theory, the non-linear 3-D beam problem splits into either a linear or non-linear 2-D analysis of the beam cross-section and a non-linear 1-D analysis along the beam reference line. The splitting of the three-dimensional beam problem into two- and one-dimensional parts, called dimensional reduction, results in a tremendous saving of computational effort relative to the cost of three-dimensional finite element analysis, the only alternative for realistic beams. The analysis of beam-like structures made of laminated composite materials requires a much more complicated methodology; hence, the analysis procedure based on the Variational Asymptotic Method (VAM), a tool to carry out the dimensional reduction, is used here. The analysis methodology can be viewed as a 3-step procedure. First, the sectional properties of beams made of composite materials are determined either by an asymptotic procedure that involves a 2-D non-linear finite element analysis of the beam cross-section to capture the trapeze effect, or by a strip-like beam analysis starting from Classical Laminated Shell Theory (CLST). Second, the dynamic response of non-linear, flexible multi-body beam systems is simulated within the framework of energy-preserving and energy-decaying time integration schemes that provide unconditional stability for non-linear beam systems. Finally, local 3-D responses in the beams are recovered, based on the 1-D responses predicted in the second step. Numerical examples are presented and results from this analysis are compared with those available in the literature.

In this thesis we address the problem of multi-agent search. We formulate two deploy-and-search strategies based on optimal deployment of agents in the search space so as to maximize the search effectiveness in a single step. We show that a variation of the centroidal Voronoi configuration is the optimal deployment. When the agents carry sensors with different capabilities, the problem becomes heterogeneous in nature. We introduce a new concept, namely the generalized Voronoi partition, in order to formulate and solve the heterogeneous multi-agent search problem. We address a few theoretical issues such as optimality of deployment, convergence, and the spatial distributedness of the control law and the search strategies. Simulation experiments are carried out to compare the performance of the proposed strategies with a few simple search strategies.
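A centroidal Voronoi deployment can be sketched with a Lloyd-type update over a discretized search space (an illustrative version only; the thesis' one-step search-effectiveness objective and heterogeneous sensor weights are not modelled here):

```python
import numpy as np

def lloyd_step(agents, pts):
    """Move each agent to the centroid of its Voronoi cell, with the
    cell approximated by the sample points `pts` nearest to that agent."""
    # distance from every sample point to every agent
    d = np.linalg.norm(pts[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)                  # Voronoi assignment
    new = agents.copy()
    for i in range(len(agents)):
        cell = pts[owner == i]
        if len(cell):                         # keep position if cell is empty
            new[i] = cell.mean(axis=0)
    return new
```

Iterating this step drives the configuration toward a centroidal Voronoi configuration, the kind of deployment the thesis identifies as optimal.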

Computational grids are increasingly being used for executing large multi-component scientific applications. The most widely reported advantages of application execution on grids are the performance benefits, in terms of speed, problem size or quality of solution, due to an increased number of processors. We explore the possibility of improved performance on grids without increasing the application's processor space. For this, we consider grids with multiple batch systems. We explore the challenges involved in, and the advantages of, executing long-running multi-component applications on multiple batch sites, with a popular multi-component climate simulation application, CCSM, as the motivation. We have performed extensive simulation studies to estimate the single- and multi-site execution rates of the applications for different system characteristics. Our experiments show that in many cases multiple batch executions can achieve better execution rates than a single-site execution.

Owing to developments in semiconductor technology, fault-tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault-tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for AET that include bus communication overhead for both voting (active replication) and rollback-recovery with checkpointing (RRC). For a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) selecting the fault-tolerance scheme (voting or RRC) per job and defining its usage for each job. Experiments demonstrate significant savings in AET.
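The paper's AET formulas include bus-communication overhead and are not reproduced in the abstract; the following is a deliberately simplified toy model of the RRC trade-off only (my assumptions, not the paper's: equal segments, fixed checkpoint cost c, per-time-unit error probability p, geometric re-execution of a failed segment):

```python
def aet_rrc(T, c, p, n):
    """Toy expected execution time for a job of fault-free length T,
    split into n segments with checkpoint overhead c per segment."""
    seg = T / n
    p_seg = 1.0 - (1.0 - p) ** seg       # prob. a segment is hit by an error
    return n * (seg + c) / (1.0 - p_seg)  # geometric number of retries

def best_checkpoint_count(T, c, p, n_max=100):
    """Brute-force the checkpoint count minimizing expected time:
    more checkpoints cost overhead but shrink the re-execution unit."""
    return min(range(1, n_max + 1), key=lambda n: aet_rrc(T, c, p, n))
```

Even this toy model exhibits the central tension the paper optimizes: too few checkpoints make rollbacks expensive, too many drown the job in overhead.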

Community-based natural resource management (CBNRM) is the joint management of natural resources by a community based on a community strategy, through a participatory mechanism involving all legitimate stakeholders. The approach is community-based in that the communities managing the resources have the legal rights, the local institutions and the economic incentives to take substantial responsibility for sustained use of these resources. This implies that the community plays an active role in the management of natural resources, not because it asserts sole ownership over them, but because it can claim participation in their management and benefits for practical and technical reasons1–4. This approach emerged as the dominant conservation concept in the late 1970s and early 1980s, out of disillusionment with the developmental state. Governments across South and South-East Asia, Africa and Latin America have adopted and implemented CBNRM in various ways, viz. through sectoral programmes such as forestry, irrigation or wildlife management, multi-sectoral programmes such as watershed development, and efforts towards political devolution. In India, the principle of decentralization through 'gram swaraj' was introduced by Mahatma Gandhi. The 73rd and 74th constitutional amendments in 1992 gave impetus to decentralized planning at the panchayat level through the creation of a statutory three-level local self-government structure5,6.
The strength of this book is that it includes chapters by CBNRM advocates based on six seemingly innovative initiatives being implemented by non-governmental organizations (NGOs) in ecologically vulnerable regions of South Asia: two in the Himalayas (watershed development programmes in Lingmuteychhu, Bhutan and Thalisain tehsil, Pauri Garhwal district, Uttarakhand), three in semi-arid parts of western India (watershed development in Hivre Bazar, Maharashtra and Nathugadh village, Gujarat, and water-harvesting structures in Gopalapura, Rajasthan) and one in the flood-plains of the Brahmaputra–Jamuna (Char land, Galibanda and Jamalpur districts, Bangladesh). The watersheds in the semi-arid regions fall in a low-rainfall zone (500–700 mm) and suffer the vagaries of drought two to three years in every five-year cycle. In all these locations, the major occupation is agriculture, most of it rainfed or dry. The other two cases (in Uttarakhand) fall in the Himalayan region (temperate/sub-temperate climate), which has witnessed extensive deforestation in the last century and is now considered one of the most vulnerable locations in South Asia. Terraced agriculture has been practised in these locations for a long time. The last case (Gono Chetona) falls in the Brahmaputra–Jamuna charlands, which are the most ecologically vulnerable regions in the sub-continent, with a constantly changing landscape. Agriculture and livestock rearing are the main occupations, and there is substantial seasonal emigration for wage labour by adult males. River erosion and floods force the people to adopt a semi-migratory lifestyle. The book attempts to analyse the potential as well as the limitations of NGO-driven CBNRM endeavours across agroclimatic regions of South Asia, with emphasis on four intrinsically linked normative concerns, namely sustainability, livelihood enhancement, equity and democratic decentralization, in chapters 2–7.
Comparative analysis of these case studies, done in chapter 8, highlights the issues that require further research while portraying the strengths and limits of NGO-driven CBNRM. In Hivre Bazar, the post-watershed intervention scenario is such that farmers often grow three crops in a year: kharif bajra, rabi jowar and summer vegetable crops. Productivity has increased in the dry lands due to improvement in soil moisture levels. The revival of johads in Gopalpura has led to the proliferation of wheat and increased productivity. In Lingmuteychhu, productivity gains have also arisen, but owing more to the introduction of both local and high-yielding new varieties than to increased water availability. In the case of Gono Chetona, improvements have come from diversification of agriculture, for example the promotion of vegetable gardens. CBNRM interventions in most cases have also opened new avenues of employment and income generation. The synthesis shows that CBNRM efforts have made significant contributions to livelihood enhancement, but only limited gains in terms of collective action for sustainable and equitable access to benefits and continuing resource use, and in terms of democratic decentralization, contrary to the objectives of the programme. Livelihood benefits include improvements in the availability of livelihood-support resources (fuelwood, fodder, drinking water), increased productivity (including diversification of the cropping pattern) in agriculture and allied activities, and new sources of livelihood. However, NGO-driven CBNRM has not met its goal of providing 'alternative' forms of 'development', owing to impediments of state policy, the short-sighted vision of implementers, and confrontation with the socio-ecological reality of the region, which is almost always one of fragmented communities (or communities in flux) with unequal dependence on and access to land and other natural resources, along with great gender imbalances.
Appalling, however, is the general absence of recognition of the importance of, and the will to explore, practical ways to bring about equitable resource transfer or benefit-sharing, and of the consequent innovations in this respect that are evident in pioneering community initiatives such as pani panchayat. As for gains on the ecological-sustainability front, the Hivre Bazar and Thalisain initiatives have, through the active participation of villagers, achieved significant regeneration of the water table within the village, with mechanisms such as a ban on bore wells, regulation of the cropping pattern, and restrictions on the felling of trees and free grazing to ensure that the groundwater is neither over-exploited nor its recharge capability impaired in the future. Nevertheless, the long-term sustainability of the interventions in the Ghoga and Gopalpura initiatives is less certain, as the focus has been mostly on the regeneration of resources and less on regulating the use of the regenerated resources. Further, in Lingmuteychhu and Gono Chetona, the interventions are mainly household-based and the focus on ecological components has been less explicit. The studies demonstrate livelihood benefits in all the interventions, with significant variation in achievements with respect to sustainability, equity and democratic decentralization, depending on the level and extent of community participation, the vision of the implementers, the strategy (the nature of the intervention as shaped by the question of community formation), the centrality of community formation, and State policy. The case studies show that the influence of State policy is multi-faceted and often contradictory in nature. This necessitates that NGOs engage with the State in a much more purposeful way than within an 'autonomous space'. Thus the role of NGOs in CBNRM is complementary, wherein they provide innovative experiments from which the State can learn.
This helps in achieving the goals of CBNRM through democratic decentralization. The book addresses vital issues related to natural resource management and the interests of the community. The key topics discussed throughout the book remain at the centre of the current debate. This compilation consists of well-written chapters based on rigorous synthesis of CBNRM case studies, which will serve as good references for students, researchers and practitioners in the years to come.

A method of testing for parametric faults of analog circuits, based on a polynomial representation of the fault-free function of the circuit, is presented. The response of the circuit under test (CUT) is estimated as a polynomial in the applied input voltage at relevant frequencies apart from DC. Classification of the CUT is based on a comparison of the estimated polynomial coefficients with those of the fault-free circuit. The method needs very little augmentation of the circuit to make it testable, as only output parameters are used for classification. The procedure is shown to uncover several parametric faults causing deviations smaller than 5% from the nominal values. Fault diagnosis based upon the sensitivity of the polynomial coefficients at relevant frequencies is also proposed.
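The coefficient-signature idea can be sketched at a single frequency with a static transfer curve (an illustrative toy only: the multi-frequency estimation and the paper's actual classification thresholds are not shown, and the 5% tolerance here is an arbitrary stand-in):

```python
import numpy as np

def poly_signature(vin, vout, degree=3):
    """Fit the CUT's input-output response with a polynomial;
    the coefficient vector serves as the test signature."""
    return np.polyfit(vin, vout, degree)

def classify(sig, golden_sig, tol=0.05):
    """Flag the circuit as faulty if any estimated coefficient deviates
    from the fault-free ('golden') value by more than `tol` (relative;
    near-zero golden coefficients are compared absolutely)."""
    ref = np.where(np.abs(golden_sig) > 1e-12, np.abs(golden_sig), 1.0)
    return "faulty" if np.any(np.abs(sig - golden_sig) / ref > tol) else "good"
```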

Sub-pixel classification is essential for the successful description of many land-cover (LC) features whose spatial extent is smaller than the size of the image pixels. A commonly used approach for sub-pixel classification is the linear mixture model (LMM). Even though the LMM has shown acceptable results, in practice strictly linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for a given endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process in which the endmembers are first derived by an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimates in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed to a multi-layer perceptron (MLP) architecture to train the neurons, which further refines the abundance estimates to account for the non-linear nature of the mixing between the classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared to the actual class proportions, indicating that the individual class abundances obtained from AL-NLMM are very close to the real observations.
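The LMM stage solves a per-pixel inversion against the endmember spectra; a minimal sketch under my own simplifying assumptions (unconstrained least squares followed by clipping and renormalization, rather than the fully constrained solver a production unmixing pipeline would use):

```python
import numpy as np

def lmm_abundances(pixel, endmembers):
    """Linear-mixture inversion: solve pixel ~= E @ a by least squares,
    then clip negatives and renormalize so the abundances sum to one.
    `endmembers` holds one spectrum per row (endmembers x bands)."""
    E = np.asarray(endmembers, dtype=float).T      # bands x endmembers
    a, *_ = np.linalg.lstsq(E, np.asarray(pixel, dtype=float), rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()
```

In AL-NLMM these linear estimates are only an intermediate product: they become inputs to the MLP, which corrects for the non-linear mixing.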

Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object, and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that, with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% on these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is often unaffected even when there is some loss of precision in the points-to representation: the NoModRef percentage is within 2% of the exact analysis, while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows trading off precision for the memory usage of the analysis.
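The underlying data structure can be illustrated with a plain, single-dimension Bloom filter; the paper's multi-dimensional variant, which indexes the bit structure by context and pointer, is more elaborate (the class, sizes and hashing scheme below are my own illustration):

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: set-membership queries may return false
    positives (which here only cost precision) but never false
    negatives (which would break may-points-to soundness)."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = bytearray(m)

    def _positions(self, item):
        # derive k positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

Storing a points-to fact then amounts to `bf.add(("ptr", "obj", ctx))`, and a may-points-to query to a membership test; an occasional false positive merely reports a spurious points-to edge, which is why approximation preserves correctness.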