35 results for interest-based negotiation
at Indian Institute of Science - Bangalore - India
Abstract:
This paper presents the new trend of FPGA (Field Programmable Gate Array) based digital platforms for the control of power electronic systems. There is rising interest in using digital controllers in power electronic applications, as they provide many advantages over their analog counterparts. A board comprising the Cyclone device EP1C12Q240C8 from Altera is used for developing this platform, and its details are presented. The developed platform can be used for controller applications such as UPS, induction motor drives, and front-end converters. Real-time simulation of a system can also be performed. An open-loop induction motor drive has been implemented using this board, and experimental results are presented.
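As a hedged illustration of what an open-loop drive like the one above computes, the sketch below generates constant V/f three-phase voltage references in Python. The rated values and the function shape are illustrative assumptions, not taken from the paper; on the actual platform this logic would live in FPGA fabric.

```python
import numpy as np

def vf_references(f_cmd, t, v_rated=400.0, f_rated=50.0):
    """Constant V/f open-loop drive: three-phase sinusoidal voltage
    references whose amplitude scales linearly with commanded frequency.
    (Rated values here are illustrative assumptions.)"""
    v = v_rated * min(f_cmd, f_rated) / f_rated   # hold V/f ratio, cap at rated
    theta = 2 * np.pi * f_cmd * t                 # electrical angle
    return [v * np.sin(theta - k * 2 * np.pi / 3) for k in range(3)]

# Example: phase references at t = 10 ms for a 30 Hz frequency command
print(vf_references(30.0, 0.01))
```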
Abstract:
Lateral or transaxial truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or in region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical-scan CT. Reconstruction from laterally truncated projection data, assuming it to be complete, gives severe artifacts that penetrate even into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical-scan truncated data, along with an extension of this technique known as the windowed linear prediction approach. The efficacy of the two techniques is shown using simulations with standard phantoms, and a quantitative image quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
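A minimal sketch of the row-by-row idea, under stated assumptions: fit linear-prediction (AR) coefficients to the known samples of a detector row by least squares and extrapolate into the truncated region. The model order and extrapolation length are illustrative, and the paper's windowed variant is not reproduced.

```python
import numpy as np

def extend_row(row, n_extra, order=5):
    """Extrapolate a truncated detector row by linear prediction.
    AR coefficients are fitted by least squares to the known samples."""
    x = np.asarray(row, dtype=float)
    # Least-squares system: x[t] ~ a . x[t-order:t]
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, out[-order:]))   # predict one sample at a time
    return np.array(out)

# Example: smoothly extend a laterally truncated row by 8 samples
row = np.cos(np.linspace(0, 2.0, 40))
print(extend_row(row, 8)[-8:])
```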
Abstract:
Because of limited sensor and communication ranges, designing efficient mechanisms for cooperative tasks is difficult. In this article, several negotiation schemes for multiple agents performing a cooperative task are presented. The negotiation schemes provide suboptimal solutions but have the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. A software agent architecture of the decision-making process is also presented. The effect of the magnitude of information flow during the negotiation process is studied using different models of the negotiation scheme. The performance of the various negotiation schemes, using different information structures, is studied in terms of the uncertainty reduction achieved for a specified number of search steps. The negotiation schemes perform comparably to the optimal strategy in terms of uncertainty reduction while requiring very low computational time, approximately 7 per cent of that of the optimal strategy. Finally, an analysis of the computational and communication requirements of the negotiation schemes is carried out.
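The paper's exact protocol is not reproduced here; the toy sketch below only illustrates the flavour of such a scheme. Each agent proposes the adjacent search cell with the largest uncertainty, proposals are announced best-first, and an agent whose choice is already claimed yields to its next-ranked cell. The grid size, agent positions, and ranking rule are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
uncertainty = rng.random((6, 6))                      # uncertainty map
agents = {"A1": (0, 0), "A2": (0, 1), "A3": (5, 5)}   # agent positions

def moves(pos):
    """Cells adjacent to pos, ranked best-first by uncertainty reduced."""
    r, c = pos
    cells = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0) and 0 <= r + dr < 6 and 0 <= c + dc < 6]
    return sorted(cells, key=lambda cell: -uncertainty[cell])

# One negotiation round: agents announce proposals best-first; an agent
# whose top cell is already claimed yields and moves down its ranking.
assigned = {}
for a in sorted(agents, key=lambda a: -uncertainty[moves(agents[a])[0]]):
    for cell in moves(agents[a]):
        if cell not in assigned.values():
            assigned[a] = cell
            break
print(assigned)
```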
Abstract:
The goal of this study is multi-mode structural vibration control in the composite fin-tip of an aircraft. A structural model of the composite fin-tip with surface-bonded piezoelectric actuators is developed using the finite element method. The finite element model is updated experimentally to reflect the natural frequencies and mode shapes accurately. A model order reduction technique is employed to reduce the finite element structural matrices before developing the controller. A particle swarm based evolutionary optimization technique is used for optimal placement of piezoelectric patch actuators and accelerometer sensors to suppress vibration. H∞-based active vibration controllers are designed directly in the discrete domain and implemented using dSpace® (DS-1005) electronic signal processing boards. Significant vibration suppression in the multiple bending modes of interest is demonstrated experimentally for sinusoidal and band-limited white noise forcing functions.
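The abstract does not detail the reduction technique; one common choice, shown here as a hedged sketch, is modal truncation: solve the generalized eigenproblem for the FE mass and stiffness matrices, keep the lowest few modes, and project. The 5-DOF chain is an illustrative stand-in for the fin-tip FE matrices.

```python
import numpy as np
from scipy.linalg import eigh

def modal_truncation(M, K, n_keep):
    """Reduce (M, K) to an n_keep-mode model: solve K @ phi = lam * M @ phi
    and project onto the lowest modes (a generic technique, assumed here)."""
    lam, phi = eigh(K, M)               # generalized symmetric eigenproblem
    T = phi[:, :n_keep]                 # modal basis: lowest n_keep modes
    return T.T @ M @ T, T.T @ K @ T, np.sqrt(lam[:n_keep])

# Example: a 5-DOF spring-mass chain reduced to its 2 lowest modes
n = 5
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Mr, Kr, w = modal_truncation(M, K, 2)
print(w)    # lowest two natural frequencies (rad/s)
```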
Abstract:
The element-based piecewise smooth functional approximation in the conventional finite element method (FEM) results in discontinuous first and higher order derivatives across element boundaries. Despite the significant advantages of the FEM in modelling complicated geometries, a motivation in developing mesh-free methods has been the ease with which higher order globally smooth shape functions can be derived via the reproduction of polynomials. There is thus a case for combining these advantages in a so-called hybrid scheme or a `smooth FEM' that, whilst retaining the popular mesh-based discretization, obtains shape functions with uniform C^p (p >= 1) continuity. One such recent attempt, a NURBS-based parametric bridging method (Shaw et al. 2008b), uses polynomial reproducing, tensor-product non-uniform rational B-splines (NURBS) over a typical FE mesh and relies upon a (possibly piecewise) bijective geometric map between the physical domain and a rectangular (cuboidal) parametric domain. The present work aims at a significant extension and improvement of this concept by replacing NURBS with DMS-splines (say, of degree n > 0) that are defined over triangles and provide C^(n-1) continuity across the triangle edges. This relieves the need for a geometric map that could precipitate ill-conditioning of the discretized equations. Delaunay triangulation is used to discretize the physical domain, and shape functions are constructed via the polynomial reproduction condition, which quite remarkably relieves the solution of its sensitive dependence on the selected knotsets. Derivatives of shape functions are also constructed based on the principle of reproduction of derivatives of polynomials (Shaw and Roy 2008a). Within the present scheme, the triangles also serve as background integration cells in weak formulations, thereby overcoming non-conformability issues. Numerical examples involving the evaluation of derivatives of targeted functions up to the fourth order, and applications of the method to a few boundary value problems of general interest in solid mechanics over (non-simply connected) bounded domains in 2D, are presented towards the end of the paper.
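As a hedged illustration of the reproduction condition itself (not of the DMS-spline construction): shape functions N_i evaluated at a point x must satisfy sum_i N_i(x) p(x_i) = p(x) for every polynomial p up to the chosen degree. The sketch below computes a minimum-norm solution of that linear system in 1D with numpy; the node set and degree are illustrative.

```python
import numpy as np

def reproducing_weights(nodes, x, degree=2):
    """Minimum-norm weights N_i(x) satisfying the polynomial reproduction
    condition sum_i N_i(x) * p(nodes[i]) = p(x) for p = 1, x, ..., x^degree.
    (Illustrates the condition only; DMS-spline bases add smoothness.)"""
    P = np.vander(nodes, degree + 1, increasing=True).T   # rows: monomials at nodes
    rhs = np.vander([x], degree + 1, increasing=True).ravel()
    N, *_ = np.linalg.lstsq(P, rhs, rcond=None)           # min-norm solution
    return N

nodes = np.linspace(0.0, 1.0, 6)
N = reproducing_weights(nodes, 0.37)
# Check: the weights reproduce x^2 exactly at x = 0.37
print(np.dot(N, nodes**2), 0.37**2)
```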
Abstract:
A method to reliably extract object profiles even with height discontinuities (which lead to 2nπ phase jumps) is proposed. This method uses Fourier transform profilometry to extract the wrapped phase, and an additional image, formed by illuminating the object of interest with a novel gray-coded pattern, for phase unwrapping. Simulation results suggest that the proposed approach not only retains the advantages of the original method but also significantly enhances its performance. The fundamental advantage of this method stems from the fact that both extraction of the wrapped phase and its unwrapping are done with gray-scale images. Hence, unlike methods that use colors, the proposed method does not demand a color CCD camera and is ideal for profiling objects with multiple colors.
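A hedged sketch of the unwrapping step: the gray-coded image gives each pixel an integer fringe order k, and the absolute phase is recovered as phi = phi_wrapped + 2*pi*k. The gray-to-binary decoding below is the standard conversion; capturing and binarizing the coded images, and the Fourier-transform profilometry itself, are assumed done.

```python
import numpy as np

def gray_to_order(bits):
    """Decode per-pixel gray-code bit planes (MSB first) to fringe order k."""
    k = bits[0].astype(int)
    for b in bits[1:]:
        k = (k << 1) | ((k & 1) ^ b.astype(int))   # standard gray -> binary
    return k

def unwrap(phi_wrapped, gray_planes):
    """Absolute phase = wrapped phase + 2*pi * decoded fringe order."""
    return phi_wrapped + 2 * np.pi * gray_to_order(gray_planes)

# Demo: decode 3 binarized gray-code planes covering fringe orders 0..7
k = np.arange(8)
gray = k ^ (k >> 1)                                      # encode for the demo
planes = [np.array((gray >> i) & 1) for i in (2, 1, 0)]  # MSB-first planes
print(gray_to_order(planes))                             # [0 1 2 3 4 5 6 7]
```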
Abstract:
Interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications, and the increased use of bandwidth-limited media such as video conferencing and full-motion video, have not only sustained the need for efficient ways to encode analog signals but made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compressing image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block match algorithms based on human perception. Simulations with image sequences have shown an improved bit rate while maintaining image quality when compared to conventional motion estimation techniques using the MAD block match criterion.
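For reference, here is a minimal full-search block matcher using the MAD criterion that the abstract compares against; the perceptual matching the paper proposes is not reproduced. Block size and search range are illustrative.

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference between two equally sized blocks."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def best_match(ref, cur, top, left, bsize=8, search=4):
    """Full search: motion vector minimizing MAD within +/- search pixels."""
    block = cur[top:top + bsize, left:left + bsize]
    best = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                cost = mad(ref[y:y + bsize, x:x + bsize], block)
                if cost < best[1]:
                    best = (dy, dx), cost
    return best

# Demo: a frame shifted by (2, 1) yields motion vector (-2, -1), cost 0
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, (2, 1), axis=(0, 1))    # current frame = shifted reference
print(best_match(ref, cur, 8, 8))
```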
Abstract:
Service discovery is vital in ubiquitous applications, where a large number of devices and software components collaborate unobtrusively and provide numerous services without user intervention. Existing service discovery schemes use a service matching process to offer services of interest to users. Potentially, the context information of the users and the surrounding environment can be used to improve the quality of service matching. To make use of context information in service matching, a service discovery technique needs to address certain challenges. First, the context information must have an unambiguous representation. Second, the devices in the environment must be able to disseminate high-level and low-level context information seamlessly across different networks. Third, the dynamic nature of the context information must be taken into account. We propose a C-IOB (Context-Information, Observation and Belief) based service discovery model which deals with the above challenges by processing the context information and formulating beliefs based on observations. With these formulated beliefs, the required services are provided to the users. The method has been tested with a typical ubiquitous museum guide application over different cases. The simulations are time-efficient and the results are quite encouraging.
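The abstract describes the C-IOB pipeline only at a high level; the toy sketch below merely illustrates the context-information -> observation -> belief flow for a museum-guide-like scenario. All field names, thresholds, and services are made-up assumptions, not the paper's model.

```python
# Toy C-IOB flow: context info -> observations -> beliefs -> services.
# Every field name and threshold here is an illustrative assumption.
context = {"location": "gallery-3", "crowd": 0.8, "user_lang": "en"}

observations = []
if context["crowd"] > 0.6:
    observations.append("congested")
if context["location"].startswith("gallery"):
    observations.append("near-exhibit")

beliefs = {
    "wants_audio_guide": "near-exhibit" in observations,
    "prefers_quiet_route": "congested" in observations,
}

services = {"audio-guide": beliefs["wants_audio_guide"],
            "route-planner": beliefs["prefers_quiet_route"]}
print([name for name, offer in services.items() if offer])
```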
Abstract:
Thanks to advances in sensor technology, today we have many applications (space-borne imaging, medical imaging, etc.) where images of large sizes are generated. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients also require storage at high precision, buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zerotrees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes a strip-based non-embedded coder which uses a single-pass algorithm, in order to handle high input data rates.
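A hedged sketch of the strip idea using PyWavelets (an assumed stand-in, not the paper's implementation): transform and hand off the image one strip of rows at a time, so the full coefficient field is never buffered. Real strip coders also carry filter-overlap lines between strips, which this sketch omits.

```python
import numpy as np
import pywt

def code_in_strips(image, strip_rows=16, wavelet="haar"):
    """Wavelet-transform and emit the image strip by strip, so memory holds
    only one strip of coefficients (boundary-overlap handling omitted)."""
    for top in range(0, image.shape[0], strip_rows):
        strip = image[top:top + strip_rows]
        cA, (cH, cV, cD) = pywt.dwt2(strip, wavelet)
        yield top, cA, cH, cV, cD          # hand off to the zerotree coder

image = np.random.default_rng(2).random((64, 64))
for top, cA, *details in code_in_strips(image):
    print(top, cA.shape)                   # each 16x64 strip -> 8x32 subbands
```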
Abstract:
Sub-pixel classification is essential for successfully describing many land cover (LC) features whose extent is less than the size of the image pixels. A commonly used approach for sub-pixel classification is the linear mixture model (LMM). Even though LMM have shown acceptable results, in practice linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for endmember (pure pixel) distributions. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process in which the endmembers are first derived from an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimates in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed as input to a multi-layer perceptron (MLP) architecture to train the neurons, which further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared to actual class proportions, indicating that individual class abundances obtained from AL-NLMM are very close to the real observations.
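A hedged sketch of the LMM step (step two) alone: non-negative least squares recovers per-pixel abundances from an endmember matrix. The automated endmember extraction and the MLP refinement of AL-NLMM are not shown, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def lmm_abundances(E, pixel):
    """Solve pixel ~ E @ a with a >= 0, then normalize to sum to one."""
    a, _ = nnls(E, pixel)
    return a / a.sum()

# Demo: 3 endmembers over 10 bands; a 40/40/20 mixture is recovered
rng = np.random.default_rng(3)
E = rng.random((10, 3))                    # columns are endmember spectra
pixel = E @ np.array([0.4, 0.4, 0.2])      # noiseless linear mixture
print(lmm_abundances(E, pixel))            # ~ [0.4 0.4 0.2]
```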
Abstract:
Advertisements (ads) are the main revenue earner for television (TV) broadcasters. As TV reaches a large audience, it acts as the best medium for advertising products and services. With the emergence of digital TV, it is important for broadcasters to provide an intelligent service that accounts for various dimensions such as program features, ad features, viewers' interests, and sponsors' preferences. We present an automatic ad recommendation algorithm that selects a set of ads by considering these dimensions and semantically matches them with programs. Features of the ad video are captured in terms of annotations, and they are grouped into a number of predefined semantic categories using a categorization technique. A fuzzy categorical data clustering technique is then applied to the categorized data to select better-suited ads for a particular program. Since the same ad can be recommended for more than one program depending on multiple parameters, fuzzy clustering is well suited for ad recommendation. The relative fuzzy score, called the “degree of membership”, calculated for each ad indicates the membership of that ad in different program clusters. A subjective evaluation of the algorithm by 10 different people rated it with a high success score.
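A hedged sketch of the “degree of membership” computation using the standard fuzzy c-means formula (assumed here; the paper's categorization of ad features is taken as done): the membership of an ad in each program cluster falls off with its distance to the cluster center, and the degrees sum to one.

```python
import numpy as np

def memberships(ad, centers, m=2.0):
    """Standard fuzzy c-means degrees of membership of one ad's feature
    vector to each program-cluster center (rows of `centers`)."""
    d = np.linalg.norm(centers - ad, axis=1) + 1e-12   # avoid divide-by-zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()                             # degrees sum to one

centers = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 0.0]])  # program clusters
print(memberships(np.array([0.9, 0.8]), centers))  # highest for 2nd cluster
```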
Abstract:
This paper addresses a search problem with multiple limited-capability search agents in a partially connected, dynamic networked environment under different information structures. A self-assessment-based decision-making scheme for multiple agents is proposed that uses a modified negotiation scheme with low communication overheads. The scheme has the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. Two models of the self-assessment scheme are developed to study the effect of increased information exchange during decision-making. Some analytical results on the maximum number of self-assessment cycles, the effect of increasing communication range, the completeness of the algorithm, and lower and upper bounds on the search time are also obtained. The performance of the various self-assessment schemes, using different information structures, is studied in terms of total uncertainty reduction in the search region. It is shown that the communication requirement of the self-assessment scheme is almost half that of the negotiation schemes, while its performance is close to the optimal solution. Comparisons with different sequential search schemes are also carried out. Note to Practitioners: In futuristic military and civilian applications such as search and rescue, surveillance, patrol, and oil-spill monitoring, a swarm of UAVs can be deployed to carry out information-collection missions. These UAVs have limited sensor and communication ranges. To enhance mission performance and complete the mission quickly, cooperation between UAVs is important, and designing cooperative search strategies for multiple UAVs under these constraints is a difficult task. A further requirement in hostile territory is to minimize communication while making decisions, which adds complexity to the decision-making algorithms. In this paper, a self-assessment-based decision-making scheme for multiple UAVs performing a search mission is proposed. The agents make their decisions based on the information acquired through their sensors and through cooperation with neighbors. The complexity of the decision-making scheme is very low; it arrives at decisions quickly with low communication overheads, while accommodating the various information structures used to increase the fidelity of the uncertainty maps. Theoretical results proving the completeness of the algorithm, along with lower and upper bounds on the search time, are also provided.
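The scheme itself is not reproduced; the toy sketch below only illustrates the self-assessment idea. Each UAV scores its own intended move, shares the plan with neighbours within communication range, and on a conflict the UAV farther from the contested cell yields to its next-best cell. Grid size, positions, range, and the yield rule are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
unc = rng.random((8, 8))                          # uncertainty map
uavs = {"U1": (1, 1), "U2": (1, 2), "U3": (6, 6)}
COMM = 2.0                                        # communication range

def ranked(pos):
    """Adjacent cells ranked by uncertainty reduced (the self-assessment)."""
    r, c = pos
    cells = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0) and 0 <= r + dr < 8 and 0 <= c + dc < 8]
    return sorted(cells, key=lambda cell: -unc[cell])

plan = {u: ranked(p)[0] for u, p in uavs.items()}
# One self-assessment cycle: on a conflict with a neighbour in range, the
# UAV farther from the contested cell yields to its next-best cell.
for u, p in uavs.items():
    for v, q in uavs.items():
        if u < v and plan[u] == plan[v] and np.hypot(p[0]-q[0], p[1]-q[1]) <= COMM:
            du = np.hypot(p[0] - plan[u][0], p[1] - plan[u][1])
            dv = np.hypot(q[0] - plan[v][0], q[1] - plan[v][1])
            far = u if du >= dv else v
            plan[far] = ranked(uavs[far])[1]      # fall back to second-best
print(plan)
```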
Abstract:
Solar cells on thin conformable substrates require conventional plastics such as PS and PMMA that provide better mechanical and environmental stability at reduced cost. Charge transfer between PPV derivatives and fullerene derivatives can also be tuned via morphology control of the plastics in the solar cells. Our group has conducted morphology evolution studies of nano- and microscale light-emitting domains in poly(2-methoxy, 5-(2'-ethyl-hexyloxy)-p-phenylenevinylene) (MEH-PPV) and poly(methyl methacrylate) (PMMA) blends. Our current research has focused on tricomponent photoactive solar cells comprising MEH-PPV, PMMA, and [6,6]-phenyl C61-butyric acid methyl ester (PCBM, Figure 1) in the photoactive layer. Morphology control of the photoactive materials and fine-tuning of the photovoltaic properties of the solar cells are our primary interest. Similar work has been done by the Sariciftci research group. Additionally, a study of inter- and intramolecular photoinduced charge transfer using MEH-PPV derivatives with different conjugation lengths (Figure 1, n=1 and 0.85) has been performed.
Abstract:
Wave propagation in a graphene sheet embedded in an elastic medium (polymer matrix) has been a topic of great interest in the nanomechanics of graphene sheets, where equivalent continuum models are widely used. In this manuscript, we examine this problem by incorporating nonlocal theory into the classical plate model. The influence of nonlocal scale effects is investigated in detail. The results are qualitatively different from those obtained with the local/classical plate theory and are thus important for the development of monolayer graphene-based nanodevices. In the present work, the graphene sheet is modeled as a one-atom-thick isotropic plate. Chemical bonds are assumed to be formed between the graphene sheet and the elastic medium. The polymer matrix is described by a Pasternak foundation model, which accounts for both the normal pressure and the transverse shear deformation of the surrounding elastic medium. When the shear effects are neglected, the model reduces to the Winkler foundation model. The normal pressure, or Winkler elastic foundation parameter, is approximated as a series of closely spaced, mutually independent, vertical linear elastic springs, where the foundation modulus is assumed equivalent to the stiffness of the springs. For this model, the nonlocal governing differential equations of motion are derived from the minimization of the total potential energy of the entire system. An ultrasonic type of flexural wave propagation model is also derived, and the results of the wave dispersion analysis are shown for both local and nonlocal elasticity calculations. From this analysis we show that the elastic matrix strongly affects the flexural wave mode and rapidly increases the frequency band gap of the flexural mode. The flexural wavenumbers obtained from nonlocal elasticity calculations are higher than those from local elasticity calculations, and the corresponding wave group speeds are smaller in the nonlocal calculation. The effect of the y-directional wavenumber (η_q) on the spectrum and dispersion relations of graphene embedded in a polymer matrix is also observed. We also show that the cut-off frequency of the flexural wave mode depends not only on the y-directional wavenumber but also on the nonlocal scaling parameter (e₀a). The effect of η_q and e₀a on the cut-off frequency variation is also captured for the cases with and without the elastic matrix. For a given nanostructure, the nonlocal small-scale coefficient can be obtained by matching results from molecular dynamics (MD) simulations with nonlocal elasticity calculations; at that value of the nonlocal scale coefficient, waves will propagate in the nanostructure at that cut-off frequency. In the present paper, different values of e₀a are used; the exact e₀a for a given graphene sheet can be obtained by matching MD simulation results for graphene with the results presented in this article.
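A numerical sketch under one common nonlocal-Kirchhoff dispersion form for a plate on a Pasternak foundation, not necessarily the paper's exact model: the nonlocal factor 1/(1 + (e₀a·k)²) softens the bending term, so frequencies drop and wavenumbers rise at a given frequency, consistent with the trends reported above. All numerical values are illustrative order-of-magnitude assumptions.

```python
import numpy as np

# Assumed dispersion form (hedged, not the paper's exact equations):
#   rho_h * omega^2 = D k^4 / (1 + (e0a*k)^2) + Kw + Kg * k^2
D = 3.3e-18        # bending rigidity (N m), graphene-like order of magnitude
rho_h = 7.6e-7     # mass per unit area (kg/m^2)
Kw, Kg = 1.0e9, 1.0e-3   # Winkler (N/m^3) and shear (N/m) moduli, assumed
k = np.linspace(1e8, 5e9, 200)      # flexural wavenumber (1/m)

def omega(k, e0a):
    """Flexural frequency (rad/s); e0a is the nonlocal scale parameter (m)."""
    return np.sqrt((D * k**4 / (1 + (e0a * k)**2) + Kw + Kg * k**2) / rho_h)

w_local = omega(k, 0.0)            # local elasticity (e0a = 0)
w_nonlocal = omega(k, 0.5e-9)      # nonlocal, e0a = 0.5 nm (assumed)
print(bool(w_nonlocal[-1] < w_local[-1]))   # nonlocal frequency is lower
```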
Abstract:
Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested-domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods, and topology-aware mapping of the regions onto torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies yield performance improvements of up to 33% with topology-oblivious mapping and up to an additional 7% with topology-aware mapping over the default sequential strategy.
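A hedged sketch of the processor-allocation idea: split one dimension of the 2-D processor grid into disjoint side-by-side rectangles whose widths are roughly proportional to each nested domain's predicted load. The paper's actual allocator and its topology-aware mapping onto the torus are more sophisticated; this only illustrates the partitioning step.

```python
def partition_grid(p_rows, p_cols, predicted_load):
    """Split a p_rows x p_cols processor grid into side-by-side rectangles,
    one per nested domain, with widths ~ proportional to predicted load."""
    total = sum(predicted_load)
    widths = [max(1, round(p_cols * w / total)) for w in predicted_load]
    widths[-1] = p_cols - sum(widths[:-1])        # absorb rounding drift
    regions, col = [], 0
    for w in widths:
        regions.append((0, col, p_rows, w))       # (row0, col0, rows, cols)
        col += w
    return regions

# Example: three nested domains on a 16x32 grid with load ratio 3:1:1
print(partition_grid(16, 32, [3.0, 1.0, 1.0]))
# -> [(0, 0, 16, 19), (0, 19, 16, 6), (0, 25, 16, 7)]
```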