932 results for "a priori"
Abstract:
This paper addresses the problem of automated multiagent search in an unknown environment. Autonomous agents equipped with sensors carry out a search operation in a search space, where the uncertainty, or lack of information about the environment, is known a priori as an uncertainty density distribution function. The agents are deployed in the search space to maximize single-step search effectiveness. The centroidal Voronoi configuration, which achieves a locally optimal deployment, forms the basis for the proposed sequential deploy and search strategy. It is shown that with the proposed control law the agent trajectories converge in a globally asymptotic manner to the centroidal Voronoi configuration. Simulation experiments are provided to validate the strategy. Note to Practitioners: In this paper, searching an unknown region to gather information about it is modeled as a problem of using search as a means of reducing information uncertainty about the region. Moreover, multiple automated searchers or agents are used to carry out this operation optimally. This problem has many applications in search and surveillance operations using several autonomous UAVs or mobile robots. The concept of agents converging to the centroid of their Voronoi cells, weighted with the uncertainty density, is used to design a search strategy named sequential deploy and search. Finally, the performance of the strategy is validated using simulations.
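The centroid-seeking deployment described above can be sketched as a discrete Lloyd-style iteration: each agent claims the grid points nearest to it (its Voronoi cell) and moves to that cell's centroid weighted by the uncertainty density. This is only a minimal illustration, not the paper's control law; the grid resolution, the Gaussian density, and the one-shot centroid jump are assumptions.

```python
import math

def weighted_voronoi_step(agents, grid, density):
    """One Lloyd-style step: assign grid points to the nearest agent,
    then move each agent to its density-weighted cell centroid."""
    sums = [[0.0, 0.0, 0.0] for _ in agents]  # [sum_w*x, sum_w*y, sum_w]
    for (x, y) in grid:
        w = density(x, y)
        nearest = min(range(len(agents)),
                      key=lambda i: (agents[i][0]-x)**2 + (agents[i][1]-y)**2)
        sums[nearest][0] += w * x
        sums[nearest][1] += w * y
        sums[nearest][2] += w
    return [(sx/sw, sy/sw) if sw > 0 else a
            for (sx, sy, sw), a in zip(sums, agents)]

# Assumed Gaussian "uncertainty density" peaked at (0.7, 0.3).
density = lambda x, y: math.exp(-8*((x-0.7)**2 + (y-0.3)**2))
grid = [(i/30, j/30) for i in range(31) for j in range(31)]
agents = [(0.1, 0.1), (0.2, 0.8), (0.9, 0.9)]
for _ in range(20):
    agents = weighted_voronoi_step(agents, grid, density)
```

At a fixed point of this iteration each agent sits at the centroid of its own Voronoi cell, i.e. a centroidal Voronoi configuration.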
Abstract:
Lamination-dependent shear corrective terms in the analysis of bending of laminated plates are derived from a priori assumed linear thicknesswise distributions for gradients of transverse shear stresses, by using CLPT in-plane stresses in the two in-plane equilibrium equations of elasticity in each ply. In the development of a general model for angle-ply laminated plates, special cases such as cylindrical bending of laminates in either direction, symmetric laminates, cross-ply laminates, antisymmetric angle-ply laminates, and homogeneous plates are taken into consideration. Adding these corrective terms to the assumed displacements in (i) Classical Laminate Plate Theory (CLPT) and (ii) Classical Laminate Shear Deformation Theory (CLSDT), two new refined lamination-dependent shear deformation models are developed. Closed-form solutions from these models are obtained for antisymmetric angle-ply laminates under sinusoidal load for one type of simply supported boundary conditions. Results obtained from the present models and also from Ren's model (1987) are compared with each other.
Abstract:
Lamination-dependent shear corrective terms in the analysis of flexure of laminates are derived from a priori assumed linear thicknesswise distributions for gradients of transverse shear stresses, by using them in the two in-plane equilibrium equations of elasticity in each ply. Adding these corrective terms to (i) Classical Laminate Plate Theory (CLPT) displacements and (ii) Classical Laminate Shear Deformation Theory (CLSDT) displacements, four new refined lamination-dependent shear deformation models for angle-ply laminates are developed. The performance of these models is evaluated by comparing their results with exact elasticity solutions for antisymmetric 2-ply laminates and for 4-ply [15/-15](s) laminates. In general, the model with shear corrective terms based on CLPT and added to CLSDT displacements is sufficient and predicts good estimates, both qualitatively and quantitatively, for all displacements and stresses.
Abstract:
We consider a slow fading multiple-input multiple-output (MIMO) system with channel state information at both the transmitter and receiver. A well-known precoding scheme is based upon the singular value decomposition (SVD) of the channel matrix, which transforms the MIMO channel into parallel subchannels. Despite having low maximum likelihood decoding (MLD) complexity, this SVD precoding scheme provides a diversity gain which is limited by the diversity gain of the weakest subchannel. We therefore propose X- and Y-Codes, which improve the diversity gain of the SVD precoding scheme but maintain the low MLD complexity, by jointly coding information across a pair of subchannels. In particular, subchannels with high diversity gain are paired with those having low diversity gain. A pair of subchannels is jointly encoded using a 2 × 2 real matrix, which is fixed a priori and does not change with each channel realization. For X-Codes, these rotation matrices are parameterized by a single angle, while for Y-Codes, these matrices are left triangular matrices. Moreover, we propose X-, Y-Precoders with the same structure as X-, Y-Codes, but with encoding matrices adapted to each channel realization. We observed that X-Codes/Precoders are good for well-conditioned channels, while Y-Codes/Precoders are good for ill-conditioned channels.
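The pairing idea can be made concrete with a small sketch: after the SVD, the subchannel with the k-th largest singular value is paired with the one with the k-th smallest, and each pair of information symbols is mixed by a fixed 2 × 2 rotation. The angle below is arbitrary and the symbols are real; the actual X-Code design optimizes these angles, which is not reproduced here.

```python
import math

def xcode_encode(symbols, angle):
    """Pair subchannel k (strong) with subchannel n-1-k (weak) and
    mix each symbol pair with a fixed 2x2 rotation matrix."""
    n = len(symbols)
    c, s = math.cos(angle), math.sin(angle)
    out = list(symbols)
    for k in range(n // 2):
        a, b = symbols[k], symbols[n - 1 - k]
        out[k], out[n - 1 - k] = c*a - s*b, s*a + c*b
    return out

# Four subchannels: pair (1st, 4th) and (2nd, 3rd); angle is an assumption.
coded = xcode_encode([1.0, -1.0, 1.0, 1.0], math.pi / 8)
```

Because each 2 × 2 block is a rotation, the total transmit energy of every pair is preserved; only the distribution of information across the strong and weak subchannels changes.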
Abstract:
In the absence of a reliable method for a priori prediction of structure and properties of inorganic solid materials, an experimental approach involving a systematic study of composition, structure and properties combined with chemical intuition based on previous experience is likely to be a viable alternative to the problem of rational design of inorganic materials. The approach is illustrated by taking perovskite lithium-ion conductors as an example.
Abstract:
Resonance Raman (RR) spectra are presented for p-nitroazobenzene dissolved in chloroform using 18 excitation wavelengths, covering the region of the ¹(n → π*) electronic transition. Raman intensities are observed for various totally symmetric fundamentals, namely, C-C, C-N, N=N, and N-O stretching vibrations, indicating that upon photoexcitation the excited-state evolution occurs along all of these vibrational coordinates. Interestingly, for a few fundamentals in p-nitroazobenzene it is observed that the RR intensities decrease near the maxima of the resonant ¹(n → π*) electronic transition. This is attributed to interference from preresonant scattering due to the strongly allowed ¹(π → π*) electronic transition. The electronic absorption spectrum and the absolute Raman cross sections for the nine Franck-Condon active fundamentals of p-nitroazobenzene have been successfully modeled using Heller's time-dependent formalism for Raman scattering, which employs a harmonic description of the lowest-energy ¹(n → π*) potential energy surface. The short-time isomerization dynamics is then examined using a priori knowledge of the ground-state normal mode descriptions of p-nitroazobenzene to convert the wave packet motion in dimensionless normal coordinates to internal coordinates. It is observed that within 20 fs after photoexcitation in p-nitroazobenzene, the N=N and C-N stretching vibrations undergo significant changes, and the unsubstituted phenyl ring and nitro stretching vibrations are also distorted considerably.
Abstract:
Fragility is viewed as a measure of the loss of rigidity of a glass structure above its glass transition temperature. It is attributed to the weakness of directional bonding and to the presence of a high density of low-energy configurational states. An a priori fragility function of electronegativities and bond distances is proposed which quite remarkably reproduces the entire range of reported fragilities and demonstrates that the fragility of a melt is indeed encrypted in the chemistry of the parent material. It is also shown that the use of fragility-modified activation barriers in the Arrhenius function accounts for the whole gamut of viscosity behavior of liquids, and that fragility can serve as a universal scaling parameter to collapse all viscosity curves onto a master plot.
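As a numeric companion to the fragility discussion, the sketch below uses one standard parametrization of fragility-modified viscosity (the MYEGA form, which is an assumption here and not necessarily the functional form used in this paper) and checks that the kinetic fragility m is the slope of log₁₀η against Tg/T at T = Tg, with η(Tg) = 10¹² Pa·s by construction.

```python
import math

def log10_eta_myega(T, Tg, m, log10_eta_inf=-4.0):
    """MYEGA viscosity model: log10(eta) as a function of T.
    Satisfies log10(eta(Tg)) = 12 by construction."""
    K = 12.0 - log10_eta_inf
    x = Tg / T
    return log10_eta_inf + K * x * math.exp((m / K - 1.0) * (x - 1.0))

def fragility(Tg, m):
    """Recover m numerically as d(log10 eta)/d(Tg/T) evaluated at T = Tg."""
    h = 1e-6
    lo = log10_eta_myega(Tg / (1.0 - h), Tg, m)  # x = Tg/T = 1 - h
    hi = log10_eta_myega(Tg / (1.0 + h), Tg, m)  # x = Tg/T = 1 + h
    return (hi - lo) / (2.0 * h)
```

The central difference confirms the defining property of fragility: the steeper the Angell plot at Tg, the more fragile the melt.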
Abstract:
Several replacement policies for web caches have been proposed and studied extensively in the literature. Different replacement policies perform better in terms of (i) the number of objects found in the cache (cache hits), (ii) the network traffic avoided by fetching the referenced object from the cache, or (iii) the savings in response time. In this paper, we propose a simple and efficient replacement policy (hereafter known as SE) which improves all three performance measures. Trace-driven simulations were done to evaluate the performance of SE. We compare SE with two widely used and efficient replacement policies, namely the Least Recently Used (LRU) and Least Unified Value (LUV) algorithms. Our results show that SE performs at least as well as, if not better than, both these replacement policies. Unlike various other replacement policies proposed in the literature, our SE policy does not require parameter tuning or a priori trace analysis, and it has an efficient and simple implementation that can be incorporated into any existing proxy server or web server with ease.
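The three performance measures can be made concrete with a small trace-driven simulation. The sketch below implements plain LRU (the SE policy itself is not specified in the abstract, so it is not reproduced) and reports the hit ratio, byte hit ratio, and saved response time for an assumed toy trace.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity):
    """Trace-driven LRU cache simulation.
    trace: list of (object_id, size_bytes, fetch_cost_ms).
    Returns (hit_ratio, byte_hit_ratio, saved_ms)."""
    cache = OrderedDict()           # object_id -> size, oldest first
    used = 0
    hits = hit_bytes = saved = 0
    total_bytes = sum(s for _, s, _ in trace)
    for obj, size, cost in trace:
        if obj in cache:
            cache.move_to_end(obj)  # refresh recency
            hits += 1
            hit_bytes += size
            saved += cost           # response time avoided
        else:
            while used + size > capacity and cache:
                _, old = cache.popitem(last=False)  # evict LRU object
                used -= old
            if size <= capacity:
                cache[obj] = size
                used += size
    n = len(trace)
    return hits / n, hit_bytes / total_bytes, saved

# Assumed trace: (object, bytes, fetch cost in ms).
trace = [("a", 100, 5), ("b", 300, 9), ("a", 100, 5),
         ("c", 200, 7), ("b", 300, 9), ("a", 100, 5)]
stats = simulate_lru(trace, capacity=500)
```

A policy such as SE would be evaluated by running the same trace through its own eviction rule and comparing all three returned measures.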
Abstract:
An improved Monte Carlo technique is presented in this work to simulate nanoparticle formation through a micellar route. The technique builds on the simulation technique proposed by Bandyopadhyaya et al. (Langmuir 2000, 16, 7139), which is general and rigorous but at the same time very computation intensive, so much so that nanoparticle formation in low-occupancy systems cannot be simulated in reasonable time. In view of this, several strategies, rationalized by simple mathematical analyses, are proposed to accelerate Monte Carlo simulations. These are elimination of infructuous events, removal of excess reactant postreaction, and use of a smaller micelle population a large number of times. Infructuous events include collision of an empty micelle with another empty one, or with one containing only a single molecule or only a solid particle. These strategies are incorporated in a new simulation technique which divides the entire micelle population into four classes and shifts micelles from one class to another as the simulation proceeds. The simulation results, thoroughly tested using chi-square and other tests, show that the predictions of the improved technique remain unchanged, but with more than an order of magnitude decrease in computational effort for some of the simulations reported in the literature. An a posteriori validation scheme for the correctness of the simulation results has been utilized to propose a new simulation strategy that arrives at converged simulation results with near-minimum computational effort.
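The gain from eliminating infructuous events can be illustrated with a toy collision loop. Micelles are classified by content, and a collision needs to be simulated only when the pair is not in one of the infructuous classes named above. The state encoding, population, and rates below are assumptions for illustration only, not the paper's four-class scheme.

```python
import random

# Assumed toy state: a micelle holds 0, 1, or more reactant molecules,
# or carries a finished solid particle (encoded as -1).
def infructuous(a, b):
    """Collision classes the abstract marks as infructuous:
    empty + empty, empty + single molecule, empty + solid particle."""
    lo, hi = sorted((a, b))
    return ((lo == 0 and hi == 0) or    # empty + empty
            (lo == 0 and hi == 1) or    # empty + single molecule
            (lo == -1 and hi == 0))     # empty + solid particle

def fraction_skippable(pop, trials, rng):
    """Estimate the fraction of random collisions that need no simulation."""
    skipped = 0
    for _ in range(trials):
        i, j = rng.sample(range(len(pop)), 2)
        if infructuous(pop[i], pop[j]):
            skipped += 1
    return skipped / trials

rng = random.Random(42)
# Low-occupancy population (mostly empty micelles) -- the regime in which
# the abstract says naive simulation becomes infeasible.
pop = [0]*900 + [1]*60 + [2]*20 + [-1]*20
frac = fraction_skippable(pop, 20000, rng)
```

In such a low-occupancy population the overwhelming majority of sampled collisions are infructuous, which is exactly why filtering them out up front yields an order-of-magnitude speedup.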
Abstract:
Scalable Networks-on-Chip (NoCs) are needed to match the ever-increasing communication demands of large-scale Multi-Processor Systems-on-Chip (MPSoCs) for multimedia communication applications. The heterogeneous nature of application-specific on-chip cores, along with the specific communication requirements among the cores, calls for the design of application-specific NoCs for improved performance in terms of communication energy, latency, and throughput. In this work, we propose a methodology for the design of customized irregular networks-on-chip. The proposed method exploits a priori knowledge of the application's communication characteristics to generate an optimized network topology and corresponding routing tables.
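One simple way to exploit a priori communication knowledge when building a custom topology is to favor direct links for the heaviest flows: take the cores' communication demand graph and keep a maximum-weight spanning tree, so the largest bandwidth demands travel the fewest hops. This is only an illustrative heuristic with an assumed demand graph, not the methodology of the paper.

```python
def max_weight_spanning_tree(n, demands):
    """Kruskal's algorithm on edges sorted by descending bandwidth demand.
    demands: list of (bw, u, v). Returns the chosen links."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    links = []
    for bw, u, v in sorted(demands, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                # adding this link creates no cycle
            parent[ru] = rv
            links.append((u, v, bw))
    return links

# Assumed 4-core demand graph: (bandwidth in MB/s, core, core).
demands = [(400, 0, 1), (300, 1, 2), (50, 0, 2), (120, 2, 3), (10, 0, 3)]
topology = max_weight_spanning_tree(4, demands)
```

A real flow would then add selected shortcut links beyond the tree and derive deadlock-free routing tables, which this sketch omits.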
Abstract:
The lifetime calculation of large, dense sensor networks with fixed energy resources and remaining residual energy has shown that, for a constant energy resource in a sensor network, the fault rate at the cluster head is network-size invariant when using the network layer with no MAC losses. Even after increasing the battery capacities in the nodes, the total lifetime does not increase beyond a maximum limit of 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic, and channel polling needs of sensor networks. Many MAC protocols allow control of the channel polling of the new radios available to sensor nodes for communication. This further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues which affect the distributed characteristics and performance of connected MAC nodes: (1) to determine the theoretical minimum rate based on joint coding for a correlated data source at the single hop; (2a) to estimate cluster head errors using the Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; and (2b) to estimate the upper bound of routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case when the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω₁, ω₂ and expected error P*, the error rate for the single hop is bounded by a maximum of P = 2P*.
We study the effects of energy losses using cross-layer simulation of a large sensor network MAC setup, and the error rate which affects finding sufficient node densities for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior probability error is close to or higher than the bound P ≥ 2P*.
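The single-hop bound P ≤ 2P* quoted above has the form of the classical factor-of-two bound over the Bayes error. For known class densities it can be checked numerically; the sketch below uses two equal-prior, unit-variance Gaussian classes (an assumed example, not the paper's densities) and computes the Bayes error P* by integrating the smaller of the two weighted densities.

```python
import math

def gauss_pdf(x, mu):
    """Unit-variance Gaussian density."""
    return math.exp(-0.5 * (x - mu)**2) / math.sqrt(2.0 * math.pi)

def bayes_error(mu1, mu2, lo=-10.0, hi=10.0, steps=20000):
    """P* = integral of min(0.5*p1, 0.5*p2) for equal priors,
    evaluated with the trapezoid rule."""
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        x = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * min(0.5 * gauss_pdf(x, mu1), 0.5 * gauss_pdf(x, mu2))
    return total * h

p_star = bayes_error(0.0, 2.0)   # assumed class means 0 and 2
bound = 2.0 * p_star             # the P <= 2P* single-hop bound
```

For these two classes P* equals Φ(−1) ≈ 0.159, so the corresponding bound 2P* is about 0.317; any decision rule built on estimated rather than known densities sits between P* and this bound.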
Abstract:
This paper addresses the problem of multiagent search in an unknown environment. The agents are autonomous in nature and are equipped with the necessary sensors to carry out the search operation. The uncertainty, or lack of information about the search area, is known a priori as a probability density function. The agents are deployed in an optimal way so as to maximize the one-step uncertainty reduction. The agents continue to deploy themselves and reduce uncertainty until the uncertainty density over the search space is reduced below a minimum acceptable level. It has been shown, using LaSalle's invariance principle, that a distributed control law which moves each of the agents towards the centroid of its Voronoi partition, modified by the sensor range, leads to single-step optimal deployment. This principle is now used to devise search trajectories for the agents. The simulations were carried out in 2D space with saturation on the speeds of the agents. The results show that the per-step control strategy indeed moves the agents to their respective centroids and that the algorithm reduces the uncertainty distribution to the required level within a few steps.
Abstract:
Image and video filtering is a key image-processing task in computer vision, especially in noisy environments. In most cases the noise source is unknown, which poses a major difficulty for the filtering operation. In this paper we present an error-correction-based learning approach for iterative filtering. A new FIR filter is designed in which the filter coefficients are updated based on the Widrow-Hoff rule. Unlike standard filters, the proposed filter has the ability to remove noise without a priori knowledge of the noise. Experimental results show that the proposed filter efficiently removes the noise and preserves the edges in the image. We demonstrate the capability of the proposed algorithm by testing it on standard images corrupted by Gaussian noise and on a real-time video containing inherent noise. Experimental results show that the proposed filter outperforms some of the existing standard filters.
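A minimal Widrow-Hoff (LMS) adaptive FIR filter follows directly from the update rule w ← w + μ·e·x: predict each sample from a window of past noisy samples and nudge the coefficients by the prediction error. The adaptive-line-enhancer setup, signal, step size, and filter length below are assumptions for a 1-D illustration; the paper's 2-D image filter is not reproduced.

```python
import math, random

def lms_line_enhancer(x, taps=8, mu=0.02, delay=1):
    """Adaptive line enhancer: predict x[n] from the `taps` samples ending
    at x[n-delay], using the Widrow-Hoff update w += mu * e * window."""
    w = [0.0] * taps
    y = [0.0] * len(x)
    for n in range(delay + taps - 1, len(x)):
        window = x[n - delay - taps + 1 : n - delay + 1]  # past samples only
        y[n] = sum(wi * xi for wi, xi in zip(w, window))  # FIR prediction
        e = x[n] - y[n]                                   # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return y

rng = random.Random(0)
clean = [math.sin(0.2 * math.pi * n) for n in range(2000)]
noisy = [c + rng.gauss(0.0, 0.3) for c in clean]   # assumed Gaussian noise
filtered = lms_line_enhancer(noisy)
```

Because the sinusoid is predictable from its past while the noise is not, the filter output converges to the clean component with no a priori model of the noise, which is the property the abstract highlights.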
Abstract:
Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image and the image features, which makes standard filters application- and image-specific. The most popular filters, such as the average, Gaussian, and Wiener filters, reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated general approach to designing filters based on the discrete cosine transform (DCT) is proposed in this study for optimal medical image filtering. The algorithm exploits the good energy-compaction property of the DCT and rearranges the coefficients in a wavelet-like manner to obtain better energy clustering at desired spatial locations. It performs optimal smoothing of the noisy image while preserving high- and low-frequency features. Evaluation results show that the proposed filter is robust under various noise distributions.
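The energy-compaction step can be sketched with a plain 1-D DCT: transform the signal, keep only the largest-magnitude coefficients, and invert. This naive hard-thresholding sketch is not the wavelet-style rearrangement proposed in the study; the direct O(n²) transform and the keep-count are assumptions for illustration.

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(X)
    return [sum((math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
                * X[k] * math.cos(math.pi * (i + 0.5) * k / n)
                for k in range(n))
            for i in range(n)]

def dct_denoise(x, keep):
    """Keep the `keep` largest-magnitude DCT coefficients, zero the rest."""
    X = dct(x)
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return idct([v if k in kept else 0.0 for k, v in enumerate(X)])
```

Because the DCT concentrates most of a smooth signal's energy in few coefficients, zeroing the small ones suppresses broadband noise while leaving the dominant structure largely intact.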
Abstract:
Soot particles are generated in a flame produced by burning ethylene gas. The particles are collected thermophoretically at different locations in the flame and are used to lubricate a steel/steel ball-on-flat reciprocating sliding contact, both as a dry solid lubricant and suspended in hexadecane. Reciprocating contact is shown to establish a protective, low-friction tribo-film. The friction correlates with the level of graphitic order of the soot, which is highest in soot extracted from the mid-flame region and low in soot extracted from the flame root and flame tip regions. Micro-Raman spectroscopy of the tribo-film shows that the a priori graphitic order, the molecular carbon content of the soot, and the graphitization of the film brought about by tribology distinguish between the frictions of soot extracted from different regions of the flame, and differentiate the friction associated with dry tribology from that recorded under lubricated tribology.