72 results for A priori
Abstract:
Several replacement policies for web caches have been proposed and studied extensively in the literature. Different replacement policies perform better in terms of (i) the number of objects found in the cache (cache hits), (ii) the network traffic avoided by fetching the referenced object from the cache, or (iii) the savings in response time. In this paper, we propose a simple and efficient replacement policy (hereafter called SE) which improves all three performance measures. Trace-driven simulations were carried out to evaluate the performance of SE. We compare SE with two widely used and efficient replacement policies, namely the Least Recently Used (LRU) and Least Unified Value (LUV) algorithms. Our results show that SE performs at least as well as, if not better than, both these replacement policies. Unlike various other replacement policies proposed in the literature, our SE policy does not require parameter tuning or a priori trace analysis, and it has an efficient and simple implementation that can be incorporated into any existing proxy server or web server with ease.
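For reference, a minimal sketch of the LRU baseline named in this abstract (the SE policy itself is not specified here); the class and method names are illustrative, and the cache is bounded by total object size in bytes:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal byte-bounded LRU cache for web objects (illustrative only)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()  # url -> (object, size), ordered by recency

    def get(self, url):
        if url not in self.store:
            return None                      # cache miss
        self.store.move_to_end(url)          # mark as most recently used
        return self.store[url][0]            # cache hit

    def put(self, url, obj, size):
        if url in self.store:
            self.used -= self.store[url][1]
            del self.store[url]
        while self.store and self.used + size > self.capacity:
            _, (_, old_size) = self.store.popitem(last=False)  # evict LRU victim
            self.used -= old_size
        if size <= self.capacity:
            self.store[url] = (obj, size)
            self.used += size
```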
Abstract:
An improved Monte Carlo technique is presented in this work to simulate nanoparticle formation through a micellar route. The technique builds on the simulation technique proposed by Bandyopadhyaya et al. (Langmuir 2000, 16, 7139), which is general and rigorous but at the same time very computation intensive, so much so that nanoparticle formation in low-occupancy systems cannot be simulated in reasonable time. In view of this, several strategies, rationalized by simple mathematical analyses, are proposed to accelerate the Monte Carlo simulations. These are the elimination of infructuous events, the removal of excess reactant post-reaction, and the use of a smaller micelle population a large number of times. Infructuous events include the collision of an empty micelle with another empty one, or with one containing only a single molecule or only a solid particle. These strategies are incorporated in a new simulation technique which divides the entire micelle population into four classes and shifts micelles from one class to another as the simulation proceeds. The simulation results, thoroughly tested using chi-square and other tests, show that the predictions of the improved technique remain unchanged, but with more than an order of magnitude decrease in computational effort for some of the simulations reported in the literature. An a posteriori validation scheme for the correctness of the simulation results has been utilized to propose a new simulation strategy that arrives at converged results with near-minimum computational effort.
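A schematic sketch of one of the acceleration ideas described above: rejecting "infructuous" collisions before any expensive state update. The micelle representation and the event loop are illustrative assumptions; the four-class bookkeeping and the time/rate handling of the actual technique are not reproduced.

```python
import random

# Hypothetical minimal micelle state: number of reactant molecules plus a flag
# for whether a solid particle is already present.
def make_micelle(n_molecules=0, has_particle=False):
    return {"n": n_molecules, "particle": has_particle}

def is_infructuous(m1, m2):
    """Collision that cannot change the outcome: both micelles empty, or one
    empty and the other holding only a single molecule or only a solid particle
    (the cases listed in the abstract)."""
    def empty(m):
        return m["n"] == 0 and not m["particle"]
    def single_or_particle_only(m):
        return (m["n"] == 1 and not m["particle"]) or (m["n"] == 0 and m["particle"])
    return (empty(m1) and empty(m2)) or \
           (empty(m1) and single_or_particle_only(m2)) or \
           (empty(m2) and single_or_particle_only(m1))

def simulate(micelles, n_events, rng=random.Random(0)):
    skipped = performed = 0
    for _ in range(n_events):
        i, j = rng.sample(range(len(micelles)), 2)   # pick a colliding pair
        if is_infructuous(micelles[i], micelles[j]):
            skipped += 1                              # reject cheaply, no update
            continue
        performed += 1
        # ... fusion/fission, reaction and nucleation updates would go here ...
    return skipped, performed
```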
Abstract:
Scalable Networks-on-Chip (NoCs) are needed to match the ever-increasing communication demands of large-scale Multi-Processor Systems-on-Chip (MPSoCs) for multimedia communication applications. The heterogeneous nature of application-specific on-chip cores, along with the specific communication requirements among the cores, calls for the design of application-specific NoCs for improved performance in terms of communication energy, latency, and throughput. In this work, we propose a methodology for the design of customized irregular networks-on-chip. The proposed method exploits a priori knowledge of the application's communication characteristics to generate an optimized network topology and corresponding routing tables.
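A toy sketch of the general idea of using a priori communication characteristics (here assumed to be a symmetric core-to-core bandwidth-demand matrix) to decide which direct links to instantiate. This greedy selection is purely illustrative and is not the synthesis algorithm of the paper.

```python
import itertools

def build_topology(bandwidth, max_links):
    """Greedy illustrative link selection: instantiate direct links for the
    heaviest core-to-core flows first, up to a link budget."""
    n = len(bandwidth)
    pairs = sorted(itertools.combinations(range(n), 2),
                   key=lambda p: bandwidth[p[0]][p[1]], reverse=True)
    links = set()
    for i, j in pairs:
        if len(links) >= max_links:
            break
        if bandwidth[i][j] > 0:
            links.add((i, j))
    return links

# Example: 4 cores, heavy traffic between cores 0-1 and 2-3.
demand = [[0, 80, 5, 0],
          [80, 0, 0, 10],
          [5, 0, 0, 60],
          [0, 10, 60, 0]]
print(build_topology(demand, max_links=3))  # picks links (0,1), (2,3) and (1,3)
```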
Abstract:
Lifetime calculations for large, dense sensor networks with fixed energy resources and a given residual energy have shown that, for a constant energy resource, the fault rate at the cluster head is invariant with network size when using the network layer with no MAC losses. Even after increasing the battery capacities of the nodes, the total lifetime does not increase beyond a maximum limit of 8 times. Since this is a serious limitation, much research has been done at the MAC layer to adapt to the specific connectivity, traffic, and channel-polling needs of sensor networks. Many MAC protocols control the channel polling of the new radios available to sensor nodes for communication; this further reduces communication overhead through idling and sleep scheduling, thereby extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at a single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are the same and are stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1 and ω2 and expected error P*, the single-hop error is bounded by a maximum error rate of P = 2P*. We study the effects of energy losses using cross-layer simulation of a large sensor-network MAC setup, and the error rate that affects finding sufficient node densities for reliable multi-hop communications when node densities are unknown. The simulation results show that even though the lifetime is comparable, the expected Bayesian posterior probability error is close to, or higher than, the bound P ≥ 2P*.
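For orientation, the 2P* bound quoted above matches the classical Cover-Hart nearest-neighbour result for a two-class problem with densities ω1, ω2; stated schematically below in its textbook form, which is assumed here to be the bound the abstract invokes:

```latex
P^{*} \;\le\; P \;\le\; 2P^{*}\bigl(1 - P^{*}\bigr) \;\le\; 2P^{*},
\qquad
P^{*} = \int \min\bigl[\,p(\omega_1 \mid x),\, p(\omega_2 \mid x)\,\bigr]\, p(x)\, dx .
```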
Abstract:
This paper addresses the problem of multiagent search in an unknown environment. The agents are autonomous in nature and are equipped with the necessary sensors to carry out the search operation. The uncertainty, or lack of information, about the search area is known a priori as a probability density function. The agents are deployed in an optimal way so as to maximize the one-step uncertainty reduction. The agents continue to deploy themselves and reduce uncertainty until the uncertainty density over the search space is reduced below a minimum acceptable level. It has been shown, using LaSalle's invariance principle, that a distributed control law which moves each agent towards the centroid of its Voronoi partition, modified by the sensor range, leads to single-step optimal deployment. This principle is then used to devise search trajectories for the agents. The simulations were carried out in 2D space with saturation on the speeds of the agents. The results show that the per-step control strategy indeed moves the agents to their respective centroids and that the algorithm reduces the uncertainty distribution to the required level within a few steps.
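A minimal numerical sketch of one density-weighted Voronoi-centroid update of the kind described above, on a gridded 2D search space. The grid size, density, gain, and initial positions are illustrative assumptions; the sensor-range modification and the LaSalle-based analysis of the paper are not reproduced.

```python
import numpy as np

def centroid_step(agents, density, xs, ys, gain=0.5):
    """Move each agent toward the density-weighted centroid of its Voronoi cell.

    agents  : (M, 2) array of agent positions
    density : (Ny, Nx) uncertainty density over the grid
    xs, ys  : 1D grid coordinates
    """
    X, Y = np.meshgrid(xs, ys)                       # (Ny, Nx)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)   # (Ny*Nx, 2)
    w = density.ravel()
    # Assign every grid cell to its nearest agent (discrete Voronoi partition).
    d2 = ((pts[:, None, :] - agents[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    new_agents = agents.copy()
    for k in range(len(agents)):
        mask = owner == k
        mass = w[mask].sum()
        if mass > 0:
            centroid = (pts[mask] * w[mask, None]).sum(axis=0) / mass
            new_agents[k] += gain * (centroid - agents[k])   # step toward centroid
    return new_agents

# Illustrative use: 3 agents on [0,1]^2 with a Gaussian uncertainty bump.
xs = ys = np.linspace(0.0, 1.0, 50)
X, Y = np.meshgrid(xs, ys)
density = np.exp(-((X - 0.7) ** 2 + (Y - 0.3) ** 2) / 0.02)
agents = np.array([[0.1, 0.1], [0.5, 0.9], [0.2, 0.8]])
for _ in range(20):
    agents = centroid_step(agents, density, xs, ys)
print(agents)   # agents drift toward the high-uncertainty region
```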
Abstract:
Image and video filtering is a key image-processing task in computer vision, especially in noisy environments. In most cases the noise source is unknown and hence poses a major difficulty for the filtering operation. In this paper we present an error-correction based learning approach for iterative filtering. A new FIR filter is designed in which the filter coefficients are updated based on the Widrow-Hoff rule. Unlike standard filters, the proposed filter has the ability to remove noise without a priori knowledge of the noise. Experimental results show that the proposed filter efficiently removes noise and preserves the edges in the image. We demonstrate the capability of the proposed algorithm by testing it on standard images corrupted by Gaussian noise and on a real-time video containing inherent noise. Experimental results show that the proposed filter is better than some of the existing standard filters.
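A minimal 1D sketch of a Widrow-Hoff (LMS) coefficient update of the kind mentioned above. The step size, filter length, and the use of a supplied desired signal are illustrative assumptions; the paper's error-correction scheme for images and video, which adapts without such a reference, is not reproduced.

```python
import numpy as np

def lms_filter(x, d, n_taps=5, mu=0.01):
    """Adaptive FIR filtering with the Widrow-Hoff (LMS) rule:
    w <- w + mu * e * x_window, where e = desired - filter output."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    for n in range(n_taps, len(x)):
        window = x[n - n_taps:n][::-1]    # most recent samples first
        y[n] = w @ window                 # filter output
        e = d[n] - y[n]                   # instantaneous error
        w += mu * e * window              # Widrow-Hoff update
    return y, w

# Illustrative use: recover a slow sinusoid from a noisy observation.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
filtered, taps = lms_filter(noisy, clean, n_taps=8, mu=0.02)
```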
Abstract:
Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image and about the image features. This makes standard filters application- and image-specific. The most popular filters, such as the average, Gaussian, and Wiener filters, reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated general approach to designing filters based on the discrete cosine transform (DCT) is proposed in this study for optimal medical image filtering. The algorithm exploits the strong energy-compaction property of the DCT and rearranges the coefficients in a wavelet-like manner to obtain better energy clustering at the desired spatial locations. It performs optimal smoothing of the noisy image while preserving both high- and low-frequency features. Evaluation results show that the proposed filter is robust under various noise distributions.
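A small sketch of the DCT energy-compaction idea referred to above, reduced to plain hard-thresholding of 2D DCT coefficients; the wavelet-style coefficient rearrangement of the proposed algorithm is not reproduced, and the threshold and test image are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(image, threshold):
    """Suppress small 2D DCT coefficients (where little signal energy is
    compacted) and reconstruct; a crude illustration of DCT-domain filtering."""
    coeffs = dctn(image, norm="ortho")
    coeffs[np.abs(coeffs) < threshold] = 0.0   # hard-threshold weak coefficients
    return idctn(coeffs, norm="ortho")

# Illustrative use with a synthetic noisy image.
rng = np.random.default_rng(1)
clean = np.outer(np.hanning(64), np.hanning(64))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = dct_denoise(noisy, threshold=0.5)
```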
Abstract:
Soot particles are generated in a flame produced by burning ethylene gas. The particles are collected thermophoretically at different locations in the flame and are used to lubricate a steel-on-steel ball-on-flat reciprocating sliding contact, both as a dry solid lubricant and suspended in hexadecane. Reciprocating contact is shown to establish a protective, low-friction tribo-film. The friction correlates with the level of graphitic order of the soot, which is highest in the soot extracted from the mid-flame region and low in the soot extracted from the flame root and flame tip regions. Micro-Raman spectroscopy of the tribo-film shows that the a priori graphitic order, the molecular carbon content of the soot, and the graphitization of the film brought about by tribology distinguish between the friction of soot extracted from different regions of the flame, and differentiate the friction associated with dry tribology from that recorded under lubricated tribology.
Abstract:
This paper reports on our study of the edge of the 2/5 fractional quantum Hall state, which is more complicated than the edge of the 1/3 state because of the presence of edge sectors corresponding to different partitions of composite fermions in the lowest two Λ levels. The addition of an electron at the edge is a nonperturbative process and it is not a priori obvious in what manner the added electron distributes itself over these sectors. We show, from a microscopic calculation, that when an electron is added at the edge of the ground state in the [N₁, N₂] sector, where N₁ and N₂ are the numbers of composite fermions in the lowest two Λ levels, the resulting state lies in either the [N₁ + 1, N₂] or the [N₁, N₂ + 1] sector; adding an electron at the edge is thus equivalent to adding a composite fermion at the edge. The coupling to other sectors of the form [N₁ + 1 + k, N₂ − k], k integer, is negligible in the asymptotically low-energy limit. This study also allows a detailed comparison with the two-boson model of the 2/5 edge. We compute the spectral weights and find that while the individual spectral weights are complicated and nonuniversal, their sum is consistent with an effective two-boson description of the 2/5 edge.
Abstract:
The eigenvalues and eigenfunctions corresponding to the three-dimensional equations of linear elastic equilibrium for a clamped plate of thickness 2ϵ are shown to converge (in a specific sense) to the eigenvalues and eigenfunctions of the well-known two-dimensional biharmonic operator of plate theory as ϵ approaches zero. In the process, it is found in particular that the displacements and stresses are indeed of the specific forms usually assumed a priori in the literature. It is also shown that the limit eigenvalues and eigenfunctions can be equivalently characterized as the leading terms in an asymptotic expansion of the three-dimensional solutions in terms of powers of ϵ. The method presented here applies equally well to the stationary problem of linear plate theory, as shown elsewhere by P. Destuynder.
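For reference, the two-dimensional limit problem referred to above is the classical clamped-plate eigenvalue problem for the biharmonic operator, written here in its standard form over the plate mid-surface ω (normalizing constants involving the thickness and elastic moduli are omitted, as an assumption of this sketch):

```latex
\Delta^{2} u \;=\; \lambda\, u \quad \text{in } \omega,
\qquad
u \;=\; \frac{\partial u}{\partial n} \;=\; 0 \quad \text{on } \partial\omega .
```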
Abstract:
The questions one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost is, in terms of the amount of computation and the amount of storage used to obtain the outputs. The absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error bounds. The fact that the exact error is never known under any circumstances or in any context implies that the term error means nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or near-inconsistency in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
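For concreteness, the relative error (and relative error-bound) the abstract keeps referring to is the usual quantity below; this is the standard definition, not a formula specific to this talk, and the quoted 0.005 per cent corresponds to 5 × 10⁻⁵:

```latex
\text{relative error} \;=\; \frac{|\hat{x} - x|}{|x|} \;\le\; \varepsilon_{\mathrm{rel}},
\qquad
0.005\% \;=\; 5 \times 10^{-5}.
```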
Abstract:
We address the problem of estimating the instantaneous frequency (IF) of a real-valued, constant-amplitude, time-varying sinusoid. Estimation of a polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate a nonpolynomial IF by local approximation with a low-order polynomial over a short segment of the signal. This involves choosing the window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive-window zero-crossing-based IF estimation method is superior to fixed-window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD) based IF estimators for different signal-to-noise ratios (SNRs).
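A rough sketch of the zero-crossing formulation described above: local frequency estimates are taken from successive zero-crossing spacings and smoothed by a low-order polynomial fit. For simplicity the fit is done over the whole record rather than over an adaptively chosen window; the intersection-of-confidence-intervals rule of the paper is not reproduced, and the test chirp is an illustrative assumption.

```python
import numpy as np

def zero_crossing_if(x, fs, poly_order=2):
    """Instantaneous-frequency estimate from zero crossings of a real sinusoid.

    Each half-period between consecutive zero crossings gives a local frequency
    sample f = 1 / (2 * spacing); a low-order polynomial is then fitted to
    these samples."""
    signs = np.signbit(x)
    idx = np.nonzero(signs[1:] != signs[:-1])[0]        # samples before a crossing
    frac = x[idx] / (x[idx] - x[idx + 1])               # sub-sample refinement
    t_cross = (idx + frac) / fs
    spacing = np.diff(t_cross)                           # half-period durations
    f_local = 1.0 / (2.0 * spacing)
    t_mid = 0.5 * (t_cross[:-1] + t_cross[1:])
    coeffs = np.polyfit(t_mid, f_local, poly_order)      # polynomial IF model
    return np.poly1d(coeffs), t_mid, f_local

# Illustrative use: linear chirp whose IF sweeps from 20 Hz to 40 Hz in 1 s.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * (20 * t + 10 * t ** 2))
if_poly, t_mid, f_local = zero_crossing_if(x, fs)
print(if_poly(0.5))   # close to the true IF of 30 Hz at t = 0.5 s
```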
Abstract:
Chemical reactions inside cells are typically subject to the effects both of the cell's confining surfaces and of the viscoelastic behavior of its contents. In this paper, we show how the outcome of one particular reaction of relevance to cellular biochemistry - the diffusion-limited cyclization of long chain polymers - is influenced by such confinement and crowding effects. More specifically, starting from the Rouse model of polymer dynamics, and invoking the Wilemski-Fixman approximation, we determine the scaling relationship between the mean closure time t_c of a flexible chain (no excluded volume or hydrodynamic interactions) and the length N of its contour under the following separate conditions: (a) confinement of the chain to a sphere of radius d and (b) modulation of its dynamics by colored Gaussian noise. Among other results, we find that in case (a), when d is much smaller than the size of the chain, t_c ~ N d^2, and that in case (b), t_c ~ N^{2/(2 - 2H)}, H being a number between 1/2 and 1 that characterizes the decay of the noise correlations. H is not known a priori, but values of about 0.7 have been used in the successful characterization of protein conformational dynamics. At this value of H (selected for purposes of illustration), t_c ~ N^{3.4}, the high scaling exponent reflecting the slow relaxation of the chain in a viscoelastic medium. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4729041
Abstract:
We develop a quadratic C⁰ interior penalty method for linear fourth-order boundary value problems with essential and natural boundary conditions of the Cahn-Hilliard type. Both a priori and a posteriori error estimates are derived. The performance of the method is illustrated by numerical experiments.
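Schematically, and only as generic forms rather than the specific theorems of this paper, an a priori estimate bounds the discretization error by regularity data of the unknown exact solution u, while an a posteriori estimate bounds it by a computable quantity η of the discrete solution u_h (the norm, exponent, and estimator below are placeholders):

```latex
\text{a priori:}\quad \|u - u_h\|_{h} \;\le\; C\, h^{\alpha}\, \|u\|_{H^{s}(\Omega)},
\qquad
\text{a posteriori:}\quad \|u - u_h\|_{h} \;\le\; C\, \eta(u_h, f).
```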
Abstract:
We address the problem of detecting cells in biological images. The problem is important in many automated image analysis applications. We identify the problem as one of clustering and formulate it within the framework of robust estimation using loss functions. We show how suitable loss functions may be chosen based on a priori knowledge of the noise distribution. Specifically, in the context of biological images, since the measurement noise is not Gaussian, quadratic loss functions yield suboptimal results. We show that by incorporating the Huber loss function, cells can be detected robustly and accurately. To initialize the algorithm, we also propose a seed selection approach. Simulation results show that Huber loss exhibits better performance compared with some standard loss functions. We also provide experimental results on confocal images of yeast cells. The proposed technique exhibits good detection performance even when the signal-to-noise ratio is low.
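A compact sketch of the general robust-estimation idea behind this abstract: a k-means-like clustering in which each center is an iteratively reweighted mean with Huber weights, so outlying points pull on the centers far less than under a quadratic loss. The cluster count, Huber parameter, and synthetic data are illustrative assumptions; the paper's seed-selection step and full cell-detection pipeline are not reproduced.

```python
import numpy as np

def huber_weight(r, delta):
    """Huber influence-function weight: 1 inside delta, delta/|r| outside."""
    r = np.maximum(np.abs(r), 1e-12)
    return np.where(r <= delta, 1.0, delta / r)

def robust_kmeans(points, k, delta=1.0, n_iter=50, seed=0):
    """Clustering where each center is a Huber-weighted mean of its members."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        label = d.argmin(axis=1)
        for j in range(k):
            members = points[label == j]
            if len(members) == 0:
                continue
            r = np.linalg.norm(members - centers[j], axis=1)
            w = huber_weight(r, delta)
            centers[j] = (w[:, None] * members).sum(axis=0) / w.sum()
    return centers, label

# Illustrative use: two point clouds ("cells") plus a few gross outliers.
rng = np.random.default_rng(1)
cells = np.vstack([rng.normal([2, 2], 0.3, (50, 2)),
                   rng.normal([6, 5], 0.3, (50, 2)),
                   rng.uniform(0, 10, (5, 2))])      # outliers
centers, labels = robust_kmeans(cells, k=2, delta=0.5)
```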