853 results for Packet Filtering


Relevance: 10.00%

Publisher:

Abstract:

We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of the decoupling approach of Bianchi ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that, even when analysing WLANs with unsaturated nodes, the following state-dependent service model works well: when a certain set of nodes is nonempty, their channel attempt behaviour is obtained from the corresponding fixed point analysis of the saturated system. We will present our experiences in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced DCF channel access (EDCA) mechanism. We have found that we can model TCP-controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting by this simple approximation.
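The Bianchi-style fixed point mentioned in this abstract can be illustrated with a short numerical sketch. The following is a minimal single-class saturation fixed point, not the authors' multi-class 802.11e model; the contention window W, the number of backoff stages m, and the damping of the iteration are illustrative assumptions.

```python
# Minimal sketch of a Bianchi-style saturation fixed point for n saturated nodes.
# W (minimum contention window) and m (backoff stages) are assumed values.

def bianchi_fixed_point(n, W=32, m=5, iters=10_000, tol=1e-12):
    tau = 0.1  # initial guess for the per-slot attempt probability
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # collision probability seen by a node
        tau_new = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        if abs(tau_new - tau) < tol:
            return tau_new, p
        tau = 0.5 * (tau + tau_new)               # damped update for numerical stability
    return tau, p

if __name__ == "__main__":
    tau, p = bianchi_fixed_point(n=10)
    print(f"attempt prob ~ {tau:.4f}, collision prob ~ {p:.4f}")
```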

Relevance: 10.00%

Publisher:

Abstract:

The problem of detecting an unknown transient signal in noise is considered. The SNR of the observed data is first enhanced using a wavelet domain filter. The output of the wavelet domain filter is then transformed using a Wigner-Ville transform, which separates the spectrum of the observed signal into narrow frequency bands. Each subband signal at the output of the Wigner-Ville block is subjected to wavelet-based level-dependent denoising (WBLDD) to suppress colored noise. A weighted sum of the absolute values of the WBLDD outputs is passed through an energy detector, whose output is used as the test statistic to take the final decision. By assigning weights proportional to the energy of the corresponding subband signals, the proposed detector approximates a frequency domain matched filter. Simulation results are presented to show that the performance of the proposed detector is better than that of the wavelet packet transform based detector.
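As a rough illustration of the final detection step described above, the sketch below forms a weighted sum of absolute subband outputs with weights proportional to subband energy and uses it as a test statistic. The synthetic subbands stand in for the denoised WBLDD outputs; this is not the paper's actual processing chain.

```python
import numpy as np

def weighted_energy_statistic(subbands):
    """Weighted sum of absolute subband outputs, with weights proportional to
    subband energy. The subbands stand in for denoised WBLDD outputs."""
    energies = np.array([np.sum(x ** 2) for x in subbands])
    weights = energies / energies.sum()
    return sum(w * np.sum(np.abs(x)) for w, x in zip(weights, subbands))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = np.arange(256)
    noise_only = [rng.normal(0, 1, 256) for _ in range(4)]
    with_signal = [x + np.sin(0.2 * (k + 1) * n) for k, x in enumerate(noise_only)]
    print("noise only     :", round(weighted_energy_statistic(noise_only), 1))
    print("signal + noise :", round(weighted_energy_statistic(with_signal), 1))
    # A detection threshold would be calibrated from noise-only statistics.
```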

Relevance: 10.00%

Publisher:

Abstract:

The increasing use of 3D modeling of the Human Face in Face Recognition systems, User Interfaces, Graphics, Gaming and the like has made it an area of active study. The majority of 3D sensors rely on color coded light projection for 3D estimation. Such systems fail to generate any response in regions covered by Facial Hair (like beard, mustache), and hence generate holes in the model which have to be filled manually later on. We propose the use of wavelet transform based analysis to extract the 3D model of Human Faces from a sinusoidal white light fringe projected image. Our method requires only a single image as input. The method is robust to texture variations on the face due to the space-frequency localization property of the wavelet transform. It can generate models to pixel-level refinement as the phase is estimated for each pixel by a continuous wavelet transform. In cases of sparse Facial Hair, the shape distortions due to hairs can be filtered out, yielding an estimate for the underlying face. We use a low-pass filtering approach to estimate the face texture from the same image. We demonstrate the method on several Human Faces, both with and without Facial Hair. Unseen views of the face are generated by texture mapping on different rotations of the obtained 3D structure. To the best of our knowledge, this is the first attempt to estimate 3D for Human Faces in the presence of Facial Hair structures like beard and mustache without generating holes in those areas.
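A simplified sketch of per-pixel phase estimation with a continuous wavelet transform is given below for a single 1-D fringe profile. The complex Morlet wavelet, the scale range, and the synthetic fringe are assumptions; the full 2-D processing of the paper is not reproduced.

```python
import numpy as np

def morlet_cwt_phase(signal, scales, w0=6.0):
    """Per-sample phase taken at the ridge (max-|CWT| scale) of a complex
    Morlet continuous wavelet transform. Purely illustrative."""
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)
        # correlation with the wavelet = convolution with its conjugate reverse
        coeffs[i] = np.convolve(signal, np.conj(wavelet[::-1]), mode="same")
    ridge = np.argmax(np.abs(coeffs), axis=0)          # dominant scale per pixel
    phase = np.angle(coeffs[ridge, np.arange(n)])
    return np.unwrap(phase)

if __name__ == "__main__":
    x = np.arange(512)
    # synthetic fringe with a slowly varying phase term
    fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / 16 + 0.002 * (x - 256) ** 2 / 50)
    phase = morlet_cwt_phase(fringe - fringe.mean(), scales=np.arange(4, 40))
    print(phase[:5])  # unwrapped phase, proportional to height after calibration
```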

Relevance: 10.00%

Publisher:

Abstract:

The neural network finds its application in many image denoising applications because of its inherent characteristics such as nonlinear mapping and self-adaptiveness. The design of filters largely depends on a priori knowledge about the type of noise. Due to this, standard filters are application and image specific. Widely used filtering algorithms reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high frequency details, making the image non-smooth. An integrated general approach to designing a finite impulse response filter based on a principal component neural network (PCNN) is proposed in this study for image filtering, optimized in terms of visual inspection and an error metric. The algorithm exploits the inter-pixel correlation by iteratively updating the filter coefficients using the PCNN, and performs optimal smoothing of the noisy image while preserving high and low frequency features. Evaluation results show that the proposed filter is robust under various noise distributions. Further, the number of unknown parameters is very small, and most of these parameters are adaptively obtained from the processed image.
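The kind of principal-component update underlying a PCNN can be sketched with Oja's single-neuron PCA rule applied to image patches. This is a generic illustration, not the study's filter design; the learning rate, patch size, and iteration counts are assumed values.

```python
import numpy as np

def oja_pca_filter(image, ksize=3, lr=1e-3, epochs=5, seed=0):
    """Learn a ksize x ksize FIR kernel from image patches with Oja's rule
    (single-neuron PCA). Illustrative sketch; hyperparameters are assumptions."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 0.1, ksize * ksize)
    w /= np.linalg.norm(w)
    h, wd = image.shape
    for _ in range(epochs):
        for _ in range(2000):
            r = rng.integers(0, h - ksize)
            c = rng.integers(0, wd - ksize)
            x = image[r:r + ksize, c:c + ksize].ravel()
            x = x - x.mean()                   # remove local mean
            y = w @ x                          # neuron output
            w += lr * y * (x - y * w)          # Oja's rule keeps ||w|| near 1
    return w.reshape(ksize, ksize)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.tile(np.linspace(0, 1, 64), (64, 1))
    noisy = clean + rng.normal(0, 0.05, clean.shape)
    print(oja_pca_filter(noisy).round(3))
```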

Relevance: 10.00%

Publisher:

Abstract:

In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed point analysis and performance insights in [1]. When there are a certain number of nodes of each class contending for the channel (i.e., having nonempty queues), their attempt probabilities are taken to be those obtained from the saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
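The state-dependent attempt heuristic can be illustrated by computing the idle, success, and collision probabilities of a single channel slot given the attempt probabilities of the currently non-empty voice and data nodes. The attempt probability values below are placeholders; in the paper they would come from the saturation fixed-point analysis.

```python
def slot_outcome_probs(n_v, tau_v, n_d, tau_d):
    """Idle / success / collision probabilities in one contention slot, given
    n_v voice and n_d data nodes attempting with probabilities tau_v, tau_d.
    The taus are illustrative placeholders for saturation fixed-point values."""
    p_idle = (1 - tau_v) ** n_v * (1 - tau_d) ** n_d
    p_succ_v = n_v * tau_v * (1 - tau_v) ** (n_v - 1) * (1 - tau_d) ** n_d
    p_succ_d = n_d * tau_d * (1 - tau_d) ** (n_d - 1) * (1 - tau_v) ** n_v
    p_coll = 1 - p_idle - p_succ_v - p_succ_d
    return p_idle, p_succ_v, p_succ_d, p_coll

if __name__ == "__main__":
    print(slot_outcome_probs(n_v=5, tau_v=0.06, n_d=3, tau_d=0.02))
```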

Relevance: 10.00%

Publisher:

Abstract:

Image filtering techniques have potential applications in biomedical image processing such as image restoration and image enhancement. The potential of traditional filters largely depends on a priori knowledge about the type of noise corrupting the image. This makes standard filters application specific. For example, the well-known median filter and its variants can remove salt-and-pepper (or impulse) noise at low noise levels. Each of these methods has its own advantages and disadvantages. In this paper, we introduce a new finite impulse response (FIR) filter for image restoration in which the filter undergoes a learning procedure. The filter coefficients are adaptively updated based on correlated Hebbian learning. The algorithm exploits the inter-pixel correlation in the form of Hebbian learning and hence performs optimal smoothing of the noisy images. The application of the proposed filter to images corrupted with Gaussian noise results in restorations which are better in quality than those restored by average and Wiener filters. The restored image is found to be visually appealing and artifact-free.
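A loose sketch of correlation-driven learning of an FIR restoration kernel is shown below, compared against a plain averaging filter. The normalized Hebbian update, learning rate, and synthetic test image are illustrative assumptions and do not reproduce the paper's correlated Hebbian scheme.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def hebbian_kernel(noisy, ksize=3, lr=1e-3, steps=5000, seed=0):
    """Normalized Hebbian update of a ksize x ksize FIR kernel from noisy
    patches. Sketch only; learning rate and step count are guesses."""
    rng = np.random.default_rng(seed)
    w = np.full(ksize * ksize, 1.0 / (ksize * ksize))
    h, wd = noisy.shape
    for _ in range(steps):
        r = rng.integers(0, h - ksize)
        c = rng.integers(0, wd - ksize)
        x = noisy[r:r + ksize, c:c + ksize].ravel()
        y = w @ x
        w += lr * y * x                     # Hebbian: strengthen correlated inputs
        w /= np.linalg.norm(w)              # keep the kernel bounded
    return (w / w.sum()).reshape(ksize, ksize)   # unit DC gain for restoration

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.outer(np.hanning(64), np.hanning(64))
    noisy = clean + rng.normal(0, 0.05, clean.shape)
    restored = convolve(noisy, hebbian_kernel(noisy), mode="reflect")
    averaged = uniform_filter(noisy, size=3, mode="reflect")

    def mse(a):
        return np.mean((a - clean) ** 2)

    print(f"noisy {mse(noisy):.5f}  hebbian {mse(restored):.5f}  mean {mse(averaged):.5f}")
```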

Relevance: 10.00%

Publisher:

Abstract:

We consider the slotted ALOHA protocol on a channel with a capture effect. There are M users; when i packets are transmitted in a slot, the probability of successful reception of a packet is q(i). This model contains the CDMA protocols as special cases. We obtain sufficient rate conditions, which are close to necessary, for stability of the system when the arrival streams are stationary ergodic. Under the same rate conditions, for general regenerative arrival streams, we obtain the rates of convergence to stationarity, finiteness of stationary moments and various functional limit theorems. Our arrival streams contain all the traffic models suggested in the recent literature, including the ones which display long range dependence. We also obtain bounds on the stationary moments of waiting times which can be tight under realistic conditions. Finally, we obtain several results on the transient performance of the system, e.g., the first time to overflow and the limits of the overflow process. We also extend the above results to the case of a capture channel exhibiting Markov modulated fading. Most of our results and proofs will be shown to hold also for the slotted ALOHA protocol without capture.
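A small simulation sketch of slotted ALOHA with capture is given below: when i nodes transmit in a slot, one packet is captured with probability q(i). The arrival process, transmission probability, and the form of q(·) are illustrative assumptions, not the paper's model.

```python
import numpy as np

def simulate_aloha_capture(M=10, p_tx=0.08, arrival_rate=0.03,
                           q=lambda i: 0.9 ** (i - 1), slots=200_000, seed=0):
    """Slotted ALOHA with capture: if i nodes transmit in a slot, one packet is
    received successfully with probability q(i). All parameters are assumed."""
    rng = np.random.default_rng(seed)
    queues = np.zeros(M, dtype=int)
    delivered = 0
    for _ in range(slots):
        queues += rng.random(M) < arrival_rate            # Bernoulli arrivals
        transmitters = np.flatnonzero((queues > 0) & (rng.random(M) < p_tx))
        i = len(transmitters)
        if i > 0 and rng.random() < q(i):                 # capture: one winner
            queues[rng.choice(transmitters)] -= 1
            delivered += 1
    return delivered / slots, queues.mean()

if __name__ == "__main__":
    thr, backlog = simulate_aloha_capture()
    print(f"throughput ~ {thr:.3f} packets/slot, mean backlog ~ {backlog:.1f}")
```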

Relevance: 10.00%

Publisher:

Abstract:

The problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
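For the linear case, the measurement-updating idea can be illustrated with a textbook scalar discrete Kalman filter. The state model and noise variances below are assumptions and do not represent the structural models treated in the paper.

```python
import numpy as np

def kalman_filter(z, a=1.0, q=1e-3, r=1e-2):
    """Scalar discrete Kalman filter for x_k = a*x_{k-1} + w_k, z_k = x_k + v_k.
    Generic textbook filter; a, q (process noise var), r (measurement noise var)
    are assumed values."""
    x_hat, p = 0.0, 1.0
    estimates = []
    for zk in z:
        # prediction step
        x_hat, p = a * x_hat, a * a * p + q
        # measurement update
        k = p / (p + r)
        x_hat += k * (zk - x_hat)
        p *= (1.0 - k)
        estimates.append(x_hat)
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_x = np.cumsum(rng.normal(0, 0.03, 500))          # slowly varying "response"
    z = true_x + rng.normal(0, 0.1, 500)                  # noisy sensor data
    est = kalman_filter(z)
    print(f"rms error raw {np.std(z - true_x):.3f}, filtered {np.std(est - true_x):.3f}")
```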

Relevance: 10.00%

Publisher:

Abstract:

A method used for screening drugs of abuse must be sensitive, selective, simple, fast and reproducible. The aim of this work was to develop a simple but sensitive sample pretreatment method for the qualitative screening of benzodiazepines and amphetamine derivatives in urine using a micropillar electrospray ionization chip (μPESI), which would offer an alternative to the immunological screening methods, whose sensitivity and selectivity are inadequate. At the same time, the aim was to examine how well the micropillar electrospray chip performs in the analysis of biological samples. The pretreatment was optimized separately for benzodiazepines and amphetamine derivatives. The pretreatment methods used were liquid-liquid extraction, solid-phase extraction with an Oasis HLB cartridge and with a ZipTip® pipette tip, and dilution and filtration without extraction. Based on the measurements, the focus was placed on optimizing the ZipTip® extraction. In the optimization, the analytes were spiked into blank urine at their predetermined cut-off levels, benzodiazepines at 200 ng/ml and amphetamine derivatives at 300 ng/ml. For the benzodiazepines, each extraction step was optimized; as a result, the sample pH was adjusted to 5, the phase was conditioned with acetonitrile, equilibrated and washed with a mixture of water (pH 5) and acetonitrile (10% v/v), and eluted with a mixture of acetonitrile, formic acid and water (95:1:4 v/v/v). In the extraction of amphetamine derivatives, the pH values of the sample and solvents were optimized; as a result, the sample pH was adjusted to 10, the phase was conditioned with a mixture of water and ammonium hydrogen carbonate (pH 10, 1:1 v/v), equilibrated and washed with a mixture of acetonitrile and water (1:5 v/v), and eluted with methanol. The optimized extractions were tested with authentic urine samples provided by Yhtyneet Medix Laboratoriot, and the results obtained were compared with those of quantitative GC/MS analysis. The benzodiazepine samples were hydrolyzed before extraction to improve sensitivity. The authentic samples were analyzed with a Q-TOF instrument in Viikki. In addition, the hydrolyzed benzodiazepine samples were measured with the TOF instrument of Yhtyneet Medix Laboratoriot. Based on the results, the developed method requires further optimization in order to work. A particular problem was the variation of results observed in replicate measurements; the manual sample introduction should be made more reproducible. In the analysis of authentic benzodiazepine samples, the problem was false negative results, and in the analysis of amphetamine derivatives, false positive results. The false negatives are explained by the lack of sensitivity of the method, and the false positives by contamination of the instrument, the chips or the solvents.

Relevance: 10.00%

Publisher:

Abstract:

The near flow field of small aspect ratio elliptic turbulent free jets (issuing from a nozzle and an orifice) was experimentally studied using 2D PIV. Two-point velocity correlations in these jets revealed the extent and orientation of the large scale structures in the major and minor planes. The spatial filtering of the instantaneous velocity field using a Gaussian convolution kernel shows that while a single large vortex ring circumscribing the jet seems to be present at the exit of the nozzle, the orifice jet exhibited a number of smaller vortex ring pairs close to the jet exit. The smaller length scale observed in the case of the orifice jet is representative of the smaller azimuthal vortex rings that generate an axial vortex field as they are convected. This results in the axis-switching in the case of the orifice jet and may have a mechanism different from the self induction process observed in the case of the contoured nozzle jet flow.
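A minimal sketch of Gaussian spatial filtering of an instantaneous velocity field, of the kind referred to above, is shown below; the synthetic shear-layer field and the kernel width are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: separate a 2-D velocity field into large-scale and small-scale parts
# with a Gaussian convolution kernel. Field and filter width are assumptions.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
u = np.tanh((y - 64) / 10.0) + rng.normal(0, 0.2, (128, 128))   # shear layer + noise
v = rng.normal(0, 0.2, (128, 128))

sigma = 3.0                                    # kernel width in grid spacings
u_large = gaussian_filter(u, sigma)            # large-scale (filtered) field
v_large = gaussian_filter(v, sigma)
u_small = u - u_large                          # small-scale residual

print("rms: total %.3f, large-scale %.3f, small-scale %.3f"
      % (u.std(), u_large.std(), u_small.std()))
```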

Relevance: 10.00%

Publisher:

Abstract:

We study a sensor node with an energy harvesting source. In any slot, the sensor node is in one of two modes: wake or sleep. The generated energy is stored in a buffer. The sensor node senses a random field and generates a packet when it is awake. These packets are stored in a queue and transmitted in the wake mode using the energy available in the energy buffer. We obtain energy management policies which minimize a linear combination of the mean queue length and the mean data loss rate. Then, we obtain two easily implementable suboptimal policies and compare their performance to that of the optimal policy. Next, we extend the throughput optimal policy developed in our previous work to sensors with two modes. Via this policy, we can increase the throughput substantially and stabilize the data queue by allowing the node to sleep in some slots and to drop some generated packets. This policy requires minimal statistical knowledge of the system. We also modify this policy to decrease the switching costs.
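A toy slotted simulation of a two-mode (wake/sleep) sensor node with an energy buffer and a data queue is sketched below under a simple heuristic policy. The harvesting rate, arrival rate, buffer size, and sleep schedule are assumptions, and this is not one of the paper's optimal or throughput-optimal policies.

```python
import numpy as np

def simulate_node(slots=100_000, p_harvest=0.5, p_arrival=0.4,
                  tx_cost=1, sleep_every=4, seed=0):
    """Toy slotted model of a sensor node with an energy buffer and a data
    queue, alternating wake/sleep on a fixed schedule. Illustrative heuristic."""
    rng = np.random.default_rng(seed)
    energy, queue, dropped, sent = 0, 0, 0, 0
    for t in range(slots):
        energy += rng.random() < p_harvest              # harvested energy unit
        awake = (t % sleep_every) != 0                  # sleep every 4th slot
        if awake and rng.random() < p_arrival:          # sense only when awake
            if queue < 50:
                queue += 1
            else:
                dropped += 1                            # finite data buffer
        if awake and queue > 0 and energy >= tx_cost:   # transmit if possible
            queue -= 1
            energy -= tx_cost
            sent += 1
    return sent / slots, dropped / slots, queue

if __name__ == "__main__":
    print(simulate_node())   # (throughput, drop rate, final queue length)
```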

Relevance: 10.00%

Publisher:

Abstract:

Hardware constraints, which motivate receive antenna selection (AS), also require that the various antenna elements at the receiver be sounded sequentially to obtain the estimates required for selecting the `best' antenna and for coherently demodulating data thereafter. Consequently, the channel state information at different antennas is outdated by different amounts and corrupted by noise. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimum. Rather, a preferable strategy is to linearly weight the channel estimates of different antennas differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels for arbitrary selection weights, and validate them with simulations. We then characterize explicitly the optimal selection weights that minimize the SEP. We also consider packet reception, in which multiple symbols of a packet are received by the same antenna. New suboptimal, but computationally efficient, weighted selection schemes are proposed for reducing the packet error rate. The benefits of weighted selection are also demonstrated using a practical channel code used in third generation cellular systems. Our results show that optimal weighted selection yields a significant performance gain over conventional unweighted selection.
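A Monte Carlo sketch of weighted versus unweighted receive antenna selection with outdated estimates is given below. Each antenna's estimate is correlated with the true gain to a different degree because the antennas are sounded sequentially; the correlation values and the simple reliability-based weights are illustrative assumptions, not the paper's optimal weights.

```python
import numpy as np

def compare_selection(n_ant=4, snr_db=10, trials=200_000, seed=0):
    """Receive antenna selection with outdated estimates. Antenna i's estimate
    has correlation rho[i] with the true gain; weighted selection scales each
    estimate by rho[i]. All values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    rho = np.linspace(0.80, 0.99, n_ant)                 # staleness per antenna (assumed)
    h = (rng.normal(size=(trials, n_ant)) + 1j * rng.normal(size=(trials, n_ant))) / np.sqrt(2)
    e = (rng.normal(size=(trials, n_ant)) + 1j * rng.normal(size=(trials, n_ant))) / np.sqrt(2)
    h_est = rho * h + np.sqrt(1 - rho ** 2) * e          # outdated, noisy estimates

    pick_unw = np.argmax(np.abs(h_est), axis=1)          # conventional selection
    pick_wtd = np.argmax(rho * np.abs(h_est), axis=1)    # reliability-weighted selection
    snr = 10 ** (snr_db / 10)
    rows = np.arange(trials)
    for name, pick in [("unweighted", pick_unw), ("weighted", pick_wtd)]:
        gain = np.abs(h[rows, pick]) ** 2                # true post-selection gain
        print(f"{name}: mean post-selection SNR = {np.mean(snr * gain):.2f}")

if __name__ == "__main__":
    compare_selection()
```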

Relevance: 10.00%

Publisher:

Abstract:

The new paradigm of connectedness and empowerment brought by the interactivity of the Web 2.0 has been challenging the traditional centralized performance of mainstream media. The corporation has been able to survive the strong winds by transforming itself into a global multimedia business network embedded in the network society. By establishing networks, e.g. networks of production and distribution, the global multimedia business network has been able to identify potential solutions by opening the doors to innovation in a decentralized and flexible manner. Under this emerging context of re-organization, traditional practices like sourcing need to be re-explained, and that is precisely what this thesis attempts to tackle. Based on ICT and on the network society, the study seeks to explain, within the Finnish context, the particular case of Helsingin Sanomat (HS) and its relations with the youth news agency, Youth Voice Editorial Board (NÄT). In that sense, the study can be regarded as an explanatory embedded single case study, where HS is the principal unit of analysis and NÄT its embedded unit of analysis. The thesis reached its explanations through interrelated steps. First, it determined the role of ICT in HS’s sourcing practices. Then it mapped an overview of HS’s sourcing relations and provided a context in which NÄT was located. And finally, it established conceptualized institutional relational data between HS and NÄT for their posterior measurement through social network analysis. The data set was collected via qualitative interviews addressed to online and offline editors of HS as well as interviews addressed to NÄT’s personnel. The study concluded that ICT’s interactivity and User Generated Content (UGC) are not sourcing tools as such but mechanisms used by HS for getting ideas that could turn into potential news stories. However, when it comes to visual communication, some exceptions were found. The lack of official sources amidst the immediacy leads HS to rely on ICT’s interaction and UGC. More than meets the eye, ICT’s input into the sourcing practice may be more noticeable if the interaction and UGC are well organized and coordinated into proper and innovative networks of alternative content collaboration. Currently, HS performs this sourcing practice via two projects that differ precisely in the way they are coordinated. The first project found, Omakaupunki, is coordinated internally by the Sanoma Group’s owned media houses HS, Vartti and Metro. The second project found is coordinated externally. The external alternative sourcing network, as it was labeled, consists of three actors, namely HS, NÄT (professionals in charge) and the youth. This network is a balanced and complete triad in which the actors connect themselves in relations of feedback, recognition, creativity and filtering. However, as innovation is approached very reluctantly, this content collaboration is a laboratory of experiments; a ‘COLLABORATORY’.