918 results for Perfusion-weighted electroencephalography
Abstract:
This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess distortion caused by channel errors. The index output by the source encoder is sent over a fading discrete binary symmetric channel, and the possibly incorrect received index is mapped to the corresponding codeword by a Vector Quantization (VQ) decoder at the receiver. The output of the VQ decoder is then processed by a receive filter to obtain an estimate of the source instantiation. The distortion performance is analyzed for the weighted mean square error (WMSE), and the optimum receive filter that minimizes the expected distortion is derived for two different cases of fading. It is shown that the performance of the system with the receive filter is strictly better than that of a conventional VQ, and the difference becomes more significant as the number of bits transmitted increases. Theoretical expressions for upper and lower bounds on the WMSE performance of the system with the receive filter and a Rayleigh flat-fading channel are derived. The design of a receive filter in the presence of channel mismatch is also studied, and it is shown that a minimax solution is obtained by designing the receive filter for the worst possible channel. Simulation results are presented to validate the theoretical expressions and illustrate the benefits of receive filtering.
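As a hedged illustration of the idea in this abstract, the sketch below builds a toy 1-D system: a 2-bit scalar quantizer, a binary symmetric channel, and a scalar Wiener-style receive gain. The codebook, channel model, and gain formula are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 2-bit uniform scalar quantizer, indices sent over a BSC with
# bit-flip probability p, and a scalar receive gain g chosen to minimize
# mean square error.
codebook = np.array([-1.5, -0.5, 0.5, 1.5])
p = 0.1                                    # BSC crossover probability
x = rng.standard_normal(100_000)           # source samples

idx = np.abs(x[:, None] - codebook).argmin(axis=1)   # nearest-codeword index
bits = np.stack([(idx >> 1) & 1, idx & 1], axis=1)   # 2-bit binary index
noisy = bits ^ (rng.random(bits.shape) < p)          # channel bit flips
y = codebook[(noisy[:, 0] << 1) | noisy[:, 1]]       # VQ decoder output

g = np.mean(x * y) / np.mean(y * y)        # scalar Wiener (MMSE) gain
mse_plain = np.mean((x - y) ** 2)          # conventional VQ distortion
mse_filtered = np.mean((x - g * y) ** 2)   # distortion with receive filter
```

Since the unfiltered output (g = 1) is always in the feasible set, the filtered distortion can never be worse, mirroring the abstract's "strictly better" claim when channel errors are present.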
Abstract:
A chlorine residual must be maintained at all points in a distribution system that is supplied with chlorine as a disinfectant. The propagation and level of chlorine in a distribution system are affected by both bulk and pipe-wall reactions. It is well known that field determination of the wall reaction parameter is difficult. The source strength of chlorine required to maintain a specified chlorine residual at a target node is also an important parameter. The inverse model presented in the paper determines these water quality parameters, which are associated with different reaction kinetics, either in single pipes or in groups of pipes. The weighted-least-squares method based on the Gauss-Newton minimization technique is used to estimate these parameters. The validation and application of the inverse model are illustrated with an example pipe distribution system under steady state. A generalized procedure to handle noisy and bad (abnormal) data is suggested, which can be used to estimate these parameters more accurately. The developed inverse model is useful for water supply agencies to calibrate their water distribution systems and to improve their operational strategies for maintaining water quality.
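The Gauss-Newton weighted-least-squares step the abstract mentions can be sketched for a single-parameter case. The first-order decay model, equal weights, and starting guess below are illustrative assumptions, not the paper's network formulation.

```python
import numpy as np

# First-order chlorine decay along a single pipe: c(t) = c0 * exp(-k t),
# with the reaction parameter k estimated from noisy measurements by
# weighted least squares using Gauss-Newton iterations.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 25)
c0, k_true = 1.2, 0.35
obs = c0 * np.exp(-k_true * t) + 0.01 * rng.standard_normal(t.size)
w = np.ones_like(t)                   # measurement weights (equal here)

k = 0.1                               # initial guess
for _ in range(20):
    model = c0 * np.exp(-k * t)
    r = obs - model                   # residuals to shrink
    J = -c0 * t * np.exp(-k * t)      # d(model)/dk
    k += (J * w) @ r / ((J * w) @ J)  # Gauss-Newton normal-equation update
# k is now close to k_true = 0.35
```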
Abstract:
We have developed two reduced-complexity bit-allocation algorithms for MP3/AAC-based audio encoding, which can be useful at low bit-rates. One algorithm derives the optimum bit allocation using constrained optimization of the weighted noise-to-mask ratio; the second uses decoupled iterations for distortion control and rate control, with a convergence criterion. MUSHRA-based evaluation indicated that the new algorithm is comparable to AAC while requiring only about 1/10th the complexity.
Abstract:
We present an improved language modeling technique for the Lempel-Ziv-Welch (LZW) based LID scheme. The previous approach to LID using the LZW algorithm prepares the language pattern table with the LZW algorithm. Because of the sequential nature of the LZW algorithm, several language-specific patterns were missing from the pattern table. To overcome this, we build a universal pattern table containing all patterns of different lengths. For each language, its language-specific pattern table is constructed by retaining the patterns of the universal table whose frequency of appearance in the training data is above a threshold. This approach reduces the classification score (compression ratio [LZW-CR] or the weighted discriminant score [LZW-WDS]) for non-native languages and increases the LID performance considerably.
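The thresholding step described above can be sketched as follows; the two-symbol token data and the threshold are hypothetical, and the per-token substring counting is a simplification.

```python
def language_pattern_table(universal_patterns, training_tokens, threshold):
    """Retain the universal patterns whose frequency in this language's
    training data reaches the threshold."""
    counts = {p: sum(tok.count(p) for tok in training_tokens)
              for p in universal_patterns}
    return {p for p, c in counts.items() if c >= threshold}

# Hypothetical token streams for one language:
universal = {"ab", "ba", "aab", "bb"}
table = language_pattern_table(universal, ["abab", "aaba"], threshold=2)
# table keeps only the patterns frequent in this language's data
```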
Abstract:
In each stage of product development, we need to take decisions by evaluating multiple product alternatives against multiple criteria. Classical evaluation methods such as the weighted objectives method assume certainty about the information available during product development. However, designers often must evaluate under uncertainty: the likely performance, cost or environmental impacts of a product proposal can often be estimated only with a certain confidence, which may vary from one proposal to another. In such situations, the classical approaches to evaluation can give misleading results. There is a need for a method that aids decision making by supporting quantitative comparison of alternatives, to identify the most promising alternative under uncertain information. A method called the confidence weighted objectives method is developed to compare the whole life cycle of product proposals using multiple evaluation criteria under various levels of uncertainty with non-crisp values. It estimates the overall worth of a proposal and the confidence in that estimate, enabling decision making to be deferred when decisions cannot be made with the information currently available.
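A minimal sketch of a confidence-weighted evaluation, assuming a weighted-sum aggregation for both worth and confidence (the paper's exact formulation may differ):

```python
def evaluate(criteria):
    """criteria: list of (weight, score, confidence) triples, with scores
    and confidences in [0, 1]."""
    total_w = sum(w for w, _, _ in criteria)
    worth = sum(w * s for w, s, _ in criteria) / total_w
    confidence = sum(w * c for w, _, c in criteria) / total_w
    return worth, confidence

# A proposal strong on a confident criterion, weak on an uncertain one:
worth, conf = evaluate([(2, 0.8, 0.9), (1, 0.5, 0.4)])
# a low overall confidence would suggest deferring the decision
```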
Abstract:
Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V1, V2, ..., Vl of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ Vi and t ∈ Vj, for i ≠ j, is equal to the weight of the lightest edge on the path between Vi and Vj in T. Also, two vertices s and t belong to the same Vi for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk)-edge graph; this takes Õ(m + n^{5/2} k min{k^{1/2}, n^{1/6}}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^{5/6}). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
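Once the tree T is available, answering an (s, t) query reduces to a lightest-edge-on-path lookup, as the abstract states. A minimal sketch with toy partition data (not from the paper):

```python
from collections import deque

def edge_connectivity(tree, part_of, s, t, k):
    """Answer an (s, t) query from the tree T: same part means
    connectivity > k; otherwise return the lightest edge weight on the
    unique tree path between the two parts."""
    a, b = part_of[s], part_of[t]
    if a == b:
        return f"> {k}"
    queue, seen = deque([(a, float("inf"))]), {a}
    while queue:
        node, lightest = queue.popleft()
        if node == b:
            return lightest
        for nxt, wt in tree[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, min(lightest, wt)))

# Toy tree on three parts, a path 0 -(2)- 1 -(1)- 2:
tree = {0: [(1, 2)], 1: [(0, 2), (2, 1)], 2: [(1, 1)]}
part_of = {"s": 0, "u": 0, "t": 2}
conn = edge_connectivity(tree, part_of, "s", "t", k=3)   # lightest edge: 1
```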
Abstract:
We present a new approach to spoken language modeling for language identification (LID) using the Lempel-Ziv-Welch (LZW) algorithm. The LZW technique is applicable to any kind of tokenization of the speech signal. Because of the efficiency of the LZW algorithm in obtaining variable-length symbol strings from the training data, the LZW codebook captures the essentials of a language effectively. We develop two new deterministic measures for LID based on the LZW algorithm, namely (i) the compression ratio score (LZW-CR) and (ii) the weighted discriminant score (LZW-WDS). To assess these measures, we consider error-free tokenization of speech as well as artificially induced noise in the tokenization. It is shown that for a six-language LID task on the OGI-TS database with clean tokenization, the new model (LZW-WDS) performs slightly better than the conventional bigram model. For noisy tokenization, which is the more realistic case, LZW-WDS significantly outperforms the bigram technique.
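The compression-ratio idea can be sketched with a plain LZW pass: a token stream that matches a language's statistics emits fewer codes. This is a generic LZW implementation on a toy stream, not the paper's trained-codebook variant:

```python
def lzw_code_count(symbols, alphabet):
    """Number of LZW codes emitted for `symbols`, with the dictionary
    seeded by the single-symbol alphabet."""
    table = {(s,): i for i, s in enumerate(alphabet)}
    count, cur = 0, ()
    for s in symbols:
        if cur + (s,) in table:
            cur += (s,)
        else:
            count += 1                      # emit code for cur
            table[cur + (s,)] = len(table)  # learn the new pattern
            cur = (s,)
    return count + (1 if cur else 0)

# A repetitive stream compresses well: 8 symbols -> 5 codes here,
# giving a compression-ratio style score of 5/8.
score = lzw_code_count(list("abababab"), "ab") / 8
```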
Abstract:
Many downscaling techniques have been developed in the past few years for projecting station-scale hydrological variables from the large-scale atmospheric variables simulated by general circulation models (GCMs), to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. the conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, which belongs to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool valued for its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied to downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to reproducing various statistics, such as dry and wet spell length distributions, daily rainfall distribution, and intersite correlations, is examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian GCM (CGCM3) for three scenarios, viz. A1B, A2 and B1, is used for projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
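The KNN analogue step described above can be sketched as follows; the rank-based weights and toy data are common illustrative choices, not necessarily the study's exact settings.

```python
import numpy as np

def knn_sample(train_X, train_y, x, K, rng):
    """Pick one of the K training days nearest to feature vector x,
    sampled with rank-based weights (nearer days more likely)."""
    d = np.linalg.norm(train_X - x, axis=1)   # distance to each training day
    nearest = np.argsort(d)[:K]               # indices of the K analogues
    w = 1.0 / np.arange(1, K + 1)             # weight by rank
    return train_y[rng.choice(nearest, p=w / w.sum())]

rng = np.random.default_rng(0)
X = np.array([[0.0], [1.0], [2.0], [10.0]])   # toy predictor values
y = np.array([0.0, 5.0, 8.0, 99.0])           # matching daily rainfall
sample = knn_sample(X, y, np.array([0.4]), K=2, rng=rng)
# sample is drawn from the two nearest days' rainfall: 0.0 or 5.0
```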
Abstract:
Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as peak signal-to-noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However, this metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been proven to have strong correlation with subjective quality. Here, we apply the key principles of the NTIA-VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA-VQM motivated metrics to standard TMN8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN8/MSE based implementation.
Abstract:
This paper obtains a new, accurate model for sensitivity in power systems and uses it in conjunction with linear programming to solve load-shedding problems with a minimum loss of load. For cases where the error in the sensitivity model increases, other linear programming and quadratic programming models have been developed that take the currents at load buses, rather than the load powers, as variables. A weighted error criterion is used to take the priority schedule into account; it can be either a linear or a quadratic function of the errors, and the appropriate programming technique is employed accordingly.
Abstract:
L-pGlu-(2-propyl)-L-His-L-ProNH(2) (NP-647) is a CNS-active thyrotropin-releasing hormone (TRH) analog with potential application in various CNS disorders, including seizures. In the present study, the mechanism of the protective effect of NP-647 was explored by studying its action on epileptiform activity and sodium channels using patch-clamp methods. Epileptiform activity was induced in subicular pyramidal neurons of rat hippocampal slices by perfusing 4-aminopyridine (4-AP) in Mg(2+)-free normal artificial cerebrospinal fluid (nACSF). An increase in mean firing frequency was observed after perfusion of 4-AP and zero Mg(2+) (2.10+/-0.47 Hz) as compared with nACSF (0.12+/-0.08 Hz). With NP-647, a significant decrease was observed in the mean firing frequency (0.61+/-0.22 Hz), the mean frequency of epileptiform events (0.03+/-0.02 Hz vs. 0.22+/-0.05 Hz for 4-AP+0 Mg), and the average number of action potentials per paroxysmal depolarization shift burst (2.54+/-1.21 vs. 8.16+/-0.88 for 4-AP+0 Mg). A significant reduction in peak dV/dt (246+/-19 mV ms(-1) vs. 297+/-18 mV ms(-1) for 4-AP+0 Mg) and an increase in the time required to reach maximum depolarization (1.332+/-0.018 ms vs. 1.292+/-0.019 ms for 4-AP+0 Mg) were observed, indicating a role of sodium channels. Concentration-dependent depression of the sodium current was observed after exposure of dorsal root ganglion neurons to NP-647: at 1, 3, and 10 mu M, NP-647 depressed the sodium current by 15+/-0.5%, 50+/-2.6%, and 75+/-0.7%, respectively. However, NP-647 did not change the peak sodium current in CNa18 cells. The results of the present study demonstrate the potential of NP-647 to inhibit epileptiform activity by indirectly inhibiting sodium channels. (C) 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Abstract:
The design of machine foundations is based on two principal criteria, viz. the vibration amplitude should be within permissible limits, and the natural frequency of the machine-foundation-soil system should be away from the operating frequency (i.e. avoidance of the resonance condition). In this paper the nondimensional amplitude factor at resonance (M_m or M_rm) and the nondimensional frequency factor at resonance (a_om) are related using elastic half-space theory, and this relation is used as a new, simplified design procedure for machine foundations for all the modes of vibration, viz. vertical, horizontal, rocking and torsional, for the rigid-base pressure distribution and the weighted-average displacement condition. The analysis shows that one need not know the value of Poisson's ratio for the rotating-mass system for any of the modes of vibration.
Abstract:
More than six years after the great (M-w 9.2) Sumatra-Andaman earthquake, post-event processes responsible for relaxation of the coseismic stress change remain controversial. Modeling of Andaman Islands Global Positioning System (GPS) displacements indicated early near-field motions were dominated by slip down-dip of the rupture, but various researchers ascribe elements of relaxation to dominantly poroelastic, dominantly viscoelastic, and dominantly fault slip processes, depending primarily on their measurement sampling and the modeling tools used. After subtracting a pre-2004 interseismic velocity, significant transient motion during the 2008.5-2010.5 epoch confirms that postseismic relaxation processes continue in Andaman. Modeling three-component velocities as viscoelastic flow yields a weighted root-mean-square (wrms) misfit that always exceeds the wrms of the measured signal (26.3 mm/yr). The best-fitting models are those that yield negligible deformation, indicating the model parameters have no real physical meaning. GPS velocities are well fit (wrms 4.0 mm/yr) by combining a viscoelastic flow model that best fits the horizontal velocities with approximately 50 cm/yr of thrust slip down-dip of the coseismic rupture. Both deep slip and flow respond to stress changes, and each can significantly change stress in the realm of the other; it therefore is reasonable to expect that both transient deep slip and viscoelastic flow will influence surface deformation long after a great earthquake.
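The weighted root-mean-square (wrms) misfit used above has a standard definition; the sketch below assumes weights of 1/sigma^2, a common but here unverified choice, with made-up residuals.

```python
import numpy as np

def wrms(residuals, sigmas):
    """Weighted root-mean-square misfit with weights 1/sigma^2."""
    w = 1.0 / np.asarray(sigmas) ** 2
    r = np.asarray(residuals)
    return float(np.sqrt(np.sum(w * r ** 2) / np.sum(w)))

# Equal uncertainties reduce to the ordinary rms: sqrt((9 + 16) / 2)
val = wrms([3.0, -4.0], [1.0, 1.0])
```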
Abstract:
Pricing is an effective tool to control congestion and achieve quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in the case of a network of nodes with multiple queues and multiple grades of service. We present a closed-loop multi-layered pricing scheme and propose an algorithm for finding the optimal state-dependent price levels for individual queues at each node. This differs from most adaptive pricing schemes in the literature, which do not obtain a closed-loop state-dependent pricing policy. The method that we propose finds optimal price levels that are functions of the queue lengths at individual queues. Further, we also propose a variant of the above scheme that assigns prices to incoming packets at each node according to a weighted average queue length at that node. This is done to reduce frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. In our numerical results we observe a considerable improvement in performance using both of our schemes over a recently proposed related scheme, in terms of both throughput and delay. In particular, our first scheme exhibits a throughput improvement in the range of 67-82% among all routes over the above scheme. (C) 2011 Elsevier B.V. All rights reserved.
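The weighted-average-queue-length idea (in the spirit of RED) can be sketched as an exponentially weighted moving average that drives the price; alpha and the linear price map below are illustrative assumptions, not the paper's pricing policy.

```python
def smoothed_prices(queue_samples, alpha=0.2, unit_price=0.5):
    """Price each epoch from an exponentially weighted average queue
    length rather than the instantaneous one."""
    avg, prices = 0.0, []
    for q in queue_samples:
        avg = (1 - alpha) * avg + alpha * q   # weighted average queue length
        prices.append(unit_price * avg)       # congestion-dependent price
    return prices

# A queue burst raises prices gradually, and they decay after it passes:
prices = smoothed_prices([0, 10, 10, 0, 0])
```

The smoothing keeps prices from jumping with every packet arrival, which is exactly the "reduce frequent price variations" motivation in the abstract.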
Abstract:
Background & objectives: There is a need to develop an affordable and reliable tool for the hearing screening of neonates in resource-constrained, medically underserved areas of developing nations. This study evaluates a strategy of health-worker-based screening of neonates using a low-cost mechanical calibrated noisemaker, followed up with parental monitoring of age-appropriate auditory milestones, for detecting severe-profound hearing impairment in infants by 6 months of age. Methods: A trained health worker under the supervision of a qualified audiologist screened 425 neonates, of whom 20 had confirmed severe-profound hearing impairment. Mechanical calibrated noisemakers of 50, 60, 70 and 80 dB (A) were used to elicit behavioural responses. The parents of screened neonates were instructed to monitor the normal language and auditory milestones till 6 months of age. This strategy was validated against a reference standard consisting of a battery of tests, namely auditory brain stem response (ABR), otoacoustic emissions (OAE) and behavioural assessment at 2 years of age. Bayesian prevalence-weighted measures of screening were calculated. Results: Sensitivity and specificity were high, with the fewest false positive referrals for the 70 and 80 dB (A) noisemakers. All the noisemakers had a 100 per cent negative predictive value. The 70 and 80 dB (A) noisemakers had high positive likelihood ratios of 19 and 34, respectively, and the differences between their pre- and post-test probabilities of a positive result were 43 and 58, respectively. Interpretation & conclusions: In a controlled setting, health workers with primary education can be trained to use a mechanical calibrated noisemaker made of locally available material to reliably screen for severe-profound hearing loss in neonates. The monitoring of auditory responses could be done by informed parents.
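The screening measures reported above (sensitivity, specificity, negative predictive value, positive likelihood ratio) follow standard definitions; the 2x2 counts below are hypothetical, not the study's data.

```python
def screening_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, negative predictive value, and the
    positive likelihood ratio from a 2x2 screening table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec)     # LR+ = sensitivity / (1 - specificity)
    return sens, spec, npv, lr_pos

sens, spec, npv, lr_pos = screening_measures(tp=19, fp=8, fn=1, tn=397)
# a high LR+ means a positive screen strongly raises the post-test probability
```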
Multi-centre field trials of this strategy need to be carried out to examine the feasibility of community health care workers using it in the resource-constrained settings of developing nations, in order to implement an effective national neonatal hearing screening programme.