111 results for Likelihood


Relevance: 10.00%

Abstract:

In China, the recent outbreak of the novel influenza A/H7N9 virus has been assumed to be severe, and it may become even more threatening in the near future. To develop highly protective vaccines and drugs against the A/H7N9 virus, it is critical to determine the selection pressure acting on each amino acid site. In the present study, six statistical methods, comprising four independent codon-based maximum likelihood (CML) methods, one hierarchical Bayesian (HB) method and one branch-site (BS) method, were employed to determine whether each amino acid site of the A/H7N9 virus is under natural selection pressure. Functions of both positively and negatively selected sites were inferred by annotating these sites against experimentally verified amino acid sites. Overall, the single amino acid site 627 of the PB2 protein was inferred to be positively selected, and its function was identified as a T-cell epitope (TCE). Among the 26 negatively selected amino acid sites of the PB2, PB1, PA, HA, NP, NA, M1 and NS2 proteins, only 16 were identified as involved in TCEs. In addition, seven amino acid sites (608 and 609 of PA, 480 of NP, and 24, 25, 109 and 205 of M1) were identified as involved in both B-cell epitopes (BCEs) and TCEs. Conversely, the function of positions 62 of PA, and 43 and 113 of HA, remained unknown. In conclusion, the seven amino acid sites engaged in both BCEs and TCEs were identified as highly suitable targets, as they are predicted to play a principal role in inducing strong humoral and cellular immune responses against the A/H7N9 virus. (C) 2014 Elsevier Inc. All rights reserved.
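The codon-based maximum likelihood methods used here classify each site by contrasting nonsynonymous and synonymous substitution rates. As a rough, purely illustrative sketch of that idea (not the authors' pipeline, which fits full codon substitution models on a phylogeny), the following counts synonymous versus nonsynonymous codon changes per site in a toy alignment; the alignment and all names are hypothetical:

```python
# Toy per-site selection scan: counts synonymous vs nonsynonymous
# codon changes at each site relative to the consensus codon.
# Illustrative only -- real CML methods (FEL, SLAC, ...) fit full
# substitution models by maximum likelihood on a phylogeny.
from collections import Counter

BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def site_selection_summary(codon_alignment):
    """Per codon column: count synonymous / nonsynonymous differences
    from the most common codon in that column."""
    summary = []
    for s in range(len(codon_alignment[0])):
        column = [seq[s] for seq in codon_alignment]
        consensus = Counter(column).most_common(1)[0][0]
        syn = nonsyn = 0
        for codon in column:
            if codon == consensus:
                continue
            if CODON_TABLE[codon] == CODON_TABLE[consensus]:
                syn += 1
            else:
                nonsyn += 1
        summary.append((s + 1, syn, nonsyn))
    return summary

# Hypothetical 3-sequence, 3-codon alignment.
aln = [["ATG", "GCT", "AAA"],
       ["ATG", "GCC", "AAA"],   # GCT -> GCC is synonymous (Ala)
       ["ATG", "GCT", "AGA"]]   # AAA -> AGA is nonsynonymous (Lys -> Arg)
for site, syn, nonsyn in site_selection_summary(aln):
    print(f"codon site {site}: syn={syn} nonsyn={nonsyn}")
```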

Relevance: 10.00%

Abstract:

We have developed a real-time imaging method for two-color wide-field fluorescence microscopy using a combined approach that integrates multi-spectral imaging with a Bayesian image reconstruction technique. To enable simultaneous observation of two dyes (primary and secondary), we exploit their spectral properties, which allow parallel recording in both channels. The key advantage of this technique is the use of a single wavelength of light to excite both dyes. The primary and secondary dyes give rise to fluorescence and bleed-through signals, respectively, which, after normalization, are merged to obtain two-color 3D images. To realize real-time imaging, we employed maximum likelihood (ML) and maximum a posteriori (MAP) techniques on a high-performance computing platform (GPU). The results show a two-fold improvement in contrast, while the signal-to-background ratio (SBR) is improved by a factor of 4. We report speed-ups of 52x and 350x for 2D and 3D images, respectively. Using this system, we have studied real-time protein aggregation in yeast cells and HeLa cells, which exhibits a dot-like protein distribution. The proposed technique has the ability to temporally resolve rapidly occurring biological events.
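For Poisson-noise fluorescence data, the classical ML estimator of this kind is the Richardson-Lucy iteration. A minimal CPU sketch of that estimator (the paper's contribution is running such ML/MAP updates in real time on a GPU; the PSF and image below are synthetic):

```python
# Richardson-Lucy maximum-likelihood deconvolution -- the classical
# ML-EM iteration for Poisson-noise imaging. Illustrative CPU sketch.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25, eps=1e-12):
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)          # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage: blur a point source with a Gaussian PSF and recover it.
rng = np.random.default_rng(0)
x, y = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / 4.0); psf /= psf.sum()
truth = np.zeros((64, 64)); truth[32, 32] = 100.0
observed = rng.poisson(fftconvolve(truth, psf, mode="same").clip(0) + 1.0)
restored = richardson_lucy(observed.astype(float), psf)
print("peak sharpened:", observed.max(), "->", restored.max().round(1))
```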

Relevance: 10.00%

Abstract:

The estimation of water and solute transit times in catchments is crucial for predicting the response of hydrosystems to external forcings (climatic or anthropogenic). The hydrogeochemical signatures of tracers (either natural or anthropogenic) in streams have been widely used to estimate transit times in catchments, as they integrate the various processes involved. However, most of these tracers are well suited only for catchments with mean transit times below about 4-5 years. Since the second half of the 20th century, the intensification of agriculture has led to a general increase of the nitrogen load in rivers. As nitrate is mainly transported by groundwater in agricultural catchments, this signal can be used to estimate transit times greater than several years, even though nitrate is not a conservative tracer. Conceptual hydrological models can be used to estimate catchment transit times provided their consistency is demonstrated, based on their ability to simulate the stream chemical signatures at various time scales as well as catchment internal processes such as N storage in groundwater. The objective of this study was to assess whether a conceptual lumped model was able to simulate the observed patterns of nitrogen concentration at various time scales, from seasonal to pluriannual, and thus whether it was suitable for estimating nitrogen transit times in headwater catchments. A conceptual lumped model, representing shallow groundwater flow as two parallel linear stores with double porosity, and riparian processes by a constant nitrogen removal function, was applied to two paired agricultural catchments belonging to the research observatory ORE AgrHys. The Generalized Likelihood Uncertainty Estimation (GLUE) approach was used to estimate parameter values and uncertainties. The model performance was assessed on (i) its ability to simulate the contrasting patterns of stream flow and stream nitrate concentrations at seasonal and inter-annual time scales, (ii) its ability to simulate the patterns observed in groundwater at the same temporal scales, and (iii) the consistency of long-term simulations using the calibrated model with the general pattern of the nitrate concentration increase in the region since the beginning of the intensification of agriculture in the 1960s. The simulated nitrate transit times were found to be more sensitive to climate variability than to parameter uncertainty, and average values were consistent with results from other studies in the same region involving modeling and groundwater dating. This study shows that a simple model can be used to simulate the main dynamics of nitrogen in an intensively polluted catchment and then to estimate the transit times of these pollutants in the system, which is crucial for guiding the design and assessment of mitigation plans. (C) 2015 Elsevier B.V. All rights reserved.
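GLUE works by Monte Carlo sampling of parameter sets, scoring each with an informal likelihood (often the Nash-Sutcliffe efficiency) and keeping the "behavioural" sets above a threshold. A toy sketch with a single one-parameter linear store (the paper's actual model has two parallel double-porosity stores and a riparian N-removal term; all numbers here are synthetic):

```python
# Minimal GLUE sketch: Monte Carlo sampling of a one-parameter linear
# store, Nash-Sutcliffe efficiency as the informal likelihood, and
# acceptance of behavioural parameter sets above a threshold.
import numpy as np

rng = np.random.default_rng(0)

def linear_store(k, rain, s0=10.0):
    """Discharge from a single linear reservoir: q[t] = k * s[t]."""
    s, q = s0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

rain = rng.exponential(2.0, 200)
q_obs = linear_store(0.3, rain) + rng.normal(0, 0.3, 200)  # synthetic "observations"

samples = rng.uniform(0.05, 0.9, 2000)                     # uniform prior on k
sims = np.array([linear_store(k, rain) for k in samples])
nse = 1 - ((sims - q_obs) ** 2).sum(1) / ((q_obs - q_obs.mean()) ** 2).sum()

behavioural = nse > 0.7                                    # GLUE acceptance threshold
# Quantiles left unweighted for brevity; GLUE typically likelihood-weights them.
lo, hi = np.quantile(sims[behavioural], [0.05, 0.95], axis=0)
print(f"{behavioural.sum()} behavioural sets; "
      f"k 5-95% range: {np.quantile(samples[behavioural], [0.05, 0.95]).round(2)}; "
      f"mean prediction band width: {(hi - lo).mean():.2f}")
```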

Relevance: 10.00%

Abstract:

An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support; they are the shortest polynomial splines. Incorporating the B-spline potential function into the maximum a posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (wide-field, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum a posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, such as selective plane illumination microscopy, localization microscopy and STED. (C) 2015 Author(s).
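A common way to fold a prior potential into an ML iteration is Green's one-step-late (OSL) MAP update, where the Richardson-Lucy ratio is divided by one plus a scaled prior gradient. The 1-D sketch below plugs a cubic-B-spline-shaped potential into such an update; the penalty form and the crude gradient approximation are assumptions for illustration, not the paper's exact functional:

```python
# One-step-late (Green) MAP iteration with a pluggable prior potential,
# shown in 1-D for brevity. The "1 - cubic B-spline" penalty is an
# assumed illustrative form.
import numpy as np

def cubic_bspline(t):
    """Centered cubic B-spline, support [-2, 2]."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m = t < 1
    out[m] = 2/3 - t[m]**2 + t[m]**3 / 2
    m = (t >= 1) & (t < 2)
    out[m] = (2 - t[m])**3 / 6
    return out

def spline_prior_grad(f, h=1e-3):
    """Crude prior gradient: numerical derivative of the penalty applied
    to backward differences (ignores the forward-difference term)."""
    d = np.diff(f, prepend=f[:1])
    pot = lambda x: 1 - cubic_bspline(x)     # flat near 0, rises with |d|
    return (pot(d + h) - pot(d - h)) / (2 * h)

def map_osl(data, psf, prior_grad, beta=0.1, n_iter=50, eps=1e-12):
    f = np.full_like(data, data.mean())
    psf_m = psf[::-1]
    for _ in range(n_iter):
        blur = np.convolve(f, psf, mode="same")
        ratio = np.convolve(data / np.maximum(blur, eps), psf_m, mode="same")
        f = f * ratio / (1 + beta * prior_grad(f))   # one-step-late MAP step
    return f

rng = np.random.default_rng(1)
truth = np.r_[np.zeros(20), 50.0, np.zeros(20)]
data = rng.poisson(np.convolve(truth, np.ones(5) / 5, mode="same") + 1.0).astype(float)
print("restored peak:", map_osl(data, np.ones(5) / 5, spline_prior_grad).max().round(1))
```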

Relevance: 10.00%

Abstract:

We analyse the hVV (V = W, Z) vertex in a model-independent way using Vh production. To that end, we consider possible corrections to the Standard Model Higgs Lagrangian in the form of higher-dimensional operators which parametrise the effects of new physics. In our analysis, we pay special attention to linear observables that can be used to probe CP violation in this vertex. Considering the associated production of a Higgs boson with a vector boson (W or Z), we use jet substructure methods to define angular observables which are sensitive to new physics effects, including an asymmetry which is linearly sensitive to the presence of CP-odd effects. We demonstrate how to use these observables to place bounds on the presence of higher-dimensional operators, and quantify these statements using a log-likelihood analysis. Our approach allows one to probe the hZZ and hWW vertices separately, involving arbitrary combinations of BSM operators, at the Large Hadron Collider.
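A log-likelihood analysis of this kind boils down to comparing binned event yields against predictions as a function of the operator coefficient. A toy one-parameter scan with a Poisson likelihood (all yields and the assumed linear dependence are invented placeholders, not the paper's observables):

```python
# Toy binned log-likelihood scan for one anomalous-coupling parameter c,
# assuming the observable shifts linearly with c (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(1)
sm_yield = np.array([120.0, 80.0, 45.0, 20.0])    # SM prediction per bin
linear = np.array([15.0, -10.0, 8.0, -5.0])       # assumed linear BSM shift per unit c

def expected(c):
    return np.clip(sm_yield + c * linear, 1e-9, None)

observed = rng.poisson(expected(0.0))             # pseudo-data at the SM point

def nll(c):
    mu = expected(c)
    return np.sum(mu - observed * np.log(mu))     # Poisson NLL up to a constant

grid = np.linspace(-2, 2, 401)
curve = np.array([nll(c) for c in grid])
dchi2 = 2 * (curve - curve.min())
allowed = grid[dchi2 < 3.84]                      # ~95% CL for one parameter
print(f"95% CL interval for c: [{allowed.min():.2f}, {allowed.max():.2f}]")
```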

Relevance: 10.00%

Abstract:

We consider a server serving a time-slotted queued system of multiple packet-based flows, where at most one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in the subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels, and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue, among all scheduling algorithms that base their decision on the current (or any finite past history of) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case where the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue, in the absence of any channel state information, is large-deviations optimal.
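A small simulation conveys the two-stage structure of Max-Exp: pick the observable subset using queue lengths alone, then apply the Exponential rule to the rates revealed within it. The Exponential-rule form and all rates below are illustrative choices, not the paper's exact setup:

```python
# Sketch of the Max-Exp idea: queue-aware subset sampling followed by
# the Exponential rule within the sampled subset (one common form of
# the rule; arrival/service statistics are made up).
import numpy as np

rng = np.random.default_rng(2)
n_flows, subsets = 4, [(0, 1), (2, 3)]          # disjoint observable subsets
queues = np.zeros(n_flows)

def exp_rule_pick(subset, rates, queues):
    q = queues[list(subset)]
    qbar = q.mean()
    weights = rates * np.exp((q - qbar) / (1 + np.sqrt(qbar)))
    return subset[int(np.argmax(weights))]

for t in range(10_000):
    queues += rng.poisson(0.2, n_flows)                         # exogenous arrivals
    subset = max(subsets, key=lambda s: queues[list(s)].sum())  # queue-only sampling
    rates = rng.choice([0.0, 1.0, 2.0], size=len(subset))       # revealed only now
    flow = exp_rule_pick(subset, rates, queues)
    queues[flow] = max(0.0, queues[flow] - rates[list(subset).index(flow)])

print("final queue lengths:", queues)
```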

Relevance: 10.00%

Abstract:

Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet, which is more power efficient.
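The ML detector underlying the ABEP analysis minimizes ||y - Hx||^2 over every valid GSM vector (antenna-activation pattern plus QAM symbols). A brute-force toy for a single user (the paper's message-passing algorithms exist precisely because this exhaustive search grows exponentially):

```python
# Brute-force ML detection for a toy single-user GSM link:
# n_t = 4 antennas, n_rf = 2 active, QPSK on each active antenna.
import numpy as np
from itertools import combinations, product

rng = np.random.default_rng(3)
n_t, n_rf, n_r = 4, 2, 8
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

H = (rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))) / np.sqrt(2)

def all_gsm_vectors():
    """Enumerate every (active-antenna set, symbol assignment) pair."""
    for active in combinations(range(n_t), n_rf):
        for syms in product(qpsk, repeat=n_rf):
            x = np.zeros(n_t, complex)
            x[list(active)] = syms
            yield active, x

# Transmit one GSM vector and detect by minimizing ||y - Hx||^2.
true_active, x_true = next(all_gsm_vectors())
y = H @ x_true + 0.1 * (rng.normal(size=n_r) + 1j * rng.normal(size=n_r))
best = min(all_gsm_vectors(), key=lambda ax: np.linalg.norm(y - H @ ax[1]))
print("sent antennas", true_active, "-> detected", best[0])
```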

Relevance: 10.00%

Abstract:

The objective of this paper was to develop seismic hazard maps of Patna district considering region-specific maximum magnitudes and ground motion prediction equations (GMPEs), by worst-case deterministic and classical probabilistic approaches. Patna, located near the seismically active Himalayan region, has been subjected to destructive earthquakes such as the 1803 and 1934 Bihar-Nepal events. Based on past seismicity and earthquake damage distribution, linear sources and seismic events have been considered within a radius of about 500 km around the Patna district center. The maximum magnitude (M_max) has been estimated by conventional approaches: the maximum observed magnitude (M_max,obs) and/or an increment of 0.5, the Kijko method, and regional rupture characteristics. The maximum of these three is taken as the maximum probable magnitude for each source. Twenty-seven GMPEs are found applicable for the Patna region. Of these, suitable region-specific GMPEs are selected by performing the 'efficacy test', which makes use of the log-likelihood. The maximum magnitudes and selected GMPEs are used to estimate PGA and spectral acceleration at 0.2 s and 1 s, mapped for the worst-case deterministic approach and for 2% and 10% probabilities of exceedance in 50 years. Furthermore, the seismic hazard results are used to develop deaggregation plots to quantify the contribution of seismic sources in terms of magnitude and distance. In this study, a normalized site-specific design spectrum has been developed by dividing the hazard map into four zones based on the peak ground acceleration values. This site-specific response spectrum has been compared with the 2011 Sikkim earthquake and the Indian seismic code IS 1893.
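The efficacy test referenced here is commonly implemented as the average negative log-likelihood (in bits) of observed ground motions under each GMPE's lognormal prediction, with lower scores indicating better-fitting models. A sketch with invented observations and two hypothetical GMPEs:

```python
# Sketch of the log-likelihood (LLH) "efficacy" ranking of GMPEs:
# average negative log2-likelihood of observed ln(PGA) values under
# each model's normal distribution in log space. All numbers invented.
import numpy as np
from scipy.stats import norm

obs_ln_pga = np.log(np.array([0.08, 0.15, 0.05, 0.22]))   # ln(PGA), hypothetical

gmpes = {                                                 # (median ln PGA per site, sigma)
    "GMPE-A": (np.log(np.array([0.09, 0.13, 0.06, 0.20])), 0.6),
    "GMPE-B": (np.log(np.array([0.04, 0.30, 0.02, 0.40])), 0.7),
}

for name, (mu, sigma) in gmpes.items():
    llh = -np.mean(norm.logpdf(obs_ln_pga, loc=mu, scale=sigma)) / np.log(2)
    print(f"{name}: LLH = {llh:.3f} (lower is better)")
```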

Relevance: 10.00%

Abstract:

In this paper, we consider spatial modulation (SM) operating in a frequency-selective single-carrier (SC) communication scenario and propose zero-padding instead of the cyclic prefix considered in the existing literature. We show that the zero-padded single-carrier (ZP-SC) SM system offers full multipath diversity under maximum-likelihood (ML) detection, unlike the cyclic-prefix-based SM system. Furthermore, we show that the order of ML detection complexity in our proposed ZP-SC SM system is independent of the frame length and depends only on the number of multipath links between the transmitter and the receiver. Thus, zero-padding applied in the SC SM system has two advantages over the cyclic prefix: 1) it achieves full multipath diversity, and 2) it imposes a relatively low ML detection complexity. Furthermore, we extend the partial interference cancellation receiver (PIC-R) proposed by Guo and Xia for the detection of space-time block codes (STBCs) in order to convert the ZP-SC system into a set of narrowband subsystems experiencing flat fading. We show that full-rank STBC transmission over these subsystems achieves full transmit, receive and multipath diversity under the PIC-R. Furthermore, we show that the ZP-SC SM system achieves receive and multipath diversity under the PIC-R at a detection complexity order that is the same as that of an SM system in a flat-fading scenario. Our simulation results demonstrate that the symbol error ratio performance of the proposed linear receiver for the ZP-SC SM system is significantly better than that of SM in cyclic-prefix-based orthogonal frequency division multiplexing, as well as that of SM in cyclic-prefixed and zero-padded single-carrier systems relying on zero-forcing or minimum mean-squared error equalizer based receivers.
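The zero-padding idea is easy to see in miniature: with a guard of L-1 zeros after each frame, the received block is a clean linear convolution of that frame alone, with no inter-frame interference. The toy below builds such a ZP-SC SM link and detects by naive exhaustive ML over a tiny frame; the paper's detectors avoid exactly this exponential search, so this is illustration only:

```python
# Toy zero-padded single-carrier SM link: each symbol period activates
# one of n_t antennas carrying a BPSK symbol; zero-padding makes the
# channel a tall Toeplitz map with no wraparound. Naive exhaustive ML.
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n_t, L, N = 2, 3, 4                          # antennas, channel taps, frame length
h = (rng.normal(size=(L, n_t)) + 1j * rng.normal(size=(L, n_t))) / np.sqrt(2)

def transmit(frame):
    """frame: list of (antenna, bpsk) per symbol; returns N+L-1 samples."""
    y = np.zeros(N + L - 1, complex)
    for n, (ant, s) in enumerate(frame):
        y[n:n + L] += s * h[:, ant]          # zero-padding: no wraparound
    return y

frame = [(rng.integers(n_t), rng.choice([-1.0, 1.0])) for _ in range(N)]
y = transmit(frame) + 0.05 * (rng.normal(size=N + L - 1)
                              + 1j * rng.normal(size=N + L - 1))

candidates = product(product(range(n_t), [-1.0, 1.0]), repeat=N)
best = min(candidates, key=lambda f: np.linalg.norm(y - transmit(list(f))))
print("sent:", frame, "\ndetected:", list(best))
```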

Relevance: 10.00%

Abstract:

The 2011 outburst of the black hole candidate IGR J17091-3624 followed the canonical track of state transitions, along with the evolution of quasi-periodic oscillation (QPO) frequencies, before it began exhibiting various variability classes similar to GRS 1915+105. We use this canonical evolution of spectral and temporal properties to determine the mass of IGR J17091-3624 by three different methods: the photon index (Γ)-QPO frequency (ν) correlation, the QPO frequency (ν)-time (day) evolution, and broadband spectral modeling based on two-component advective flow (TCAF). We provide a combined mass estimate for the source using a naive-Bayes-based joint likelihood approach. This gives a probable mass range of 11.8-13.7 M☉. Considering each individual estimate and taking the lowermost and uppermost bounds among all three methods, we get a mass range of 8.7-15.6 M☉ with 90% confidence. We discuss the possible implications of our findings in the context of two-component accretion flow.
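A naive-Bayes joint likelihood simply multiplies the per-method likelihood curves over a common mass grid. A sketch with three hypothetical Gaussian likelihoods standing in for the three methods (the means and widths are placeholders, not the paper's values):

```python
# Naive-Bayes joint likelihood over a mass grid: treat the estimates
# from independent methods as Gaussian likelihoods and multiply.
import numpy as np
from scipy.stats import norm

mass = np.linspace(5, 20, 1501)                       # mass grid in M_sun
methods = [(12.5, 1.5), (12.0, 2.0), (13.0, 1.8)]     # (mean, sigma), assumed

joint = np.ones_like(mass)
for mu, sigma in methods:
    joint *= norm.pdf(mass, mu, sigma)                # independence assumption

dx = mass[1] - mass[0]
joint /= joint.sum() * dx                             # posterior under a flat prior
cdf = np.cumsum(joint) * dx
lo, hi = mass[np.searchsorted(cdf, 0.05)], mass[np.searchsorted(cdf, 0.95)]
print(f"90% interval: {lo:.1f}-{hi:.1f} M_sun")
```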

Relevance: 10.00%

Abstract:

Breast cancer is one of the leading causes of cancer-related deaths in women, and early detection is crucial for reducing mortality rates. In this paper, we present a novel, fully automated approach based on tissue transition analysis for lesion detection in breast ultrasound images. Every candidate pixel is classified as belonging to the lesion boundary, the lesion interior or normal tissue based on its descriptor value. The tissue transitions are modeled using a Markov chain to estimate the likelihood of a candidate lesion region. Experimental evaluation on a clinical dataset of 135 images shows that the proposed approach achieves high sensitivity (95%) at a modest three false positives per image. The approach achieves very similar results (94% at three false positives) on a completely different clinical dataset of 159 images without retraining, highlighting its robustness.
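The Markov-chain step amounts to scoring a sequence of tissue labels by the product of transition probabilities, so a plausible normal-boundary-interior-boundary-normal profile scores higher than an incoherent one. A sketch with an assumed (not learned) transition matrix:

```python
# Score label sequences under a Markov chain of tissue transitions.
# Transition probabilities below are illustrative, not learned values.
import numpy as np

states = {"normal": 0, "boundary": 1, "interior": 2}
T = np.array([[0.88, 0.10, 0.02],    # from normal
              [0.15, 0.35, 0.50],    # from boundary
              [0.02, 0.10, 0.88]])   # from interior

def log_likelihood(labels):
    idx = [states[l] for l in labels]
    return sum(np.log(T[a, b]) for a, b in zip(idx, idx[1:]))

lesion_like = ["normal", "boundary", "interior", "interior", "boundary", "normal"]
noise_like = ["normal", "interior", "normal", "interior", "normal", "normal"]
print("lesion-like profile:", round(log_likelihood(lesion_like), 2))
print("incoherent profile :", round(log_likelihood(noise_like), 2))
```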

Relevance: 10.00%

Abstract:

Local polynomial approximation of data is an approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels which are convolved with the data to produce a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimization of the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise. In this paper, we robustify the SG filter for applications involving noise that follows a heavy-tailed distribution. The optimal filtering criterion is achieved by l1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. Interestingly, at every stage of the iteration we solve a weighted SG filter by minimizing the l2 norm, yet the process converges to the l1-minimized output. The results show consistent improvement over the standard SG filter performance.
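The IRLS trick is that the l1 objective sum|r_i| equals a weighted l2 objective sum w_i^2 r_i^2 when w_i = 1/sqrt(|r_i|), so refitting with weights derived from the previous residuals walks toward the l1 solution. A minimal sketch for a single SG window under heavy-tailed noise:

```python
# IRLS l1 polynomial fit on one Savitzky-Golay window: repeatedly solve
# a weighted least-squares fit, reweighting by the previous residuals.
import numpy as np

def sg_l1_window(y, order=2, n_iter=20, eps=1e-6):
    """l1 polynomial fit to one window via iteratively reweighted l2."""
    x = np.arange(len(y)) - len(y) // 2
    A = np.vander(x, order + 1)
    w = np.ones(len(y))
    for _ in range(n_iter):
        Aw = A * w[:, None]
        coef = np.linalg.lstsq(Aw, y * w, rcond=None)[0]
        r = y - A @ coef
        w = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))   # IRLS weights for l1
    return A @ coef

rng = np.random.default_rng(5)
clean = 0.05 * (np.arange(21) - 10.0) ** 2
noisy = clean + rng.standard_t(df=1.5, size=21)         # heavy-tailed noise
print("l1-SG centre estimate:", sg_l1_window(noisy)[10].round(2),
      "true:", clean[10])
```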

Relevance: 10.00%

Abstract:

Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in^2 using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization in TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB SNR gain over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, achieving a channel bit density of 1.3 Tb/in^2 with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give a ~10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
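The essence of data-dependent noise prediction is that the noise statistics depend on the written bit pattern, so a separate linear predictor is trained per pattern and the detector whitens with the matching one. A 1-D toy of that idea (the paper's version is 2-D and sits inside the detector; the noise model here is synthetic):

```python
# 1-D toy of data-dependent noise prediction (DDNP): group noise samples
# by the local 2-bit written pattern, fit one least-squares predictor
# tap per pattern, and compare residual variance to raw noise variance.
import numpy as np

rng = np.random.default_rng(6)
n = 20_000
bits = rng.integers(0, 2, n)
trans = np.abs(np.diff(bits, prepend=bits[:1]))     # 1 at a bit transition

# Synthetic correlated media noise: AR(1) with jitter-like innovation
# variance that is larger at transitions.
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = 0.6 * noise[k - 1] + (0.05 + 0.25 * trans[k]) * rng.normal()

patterns = bits[:-1] * 2 + bits[1:]                 # local pattern at time k
pred_err = np.empty(n - 1)
for p in range(4):
    idx = np.where(patterns == p)[0]
    x, y = noise[idx], noise[idx + 1]               # predict n[k+1] from n[k]
    a = (x @ y) / (x @ x)                           # per-pattern LS predictor tap
    pred_err[idx] = y - a * x

print(f"raw noise var {noise.var():.4f} -> DDNP residual var {pred_err.var():.4f}")
```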

Relevance: 10.00%

Abstract:

In this paper, we propose an anomaly detection algorithm based on Histograms of Oriented Motion Vectors (HOMV) [1] in a sparse representation framework. Usual behavior is learned at each location by sparsely representing the HOMVs over normal feature bases learnt using an online dictionary learning algorithm. Anomalies are then detected based on the likelihood of occurrence of the sparse coefficients at that location. The proposed approach is found to be more robust than existing methods, as demonstrated by experiments on the UCSD Ped1 and UCSD Ped2 datasets.
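In such frameworks, a test descriptor that is poorly represented by the dictionary of normal bases receives a low likelihood, so the reconstruction residual after sparse coding can serve as an anomaly score. A sketch with a tiny orthogonal matching pursuit and a random stand-in dictionary (the paper learns its dictionary online from normal-behaviour descriptors, and its actual score is coefficient likelihood rather than raw residual):

```python
# Sparse-code a descriptor over a dictionary with a tiny OMP and flag
# an anomaly when the reconstruction residual is large.
import numpy as np

rng = np.random.default_rng(7)

def omp(D, y, k=3):
    """Orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef, residual

d, n_atoms = 32, 64
D = rng.normal(size=(d, n_atoms))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms

normal = D[:, [3, 17, 40]] @ np.array([1.0, -0.5, 0.8])   # lies in the model
anomalous = rng.normal(size=d)                             # does not

for name, y in [("normal", normal), ("anomalous", anomalous)]:
    *_, r = omp(D, y)
    print(f"{name:9s} relative residual = {np.linalg.norm(r) / np.linalg.norm(y):.3f}")
```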

Relevance: 10.00%

Abstract:

Selection of relevant features is an open problem in brain-computer interface (BCI) research. Features extracted from brain signals are often high-dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves the performance of the classifier and reduces the computational cost of the system. In this study, we have used a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG) based BCI dataset. We employed the Discrete Wavelet Transform to obtain a high-dimensional feature set and classified it using the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
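Wrapper-style selection of this kind scores candidate feature subsets by classifier accuracy and searches the space of binary masks. The sketch below uses a simplified bacterial-foraging-flavoured random search and a nearest-mean classifier as stand-ins for the paper's BFO + Learning Automata search and Distance Likelihood Ratio Test (all data synthetic):

```python
# Simplified wrapper feature selection: random tumble/swim search over
# binary feature masks, scored by held-out nearest-mean accuracy.
import numpy as np

rng = np.random.default_rng(8)
n_samples, n_features = 200, 30
X = rng.normal(size=(n_samples, n_features))
y = (X[:, :4].sum(1) > 0).astype(int)            # only first 4 features informative

def score(mask):
    """Hold-out-half nearest-mean accuracy on the selected features."""
    if not mask.any():
        return 0.0
    half = n_samples // 2
    Xtr, Xte, ytr, yte = X[:half, mask], X[half:, mask], y[:half], y[half:]
    mu0, mu1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - mu1, axis=1) < np.linalg.norm(Xte - mu0, axis=1)
    return (pred == yte.astype(bool)).mean()

mask = rng.random(n_features) < 0.5
best = score(mask)
for _ in range(300):                              # "tumble": flip a few mask bits
    trial = mask.copy()
    trial[rng.integers(0, n_features, 3)] ^= True
    s = score(trial)
    if s >= best:                                 # "swim": keep improving moves
        mask, best = trial, s
print(f"accuracy {best:.3f} with {mask.sum()} features selected")
```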