876 results for Signal filtering and prediction


Relevance:

100.00%

Publisher:

Abstract:

SecB, a soluble cytosolic chaperone component of the Sec export pathway, binds to newly synthesized precursor proteins, prevents their premature aggregation and folding, and subsequently targets them to the translocation machinery on the membrane. PreMBP, the precursor form of maltose binding protein (MBP), has a 26-residue signal sequence attached to the N-terminus of MBP and is a physiological substrate of SecB. We examine the effect of macromolecular crowding and SecB on the stability and refolding of denatured preMBP and MBP. PreMBP was less stable than MBP (ΔTm = 7 ± 0.5 K) in both crowded and uncrowded solutions. Crowding did not cause any substantial changes in the thermal stability of MBP (ΔTm = 1 ± 0.4 K) or preMBP (ΔTm = 0 ± 0.6 K), as observed in spectroscopically monitored thermal unfolding experiments. However, both MBP and preMBP were prone to aggregation while refolding under crowded conditions. In contrast to MBP aggregates, which were amorphous, preMBP aggregates form amyloid fibrils. Under uncrowded conditions, a molar excess of SecB was able to completely prevent aggregation and to promote disaggregation of preformed aggregates of MBP. When a complex of the denatured protein and SecB was preformed, SecB could completely prevent aggregation and promote folding of MBP and preMBP even in crowded solution. Thus, in addition to maintaining substrates in an unfolded, export-competent conformation, SecB also suppresses the aggregation of its substrates in the crowded intracellular environment. SecB is also able to promote passive disaggregation of macroscopic aggregates of MBP in the absence of an energy source such as ATP or additional cofactors. These experiments also demonstrate that a signal peptide can greatly influence protein stability and aggregation propensity.

Relevance:

100.00%

Publisher:

Abstract:

Four species of large mackerels (Scomberomorus spp.) co-occur in the waters off northern Australia and are important to fisheries in the region. State fisheries agencies monitor these species for fisheries assessment; however, data inaccuracies may exist owing to difficulties in identifying these closely related species, particularly when specimens are incomplete after fish processing. This study examined the efficacy of using otolith morphometrics to differentiate among the four mackerel species off northeastern Australia and to predict species identity. Seven otolith measurements and five shape indices were recorded from 555 mackerel specimens. Multivariate modelling, including linear discriminant analysis (LDA) and support vector machines (SVMs), successfully differentiated among the four species based on otolith morphometrics. Cross-validation determined a predictive accuracy of at least 96% for both models. The optimum predictive model for the four mackerel species was an LDA model that included fork length, feret length, feret width, perimeter, area, roundness, form factor and rectangularity as explanatory variables. This analysis may improve the accuracy of fisheries monitoring, the estimates based on this monitoring (e.g., mortality rates) and the overall management of mackerel species in Australia.
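As a rough sketch of this kind of analysis (not the authors' code; the file name and feature columns below are hypothetical stand-ins for the morphometrics listed above), scikit-learn can fit and cross-validate both an LDA and an SVM classifier:

```python
# Minimal sketch: species discrimination from otolith morphometrics.
# Assumes a hypothetical CSV with one row per specimen, a 'species' column,
# and morphometric columns matching the explanatory variables named above.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

df = pd.read_csv("otoliths.csv")  # hypothetical data file
features = ["fork_length", "feret_length", "feret_width", "perimeter",
            "area", "roundness", "form_factor", "rectangularity"]
X, y = df[features], df["species"]

for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```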

Relevance:

100.00%

Publisher:

Abstract:

The outcome of the successfully resuscitated patient is mainly determined by the extent of hypoxic-ischemic cerebral injury, and hypothermia has multiple mechanisms of action in mitigating such injury. The present study was undertaken from 1997 to 2001 in Helsinki as part of the European multicenter study Hypothermia After Cardiac Arrest (HACA), designed to test the neuroprotective effect of therapeutic hypothermia in patients resuscitated from out-of-hospital ventricular fibrillation (VF) cardiac arrest (CA). The aim of this substudy was to examine the neurological and cardiological outcome of these patients and, in particular, to study and develop methods for predicting outcome in hypothermia-treated patients. A total of 275 patients were randomized to the HACA trial in Europe. In Helsinki, 70 patients were enrolled in the study according to the inclusion criteria. Those randomized to hypothermia were actively cooled externally to a core temperature of 33 ± 1 °C for 24 hours with a cooling device. Serum markers of ischemic neuronal injury, NSE and S-100B, were sampled at 24, 36, and 48 hours after CA. Somatosensory and brain stem auditory evoked potentials (SEPs and BAEPs) were recorded 24 to 28 hours after CA; 24-hour ambulatory electrocardiography recordings were performed three times during the first two weeks, and arrhythmias and heart rate variability (HRV) were analyzed from the tapes. The clinical outcome was assessed 3 and 6 months after CA. Neuropsychological examinations were performed on the conscious survivors 3 months after CA, and quantitative electroencephalography (Q-EEG) and auditory P300 event-related potentials were studied at the same time-point. Therapeutic hypothermia at 33 °C for 24 hours led to an increased chance of good neurological outcome and survival after out-of-hospital VF CA. In the HACA study, 55% of hypothermia-treated patients and 39% of normothermia-treated patients reached a good neurological outcome at 6 months after CA (p = 0.009). Use of therapeutic hypothermia was not associated with any increase in clinically significant arrhythmias. The levels of serum NSE, but not of S-100B, were lower in hypothermia- than in normothermia-treated patients, and a decrease in NSE values between 24 and 48 hours was associated with good outcome at 6 months after CA. Decreasing levels of serum NSE, but not of S-100B, over time may indicate selective attenuation of delayed neuronal death by therapeutic hypothermia, and the time course of serum NSE between 24 and 48 hours after CA may help in clinical decision-making. In SEP recordings, bilaterally absent N20 responses predicted permanent coma with a specificity of 100% in both treatment arms; recording of BAEPs provided no additional benefit in outcome prediction. Preserved 24- to 48-hour HRV may be a predictor of favorable outcome in CA patients treated with hypothermia. At 3 months after CA, no differences appeared in any cognitive functions between the two groups: 67% of patients in the hypothermia group and 44% in the normothermia group were cognitively intact or had only very mild impairment. No significant differences emerged in any of the Q-EEG parameters between the two groups, but the amplitude of the P300 potential was significantly higher in the hypothermia-treated group. These results give further support to the use of therapeutic hypothermia in patients with sudden out-of-hospital CA.

Relevance:

100.00%

Publisher:

Abstract:

The present study examines the shrinkage behaviour of residually derived black cotton (BC) soil and red soil compacted specimens subjected to air-drying from the swollen state. The soil specimens were compacted at varying dry densities and moisture contents to simulate varied field conditions. The void ratio and moisture content of the swollen specimens were monitored during the drying process, and the relationship between them is analyzed. Shrinkage is represented as the reduction in void ratio with decreasing water content of the soil specimens and is found to occur in three distinct stages. The total shrinkage magnitude depends on the type of clay mineral present. Variations in compaction conditions affect the total shrinkage magnitude of BC soil specimens only marginally but have a relatively greater effect on red soil specimens. A linear relation is obtained between the total shrinkage magnitude and the volumetric water content of soil specimens in the swollen state, which can be used to predict the shrinkage magnitude of these soils.
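For illustration only (the numbers below are made up, not the study's data), such a linear relation can be fitted and applied with a one-line least-squares fit:

```python
# Minimal sketch (hypothetical data): fit a linear relation between total
# shrinkage magnitude and volumetric water content in the swollen state,
# then use it to predict the shrinkage of a new specimen.
import numpy as np

theta_swollen = np.array([42.0, 48.0, 53.0, 57.0, 61.0])    # volumetric water content, %
total_shrinkage = np.array([0.21, 0.28, 0.34, 0.39, 0.45])  # reduction in void ratio

slope, intercept = np.polyfit(theta_swollen, total_shrinkage, 1)
predict = lambda theta: slope * theta + intercept
print(f"predicted total shrinkage at 50% water content: {predict(50.0):.3f}")
```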

Relevance:

100.00%

Publisher:

Abstract:

One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, among the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared with using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. Allowing the kurtosis and skewness to be time-varying does not further improve the density forecasts but, on the contrary, makes them slightly worse. In Essay 3, a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
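As a hedged illustration of the Essay 1 setup (not the dissertation's code; the return series below is simulated), the Python arch package estimates a GARCH(1,1) model with a leptokurtic Student-t error distribution and produces variance forecasts:

```python
# Minimal sketch: GARCH(1,1) variance forecasting with a heavy-tailed error
# distribution. The returns here are simulated placeholders; in the essay the
# data are S&P 500 index futures returns.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)  # placeholder daily returns, in %

# dist="t" allows leptokurtic errors; dist="normal" gives the Gaussian benchmark.
model = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
result = model.fit(disp="off")
forecast = result.forecast(horizon=1)
print("next-day variance forecast:", forecast.variance.iloc[-1, 0])
```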

Relevance:

100.00%

Publisher:

Abstract:

In this two-part series of papers, a generalized non-orthogonal amplify-and-forward (GNAF) protocol which generalizes several known cooperative diversity protocols is proposed. Transmission in the GNAF protocol comprises two phases: the broadcast phase and the cooperation phase. In the broadcast phase, the source broadcasts its information to the relays as well as the destination. In the cooperation phase, the source and the relays together transmit a space-time code in a distributed fashion. The GNAF protocol relaxes the constraints imposed by the protocol of Jing and Hassibi on the code structure. In Part I of this paper, a code design criterion is obtained and it is shown that the GNAF protocol is both delay efficient and coding-gain efficient. Moreover, the GNAF protocol enables the use of sphere decoders at the destination with non-exponential maximum-likelihood (ML) decoding complexity. In Part II, several low-decoding-complexity code constructions are studied and a lower bound on the diversity-multiplexing gain tradeoff of the GNAF protocol is obtained.
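For intuition about the two-phase structure only, here is a toy single-symbol amplify-and-forward simulation under assumed powers and noise levels; it is not the GNAF code construction itself, which transmits a distributed space-time code in the cooperation phase:

```python
# Toy sketch of two-phase amplify-and-forward relaying over Rayleigh fading.
import numpy as np

rng = np.random.default_rng(0)
num_relays, P = 2, 1.0                           # transmit power (assumed)
noise = lambda n=None: 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
fade = lambda n=None: (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

s = rng.choice([-1.0, 1.0])                      # BPSK source symbol

# Broadcast phase: source -> relays and destination.
h_sr, h_sd = fade(num_relays), fade()
y_r = np.sqrt(P) * h_sr * s + noise(num_relays)  # received at relays
y_d1 = np.sqrt(P) * h_sd * s + noise()           # received at destination

# Cooperation phase: relays scale (amplify) and forward; destination combines.
g = np.sqrt(P / (P * np.abs(h_sr) ** 2 + 0.01))  # amplification factors
h_rd = fade(num_relays)
y_d2 = h_rd * g * y_r + noise(num_relays)

# Naive coherent combining + BPSK decision (stand-in for space-time decoding).
stat = np.real(np.conj(h_sd) * y_d1) + np.sum(np.real(np.conj(h_rd * g * h_sr) * y_d2))
print("decoded:", np.sign(stat), "sent:", s)
```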

Relevance:

100.00%

Publisher:

Abstract:

The Thesis presents a state-space model for a basketball league and a Kalman filter algorithm for estimating the state of the league. In the state-space model, each basketball team is associated with a rating that represents its strength relative to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least-squares-optimal estimates of the team strengths and predictions of the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65-70% of the time, the team ratings explain 16% of the random variation observed in the game scores, and the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home-court advantage observed in the scores. The Thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The Thesis also gives various example analyses for the American professional basketball league, the National Basketball Association (NBA), covering the regular seasons played from 2005 through 2010; the 2009-2010 season, including the playoffs, is discussed in full detail.
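A minimal sketch of such a filter, with assumed parameter values (process noise, observation variance, home advantage) and the plain random-walk rating model described above:

```python
# Kalman filter for team ratings: ratings follow a random walk, and the
# observed score difference is home rating - away rating + home advantage
# + Gaussian noise. Parameter values below are assumed, not the Thesis's.
import numpy as np

num_teams = 4
r = np.zeros(num_teams)                  # rating estimates
P = np.eye(num_teams) * 100.0            # estimate covariance
q, obs_var, home_adv = 0.5, 120.0, 3.0   # assumed variances and home advantage

def kalman_update(home, away, score_diff):
    """One Kalman filter step for a single game."""
    global r, P
    P = P + q * np.eye(num_teams)        # time update (random-walk ratings)
    H = np.zeros(num_teams)
    H[home], H[away] = 1.0, -1.0         # observation: r[home] - r[away]
    innov = score_diff - home_adv - H @ r  # prediction error
    S = H @ P @ H + obs_var              # innovation variance
    K = (P @ H) / S                      # Kalman gain
    r = r + K * innov                    # measurement update
    P = P - np.outer(K, H @ P)

kalman_update(home=0, away=1, score_diff=7)  # team 0 beat team 1 by 7 at home
print("ratings:", np.round(r, 2))
```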

Relevance:

100.00%

Publisher:

Abstract:

We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. Such a problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and thereby exactly computing the level-crossing times from those samples. The computational complexity of the technique is low, comprising simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective of the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. Finally, we discuss some concrete practical applications of the sampling technique.
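For intuition only: a naive estimate of level-crossing times interpolates linearly between the uniform samples that bracket the level. The paper's sub-Nyquist technique goes further and recovers the crossing instants exactly; the sketch below is just this baseline idea:

```python
# Naive sketch: approximate level-crossing times from uniform samples by
# linear interpolation between the samples that bracket the level.
import numpy as np

fs, level = 100.0, 0.5                  # sampling rate (Hz) and threshold level
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 3 * t)           # example analog signal, sampled

above = x >= level                      # comparator output (bilevel waveform)
idx = np.flatnonzero(np.diff(above.astype(int)) != 0)  # bracketing samples
crossings = t[idx] + (level - x[idx]) / (x[idx + 1] - x[idx]) / fs
print("estimated crossing times:", np.round(crossings, 4))
```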

Relevance:

100.00%

Publisher:

Abstract:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f_s and f_g, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f_s and f_g is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings lie on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
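A minimal sketch of the second approach (with an assumed sensing model, threshold and counting rule; not the paper's optimized design):

```python
# Each sensor applies a two-level (one-bit) threshold quantizer to its reading
# and broadcasts the bit; a fusion center applies a counting rule to declare
# intruder vs. clutter. All parameter values below are assumed.
import numpy as np

rng = np.random.default_rng(1)
num_sensors, tau, k = 10, 0.8, 3   # threshold tau and counting-rule parameter k

intruder_present = True
signal = 1.0 if intruder_present else 0.0
# Local sensing: signal decays with (random) distance to intruder, plus noise.
readings = signal * np.exp(-rng.uniform(0, 2, num_sensors)) \
           + rng.normal(0, 0.3, num_sensors)

bits = (readings > tau).astype(int)  # one-bit quantization at each sensor
decision = bits.sum() >= k           # fusion: intruder if at least k sensors fire
print("bits:", bits, "-> intruder" if decision else "-> clutter")
```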

Relevance:

100.00%

Publisher:

Abstract:

South peninsular India receives a large portion of its annual rainfall during the northeast monsoon season (October to December). In this study, facets of the diurnal, intra-seasonal and inter-annual variability of the northeast monsoon rainfall (NEMR) over India are examined. Analysis of satellite-derived hourly rainfall reveals distinct features of diurnal variation over land and ocean during the season: over land, rainfall peaks during the late afternoon/evening, while over the oceans an early-morning peak is observed. Harmonic analysis of the hourly data reveals that the amplitude and variance are largest over south peninsular India. The NEMR also exhibits significant intra-seasonal variability on a 20-40 day time scale, and the analysis shows significant northward propagation of the maximum cloud zone from south of the equator to the south peninsula during the season. The NEMR exhibits large inter-annual variability, with a coefficient of variation (CV) of 25%. The positive phases of ENSO and the Indian Ocean Dipole (IOD) are conducive to normal to above-normal rainfall activity during the northeast monsoon. There are multi-decadal variations in the statistical relationship between ENSO and the NEMR; during the period 2001-2010 this relationship weakened significantly. Analysis of seasonal rainfall hindcasts for the period 1960-2005, produced by state-of-the-art coupled climate models from the ENSEMBLES project, reveals that the coupled models have very poor skill in predicting the inter-annual variability of the NEMR, mainly because the ENSEMBLES models are unable to simulate the positive relationship between ENSO and the NEMR correctly.
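As a small illustration of the harmonic analysis (synthetic data, not the satellite product used in the study), the amplitude and peak hour of the first diurnal harmonic can be read off the FFT of the mean hourly rainfall:

```python
# First diurnal harmonic of a 24-hour mean rainfall cycle: amplitude and
# hour of maximum, estimated from the discrete Fourier transform.
import numpy as np

hours = np.arange(24)
# Synthetic mean hourly rainfall with a late-afternoon peak (mm/h):
rain = 1.0 + 0.6 * np.cos(2 * np.pi * (hours - 17) / 24) \
       + 0.05 * np.random.default_rng(0).normal(size=24)

spec = np.fft.rfft(rain)
amp = 2 * np.abs(spec[1]) / 24                             # harmonic amplitude
peak_hour = (-np.angle(spec[1])) * 24 / (2 * np.pi) % 24   # hour of maximum
print(f"diurnal amplitude {amp:.2f} mm/h, peak near hour {peak_hour:.1f}")
```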

Relevance:

100.00%

Publisher:

Abstract:

Resistance to therapy limits the effectiveness of drug treatment in many diseases. Drug resistance can be considered a successful outcome of the bacterial struggle to survive in the hostile environment of a drug-exposed cell, and an important mechanism by which bacteria acquire drug resistance is through mutations in the drug target. Drug-resistant strains (multi-drug resistant and extensively drug resistant) of Mycobacterium tuberculosis are being identified at alarming rates, increasing the global burden of tuberculosis. An understanding of the nature of mutations in different drug targets, and of how they achieve resistance, is therefore important. An objective of this study is first to decipher the sequence and structural bases for the observed resistance in known drug-resistant mutants and then to predict positions in each target that are more prone to acquiring drug-resistant mutations. A curated database of hundreds of resistance-associated mutations in the 38 drug targets of nine major clinical drugs is studied here. Mutations have been classified into those that occur in the binding site itself, those that occur in residues interacting with the binding site, and those that occur in outer zones. Structural models of the wild-type and mutant forms of the target proteins have been analysed to seek explanations for the reduction in drug binding, and a stability analysis of the entire array of 19 possible mutations at each residue of each target has been computed from structural models. Conservation indices of individual residues, binding sites and whole proteins are computed based on sequence conservation analysis of the target proteins. The analyses lead to insights into which positions in the polypeptide chain have a higher propensity to acquire drug-resistant mutations. Critical insights can thus be obtained into the effect of mutations on drug binding, in terms of which amino acid positions, and therefore which interactions, should not be heavily relied upon; these insights can in turn be translated into guidelines for modifying existing drugs as well as for designing new ones. The methodology can serve as a general framework for studying drug-resistant mutants in other micro-organisms as well.
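A toy sketch of the three-zone classification described above (coordinates and distance cutoffs are hypothetical; the study's actual criteria may differ):

```python
# Classify mutated residues into binding-site, binding-site-interacting, and
# outer zones by their minimum atomic distance to the bound drug.
import numpy as np

rng = np.random.default_rng(2)
ligand_atoms = rng.uniform(0, 10, size=(20, 3))  # placeholder drug coordinates (Å)
residue_atoms = {i: rng.uniform(0, 30, size=(8, 3)) for i in range(1, 51)}

def zone(res_xyz, lig_xyz, site_cut=4.5, interact_cut=8.0):  # assumed cutoffs
    """Minimum residue-ligand atom distance decides the zone."""
    d = np.linalg.norm(res_xyz[:, None, :] - lig_xyz[None, :, :], axis=-1).min()
    if d <= site_cut:
        return "binding site"
    return "interacting" if d <= interact_cut else "outer zone"

for res_id in (1, 2, 3):
    print(res_id, zone(residue_atoms[res_id], ligand_atoms))
```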

Relevance:

100.00%

Publisher:

Abstract:

Entropy is a fundamental thermodynamic property that has attracted wide attention across domains, including chemistry. Inference of the entropy of chemical compounds using various approaches has been a widely studied topic; however, many aspects of entropy in chemical compounds remain unexplained. In the present work, we propose two new information-theoretic molecular descriptors for the prediction of the gas-phase thermal entropy of organic compounds. The descriptors reflect the bulk and size of the compounds as well as the gross topological symmetry in their structures, all of which are believed to determine entropy. A high correlation between the entropy values and our information-theoretic indices has been found, and the predicted entropy values, obtained from the corresponding statistically significant regression model, are within acceptable approximation error. We provide an additional mathematical result, in the form of a theorem and proof, that might further help in assessing changes in gas-phase thermal entropy values with changes in molecular structure. The proposed information-theoretic molecular descriptors, regression model and mathematical result are expected to augment predictions of gas-phase thermal entropy for a large number of chemical compounds.
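As a generic example of an information-theoretic index on a molecular graph (not necessarily either of the two descriptors proposed in the paper), one can compute the Shannon entropy of the vertex-degree partition:

```python
# Shannon entropy of the vertex-degree partition of a molecular graph,
# here the hydrogen-suppressed carbon skeleton of n-butane: C1-C2-C3-C4.
import math
from collections import Counter

adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}

degrees = [len(nbrs) for nbrs in adjacency.values()]
counts = Counter(degrees)                 # partition vertices by degree
n = len(degrees)
info_index = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"degree-partition information index: {info_index:.3f} bits")
```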

Relevance:

100.00%

Publisher:

Abstract:

Binaural hearing studies show that the auditory system uses the phase-difference information in auditory stimuli for localization of a sound source. Motivated by this finding, we present a method for demodulation of amplitude-modulated-frequency-modulated (AM-FM) signals using a signal and its arbitrarily phase-shifted version. The demodulation is achieved using two allpass filters whose impulse responses are related through the fractional Hilbert transform (FrHT). The allpass filters are obtained by cosine modulation of a zero-phase flat-top prototype halfband lowpass filter. The outputs of the filters are combined to construct an analytic signal (AS) from which the AM and FM are estimated. We show that, under certain assumptions on the signal and the filter structures, the AM and FM can be obtained exactly. The AM-FM calculations are based on the quasi-eigenfunction approximation. We then extend the concept to the demodulation of multicomponent signals using uniform and non-uniform cosine-modulated filterbank (FB) structures consisting of flat bandpass filters, including the uniform cosine-modulated, equivalent rectangular bandwidth (ERB), and constant-Q filterbanks. We validate the theoretical calculations on synthesized AM-FM signals and compare the performance in the presence of noise with three other multiband demodulation techniques, namely the Teager-energy-based approach, Gabor's AS approach, and the linear transduction filter approach. We also show demodulation results for real signals.
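For reference, Gabor's analytic-signal approach, one of the baselines the paper compares against, can be sketched with scipy's hilbert (the proposed method instead builds the AS from FrHT-related allpass filters):

```python
# Analytic-signal AM-FM demodulation: envelope gives the AM, the derivative
# of the unwrapped phase gives the instantaneous frequency (FM).
import numpy as np
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
am = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)             # slowly varying amplitude
phase = 2 * np.pi * 440 * t + 20 * np.sin(2 * np.pi * 3 * t)
x = am * np.cos(phase)                                 # synthetic AM-FM signal

z = hilbert(x)                                         # analytic signal x + j*H{x}
am_est = np.abs(z)                                     # AM estimate (envelope)
fm_est = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)  # inst. freq. (Hz)
print("AM estimate:", am_est[:3], "FM estimate:", fm_est[:3])
```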