132 results for INTRINSICALLY MULTIVARIATE PREDICTION
Abstract:
Epoch is defined as the instant of significant excitation within a pitch period of voiced speech. Epoch extraction continues to attract the interest of researchers because of its significance in speech analysis. Existing high-performance epoch extraction algorithms require either dynamic programming techniques or a priori information on the average pitch period. An algorithm without such requirements is proposed, based on the integrated linear prediction residual (ILPR), which resembles the voice source signal. The half-wave rectified and negated ILPR (or the Hilbert transform of the ILPR) is used as the pre-processed signal. A new non-linear temporal measure named the plosion index (PI) is proposed for detecting 'transients' in the speech signal. An extension of the PI, called the dynamic plosion index (DPI), is applied to the pre-processed signal to estimate the epochs. The proposed DPI algorithm is validated using six large databases which provide simultaneous EGG recordings. Creaky and singing voice samples are also analyzed. The algorithm has been tested for its robustness in the presence of additive white and babble noise and on simulated telephone-quality speech. The performance of the DPI algorithm is found to be comparable to or better than five state-of-the-art techniques for the experiments considered.
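As a concrete illustration of the transient-detection idea, here is a minimal sketch of a plosion-index-style measure: the ratio of a sample's magnitude to the average magnitude of a window of preceding samples. The window bounds `m1`, `m2` and the synthetic signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def plosion_index(s, n, m1=2, m2=20):
    """Plosion-index-style measure at sample n: ratio of |s[n]| to the mean
    absolute value of the samples s[n-m2]..s[n-m1] preceding it. The window
    bounds here are illustrative, not the paper's values."""
    if n - m2 < 0:
        raise ValueError("not enough history before sample n")
    baseline = np.mean(np.abs(s[n - m2:n - m1 + 1]))
    return np.abs(s[n]) / (baseline + 1e-12)

# A synthetic 'transient': low-level noise with one large spike.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(200)
x[100] = 1.0
pi_at_spike = plosion_index(x, 100)   # large: spike vs quiet history
pi_elsewhere = plosion_index(x, 150)  # near 1: noise vs noise history
```

A transient such as an epoch candidate thus stands out as a sharp peak of the index, which the DPI algorithm then processes further.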
Abstract:
This work studies the fluid dynamic behavior of a two-phase flow comprising a solid and a liquid of nearly equal density, in a geometry of industrial significance in areas such as the processing of polymers, food, pharmaceuticals and paints. Crystalline silica is considered as the dispersed medium in glycerin. In the CFD analysis carried out, the two-phase components are considered to be premixed homogeneously at the initial state. The flow considered is in a cylinder with an axially driven bi-lobe rotor, a typical blender used in the polymer industry for mixing or kneading a multi-component mixture to a homogeneous condition. A viscous, incompressible, isothermal flow is assumed, with the components undergoing no physical change and the solids rigid and mixing under fully wetting conditions. Silica with a particle diameter of 0.4 mm is considered, and the flow is analyzed for different mixing fractions. An industry-standard CFD code is used to solve the 3D RANS equations. As the outcome of the study, the torque demand of the bi-lobe rotor estimated for different mixture fractions shows a behavioral consistency with the physical phenomena expected in the domain considered.
Abstract:
Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix.
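The computational convenience described here can be sketched as follows: estimate the spectral density matrix once for all channels, then take submatrices for any channel subset. The averaged-periodogram estimator below is a generic stand-in; the paper's pipeline additionally factorizes the (sub)matrix, e.g. via Wilson's algorithm, which is omitted here.

```python
import numpy as np

def spectral_density_matrix(X, nfft=64):
    """Averaged-periodogram estimate of the spectral density matrix of a
    multivariate time series X (channels x samples). A minimal sketch; the
    framework in the abstract then factorizes this matrix to obtain
    Granger causality."""
    n_ch, n = X.shape
    n_seg = n // nfft
    S = np.zeros((nfft, n_ch, n_ch), dtype=complex)
    for k in range(n_seg):
        seg = X[:, k * nfft:(k + 1) * nfft]
        F = np.fft.fft(seg, axis=1)           # per-channel spectra
        for f in range(nfft):
            v = F[:, f][:, None]
            S[f] += v @ v.conj().T            # cross-periodogram at freq f
    return S / (n_seg * nfft)

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 1024))
S = spectral_density_matrix(X)
# For any subset of channels, no refitting is needed: the subset's spectral
# matrix is simply the corresponding submatrix.
idx = [0, 2]
S_sub = S[:, idx][:, :, idx]
```

The point is that `S` is computed once; every subset analysis starts from a slice of it rather than from a fresh autoregressive fit.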
Abstract:
The present work deals with the prediction of the stiffness of an Indian nanoclay-reinforced polypropylene composite (that can be termed a nanocomposite) using a Monte Carlo finite element analysis (FEA) technique. Nanocomposite samples are first prepared in the laboratory using a torque rheometer to achieve desirable dispersion of nanoclay during master batch preparation, followed by extrusion for the fabrication of tensile test dog-bone specimens. It has been observed through SEM (scanning electron microscopy) images of the prepared nanocomposite containing a given percentage (3–9% by weight) of the considered nanoclay that nanoclay platelets tend to remain in clusters. By ascertaining the average size of these nanoclay clusters from the images mentioned, a planar finite element model is created in which nanoclay groups and polymer matrix are modeled as separate entities assuming a given homogeneous distribution of the nanoclay clusters. Using a Monte Carlo simulation procedure, the distribution of nanoclay is varied randomly in an automated manner in a commercial FEA code, and virtual tensile tests are performed to compute the linear stiffness for each case. The values of computed stiffness modulus of highest frequency for nanocomposites with different nanoclay contents correspond well with the experimentally obtained measures of stiffness, establishing the effectiveness of the present approach for further applications.
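The Monte Carlo loop described here can be sketched in miniature: scatter clusters randomly over a grid, evaluate a stiffness for each realization, and report the modal (highest-frequency) value. The moduli, grid size, cluster count and the toy series/parallel stiffness rule below are all illustrative assumptions standing in for the paper's FEA virtual tensile tests.

```python
import numpy as np

rng = np.random.default_rng(2)
E_matrix, E_clay = 1.5, 180.0   # illustrative moduli (GPa), not measured values
n_cells, n_clusters = 400, 12   # 20x20 grid; cluster count is an assumption

def virtual_stiffness():
    """One Monte Carlo realization: scatter nanoclay clusters randomly on a
    grid and return a toy series/parallel (Reuss-over-Voigt) stiffness. This
    stands in for the paper's FEA virtual tensile test."""
    cells = np.full(n_cells, E_matrix)
    sites = rng.choice(n_cells, size=n_clusters, replace=False)
    cells[sites] = E_clay
    rows = cells.reshape(20, 20)
    row_E = rows.mean(axis=1)            # Voigt (parallel) within a row
    return 1.0 / np.mean(1.0 / row_E)    # Reuss (series) across rows

samples = np.array([virtual_stiffness() for _ in range(500)])
# Report the highest-frequency (modal) stiffness, as in the abstract.
hist, edges = np.histogram(samples, bins=20)
modal_E = edges[np.argmax(hist)]
```

Randomizing the cluster placement while holding the filler fraction fixed is what produces the stiffness distribution from which the modal value is read off.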
Abstract:
The present paper details the prediction of blast-induced ground vibration using an artificial neural network (ANN). The data were generated from five different coal mines. Twenty-one different parameters, covering rock mass, explosive and blast design parameters, were used to develop one comprehensive ANN model for five different coal-bearing formations. A total of 131 datasets were used to develop the ANN model and 44 datasets were used to test it. The developed ANN model was compared with the USBM model, and the capability of the comprehensive ANN model to predict blast-induced ground vibration was found to be superior.
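A minimal sketch of the setup: a small one-hidden-layer network mapping 21 inputs to a vibration target, with a 131/44 train/test split as in the abstract. The data here are synthetic and the architecture and training settings are assumptions; the abstract does not specify the paper's network.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in data: 21 input parameters (rock mass, explosive and
# blast design), a ground-vibration target; 131 training and 44 test
# records mirror the abstract's split, but the values here are fake.
w_true = 0.5 * rng.standard_normal(21)
X_train, X_test = rng.standard_normal((131, 21)), rng.standard_normal((44, 21))
y_train, y_test = X_train @ w_true, X_test @ w_true

# One-hidden-layer tanh network trained by plain gradient descent.
H, lr = 16, 0.05
W1 = 0.1 * rng.standard_normal((21, H))
W2 = 0.1 * rng.standard_normal(H)

def forward(X):
    return np.tanh(X @ W1) @ W2

mse0 = np.mean((forward(X_train) - y_train) ** 2)  # error before training
for _ in range(3000):
    h = np.tanh(X_train @ W1)
    err = h @ W2 - y_train
    W2 = W2 - lr * h.T @ err / len(err)                       # output layer
    W1 = W1 - lr * X_train.T @ ((err[:, None] * W2) * (1 - h ** 2)) / len(err)

mse_train = np.mean((forward(X_train) - y_train) ** 2)
mse_test = np.mean((forward(X_test) - y_test) ** 2)
```

The held-out 44 records play the role of the abstract's test datasets: the model is judged on data it never saw during fitting.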
Abstract:
Microorganisms exhibit varied regulatory strategies such as direct regulation, symmetric anticipatory regulation, asymmetric anticipatory regulation, etc. Current mathematical modeling frameworks for the growth of microorganisms either do not incorporate regulation or assume that the microorganisms utilize the direct regulation strategy. In the present study, we extend the cybernetic modeling framework to account for asymmetric anticipatory regulation strategy. The extended model accurately captures various experimental observations. We use the developed model to explore the fitness advantage provided by the asymmetric anticipatory regulation strategy and observe that the optimal extent of asymmetric regulation depends on the selective pressure that the microorganisms experience. We also explore the importance of timing the response in anticipatory regulation and find that there is an optimal time, dependent on the extent of asymmetric regulation, at which microorganisms should respond anticipatorily to maximize their fitness. We then discuss the advantages offered by the cybernetic modeling framework over other modeling frameworks in modeling the asymmetric anticipatory regulation strategy. (C) 2013 Published by Elsevier Inc.
Abstract:
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naive Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (approximately 85%) and specific (approximately 95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in more than one protein architectural context. Using the predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of the domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. Proteins 2014; 82:1219-1234. (c) 2013 Wiley Periodicals, Inc.
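The classifier at the core of this pipeline is standard; a self-contained Gaussian naive Bayes sketch follows. The two features and the synthetic residue labels are illustrative stand-ins for the paper's structure-derived and evolutionary features.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy stand-in features per residue (e.g. a conservation score and a
# surface-exposure score); class 1 = interface residue, 0 = non-interface.
n = 300
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 2)) + y[:, None] * 2.0   # separated classes

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means and variances,
    with independence assumed across features."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        # log-likelihood of each sample under each class's Gaussian model
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

acc = np.mean(GaussianNB().fit(X, y).predict(X) == y)
```

The attraction of naive Bayes here is that per-residue features can be noisy and heterogeneous, yet the class-conditional independence assumption keeps the model trainable from modest amounts of labeled interface data.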
Abstract:
Tuberculosis (TB) is a life-threatening disease caused by infection with Mycobacterium tuberculosis (Mtb). Since most TB strains have become resistant to various existing drugs, the development of effective novel drug candidates to combat this disease is a pressing need. In spite of intensive research worldwide, the success rate of discovering a new anti-TB drug is very poor. Therefore, novel drug discovery methods have to be tried. We have used a rule-based computational method that utilizes a vertex index, named the 'distance exponent index (D^x)' (with x = -4 here), for predicting the anti-TB activity of a series of acid alkyl ester derivatives. The method is meant to identify activity-related substructures from a series of compounds and predict the activity of a compound on that basis. The high degree of successful prediction in the present study suggests that the said method may be useful in discovering effective anti-TB compounds. It is also apparent that substructural approaches may be leveraged widely in computer-aided drug design.
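Distance-based vertex indices of this kind are computed from the topological distance matrix of the hydrogen-suppressed molecular graph. The sketch below computes one plausible reading of such an index, a per-vertex sum of d(u,v)**x over the other vertices with x = -4; the paper's exact definition may differ, and the butane-like path graph is only an example.

```python
import numpy as np
from collections import deque

def distance_matrix(adj):
    """All-pairs shortest-path distances of an unweighted molecular graph,
    via breadth-first search from each vertex."""
    n = len(adj)
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if D[s, v] == np.inf:
                    D[s, v] = D[s, u] + 1
                    q.append(v)
    return D

def distance_exponent_index(adj, x=-4):
    """Per-vertex index: sum of d(u,v)**x over all other vertices v. One
    plausible reading of the abstract's D^x, not a verified definition."""
    D = distance_matrix(adj)
    n = len(adj)
    mask = ~np.eye(n, dtype=bool)                 # exclude the d=0 diagonal
    return (np.where(mask, D, 1.0) ** x * mask).sum(axis=1)

# Path graph 0-1-2-3, e.g. a butane-like carbon skeleton.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dx = distance_exponent_index(adj)
```

With x = -4 distant vertices contribute almost nothing, so the index is dominated by each atom's immediate neighborhood, which is what makes it useful for flagging local activity-related substructures.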
Abstract:
Time-varying linear prediction has been studied in the context of speech signals, in which the auto-regressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least squares minimization is used for the estimation of the model parameters of the system. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals using sparsity constraints. Parameter estimation is posed as an l0-norm minimization problem, and the re-weighted l1-norm minimization technique is used to estimate the model parameters. We show that for sparsely excited time-varying systems, this formulation models the underlying system function better than the least squares error minimization approach. Evaluation with synthetic and real speech examples shows that the estimated model parameters track the formant trajectories more closely than the least squares approach.
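The re-weighted l1 technique itself is generic and can be sketched on a plain sparse-recovery problem: each round solves a weighted l1-regularized least squares problem (here by FISTA), then re-weights each coordinate by 1/(|x_i| + eps) to better approximate the l0 objective. This is a generic sketch of the technique, not the paper's speech-specific formulation; the problem sizes and sparsity pattern are illustrative.

```python
import numpy as np

def weighted_l1_fista(A, b, w, lam=0.1, n_iter=1000):
    """Solve min_x 0.5*||Ax - b||^2 + lam * sum_i w_i*|x_i| by FISTA
    (accelerated proximal gradient with soft thresholding)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        u = z - A.T @ (A @ z - b) / L
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam * w / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def reweighted_l1(A, b, n_rounds=5, eps=1e-3):
    """Re-weighted l1 minimization: weights 1/(|x_i| + eps) from the previous
    round's solution push small coefficients toward zero, approximating l0."""
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_rounds):
        x = weighted_l1_fista(A, b, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Recover a 3-sparse vector from 60 noiseless random measurements.
rng = np.random.default_rng(5)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [1.0, -2.0, 1.5]
x_hat = reweighted_l1(A, A @ x_true)
```

In the speech setting the sparse vector plays the role of the excitation-driven residual, and the same re-weighting loop wraps the time-varying LP parameter estimate.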
Abstract:
High wind poses a number of hazards in areas such as structural safety, aviation, wind energy (where low wind speed is also a concern), and pollutant transport, to name a few. Therefore, a good prediction tool for wind speed is necessary in these areas. Like many other natural processes, the behavior of wind is associated with considerable uncertainties stemming from different sources, so to develop a reliable prediction tool for wind speed, these uncertainties should be taken into account. In this work, we propose a probabilistic framework for the prediction of wind speed from measured spatio-temporal data. The framework is based on decompositions of the spatio-temporal covariance and simulation using these decompositions. A novel simulation method based on a tensor decomposition is used in this context. The proposed framework is composed of a set of four modules, and the modules have the flexibility to accommodate further modifications. The framework is applied to measured wind speed data from Ireland. Both short- and long-term predictions are addressed.
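The "decompose the covariance, then simulate" idea can be sketched in its simplest (purely spatial) form: eigendecompose a sample covariance and drive the eigenmodes with fresh uncorrelated noise, Karhunen-Loeve style. The station count, correlation structure and wind-speed scale below are synthetic assumptions; the paper's method uses a richer spatio-temporal tensor decomposition.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic stand-in for measured wind speeds: n_t records at n_s
# spatially correlated stations (the paper uses Irish measurement data).
n_s, n_t = 5, 2000
L_true = np.linalg.cholesky(0.5 * np.ones((n_s, n_s)) + 0.5 * np.eye(n_s))
X = 8.0 + 2.0 * (L_true @ rng.standard_normal((n_s, n_t)))

# Decompose the sample spatial covariance and simulate new fields by
# exciting the eigenmodes with independent standard normal coefficients.
mu = X.mean(axis=1, keepdims=True)
C = np.cov(X)
vals, vecs = np.linalg.eigh(C)
vals = np.clip(vals, 0.0, None)          # guard against tiny negatives
sims = mu + vecs @ (np.sqrt(vals)[:, None] * rng.standard_normal((n_s, 5000)))
C_sim = np.cov(sims)                     # should reproduce C
```

The simulated ensemble reproduces the measured covariance structure, which is what lets such simulations feed probabilistic short- and long-term predictions.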
Abstract:
The performance of prediction models is often based on "abstract metrics" that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application independent and dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefit both data mining researchers and practitioners.
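To make the measure dimensions concrete, here are minimal sketches of one possible metric per dimension: a scale-independent error (CV-RMSE), a percentage error, and a residual-volatility measure. These are common generic metrics chosen to illustrate the dimensions named in the abstract, not necessarily the paper's exact definitions; the energy series is made up.

```python
import numpy as np

def cvrmse(y, yhat):
    """Coefficient of variation of RMSE: RMSE divided by the mean observation,
    a scale-independent error measure (the 'scale independence' dimension)."""
    return np.sqrt(np.mean((y - yhat) ** 2)) / np.mean(y)

def mape(y, yhat):
    """Mean absolute percentage error."""
    return np.mean(np.abs((y - yhat) / y))

def error_volatility(y, yhat):
    """Std of consecutive residual changes: one simple reading of the
    'volatility' dimension (how erratically the error moves over time)."""
    e = y - yhat
    return np.std(np.diff(e))

# Illustrative daily energy-use series (kWh) and a 10%-biased prediction.
y = np.array([20.0, 22.0, 19.0, 25.0, 24.0])
yhat = y * 1.1
```

Because these measures are ratios or residual statistics, models serving buildings of very different sizes can be compared on one footing, which is the point of application-sensitive evaluation.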
Abstract:
We consider refined versions of Markov chains related to juggling introduced by Warrington. We further generalize the construction to juggling with arbitrary heights as well as infinitely many balls, which are expressed more succinctly in terms of Markov chains on integer partitions. In all cases, we give explicit product formulas for the stationary probabilities. The normalization factor in one case can be explicitly written as a homogeneous symmetric polynomial. We also refine and generalize enriched Markov chains on set partitions. Lastly, we prove that in one case, the stationary distribution is attained in bounded time.
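Stationary distributions of finite Markov chains, which the abstract gives in explicit product form for the juggling chains, can always be checked numerically as the normalized left eigenvector of the transition matrix for eigenvalue 1. The chain below is a generic 3-state example, not one of the juggling chains.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible finite Markov chain:
    the left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(vals - 1.0))     # pick the eigenvalue closest to 1
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

# A small illustrative 3-state chain (rows sum to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])
pi = stationary(P)
```

For chains with product-form stationary laws, such a numerical check is a quick way to validate the closed-form expression state by state on small instances.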
Abstract:
The notion of structure is central to the subject of chemistry. This review traces the development of the idea of crystal structure since the time when a crystal structure could be determined from a three-dimensional diffraction pattern and assesses the feasibility of computationally predicting an unknown crystal structure of a given molecule. Crystal structure prediction is of considerable fundamental and applied importance, and its successful execution is by no means a solved problem. The ease of crystal structure determination today has resulted in the availability of large numbers of crystal structures of higher-energy polymorphs and pseudopolymorphs. These structural libraries lead to the concept of a crystal structure landscape. A crystal structure of a compound may accordingly be taken as a data point in such a landscape.
Abstract:
We show that in studies of light quark- and gluon-initiated jet discrimination, it is important to include the information on softer reconstructed jets (associated jets) around a primary hard jet. This is particularly relevant while adopting a small radius parameter for reconstructing hadronic jets. The probability of having an associated jet as a function of the primary jet transverse momentum (pT) and radius, the minimum associated jet pT, and the association radius is computed up to next-to-double logarithmic accuracy (NDLA), and the predictions are compared with results from Herwig++, Pythia6 and Pythia8 Monte Carlos (MC). We demonstrate the improvement in quark-gluon discrimination on using the associated jet rate variable with the help of a multivariate analysis. The associated jet rates are found to be only mildly sensitive to the choice of parton shower and hadronization algorithms, as well as to the effects of initial state radiation and underlying event. In addition, the number of kt subjets of an anti-kt jet is found to be an observable that leads to a rather uniform prediction across different MCs, broadly in agreement with predictions in NDLA, as compared with the often used number of charged tracks observable.