993 results for intra prediction
Abstract:
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naive Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (approximately 85%) and specific (approximately 95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one protein architectural context. Using the predicted residues to constrain the domain-domain interaction, rigid-body docking was able to provide accurate full-length protein structures with the correct orientation of the domains. We believe that these results can be of considerable interest for rational protein and interaction design, apart from providing valuable information on the nature of the interactions. Proteins 2014; 82:1219-1234. (c) 2013 Wiley Periodicals, Inc.
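As an aside to this abstract, the naive Bayes step it mentions can be illustrated with a minimal sketch. The per-residue features below (relative solvent accessibility and a conservation score) and all numbers are illustrative assumptions, not the paper's actual descriptors or data.

```python
# Minimal sketch of a naive Bayes interface/non-interface residue classifier.
# Feature names and distributions are hypothetical stand-ins.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: [relative solvent accessibility, conservation score]
X_interface = rng.normal(loc=[0.15, 0.8], scale=0.10, size=(n // 5, 2))
X_other = rng.normal(loc=[0.45, 0.5], scale=0.15, size=(4 * n // 5, 2))
X = np.vstack([X_interface, X_other])
y = np.array([1] * (n // 5) + [0] * (4 * n // 5))  # 1 = interface residue

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```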
Abstract:
Tuberculosis (TB) is a life-threatening disease caused by infection with Mycobacterium tuberculosis (Mtb). Because most TB strains have become resistant to various existing drugs, the development of effective novel drug candidates to combat this disease is a pressing need. In spite of intensive research worldwide, the success rate of discovering a new anti-TB drug is very poor. Therefore, novel drug discovery methods have to be tried. We have used a rule-based computational method that utilizes a vertex index, named the `distance exponent index (D^x)' (with x = -4 here), for predicting the anti-TB activity of a series of acid alkyl ester derivatives. The method is meant to identify activity-related substructures from a series of compounds and predict the activity of a compound on that basis. The high degree of successful prediction in the present study suggests that the method may be useful in discovering effective anti-TB compounds. It is also apparent that substructural approaches may be leveraged for a wide range of purposes in computer-aided drug design.
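A distance-exponent-type vertex index of this kind can be sketched as a sum over topological distances in the hydrogen-suppressed molecular graph raised to the power x. The summation form below is an assumption about the index; the paper's exact definition may differ.

```python
# Illustrative computation of a distance-exponent-type index with x = -4.
# The summation over topological distances is an assumed form, not necessarily
# the paper's exact definition of D^x.
import networkx as nx

def distance_exponent_index(graph, x=-4):
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    total = 0.0
    for u in graph:
        for v in graph:
            if u < v:  # count each unordered vertex pair once
                total += dist[u][v] ** x
    return total

# Hydrogen-suppressed skeleton of n-butane as a toy example: a 4-vertex path.
butane = nx.path_graph(4)
print(distance_exponent_index(butane, x=-4))
```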
Abstract:
Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from micro- to nanolitres/min. Moreover, this opens up in vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of the fluorescently labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine. (C) 2014 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
Abstract:
One hundred complexes exhibiting D-X···A interactions have been investigated, where X = H, Cl or Li, D-X is the `X bond' donor and A is the acceptor. The optimized structures of all these complexes have been used to propose a generalized `Legon-Millen rule' for the angular geometry in all these interactions. A detailed Atoms in Molecules (AIM) theoretical analysis confirms an important conclusion known in the literature: there is a strong correlation between the electron density at the X···A bond critical point (BCP) and the interaction energy for all these interactions. In addition, we show that extrapolation of the fitted line leads to the ionic bond for Li-bonding (electrostatic), while for hydrogen and chlorine bonding it leads to the covalent bond. Further, we observe a strong correlation between the change in electron density at the D-X BCP and that at the X···A BCP, suggesting conservation of the bond order. The correlation found between penetration and electron density at the BCP can be very useful for crystal structure analysis, which relies on arbitrary van der Waals radii for estimating penetration. Various criteria proposed for shared- and closed-shell interactions based on electron density topology have been tested for H/Cl/Li bonded complexes. Finally, using natural bond orbital (NBO) analysis it is shown that the D-X bond weakens upon X-bond formation, whether it is ionic (D-Li) or covalent (D-H/D-Cl), and the respective indices such as ionicity or covalent bond order decrease. Clearly, one can think of conservation of a bond order that includes ionic and covalent contributions to both the D-X and X···A bonds, not only for X = H/Cl/Li investigated here but also for any atom involved in intermolecular bonding.
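The correlation analysis described here amounts to a linear fit of interaction energy against electron density at the X···A BCP. The sketch below uses synthetic placeholder numbers, not the paper's data, purely to show the form of the fit.

```python
# Sketch of a linear fit of interaction energy vs. electron density at the BCP.
# The values are synthetic placeholders, not results from the paper.
import numpy as np

rho_bcp = np.array([0.005, 0.010, 0.018, 0.025, 0.032])  # a.u. (illustrative)
e_int = np.array([-1.2, -2.5, -4.6, -6.3, -8.1])          # kcal/mol (illustrative)

slope, intercept = np.polyfit(rho_bcp, e_int, 1)
r = np.corrcoef(rho_bcp, e_int)[0, 1]
print(f"E_int ~ {slope:.1f} * rho + {intercept:.2f}, r = {r:.3f}")
```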
Abstract:
Time-varying linear prediction has been studied in the context of speech signals, in which the auto-regressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least squares minimization is used for the estimation of the model parameters of the system. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals using sparsity constraints. Parameter estimation is posed as an ℓ0-norm minimization problem. The re-weighted ℓ1-norm minimization technique is used to estimate the model parameters. We show that for sparsely excited time-varying systems, this formulation models the underlying system function better than the least squares error minimization approach. Evaluation with synthetic and real speech examples shows that the estimated model parameters track the formant trajectories more closely than the least squares approach.
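The re-weighted ℓ1 technique named here can be sketched generically on a sparse linear model y ≈ Ax. The sketch below is not the paper's speech-specific formulation (the basis expansion of the AR coefficients is omitted); it only shows the re-weighting loop.

```python
# Generic re-weighted l1 minimization sketch on a synthetic sparse system,
# standing in for the time-varying AR parameter estimation described above.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, lam, eps = 40, 80, 0.1, 1e-3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)

w = np.ones(n)
for _ in range(5):                          # re-weighting iterations
    x = cp.Variable(n)
    obj = cp.Minimize(cp.sum_squares(A @ x - y) + lam * cp.norm1(cp.multiply(w, x)))
    cp.Problem(obj).solve()
    w = 1.0 / (np.abs(x.value) + eps)       # small entries get large weights

print("recovered support:", np.flatnonzero(np.abs(x.value) > 1e-2))
```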
Abstract:
In this work, we address the recovery of block-sparse vectors with intra-block correlation, i.e., the recovery of vectors in which the correlated nonzero entries are constrained to lie in a few clusters, from noisy underdetermined linear measurements. Among Bayesian sparse recovery techniques, cluster Sparse Bayesian Learning (SBL) is an efficient tool for block-sparse vector recovery with intra-block correlation. However, this technique uses a heuristic method to estimate the intra-block correlation. In this paper, we propose the Nested SBL (NSBL) algorithm, which we derive using a novel Bayesian formulation that facilitates the use of the monotonically convergent nested Expectation Maximization (EM) and a Kalman filtering based learning framework. Unlike the cluster-SBL algorithm, this formulation leads to closed-form EM updates for estimating the correlation coefficient. We demonstrate the efficacy of the proposed NSBL algorithm using Monte Carlo simulations.
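The Monte Carlo setup described here can be sketched by generating a block-sparse vector whose active blocks carry AR(1) intra-block correlation and observing it through noisy underdetermined measurements. The recovery algorithm (NSBL) itself is not reproduced; all sizes and the correlation value are illustrative assumptions.

```python
# Sketch of the measurement model: block-sparse x with AR(1) correlation inside
# the active blocks, observed as y = A x + noise with m < n.
import numpy as np

rng = np.random.default_rng(2)
n, block, m, r = 60, 6, 25, 0.9        # signal length, block size, measurements, correlation
x = np.zeros(n)
for b in rng.choice(n // block, 2, replace=False):   # two active blocks
    seg = np.zeros(block)
    seg[0] = rng.standard_normal()
    for i in range(1, block):                         # AR(1) within the block
        seg[i] = r * seg[i - 1] + np.sqrt(1 - r**2) * rng.standard_normal()
    x[b * block:(b + 1) * block] = seg

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)
print("measurement vector shape:", y.shape)
```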
Abstract:
High wind poses a number of hazards in different areas such as structural safety, aviation, wind energy (where low wind speed is also a concern) and pollutant transport, to name a few. Therefore, a good prediction tool for wind speed is needed in these areas. Like many other natural processes, the behavior of wind is associated with considerable uncertainties stemming from different sources. Therefore, to develop a reliable prediction tool for wind speed, these uncertainties should be taken into account. In this work, we propose a probabilistic framework for the prediction of wind speed from measured spatio-temporal data. The framework is based on decompositions of the spatio-temporal covariance and simulation using these decompositions. A novel simulation method based on a tensor decomposition is used in this context. The proposed framework is composed of a set of four modules, and the modules have the flexibility to accommodate further modifications. The framework is applied to measured wind speed data in Ireland. Both short- and long-term predictions are addressed.
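The covariance-decomposition-and-simulation idea can be illustrated with a much simpler Karhunen-Loeve-style construction: estimate a covariance from measured records and draw new samples from its eigendecomposition. This is only a stand-in for the paper's tensor-decomposition method, and the data below are synthetic.

```python
# Simplified covariance-decomposition-based simulation of multi-station wind speed.
import numpy as np

rng = np.random.default_rng(3)
T, S = 500, 8                                         # time steps, stations (synthetic)
data = rng.gamma(shape=2.0, scale=3.0, size=(T, S))   # stand-in for measured speeds

mean = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
vals = np.clip(vals, 0.0, None)                       # guard against tiny negative eigenvalues

def simulate(n_samples):
    # Draw samples with the estimated mean and covariance (Gaussian approximation)
    z = rng.standard_normal((n_samples, S))
    return mean + (z * np.sqrt(vals)) @ vecs.T

print(simulate(3))
```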
Abstract:
The performance of prediction models is often based on "abstract metrics" that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application independent and dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefit both data mining researchers and practitioners.
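Scale-independent error measures of the kind contrasted with raw residual errors can be illustrated with MAPE and CVRMSE. The paper defines its own suite of measures, which may differ; the numbers below are made up.

```python
# Two common scale-independent error measures, for illustration only.
import numpy as np

def mape(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((observed - predicted) / observed))

def cvrmse(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / np.mean(observed)

obs = [10.2, 11.5, 9.8, 12.0]   # e.g. daily energy use in kWh (made-up numbers)
pred = [9.9, 11.9, 10.4, 11.6]
print(f"MAPE = {mape(obs, pred):.1f}%, CVRMSE = {cvrmse(obs, pred):.1f}%")
```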
Abstract:
The notion of structure is central to the subject of chemistry. This review traces the development of the idea of crystal structure since the time when a crystal structure could be determined from a three-dimensional diffraction pattern and assesses the feasibility of computationally predicting an unknown crystal structure of a given molecule. Crystal structure prediction is of considerable fundamental and applied importance, and its successful execution is by no means a solved problem. The ease of crystal structure determination today has resulted in the availability of large numbers of crystal structures of higher-energy polymorphs and pseudopolymorphs. These structural libraries lead to the concept of a crystal structure landscape. A crystal structure of a compound may accordingly be taken as a data point in such a landscape.
Abstract:
In this discussion, we show that a static definition of a `bond' is not viable by looking at a few examples of both inter- and intra-molecular hydrogen bonding. This follows from our earlier work (Goswami and Arunan, Phys. Chem. Chem. Phys. 2009, 11, 8974), which showed a practical way to differentiate `hydrogen bonding' from `van der Waals interaction'. We report results from ab initio and atoms in molecules theoretical calculations for a series of Rg···HX complexes (Rg = He/Ne/Ar and X = F/Cl/Br) and ethane-1,2-diol. Results for the Rg···HX/DX complexes show that Rg···DX could have a `deuterium bond' even when Rg···HX is not `hydrogen bonded', according to the practical criterion given by Goswami and Arunan. Results for ethane-1,2-diol show that an `intra-molecular hydrogen bond' can appear during a normal-mode vibration dominated by O···O stretching, though a `bond' is not found in the equilibrium structure. This dynamical `bond' formation may nevertheless be important in ensuring the continuity of electron density across a molecule. In the former case, a vibration `breaks' an existing bond, and in the latter case, a vibration leads to `bond' formation. In both cases, the molecule/complex stays bound irrespective of what happens to this `hydrogen bond'. Both these cases push the borders of the recent IUPAC recommendation on hydrogen bonding (Arunan et al., Pure Appl. Chem. 2011, 83, 1637) and justify the inclusive nature of the definition.
Abstract:
Land surface temperature (LST) is an important variable in climate, hydrologic, ecological, biophysical and biochemical studies (Mildrexler et al., 2011). The most effective way to obtain LST measurements is through satellites. Presently, LST from the moderate resolution imaging spectroradiometer (MODIS) sensor is applied in various fields due to its high spatial and temporal availability over the globe, but it is quite difficult to provide observations under cloudy conditions. This study addresses the prediction of LST under clear and cloudy conditions using microwave vegetation indices (MVIs), elevation, latitude, longitude and Julian day as inputs to an artificial neural network (ANN) model. MVIs can be obtained even under cloudy conditions, since microwave radiation has the ability to penetrate clouds. In this study, daily LST and MVI data for the year 2010 over the Cauvery basin were obtained from the MODIS and advanced microwave scanning radiometer (AMSR-E) sensors of the Aqua satellite, respectively. Separate ANN models were trained and tested for the grid cells for which both LST and MVI were available. The performance of the models was evaluated based on standard evaluation measures. The best performing model was used to predict LST where MVIs were available. Results revealed that predictions of LST using the ANN are in good agreement with the observed values. The ANN approach presented in this study promises to be useful for predicting LST using satellite observations even in cloudy conditions. (C) 2015 The Authors. Published by Elsevier B.V.
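The ANN regression described here can be sketched with a small multilayer perceptron taking the same five inputs. All data below are synthetic stand-ins (not MODIS/AMSR-E observations), and the network size is an arbitrary choice for illustration.

```python
# Minimal ANN sketch: predict LST from MVI, elevation, latitude, longitude, Julian day.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
mvi = rng.uniform(0.1, 0.9, n)
elev = rng.uniform(0, 1500, n)             # m
lat = rng.uniform(10.0, 14.0, n)           # rough Cauvery-basin latitude range (assumed)
lon = rng.uniform(75.0, 80.0, n)
jday = rng.integers(1, 366, n)
# Synthetic LST with a seasonal cycle, an elevation term and an MVI term
lst = (300 + 8 * np.sin(2 * np.pi * jday / 365) - 0.0065 * elev - 5 * mvi
       + rng.normal(0, 1.0, n))

X = np.column_stack([mvi, elev, lat, lon, jday])
X_tr, X_te, y_tr, y_te = train_test_split(X, lst, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```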
Abstract:
Prediction of the queue waiting times of jobs submitted to production parallel batch systems is important to provide overall estimates to users and can also help meta-schedulers make scheduling decisions. In this work, we have developed a framework for predicting ranges of queue waiting times for jobs by employing multi-class classification of similar jobs in history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbor (kNN) method. It then performs a multi-class classification using Support Vector Machines (SVMs) among all the classes of the jobs. The probabilities given by the SVM for the class predicted by kNN and its neighboring classes are used to provide a set of ranges of predicted wait times with probabilities. We have used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment for minimizing the wait times of the jobs. Experiments with different production supercomputer job traces show that our prediction strategies can give correct predictions for about 77-87% of the jobs, and also result in about 12% improved accuracy when compared to the next best existing method. Experiments with our meta-scheduling strategy using different production and synthetic job traces for various system sizes, partitioning schemes and workloads show that the meta-scheduling strategy gives much improved performance compared to existing scheduling policies, reducing the overall average queue waiting time of the jobs by about 47%.
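The two-stage idea (a kNN point estimate followed by SVM probabilities over wait-time ranges) can be sketched as below. The job features, range boundaries and synthetic wait-time model are illustrative assumptions, not the paper's feature set or data.

```python
# Sketch: kNN point estimate of wait time + SVM probabilities over wait-time ranges.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 500
# Hypothetical job features: [requested cores, requested walltime (h), queue load]
X = np.column_stack([rng.integers(1, 256, n),
                     rng.uniform(0.5, 24, n),
                     rng.uniform(0, 1, n)])
wait = 0.02 * X[:, 0] + 0.5 * X[:, 1] + 10 * X[:, 2] + rng.exponential(1.0, n)

bins = [0, 2, 6, 12, np.inf]               # assumed wait-time ranges in hours
labels = np.digitize(wait, bins) - 1       # class index per job

knn = KNeighborsRegressor(n_neighbors=5).fit(X, wait)
svm = SVC(probability=True, random_state=0).fit(X, labels)

job = np.array([[64, 8.0, 0.7]])
print("kNN point estimate (h):", knn.predict(job)[0])
print("SVM range probabilities:", svm.predict_proba(job)[0])
```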
Abstract:
An energy approach within the framework of thermodynamics is used to model the fatigue process in plain concrete. Fatigue crack growth is an irreversible process associated with an irreversible entropy gain. A closed-form expression for the entropy generated during fatigue, in terms of the energy dissipated, is derived using principles of dimensional analysis and self-similarity. An increase in compliance is considered a measure of the damage accumulated during fatigue. The entropy at final fatigue failure is shown to be independent of loading and geometry and is proposed as a material property. A relationship between the energy dissipated and the number of cycles of fatigue loading is obtained. (C) 2015 American Society of Civil Engineers.
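The entropy bookkeeping behind such an approach can be illustrated with the generic relation dS = dW_dissipated / T for an isothermal process; this is not the paper's closed-form expression, and the numbers below are made up.

```python
# Illustrative accumulation of entropy generation from dissipated energy per cycle,
# using the generic relation dS = dW_dissipated / T (assumed isothermal).
temperature = 293.0                                 # K (assumed)
dissipated_per_cycle = [0.8, 0.9, 1.1, 1.5, 2.3]    # J per cycle block (made-up values)

entropy = 0.0
for w_d in dissipated_per_cycle:
    entropy += w_d / temperature                    # J/K
print(f"cumulative entropy generated: {entropy:.4f} J/K")
```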
Abstract:
This paper proposes a probabilistic prediction-based approach for providing Quality of Service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet scheduling and dynamic bandwidth allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while simultaneously keeping the waiting time of other services within tolerable limits. The scheme uses the difference between the probabilities of the average queue length of high-priority packets in the previous and current cycles to determine the average weight required in the current cycle. This offers optimized bandwidth allocation to all the services by avoiding the distribution of excess resources to high-priority services while still guaranteeing service for them. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loads. The results show improved waiting times for high-priority packets while keeping the waiting time and packet loss of other services within tolerable limits. Crown Copyright (C) 2015 Published by Elsevier B.V.
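The weight-update idea can be sketched as adjusting the high-priority queue's bandwidth weight from the cycle-to-cycle change in its estimated queue-occupancy probability. The specific update rule, gain and bounds below are assumptions for illustration, not the paper's algorithm.

```python
# Rough sketch of a prediction-driven bandwidth weight update between scheduling cycles.
def update_weight(prev_weight, p_queue_prev, p_queue_curr, gain=0.5,
                  w_min=0.2, w_max=0.8):
    # Grow the weight when congestion probability rises, shrink it when it falls
    w = prev_weight + gain * (p_queue_curr - p_queue_prev)
    return min(max(w, w_min), w_max)

w = 0.4
for p_prev, p_curr in [(0.30, 0.45), (0.45, 0.50), (0.50, 0.35)]:
    w = update_weight(w, p_prev, p_curr)
    print(f"high-priority weight: {w:.2f}, best-effort share: {1 - w:.2f}")
```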
Abstract:
Numerical simulations were performed of experiments on a cascade of stator blades at three low Reynolds numbers representative of flight conditions. Solutions were assessed by comparing blade surface pressures, and velocity and turbulence intensity along blade normals at several stations along the suction surface and in the wake. At Re = 210,000 and 380,000 the laminar boundary layer over the suction surface separates and reattaches with significant turbulence fluctuations. A new 3-equation transition model, the k-k(L)-omega model, was used to simulate this flow. Predicted locations of the separation bubble, and profiles of velocity and turbulence fluctuations on blade-normal lines at various stations along the blade, were found to be quite close to measurements. Suction surface pressure distributions were not as close at the lower Re. The solution with the standard k-omega SST model showed significant differences in all quantities. At Re = 640,000 transition occurs earlier and it is a turbulent boundary layer that separates near the trailing edge. The solution with the Reynolds stress model was found to be quite close to the experiment in the separated region as well, unlike the k-omega SST solution. Three-dimensional computations were performed at Re = 380,000 and 640,000. In both cases there were no significant differences between the midspan solution from the 3D computations and the 2D solutions. However, the 3D solutions exhibited flow features observed in the experiments: the nearly 2D structure of the flow over most of the span at 380,000 and the spanwise growth of corner vortices from the endwall at 640,000.