Abstract:
In this paper, we present numerical evidence that supports the notion of minimization in the sequence space of proteins for a target conformation. We use the conformations of real proteins in the Protein Data Bank (PDB) and present computationally efficient methods to identify the sequences with minimum energy. We use an edge-weighted connectivity graph for ranking the residue sites with a reduced amino acid alphabet, and then use continuous optimization to obtain the energy-minimizing sequences. Our methods enable the computation of a lower bound as well as a tight upper bound for the energy of a given conformation. We validate our results by using three different inter-residue energy matrices for five proteins from the PDB, and by comparing our energy-minimizing sequences with 80 million diverse sequences that are generated based on different considerations in each case. When we submitted some of our chosen energy-minimizing sequences to the Basic Local Alignment Search Tool (BLAST), we obtained some sequences from the non-redundant protein sequence database that are similar to ours, with E-values of the order of 10^-7. In summary, we conclude that proteins show a trend towards minimizing energy in the sequence space but do not seem to adopt the global energy-minimizing sequence. The reason could be either that the existing energy matrices do not accurately represent the inter-residue interactions in the context of the protein environment, or that Nature does not push the optimization in the sequence space further once the protein is able to perform its function.
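To make the scoring concrete, the following is a minimal illustrative sketch (not the paper's actual pipeline) of how a candidate sequence can be scored against a fixed conformation using a contact map and a 20x20 inter-residue energy matrix; the distance cutoff, coordinates and energy values below are placeholders.

```python
import numpy as np

# Illustrative sketch: score a candidate sequence against a fixed conformation
# using a pairwise contact-energy matrix (e.g. a 20x20 Miyazawa-Jernigan-style
# table).  Coordinates, cutoff and energies are placeholders.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def contact_map(coords, cutoff=6.5):
    """Boolean contact map from C-alpha coordinates (N x 3 array)."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (dist < cutoff) & ~np.eye(len(coords), dtype=bool)

def sequence_energy(sequence, contacts, energy_matrix):
    """Sum pairwise energies over all residue pairs in contact."""
    idx = [AA_INDEX[aa] for aa in sequence]
    e = 0.0
    n = len(sequence)
    for i in range(n):
        for j in range(i + 1, n):
            if contacts[i, j]:
                e += energy_matrix[idx[i], idx[j]]
    return e

# Example usage with random placeholders for coordinates and energies.
rng = np.random.default_rng(0)
coords = rng.normal(size=(50, 3)) * 10.0          # fake C-alpha trace
E = rng.normal(size=(20, 20))
E = (E + E.T) / 2                                 # symmetric toy energy matrix
seq = "".join(rng.choice(list(AMINO_ACIDS), size=50))
print(sequence_energy(seq, contact_map(coords), E))
```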
Abstract:
A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for the numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method requires the construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus' characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673], and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (nested commutators). An advantage of this approach is that the underlying exponential transformation can preserve certain intrinsic structural properties of the solution of the non-linear problem. Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field, an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite-dimensional framework. A limited numerical exploration of the method is provided for two well-known non-linear oscillators: the hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
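For context, the Magnus construction cited above writes the fundamental solution matrix of the linearized, time-varying system dy/dt = A(t) y as a matrix exponential of a series whose higher-order terms are built from nested commutators. A sketch of the first two terms of the standard Magnus expansion (not the paper's specific MTL matrices) is:

```latex
% Fundamental solution matrix of \dot{y} = A(t)\,y via the Magnus expansion
% (first two terms shown; higher-order terms involve deeper nested
% commutators [A(t_i), A(t_j)]).
\Phi(t, t_0) = \exp\!\big(\Omega(t, t_0)\big), \qquad
\Omega(t, t_0) = \int_{t_0}^{t} A(t_1)\,\mathrm{d}t_1
  + \tfrac{1}{2} \int_{t_0}^{t}\!\!\int_{t_0}^{t_1}
    \big[A(t_1),\,A(t_2)\big]\,\mathrm{d}t_2\,\mathrm{d}t_1 + \cdots
```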
Abstract:
Accurate estimates of the water balance are needed in semi-arid and sub-humid tropical regions, where water resources are scarce compared to water demand. Evapotranspiration plays a major role in this context, and the difficulty of quantifying it precisely leads to major uncertainties in groundwater recharge assessment, especially in forested catchments. In this paper, we assess the importance of the deep unsaturated regolith and of water uptake by deep tree roots in the groundwater recharge process by using a lumped conceptual model (COMFORT). The model is calibrated using 5 years of hydrological monitoring of an experimental watershed under dry deciduous forest in South India (Mule Hole watershed). The model was able to simulate the stream discharge as well as the contrasting behaviour of the groundwater table along the hillslope. The water balance simulated for a 32-year climatic time series displayed large year-to-year variability, with dry and wet phases alternating with a period of approximately 14 years. On average, rainfall input was 1090 mm year⁻¹ and evapotranspiration was about 900 mm year⁻¹, of which 100 mm year⁻¹ was taken up from the deep saprolite horizons. The stream flow was 100 mm year⁻¹, while the groundwater underflow was 80 mm year⁻¹. The simulation results suggest that (i) deciduous trees can take up a significant amount of water from the deep regolith, (ii) this uptake, combined with the spatial variability of regolith depth, can account for the variable lag time between drainage events and groundwater rise observed for the different piezometers, and (iii) the water table response to recharge is buffered by the long vertical travel time through the deep vadose zone, which constitutes a major water reservoir. This study stresses the importance of long-term observations for understanding hydrological processes in tropical forested ecosystems.
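As a rough illustration of the lumped-model idea only (this is a toy bucket model, not the COMFORT model and not calibrated to Mule Hole), the sketch below tracks a topsoil store and a deep regolith store with deep root uptake; all capacities and rate parameters are hypothetical.

```python
# Minimal sketch of a lumped "bucket" water-balance model with a topsoil
# store and a deep regolith store.  Store capacities and rate parameters
# are hypothetical placeholders; all quantities are in mm of water.

def step(rain, pet, soil, regolith,
         soil_cap=150.0, perc_rate=0.05, deep_uptake_frac=0.1):
    """Advance one day; return updated stores and the day's fluxes."""
    soil += rain
    runoff = max(0.0, soil - soil_cap)        # saturation excess -> runoff
    soil -= runoff
    et_soil = min(soil, pet)                  # transpiration from topsoil
    soil -= et_soil
    et_deep = deep_uptake_frac * max(0.0, pet - et_soil)   # deep root uptake
    et_deep = min(et_deep, regolith)
    regolith -= et_deep
    perc = perc_rate * soil                   # drainage to the deep regolith
    soil -= perc
    regolith += perc
    recharge = perc_rate * regolith           # slow recharge to groundwater
    regolith -= recharge
    return soil, regolith, dict(runoff=runoff, et=et_soil + et_deep,
                                recharge=recharge)

soil, regolith = 50.0, 500.0
for rain, pet in [(12.0, 4.5), (0.0, 5.0), (30.0, 3.8)]:
    soil, regolith, fluxes = step(rain, pet, soil, regolith)
    print(round(soil, 1), round(regolith, 1), fluxes)
```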
An asymptotic analysis for the coupled dispersion characteristics of a structural acoustic waveguide
Abstract:
Analytical expressions are derived, using asymptotics, for the fluid-structure coupled wavenumbers in a one-dimensional (1-D) structural acoustic waveguide. The coupled dispersion equation of the system is rewritten in the form of the uncoupled dispersion equation with an added term due to the fluid-structure coupling. As a result of this coupling, the previously uncoupled structural and acoustic wavenumbers become coupled structural and acoustic wavenumbers. A fluid-loading parameter ε, defined as the ratio of the mass of the fluid to the mass of the structure per unit area, is introduced which, when set to zero, yields the uncoupled dispersion equation. The coupled wavenumber is then expressed as an asymptotic series in ε. Analytical expressions are found as ε is varied from small to large values. Different asymptotic expansions are used for different frequency ranges, with continuous transitions occurring between them. This systematic derivation makes it possible to continuously track the wavenumber solutions as the fluid-loading parameter is varied from small to large values. Though the asymptotic expansion used is limited to the first-order correction, the results are close to the numerical results. A general trend is that a given wavenumber branch transits from a rigid-walled solution to a pressure-release solution with increasing ε. Also, it is found that at any frequency where two wavenumbers intersect in the uncoupled analysis, there is no longer an intersection in the coupled case; instead, a gap is created at that frequency.
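Schematically, and with illustrative symbols only (the actual dispersion relation is not reproduced in the abstract), the procedure amounts to writing the coupled dispersion relation as the product of the uncoupled structural and acoustic dispersion functions balanced by an ε-weighted coupling term, and expanding the wavenumber as a regular perturbation series in ε:

```latex
% Illustrative perturbation setup: D_s and D_a denote the uncoupled
% structural and acoustic dispersion functions, C the coupling term and
% \epsilon the fluid-loading parameter.
D_s(k)\,D_a(k) = \epsilon\,C(k,\omega), \qquad
k = k_0 + \epsilon k_1 + \epsilon^2 k_2 + \cdots, \qquad
k_1 = \frac{C(k_0,\omega)}
           {\left.\dfrac{\mathrm{d}}{\mathrm{d}k}\big[D_s(k)\,D_a(k)\big]\right|_{k=k_0}}
```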
Abstract:
The coupled wavenumbers of a fluid-filled flexible cylindrical shell vibrating in the axisymmetric mode are studied. The coupled dispersion equation of the system is rewritten in the form of the uncoupled dispersion equations of the structure and the acoustic fluid, with an added fluid-loading term involving a parameter ε due to the coupling. Using the smallness of Poisson's ratio (ν), a double asymptotic expansion involving ε and ν² is substituted in this equation. Analytical expressions are derived for the coupled wavenumbers (for large and small values of ε). Different asymptotic expansions are used for different frequency ranges, with continuous transitions occurring between them. The wavenumber solutions are continuously tracked as ε varies from small to large values. A general trend observed is that a given wavenumber branch transits from a rigid-walled solution to a pressure-release solution with increasing ε. Also, it is found that at any frequency where two wavenumbers intersect in the uncoupled analysis, there is no longer an intersection in the coupled case; instead, a gap is created at that frequency. Only the axisymmetric mode is considered; however, the method can be extended to the higher order modes.
Abstract:
Non-standard finite difference methods (NSFDM), introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994], are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, through a novel use of the decoupled equations in characteristic variables. In the second part of the paper, the NSFDM is studied for its efficacy when applied to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced an NSFDM in conservation form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy for this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes, introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276], are based on relaxation systems that replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with better accuracy than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
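To illustrate the composite-scheme strategy itself (using the classical Lax-Wendroff / Lax-Friedrichs pair rather than the NSFDM and relaxation filter of the paper), here is a minimal sketch for the inviscid Burgers equation: an accurate scheme is used on most steps, and a dissipative scheme is applied periodically as a filter.

```python
import numpy as np

# Sketch of the composite-scheme idea (Liska & Wendroff): advance with an
# accurate scheme for several steps, then apply one step of a dissipative
# scheme as a filter.  Here the pair is Lax-Wendroff / Lax-Friedrichs for
# u_t + (u^2/2)_x = 0 with periodic boundaries; lam = dt/dx.

def flux(u):
    return 0.5 * u**2

def lax_friedrichs(u, lam):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * lam * (flux(up) - flux(um))

def lax_wendroff(u, lam):
    up, um = np.roll(u, -1), np.roll(u, 1)
    # half-step states at the right and left cell interfaces
    uhp = 0.5 * (u + up) - 0.5 * lam * (flux(up) - flux(u))
    uhm = 0.5 * (um + u) - 0.5 * lam * (flux(u) - flux(um))
    return u - lam * (flux(uhp) - flux(uhm))

def composite(u, lam, steps, filter_every=4):
    for n in range(1, steps + 1):
        u = lax_friedrichs(u, lam) if n % filter_every == 0 else lax_wendroff(u, lam)
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.where(x < 0.5, 1.0, 0.0)         # Riemann data forming a shock
u = composite(u0, lam=0.4, steps=100)    # lam chosen within the CFL limit
print(u.min(), u.max())
```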
Abstract:
In this paper we analyze a deploy-and-search strategy for multi-agent systems. Mobile agents equipped with sensors carry out a search operation in the search space. The lack of information about the search space is modeled as an uncertainty density distribution over the space, and is assumed to be known to the agents a priori. In each step, the agents deploy themselves in an optimal way so as to maximize the per-step reduction in the uncertainty density. We analyze the proposed strategy for convergence and spatial distributedness. The control law moving the agents is analyzed for stability and convergence using LaSalle's invariance principle, and for spatial distributedness under a few realistic constraints on the control input, such as constant speed, a limit on the maximum speed, and sensor range limits. The simulation experiments show that the strategy successfully reduces the average uncertainty density below the required level.
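A minimal sketch of a deploy-and-search loop of this flavour (a generic uncertainty-weighted centroidal update with a fixed per-step reduction factor, not the paper's specific control law, constraints or reduction model) is:

```python
import numpy as np

# Generic coverage-style sketch: agents repeatedly move toward the
# uncertainty-weighted centroids of their Voronoi cells, and the uncertainty
# density is reduced within sensor range after each deployment.  Grid size,
# sensor range and reduction factor are hypothetical.

rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 60),
                            np.linspace(0, 1, 60)), axis=-1).reshape(-1, 2)
density = rng.random(len(grid))            # a priori uncertainty density
agents = rng.random((5, 2))                # initial agent positions
sensor_range, reduction = 0.15, 0.5

for step in range(30):
    d = np.linalg.norm(grid[:, None, :] - agents[None, :, :], axis=-1)
    owner = d.argmin(axis=1)               # Voronoi partition of grid points
    for i in range(len(agents)):
        cell, w = grid[owner == i], density[owner == i]
        if w.sum() > 0:                    # move toward the weighted centroid
            agents[i] = (cell * w[:, None]).sum(axis=0) / w.sum()
    d_new = np.linalg.norm(grid[:, None, :] - agents[None, :, :], axis=-1)
    covered = d_new.min(axis=1) < sensor_range
    density[covered] *= (1.0 - reduction)  # per-step uncertainty reduction

print("remaining average uncertainty:", density.mean())
```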
Abstract:
In this paper we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we design a novel auction which we call the OPT (optimal) auction. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality for the advertisers. We show that the OPT mechanism is superior to two of the most commonly used mechanisms for sponsored search, namely (1) GSP (Generalized Second Price) and (2) VCG (Vickrey-Clarke-Groves). We then show an important revenue-equivalence result: the expected revenue earned by the search engine is the same for all three mechanisms, provided the advertisers are symmetric and the number of sponsored slots is strictly less than the number of advertisers.
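For reference, the textbook GSP and VCG payment rules for a slot auction with known click-through rates, against which the OPT mechanism is compared, can be sketched as follows (the OPT mechanism itself is not reproduced here; the bids and CTRs are illustrative):

```python
# Textbook per-slot payments under GSP and VCG for a sponsored-search slot
# auction with decreasing click-through rates (CTRs).  Bidders are sorted by
# bid; payments are expected amounts (CTR times price per click).

def gsp_payments(bids, ctrs):
    """Expected payment of each slot winner: slot CTR times next-highest bid."""
    b = sorted(bids, reverse=True)
    return [ctrs[i] * (b[i + 1] if i + 1 < len(b) else 0.0)
            for i in range(len(ctrs))]

def vcg_payments(bids, ctrs):
    """Expected VCG payment: externality imposed on lower-ranked bidders."""
    b = sorted(bids, reverse=True) + [0.0]
    a = list(ctrs) + [0.0]
    K = len(ctrs)
    return [sum((a[j] - a[j + 1]) * b[j + 1] for j in range(i, K))
            for i in range(K)]

bids = [10.0, 8.0, 5.0, 2.0]     # advertisers' per-click bids (illustrative)
ctrs = [0.30, 0.20, 0.10]        # slot CTRs, decreasing
print(gsp_payments(bids, ctrs))  # [2.4, 1.0, 0.2]
print(vcg_payments(bids, ctrs))  # [1.5, 0.7, 0.2]  (<= GSP slot-by-slot here)
```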
Abstract:
The keyword-based search technique suffers from the problem of synonymic and polysemic queries. Current approaches address only the problem of synonymic queries, in which different queries may express the same information requirement. The problem of polysemic queries, i.e., the same query having different intentions, remains unaddressed. In this paper, we propose the notion of intent clusters, the members of which have the same intention. We develop a clustering algorithm that uses the user-session information in query logs, in addition to query-URL entries, to identify clusters of queries having the same intention. The proposed approach has been studied through case examples from actual log data from AOL, and the clustering algorithm is shown to be successful in discerning user intentions.
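A minimal sketch of the underlying idea, clustering queries that share clicked URLs or user sessions via a union-find pass, is given below; the record format, similarity rule and data are simplified placeholders, not the paper's algorithm or the AOL log format.

```python
from collections import defaultdict
from itertools import combinations

# Toy sketch: group queries that co-occur on a clicked URL or within a user
# session into intent clusters using union-find.  Records mimic
# (session, query, clicked URL) triples from a query log.

log = [
    ("s1", "jaguar price", "cars.example.com"),
    ("s1", "jaguar dealer", "cars.example.com"),
    ("s2", "jaguar habitat", "wildlife.example.org"),
    ("s2", "big cats", "wildlife.example.org"),
    ("s3", "jaguar price", "cars.example.com"),
]

def cluster_queries(log):
    by_url, by_session = defaultdict(set), defaultdict(set)
    for session, query, url in log:
        by_url[url].add(query)
        by_session[session].add(query)
    parent = {q: q for _, q, _ in log}
    def find(q):
        while parent[q] != q:
            parent[q] = parent[parent[q]]
            q = parent[q]
        return q
    def union(a, b):
        parent[find(a)] = find(b)
    for group in list(by_url.values()) + list(by_session.values()):
        for a, b in combinations(group, 2):
            union(a, b)
    clusters = defaultdict(set)
    for q in parent:
        clusters[find(q)].add(q)
    return list(clusters.values())

print(cluster_queries(log))   # two clusters: car-related vs animal-related
```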
Abstract:
Search engine log files have been used to gather direct user feedback on the relevance of the documents presented on the results page. Typically, the relative position of the clicks gathered from the log files is used as a proxy for direct user feedback. In this paper we identify reasons why the relative position of clicks is incomplete for deciphering user preferences. Hence, we propose using the time spent by the user in reading a document as an indicator of user preference for that document with respect to a query. We also identify the issues involved in using this time measure and propose means to address them.
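A minimal sketch of how dwell time can be estimated from a click log, using the gap to the next click in the same session, is shown below; the records are illustrative, and note that the last click in a session has no following click, which is one of the measurement issues referred to above.

```python
# Toy sketch: estimate "time spent reading" a clicked result as the gap to
# the next click in the same session.  Records and timestamps are illustrative.

clicks = [  # (session, query, clicked_url, timestamp in seconds)
    ("s1", "q1", "doc_a", 0),
    ("s1", "q1", "doc_b", 45),
    ("s1", "q1", "doc_c", 300),
]

def dwell_times(clicks):
    times = {}
    for (s1, q1, url, t1), (s2, _, _, t2) in zip(clicks, clicks[1:]):
        if s1 == s2:
            times[(q1, url)] = t2 - t1    # time until the next click
    return times

print(dwell_times(clicks))   # {('q1', 'doc_a'): 45, ('q1', 'doc_b'): 255}
```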
Abstract:
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells, and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells while taking limited endurance, sensor range, and communication range constraints into account. Due to limited endurance, the UAVs need to return to a base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes the endurance time constraint into account and uses game-theoretic strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the UAV can return to one of the available bases. A set of paths is formed using these cells, from which the game-theoretic strategies select a path that yields the maximum uncertainty reduction. We explore non-cooperative Nash, cooperative, and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations show the superiority of the game-theoretic strategies over a greedy strategy for different look-ahead path lengths. Among the game-theoretic strategies, the non-cooperative Nash and cooperative strategies perform similarly in the ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information of the two UAVs differs. We also propose a heuristic based on partitioning the search space into sectors to reduce the computational overhead without performance degradation.
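A toy sketch of the path-feasibility and uncertainty-reduction bookkeeping is shown below, with a greedy stand-in for the game-theoretic path selection, a square grid with hex-like moves, and hypothetical costs and reduction factors.

```python
# Toy sketch: enumerate fixed-length look-ahead paths over neighbouring cells,
# keep only those from which the UAV can still reach a base within its
# remaining endurance, and pick the path with the largest uncertainty
# reduction.  Grid, distances and the reduction factor are hypothetical.

def neighbours(cell, grid):
    r, c = cell
    cand = [(r + dr, c + dc) for dr, dc in
            [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]]  # hex-like
    return [n for n in cand if n in grid]

def feasible_paths(start, grid, steps, endurance, bases):
    paths = [[start]]
    for _ in range(steps):
        paths = [p + [n] for p in paths for n in neighbours(p[-1], grid)]
    def dist_to_base(cell):
        # crude stand-in distance estimate; a real planner would use hex distance
        return min(abs(cell[0] - b[0]) + abs(cell[1] - b[1]) for b in bases)
    return [p for p in paths
            if (len(p) - 1) + dist_to_base(p[-1]) <= endurance]

def best_path(paths, uncertainty, reduction=0.6):
    def gain(path):
        return sum(uncertainty[c] * reduction for c in set(path[1:]))
    return max(paths, key=gain)

grid = {(r, c) for r in range(5) for c in range(5)}
uncertainty = {cell: 1.0 for cell in grid}
uncertainty[(2, 1)] = 5.0                      # a high-uncertainty cell
paths = feasible_paths((0, 0), grid, steps=3, endurance=10, bases=[(0, 0)])
print(best_path(paths, uncertainty))
```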
Abstract:
Grover's database search algorithm, although discovered in the context of quantum computation, can be implemented using any physical system that allows superposition of states. A physical realization of this algorithm is described using coupled simple harmonic oscillators, which can be exactly solved in both classical and quantum domains. Classical wave algorithms are far more stable against decoherence compared to their quantum counterparts. In addition to providing convenient demonstration models, they may have a role in practical situations, such as catalysis.
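For reference, the standard Grover iteration (oracle sign flip followed by inversion about the mean) can be simulated classically on the amplitude vector; this sketch shows the textbook algorithm, not the coupled-oscillator realization described in the abstract.

```python
import math

# Classical simulation of the standard Grover iteration on the amplitude
# vector: oracle phase flip of the marked item, then inversion about the mean.

def grover(n_items, marked, iterations=None):
    amps = [1.0 / math.sqrt(n_items)] * n_items        # uniform superposition
    if iterations is None:
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        amps[marked] = -amps[marked]                   # oracle: flip the sign
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]            # inversion about mean
    return [a * a for a in amps]                       # success probabilities

probs = grover(n_items=64, marked=17)
print(round(probs[17], 3))      # close to 1 after ~(pi/4)*sqrt(N) iterations
```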
Abstract:
Four algorithms, all variants of Simultaneous Perturbation Stochastic Approximation (SPSA), are proposed. The original one-measurement SPSA uses an estimate of the gradient of the objective function L containing an additional bias term not seen in two-measurement SPSA. As a result, the asymptotic covariance matrix of the iterate convergence process has a bias term. We propose a one-measurement algorithm that eliminates this bias and has asymptotic convergence properties that allow easier comparison with two-measurement SPSA. Under certain conditions, the algorithm outperforms both forms of SPSA, with the only overhead being the storage of a single measurement. We also propose a similar algorithm that uses perturbations obtained from normalized Hadamard matrices. The convergence with probability one (w.p. 1) of both algorithms is established. We extend measurement reuse to design two second-order SPSA algorithms and sketch their convergence analysis. Finally, we present simulation results on an illustrative minimization problem.
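For context on the comparison, the standard one- and two-measurement SPSA gradient estimates (Spall's form) can be sketched on a toy quadratic objective; the bias-eliminating and Hadamard-perturbation variants proposed in the paper are not reproduced here.

```python
import numpy as np

# Standard SPSA gradient estimates on a noisy quadratic: a random +/-1
# (Bernoulli) perturbation gives either a two-measurement central-difference
# estimate or the cheaper one-measurement estimate.

rng = np.random.default_rng(0)

def L(theta):                                   # toy noisy objective
    return float(np.sum(theta**2)) + rng.normal(scale=0.01)

def spsa_gradient(theta, c, measurements=2):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1
    if measurements == 2:
        diff = L(theta + c * delta) - L(theta - c * delta)
        return diff / (2.0 * c * delta)
    return L(theta + c * delta) / (c * delta)           # one-measurement form

theta = np.array([2.0, -1.5, 0.5])
for k in range(1, 501):
    a_k, c_k = 0.1 / k**0.602, 0.1 / k**0.101   # standard gain-sequence decay
    theta = theta - a_k * spsa_gradient(theta, c_k, measurements=2)
print(np.round(theta, 3))                       # iterate approaches the origin
```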
Abstract:
Owing to high evolutionary divergence, it is not always possible to identify distantly related protein domains by sequence search techniques. Intermediate sequences possess sequence features of more than one protein and facilitate the detection of remotely related proteins. We have recently demonstrated the use of Cascade PSI-BLAST, in which we perform PSI-BLAST for many 'generations', initiating searches from new homologues as well. Such rigorous propagation through generations of PSI-BLAST effectively exploits the role of intermediates in detecting distant similarities between proteins. This approach has been tested on a large number of folds, and its performance in detecting superfamily-level relationships is approximately 35% better than that of simple PSI-BLAST searches. We present a web server for this search method that permits users to perform Cascade PSI-BLAST searches against the Pfam, SCOP and SwissProt databases. The URL for this server is http://crick.mbu.iisc.ernet.in/~CASCADE/CascadeBlast.html.
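A schematic driver for the cascading bookkeeping is sketched below; run_psiblast is a hypothetical placeholder for an actual PSI-BLAST invocation, and only the generation-by-generation propagation through newly found homologues is shown.

```python
# Schematic driver for a cascaded, multi-generation PSI-BLAST search: every
# new homologue found in one generation seeds a fresh search in the next.
# `run_psiblast` is a hypothetical placeholder that would wrap a real
# PSI-BLAST invocation; only the cascading bookkeeping is sketched here.

def run_psiblast(query_id, database):
    """Placeholder: return the set of hit identifiers for one search."""
    raise NotImplementedError("wrap a real PSI-BLAST invocation here")

def cascade_search(seed_id, database, generations=3):
    known = {seed_id}
    frontier = {seed_id}
    for gen in range(generations):
        new_hits = set()
        for query in frontier:
            new_hits |= run_psiblast(query, database) - known
        if not new_hits:
            break                       # converged: no new homologues found
        known |= new_hits
        frontier = new_hits             # next generation starts from new hits
    return known
```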
Abstract:
In this paper, we present self-assessment schemes (SAS) for multiple agents performing a search mission on an unknown terrain. The agents are subject to limited communication and sensor ranges. The agents communicate and coordinate with their neighbours to arrive at route decisions. The self-assessment schemes proposed here have very low communication and computational overhead. The SAS also has attractive features such as scalability to a large number of agents and fast decision-making. SAS can be used with partial or complete information-sharing schemes during the search mission. We validate the performance of SAS using simulations on a large search space with 100 agents, with different information structures and self-assessment schemes. We also compare the results obtained using SAS with those of a previously proposed negotiation scheme. The simulation results show that SAS is scalable to a large number of agents and can perform as well as the negotiation schemes with reduced communication requirements (almost 20% of that required for negotiation).