924 results for Computational methods


Relevance: 30.00%

Abstract:

Current scientific research is characterized by increasing specialization, with knowledge accumulating at high speed due to parallel advances in a multitude of sub-disciplines. Recent estimates suggest that human knowledge doubles every two to three years, and with the advances in information and communication technologies, this wide body of scientific knowledge is available to anyone, anywhere, anytime. This may also be referred to as ambient intelligence: an environment characterized by plentiful, readily available knowledge. The bottleneck in utilizing this knowledge for specific applications is not accessing the information but assimilating it and transforming it to suit the needs of a specific application. The increasingly specialized areas of scientific research often share the common goal of converting data into insight that allows the identification of solutions to scientific problems. Because of this common goal, there are strong parallels between different areas of application that can be exploited to cross-fertilize different disciplines. For example, the same fundamental statistical methods are used extensively in speech and language processing, in materials science, in visual processing and in biomedicine. Each sub-discipline has developed its own specialized methodologies that make these statistical methods successful for the given application. The unification of specialized areas is possible because many different problems share strong analogies, making the theories developed for one problem applicable to other areas of research. The goal of this paper is to demonstrate the utility of merging two disparate areas of application to advance scientific research. The merging process requires cross-disciplinary collaboration to allow maximal exploitation of advances in one sub-discipline for the benefit of another. We demonstrate this general concept with the specific example of merging language technologies and computational biology.
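To make the analogy concrete: a hidden Markov model decoded with the Viterbi algorithm is a workhorse in both speech/language processing and computational biology. The sketch below is a minimal illustration with made-up toy parameters, not taken from the paper; it labels a short DNA string as coding/non-coding, and swapping the state and symbol alphabets would turn the very same function into a part-of-speech tagger.

```python
import numpy as np

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for a discrete HMM (log domain)."""
    V = [{s: np.log(start_p[s]) + np.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] + np.log(trans_p[p][s]) + np.log(emit_p[s][o]), p)
                for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# The same decoder labels DNA regions (computational biology); with word/tag
# alphabets it would tag text (language processing). Toy parameters only.
states = ("coding", "noncoding")
dna = list("ATGCGC")
print(viterbi(dna, states,
              {"coding": 0.5, "noncoding": 0.5},
              {"coding": {"coding": 0.9, "noncoding": 0.1},
               "noncoding": {"coding": 0.1, "noncoding": 0.9}},
              {"coding": {b: p for b, p in zip("ACGT", (0.2, 0.3, 0.3, 0.2))},
               "noncoding": {b: p for b, p in zip("ACGT", (0.25,) * 4)}}))
```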

Relevance: 30.00%

Abstract:

In this paper we develop methods to compute maps from differential equations. We consider two examples: the harmonic oscillator and Duffing's equation. First we convert these equations to a canonical form; this step is slightly nontrivial for Duffing's equation. We then show a method to extend these differential equations; in the second case, symbolic algebra needs to be used. Once the extensions are accomplished, various maps are generated, with Poincaré sections arising as a special case of such generated maps. Other applications are also discussed.
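The paper's construction is symbolic; as a purely numerical illustration of the idea that a map can be generated from a differential equation, the sketch below samples the flow of a forced Duffing oscillator once per forcing period, which yields the stroboscopic (Poincaré) map. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced Duffing equation: x'' + d*x' + a*x + b*x**3 = g*cos(w*t)
d, a, b, g, w = 0.3, -1.0, 1.0, 0.37, 1.2   # illustrative values

def duffing(t, y):
    x, v = y
    return [v, -d * v - a * x - b * x**3 + g * np.cos(w * t)]

T = 2 * np.pi / w                 # forcing period
y = [0.1, 0.0]
points = []
for k in range(2000):             # one map iterate per forcing period
    sol = solve_ivp(duffing, (k * T, (k + 1) * T), y, rtol=1e-9, atol=1e-9)
    y = sol.y[:, -1]
    if k > 100:                   # discard the transient
        points.append(y.copy())
points = np.array(points)         # Poincare section in the (x, v) plane
print(points[:5])
```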

Relevance: 30.00%

Abstract:

In this paper we study constrained maximum entropy and minimum divergence optimization problems, in cases where integer-valued sufficient statistics exist, using tools from computational commutative algebra. We show that the estimation of parametric statistical models in this setting can be transformed into solving a system of polynomial equations. We give an implicit description of maximum entropy models by embedding them in algebraic varieties, and we give a Gröbner basis method to compute this embedding. For minimum KL-divergence models, we show that implicitization preserves specialization of the prior distribution. This result leads to a Gröbner basis method for embedding minimum KL-divergence models in algebraic varieties.
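As a minimal illustration of the implicitization step (a toy example, not the paper's construction), the sketch below uses SymPy to eliminate the parameters of the 2x2 independence model, the simplest maximum entropy model with integer-valued sufficient statistics, recovering its implicit equations via a lexicographic Gröbner basis.

```python
from sympy import symbols, groebner

s, t, p11, p12, p21, p22 = symbols('s t p11 p12 p21 p22')

# Parametrization of the 2x2 independence (maximum entropy) model
ideal = [p11 - s*t,
         p12 - s*(1 - t),
         p21 - (1 - s)*t,
         p22 - (1 - s)*(1 - t)]

# Lex order with s, t first eliminates the parameters, leaving
# implicit equations in the p_ij alone.
G = groebner(ideal, s, t, p11, p12, p21, p22, order='lex')
implicit = [g for g in G.exprs if not g.has(s, t)]
print(implicit)   # contains p11*p22 - p12*p21 and the normalization relation
```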

Relevance: 30.00%

Abstract:

In recent times, computational algorithms inspired by biological processes and evolution have gained much popularity for solving science and engineering problems. These algorithms are broadly classified into evolutionary computation and swarm intelligence algorithms, derived by analogy with natural evolution and biological activity. They include genetic algorithms, genetic programming, differential evolution, particle swarm optimization, ant colony optimization, artificial neural networks, etc. Being random-search techniques, the algorithms use heuristics to guide the search towards an optimal solution and to speed up convergence to global optima. These bio-inspired methods have several attractive features and advantages compared to conventional optimization solvers; in particular, they allow simulation and optimization to be carried out in the same environment, which helps in solving real-world problems that are hard to define in simple expressions. Biologically inspired methods have provided novel ways of solving practical problems in traffic routing, networking, games, industry, robotics, economics, mechanical, chemical, electrical, civil and water resources engineering, and other fields. This article discusses the key features and development of bio-inspired computational algorithms, and their scope for application in science and engineering.
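As a minimal illustration of one such method, here is a bare-bones particle swarm optimization sketch; the inertia and acceleration coefficients are common textbook choices, not values prescribed by this article.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (illustrative parameter choices)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + pull towards personal best + pull towards global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

print(pso(lambda z: np.sum(z**2), dim=5))   # sphere-function sanity check
```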

Relevance: 30.00%

Abstract:

Using generalized gradient approximation (GGA) and meta-GGA density functional methods, the structures, binding energies and harmonic vibrational frequencies of the clusters O4+, O6+, O8+ and O10+ have been calculated. The stable structures of O4+, O6+, O8+ and O10+ have point groups D2h, D3h, D4h and D5h, optimized on the quartet, sextet, octet and dectet potential energy surfaces, respectively. Rectangular (D2h) O4+ is found to be more stable than the trans-planar (C2h) form on the quartet potential energy surface. The cyclic (D3h) structure of the O6+ cluster ion is calculated to be more stable than the other structures, and the binding energy (B.E.) of cyclic O6+ is in good agreement with experimental measurement. The zero-point-corrected B.E. of O8+ with D4h symmetry on the octet potential energy surface and of O10+ with D5h symmetry on the dectet potential energy surface are also in good agreement with experimental values. The B.E. of O4+ is close to the experimental value when the single-point energy is calculated with the Brueckner coupled-cluster method BD(T).
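Calculations of this kind can be sketched with an open-source quantum chemistry package; the snippet below runs an unrestricted GGA calculation on a rectangular O4+ guess geometry with PySCF. The geometry, basis set, and functional are illustrative placeholders, not the ones used in the paper.

```python
from pyscf import gto, dft

# Rectangular (D2h-like) O4+ guess geometry in angstrom -- illustrative only,
# not the optimized structure reported in the paper.
mol = gto.M(
    atom="""O 0.0  0.575  1.00
            O 0.0 -0.575  1.00
            O 0.0  0.575 -1.00
            O 0.0 -0.575 -1.00""",
    charge=1,
    spin=3,            # quartet state: N_alpha - N_beta = 3
    basis="cc-pvdz",
)

mf = dft.UKS(mol)
mf.xc = "pbe"          # a GGA functional; the paper compares GGA and meta-GGA
e_tot = mf.kernel()
print("UKS/PBE total energy (Hartree):", e_tot)
```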

Relevance: 30.00%

Abstract:

Local heterogeneity is ubiquitous in natural aqueous systems. It can be caused locally by external biomolecular subsystems such as proteins, DNA, micelles and reverse micelles, and nanoscopic materials, but it can also be intrinsic to the thermodynamic nature of the aqueous solution itself (as in binary mixtures or at the gas-liquid interface). The altered dynamics of water in the presence of such diverse surfaces has attracted considerable attention in recent years. As these interfaces are quite narrow, only a few molecular layers thick, they are hard to study by conventional methods. The recent development of two-dimensional infrared (2D-IR) spectroscopy allows us to estimate the length and time scales of such dynamics fairly accurately. In this work, we present a series of studies employing 2D-IR spectroscopy to investigate (i) the heterogeneous dynamics of water inside reverse micelles of varying sizes, (ii) supercritical water near the Widom line, which is known to exhibit pronounced density fluctuations, and (iii) the collective and local polarization fluctuations of water molecules in the presence of several different proteins. The spatio-temporal correlation of confined water molecules inside reverse micelles of varying sizes is well captured through the spectral diffusion of the corresponding 2D-IR spectra. For supercritical water, too, we observe a strong signature of dynamic heterogeneity in the elongated shape of the 2D-IR spectra; in this case the relaxation is ultrafast, and we find remarkable agreement between the different tools employed to study the relaxation of density heterogeneity. For aqueous protein solutions, the calculated dielectric constant consistently shows a noticeable increase over that of neat water. However, the 'effective' dielectric constant of successive layers varies significantly, with the layer adjacent to the protein having a much lower value; relaxation is also slowest at the surface. We find that the dielectric constant reaches the bulk value at distances beyond 3 nm from the surface of the protein.
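Spectral diffusion in 2D-IR is commonly quantified through the frequency fluctuation correlation function (FFCF). A minimal sketch, assuming one already has a trajectory of instantaneous vibrational frequencies from simulation (the toy signal below merely mimics such a trajectory):

```python
import numpy as np

def ffcf(freqs):
    """Frequency fluctuation correlation function C(t) = <dw(0) dw(t)>,
    normalized so C(0) = 1. `freqs` is a 1-D trajectory of instantaneous
    vibrational frequencies (e.g. from a simulation)."""
    dw = freqs - freqs.mean()
    n = len(dw)
    c = np.correlate(dw, dw, mode="full")[n - 1:] / np.arange(n, 0, -1)
    return c / c[0]

# toy trajectory: an exponentially correlated (Ornstein-Uhlenbeck-like) signal
rng = np.random.default_rng(1)
x = np.zeros(5000)
for i in range(1, len(x)):
    x[i] = 0.99 * x[i - 1] + rng.normal()
c = ffcf(x)
print(c[:5])   # the decay of C(t) tracks the loss of frequency memory
```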

Relevance: 30.00%

Abstract:

The Variational Asymptotic Method (VAM) is used to model a coupled nonlinear electromechanical problem with applications in aircraft and Micro Aerial Vehicle (MAV) development. VAM coupled with geometrically exact kinematics forms a powerful tool for analyzing complex nonlinear phenomena, as shown previously in the literature [3-7] for challenging problems such as initially twisted helicopter rotor blades, matrix crack propagation in composites, hyperelastic plates and various multi-physics problems. The problem consists of the design and analysis of a piezocomposite laminate to which electrical voltages are applied, inducing direct and planar distributed shear stresses and strains in the structure. The deformations are large, and conventional beam theories are inappropriate for the analysis. The behavior of an elastic body is completely characterized by its energy, which must be integrated over the cross-sectional area to obtain the 1-D behavior, as is typical in a beam analysis. VAM can be used to approximate the 3-D strain energy as closely as desired. To perform this simplification, VAM uses the thickness-to-width ratio, the width-to-length ratio, the width multiplied by the initial twist, and the strain as small parameters embedded in the problem definition, providing a way to approach the exact solution asymptotically. In this work, the above electromechanical problem is modeled using VAM, which breaks the 3-D elasticity problem into two parts: a 2-D nonlinear cross-sectional analysis and a 1-D nonlinear analysis along the reference curve. The recovery relations obtained as a by-product of the cross-sectional analysis are then used to obtain 3-D stress, displacement and velocity contours. The piezocomposite laminate chosen for the initial phase of computational modeling is made of commercially available Macro Fiber Composites (MFCs) stacked in an arbitrary lay-up and actuated by applied electrical voltages. The closed-form expressions for sectional forces and moments obtained from the cross-sectional analysis exhibit the electromechanical coupling and the relative contribution of the electric field in individual layers of the laminate. The spatial and temporal constitutive laws obtained from the cross-sectional analysis are substituted into the 1-D fully intrinsic, geometrically exact equilibrium equations of motion and the 1-D intrinsic kinematical equations, which are solved for all 1-D generalized variables as functions of time and of the coordinate x1 along the reference curve.

Relevance: 30.00%

Abstract:

We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial: workload varies widely, and the large number of system parameters rules out a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimal staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints on queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving it. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent on the Lagrange multipliers. Both algorithms are online and update the staffing levels incrementally. Further, they involve a certain generalized smooth projection operator, which is essential for projecting the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement in proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. In our experiments, both algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with their computational advantage, makes our algorithms well suited for adaptive labor staffing in real-life service systems.
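A minimal sketch of the first-order idea: an SPSA gradient estimate needs only two (noisy) cost evaluations per iteration, independent of the parameter dimension. The cost function, bounds and step-size constants below are placeholders, and plain rounding stands in for the paper's generalized smooth projection operator.

```python
import numpy as np

def spsa_minimize(cost, theta0, lo, hi, iters=500, a=0.5, c=1.0, seed=0):
    """First-order SPSA over a discrete (integer) parameter set.

    `cost(levels)` should return a noisy single-stage cost estimate for the
    integer staffing vector `levels` (e.g. from a simulation run)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak, ck = a / k, c / k ** 0.25          # standard decaying step sizes
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher
        # Two-sided simultaneous perturbation: two simulations per iteration,
        # regardless of the number of parameters.
        y_plus = cost(np.clip(np.rint(theta + ck * delta), lo, hi).astype(int))
        y_minus = cost(np.clip(np.rint(theta - ck * delta), lo, hi).astype(int))
        grad = (y_plus - y_minus) / (2.0 * ck * delta)
        theta = np.clip(theta - ak * grad, lo, hi)
    # plain rounding stands in for the paper's smooth projection operator
    return np.rint(theta).astype(int)

# toy example: quadratic cost around an unknown "ideal" staffing vector
target = np.array([12, 7, 23])
noisy = lambda s: float(np.sum((s - target) ** 2)) + np.random.normal(0, 1)
print(spsa_minimize(noisy, theta0=[20, 20, 20], lo=1, hi=50))
```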

Relevance: 30.00%

Abstract:

A reliable and efficient a posteriori error estimator is derived for a class of discontinuous Galerkin (DG) methods for the Signorini problem. A common property shared by many DG methods leads to a unified error analysis with the help of a constraint-preserving enriching map. The error estimator of the DG methods is comparable to that of conforming methods. Numerical experiments illustrate the performance of the error estimator.

Relevance: 30.00%

Abstract:

The rapid evolution of nanotechnology calls for an understanding of the global response of nanoscale systems in terms of atomic interactions, and hence necessitates novel, sophisticated, and physically based approaches to bridge the gaps between different length and time scales. In this paper, we propose a group of statistical thermodynamics methods for simulating nanoscale systems under quasi-static loading at finite temperature: the molecular statistical thermodynamics (MST) method, the cluster statistical thermodynamics (CST) method, and the hybrid molecular/cluster statistical thermodynamics (HMCST) method. These methods, by treating atoms simultaneously as oscillators and particles, and also as clusters, combine different spatial and temporal scales in a unified framework. One appealing feature is their "seamlessness": all regions, whether composed of atoms or of clusters, share the same underlying atomistic model, so ghost forces are avoided in the simulation. Moreover, compared with conventional MD simulations, their high computational efficiency is very attractive, as demonstrated by simulations of uniaxial compression and nanoindentation.
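The common ingredient of these methods, treating atoms as harmonic oscillators, can be sketched via the oscillator free energy built from local Hessian eigenfrequencies. The snippet below is a minimal illustration with hypothetical frequencies, not the paper's full MST/CST formulation.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K

def harmonic_free_energy(omegas, T):
    """Helmholtz free energy of a set of quantum harmonic oscillators:
    F = sum_i kT * ln(2*sinh(hbar*omega_i / (2*k*T))).
    `omegas` are angular eigenfrequencies (rad/s) from a local Hessian."""
    x = HBAR * np.asarray(omegas) / (2.0 * KB * T)
    return float(KB * T * np.sum(np.log(2.0 * np.sinh(x))))

# hypothetical phonon-like frequencies around 10 THz, at room temperature
omegas = 2 * np.pi * np.array([8e12, 10e12, 12e12])
print(harmonic_free_energy(omegas, T=300.0), "J per atom")
```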

Relevance: 30.00%

Abstract:

Proteolytic enzymes have evolved several mechanisms to cleave peptide bonds, and these distinct types have been systematically categorized in the MEROPS database. While a BLAST search on these proteases identifies homologous proteins, sequence alignment methods often fail to identify relationships arising from convergent evolution, exon shuffling, and modular reuse of catalytic units. We previously established a computational method, CLASP, to detect functions in proteins based on the spatial and electrostatic properties of the catalytic residues. CLASP identified a promiscuous serine protease scaffold in alkaline phosphatases (APs) and a scaffold recognizing a beta-lactam (imipenem) in a cold-active Vibrio AP. Subsequently, we defined a methodology to quantify promiscuous activities in a wide range of proteins. Here, we assemble a module that encapsulates the multifarious motifs used by the protease families listed in the MEROPS database. Since APs and proteases are integral components of outer membrane vesicles (OMVs), we used this search module to query other OMV proteins, such as phospholipase C (PLC). Our analysis indicated that phosphoinositide-specific PLC (PI-PLC) from Bacillus cereus is a serine protease. This was validated by protease assays, mass spectrometry, and inhibition of the native phospholipase activity of PI-PLC by the well-known serine protease inhibitor AEBSF (IC50 = 0.018 mM). Edman degradation analysis linked the specificity of the protease activity to a proline at the amino terminus, suggesting that the PI-PLC is a prolyl peptidase. We thus propose a computational method for extending protein families based on the spatial and electrostatic congruence of active-site residues.
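CLASP combines spatial and electrostatic congruence; the geometric half can be sketched as pairwise distances between putative catalytic atoms read from a structure file. The snippet below uses Biopython, with the file name and residue numbers as hypothetical placeholders.

```python
from itertools import combinations
from Bio.PDB import PDBParser

# Placeholders: substitute a real structure and the residue numbers of the
# putative catalytic triad (e.g. a Ser-His-Asp arrangement).
PDB_FILE = "protein.pdb"
CHAIN = "A"
CATALYTIC = [("SER", 102, "OG"), ("HIS", 57, "NE2"), ("ASP", 45, "OD1")]

structure = PDBParser(QUIET=True).get_structure("query", PDB_FILE)
chain = structure[0][CHAIN]
atoms = {(name, num): chain[num][atom] for name, num, atom in CATALYTIC}

# Pairwise distances form the spatial signature compared across scaffolds.
for (r1, r2) in combinations(atoms, 2):
    print(r1, "-", r2, f"{atoms[r1] - atoms[r2]:.2f} A")
```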

Relevance: 30.00%

Abstract:

This report presents the results of a survey of current practice in the use of design optimization, conducted amongst UK companies by the Design Optimization Group in the Department of Engineering at Cambridge University. The general aims of this research were to understand the current status of design optimization research and practice, and to identify ways in which the use of design optimization methods and tools could be improved.

Relevance: 30.00%

Abstract:

Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in their solutions. In this thesis, we propose efficient numerical methods for both deterministic and stochastic PDEs based on model reduction techniques.

For deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth and can be resolved on a regular coarse mesh. Furthermore, we provide an error analysis and show that the solution to the effective equation, plus a correction term, is close to the original multiscale solution.

For stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method: we construct a data-driven stochastic basis and solve coupled deterministic PDEs to obtain the solutions. For harder problems, we propose the multiscale multilevel Monte Carlo method: we apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the dominant parts of certain stochastic quantities.
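The KL expansion used here can be sketched numerically as the eigendecomposition of a covariance kernel sampled on a grid; the exponential kernel and correlation length below are illustrative choices, not the thesis's specific setup.

```python
import numpy as np

n, L, corr_len = 200, 1.0, 0.2            # grid size, domain, correlation length
x = np.linspace(0.0, L, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential kernel

# Karhunen-Loeve: eigendecompose the covariance, keep the dominant modes
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1]
vals, vecs = vals[idx], vecs[:, idx]
m = np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95) + 1   # 95% of variance
print(f"{m} KL modes capture 95% of the variance")

# a random field realization from the truncated expansion
xi = np.random.default_rng(0).standard_normal(m)
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)
```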

For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also demonstrate the reduction in computational cost in the numerical examples.

Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and higher model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We first look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories; these in turn determine the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
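The flavor of the adaptive loop can be sketched as follows. For brevity the sketch scores tests by expected information gain rather than EC2 (the very criterion the thesis shows can degrade under noise, which is what motivates EC2); the likelihoods are made-up toy numbers.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def next_test(prior, likelihoods):
    """Greedy information-gain test selection.

    likelihoods[t, h, y] = P(response y | hypothesis h, test t).
    Returns the test with maximal expected entropy reduction.
    (BROAD instead uses the EC2 criterion, which is robust to noise.)"""
    gains = []
    for lik in likelihoods:                      # one candidate test at a time
        p_y = prior @ lik                        # predictive over responses
        posts = prior[:, None] * lik / p_y       # posterior for each response
        exp_h = sum(p_y[y] * entropy(posts[:, y]) for y in range(lik.shape[1]))
        gains.append(entropy(prior) - exp_h)
    return int(np.argmax(gains))

def update(prior, lik_t, y):
    post = prior * lik_t[:, y]
    return post / post.sum()

# toy setup: 3 hypotheses, 4 binary-response tests with made-up likelihoods
rng = np.random.default_rng(0)
lik = rng.dirichlet((1, 1), size=(4, 3))         # shape (tests, hyps, resps)
belief = np.ones(3) / 3
t = next_test(belief, lik)
belief = update(belief, lik[t], y=1)             # observe response y = 1
print("asked test", t, "new belief", belief)
```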

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models; classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out, both because it is infeasible in practice and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
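The thesis result concerns positive dependence in subjective time; as a closely related, well-known illustration of the same mechanism (a toy calculation under stated assumptions, not the thesis's proof), aggregating exponential discounting over a gamma-distributed rate r with shape k and scale θ gives

$$ D(t) = \mathbb{E}_r\!\left[e^{-rt}\right] = \int_0^\infty e^{-rt}\,\frac{r^{k-1}e^{-r/\theta}}{\Gamma(k)\,\theta^{k}}\,dr = (1+\theta t)^{-k}, $$

which decays as a power law in t and therefore exhibits the preference reversals characteristic of generalized hyperbolic discounting.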

We also test the predictions of behavioural theories in the "wild", focusing on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains; even more importantly, when the item is no longer discounted, demand for its close substitutes should increase excessively. We tested this prediction by fitting a discrete choice model with a loss-averse utility function to data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
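A minimal sketch of such a loss-averse discrete choice specification (all names and parameter values hypothetical; λ = 2.25 merely echoes the classic prospect theory estimate):

```python
import numpy as np

def choice_probs(prices, ref_prices, alpha=1.0, beta=0.8, lam=2.25):
    """Multinomial logit with loss-averse price utility.

    Utility of item i: u_i = -alpha * p_i plus a gain/loss term relative
    to the reference price r_i, with losses weighted lam > 1 times gains."""
    dev = ref_prices - prices                     # >0: discount (a gain)
    gainloss = np.where(dev >= 0, beta * dev, lam * beta * dev)
    u = -alpha * prices + gainloss
    e = np.exp(u - u.max())                       # numerically stable softmax
    return e / e.sum()

prices = np.array([10.0, 9.0, 10.5])
refs = np.array([10.0, 10.0, 10.0])               # item 2 is on discount
print(choice_probs(prices, refs))
# ending the discount (prices[1] -> 10.0) shifts demand to close substitutes
```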

In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.