969 results for computational models
Abstract:
What does it mean for a curriculum to be interactive? It encourages student engagement and active participation in both individual and group work. It offers teachers a coherent set of materials to choose from that can enhance their classes. It is the product of ongoing development and continuous improvement based on research and feedback from the field. This paper introduces work in progress from the Center for Excellence in Education, Science, and Technology (CELEST), an NSF Science of Learning Center. Among its many goals, CELEST is developing a unique educational curriculum: an interactive curriculum based upon models of mind and brain. Teachers, administrators, and governments are naturally concerned with how students learn. Students are greatly concerned with how minds work, including how to learn. CELEST aims to introduce curricula that not only meet current U.S. standards in mathematics, science, and psychology but also influence plans to improve those standards. Software and support materials are in development and available at http://cns.bu.edu/celest/private/. Interested parties are invited to contact the author for access.
Abstract:
Abstract of "Ontologies and Methods for Interoperability of Engineering Analysis Models (EAMs) in an e-Design Environment," September 2007, Neelima Kanuri; B.S., Birla Institute of Technology and Science, Pilani, India; M.S., University of Massachusetts Amherst. Directed by Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge; the instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a proof-of-concept test-bed application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-based tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is automatic technical report generation, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge-sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, these methods enable recipient-specific, fine-grained control of knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD, and help reduce the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
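The abstract describes, but does not spell out, the rule used to shift between the two finite element models. The following minimal Python sketch illustrates the idea only; the class, cost field and switching heuristic are all hypothetical, not the thesis's actual knowledge-based logic.

```python
# Hypothetical sketch of fidelity switching during design optimization.
# The rule here (cheap model for exploration, expensive model near the
# optimum) is illustrative; the thesis's KB tool instead reasons over an
# ontology of analysis modeling knowledge.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class AnalysisModel:
    name: str
    relative_cost: float                           # CPU cost per evaluation
    evaluate: Callable[[Dict[str, float]], float]  # design -> max stress

def choose_model(beam: AnalysisModel, shell: AnalysisModel,
                 near_optimum: bool) -> AnalysisModel:
    """Use the low-cost beam model while exploring the design space and
    the high-fidelity shell model once the optimizer closes in."""
    return shell if near_optimum else beam
```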
Abstract:
Alzheimer's disease (AD) is characterized by the cerebral accumulation of misfolded and aggregated amyloid-beta protein (Abeta). Disease symptoms can be alleviated, in vitro and in vivo, by 'beta-sheet breaker' pentapeptides that reduce plaque load. However, the peptide nature of these compounds makes them biologically unstable and unable to penetrate membranes with high efficiency. The main goal of this study was to use computational methods to identify small-molecule mimetics with better drug-like properties. For this purpose, the docked conformations of the active peptides were used to identify compounds with similar activities. A series of related beta-sheet breaker peptides were docked to solid-state NMR structures of a fibrillar form of Abeta. The lowest-energy conformations of the active peptides were used to design three-dimensional (3D) pharmacophores suitable for screening the NCI database with Unity. Small-molecular-weight compounds with physicochemical features and conformations similar to the active peptides were selected and ranked by docking and biochemical parameters. Of 16 diverse compounds selected for experimental screening, 2 prevented and reversed Abeta aggregation at 2-3 µM concentration, as measured by Thioflavin T (ThT) fluorescence and ELISA assays. They also prevented the toxic effects of aggregated Abeta on neuroblastoma cells. Their low molecular weight and aqueous solubility make them promising lead compounds for treating AD.
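As a rough illustration of the final selection step (docking-score ranking plus physicochemical filtering), here is a hedged Python sketch; the identifiers, scores and cutoffs are invented, not the study's data or criteria.

```python
# Illustrative ranking of screened compounds by docking score plus simple
# drug-likeness filters; values and thresholds are assumptions.
compounds = [
    {"id": "NCI-0001", "dock_score": -9.2, "mol_weight": 312.4, "logp": 2.1},
    {"id": "NCI-0002", "dock_score": -7.5, "mol_weight": 540.8, "logp": 5.6},
    {"id": "NCI-0003", "dock_score": -8.8, "mol_weight": 287.3, "logp": 1.4},
]

def drug_like(c):
    # Rough Lipinski-style cutoffs: small and not too lipophilic.
    return c["mol_weight"] < 500 and c["logp"] < 5

ranked = sorted((c for c in compounds if drug_like(c)),
                key=lambda c: c["dock_score"])  # more negative = better
print([c["id"] for c in ranked])  # ['NCI-0001', 'NCI-0003']
```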
Abstract:
Models of DNA sequence evolution and methods for estimating evolutionary distances are needed for studying the rate and pattern of molecular evolution and for inferring the evolutionary relationships of organisms or genes. In this dissertation, several new models and methods are developed.

Rate variation among nucleotide sites: To obtain unbiased estimates of evolutionary distances, the rate heterogeneity among nucleotide sites of a gene should be considered. Commonly, it is assumed that the substitution rate varies among sites according to a gamma distribution (gamma model) or, more generally, an invariant+gamma model that also includes some invariable sites. A maximum likelihood (ML) approach was developed for estimating the shape parameter of the gamma distribution $(\alpha)$ and/or the proportion of invariable sites $(\theta)$. Computer simulation showed that (1) under the gamma model, $\alpha$ can be well estimated from 3 or 4 sequences if the sequences are long; and (2) the distance estimate is unbiased and robust against violations of the assumptions of the invariant+gamma model. However, this ML method requires a huge amount of computational time and is useful only for fewer than 6 sequences. Therefore, I developed a fast method for estimating $\alpha$ that is easy to implement and requires no knowledge of the tree. A computer program was developed for estimating $\alpha$ and evolutionary distances that can handle as many as 30 sequences.

Evolutionary distances under the stationary, time-reversible (SR) model: The SR model is a general model of nucleotide substitution that assumes (i) stationary nucleotide frequencies and (ii) time-reversibility. It can be extended to the SRV model, which allows rate variation among sites. I developed a method for estimating the distance under the SR or SRV model, as well as the variance-covariance matrix of the distances. Computer simulation showed that the SR method is better than a simpler method when the sequence length $L > 1{,}000$ bp and is robust against deviations from time-reversibility. As expected, when the rate varies among sites, the SRV method is much better than the SR method.

Evolutionary distances under nonstationary nucleotide frequencies: The statistical properties of the paralinear and LogDet distances under nonstationary nucleotide frequencies were studied. First, I developed formulas for correcting the estimation biases of the paralinear and LogDet distances; the performance of these formulas and of the formulas for sampling variances was examined by computer simulation. Second, I developed a method for estimating the variance-covariance matrix of the paralinear distance, so that statistical tests of phylogenies can be conducted when the nucleotide frequencies are nonstationary. Third, a new method for testing the molecular clock hypothesis was developed for the nonstationary case.
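For orientation, Lake's paralinear distance mentioned above can be computed directly from the 4x4 joint nucleotide-frequency matrix of two aligned sequences; the sketch below shows only the uncorrected estimator, not the bias-correction or variance formulas developed in the dissertation.

```python
# Uncorrected paralinear distance from the 4x4 joint nucleotide-frequency
# matrix J of two aligned sequences (rows: nucleotide in sequence X,
# columns: nucleotide in sequence Y).
import numpy as np

def paralinear_distance(J: np.ndarray) -> float:
    fx = J.sum(axis=1)              # marginal frequencies, sequence X
    fy = J.sum(axis=0)              # marginal frequencies, sequence Y
    return -0.25 * np.log(np.linalg.det(J)
                          / np.sqrt(np.prod(fx) * np.prod(fy)))

# Identical sequences give distance ~0:
print(paralinear_distance(np.diag([0.25, 0.25, 0.25, 0.25])))
```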
Abstract:
In attempts to elucidate the underlying mechanisms of spinal injuries and spinal deformities, several experimental and numerical studies have been conducted to understand the biomechanical behavior of the spine. However, numerical biomechanical studies suffer from uncertainties associated with hard- and soft-tissue anatomies. Currently, these parameters are identified manually on each mesh model prior to simulation. The determination of soft connective tissues on finite element meshes can be a tedious procedure, which limits the number of models used in numerical studies to a few instances. To address these limitations, an image-based method for automatic morphing of soft connective tissues is proposed. Results showed that the proposed method is capable of accurately determining the spatial locations of predetermined bony landmarks. The method can be used to automatically generate patient-specific models, which may be helpful in designing studies involving a large number of instances and in understanding the mechanical behavior of biomechanical structures across a given population.
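The abstract does not detail the morphing algorithm itself. As a minimal stand-in, the following Python sketch fits an affine map between corresponding bony landmarks on a template mesh and on patient data by least squares, the simplest form of landmark-driven mesh warping; real pipelines typically add non-rigid refinement.

```python
# Least-squares affine registration between template and patient bony
# landmarks; an illustrative simplification, not the paper's method.
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """src, dst: (n, 3) corresponding landmarks, n >= 4 and non-coplanar.
    Returns a 3x4 affine matrix A with dst ~ apply_affine(A, src)."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # solves src_h @ X = dst
    return X.T                                       # (3, 4)

def apply_affine(A: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map all mesh vertices (m, 3) through the fitted affine transform."""
    return pts @ A[:, :3].T + A[:, 3]
```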
Abstract:
The potential and adaptive flexibility of Population Dynamics P systems (PDP systems) for studying population dynamics suggest that they may be suitable for modelling complex fluvial ecosystems, characterized by a composition of dynamic habitats with many variables that interact simultaneously. Using as a case study a reservoir occupied by the zebra mussel Dreissena polymorpha, we designed a computational model based on P systems to study the population dynamics of larvae, in order to evaluate management actions for controlling or eradicating this invasive species. The population dynamics of the species was simulated under different scenarios, ranging from no change in water flow, to weekly variation with different flow rates, to the actual hydrodynamic situation of an intermediate flow rate. Our results show that PDP models can be very useful tools for modelling complex, partially desynchronized processes that work in parallel. This allows the study of complex hydroecological processes such as the one presented here, in which reproductive cycles, temperature and water dynamics combine to desynchronize the population dynamics both within areas and among them. The results obtained may be useful in the management of other reservoirs with similar hydrodynamic situations in which the presence of this invasive species has been documented.
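As a toy illustration of the scenario comparison (not a P system, and with invented parameters), a discrete-time larval model with flow-dependent washout might look like this:

```python
# Toy discrete-time larval dynamics with flow-dependent washout; a
# drastic simplification of the PDP model, with invented parameters.
import numpy as np

def simulate(weeks, flow, r=1.3, washout=0.02, K=1e6):
    """flow: weekly flow rates; higher flow washes out more larvae."""
    n = np.empty(weeks)
    n[0] = 1e3
    for t in range(1, weeks):
        growth = r * n[t - 1] * (1 - n[t - 1] / K)  # logistic reproduction
        n[t] = max(growth - washout * flow[t] * n[t - 1], 0.0)
    return n

weeks = 52
scenarios = {"no flow change": np.full(weeks, 5.0),
             "weekly variation": 5.0 + 4.0 * np.sin(np.arange(weeks)),
             "intermediate flow": np.full(weeks, 9.0)}
for label, flow in scenarios.items():
    print(label, round(simulate(weeks, flow)[-1]))
```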
Abstract:
Accurate three-dimensional (3D) models of lumbar vertebrae are required for image-based 3D kinematic analysis. MRI or CT datasets are frequently used to derive 3D models but have the disadvantages of being expensive, time-consuming or, in the case of CT acquisition, involving ionizing radiation. In this chapter, we present an alternative technique that can reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image and a statistical shape model. Cadaveric studies were conducted to verify the reconstruction accuracy by comparing the surface models reconstructed from a single lateral fluoroscopic image to ground-truth data from 3D CT segmentation. A mean reconstruction error between 0.7 and 1.4 mm was found.
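The core of such a reconstruction can be sketched compactly: the vertebra is expressed as the statistical shape model's mean shape plus a weighted sum of principal modes, and the weights are fitted to the 2D image evidence. The sketch below assumes a linearized projection operator and omits the pose and scale estimation an actual 2D/3D registration needs.

```python
# Sketch of SSM-based reconstruction: shape = mean + modes @ weights,
# with the weights fitted to 2D fluoroscopic evidence. 'P' is an assumed
# linearized projection matrix, not the chapter's actual operator.
import numpy as np

def reconstruct(mean_shape: np.ndarray, modes: np.ndarray,
                weights: np.ndarray) -> np.ndarray:
    """mean_shape: (3n,); modes: (3n, m); returns the (3n,) surface."""
    return mean_shape + modes @ weights

def fit_weights(mean_shape, modes, P, target_2d):
    """P: (2n, 3n) projection; target_2d: (2n,) image contour evidence."""
    A = P @ modes
    b = target_2d - P @ mean_shape
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares mode weights
    return w
```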
Abstract:
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised the research and development of devices such as stents, valve prostheses and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields, and the computation of metrics that cannot be directly measured, for example wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally invasive patient assessment. Patient-specific modelling (incorporating data unique to the individual) and multi-scale modelling (combining models of different length- and time-scales) enable individualised risk prediction and virtual treatment planning, a significant departure from the traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations, which, when combined with population-scale numerical models, have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine; while it is potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.
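Wall shear stress, the example given above of a metric that cannot be measured directly, is recovered from the simulated velocity field; for a Newtonian fluid it is the dynamic viscosity times the wall-normal gradient of the tangential velocity:

```latex
\tau_w = \mu \left. \frac{\partial u_t}{\partial n} \right|_{\text{wall}}
```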
Abstract:
Vestibular cognition has recently gained attention. Despite numerous experimental and clinical demonstrations, it is not yet clear what vestibular cognition really is. For future research in vestibular cognition, adopting a computational approach will make it easier to explore the underlying mechanisms. Indeed, most modeling approaches in vestibular science include a top-down or a priori component. We review recent Bayesian optimal observer models, and discuss in detail the conceptual value of prior assumptions, likelihoods and posterior estimates for research in vestibular cognition. We then consider forward models in vestibular processing, which are required in order to distinguish between sensory input that is induced by active self-motion and sensory input that is due to passive self-motion. We suggest that forward models are used not only in the service of estimating sensory states, but can also be drawn upon in an offline mode (e.g., for spatial perspective transformations) in which interaction with sensory input is not desired. A computational approach to vestibular cognition will help to discover connections across studies, and it will provide a more coherent framework for investigating vestibular cognition.
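A minimal numerical illustration of the Bayesian optimal-observer idea: with a Gaussian prior over self-motion velocity and a Gaussian likelihood from a noisy vestibular signal, the posterior mean is a precision-weighted average. The numbers are illustrative, not taken from the reviewed models.

```python
# Gaussian prior x Gaussian likelihood: the posterior mean is a
# precision-weighted average of prior mean and observation.
def posterior(prior_mean, prior_var, obs, obs_var):
    w = prior_var / (prior_var + obs_var)          # weight on observation
    mean = prior_mean + w * (obs - prior_mean)
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

# A zero-mean prior ("the head is usually still") shrinks a noisy
# 10 deg/s vestibular velocity estimate toward zero:
print(posterior(0.0, 4.0, 10.0, 16.0))  # -> (2.0, 3.2)
```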
Abstract:
BACKGROUND: The zebrafish is a clinically relevant model of heart regeneration. Unlike mammals, it has a remarkable capacity for heart repair after injury, and promises novel translational applications. Amputation and cryoinjury models are key research tools for understanding injury response and regeneration in vivo. An understanding of the transcriptional responses following injury is needed to identify key players in heart tissue repair, as well as potential targets for boosting this property in humans. RESULTS: We investigated amputation and cryoinjury in vivo models of heart damage in the zebrafish through unbiased, integrative analyses of independent molecular datasets. To detect genes with potential biological roles, we derived computational prediction models from microarray data from heart amputation experiments. We focused on a top-ranked set of genes highly activated in the early post-injury stage, whose activity was further verified in independent microarray datasets. Next, we performed independent validation of expression responses with qPCR in a cryoinjury model. Across the in vivo models, the top candidates showed highly concordant responses at 1 and 3 days post-injury, which highlights the predictive power of our analysis strategies and the possible biological relevance of these genes. The top candidates are significantly involved in cell fate specification and differentiation, and include heart failure markers such as periostin, as well as potential new targets for heart regeneration. For example, ptgis and ca2 were overexpressed, while usp2a, a regulator of the p53 pathway, was down-regulated in our in vivo models. Interestingly, high activity of ptgis and ca2 has previously been observed in failing hearts from rats and humans. CONCLUSIONS: We identified genes with potentially critical roles in the response to cardiac damage in the zebrafish. Their transcriptional activities are reproducible in different in vivo models of cardiac injury.
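A minimal sketch of the kind of ranking used to pick top candidates follows; the gene names echo the abstract, but all expression values are invented for illustration.

```python
# Toy ranking of candidate genes by mean activation at 1 and 3 days
# post-injury (dpi); expression values are invented for the example.
import numpy as np

genes = ["periostin", "ptgis", "ca2", "usp2a"]
log2fc = np.array([[2.1, 1.8],     # columns: 1 dpi, 3 dpi
                   [1.9, 2.4],
                   [1.6, 1.7],
                   [-1.2, -0.9]])  # usp2a: down-regulated
score = log2fc.mean(axis=1)        # mean activation over both time points
for gene, s in sorted(zip(genes, score), key=lambda t: -t[1]):
    print(f"{gene}\t{s:+.2f}")
```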
Abstract:
This paper shows how an Armington-Krugman-Melitz encompassing module based on Dixon and Rimmer (2012) can be calibrated, and clarifies that the initial levels of the two kinds of firm numbers, or the parameter values of the two kinds of fixed costs, that enter a Melitz-type specification can be set freely to any preferred value, just as quantities are derived from given value data by assuming some of the initial prices to be unity. Consequently, only one additional piece of information, the shape parameter related to productivity, is required in order to incorporate Melitz-type monopolistic competition and heterogeneous firms into a standard applied general equilibrium model; for a Krugman-type specification, nothing additional is needed. This enables model builders in applied economics to fully exploit the featured properties of the theoretical models developed by Krugman (1980) and Melitz (2003) in practical policy simulations at low cost.
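For orientation, the role of that single shape parameter can be seen in the textbook Melitz (2003) setup: with Pareto-distributed productivity, $G(\varphi) = 1 - (\varphi_{\min}/\varphi)^{k}$, the CES-weighted average productivity of active firms is proportional to the cutoff productivity, with a factor governed only by the shape parameter $k$ and the elasticity of substitution $\sigma$ (a standard result, shown here for context rather than taken from the paper):

```latex
\tilde{\varphi}(\varphi^{*})
  = \left[ \frac{k}{k - (\sigma - 1)} \right]^{\frac{1}{\sigma - 1}} \varphi^{*},
  \qquad k > \sigma - 1.
```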
Abstract:
This paper explores how simulation results change with the choice of trade specification and with the strength of economic agents' preference for traded variety, utilizing two types of three-region, three-sector AGE models that include the Armington-Krugman-Melitz encompassing module based on Dixon and Rimmer (2012). Simulation experiments reveal that: (1) the Melitz-type specification does not always enhance the effectiveness of a given policy change more than the Krugman-type does, especially when economic agents' preference for traded variety is not strong; (2) there are likely to be points beyond which the effects obtained with the Melitz-type specification exceed those obtained with the Krugman-type; and (3) the preference for traded variety of producers in sectors that exhibit increasing returns to scale might be the engine of the explosive effects suggested by Fujita et al. (2000).
Abstract:
A new method is presented to generate reduced-order models (ROMs) for fluid dynamics problems of industrial interest. The method is based on the expansion of the flow variables in a proper orthogonal decomposition (POD) basis, calculated from a limited number of snapshots obtained via computational fluid dynamics (CFD). The POD-mode amplitudes are then calculated as minimizers of a properly defined overall residual of the equations and boundary conditions. The method includes several ingredients that are new in this field. The residual can be calculated using only a limited number of points in the flow field, which can be scattered either over the whole computational domain or over a smaller projection window. The resulting ROM is both computationally efficient (in cases that do not present shock waves, reconstructing flow fields requires less than 1% of the time needed to compute a full CFD solution) and flexible (the projection window can avoid regions of large localized CFD errors). Also, for problems related to aerodynamics, the POD modes are obtained from a set of snapshots calculated by a CFD method based on the compressible Navier-Stokes equations and a turbulence model (which furthermore includes some unphysical stabilizing terms added for purely numerical reasons), but projection onto the POD manifold is made using the inviscid Euler equations, which makes the method independent of the CFD scheme. In addition, shock waves are treated specifically in the POD description, to avoid the need for a very large number of snapshots. Various definitions of the residual are also discussed, along with the number and distribution of snapshots, the number of retained modes, and the effect of CFD errors. The method is checked and discussed on several test problems that describe (i) heat transfer in the recirculation region downstream of a backwards-facing step, (ii) the flow past a two-dimensional airfoil in both the subsonic and transonic regimes, and (iii) the flow past a three-dimensional horizontal tail plane. The method is both efficient and numerically robust, in the sense that the computational effort is quite small compared to CFD and the results are both reasonably accurate and largely insensitive to the definition of the residual, to CFD errors, and to the CFD method itself, which may contain artificial stabilizing terms. Thus, the method is amenable to practical engineering applications.
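The backbone of the approach can be sketched in a few lines of Python: POD modes from the SVD of a snapshot matrix, then mode amplitudes from a least-squares fit at a small set of flow-field points (the "projection window"). This is a simplified stand-in; the paper's residual combines the governing equations and boundary conditions rather than a plain data misfit.

```python
# Minimal POD/ROM sketch: modes via SVD of snapshots, amplitudes via a
# least-squares fit at a few scattered points.
import numpy as np

def pod_modes(snapshots: np.ndarray, n_modes: int) -> np.ndarray:
    """snapshots: (n_dof, n_snap) array of CFD solutions as columns."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def fit_amplitudes(modes: np.ndarray, window: np.ndarray,
                   values: np.ndarray) -> np.ndarray:
    """window: indices of the few points where the residual is evaluated;
    values: flow values at those points. Returns the POD-mode amplitudes."""
    a, *_ = np.linalg.lstsq(modes[window, :], values, rcond=None)
    return a

# Reconstructed field: modes @ a, at a tiny fraction of full-CFD cost.
```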
Abstract:
Membrane systems are computationally equivalent to Turing machines; however, their distributed and massively parallel nature can yield polynomial-time solutions to problems that are otherwise non-polynomial. It is therefore important to develop dedicated hardware and software implementations that exploit these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck problem arises: as the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the evolution-step time needed to transit from one configuration of the system to the next, thereby solving the communication bottleneck problem. The goal of this paper is twofold. First, to survey, in a systematic and uniform way, the main results on how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Second, we improve some results on the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All of this improves the parallelism of the implementation, since it increases the parallelism of the external communication among processors. The proposed ideas improve on previous architectures that tackle the communication bottleneck problem, through a reduction of the total time of an evolution step, an increase in the number of membranes that can run on a processor, and a reduction in the number of processors.
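To make the placement problem concrete, here is a deliberately naive greedy Python sketch that assigns membranes to processors while trying to keep communicating membranes together (and thus external communication low); the surveyed architectures use considerably more elaborate schemes.

```python
# Naive greedy placement of membranes on processors: each membrane goes
# to the processor already hosting most of its communication partners,
# subject to a per-processor capacity. Illustrative only.
def place(membranes, edges, n_proc, capacity):
    """edges: iterable of frozensets {m1, m2} of communicating membranes."""
    proc_of, load = {}, [0] * n_proc
    for m in membranes:
        best = max(range(n_proc), key=lambda p: (
            sum(1 for e in edges if m in e and
                any(proc_of.get(x) == p for x in e if x != m))
            - (load[p] >= capacity) * 10**6))  # huge penalty if full
        proc_of[m] = best
        load[best] += 1
    return proc_of

print(place(["skin", "m1", "m2", "m3"],
            [frozenset({"skin", "m1"}), frozenset({"m1", "m2"})], 2, 3))
# -> skin, m1, m2 share processor 0; m3 lands on processor 1
```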
Abstract:
In this work we propose a method to accelerate time-dependent numerical solvers of systems of PDEs that require a high cost in computational time and memory. The method is based on combining such a numerical solver with a proper orthogonal decomposition (from which we identify modes), a Galerkin projection (which provides a reduced system of equations), and the integration of the reduced system, studying the evolution of the modal amplitudes. We integrate the reduced model until an a priori error estimator indicates that our approximation is no longer accurate. At this point, we use the original numerical code again over a short time interval to adapt the POD manifold, and then continue with the integration of the reduced model. The method is applied to two model problems: the Ginzburg-Landau equation in transient-chaos conditions, and the two-dimensional pulsating cavity problem, which describes the motion of liquid in a box whose upper wall moves back and forth in a quasi-periodic fashion. Finally, we discuss a way of improving the performance of the method using experimental data or information from numerical simulations.
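The adaptive strategy can be summarized as a control loop; in the Python sketch below, the full solver step, the reduced step, the POD computation and the error estimator are passed in as stub callables, since only the flow of the method is being illustrated.

```python
# Structural sketch of the adaptive loop: integrate the reduced model
# until the a priori error estimator fires, then run the full solver over
# a short interval to collect fresh snapshots and adapt the POD basis.
# All four callables are placeholders for the paper's actual components.
def adaptive_rom(full_step, reduced_step, pod_from, estimate_error,
                 state, t_end, dt, tol, n_snap=10):
    basis = pod_from([state])                   # initial (trivial) basis
    t = 0.0
    while t < t_end:
        state = reduced_step(basis, state, dt)  # cheap reduced-model step
        t += dt
        if estimate_error(basis, state) > tol:
            snaps = []
            for _ in range(n_snap):             # short full-solver run
                state = full_step(state, dt)
                t += dt
                snaps.append(state)
            basis = pod_from(snaps)             # adapt the POD manifold
    return state
```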