982 results for Embedded-Atom Method
Abstract:
The difficulties of applying the Hartree-Fock method to many-body problems are illustrated by treating helium's electrons up to the point where tractability vanishes. The problem of applying Hartree-Fock methods to the helium atom's electrons when they are constrained to remain on a sphere is then revisited. The 6-dimensional total energy operator is reduced to a 2-dimensional one, and the application of that 2-dimensional operator in the Hartree-Fock mode is discussed.
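For orientation, the 6-dimensional operator in question is the familiar two-electron helium Hamiltonian; written in atomic units (a textbook sketch, not the paper's own notation),
\[
\hat{H} = -\tfrac{1}{2}\nabla_1^{2} - \tfrac{1}{2}\nabla_2^{2} - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}},
\]
and for a single doubly occupied spatial orbital \(\varphi\) the closed-shell Hartree-Fock energy is
\[
E_{\mathrm{HF}} = 2\,\langle \varphi | \hat{h} | \varphi \rangle + \langle \varphi\varphi |\, r_{12}^{-1} \,| \varphi\varphi \rangle ,
\]
where \(\hat{h}\) collects one electron's kinetic energy and nuclear attraction. Constraining both electrons to a sphere removes the radial degrees of freedom, which is the starting point for the reduction to the 2-dimensional operator discussed above.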
Abstract:
The potential for the direct analysis of enzyme reactions by fast atom bombardment (FAB) mass spectrometry has been investigated. Conditions are presented for maintaining enzymatic activity under FAB conditions, along with FAB mass spectrometric data showing that these conditions can be applied to solutions of enzyme and substrate to follow enzymatic reactions inside the mass spectrometer in real time. In addition, enzyme kinetic behavior under FAB mass spectrometric conditions is characterized using trypsin and its assay substrate, TAME, as an enzyme-substrate reaction model. These results show that two monitoring methods can be used to follow reactions by FAB mass spectrometry. The advantages of each method are discussed and illustrated by obtaining kinetic parameters from the direct analysis of enzyme reactions with assay or peptide substrates.
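The abstract does not detail how the kinetic parameters are extracted; as a minimal sketch of the usual approach, initial rates derived from the FAB ion-intensity time courses could be fitted to the Michaelis-Menten rate law. The substrate concentrations and rates below are invented purely for illustration, not data from this work.

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # Michaelis-Menten rate law: v = Vmax*[S] / (Km + [S])
    return vmax * s / (km + s)

# Hypothetical initial rates, e.g. taken from the slopes of product-ion
# intensity versus time curves (illustrative values only).
substrate_mM = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0])
initial_rate = np.array([0.8, 1.7, 2.8, 4.0, 5.0, 5.8])

(vmax, km), _ = curve_fit(michaelis_menten, substrate_mM, initial_rate, p0=(6.0, 1.0))
print(f"Vmax ~ {vmax:.2f} (arb. units), Km ~ {km:.2f} mM")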
Abstract:
The major aim of this study was to examine the influence of an embedded viscoelastic-plastic layer, at different viscosity values, on accretionary wedges at subduction zones. To quantify the effects of the layer viscosity, we analysed the wedge geometry, accretion mode, thrust systems and mass transport pattern. To this end, we developed a numerical 2D 'sandbox' model based on the Discrete Element Method. Starting with a simple pure Mohr-Coulomb sequence, we added an embedded viscoelastic-plastic layer within the brittle, undeformed 'sediment' package. This layer followed a Burgers rheology, which simulates the creep behaviour of natural rocks such as evaporites, and was thrust and folded during the subduction process. Testing different bulk viscosity values, from 1 × 10^13 to 1 × 10^14 Pa s, revealed a range in which an active detachment evolved within the viscoelastic-plastic layer and decoupled the overlying and underlying brittle strata. This mid-level detachment caused the evolution of a frontally accreted wedge above it and a long underthrusted and subsequently basally accreted sequence beneath it. Both sequences were characterised by specific mass transport patterns depending on the viscosity value used. With decreasing bulk viscosities, thrust systems above this weak mid-level detachment became increasingly symmetrical and particle uplift was reduced, as would be expected for a salt-controlled forearc in nature. Simultaneously, antiformal stacking was favoured over hinterland dipping in the lower brittle layer, and overturning of the uplifted material increased. Hence, we confirmed that the viscosity of an embedded detachment strongly influences the mechanics of the whole wedge, both the lower-slope and the upper-slope duplex, as shown, for example, by the mass transport pattern.
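For reference, the Burgers rheology mentioned above is often characterised by its creep compliance; one common parameterisation (the DEM implementation in the study may use a different but equivalent form) is
\[
J(t) = \frac{1}{E_M} + \frac{t}{\eta_M} + \frac{1}{E_K}\left(1 - e^{-E_K t/\eta_K}\right),
\]
where \(E_M\) and \(\eta_M\) are the Maxwell spring stiffness and dashpot viscosity and \(E_K\), \(\eta_K\) the Kelvin element parameters. The long-term viscous creep is governed by \(\eta_M\), broadly corresponding to the bulk viscosities varied between \(10^{13}\) and \(10^{14}\) Pa s in the runs described above.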
Abstract:
This dissertation, whose research has been conducted at the Group of Electronic and Microelectronic Design (GDEM) within the framework of the project Power Consumption Control in Multimedia Terminals (PCCMUTE), focuses on the development of an energy estimation model for a battery-powered embedded processor board. The main objectives and contributions of the work are summarized as follows. A model is proposed to obtain accurate energy estimates based on the linear correlation between performance monitoring counters (PMCs) and energy consumption. Because the appropriate PMCs are unique to each system, the modeling methodology is improved to obtain stable accuracy with only slight variations across multiple scenarios and to be repeatable on other systems. It comprises two steps: the first, a PMC filter, identifies the most suitable set among the PMCs available on a system; the second, k-fold cross validation, avoids bias during the model training stage. The methodology is implemented on a commercial embedded board running the 2.6.34 Linux kernel and PAPI, a cross-platform interface to configure and access PMCs. The results show that the methodology maintains good stability across different scenarios and provides robust estimation results, with an average relative error below 5%.
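As a rough sketch of the two-step methodology (a PMC filter followed by k-fold cross validation of a linear PMC-to-energy model), something like the following could be used; the synthetic counters, data and selection criterion here are illustrative assumptions, not the thesis's actual procedure or results.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def select_pmcs(pmc_counts, energy, n_keep=3):
    # Illustrative PMC-filter step: keep the counters most correlated with energy.
    corr = [abs(np.corrcoef(pmc_counts[:, i], energy)[0, 1])
            for i in range(pmc_counts.shape[1])]
    return np.argsort(corr)[-n_keep:]

def kfold_relative_error(pmc_counts, energy, k=5):
    # k-fold cross validation of the linear PMC -> energy model.
    errs = []
    for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(pmc_counts):
        model = LinearRegression().fit(pmc_counts[train], energy[train])
        pred = model.predict(pmc_counts[test])
        errs.append(np.mean(np.abs(pred - energy[test]) / energy[test]))
    return float(np.mean(errs))

# Synthetic example: 200 samples of 8 hypothetical PMCs and measured energy (J).
rng = np.random.default_rng(0)
pmcs = rng.uniform(1e5, 1e7, size=(200, 8))
energy = 1.0 + 2e-7 * pmcs[:, 0] + 5e-8 * pmcs[:, 3] + rng.normal(0, 0.05, 200)

keep = select_pmcs(pmcs, energy)
print("selected PMC columns:", keep)
print("mean relative error:", kfold_relative_error(pmcs[:, keep], energy))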
Abstract:
Energy management has long been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make a best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between total application performance and the target lifetime to the user. This thesis provides a new way to deal with the problem. It is advocated that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime for a target application by restricting the average power of less important applications and, in addition, maximize the total performance of applications without harming that lifetime guarantee. To support this, energy, rather than CPU time or transmission bandwidth, should be managed globally by the OS as the first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism restricting the battery discharge rate, systematically manage energy as the first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing carries traditional fair queuing over to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned power shares. The proportional energy use establishes a proportional share of system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation for any task. Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet its highest per-period energy demand. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an extra real-time-friendly mechanism is introduced that combines priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and the meeting of time constraints can be flexibly traded off. A SystemC-based test bench is designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, meeting time constraints, and striking a proper trade-off between the two.
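The thesis defines the scheduling algorithms in full; purely as a schematic of the core idea (serving energy in proportion to per-task power shares, by analogy with classic weighted fair queuing), a minimal sketch could look like the code below. Class and method names are invented for illustration, and the virtual-clock bookkeeping is deliberately simplified.

import heapq

class EnergyFairQueue:
    """Schematic energy-based fair queuing: requests are dispatched in order of
    virtual energy finish tags, so consumed energy stays roughly proportional to
    each task's assigned power share."""

    def __init__(self):
        self.virtual_energy = 0.0   # system-wide virtual energy clock
        self.queue = []             # heap of (finish_tag, task, energy_quantum)
        self.last_finish = {}       # last finish tag per task

    def submit(self, task, energy_quantum, power_share):
        # Start tag: the later of the virtual clock and the task's previous finish tag.
        start = max(self.virtual_energy, self.last_finish.get(task, 0.0))
        finish = start + energy_quantum / power_share
        self.last_finish[task] = finish
        heapq.heappush(self.queue, (finish, task, energy_quantum))

    def dispatch(self):
        # Serve the request with the smallest virtual finish tag next.
        finish, task, energy_quantum = heapq.heappop(self.queue)
        self.virtual_energy = finish
        return task, energy_quantum

# Illustrative use: task A holds twice task B's power share, so it is served
# roughly twice as much energy per unit of virtual time.
sched = EnergyFairQueue()
for _ in range(3):
    sched.submit("A", energy_quantum=1.0, power_share=2.0)
    sched.submit("B", energy_quantum=1.0, power_share=1.0)
while sched.queue:
    print(sched.dispatch())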
Abstract:
Dynamic soil-structure interaction has long been one of the most fascinating areas for the engineering profession. The building of large alternating machines and their effects on surrounding structures, as well as on their own functional behavior, provided the initial impetus; a large amount of experimental research was done, and the results of the Russian and German groups were especially worthwhile. Analytical results by Reissner and Sehkter were reexamined by Quinlan, Sung, et al., and finally Veletsos presented the first set of reliable results. Since then, the modeling of the homogeneous, elastic halfspace as an equivalent set of springs and dashpots has become an everyday tool in soil engineering practice, especially after the appearance of the fast Fourier transform algorithm, which makes it possible to treat the frequency-dependent characteristics of the equivalent elements in a unified fashion with the general method of analysis of the structure. Extensions to the viscoelastic case, as well as to embedded foundations and complicated geometries, have been presented by various authors. In general, they used the finite element method, with the well-known problems of geometric truncation and the subsequent use of absorbing boundaries. The properties of boundary integral equation methods are, in our opinion, especially well suited to this problem, and several previous results have confirmed our view. In what follows we present the general features related to steady-state elastodynamics and a series of examples showing the splendid results that the BIEM provides. Especially interesting are the outputs obtained through the use of the so-called singular elements, whose description is given at the end of the paper. The reduction in computing time and the small number of elements needed to simulate realistically the global properties of the halfspace make this procedure one of the most interesting applications of the BIEM.
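For readers unfamiliar with the spring-and-dashpot idealisation mentioned above, the halfspace is usually condensed into a frequency-dependent impedance; a standard textbook parameterisation (not specific to this paper) is
\[
K(\omega) = K_s \left[ k(a_0) + i\, a_0\, c(a_0) \right], \qquad a_0 = \frac{\omega\, r_0}{c_s},
\]
where \(K_s\) is the static stiffness, \(r_0\) a characteristic foundation dimension, \(c_s\) the shear-wave velocity of the halfspace, and \(k\), \(c\) dimensionless stiffness and damping coefficients. The fast Fourier transform makes it practical to carry this frequency dependence through the analysis of the structure.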
Abstract:
This paper presents a numerical implementation of the cohesive crack model for the analysis of quasibrittle materials based on the strong discontinuity approach in the framework of the finite element method. A simple central force model is used for the stress versus crack opening curve. The additional degrees of freedom defining the crack opening are determined at the crack level, thus avoiding the need to perform a static condensation at the element level. The need for a tracking algorithm is avoided by using a consistent procedure for the selection of the separated nodes. The model, which takes into account the anisotropy of the material, is implemented in a commercial program by means of a user subroutine and contrasted with experimental results. Numerical simulations of well-known experiments are presented to show the ability of the proposed model to simulate the fracture of quasibrittle materials such as mortar, concrete and masonry.
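As one concrete example of a stress-versus-crack-opening curve of the kind referred to above (the paper's exact softening law is not reproduced here), a common exponential softening choice for quasibrittle materials is
\[
t(w) = f_t \, e^{-f_t w / G_F},
\]
with \(f_t\) the tensile strength and \(G_F\) the fracture energy; in a central force model the traction remains aligned with the crack opening vector, \(\mathbf{t} = t(\lVert \mathbf{w} \rVert)\, \mathbf{w} / \lVert \mathbf{w} \rVert\).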
Abstract:
In an unprecedented finding, Davis et al. [Davis, R. E., Miller, S., Herrnstadt, C., Ghosh, S. S., Fahy, E., Shinobu, L. A., Galasko, D., Thal, L. J., Beal, M. F., Howell, N. & Parker, W. D., Jr. (1997) Proc. Natl. Acad. Sci. USA 94, 4526–4531] used an unusual DNA isolation method to show that healthy adults harbor a specific population of mutated mitochondrial cytochrome c oxidase (COX) genes that coexist with normal mtDNAs. They reported that this heteroplasmic population was present at a level of 10–15% in the blood of normal individuals and at a significantly higher level (20–30%) in patients with sporadic Alzheimer’s disease. We provide compelling evidence that the DNA isolation method employed resulted in the coamplification of authentic mtDNA-encoded COX genes together with highly similar COX-like sequences embedded in nuclear DNA (“mtDNA pseudogenes”). We conclude that the observed heteroplasmy is an artifact.
Abstract:
The development of applications and services for mobile systems must cope with a wide range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to address this issue by developing a computational model that formalizes the problem and defines methods for adjusting the computation. The proposal combines imprecise computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, imprecise-computation scheduling of the embedded system's workload is used to decide which computation to move to the cloud, according to the priority and response time of the tasks to be executed, and thereby meet the desired productivity and quality of service. A technique for estimating network delays and scheduling tasks more accurately is illustrated in this paper, together with an application example in which the technique is exercised in running contexts with heterogeneous workloads to check the validity of the proposed model.
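The paper's scheduler is not reproduced here; as a minimal sketch of the imprecise-computation offloading idea under stated assumptions (all timing parameters below are hypothetical), the decision for a single task might look like this.

def should_offload(mandatory_ms, optional_local_ms, optional_cloud_ms,
                   est_network_delay_ms, deadline_ms):
    # Imprecise-computation style decision (illustrative): the mandatory part runs
    # locally; the optional part is sent to the cloud only if the estimated network
    # delay still allows the deadline to be met, otherwise it runs locally or is
    # dropped, yielding an imprecise but timely result.
    remaining = deadline_ms - mandatory_ms
    if optional_cloud_ms + est_network_delay_ms <= remaining:
        return "offload optional part to cloud"
    if optional_local_ms <= remaining:
        return "run optional part locally"
    return "skip optional part (imprecise result)"

# Example with an estimated round-trip delay of 40 ms (e.g. from a moving-average
# delay estimator); all numbers are made up for illustration.
print(should_offload(mandatory_ms=20, optional_local_ms=60,
                     optional_cloud_ms=15, est_network_delay_ms=40,
                     deadline_ms=100))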
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Cold atoms in optical potentials provide an ideal test bed to explore quantum nonlinear dynamics. Atoms are prepared in a magneto-optic trap or as a dilute Bose-Einstein condensate and subjected to a far-detuned optical standing wave that is modulated. They exhibit a wide range of dynamics, some of which can be explained by classical theory while other aspects show the underlying quantum nature of the system. The atoms have a mixed phase space containing regions of regular motion, which appear as distinct peaks in the atomic momentum distribution embedded in a sea of chaos. The action of the atoms is of the order of Planck's constant, making quantum effects significant. This tutorial presents a detailed description of experiments measuring the evolution of atoms in time-dependent optical potentials. Experimental methods are developed that provide means for the observation and selective loading of regions of regular motion. The dependence of the atomic dynamics on the system parameters is explored, and distinct changes in the atomic momentum distribution are observed which are explained by the applicable quantum and classical theory. The observation of a bifurcation sequence is reported and explained using classical perturbation theory. Experimental methods for the accurate control of the momentum of an ensemble of atoms are developed. They use phase-space resonances and chaotic transients, providing novel ensemble atomic beamsplitters. The divergence between quantum and classical nonlinear dynamics is manifest in the experimental observation of dynamical tunnelling, which involves no potential barrier; however, a constant of motion other than energy still classically forbids this quantum-mechanically allowed motion. Atoms coherently tunnel back and forth between their initial state of oscillatory motion and the state 180° out of phase with it.
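A common single-particle model for such experiments (the precise modulation used in the work above may differ) is the amplitude-modulated standing-wave Hamiltonian
\[
H = \frac{p^{2}}{2m} + V_0 \left[ 1 + \varepsilon \sin(\omega_m t) \right] \cos(2 k_L x),
\]
where \(k_L\) is the laser wavenumber, \(V_0\) the well depth set by the intensity and detuning, and \(\varepsilon\), \(\omega_m\) the modulation depth and frequency; the mixed phase space and the phase-space resonances exploited above arise from this time-periodic driving.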
Abstract:
A comprehensive study has been conducted to compare the adsorption of alkali metals (Li, Na and K) on the basal plane of graphite using molecular orbital theory calculations. All three metal atoms prefer to adsorb on the hollow site above the middle of a hexagonal aromatic ring. A novel phenomenon was observed: Na, rather than Li or K, is the most weakly adsorbed of the three metals. The reason is that the SOMO (singly occupied molecular orbital) of the Na atom lies exactly midway in energy between the HOMO and the LUMO of the graphite layer. As a result, the SOMO of Na cannot form a stable interaction with either the HOMO or the LUMO of the graphite, whereas the SOMO of Li or K can form a relatively stable interaction with one of them. Why Li adsorbs more strongly than K on graphite is also interpreted on the basis of their molecular-orbital energy levels.
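The orbital-interaction argument can be made slightly more explicit with second-order perturbation theory, treating the SOMO-graphite couplings as roughly comparable for the three metals (an assumption made here only for illustration). The stabilization gained by mixing the metal SOMO with the graphite frontier orbitals scales as
\[
\Delta E_{\text{stab}} \;\sim\; \frac{|H_{\text{SOMO,HOMO}}|^{2}}{\varepsilon_{\text{SOMO}} - \varepsilon_{\text{HOMO}}} \;+\; \frac{|H_{\text{SOMO,LUMO}}|^{2}}{\varepsilon_{\text{LUMO}} - \varepsilon_{\text{SOMO}}},
\]
and for a fixed HOMO-LUMO gap the sum of the two terms is smallest when the SOMO sits exactly at the midpoint, which is consistent with Na showing the weakest adsorption.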
Abstract:
In this work, the different adsorption properties of H and alkali metal atoms on the basal plane of graphite are studied and compared using a density functional method at the same level of model chemistry. The results show that H prefers the on-top site while alkali metals favor the middle hollow site of the graphite basal plane, owing to the distinct electronic structures of H, the alkali metals, and graphite. H has a higher electronegativity than carbon and prefers to form a covalent bond with C atoms, whereas the alkali metals have lower electronegativity and tend to adsorb on the sites of highest electrostatic potential. During adsorption, more charge is transferred from the alkali metal to graphite than from H to graphite.
Abstract:
Nitrogen substitution in carbon materials is investigated theoretically using the density functional theory method. Our calculations show that nitrogen substitution decreases the hydrogen adsorption energy if hydrogen atoms are adsorbed on both the nitrogen atoms and the neighboring carbon atoms. On the contrary, the hydrogen adsorption energy can be increased if hydrogen atoms are adsorbed only on the neighboring carbon atoms. The reason can be explained by analysis of the electronic structures of N-substituted graphene sheets: nitrogen substitution reduces the pi-electron conjugation and increases the HOMO energy of a graphene sheet, and adsorption on the nitrogen atom itself is not stable owing to its 3-valent character. This raises an interesting research topic on optimizing the degree of N-substitution, which is important to many applications such as hydrogen storage and tokamak devices. The electronic structure studies also explain why nitrogen substitution increases the capacitance but decreases the electronic conductivity of carbon electrodes, as observed in our supercapacitor experiments.
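For clarity, the hydrogen adsorption energy discussed above is conventionally defined per adsorbed atom as
\[
E_{\text{ads}} = \frac{1}{n}\Bigl[ E(\text{sheet} + n\mathrm{H}) - E(\text{sheet}) - n\, E(\mathrm{H}) \Bigr],
\]
with the sign convention varying between studies (the paper's exact definition is not reproduced here); the N-substitution effects described above are changes in this quantity.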
Abstract:
Chromogenic (CISH) and fluorescent (FISH) in situ hybridization have emerged as reliable techniques to identify amplifications and chromosomal translocations. CISH provides a spatial distribution of gene copy number changes in tumour tissue and allows a direct correlation between copy number changes and the morphological features of neoplastic cells. However, the limited number of commercially available gene probes has hindered the use of this technique. We have devised a protocol to generate probes for CISH that can be applied to formalin-fixed, paraffin-embedded tissue sections (FFPETS). Bacterial artificial chromosomes (BACs) containing fragments of human DNA which map to specific genomic regions of interest are amplified with phi29 polymerase and random-primer labelled with biotin. The genomic location of these can be readily confirmed by BAC end pair sequencing and FISH mapping on normal lymphocyte metaphase spreads. To demonstrate the reliability of the probes generated with this protocol, four strategies were employed: (i) probes mapping to cyclin D1 (CCND1) were generated and their performance was compared with that of a commercially available probe for the same gene in a series of 10 FFPETS of breast cancer samples, of which five harboured CCND1 amplification; (ii) probes targeting cyclin-dependent kinase 4 were used to validate an amplification identified by microarray-based comparative genomic hybridization (aCGH) in a pleomorphic adenoma; (iii) probes targeting fibroblast growth factor receptor 1 and CCND1 were used to validate amplifications mapping to these regions, as defined by aCGH, in an invasive lobular breast carcinoma with FISH and CISH; and (iv) gene-specific probes for ETV6 and NTRK3 were used to demonstrate the presence of the t(12;15)(p12;q25) translocation in a case of breast secretory carcinoma with dual-colour FISH. In summary, this protocol enables the generation of probes mapping to any gene of interest that can be applied to FFPETS, allowing correlation of morphological features with gene copy number.