892 results for Energy consumption -- Computer simulation
Abstract:
Electric propulsion is now a successful method for the primary propulsion of long-duration deep space missions and for geosynchronous satellite attitude control. The Closed Drift Thruster, also called the Hall Thruster or SPT (Stationary Plasma Thruster), was originally conceived in the USSR (the former Soviet Union) and has since been developed by space agencies, space research institutes and industry in several countries, including France, the USA, Israel, the Russian Federation and Brazil. In this work we present the main features of the Permanent Magnet Hall Thruster (PMHT) developed at the Plasma Laboratory of the University of Brasilia. The idea of using an array of permanent magnets, instead of an electromagnet, to produce a radial magnetic field inside the plasma channel of the thruster is very significant: it allows the development of a Hall Thruster with power consumption low enough for use on small and medium-size satellites. A new vacuum chamber used to test the second prototype of the PMHT (PHALL II) is described. PHALL II has an aluminum plasma chamber, is smaller, with a 15 cm diameter, and will contain rare-earth magnets. We show plasma density and temperature space profiles inside and outside the thruster channel. Ion temperature measurements based on Doppler broadening of spectral lines and ion energy measurements are also shown. Based on the measured plasma parameters we constructed a performance figure of merit for the PMHT, containing the specific impulse, total thrust, propellant flow rate and power consumption necessary for orbit raising of satellites. Based on previous studies of geosynchronous satellite orbit positioning, we perform numerical simulations of satellite orbit raising from an altitude of 700 km to 36000 km using a PMHT operating in the 100 mN - 500 mN thrust range; numerical integration techniques were used to perform these calculations. The main simulation parameters were orbit-raising time, fuel mass, total satellite mass, thrust and exhaust velocity. We conclude by comparing our results with those obtained in known space missions performed with Hall Thrusters. © 2008 by the American Institute of Aeronautics and Astronautics, Inc.
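A back-of-the-envelope version of such an orbit-raising estimate can be written in a few lines. The sketch below is not the authors' simulation; it assumes a continuous-thrust circular spiral (for which the required delta-v is approximately the difference of the two circular orbital speeds), an illustrative specific impulse of 1600 s, and an illustrative 500 kg initial satellite mass, and derives fuel mass and transfer time from the rocket equation at constant thrust.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]
R_EARTH = 6.371e6     # mean Earth radius [m]
G0 = 9.80665          # standard gravity [m/s^2]

def spiral_raise(thrust_N, isp_s=1600.0, m0_kg=500.0,
                 h1_m=700e3, h2_m=36000e3):
    """Low-thrust circular-spiral orbit raising (Edelbaum, no plane change).

    isp_s and m0_kg are illustrative placeholder values, not parameters
    taken from the PMHT paper.
    """
    v1 = math.sqrt(MU / (R_EARTH + h1_m))     # circular speed at 700 km
    v2 = math.sqrt(MU / (R_EARTH + h2_m))     # circular speed at 36000 km
    dv = abs(v1 - v2)                         # spiral delta-v approximation
    mf = m0_kg * math.exp(-dv / (isp_s * G0))  # rocket equation
    mdot = thrust_N / (isp_s * G0)             # constant propellant flow rate
    t_days = (m0_kg - mf) / mdot / 86400.0     # burn time at constant thrust
    return dv, m0_kg - mf, t_days

for thrust in (0.100, 0.300, 0.500):           # 100, 300, 500 mN
    dv, fuel, days = spiral_raise(thrust)
    print(f"T = {thrust*1e3:3.0f} mN: dv = {dv:6.0f} m/s, "
          f"fuel = {fuel:5.1f} kg, transfer = {days:6.1f} days")
```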
Abstract:
In this thesis I present a new coarse-grained model suitable for investigating the phase behavior of rod-coil block copolymers on mesoscopic length scales. In this model the rods are represented by hard spherocylinders, whereas the coil block consists of interconnected beads. The interactions between the constituents are based on local densities, which facilitates efficient Monte Carlo sampling of the phase space. I verify the applicability of the model and the simulation approach by means of several examples. I treat pure rod systems and mixtures of rod and coil polymers. Then I append coils to the rods and investigate the role of the different model parameters. Furthermore, I compare different implementations of the model. I show that the rod-coil block copolymers in our model exhibit typical micro-phase-separated configurations as well as extraordinary phases, such as the wavy lamellar state, percolating structures and clusters. Additionally, I demonstrate the metastability of the observed zigzag phase in our model. A central point of this thesis is the examination of the phase behavior of the rod-coil block copolymers in dependence on different chain lengths and interaction strengths between rods and coils. The observations of these studies are summarized in a phase diagram for rod-coil block copolymers. Furthermore, I validate a stabilization of the smectic phase with increasing coil fraction. In the second part of this work I present a side project in which I derive a model permitting the simulation of tetrapods with and without grafted semiconducting block copolymers. The effect of these polymers is added in an implicit manner by effective interactions between the tetrapods. While the depletion interaction is described approximately within the Asakura-Oosawa model, the free energy penalty for brush compression is calculated within the Alexander-de Gennes model. Recent experiments with CdSe tetrapods show that grafted tetrapods are clearly much better dispersed in the polymer matrix than bare tetrapods. My simulations confirm that bare tetrapods tend to aggregate in the matrix of excess polymers, while clustering is significantly reduced after grafting polymer chains to the tetrapods. Finally, I propose a possible extension enabling the simulation of a system with fluctuating volume and demonstrate its basic functionality. This study originated in a cooperation with an experimental group, with the goal of analyzing the morphology of these systems in order to find the ideal morphology for hybrid solar cells.
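The Asakura-Oosawa depletion attraction mentioned above has a simple closed form for two hard spheres in a bath of ideal depletants; the hedged sketch below evaluates it, assuming an ideal-gas osmotic pressure and illustrative particle sizes (it stands in for spherical building blocks, not for the full tetrapod geometry used in the thesis).

```python
import math

def ao_depletion_potential(r, R, delta, rho_dep, kT=1.0):
    """Asakura-Oosawa depletion potential between two hard spheres.

    r        centre-centre distance
    R        colloid radius
    delta    depletant radius (range of the attraction)
    rho_dep  number density of ideal depletants
    Returns U(r) in units of kT. Parameters are illustrative only.
    """
    Rd = R + delta                    # radius of the depletion sphere
    if r < 2.0 * R:
        return float("inf")           # hard-core overlap
    if r >= 2.0 * Rd:
        return 0.0                    # depletion zones no longer overlap
    # Lens-shaped overlap volume of the two depletion spheres
    v_overlap = (4.0 * math.pi / 3.0) * Rd**3 * (
        1.0 - 3.0 * r / (4.0 * Rd) + r**3 / (16.0 * Rd**3))
    # Ideal depletants: osmotic pressure Pi = rho * kT
    return -rho_dep * kT * v_overlap

# Contact value for size ratio delta/R = 0.2 at depletant density 0.4
print(ao_depletion_potential(r=2.0, R=1.0, delta=0.2, rho_dep=0.4))
```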
Abstract:
In this paper, we propose an intelligent method, named the Novelty Detection Power Meter (NodePM), to detect novelties in electronic equipment monitored by a smart grid. Considering the entropy of each monitored device, calculated from a Markov chain model, the proposed method identifies novelties through a machine learning algorithm. To this end, the NodePM is integrated into a platform for the remote monitoring of energy consumption, which consists of a wireless sensor network (WSN). It should be stressed that, unlike many related works that are evaluated in simulated environments, our experiments were conducted in real environments. The results show that the NodePM reduces the power consumption of the monitored equipment by 13.7%. In addition, the NodePM detects novelties more efficiently than an approach from the literature, surpassing it in different scenarios in all evaluations that were carried out.
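The entropy of a device's state sequence under a Markov chain model can be estimated directly from observed state transitions. The sketch below is a generic illustration of that step, not the NodePM implementation: it counts transitions, normalizes rows into a transition matrix, and computes the entropy rate weighted by the empirical state occupancy.

```python
import math
from collections import Counter, defaultdict

def markov_entropy_rate(states):
    """Entropy rate (bits/step) of a state sequence under a first-order
    Markov model, weighting each state by its empirical frequency."""
    transitions = defaultdict(Counter)
    for cur, nxt in zip(states, states[1:]):
        transitions[cur][nxt] += 1
    occupancy = Counter(states[:-1])
    total = len(states) - 1
    h = 0.0
    for s, outs in transitions.items():
        n_s = sum(outs.values())
        for count in outs.values():
            p = count / n_s                     # P(next | current = s)
            h -= (occupancy[s] / total) * p * math.log2(p)
    return h

# A device mostly idle ("I") with occasional active bursts ("A"):
trace = list("IIIIAIIIIAAIIIIIAIII")
print(f"entropy rate = {markov_entropy_rate(trace):.3f} bits/step")
```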
Abstract:
Despite major advances in the study of glioma, the quantitative links between intra-tumor molecular/cellular properties, clinically observable properties such as morphology, and critical tumor behaviors such as growth and invasiveness remain unclear, hampering more effective coupling of tumor physical characteristics with implications for prognosis and therapy. Although molecular biology, histopathology, and radiological imaging are employed in this endeavor, studies are severely challenged by the multitude of different physical scales involved in tumor growth, i.e., from molecular nanoscale to cell microscale and finally to tissue centimeter scale. Consequently, it is often difficult to determine the underlying dynamics across dimensions. New techniques are needed to tackle these issues. Here, we address this multi-scalar problem by employing a novel predictive three-dimensional mathematical and computational model based on first-principle equations (conservation laws of physics) that describe mathematically the diffusion of cell substrates and other processes determining tumor mass growth and invasion. The model uses conserved variables to represent known determinants of glioma behavior, e.g., cell density and oxygen concentration, as well as biological functional relationships and parameters linking phenomena at different scales whose specific forms and values are hypothesized and calculated based on in vitro and in vivo experiments and from histopathology of tissue specimens from human gliomas. This model enables correlation of glioma morphology to tumor growth by quantifying interdependence of tumor mass on the microenvironment (e.g., hypoxia, tissue disruption) and on the cellular phenotypes (e.g., mitosis and apoptosis rates, cell adhesion strength). Once functional relationships between variables and associated parameter values have been informed, e.g., from histopathology or intra-operative analysis, this model can be used for disease diagnosis/prognosis, hypothesis testing, and to guide surgery and therapy. In particular, this tool identifies and quantifies the effects of vascularization and other cell-scale glioma morphological characteristics as predictors of tumor-scale growth and invasion.
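The conservation-law backbone of such models is typically a set of reaction-diffusion equations; the simplest member of that family, often used for tumor cell density, is du/dt = D∇²u + ρu(1−u). The sketch below integrates a 1D version with explicit finite differences purely as an illustration of the equation class; the three-dimensional multi-scale model described in the abstract couples several such conserved fields (cell density, oxygen concentration, etc.), and all parameter values here are illustrative.

```python
import numpy as np

# 1D Fisher-KPP reaction-diffusion: du/dt = D d2u/dx2 + rho * u * (1 - u)
# Illustrative parameters, not values from the glioma model in the abstract.
D, rho = 0.1, 1.0           # diffusivity, net proliferation rate
nx, dx, dt = 200, 0.5, 0.1  # grid size, spacing, time step (dt < dx^2/(2D))
u = np.zeros(nx)
u[:10] = 1.0                # initial tumor seed at the left boundary

for _ in range(1000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0            # no-flux ends (crude)
    u += dt * (D * lap + rho * u * (1 - u))

# Position of the invasion front (where density drops below one half)
front = np.argmax(u < 0.5)
print(f"front has advanced to x = {front * dx:.1f}")
```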
Abstract:
The widespread use of wireless-enabled devices and the increasing capabilities of wireless technologies have promoted multimedia content access and sharing among users. However, the quality perceived by users still depends on multiple factors, such as video characteristics, device capabilities, and link quality. While video characteristics include temporal and spatial complexity as well as coding complexity, one of the most important device characteristics is battery lifetime. There is a need to assess how these aspects interact and how they impact overall user satisfaction. This paper advances previous works by proposing and validating a flexible framework, named EViTEQ, to be applied in real testbeds to satisfy the requirements of performance assessment. EViTEQ measures network interface energy consumption with high precision, while being completely technology-independent and assessing application-level quality of experience. The results obtained in the testbed show the relevance of combined multi-criteria measurement approaches, leading to superior evaluation of end-user satisfaction.
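Measuring interface energy consumption from instrumented hardware usually reduces to integrating sampled voltage and current over time. The fragment below is a generic, hedged illustration of that accumulation step (trapezoidal rule over timestamped samples), not EViTEQ's actual measurement pipeline.

```python
def energy_joules(samples):
    """Integrate power over timestamped (t_s, volts, amps) samples
    using the trapezoidal rule. Generic illustration, not EViTEQ code."""
    total = 0.0
    for (t0, v0, i0), (t1, v1, i1) in zip(samples, samples[1:]):
        p0, p1 = v0 * i0, v1 * i1          # instantaneous power [W]
        total += 0.5 * (p0 + p1) * (t1 - t0)
    return total

# Wi-Fi interface ramping from idle to transmit and back (made-up values):
trace = [(0.0, 3.3, 0.02), (0.5, 3.3, 0.25), (1.0, 3.3, 0.26),
         (1.5, 3.3, 0.03)]
print(f"{energy_joules(trace):.3f} J over {trace[-1][0]} s")
```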
Abstract:
State-based approaches to energy consumption modelling often assume a constant energy consumption value in each state. However, in certain situations the energy consumption is not constant but fluctuates, during state transitions or even within a state. This paper discusses these issues by presenting examples of such cases from wireless sensor networks and wireless local area networks, together with possible solutions.
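The modelling gap the paper points at is easy to see numerically: a state-based model charges each state its nominal power times residence time, while a measured trace may fluctuate within a state. The hedged sketch below compares the two on made-up numbers.

```python
# Compare a constant-power-per-state model with a fluctuating trace.
# All numbers are illustrative, not measurements from the paper.
NOMINAL_POWER = {"sleep": 0.001, "idle": 0.010, "tx": 0.200}  # watts

def state_model_energy(schedule):
    """schedule: list of (state, duration_s) pairs."""
    return sum(NOMINAL_POWER[s] * d for s, d in schedule)

def trace_energy(trace, dt):
    """trace: per-sample power readings [W] at fixed interval dt."""
    return sum(trace) * dt

schedule = [("idle", 2.0), ("tx", 1.0), ("sleep", 5.0)]
# Fluctuating tx power (ramp-up transient) sampled at 10 Hz:
tx_trace = [0.12, 0.18, 0.22, 0.24, 0.21, 0.20, 0.20, 0.20, 0.20, 0.20]
measured = (trace_energy([0.01] * 20, 0.1)      # 2 s idle
            + trace_energy(tx_trace, 0.1)       # 1 s transmit, fluctuating
            + trace_energy([0.001] * 50, 0.1))  # 5 s sleep
print(f"state model: {state_model_energy(schedule):.4f} J, "
      f"trace: {measured:.4f} J")
```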
Abstract:
Resource analysis aims at inferring the cost of executing programs for any possible input, in terms of a given resource such as the traditional execution steps, time, or memory, and, more recently, energy consumption or user-defined resources (e.g., number of bits sent over a socket, number of database accesses, number of calls to particular procedures, etc.). This is performed statically, i.e., without actually running the programs. Resource usage information is useful for a variety of optimization and verification applications, as well as for guiding software design. For example, programmers can use such information to choose different algorithmic solutions to a problem; program transformation systems can use cost information to choose between alternative transformations; parallelizing compilers can use cost estimates for granularity control, which tries to balance the overheads of task creation and manipulation against the benefits of parallelization. In this thesis we have significantly improved an existing prototype implementation for resource usage analysis based on abstract interpretation, addressing a number of relevant challenges and overcoming many limitations it presented. The goal of that prototype was to show the viability of casting the resource analysis as an abstract domain, and how it could overcome important limitations of the state-of-the-art resource usage analysis tools. For this purpose, it was implemented as an abstract domain in the abstract interpretation framework of the CiaoPP system, PLAI. We have improved both the design and implementation of the prototype, eventually allowing the tool to evolve to the industrial application level. The abstract operations of this tool depend heavily on setting up, and finding closed-form solutions of, recurrence relations that represent the resource usage behavior of program components and of the whole program. While there exist many tools, such as Computer Algebra Systems (CAS) and libraries, able to find closed-form solutions for some types of recurrences, none of them alone is able to handle all the types of recurrences arising during program analysis. In addition, some types of recurrences cannot be solved by any existing tool. This clearly constitutes a bottleneck for this kind of resource usage analysis. Thus, one of the major challenges we have addressed in this thesis is the design and development of a novel modular framework for solving recurrence relations, able to combine and take advantage of the results of existing solvers. Additionally, we have developed and integrated into our novel solver a technique for finding upper-bound closed-form solutions of a special class of recurrence relations that arise during the analysis of programs with accumulating parameters.
Finally, we have integrated the improved resource analysis into the CiaoPP general framework for resource usage verification, and specialized the framework for verifying energy consumption specifications of embedded imperative programs in a real application, showing the usefulness and practicality of the resulting tool.
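Closed-form solving of recurrence relations, the bottleneck discussed above, is exactly what a CAS handles in the easy cases. The hedged sketch below uses SymPy's rsolve on two textbook linear recurrences; the thesis's modular solver exists precisely because program analysis produces many recurrences that fall outside what any single such tool can solve.

```python
from sympy import Function, rsolve, simplify
from sympy.abc import n

y = Function('y')

# Cost recurrence of a loop doing n units of work per iteration:
#   y(n+1) = y(n) + (n+1), y(0) = 0  ->  closed form n*(n+1)/2
linear = rsolve(y(n + 1) - y(n) - (n + 1), y(n), {y(0): 0})
print(simplify(linear))

# A second-order recurrence (Fibonacci), also solvable in closed form:
fib = rsolve(y(n + 2) - y(n + 1) - y(n), y(n), {y(0): 0, y(1): 1})
print(simplify(fib))
```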
Abstract:
Energy consumption in Wireless Sensor Networks (WSNs) is a long-standing problem that has been addressed from different areas and on many levels, and it should not be approached only from the point of view of network survival: the increased use of smart devices and the introduction of the Internet of Things (IoT) give WSNs a growing influence on the carbon footprint, so optimizing their energy consumption could considerably reduce their environmental impact. In recent years another problem has been added to the equation: spectrum saturation. WSNs usually operate in unlicensed spectrum bands such as the Industrial, Scientific, and Medical (ISM) bands, shared with other networks (mainly Wi-Fi and Bluetooth) whose use has grown exponentially. To address the efficient spectrum utilization problem, Cognitive Radio (CR) has emerged as the key technology enabling opportunistic access to the spectrum. Introducing cognitive capabilities into WSNs not only optimizes their spectral occupation: Cognitive Wireless Sensor Networks (CWSNs) also increase the reliability of communications and have a positive impact on parameters such as Quality of Service (QoS), network security, and energy consumption. However, this new paradigm also poses challenges related to energy consumption. Specifically, the spectrum sensing stage, collaboration among devices (which requires extra communication), and changes in the transmission parameters increase the total energy consumption of the network compared with classical WSNs. When designing CWSN optimization strategies, the severe limitations of WSN nodes in terms of memory, computational power, and energy must be considered, so light strategies requiring low computing capacity are needed. Since the field of energy conservation in WSNs has been widely explored, we assume that new strategies should emerge from the new opportunities introduced by cognitive networks.
In this PhD thesis we present two strategies for energy consumption reduction in CWSNs, supported by three main pillars. The first pillar is the cognitive capabilities added to the WSN, which provide the ability to adapt the transmission parameters according to the available spectrum. The second pillar is collaboration, as an intrinsic characteristic of CWSNs. The third pillar is game theory as the decision-making algorithm, widely used in WSNs due to the lightness and simplicity that make it suitable for CWSNs. As a first contribution of the thesis, we carry out a complete analysis of the energy-saving possibilities introduced by cognitive radio in WSNs; from the conclusions of this analysis we state the hypotheses of the thesis regarding the validity of using cognitive capabilities as a tool for reducing energy consumption in CWSNs. We then develop the main contributions of the thesis: two energy-reduction strategies based on game theory and CR. The first uses a non-cooperative game played between pairs of players in a simple, selfish way. In the second strategy the game remains non-cooperative, but decisions are taken through collaboration. For each strategy we present the game model, the formal analysis of equilibria and optima, and the complete strategy, including the interaction between nodes. In order to test the strategies through simulation and implementation on real devices, we have developed a testing framework composed of a cognitive simulator based on Castalia and a testbed of cognitive nodes, developed in the B105 Lab, able to communicate in three ISM bands; this framework is another contribution of the thesis that will support further research on CWSNs. Finally, we present and discuss the results. The first strategy yields energy savings of over 65% compared with a WSN without cognitive capabilities, and around 25% compared with a cognitive strategy based on periodic spectrum sensing that switches channel according to a fixed noise threshold. The algorithm behaves similarly regardless of the noise level, provided the noise is spatially uniform, and, despite its simplicity, its use of game theory at the design stage ensures energy-optimal node behavior. The collaborative strategy improves on the previous one in terms of noise protection in more complex noise scenarios, where it provides a 50% improvement over the first strategy.
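As a toy version of the kind of non-cooperative game used here, consider two nodes that each choose between staying on a noisy channel and paying the sensing/switching energy to move; the sketch below enumerates a 2x2 payoff matrix and finds its pure-strategy Nash equilibria by checking best responses. The payoff numbers are invented for illustration and are not the utility function from the thesis.

```python
# Pure-strategy Nash equilibria of a 2x2 game by best-response checking.
# Payoffs are negated energy costs (illustrative values only).
# Actions: 0 = stay on noisy channel, 1 = sense and switch.
ACTIONS = ("stay", "switch")
# PAYOFF[(a, b)] = (utility of node A, utility of node B)
PAYOFF = {
    (0, 0): (-5.0, -5.0),   # both stay: retransmissions on noisy channel
    (0, 1): (-4.0, -2.5),   # B switches alone, A still suffers some noise
    (1, 0): (-2.5, -4.0),
    (1, 1): (-2.0, -2.0),   # both switch: sensing cost, clean channel
}

def is_nash(a, b):
    ua, ub = PAYOFF[(a, b)]
    best_a = all(ua >= PAYOFF[(a2, b)][0] for a2 in (0, 1))
    best_b = all(ub >= PAYOFF[(a, b2)][1] for b2 in (0, 1))
    return best_a and best_b

for a in (0, 1):
    for b in (0, 1):
        if is_nash(a, b):
            print(f"Nash equilibrium: A={ACTIONS[a]}, B={ACTIONS[b]}")
```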
Abstract:
We describe a procedure for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (i) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (ii) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (iii) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (iv) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol (1 kcal = 4.18 kJ) of experimental values. The use of the calibrated QM and microsolvation QM/MM (molecular mechanics) models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa values of the reacting species.
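The experimental reference value quoted above comes from the standard relation between a proton-transfer free energy and the pKa difference of the two acids, dG = RT ln(10) * dpKa. The sketch below evaluates it; the pKa values used (methanol about 15.5, imidazolium about 7.0) are typical literature numbers inserted here for illustration, not values taken from the paper.

```python
import math

R = 1.98720425864083e-3   # gas constant [kcal/(mol*K)]
T = 298.15                # temperature [K]

def proton_transfer_dg(pka_donor, pka_acceptor_conj_acid):
    """Free energy [kcal/mol] for AH + B -> A- + BH+ from pKa values."""
    return R * T * math.log(10) * (pka_donor - pka_acceptor_conj_acid)

# Methanol (donor) to imidazole (acceptor); illustrative pKa values.
dg = proton_transfer_dg(pka_donor=15.5, pka_acceptor_conj_acid=7.0)
print(f"dG = {dg:.1f} kcal/mol")   # ~11.6, same order as the quoted 12.8
```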
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Transportation Department, Washington, D.C.
Abstract:
A computer model of the mechanical alloying process has been developed to simulate phase formation during the mechanical alloying of Mo and Si elemental powders with a ternary addition of Al, Mg, Ti or Zr. Using the Arrhenius equation, the model balances the formation rates of the competing reactions observed during milling. These reactions include the formation of tetragonal C11b MoSi2 (t-MoSi2) by combustion, the formation of the hexagonal C40 MoSi2 polymorph (h-MoSi2), the transformation of the tetragonal to the hexagonal form, and the recovery of t-MoSi2 from h-MoSi2 and deformed t-MoSi2. The ternary additions change the free energy of formation of the associated MoSi2 alloys, i.e. Mo(Si,Al)2, Mo(Mg,Al)2, (Mo,Ti)Si2, (Mo,Zr)Si2 and (Mo,Fe)Si2, respectively. Variation of the energy of formation alone is sufficient for the simulation to accurately model the observed phase formation. (C) 2003 Elsevier B.V. All rights reserved.
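The rate-balancing step rests on the Arrhenius form k = A exp(-Ea/RT). As a hedged illustration (the activation energies and prefactors below are invented, not the paper's fitted values), the fragment computes the relative formation rates of two competing polymorphs and how the balance shifts with milling temperature.

```python
import math

R = 8.314  # gas constant [J/(mol*K)]

def arrhenius(A, Ea, T):
    """Arrhenius rate constant: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Two competing product phases with made-up kinetic parameters:
phases = {
    "t-MoSi2": {"A": 1.0e8, "Ea": 120e3},   # Ea in J/mol, illustrative
    "h-MoSi2": {"A": 5.0e9, "Ea": 160e3},
}

for T in (500.0, 700.0, 900.0):   # effective milling temperatures [K]
    rates = {name: arrhenius(p["A"], p["Ea"], T)
             for name, p in phases.items()}
    total = sum(rates.values())
    shares = ", ".join(f"{n}: {k / total:.2%}" for n, k in rates.items())
    print(f"T = {T:.0f} K -> {shares}")
```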
Abstract:
The adsorption of simple Lennard-Jones fluids in a carbon slit pore of finite length was studied with Canonical Ensemble (NVT) and Gibbs Ensemble Monte Carlo (GEMC) simulations. The Canonical Ensemble was a collection of cubic simulation boxes in which a finite pore resides, while the Gibbs Ensemble was that of the pore space of the finite pore. Argon was used as a model Lennard-Jones fluid, while the adsorbent was modelled as a finite carbon slit pore whose two walls were composed of three graphene layers with carbon atoms arranged in a hexagonal pattern. The Lennard-Jones (LJ) 12-6 potential model was used to compute the interaction energy between two fluid particles, and also between a fluid particle and a carbon atom. Argon adsorption isotherms were obtained at 87.3 K for pore widths of 1.0, 1.5 and 2.0 nm using both Canonical and Gibbs Ensembles. These results were compared with isotherms obtained for corresponding infinite pores using the Grand Canonical Ensemble. The effects of the number of cycles necessary to reach equilibrium, the initial allocation of particles, the displacement step and the simulation box size were investigated in particular for the Canonical Ensemble Monte Carlo simulations. Of these parameters, the displacement step had the most significant effect on the performance of the simulation. The simulation box size was also important, especially at low pressures, at which the box must be sufficiently large to contain a statistically acceptable number of particles in the bulk phase. Finally, the Canonical Ensemble and the Gibbs Ensemble were found to yield the same isotherm (within statistical error); the computation time for GEMC was shorter, but the Canonical Ensemble method described the proper interface between the reservoir and the adsorbed phase (and hence the meniscus).
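The pairwise energies in such simulations come from the Lennard-Jones 12-6 potential, U(r) = 4e[(s/r)^12 - (s/r)^6]; the hedged sketch below evaluates it with commonly quoted argon-like parameters and applies the standard Metropolis acceptance rule of canonical Monte Carlo (parameters are illustrative, not the paper's).

```python
import math
import random

# Argon-like LJ parameters (common literature values, used illustratively)
EPS_K = 119.8    # epsilon / k_B [K]
SIGMA = 0.3405   # sigma [nm]

def lj(r_nm, eps_k=EPS_K, sigma=SIGMA):
    """Lennard-Jones 12-6 pair energy in kelvin units (U / k_B)."""
    sr6 = (sigma / r_nm) ** 6
    return 4.0 * eps_k * (sr6 * sr6 - sr6)

def metropolis_accept(delta_u_k, T):
    """Canonical-ensemble acceptance for an energy change delta_u (in K)."""
    return delta_u_k <= 0.0 or random.random() < math.exp(-delta_u_k / T)

# Energy at the potential minimum r = 2^(1/6) * sigma is exactly -epsilon:
print(f"U(r_min) = {lj(2 ** (1 / 6) * SIGMA):.1f} K")   # -119.8
# Accept/reject a trial move that raises the energy by 50 K at 87.3 K:
print(metropolis_accept(50.0, T=87.3))
```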
Abstract:
The XSophe computer simulation software suite, consisting of a daemon, the XSophe interface and the computational program Sophe, is a state-of-the-art package for the simulation of electron paramagnetic resonance spectra. The Sophe program performs the computer simulation and includes a number of new technologies, including: the SOPHE partition and interpolation schemes, a field segmentation algorithm, homotopy, parallelisation and spectral optimisation. The SOPHE partition and interpolation scheme, along with the field segmentation algorithm, greatly increases the speed of simulations for most systems. Multidimensional homotopy provides an efficient method for accurately tracing energy levels, and hence transitions, in the presence of energy level anticrossings and looping transitions, and allows computer simulations in frequency space. Recent enhancements to Sophe include a generalised treatment of distributions of orientational parameters, termed the mosaic misorientation linewidth model, and a faster, more efficient algorithm for the calculation of resonant field positions and transition probabilities. For complex systems, parallelisation enables their simulation on a parallel computer, and the optimisation algorithms in the suite offer the experimentalist the possibility of finding the spin Hamiltonian parameters in a systematic manner rather than by trial and error. The XSophe software suite has been used to simulate multifrequency EPR spectra (200 MHz to 600 GHz) from isolated spin systems (S ≥ 1/2) and coupled centres (Si, Sj ≥ 1/2). Griffin, M.; Muys, A.; Noble, C.; Wang, D.; Eldershaw, C.; Gates, K.E.; Burrage, K.; Hanson, G.R., "XSophe, a Computer Simulation Software Suite for the Analysis of Electron Paramagnetic Resonance Spectra", Mol. Phys. Rep. 1999, 26, 60-84.
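For the simplest isotropic S = 1/2 spin system, the resonant field position that packages like Sophe compute reduces to B_res = h*nu / (g*mu_B); the sketch below evaluates it across the multifrequency range quoted above. This is the textbook limiting case, not Sophe's matrix-diagonalisation algorithm.

```python
H_PLANCK = 6.62607015e-34   # Planck constant [J*s]
MU_B = 9.2740100783e-24     # Bohr magneton [J/T]

def resonance_field(freq_hz, g=2.0023):
    """Resonant field [T] for an isotropic S = 1/2 spin: B = h*nu/(g*mu_B)."""
    return H_PLANCK * freq_hz / (g * MU_B)

# Span of the multifrequency range mentioned in the abstract:
for nu in (200e6, 9.5e9, 600e9):        # low-frequency edge, X-band, high field
    print(f"nu = {nu/1e9:7.2f} GHz -> B_res = {resonance_field(nu):8.4f} T")
```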