898 results for LARGE SYSTEMS
Abstract:
Current methods for molecular simulation of electric double layer capacitors (EDLCs) place both the electrodes and the electrolyte region in a single simulation box. This necessitates simulation of the interface between the electrode and electrolyte regions. Typical capacitors have macroscopic dimensions, where the fraction of molecules at this interface is very low. Hence, large system sizes are needed to minimize the interfacial effects. To overcome these problems, a new technique based on the Gibbs ensemble is proposed for simulation of an EDLC. In the proposed technique, each electrode is simulated in a separate simulation box. Application of periodic boundary conditions eliminates the interfacial effects. This, in addition to the use of a constant-voltage ensemble, allows a more convenient comparison of simulation results with experimental measurements on typical EDLCs. (C) 2014 AIP Publishing LLC.
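As a hedged illustration of the box-to-box ion transfers such a two-box scheme relies on (the abstract does not give the paper's exact constant-voltage acceptance rule), the standard Gibbs-ensemble transfer criterion, augmented by an assumed electrostatic work term for moving an ion of charge q between electrode boxes held at a potential difference ΔΦ, would read

\[
P_{\mathrm{acc}}(1 \to 2) \;=\; \min\!\left[\,1,\; \frac{N_1 V_2}{(N_2+1)\,V_1}\,\exp\!\big(-\beta\,[\Delta U + q\,\Delta\Phi]\big)\right],
\]

where N_i and V_i are the ion count and volume of box i, ΔU is the change in configurational energy, and β = 1/k_BT. The qΔΦ term is included here only to indicate how a fixed voltage could enter the transfer move; the published ensemble may weight transfers differently.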
Abstract:
The field of micro-/nano-mechanics of materials has been driven, on the one hand, by the development of ever smaller structures in devices and, on the other, by the need to map property variations in large systems that are microstructurally graded. Observations of 'smaller is stronger' have also raised questions about accompanying changes in the fracture properties of materials. In the wake of scattered articles on micro-scale fracture testing of various material classes, this review attempts to provide a holistic picture of the current state of the art. In the process, various reliable micro-scale test geometries are presented, challenges with respect to instrumentation for probing ever smaller length scales are discussed, and examples from recent literature are brought together to illustrate the range of unusual fracture responses of materials, from ductility in Si to brittleness in Pt. Outstanding issues related to the fracture mechanics of small structures are critically examined for plausible solutions.
Abstract:
Researchers have spent decades refining and improving their methods for fabricating smaller, finer-tuned, higher-quality nanoscale optical elements with the goal of making more sensitive and accurate optical measurements of the world around them. Quantum optics has been a well-established tool of choice for making these increasingly sensitive measurements, which have repeatedly pushed the limits on measurement accuracy set forth by quantum mechanics. A recent development in quantum optics has been the creative integration of robust, high-quality, well-established macroscopic experimental systems with highly engineerable on-chip nanoscale oscillators fabricated in cleanrooms. However, merging large systems with nanoscale oscillators often requires the oscillators to have extremely high aspect ratios, which makes them delicate and difficult to fabricate with experimentally reasonable repeatability, yield, and quality. In this work we give an overview of our research, which focused on microscopic oscillators coupled to macroscopic optical cavities, with the goal of cooling them to their motional ground state in room-temperature environments. The quality factor of a mechanical resonator is an important figure of merit for various sensing applications and for observing quantum behavior. We demonstrated a technique for pushing the quality factor of a micromechanical resonator beyond conventional material and fabrication limits by using an optical field to stiffen and trap a particular motional mode of a nanoscale oscillator. Optical forces increase the oscillation frequency by storing most of the mechanical energy in a nearly lossless optical potential, thereby strongly diluting the effects of material dissipation. By placing a 130 nm-thick SiO2 pendulum in an optical standing wave, we achieve an increase in the pendulum center-of-mass frequency from 6.2 to 145 kHz. The corresponding quality factor increases 50-fold from its intrinsic value to a final value of Qm = 5.8(1.1) × 10^5, representing more than an order of magnitude improvement over the conventional limits of SiO2 for a pendulum geometry. Our technique may enable new opportunities for mechanical sensing and facilitate observations of quantum behavior in this class of mechanical systems. We then give a detailed overview of the techniques used to produce high-aspect-ratio nanostructures with applications in a wide range of quantum optics experiments. The ability to fabricate such nanodevices with high precision opens the door to a vast array of experiments that integrate macroscopic optical setups with lithographically engineered nanodevices. In conjunction with atom-trapping experiments in the Kimble Lab, we use these techniques to realize a new waveguide chip designed to address ultra-cold atoms along lithographically patterned nanobeams, which have large atom-photon coupling and nearly 4π steradians of optical access for cooling and trapping atoms. We describe a fully integrated and scalable design in which cold atoms are spatially overlapped with the nanostring cavities in order to observe a resonant optical depth of d0 ≈ 0.15. The nanodevice illuminates new possibilities for integrating atoms into photonic circuits and engineering quantum states of atoms and light on a microscopic scale. We then describe our work with superconducting microwave resonators coupled to a phononic cavity, towards the goal of building an integrated device for quantum-limited microwave-to-optical wavelength conversion.
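As background to the dissipation-dilution argument above, an idealized bound (assuming the material loss per cycle is unchanged and the optical potential is perfectly lossless) relates the trapped and intrinsic quality factors through the ratio of stored energies,

\[
Q_{\mathrm{diluted}} \;\lesssim\; Q_{\mathrm{int}}\,\frac{E_{\mathrm{total}}}{E_{\mathrm{elastic}}} \;\approx\; Q_{\mathrm{int}}\left(\frac{f_{\mathrm{trapped}}}{f_{\mathrm{int}}}\right)^{2},
\]

which for the quoted frequencies gives a ceiling of roughly (145/6.2)^2 ≈ 5 × 10^2 times the intrinsic value; the reported 50-fold gain sits below this idealized limit, as would be expected when additional loss channels remain.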
We give an overview of our characterizations of several types of substrates for fabricating a low-loss, high-frequency electromechanical system. We describe our electromechanical system fabricated on a Si3N4 membrane, which consists of a 12 GHz superconducting LC resonator coupled capacitively to the high-frequency localized modes of a phononic nanobeam. Using our suspended-membrane geometry, we isolate our system from substrates with significant loss tangents, drastically reducing the parasitic capacitance of our superconducting circuit to ≈ 2.5 fF. This opens up a number of possibilities for a new class of low-loss, high-frequency electromechanical devices with relatively large electromechanical coupling. We present our substrate studies, fabrication methods, and device characterization.
Abstract:
In this report we have attempted to evaluate the ecological and economic consequences of hypoxia in the northern Gulf of Mexico. Although our initial approach was to rely on published accounts, we quickly realized that the body of published literature dealing with hypoxia was limited, and we would have to conduct our own exploratory analysis of existing Gulf data, or rely on published accounts from other systems to infer possible or potential effects of hypoxia. For the economic analysis, we developed a conceptual model of how hypoxia-related impacts could affect fisheries. Our model included both supply and demand components. The supply model had two components: (1) a physical production function for fish or shrimp, and (2) the cost of fishing. If hypoxia causes the cost of a unit of fishing effort to change, then this will result in a shift in supply. The demand model considered how hypoxia might affect the quality of landed fish or shrimp. In particular, the market value per pound is lower for small shrimp than for large shrimp. Given the limitations of the ecological assessment, the shallow continental shelf area affected by hypoxia does show signs of hypoxia-related stress. While current ecological conditions are a response to a variety of stressors, the effects of hypoxia are most obvious in the benthos that experience mortality, elimination of larger long-lived species, and a shifting of productivity to nonhypoxic periods (energy pulsing). What is not known is whether hypoxia leads to higher productivity during productive periods, or simply to a reduction of productivity during oxygen-stressed periods. The economic assessment based on fisheries data, however, failed to detect effects attributable to hypoxia. Overall, fisheries landings statistics for at least the last few decades have been relatively constant. The failure to identify clear hypoxic effects in the fisheries statistics does not necessarily mean that they are absent. There are several possibilities: (1) hypoxic effects are small relative to the overall variability in the data sets evaluated; (2) the data and the power of the analyses are not adequate; and (3) currently there are no hypoxic effects on fisheries. Lack of identified hypoxic effects in available fisheries data does not imply that effects would not occur should conditions worsen. Experience with other hypoxic zones around the globe shows that both ecological and fisheries effects become progressively more severe as hypoxia increases. Several large systems around the globe have suffered serious ecological and economic consequences from seasonal summertime hypoxia; most notable are the Kattegat and Black Sea. The consequences range from localized loss of catch and recruitment failure to complete system-wide loss of fishery species. If experiences in other systems are applicable to the Gulf of Mexico, then in the face of worsening hypoxic conditions, at some point fisheries and other species will decline, perhaps precipitously.
Abstract:
Monte Carlo burnup codes use various schemes to solve the coupled criticality and burnup equations. Previous studies have shown that the simplest methods, such as the beginning-of-step and middle-of-step constant flux approximations, are numerically unstable in fuel cycle calculations of critical reactors. Here we show that even the predictor-corrector methods that are implemented in established Monte Carlo burnup codes can be numerically unstable in cycle calculations of large systems. © 2013 Elsevier Ltd. All rights reserved.
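To make the coupling schemes concrete, the sketch below contrasts a beginning-of-step constant-flux step with a simple predictor-corrector step on a hypothetical one-nuclide toy model; the constant-power flux function is an assumed stand-in for the Monte Carlo criticality solve and is not taken from any of the codes discussed.

```python
import math

# Toy depletion model: a single absorber nuclide with number density n,
# a one-group cross section, and a flux renormalized to keep a fictitious
# reactor power constant -- a stand-in for the Monte Carlo flux solve.
SIGMA = 1.0e-24        # absorption cross section (cm^2), illustrative value
POWER = 1.0e14         # target reaction rate (1/cm^3/s), illustrative value

def flux(n):
    """Flux implied by constant power: phi = P / (sigma * n)."""
    return POWER / (SIGMA * n)

def deplete(n, phi, dt):
    """Analytic solution of dn/dt = -sigma * phi * n over a step of length dt."""
    return n * math.exp(-SIGMA * phi * dt)

def step_beginning_of_step(n, dt):
    # Constant-flux approximation: evaluate the flux once, at the start of the step.
    return deplete(n, flux(n), dt)

def step_predictor_corrector(n, dt):
    # Predictor: deplete with the beginning-of-step flux.
    n_pred = deplete(n, flux(n), dt)
    # Corrector: redo the step with the average of start- and end-of-step fluxes.
    phi_avg = 0.5 * (flux(n) + flux(n_pred))
    return deplete(n, phi_avg, dt)

if __name__ == "__main__":
    n_bos, n_pc = 1.0e21, 1.0e21   # initial number densities (1/cm^3)
    for step in range(5):
        n_bos = step_beginning_of_step(n_bos, dt=30 * 86400.0)
        n_pc = step_predictor_corrector(n_pc, dt=30 * 86400.0)
        print(f"step {step}: BOS n = {n_bos:.3e}, P-C n = {n_pc:.3e}")
```

A real burnup code replaces flux() with a full Monte Carlo transport solve over many spatial regions and tracks many nuclides per region; it is in that spatially coupled setting that the instabilities discussed above are reported.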
Abstract:
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
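As a generic illustration of guaranteed delivery and idempotence over a discarding network (a sketch of the general idea only, not the Hamal protocol itself), the code below retransmits unacknowledged messages and uses per-sender sequence numbers so the receiver can suppress duplicates.

```python
# Generic sketch of at-least-once delivery with duplicate suppression over a
# lossy ("discarding") network. Names and structure are illustrative only.
import random

class Sender:
    def __init__(self):
        self.seq = 0
        self.unacked = {}            # seq -> payload awaiting acknowledgement

    def send(self, payload):
        self.seq += 1
        self.unacked[self.seq] = payload
        return (self.seq, payload)   # packet handed to the network

    def on_ack(self, seq):
        self.unacked.pop(seq, None)  # acknowledged: stop retransmitting

    def retransmissions(self):
        return [(seq, payload) for seq, payload in self.unacked.items()]

class Receiver:
    def __init__(self):
        self.delivered = set()       # sequence numbers already handled

    def on_packet(self, seq, payload, deliver):
        if seq not in self.delivered:   # idempotence: duplicates are ignored
            self.delivered.add(seq)
            deliver(payload)
        return seq                      # always (re)acknowledge

def lossy(packet, drop_prob=0.3):
    """Discarding network: packets are dropped with some probability."""
    return None if random.random() < drop_prob else packet

if __name__ == "__main__":
    tx, rx, log = Sender(), Receiver(), []
    packets = [tx.send(f"msg-{i}") for i in range(5)]
    while tx.unacked:
        for seq, payload in packets:
            if lossy((seq, payload)) is None:
                continue                          # message lost in transit
            ack = rx.on_packet(seq, payload, log.append)
            if lossy(ack) is not None:
                tx.on_ack(ack)                    # ack survived the network
        packets = tx.retransmissions()            # resend whatever is unacked
    print(log)  # every message delivered exactly once, in some order
```

Retransmission provides delivery despite drops, while the receiver's seen-set makes redelivery harmless, which is the kind of idempotence property referred to above.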
Abstract:
© 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. A key component in calculations of exchange and correlation energies is the Coulomb operator, which requires the evaluation of two-electron integrals. For localized basis sets, these four-center integrals are most efficiently evaluated with the resolution of identity (RI) technique, which expands basis-function products in an auxiliary basis. In this work we show the practical applicability of a localized RI-variant ('RI-LVL'), which expands products of basis functions only in the subset of those auxiliary basis functions which are located at the same atoms as the basis functions. We demonstrate the accuracy of RI-LVL for Hartree-Fock calculations, for the PBE0 hybrid density functional, as well as for RPA and MP2 perturbation theory. Molecular test sets used include the S22 set of weakly interacting molecules, the G3 test set, as well as the G2-1 and BH76 test sets, and heavy elements including titanium dioxide, copper and gold clusters. Our RI-LVL implementation paves the way for linear-scaling RI-based hybrid functional calculations for large systems and for all-electron many-body perturbation theory with significantly reduced computational and memory cost.
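For reference, the RI (density-fitting) factorization referred to above can be written schematically, in a common 'RI-V' form, as

\[
(ij\,|\,kl) \;\approx\; \sum_{P,Q} (ij\,|\,P)\,\big[\mathbf{V}^{-1}\big]_{PQ}\,(Q\,|\,kl),
\qquad V_{PQ} = (P\,|\,Q),
\]

where P and Q label auxiliary basis functions. In the localized RI-LVL variant described above, the expansion of each product of basis functions is restricted to auxiliary functions centered on the two atoms that carry those basis functions, with the metric evaluated within that local subset.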
Abstract:
Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm has been motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, in which the number of messages generated remains loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
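The abstract does not give the algorithm's rules, but a hypothetical three-state node loop of the general kind described (idle nodes silent, a candidate state mediating promotion to coordinator) might look like the sketch below; the state names, timeouts, and id-based tie-breaking are assumptions, not the published design.

```python
# Hypothetical sketch of a three-state election loop for a broadcast network.
# This is illustrative only and is not the tri-state algorithm of the paper.
import random
from enum import Enum, auto

class State(Enum):
    IDLE = auto()         # listens only and transmits nothing
    CANDIDATE = auto()    # has broadcast a claim, waiting out rival claims
    COORDINATOR = auto()  # periodically broadcasts heartbeats

class Node:
    def __init__(self, node_id, broadcast):
        self.id = node_id
        self.broadcast = broadcast                    # delivers a message to all other nodes
        self.state = State.IDLE
        self.idle_timeout = 3 + random.randint(0, 5)  # randomized patience
        self.silence = 0                              # ticks since a heartbeat/claim was heard
        self.claim_wait = 0                           # ticks spent as a candidate

    def on_message(self, kind, sender_id):
        self.silence = 0
        if kind == "HEARTBEAT":
            if self.state is State.COORDINATOR and sender_id < self.id:
                self.state = State.IDLE               # duplicate coordinators: higher id yields
            elif self.state is not State.COORDINATOR:
                self.state = State.IDLE               # a coordinator exists: stay quiet
        elif kind == "CLAIM":
            if self.state is State.CANDIDATE and sender_id < self.id:
                self.state = State.IDLE               # rival claim wins the tie-break

    def tick(self):
        if self.state is State.IDLE:
            self.silence += 1
            if self.silence > self.idle_timeout:      # nobody is coordinating: claim the role
                self.state, self.claim_wait = State.CANDIDATE, 0
                self.broadcast(("CLAIM", self.id))
        elif self.state is State.CANDIDATE:
            self.claim_wait += 1
            if self.claim_wait > 2:                   # no rival heard: promote self
                self.state = State.COORDINATOR
        else:                                         # COORDINATOR
            self.broadcast(("HEARTBEAT", self.id))
```

A driver would deliver each broadcast message to every other node's on_message and call tick once per round; the essential point mirrored from the abstract is that idle nodes transmit nothing at all.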
Abstract:
For structural health monitoring, it is impractical to identify a large structure with complete measurement due to the limited number of sensors and the difficulty of field instrumentation. Furthermore, it is not desirable to identify a large number of unknown parameters in a full system because of numerical difficulties in convergence. A novel substructural strategy is presented for identification of stiffness matrices and damage assessment with incomplete measurement. The substructural approach is employed to identify large systems in a divide-and-conquer manner. In addition, the concept of model condensation is invoked to avoid the need for complete measurement, and the recovery process to obtain the full set of parameters is formulated. The efficiency of the proposed method is demonstrated numerically on multi-storey shear buildings subjected to random forces. A fairly large structural system with 50 DOFs is identified with good results, taking into consideration the effects of noisy signals and the limited number of sensors. Two variations of the method are applied, depending on whether the sensors can be repositioned. The proposed strategy is further substantiated experimentally using an eight-storey steel plane frame model subjected to shaker and impulse hammer excitations. Both the numerical and experimental results show that the proposed substructural strategy gives reasonably accurate identification in terms of locating and quantifying structural damage.
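Model condensation of the kind invoked above is commonly realized as static (Guyan) reduction onto the measured (master) degrees of freedom; one standard form, stated here only as background and not necessarily the exact scheme used in the paper, is

\[
\mathbf{K}=\begin{pmatrix}\mathbf{K}_{mm} & \mathbf{K}_{ms}\\ \mathbf{K}_{sm} & \mathbf{K}_{ss}\end{pmatrix},
\qquad
\mathbf{K}_{\mathrm{red}}=\mathbf{K}_{mm}-\mathbf{K}_{ms}\,\mathbf{K}_{ss}^{-1}\,\mathbf{K}_{sm},
\]

where m denotes the measured (master) DOFs and s the unmeasured (slave) DOFs; a recovery step then maps the identified condensed parameters back to the full parameter set.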
Abstract:
An exact and general approach to the study of molecular vibrations is provided by the Watson Hamiltonian. Within this framework, it is customary to omit the contributions of the terms involving the vibrational angular momentum and the Watson term, especially for the study of large systems. We discover that this omission leads to results that depend on the choice of the reference structure. The self-consistent solution proposed here yields a geometry that coincides with the quantum-averaged geometry of the Watson Hamiltonian and appears to be a promising route for computing the vibrational spectra of strongly anharmonic systems.
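For reference, the Watson Hamiltonian in normal coordinates (with μ the effective reciprocal inertia tensor, π̂ the vibrational angular momentum, and p̂_k the momenta conjugate to the normal coordinates) reads

\[
\hat{H} \;=\; \tfrac{1}{2}\sum_{\alpha\beta}\,(\hat{J}_\alpha-\hat{\pi}_\alpha)\,\mu_{\alpha\beta}\,(\hat{J}_\beta-\hat{\pi}_\beta)
\;+\;\tfrac{1}{2}\sum_{k}\hat{p}_k^{\,2}
\;-\;\frac{\hbar^{2}}{8}\sum_{\alpha}\mu_{\alpha\alpha}
\;+\;V(q).
\]

The first term carries the vibrational angular momentum contributions and the third is the Watson term; these are the contributions whose customary omission the abstract discusses.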
Abstract:
Geometries, vibrational frequencies, and interaction energies of the CNH⋯O3 and HCCH⋯O3 complexes are calculated on a counterpoise-corrected (CP-corrected) potential-energy surface (PES) that corrects for the basis set superposition error (BSSE). Ab initio calculations are performed at the Hartree-Fock (HF) and second-order Møller-Plesset (MP2) levels, using the 6-31G(d,p) and D95++(d,p) basis sets. Interaction energies are presented including corrections for zero-point vibrational energy (ZPVE) and the thermal correction to enthalpy at 298 K. The CP-corrected and conventional PES are compared; the uncorrected PES obtained using the larger basis set, which includes diffuse functions, exhibits a double-well shape, whereas use of the 6-31G(d,p) basis set leads to a flat single-well profile. The CP-corrected PES always has a multiple-well shape. In particular, it is shown that the CP-corrected PES obtained with the smaller basis set is qualitatively analogous to that obtained with the larger basis set, so the CP method becomes useful for correctly describing large systems, where the use of small basis sets may be necessary.
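The counterpoise correction referred to above follows the Boys-Bernardi scheme, in which every fragment is described in the full dimer basis:

\[
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
\;=\; E_{AB}(\chi_{AB}) \;-\; E_{A}(\chi_{AB}) \;-\; E_{B}(\chi_{AB}),
\]

where each fragment energy is computed at its geometry in the complex but using the full dimer basis χ_AB (i.e., including the partner's "ghost" functions), which removes the basis set superposition error from the interaction energy at each point of the surface.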
Abstract:
In this work, a systematic study of SO2 molecules interacting with pristine and transition-metal (TM)-covered C-60 is presented by means of first-principles calculations. It is observed that the SO2 molecule interacts weakly with the pristine C-60 fullerene, although the interaction is greatly increased when the C-60 structure is covered with Fe, Mn, or Ti atoms and the SO2 molecules are bound through the TM atoms. The number of bound SO2 molecules per TM atom, together with the elevated binding energy per molecule, allows us to conclude that such composites can be used as templates for efficient devices to remove SO2 molecules or, alternatively, as SO2 gas sensors.
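As a hedged note on the quantity behind these conclusions, the binding energy per adsorbed molecule is conventionally defined (with the sign chosen so that positive values mean binding; the abstract does not state the authors' exact convention) as

\[
E_b \;=\; \frac{1}{n}\Big[E\big(\mathrm{TM}@\mathrm{C}_{60}\big) \;+\; n\,E\big(\mathrm{SO}_2\big) \;-\; E\big(n\,\mathrm{SO}_2\!+\!\mathrm{TM}@\mathrm{C}_{60}\big)\Big],
\]

where n is the number of SO2 molecules attached to the TM-decorated fullerene.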
Abstract:
Several experimental groups have achieved effective n- and p-type doping of silicon nanowires (SiNWs). However, theoretical analyses of ultrathin SiNWs suggest that dopants tend to segregate to their surfaces, where they combine with defects such as dangling bonds (DBs) and become electronically inactive. Using fully ab initio calculations, we show that the differences in formation energy between surface and core substitutional sites decrease rapidly as the diameters of the wires increase, indicating that the dopants will be uniformly distributed. Moreover, the occurrence of the electronically inactive impurity/DB complex rapidly becomes less frequent for NWs of larger diameters. We also show that the strong confinement in ultrathin SiNWs causes the impurity levels to be deeper than in bulk silicon, but our results indicate that for NWs with diameters larger than approximately 3 nm the impurity levels recover bulk characteristics. Finally, we show that different surfaces lead to different dopant properties in the gap.
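For context, the formation energy governing the surface-versus-core site preference discussed above is conventionally written (charge-state and chemical-potential terms depend on the authors' setup, which the abstract does not specify) as

\[
E_f \;=\; E\big(\mathrm{NW{:}X}\big) \;-\; E\big(\mathrm{NW}\big) \;-\; \mu_X \;+\; \mu_{\mathrm{Si}},
\]

for a dopant X substituting a Si atom; the surface-core preference is then the difference of E_f evaluated at the two sites, in which the chemical potentials cancel.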