Abstract:
Acoustic emission (AE) energy, rather than amplitude, associated with each event is used to estimate the fracture process zone (FPZ) size. A steep increase in the cumulative AE energy of the events with respect to time is correlated with the formation of the FPZ. Based on the AE energy released during these events and their locations, the FPZ size is obtained. The size-independent fracture energy is computed using the expressions given in the boundary effect model by the least squares method, since an over-determined system of equations is obtained when data from several specimens are used. As an alternative to the least squares method, a different method is suggested in which the transition ligament length, measured from the histogram of AE events plotted over the un-cracked ligament, is used directly to obtain the size-independent fracture energy. The fracture energy thus calculated appears to be size-independent.
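To make the least-squares step concrete, the sketch below fits a size-independent fracture energy and a transition ligament length to per-specimen fracture energies. The bilinear boundary-effect-type model, the variable names, and the numerical data are illustrative assumptions for this sketch, not values or code from the paper.

```python
# Illustrative sketch (not the paper's code): fit a size-independent fracture
# energy G_F and a transition ligament length a_l to measured specimen
# fracture energies by least squares, assuming a bilinear boundary-effect-type
# relation between the measured (size-dependent) fracture energy and the
# un-cracked ligament length.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: un-cracked ligament lengths (mm) and the corresponding
# measured fracture energies (N/m) for several specimens.
ligament = np.array([20.0, 40.0, 60.0, 80.0, 120.0])
G_f_meas = np.array([60.0, 85.0, 100.0, 108.0, 115.0])

def model(params, lig):
    """Assumed bilinear form: the measured fracture energy approaches G_F
    for ligaments much longer than the transition length a_l."""
    G_F, a_l = params
    return np.where(lig > a_l,
                    G_F * (1.0 - a_l / (2.0 * lig)),  # long-ligament branch
                    G_F * lig / (2.0 * a_l))          # short-ligament branch

def residuals(params):
    return model(params, ligament) - G_f_meas

# Over-determined system (5 equations, 2 unknowns) solved by least squares;
# lower bounds keep both parameters strictly positive.
fit = least_squares(residuals, x0=[120.0, 30.0],
                    bounds=([1.0, 1.0], [np.inf, np.inf]))
G_F, a_l = fit.x
print(f"G_F ~ {G_F:.1f} N/m, transition ligament length a_l ~ {a_l:.1f} mm")
```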
Abstract:
We describe a noniterative method for recovering the optical absorption coefficient distribution from the absorbed energy map reconstructed using simulated and noisy boundary pressure measurements. The source reconstruction problem is first solved for the absorbed energy map corresponding to single- and multiple-source illuminations from the side of the imaging plane. It is shown that the absorbed energy map and the absorption coefficient distribution, recovered from the single-source illumination with a large variation in photon flux distribution, have signal-to-noise ratios comparable to those of the parameters reconstructed from a more uniform photon density distribution corresponding to multiple-source illuminations. The absorbed energy map is input, as the product of the absorption coefficient and photon flux, into the time-independent diffusion equation (DE) governing photon transport, so that the photon flux is recovered in a single step. The recovered photon flux is then used to compute the optical absorption coefficient distribution from the absorbed energy map. In the absence of experimental data, we obtain the boundary measurements through Monte Carlo simulations, and we attempt to address the possible limitations of the DE model in the overall reconstruction procedure.
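The single-step recovery described above can be summarized by the following standard diffusion-equation relations; the notation here is ours and is only a sketch of the procedure, not the paper's exact formulation.

```latex
% Sketch of the noniterative recovery (our notation): H is the absorbed
% energy map, \Phi the photon flux, \mu_a the absorption coefficient,
% D the diffusion coefficient and q the source term.
\begin{align*}
  H(\mathbf{r}) &= \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) ,\\
  -\nabla\cdot\bigl(D(\mathbf{r})\,\nabla\Phi(\mathbf{r})\bigr)
      + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) &= q(\mathbf{r}) ,\\
  \intertext{so substituting the known map $H$ for the product $\mu_a\Phi$ leaves a problem that is linear in $\Phi$,}
  -\nabla\cdot\bigl(D(\mathbf{r})\,\nabla\Phi(\mathbf{r})\bigr) &= q(\mathbf{r}) - H(\mathbf{r}) ,\\
  \mu_a(\mathbf{r}) &= H(\mathbf{r}) \,/\, \Phi(\mathbf{r}) .
\end{align*}
```

The third line is the single linear solve for the flux; the last line is the pointwise division that yields the absorption coefficient.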
Abstract:
Thin film applications have become increasingly important in our search for multifunctional and economically viable technological solutions of the future. Thin film coatings can be used for a multitude of purposes, ranging from a basic enhancement of aesthetic attributes to the addition of a complex surface functionality. Anything from electronic or optical properties to an increased catalytic or biological activity can be added or enhanced by the deposition of a thin film, with a thickness of only a few atomic layers at best, on an already existing surface. Thin films offer both a means of saving materials and the possibility of improving properties without a critical enlargement of devices. Nanocluster deposition is a promising new method for the growth of structured thin films. Nanoclusters are small aggregates of atoms or molecules, ranging in size from only a few nanometers up to several hundreds of nanometers in diameter. Due to their large surface-to-volume ratio, and the confinement of atoms and electrons in all three dimensions, nanoclusters exhibit a wide variety of exotic properties that differ notably from those of both single atoms and bulk materials. Nanoclusters are a completely new type of building block for thin film deposition. As preformed entities, clusters provide a new means of tailoring the properties of thin films before their growth, simply by changing the size or composition of the clusters that are to be deposited. Contrary to contemporary methods of thin film growth, which mainly rely on the deposition of single atoms, cluster deposition also allows for a more precise assembly of thin films, as the configuration of single atoms with respect to each other is already predetermined in clusters. Nanocluster deposition offers a possibility for coating virtually any material with a nanostructured thin film, and thereby for enhancing already existing physical or chemical properties or adding some exciting new feature. A clearer understanding of cluster-surface interactions, and of the growth of thin films by cluster deposition, must, however, be achieved if clusters are to be successfully used in thin film technologies. Using a combination of experimental techniques and molecular dynamics simulations, both the deposition of nanoclusters and the growth and modification of cluster-assembled thin films are studied in this thesis. Emphasis is laid on understanding the interaction between metal clusters and surfaces, and in particular the behaviour of these clusters during deposition and thin film growth. The behaviour of single metal clusters as they impact on clean metal surfaces is analysed in detail, and it is shown that there exists a limit, dependent on cluster size and deposition energy, below which epitaxial alignment occurs. If larger clusters are deposited at low energies, or if cluster-surface interactions are weaker, non-epitaxial deposition will take place, resulting in the formation of nanocrystalline structures. The effect of cluster size and deposition energy on the morphology of cluster-assembled thin films is also determined, and it is shown that nanocrystalline cluster-assembled films will be porous. Modification of these thin films, with the purpose of enhancing their mechanical properties and durability without destroying their nanostructure, is presented.
Irradiation with heavy ions is introduced as a feasible method for increasing the density, and thereby the mechanical stability, of cluster-assembled thin films without critically destroying their nanocrystalline properties. The results of this thesis demonstrate that nanocluster deposition is a suitable technique for the growth of nanostructured thin films. The interactions between nanoclusters and their supporting surfaces must, however, be carefully considered if a controlled growth of cluster-assembled thin films with precisely tailored properties is to be achieved.
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band, and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension, and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
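For reference, the liquid drop model that Classical Nucleation Theory applies writes the work of forming an n-molecule cluster from vapour at saturation ratio S in the standard textbook form below; the notation is ours, not necessarily that of the thesis.

```latex
% Liquid drop (capillarity) form of the cluster work of formation used by
% Classical Nucleation Theory; S is the saturation ratio, \sigma the planar
% surface tension and v_l the molecular volume of the liquid.
\begin{equation*}
  W(n) \;=\; -\,n\,k_{\mathrm{B}}T \ln S \;+\; \sigma A(n),
  \qquad
  A(n) \;=\; (36\pi)^{1/3}\, v_{\mathrm{l}}^{2/3}\, n^{2/3}.
\end{equation*}
```

The nucleation barrier is the maximum W(n*) at the critical size n*, where dW/dn = 0; the simulations in the thesis test how well this form holds for the smallest clusters.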
Abstract:
Fusion energy is a clean and safe solution for the intricate question of how to produce non-polluting and sustainable energy for the constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since a small amount of helium is the only by-product produced when using the hydrogen isotopes deuterium and tritium as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to utilize fusion as its main energy source ever since it was born, but here on Earth we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of successful methods; however, these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, which is a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is controlled with the help of these magnets, since the required temperatures (over 100 million degrees C) will separate the electrons from the nuclei, forming a plasma. Once the fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing for a high number of reactions, is a challenge. Another challenge is related to the reactor materials, since the confinement of the plasma particles is not perfect, resulting in particle bombardment of the reactor walls and structures. Material erosion and activation as well as plasma contamination are expected. In addition, the high-energy neutrons will cause radiation damage in the materials, causing, for instance, swelling and embrittlement. In this thesis, the behaviour of a material situated in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next generation fusion reactor ITER include the reactor materials beryllium, carbon and tungsten as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearing completion, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis, and as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium due to deuterium plasma exposure. During experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified the swift chemical sputtering mechanism, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating collision cascades that are formed during particle irradiation, and the potential features affecting the resulting primary damage were identified.
Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage. With proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects. On the other hand, modelling the damage in the iron chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using only experimental techniques, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√n/log n) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√n/log n), Θ(n), and Θ(√n log n), respectively, whereas the energy expended scales as Θ(n), Θ(√n/log n), and Θ(√n log n), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
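As an aside, the tree algorithm mentioned above is a convergecast-style aggregation; the sketch below shows the basic idea for a one-shot max computation. The node ids, tree edges, and readings are made-up examples, and the scheduling and radio details that drive the scaling laws are deliberately omitted.

```python
# Illustrative sketch (not from the paper): one-shot max computation by
# convergecast over a spanning tree rooted at the operator station.
from collections import defaultdict

def tree_max(parent, readings, root):
    """Each node reports the max of its own reading and its children's
    reports to its parent; the root ends up with the global max."""
    children = defaultdict(list)
    for node, par in parent.items():
        if node != root:
            children[par].append(node)

    def report(node):
        # Max over this node's subtree (one message per tree edge overall).
        return max([readings[node]] + [report(c) for c in children[node]])

    return report(root)

# Hypothetical 6-node tree: node 0 is the operator station (root).
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
readings = {0: 3.2, 1: 7.1, 2: 4.8, 3: 9.6, 4: 2.0, 5: 5.5}
print(tree_max(parent, readings, 0))  # -> 9.6
```

In this scheme each reading is forwarded once per tree edge, so the total number of transmissions grows linearly with the number of nodes.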
Abstract:
In this paper, the effects of energy quantization on different single-electron transistor (SET) circuits (the logic inverter, current-biased circuits, and hybrid MOS-SET circuits) are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly increases the Coulomb blockade area and the Coulomb blockade oscillation periodicity, and thus affects SET circuit performance. A new model for the noise margin of the SET inverter is proposed, which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. An analytical expression is developed that explicitly defines the maximum energy quantization (termed the "quantization threshold") that an SET inverter can withstand before its noise margin falls below a specified tolerance level. The effects of energy quantization are further studied for the current-biased negative differential resistance (NDR) circuit and the hybrid SET-MOS circuit. A new model for the conductance of the NDR characteristics is also formulated that explains the energy quantization effects.
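As background, a standard single-electronics relation (not a result specific to this paper) shows how energy quantization enters: the island addition energy gains a level-spacing term on top of the electrostatic charging energy.

```latex
% Standard SET addition energy with energy quantization; C_\Sigma is the
% total island capacitance and \Delta E the quantized level spacing.
\begin{equation*}
  E_{\mathrm{add}} \;=\; E_C + \Delta E \;=\; \frac{e^{2}}{C_{\Sigma}} + \Delta E .
\end{equation*}
```

A larger ΔE widens the Coulomb blockade region and alters the oscillation periodicity, which is the route by which quantization degrades the inverter noise margin discussed above.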
Abstract:
Following market reforms in 1986, Vietnam has transformed from a poor closed economy to a lower-middle-income economy. As in other developing countries, economic growth has placed significant pressure on both infrastructure and the environment, particularly through increasing housing demand, energy consumption, and the demands of waste and pollution management. In response to these development challenges and the global green movement, the government has initiated actions to promote green building as a route to more sustainable development. However, green building adoption in Vietnam is still criticised as slow and lacking governmental support. This paper proposes that promoting green building could address three inter-connected challenges hindering sustainable development, and provides a comparative review of progress.
Abstract:
Nanotechnology applications are entering the market in increasing numbers, nanoparticles being among the main classes of materials used. Particles can be used, e.g., for catalysing chemical reactions, as is done in car exhaust catalysts today. They can also modify the optical and electronic properties of materials or be used as building blocks for thin film coatings on a variety of surfaces. To develop materials for specific applications, an intricate control of the particle properties, structure, size and shape is required. All these depend on a multitude of factors, from methods of synthesis and deposition to post-processing. This thesis addresses the control of nanoparticle structure by low-energy cluster beam deposition and post-synthesis ion irradiation. Cluster deposition in high vacuum offers a method for obtaining precisely controlled cluster-assembled materials with minimal contamination. Due to the clusters' small size, however, the cluster-surface interaction may drastically change the cluster properties on deposition. In this thesis, the deposition process of metal and alloy clusters on metallic surfaces is modelled using molecular dynamics simulations, and the mechanisms influencing cluster structure are identified. Two mechanisms, mechanical melting upon deposition and thermally activated dislocation motion, are shown to determine whether a deposited cluster will align epitaxially with its support. The semiconductor industry has used ion irradiation as a tool to modify material properties for decades. Irradiation can be used for doping, patterning surfaces, and inducing chemical ordering in alloys, to give a few examples. The irradiation response of nanoparticles has, however, remained an almost uncharted territory. Although irradiation effects in nanoparticles embedded inside solid matrices have been studied, almost no work has been done on supported particles. In this thesis, the response of supported nanoparticles is studied systematically for heavy and light ion irradiation. The processes leading to damage production are identified and models are developed for both types of irradiation. In recent experiments, helium irradiation has been shown to induce a phase transformation from multiply twinned to single-crystalline nanoparticles in bimetallic alloys, but the nature of the transition has remained unknown. The alloys for which the effect has been observed are CuAu and FePt. It is shown in this thesis that transient amorphization leads to the observed transition and that while CuAu and FePt do not amorphize upon irradiation in bulk or as thin films, they readily do so as nanoparticles. This is the first time such an effect has been demonstrated with supported particles rather than particles embedded in a matrix, where mixing is always an issue. An understanding of the above physical processes is essential if nanoparticles are to be used in applications in an optimal way. This thesis clarifies the mechanisms which control particle morphology, and paves the way for the synthesis of nanostructured materials tailored for specific applications.
Abstract:
The surface of a soft elastic film becomes unstable and forms a self-organized undulating pattern because of adhesive interactions when it comes into contact proximity with a rigid surface. For a single film, the pattern length scale λ, which is governed by the minimization of the stored elastic energy, is λ ∼ 3h, where h is the film thickness. Based on a linear stability analysis and simulations of adhesion and debonding, we consider the contact instability of an elastic bilayer, which provides greater flexibility in the morphological control of interfacial instability. Unlike the case of a single film, the morphology of the contact instability patterns, the debonding distance, and the debonding force in a bilayer can be controlled in a nonlinear way by varying the thicknesses and shear moduli of the films. Interestingly, the pattern wavelength in a bilayer can be greatly increased or decreased compared to that of a single film when the adhesive contact is formed by the stiffer or the softer of the two films, respectively. In particular, λ as small as 0.5h can be obtained. This indicates a new strategy for pattern miniaturization in elastic contact lithography.
Abstract:
Thermotropic liquid crystals are known to display rich phase behavior upon temperature variation. While the nematic phase is orientationally ordered but translationally disordered, a smectic phase is characterized by the appearance of partial translational order in addition to a further increase in orientational order. In an attempt to understand the interplay between orientational and translational order in the mesophases that thermotropic liquid crystals typically exhibit upon cooling from the high-temperature isotropic phase, we investigate the potential energy landscapes of a family of model liquid crystalline systems. The configurations of the system corresponding to the local potential energy minima, known as the inherent structures, are determined from computer simulations across the mesophases. We find that the depth of the potential energy minima explored by the system along an isochore grows through the nematic phase as temperature drops, in contrast to its insensitivity to temperature in the isotropic and smectic phases. The onset of the growth of orientational order in the parent phase is found to induce translational order, resulting in a smectic-like layer in the underlying inherent structures; surprisingly, the inherent structures never seem to sustain orientational order alone if the parent nematic phase is sandwiched between the high-temperature isotropic phase and the low-temperature smectic phase. The Arrhenius temperature dependence of the orientational relaxation time breaks down near the isotropic-nematic transition, and we find that this breakdown occurs at a temperature below which the system explores increasingly deeper potential energy minima.
Abstract:
Presented here is the two-phase thermodynamic (2PT) model for the calculation of the energy and entropy of molecular fluids from the trajectory of molecular dynamics (MD) simulations. In this method, the density of states (DoS) functions (including the normal modes of translational, rotational, and intramolecular vibrational motions) are determined from the Fourier transform of the corresponding velocity autocorrelation functions. A fluidicity parameter (f), extracted from the thermodynamic state of the system derived from the same MD, is used to partition the translational and rotational modes into a diffusive, gas-like component (with 3Nf degrees of freedom) and a nondiffusive, solid-like component. The thermodynamic properties, including the absolute value of the entropy, are then obtained by applying quantum statistics to the solid component and hard sphere/rigid rotor thermodynamics to the gas component. The 2PT method produces exact thermodynamic properties of the system in two limiting states: the nondiffusive solid state (where the fluidicity is zero) and the ideal gas state (where the fluidicity becomes unity). We examine the 2PT entropy for various water models (F3C, SPC, SPC/E, TIP3P, and TIP4P-Ew) at ambient conditions and find good agreement with literature results obtained using other simulation techniques. We also validate the entropy of water in the liquid and vapor phases along the vapor-liquid equilibrium curve from the triple point to the critical point. We show that this method produces converged liquid-phase entropies within tens of picoseconds, making it an efficient means of extracting thermodynamic properties from MD simulations.
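The first step of the method, estimating the density of states from the mass-weighted velocity autocorrelation function, can be sketched as below. This is a minimal illustration under assumed SI units and a trajectory already loaded as a NumPy array; it is not the 2PT reference implementation, and the fluidicity partition and the solid/gas reference thermodynamics are omitted.

```python
# Illustrative sketch: density of states S(nu) from an MD velocity trajectory
# as the Fourier transform of the mass-weighted velocity autocorrelation
# function (via the Wiener-Khinchin theorem, i.e. the velocity power spectrum).
import numpy as np

def density_of_states(velocities, masses, dt, temperature):
    """velocities: array of shape (n_frames, n_atoms, 3); masses: (n_atoms,);
    dt: time between frames (s); returns frequencies (Hz) and S(nu)."""
    kB = 1.380649e-23  # J/K (assumes SI units throughout)
    n_frames = velocities.shape[0]
    spectrum = np.zeros(n_frames // 2 + 1)
    # Accumulate the mass-weighted velocity power spectrum over all atoms
    # and Cartesian components.
    for j, m in enumerate(masses):
        for k in range(3):
            v_hat = np.fft.rfft(velocities[:, j, k])
            spectrum += m * np.abs(v_hat) ** 2
    # Approximate 2PT-style normalization: the integral of S(nu) over
    # frequency recovers the classical degrees of freedom.
    dos = 2.0 * spectrum * dt / (kB * temperature * n_frames)
    freqs = np.fft.rfftfreq(n_frames, d=dt)
    return freqs, dos
```

The fluidicity-based partition into gas-like and solid-like components would then be applied to this S(ν) before evaluating the entropy.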
Abstract:
Magnetometer data, acquired on spacecraft and simultaneously at high and low latitudes on the ground, are compared in order to study the propagation characteristics of hydromagnetic energy deep into the magnetosphere. Single events provide evidence that wave energy at L ∼ 3 can at times be only one order of magnitude lower than at L ∼ 13. In addition, statistical analyses of the H-component ground-based data obtained during local daytime hours of 17 July - 3 August 1985 show that wave amplitudes at L ∼ 3 are generally 10-30 times lower than at L ∼ 13. The L-dependence of near-equator magnetic field fluctuations measured on ISEE-2 shows a sharp drop in energy near the magnetopause and a more gradual fall-off of energy deeper inside the magnetosphere. Such high levels of wave power deep in the magnetosphere have not been quantitatively understood previously. Our initial attempt is to calculate the decay length of an evanescent wave generated at a thick magnetopause boundary. Numerical calculations show that fast magnetosonic modes (called the magnetopause and inner modes) can be generated under very restrictive conditions on the field and plasma parameters. These fast compressional modes may have their energy reduced by only one order of magnitude over a penetration depth of about 8 R_E. More realistic numerical simulations need to be carried out to see whether better agreement with the data can be attained.
Abstract:
Spike detection in neural recordings is the initial step in the creation of brain-machine interfaces. The Teager energy operator (TEO) treats a spike as an increase in the 'local' energy and detects this increase. The performance of the TEO in detecting action potential spikes suffers because of its sensitivity to the frequency of spikes in the presence of the noise found in microelectrode array (MEA) recordings. The multiresolution TEO (mTEO) overcomes this shortcoming of the TEO by tuning the parameter k to an optimal value m so as to match the frequency of the spike. In this paper, we present an algorithm for the mTEO that uses the multiresolution structure of wavelets along with built-in lowpass filtering of the subband signals. The algorithm is efficient and can be implemented for real-time processing of neural signals for spike detection. The performance of the algorithm is tested on a simulated neural signal with 10 spike templates obtained from [14]. The background noise is modeled as a colored Gaussian random process. Using the noise standard deviation and autocorrelation functions obtained from recorded data, the background noise was simulated with an autoregressive (AR(5)) filter. The simulations show a spike detection accuracy of 90% and above, with less than 5% false positives, at an SNR of 2.35 dB, as compared to the 80% accuracy and 10% false positives reported in [6] for simulated neural signals.
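The k-parameterized Teager energy operator at the heart of the mTEO takes the standard form sketched below. The wavelet subband decomposition and the lowpass smoothing of the full algorithm are omitted, and the synthetic trace and threshold-free usage are illustrative assumptions only.

```python
# Minimal sketch of the k-parameterized Teager energy operator (k-TEO)
# used as the building block of the multiresolution TEO (mTEO).
import numpy as np

def teager_k(x, k=1):
    """k-TEO: psi_k[x](n) = x(n)^2 - x(n-k) * x(n+k).
    Larger k tunes the operator to lower-frequency (wider) spikes."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[k:-k] = x[k:-k] ** 2 - x[:-2 * k] * x[2 * k:]
    return psi

# Toy usage on a synthetic trace: a smooth 'spike' buried in white noise
# (illustrative only; in practice detection thresholds are derived from
# the noise statistics of the recording).
rng = np.random.default_rng(0)
t = np.arange(1000)
template = np.exp(-0.5 * ((t - 500) / 5.0) ** 2)        # spike-like bump
trace = template + 0.1 * rng.standard_normal(t.size)    # noisy recording
energy = teager_k(trace, k=3)
print("peak energy index:", int(np.argmax(energy)))      # near sample 500
```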