8 results for Lattice theory - Computer programs
in Digital Commons - Michigan Tech
Abstract:
Ab initio Hartree-Fock (HF), density functional theory (DFT), and hybrid potentials were employed to compute the optimized lattice parameters and elastic properties of perovskite 3-d transition metal oxides. The optimized lattice parameters and elastic properties are interdependent in these materials. An interaction is observed between the electronic charge, spin, and lattice degrees of freedom in 3-d transition metal oxides. The coupling between the electronic charge, spin, and lattice structures originates from the localization of d atomic orbitals, and it also contributes to the ferroelectric and ferromagnetic properties of perovskites. The cubic and tetragonal crystalline structures of ABO3 perovskite transition metal oxides are studied. The electronic structure and physics of 3-d perovskite materials are complex and less well understood. Moreover, the novelty of the electronic structure and properties of these perovskite transition metal oxides exceeds the challenge posed by their complex crystalline structures. To understand the structure-property relationship of these materials, a first-principles computational method is employed. The CRYSTAL09 code is used to compute crystalline structure, elastic, ferromagnetic, and other electronic properties. Second-order elastic constants (SOEC) and bulk moduli (B) are computed in an automated process using the ELASTCON (elastic constants) and EOS (equation of state) programs in the CRYSTAL09 code. ELASTCON, EOS, and other computational algorithms are used to determine the elastic properties of tetragonal BaTiO3, rutile TiO2, and cubic and tetragonal BaFeO3, as well as the ferromagnetic properties of 3-d transition metal oxides. Multiple methods are employed to cross-check the consistency of our computational results. These results motivated us to explore the ferromagnetic properties of 3-d transition metal oxides. Billyscript and the CRYSTAL09 code are employed to compute the optimized geometry of the cubic and tetragonal crystalline structures of the transition metal oxides of Sc through Cu. The cubic crystalline structure is chosen initially to determine the effect of lattice strains on ferromagnetism arising from the spin angular momentum of the electron. The 3-d transition metals and their oxides are challenging because the available basis functions and potentials are not fully developed to address their complex physics. Moreover, perovskite crystalline structures are extremely demanding with respect to the quality of computations, as they require well-established methods. Ferroelectric and ferromagnetic properties of bulk, surfaces, and interfaces are explored with the CRYSTAL09 code. In our computations on the cubic TMOs of Sc-Fe, a coupling is observed between the crystalline structure and FM/AFM spin polarization. Strained crystalline structures of 3-d transition metal oxides exhibit changes in their electromagnetic and electronic properties. The electronic structure and properties of the bulk, composites, and surfaces of 3-d transition metal oxides are computed successfully.
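As background to the ELASTCON/EOS workflow mentioned above, second-order elastic constants are conventionally defined through a Taylor expansion of the crystal energy in strain, and for a cubic crystal the bulk modulus follows from two of them. The relations below are the standard textbook forms, given for orientation rather than as details of the CRYSTAL09 implementation:

```latex
% Energy-strain expansion defining the SOEC (Voigt notation);
% the linear term vanishes at the equilibrium volume V_0
E(V_0,\boldsymbol{\epsilon}) = E(V_0)
  + \frac{V_0}{2}\sum_{i,j=1}^{6} C_{ij}\,\epsilon_i\epsilon_j
  + \mathcal{O}(\epsilon^3),
\qquad
C_{ij} = \frac{1}{V_0}\left.\frac{\partial^2 E}{\partial\epsilon_i\,\partial\epsilon_j}\right|_{\boldsymbol{\epsilon}=0}

% Bulk modulus of a cubic crystal from its elastic constants
B = \frac{C_{11} + 2C_{12}}{3}
```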
Abstract:
Interest in the study of magnetic/non-magnetic multilayered structures took a giant leap after Grünberg and his group established that the interlayer exchange coupling (IEC) is a function of the non-magnetic spacer width. This interest was further fuelled by the discovery of the phenomenal giant magnetoresistance (GMR) effect; in fact, in 2007 Albert Fert and Peter Grünberg were awarded the Nobel Prize in Physics for their contribution to the discovery of GMR. GMR is the key property used in the read head of the present-day computer hard drive, which requires high sensitivity in the detection of magnetic fields. The recent increase in demand for device miniaturization has encouraged researchers to look for GMR in nanoscale multilayered structures. In this context, the one-dimensional (1-D) multilayered nanowire structure has shown tremendous promise as a viable candidate for ultra-sensitive read-head sensors. In fact, the GMR effect, which is the novel feature of currently used multilayered thin films, has already been observed in multilayered nanowire systems at ambient temperature. Geometrical confinement of the superlattice along two dimensions (2-D) to construct the 1-D multilayered nanowire prohibits the minimization of the magnetic interaction, offering a rich variety of magnetic properties in the nanowire that can be exploited for novel functionality. In addition, the introduction of a non-magnetic spacer between the magnetic layers presents an additional advantage in controlling magnetic properties by tuning the interlayer magnetic interaction. Despite the large volume of theoretical work devoted to the understanding of GMR and IEC in superlattice structures, only limited theoretical calculations have been reported for 1-D multilayered systems. Thus, to gauge their potential application in new-generation magneto-electronic devices, in this thesis I discuss the use of first-principles density functional theory (DFT) to predict the equilibrium structure, stability, and electronic and magnetic properties of one-dimensional multilayered nanowires. In particular, I focus on the electronic and magnetic properties of Fe/Pt multilayered nanowire structures and the role of the non-magnetic Pt spacer in modulating the magnetic properties of the wire. It is found that the average magnetic moment per atom in the nanowire increases monotonically with an approximately 1/N(Fe) dependence, where N(Fe) is the number of iron layers in the nanowire. A simple model based upon the interfacial structure is given to explain the 1/N(Fe) trend in the magnetic moment obtained from the first-principles calculations. A new mechanism, based upon spin flips within a layer and multistep electron transfer between layers, is proposed to elucidate the enhancement of the magnetic moment of the iron atoms at the platinum interface. The calculated IEC in the Fe/Pt multilayered nanowire is found to switch sign as the width of the non-magnetic spacer varies. The competition among short- and long-range direct exchange and superexchange is found to play a key role in the non-monotonic sign of the IEC as a function of the width of the platinum spacer layer. The calculated magnetoresistance from Julliere's model also exhibits switching behavior similar to that of the IEC. The universality of this exchange-coupling behavior has also been examined by introducing different non-magnetic spacers, namely palladium, copper, silver, and gold, between the magnetic iron layers.
The nature of the hybridization between Fe and the non-magnetic spacer is found to dictate the interlayer magnetic interaction. For example, in the Fe/Pd nanowire the d-p hybridization in the two-spacer-layer case favors the antiferromagnetic (AFM) configuration over the ferromagnetic (FM) configuration, whereas the hybridization between the half-filled Fe(d) and filled Cu(p) states in the Fe/Cu nanowire favors FM coupling in the two-spacer system.
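For reference, the IEC is typically extracted from the total-energy difference between the two spin configurations, and Julliere's model expresses the magnetoresistance through the spin polarizations of the two magnetic layers. The forms below are the standard ones; the sign convention and prefactor are assumptions, not equations quoted from the thesis:

```latex
% Interlayer exchange coupling from total energies:
% J > 0 favors FM alignment, J < 0 favors AFM alignment
J \;\propto\; E_{\mathrm{AFM}} - E_{\mathrm{FM}}

% Julliere's model: magnetoresistance from the spin
% polarizations P_1, P_2 of the two magnetic layers
\mathrm{MR} = \frac{2P_1P_2}{1 - P_1P_2}
```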
Abstract:
Technical communication certificates are offered by many colleges and universities as an alternative to a full undergraduate or graduate degree in the field. Despite certificates’ increasing popularity in recent years, however, surprisingly little commentary about them exists in the scholarly literature. In this work, I describe a survey of certificate and baccalaureate programs that I performed in 2008 in order to develop basic descriptive data on programs’ age, size, and graduation rates; departmental location; curricular requirements; online offerings; and instructor status and qualifications. In performing this research, I apply recent insights from neosophistic rhetorical theory and feminist critiques of science to both articulate and model a feminist-sophistic methodology. I also suggest that technical communication certificates can be theorized as a particularly sophistic credential for a particularly sophistic field, and I discuss the implications of neosophistic theory for certificate program design and administration.
Abstract:
The purpose of this research was to develop a working physical model of the focused plenoptic camera and to develop software that can process the measured image intensity, reconstruct it into a full-resolution image, and generate a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement. The camera can also computationally refocus an image by adjusting the patch size used to reconstruct the image. The published methods have been vague and conflicting, so the motivation behind this research was to decipher the work that has been done in order to develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map can be created by a cross-correlation of adjacent sub-images created by the microlenslet array (MLA). The full-resolution image reconstruction can be done by taking a patch from each MLA sub-image and piecing the patches together like a puzzle; the patch size determines which object plane will be in focus. This thesis also gives a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help develop the algorithms written to create a rendered image and its depth map. Finally, using the algorithms developed from these tests and the knowledge gained in developing the plenoptic camera, a working experimental system was built, which successfully generated a rendered image and its corresponding depth map.
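To make the patch-based rendering step concrete, here is a minimal Python sketch that tiles the central patch from each MLA sub-image into a rendered image. The square sub-image grid, the pixel pitch, and the function name are assumptions for illustration, not the thesis's own implementation:

```python
import numpy as np

def render_full_resolution(raw, mla_pitch, patch_size):
    """Tile the central patch from each microlens sub-image into a
    rendered image. Assumes sub-images sit on a square grid with a
    pitch of `mla_pitch` pixels (a hypothetical sensor layout)."""
    rows, cols = raw.shape[0] // mla_pitch, raw.shape[1] // mla_pitch
    off = (mla_pitch - patch_size) // 2      # center the patch
    out = np.zeros((rows * patch_size, cols * patch_size), dtype=raw.dtype)
    for r in range(rows):
        for c in range(cols):
            sub = raw[r * mla_pitch:(r + 1) * mla_pitch,
                      c * mla_pitch:(c + 1) * mla_pitch]
            out[r * patch_size:(r + 1) * patch_size,
                c * patch_size:(c + 1) * patch_size] = \
                sub[off:off + patch_size, off:off + patch_size]
    return out
```

Varying patch_size shifts which object plane lands in focus, which is the computational refocusing described above; depth can then be estimated by cross-correlating adjacent sub-images and relating the best-match shift to a patch size.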
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) to the simulation of fluid flow through the porous filter wall of a disordered medium. The diesel particulate filter (DPF) is an example of a disordered medium; it is a cutting-edge technology developed to reduce harmful particulate matter in engine exhaust. The porous filter wall of the DPF traps soot particles during the after-treatment of the exhaust gas. To examine the phenomena inside the DPF, researchers are turning to the Lattice Boltzmann Method as a promising alternative simulation tool. The lattice Boltzmann method is a comparatively new numerical scheme and can be used to simulate single-component single-phase and single-component multi-phase fluid flow. It is also an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside a porous micro-structure using LBM. First, the theory behind the development of LBM is discussed. The evolution of LBM is usually related to Lattice Gas Cellular Automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all the simulations are conducted in two dimensions, the equations are developed for the D2Q9 (two-dimensional, 9-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted with air and CO2 gas as the fluids. The numerical model used in this study is explained with a flowchart and the coding steps; the numerical code is written in MATLAB. The different types of boundary conditions and their importance are discussed separately, and the equations specific to each boundary condition are derived. The pressure and velocity contours over the porous domain are studied and recorded, and the results are compared with published work. The permeability values obtained in this study can be fitted to the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
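As an illustration of the D2Q9 scheme described above, the following is a minimal Python sketch of one BGK collision-and-streaming step with full-way bounce-back at solid nodes. The thesis code was written in MATLAB; this re-sketch, including the array layout and periodic streaming, is an assumption for illustration, not the author's code:

```python
import numpy as np

# D2Q9 lattice velocities (x, y) and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for D2Q9."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau, solid):
    """One BGK collision + periodic streaming + bounce-back step.
    f has shape (9, ny, nx); solid is a boolean (ny, nx) mask."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for i in range(9):                                  # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    for i, j in [(1, 3), (2, 4), (5, 7), (6, 8)]:       # bounce-back
        f[i][solid], f[j][solid] = f[j][solid], f[i][solid]
    return f
```

Here tau sets the fluid viscosity (nu = (tau - 1/2)/3 in lattice units), and the bounce-back swap of opposite velocity pairs enforces an approximate no-slip condition at the pore walls of the porous micro-structure.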
Abstract:
Amorphous carbon has been investigated for a long time. Because its carbon atoms are randomly oriented, its density depends on the position of each carbon atom, and knowing this density is important for using amorphous carbon to model advanced carbon materials in the future. Two methods were used to create the initial structures of amorphous carbon. One is the random placement method, in which 100 carbon atoms are randomly placed in a cubic lattice. The other is the liquid-quench method, in which a reactive force field (ReaxFF) is used to rapidly cool a system of 100 carbon atoms from the melting temperature. Density functional theory (DFT) was then used to refine the position of each carbon atom and the dimensions of the boundaries so as to minimize the ground-state energy of the structure. The average densities of the amorphous carbon structures created by the random placement method and the liquid-quench method are 2.59 and 2.44 g/cm3, respectively; both densities are in good agreement with previous work. In addition, the final structure of amorphous carbon generated by the liquid-quench method has the lower energy.
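As a small worked example of the geometry involved, the sketch below randomly places 100 carbon atoms in a periodic cubic box and computes the resulting mass density. The box edge and the minimum-separation threshold are hypothetical values chosen for illustration, not parameters from the thesis:

```python
import numpy as np

N_ATOMS = 100
M_CARBON_G = 12.011 * 1.66054e-24      # mass of one C atom in grams

def random_placement(edge, min_sep=1.2, seed=0):
    """Place N_ATOMS points in a periodic cubic box of side `edge` (in
    angstroms), rejecting trials closer than `min_sep` to any atom."""
    rng = np.random.default_rng(seed)
    pos = []
    while len(pos) < N_ATOMS:
        trial = rng.uniform(0.0, edge, 3)
        # minimum-image convention for the periodic boundaries
        ok = all(np.linalg.norm((trial - p) - edge * np.round((trial - p) / edge))
                 >= min_sep for p in pos)
        if ok:
            pos.append(trial)
    return np.array(pos)

def density_g_cm3(edge):
    """Mass density of N_ATOMS carbon atoms in an (edge angstrom)^3 box."""
    volume_cm3 = (edge * 1e-8) ** 3    # 1 angstrom = 1e-8 cm
    return N_ATOMS * M_CARBON_G / volume_cm3

print(density_g_cm3(10.0))   # a 10-angstrom box gives ~2.0 g/cm^3
```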
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm and the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model and the optical model in recognition and detection. The MATLAB software was used to simulate the models. PCA is a technique used to identify patterns in data and to represent the data in a way that highlights their similarities and differences. Identifying patterns in data of high dimension (more than three dimensions) is difficult because a graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it achieves higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the lowest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than the performance of the optical model.
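To illustrate the PCA compression baseline discussed above, here is a minimal Python sketch that projects a stack of flattened images onto the top k principal components and reconstructs them. This is generic PCA via the SVD, not the IPCA variant or the thesis's MATLAB implementation:

```python
import numpy as np

def pca_compress(images, k):
    """Compress a stack of images by keeping the top-k principal
    components; returns the codes and the reconstructed stack."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data yields the principal directions (rows of Vt)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    codes = Xc @ components.T          # k numbers per image instead of n pixels
    recon = codes @ components + mean  # lossy reconstruction
    return codes, recon.reshape(images.shape)

# Example: 20 random 32x32 "images" compressed to 5 components each
imgs = np.random.rand(20, 32, 32)
codes, recon = pca_compress(imgs, k=5)
print(codes.shape, np.mean((imgs - recon) ** 2))
```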
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers seeking to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that are discovered by developers in an iterative process. Managing such complexity requires advanced verification techniques for continually matching the intended model to the implemented model. The main goal of this research work is therefore to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part mines the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves computer architecture research and verification processes, as shown by the case studies and experiments that have been conducted.
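To make the trace-checking idea concrete, here is a toy Python sketch of a verifier that scans an event trace and checks a single invariant of the kind FOLCSL could express, such as "every request is eventually followed by a matching response." The event names and tuple format are hypothetical; FOLCSL itself synthesizes such checkers from first-order logic specifications rather than hand-written code:

```python
def check_response_invariant(trace):
    """Toy trace checker: every ('request', id) event must be followed
    by a ('response', id) event. The event vocabulary here is invented
    for illustration, not taken from the FOLCSL framework."""
    pending = set()
    for kind, event_id in trace:
        if kind == "request":
            pending.add(event_id)
        elif kind == "response":
            if event_id not in pending:
                return False        # response with no matching request
            pending.remove(event_id)
    return not pending              # all requests eventually answered

# A trace that satisfies the invariant, and one that violates it
print(check_response_invariant([("request", 1), ("response", 1)]))  # True
print(check_response_invariant([("request", 1), ("request", 2),
                                ("response", 1)]))                  # False
```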