929 results for Lattice theory - Computer programs
Abstract:
Interest in the study of magnetic/non-magnetic multilayered structures took a giant leap after Grünberg and his group established that the interlayer exchange coupling (IEC) is a function of the non-magnetic spacer width. This interest was further fuelled by the discovery of the giant magnetoresistance (GMR) effect; in fact, in 2007 Albert Fert and Peter Grünberg were awarded the Nobel Prize in Physics for their contribution to its discovery. GMR is the key property used in the read head of present-day computer hard drives, which requires high sensitivity in the detection of magnetic fields. The recent demand for device miniaturization has encouraged researchers to look for GMR in nanoscale multilayered structures. In this context, the one-dimensional (1-D) multilayered nanowire has shown tremendous promise as a viable candidate for ultra-sensitive read-head sensors; indeed, the GMR effect, the defining feature of the multilayered thin films in current use, has already been observed in multilayered nanowire systems at ambient temperature. Geometrical confinement of the superlattice along two dimensions (2-D) to construct the 1-D multilayered nanowire prohibits the minimization of the magnetic interaction, offering a rich variety of magnetic properties in the nanowire that can be exploited for novel functionality. In addition, the introduction of a non-magnetic spacer between the magnetic layers offers a further advantage in controlling magnetic properties by tuning the interlayer magnetic interaction. Despite the large volume of theoretical work devoted to the understanding of GMR and IEC in superlattice structures, few theoretical calculations have been reported for 1-D multilayered systems.
Thus, to gauge their potential application in new-generation magneto-electronic devices, this thesis discusses the use of first-principles density functional theory (DFT) in predicting the equilibrium structure, stability, and electronic and magnetic properties of one-dimensional multilayered nanowires. In particular, I have focused on the electronic and magnetic properties of Fe/Pt multilayered nanowire structures and the role of the non-magnetic Pt spacer in modulating the magnetic properties of the wire. The average magnetic moment per atom in the nanowire is found to increase monotonically with an approximate 1/N(Fe) dependence, where N(Fe) is the number of iron layers in the nanowire. A simple model based upon the interfacial structure is given to explain the 1/N(Fe) trend in the magnetic moment obtained from the first-principles calculations. A new mechanism, based upon spin flips within a layer and multistep electron transfer between layers, is proposed to elucidate the enhancement of the magnetic moment of the iron atoms at the platinum interface. The calculated IEC in the Fe/Pt multilayered nanowire is found to switch sign as the width of the non-magnetic spacer varies. The competition among short- and long-range direct exchange and superexchange is found to play a key role in this non-monotonic sign change, which depends upon the width of the platinum spacer layer. The magnetoresistance calculated from Julliere's model also exhibits switching behavior similar to that of the IEC. The universality of the exchange-coupling behavior has also been examined by introducing different non-magnetic spacers (palladium, copper, silver, and gold) between the magnetic iron layers. The nature of the hybridization between Fe and the non-magnetic spacer is found to dictate the interlayer magnetic interaction.
For example, in the Fe/Pd nanowire the d-p hybridization in the two-spacer-layer case favors the antiferromagnetic (AFM) configuration over the ferromagnetic (FM) configuration, whereas the hybridization between the half-filled Fe(d) and filled Cu(p) states in the Fe/Cu nanowire favors FM coupling in the two-spacer system.
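The abstract mentions two quantitative ingredients: Julliere's model for the magnetoresistance and the approximate 1/N(Fe) dependence of the average moment. A minimal sketch of both follows; the spin polarizations and the fitted constants (`m_bulk`, `c`) are illustrative placeholders, not values from the thesis.

```python
# Sketch of Julliere's magnetoresistance formula and an illustrative
# interface model with the ~1/N(Fe) trend described in the abstract.
# All numerical constants here are assumed, not taken from the thesis.

def julliere_tmr(p1, p2):
    """Julliere's model: TMR = 2*P1*P2 / (1 - P1*P2) for spin polarizations P1, P2."""
    return 2.0 * p1 * p2 / (1.0 - p1 * p2)

def avg_moment(n_fe, m_bulk=2.2, c=0.9):
    """Toy interface model: average moment per atom = m_bulk + c/N(Fe)."""
    return m_bulk + c / n_fe

print(julliere_tmr(0.4, 0.4))          # ~0.381, i.e. ~38% magnetoresistance
for n in (1, 2, 4, 8):
    print(n, round(avg_moment(n), 3))  # moment decays toward m_bulk as N(Fe) grows
```

The 1/N(Fe) form captures why thinner iron stacks carry a larger average moment: the enhanced interface atoms make up a fraction ~1/N(Fe) of the layer count.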
Abstract:
Technical communication certificates are offered by many colleges and universities as an alternative to a full undergraduate or graduate degree in the field. Despite certificates’ increasing popularity in recent years, however, surprisingly little commentary exists about them within the scholarly literature. In this work, I describe a survey of certificate and baccalaureate programs that I performed in 2008 in order to develop basic, descriptive data on programs’ age, size, and graduation rates; departmental location; curricular requirements; online offerings; and instructor status and qualifications. In performing this research, I apply recent insights from neosophistic rhetorical theory and feminist critiques of science to both articulate, and model, a feminist-sophistic methodology. I also suggest in this work that technical communication certificates can be theorized as a particularly sophistic credential for a particularly sophistic field, and I discuss the implications of neosophistic theory for certificate program design and administration.
Abstract:
The purpose of this research was to develop a working physical model of the focused plenoptic camera, together with software that can process the measured image intensity, reconstruct it into a full-resolution image, and derive a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement. It can also computationally refocus an image by adjusting the patch size used in reconstruction. The published methods have been vague and conflicting, so the motivation behind this research was to decipher the existing work in order to develop a working proof-of-concept model. This thesis outlines the theory of plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map is created by cross-correlating adjacent sub-images formed by the microlenslet array (MLA). The full-resolution image is reconstructed by taking a patch from each MLA sub-image and piecing the patches together like a puzzle; the patch size determines which object plane will be in focus. The thesis also gives a rigorous account of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help develop the algorithms written to create a rendered image and its depth map. Finally, using these algorithms and the knowledge gained in developing the plenoptic camera, a working experimental system was built that successfully generated a rendered image and its corresponding depth map.
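The patch-and-tile reconstruction described above can be sketched in a few lines. This is a toy version under an assumed geometry (S x S sub-images of n x n pixels on a square sensor), not the thesis's actual rendering code:

```python
import numpy as np

# Sketch of patch-based full-resolution rendering for a focused plenoptic
# camera: take a p x p patch from the centre of each MLA sub-image and tile
# the patches. The S x S grid of n x n sub-images is an assumed toy layout.

def render(raw, n, p):
    """raw: (S*n, S*n) sensor image of S x S sub-images; returns (S*p, S*p)."""
    S = raw.shape[0] // n
    out = np.empty((S * p, S * p), dtype=raw.dtype)
    lo = (n - p) // 2                      # centre offset inside each sub-image
    for i in range(S):
        for j in range(S):
            patch = raw[i*n+lo:i*n+lo+p, j*n+lo:j*n+lo+p]
            out[i*p:(i+1)*p, j*p:(j+1)*p] = patch
    return out

raw = np.arange(8 * 8, dtype=float).reshape(8, 8)   # 2x2 sub-images of 4x4
print(render(raw, n=4, p=2).shape)                  # (4, 4)
```

Varying `p` is exactly the computational refocusing the abstract mentions: a different patch size brings a different object plane into focus.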
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) to the simulation of fluid flow through the porous filter wall of a disordered medium. The diesel particulate filter (DPF) is one example of such a medium: developed as a cutting-edge technology to reduce harmful particulate matter in engine exhaust, its porous filter wall traps soot particles during after-treatment of the exhaust gas. To examine the phenomena inside the DPF, researchers are turning to the Lattice Boltzmann Method as a promising alternative simulation tool. LBM is a comparatively new numerical scheme that can simulate single-component single-phase and single-component multi-phase fluid flow; it is also an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside a porous micro-structure using LBM. First, the theory behind the development of LBM is discussed. The evolution of LBM is usually traced to Lattice Gas Cellular Automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all the simulations are conducted in two dimensions, the equations are developed with reference to the D2Q9 (two-dimensional, nine-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted with air and CO2 as the fluids. The numerical model is explained with a flowchart and the coding steps, and the code is implemented in MATLAB. The different types of boundary conditions and their importance are discussed separately, and the equations specific to each boundary condition are derived. The pressure and velocity contours over the porous domain are studied and recorded, and the results are compared with published work.
The permeability values obtained in this study fit the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
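The core of the D2Q9 scheme the report describes is the equilibrium distribution. A minimal sketch of that one step follows (in Python, whereas the report's code is in MATLAB); it reproduces only the equilibrium populations, not the full streaming and collision loop:

```python
import numpy as np

# Sketch of the D2Q9 equilibrium distribution used in LBM. The weights and
# discrete velocities are the standard D2Q9 set; this is not the report's code.

w = np.array([4/9] + [1/9]*4 + [1/36]*4)           # D2Q9 lattice weights
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # discrete velocities

def f_eq(rho, u):
    """f_i^eq = w_i * rho * (1 + 3 e_i.u + 4.5 (e_i.u)^2 - 1.5 u.u)."""
    eu = e @ u
    return w * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*(u @ u))

feq = f_eq(rho=1.0, u=np.array([0.05, 0.0]))
print(feq.sum())           # 1.0: equilibrium conserves density
print(feq @ e)             # [0.05, 0]: and momentum
```

The moment checks in the last two lines are exact by construction of the D2Q9 weights, which is why the equilibrium step conserves mass and momentum identically.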
Abstract:
Amorphous carbon has been investigated for a long time. Since its carbon atoms are randomly oriented, its density depends on the position of each atom, and knowing this density is important for modeling advanced carbon materials in the future. Two methods were used to create the initial structures of amorphous carbon. One is the random-placement method, which randomly locates 100 carbon atoms in a cubic lattice. The other is the liquid-quench method, which uses a reactive force field (ReaxFF) to rapidly cool a system of 100 carbon atoms from the melting temperature. Density functional theory (DFT) was then used to refine the position of each carbon atom and the dimensions of the cell boundaries so as to minimize the ground-state energy of the structure. The average densities of the amorphous carbon structures created by the random-placement and liquid-quench methods are 2.59 and 2.44 g/cm3, respectively; both are in good agreement with previous work. In addition, the final structure generated by the liquid-quench method has the lower energy.
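The reported densities imply a definite cell size for 100 atoms. As a consistency check (not a calculation from the thesis), the cubic-cell edge corresponding to the liquid-quench density of 2.44 g/cm3 can be recovered from density = N*m_C/L^3:

```python
# Consistency check on the reported density: solve the cubic-cell edge L from
# density = N * m_C / L^3 for N = 100 carbon atoms at 2.44 g/cm^3.
# This arithmetic is illustrative; the thesis does not report the cell size here.

AMU_G = 1.66054e-24          # atomic mass unit in grams
M_C = 12.011 * AMU_G         # mass of one carbon atom, g
N = 100

def density_g_cm3(edge_angstrom):
    vol_cm3 = (edge_angstrom * 1e-8) ** 3
    return N * M_C / vol_cm3

edge = (N * M_C / 2.44) ** (1/3) * 1e8   # cell edge in Angstrom for 2.44 g/cm^3
print(round(edge, 2))                    # ~9.35 Angstrom
print(round(density_g_cm3(edge), 2))     # 2.44
```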
Abstract:
Since the UsedSoft ruling of the CJEU in 2012, there has been the distinct feeling that – like the big bang – UsedSoft signals the start of a new beginning. As we enter this brave new world, the Copyright Directive will be read anew: misalignments in the treatment of physical and digital content will be resolved; accessibility and affordability for consumers will be heightened; and lock-in will be reduced as e-exhaustion takes hold. With UsedSoft as a precedent, the Court can do nothing but keep expanding its own ruling. For big bang theorists, it is only a matter of time until the digital first sale meteor strikes non-software downloads also. This paper looks at whether the UsedSoft ruling could indeed be the beginning of a wider doctrine of e-exhaustion, or if it is simply a one-shot comet restrained by provisions of the Computer Program Directive on which it was based. Fighting the latter corner, we have the strict word of the law; in the UsedSoft ruling, the Court appears to willingly bypass the international legal framework of the WCT. As far as expansion goes, the Copyright Directive was conceived specifically to implement the WCT, thus the legislative intent is clear. The Court would not, surely, invoke its modicum of creativity there also... With perhaps undue haste in a digital market of many unknowns, it seems this might well be the case. Provoking the big bang theory of e-exhaustion, the UsedSoft ruling can be read as distinctly purposive, but rather than having copyright norms in mind, the standard for the Court is the same free movement rules that underpin the exhaustion doctrine in the physical world. With an endowed sense of principled equivalence, the Court clearly wishes the tangible and intangible rules to be aligned. Against the backdrop of the European internal market, perhaps few legislative instruments would staunchly stand in its way. With firm objectives in mind, the UsedSoft ruling could be a rather disruptive meteor indeed.
Abstract:
We carry out lattice simulations of a cosmological electroweak phase transition for a Higgs mass mh ≈ 126 GeV. The analysis is based on a dimensionally reduced effective theory for an MSSM-like scenario including a relatively light coloured SU(2)-singlet scalar, referred to as a right-handed stop. The non-perturbative transition is stronger than in 2-loop perturbation theory, and may offer a window for electroweak baryogenesis. The main remaining uncertainties concern the physical value of the right-handed stop mass, which according to our analysis could be as high as mR ≈ 155 GeV; a more precise effective theory derivation and vacuum renormalization than available at present are needed for confirming this value.
Abstract:
Researchers suggest that personalization on the Semantic Web eventually adds up to a Web 3.0. In this Web, personalized agents, rather than humans, process and thus generate the biggest share of information. In the sense of emergent semantics, which supplements the traditional formal semantics of the Semantic Web, this is well conceivable. An emergent Semantic Web built on fuzzy grassroots ontologies can be accomplished by inducing knowledge from users' common parlance in mutual Web 2.0 interactions [1]. These ontologies can also be matched against existing Semantic Web ontologies to create comprehensive top-level ontologies. If augmented with information in the form of restrictions and associated reliability (Z-numbers) [2], this collection of fuzzy ontologies constitutes an important basis for an implementation of Zadeh's restriction-centered theory of reasoning and computation (RRC) [3]. By considering the real world's fuzziness, RRC differs from traditional approaches in that it can handle restrictions described in natural language. A restriction is an answer to a question about the value of a variable, such as the duration of an appointment. In addition to mathematically well-defined answers, RRC can likewise deal with unprecisiated answers such as "about one hour." Inspired by mental functions, it constitutes an important basis for leveraging present-day Web efforts into a natural Web 3.0. Based on natural-language information, RRC may be accomplished with Z-number calculation to achieve personalized Web reasoning and computation. Finally, through their understanding of natural language, Web agents can react to humans more intuitively and thus generate and process information.
Abstract:
Abelian and non-Abelian gauge theories are of central importance in many areas of physics. In condensed matter physics, Abelian U(1) lattice gauge theories arise in the description of certain quantum spin liquids. In quantum information theory, Kitaev's toric code is a Z(2) lattice gauge theory. In particle physics, Quantum Chromodynamics (QCD), the non-Abelian SU(3) gauge theory of the strong interactions between quarks and gluons, is nonperturbatively regularized on a lattice. Quantum link models extend the concept of lattice gauge theories beyond the Wilson formulation, and are well suited for both digital and analog quantum simulation using ultracold atomic gases in optical lattices. Since quantum simulators do not suffer from the notorious sign problem, they open the door to studies of the real-time evolution of strongly coupled quantum systems, which are impossible with classical simulation methods. A plethora of interesting lattice gauge theories suggests itself for quantum simulation, which should allow us to address very challenging problems, ranging from confinement and deconfinement, or chiral symmetry breaking and its restoration at finite baryon density, to color superconductivity and the real-time evolution of heavy-ion collisions, first in simpler model gauge theories and ultimately in QCD.
Abstract:
In order to fully describe the construct of empowerment and to determine possible measures for this construct in racially and ethnically diverse neighborhoods, a qualitative study based on Grounded Theory was conducted at both the individual and collective levels. Participants in the study included 49 grassroots experts on community empowerment, who were interviewed through semi-structured interviews and focus groups. The researcher also conducted field observations as part of the research protocol. The results of the study identified benchmarks of individual and collective empowerment and hundreds of possible markers of collective empowerment applicable in diverse communities. Results also indicated that community involvement is essential in the selection and implementation of proper measures. Additional findings were that the construct of empowerment involves specific principles of empowering relationships and particular motivational factors. All of these findings lead to a two-dimensional model of empowerment based on the concepts of relationships among members of a collective body and the collective body's desire for socio-political change. These results suggest that the design, implementation, and evaluation of programs that foster empowerment must be based on collaborative ventures between the population being served and program staff because of the interactive, synergistic nature of the construct. In addition, empowering programs should embrace specific principles and processes of individual and collective empowerment in order to maximize their effectiveness and efficiency. And finally, the results suggest that collaboratively choosing markers to measure the processes and outcomes of empowerment in the main systems and populations living in today's multifaceted communities is a useful mechanism to determine change.
Abstract:
(1) A mathematical theory for computing the probabilities of various nucleotide configurations is developed, and the probability of obtaining the correct phylogenetic tree (model tree) from sequence data is evaluated for six phylogenetic tree-making methods (UPGMA, distance Wagner method, transformed distance method, Fitch-Margoliash's method, maximum parsimony method, and compatibility method). The number of nucleotides (m*) necessary to obtain the correct tree with a probability of 95% is estimated with special reference to the human, chimpanzee, and gorilla divergence. m* is at least 4,200, but the availability of outgroup species greatly reduces m* for all methods except UPGMA. m* increases if transitions occur more frequently than transversions, as in the case of mitochondrial DNA. (2) A new tree-making method called the neighbor-joining method is proposed. This method is applicable to either distance data or character-state data. Computer simulation has shown that the neighbor-joining method is generally better than UPGMA, Farris' method, Li's method, and the modified Farris method at recovering the true topology when distance data are used. A related method, the simultaneous partitioning method, is also discussed. (3) The maximum likelihood (ML) method for phylogeny reconstruction under the assumption of both constant and varying evolutionary rates is studied, and a new algorithm for obtaining the ML tree is presented. This method gives a tree similar to that obtained by UPGMA when a constant evolutionary rate is assumed, whereas it gives a tree similar to those obtained by the maximum parsimony method and the neighbor-joining method when a varying evolutionary rate is assumed.
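The pair-selection step at the heart of the neighbor-joining method can be sketched compactly: join the pair (i, j) minimizing Q(i, j) = (n - 2) d(i, j) - Σ_k d(i, k) - Σ_k d(j, k). The 4-taxon distance matrix below is an illustrative example, not data from this work:

```python
import numpy as np
from itertools import combinations

# Sketch of the neighbor-joining pair-selection criterion:
# Q(i, j) = (n - 2) * d(i, j) - rowsum(i) - rowsum(j); join the minimizing pair.
# The distance matrix is a toy example with two obvious "cherries" (0,1) and (2,3).

def nj_pick(d):
    n = d.shape[0]
    r = d.sum(axis=1)                     # row sums of the distance matrix
    return min(combinations(range(n), 2),
               key=lambda ij: (n - 2) * d[ij] - r[ij[0]] - r[ij[1]])

d = np.array([[0., 2., 7., 7.],
              [2., 0., 7., 7.],
              [7., 7., 0., 2.],
              [7., 7., 2., 0.]])
print(nj_pick(d))                         # (0, 1): one of the two cherries
```

After joining a pair, the full algorithm replaces it with a new node, recomputes distances, and repeats until the tree is resolved; only the selection criterion is shown here.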
Abstract:
We review lattice results related to pion, kaon, D- and B-meson physics with the aim of making them easily accessible to the particle-physics community. More specifically, we report on the determination of the light-quark masses and the form factor f+(0) arising in the semileptonic K → π transition at zero momentum transfer, as well as the ratio fK/fπ of decay constants and its consequences for the CKM matrix elements Vus and Vud. Furthermore, we describe the results obtained on the lattice for some of the low-energy constants of SU(2)L × SU(2)R and SU(3)L × SU(3)R Chiral Perturbation Theory, and review the determination of the BK parameter of neutral kaon mixing. The inclusion of heavy-quark quantities significantly expands the FLAG scope with respect to the previous review. We therefore focus here on D- and B-meson decay constants, form factors, and mixing parameters, since these are most relevant for the determination of CKM matrix elements and the global CKM unitarity-triangle fit. In addition, we review the status of lattice determinations of the strong coupling constant αs.