7 results for Quantum many-body systems
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Quantum materials are many-body systems displaying emergent phenomena caused by quantum collective behaviour, such as superconductivity, charge density waves, the fractional quantum Hall effect, and exotic magnetism. Among quantum materials, two families have recently attracted attention: kagome metals and Kitaev materials. Kagome metals have a unique crystal structure whose layers form a kagome lattice, a network of corner-sharing triangles. The interplay of superconductivity, magnetism, and charge-ordered states such as the Charge Density Wave (CDW) gives rise to unexpected physical phenomena in these materials, such as a giant Anomalous Hall Effect (AHE) and possibly Majorana fermions. Kitaev materials are a class of quantum materials with a unique spin model named after Alexei Kitaev. They host fractionalized excitations, Majorana fermions and non-abelian anyons, both of which might be used in quantum computing. Furthermore, they provide a realistic framework for the realization of a quantum spin liquid (QSL), in which quantum fluctuations produce long-range entanglement between electronic states despite the lack of classical magnetic ordering. In my research, I performed several nuclear magnetic resonance (NMR), nuclear quadrupole resonance (NQR), and muon spin spectroscopy (µSR) experiments to explain and unravel novel phases of matter within these unusual families of materials. NMR proved to be an excellent tool for studying these materials' local electronic structures and magnetic properties. Using NMR, I was able to determine, for the first time, the structure of a novel kagome superconductor, RbV3Sb5, below the CDW transition, and to highlight the role of chemical doping in the CDW phase of AV3Sb5 superconductors. µSR was used to investigate the effect of doping on kagome material samples, in order to study the presence and behaviour of an anomalous phase that develops at low temperatures and is possibly related to time-reversal symmetry breaking.
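For context, the spin model named after Kitaev is conventionally written, on the honeycomb lattice, with bond-dependent Ising couplings (the abstract itself does not spell the Hamiltonian out, so it is given here only as a reference form):

    H = -\sum_{\langle i,j \rangle_\gamma} K_\gamma \, S_i^\gamma S_j^\gamma, \qquad \gamma \in \{x, y, z\}

Each bond ⟨i,j⟩ of type γ couples only the γ-components of the two spins; it is this exchange frustration that drives the fractionalization into Majorana fermions and anyons mentioned above.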
Abstract:
Despite the many issues faced in the past, the evolution of silicon technology has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable with the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely limit the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural exploration and validation of design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of deep sub-micron technology, however, severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in the NTC regime: memory operation, in particular, becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when workload requirements allow it. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture: by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
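A minimal C sketch of the kind of workload-driven voltage-scaling policy the abstract alludes to (the slack threshold, the set_memory_vdd() hook, and the error-tolerance flag are illustrative assumptions, not the thesis' actual design):

    #include <stdbool.h>

    enum vdd_level { VDD_NOMINAL, VDD_NEAR_THRESHOLD };

    /* Platform-specific hook (assumed): programs the memory voltage rail. */
    void set_memory_vdd(enum vdd_level level);

    /* Scale the memory voltage down aggressively only when there is
     * timing slack AND the stored data can tolerate occasional bit errors. */
    void tune_memory_voltage(double slack_ratio, bool data_is_error_tolerant)
    {
        if (slack_ratio > 0.25 && data_is_error_tolerant)
            set_memory_vdd(VDD_NEAR_THRESHOLD);  /* save energy, accept risk */
        else
            set_memory_vdd(VDD_NOMINAL);         /* prioritize reliability */
    }

The point of a hybrid memory architecture is to make the low-voltage branch safe, e.g. by keeping critical data in reliably powered banks while error-tolerant data occupies the aggressively scaled ones.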
Abstract:
In this work, we discuss some theoretical topics related to many-body physics in ultracold atomic and molecular gases. First, we present a comparison between experimental data and theoretical predictions in the context of quantum emulators of quantum field theories, finding good agreement, which supports the effectiveness of such simulators. In the second and third parts, we investigate several many-body properties of atomic and molecular gases confined in one dimension.
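For the one-dimensional setting, a standard point of reference (given here only as an illustration; the thesis may study different models) is the Lieb-Liniger Hamiltonian for N bosons with contact interactions:

    H = -\frac{\hbar^2}{2m} \sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2} + g \sum_{i<j} \delta(x_i - x_j)

where g sets the interaction strength. The model is exactly solvable by Bethe ansatz, which makes it a natural benchmark for many-body predictions in one dimension.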
Abstract:
During the last few decades an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space that must be explored to find the best solution has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first one is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second work exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increasing simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and overcoming when possible, some of the challenges introduced by the many-core design paradigm.
Abstract:
The Peer-to-Peer (P2P) network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on the one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties at the system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviours. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, which consists in having each peer periodically try to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithms is to have low-performance nodes copy high-performance ones (see the sketch after this abstract). The algorithm is run locally by every node and leads to an evolution of the network both in its topology and in the nodes' strategies. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our effort into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (its tit-for-tat-like mechanism) to the swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew inspiration from the cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair, which has been evaluated through simulation and has shown its ability to enforce fairness and to tackle free-riding and cheating nodes.
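A minimal C sketch of the SLAC-style update step described above (the data layout, the mutation rate, and the binary strategy are illustrative assumptions; the actual algorithm also copies and rewires the node's links, which is omitted here):

    #include <stdlib.h>

    #define N_NODES 1024

    typedef struct {
        int    strategy;  /* e.g. 1 = cooperate, 0 = defect */
        double utility;   /* performance accumulated by interacting with peers */
    } node_t;

    /* One periodic, local SLAC-style update: compare utility with a
     * randomly chosen peer and imitate it if it performs better. */
    void slac_update(node_t nodes[], int self, double mutation_rate)
    {
        int other = rand() % N_NODES;
        if (other == self)
            return;

        if (nodes[other].utility > nodes[self].utility)
            nodes[self].strategy = nodes[other].strategy;

        /* Occasional mutation keeps the strategy space explored and
         * prevents the population from locking into a single behaviour. */
        if ((double)rand() / RAND_MAX < mutation_rate)
            nodes[self].strategy = !nodes[self].strategy;
    }

Run over many rounds, this imitate-the-fitter rule is what lets high cooperation emerge from purely local interactions.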
Abstract:
III-nitrides are wide-band-gap materials that have applications in both electronic and optoelectronic devices. Because of their inherently strong polarization properties, thermal stability, and high breakdown voltage in Al(Ga,In)N/GaN heterostructures, they have emerged as strong candidates for high-power, high-frequency transistors. Moreover, the use of (Al,In)GaN/GaN in solid-state lighting has already proved successful, as shown by the commercialization of light-emitting diodes and lasers in the blue to UV range. However, devices based on these heterostructures suffer from problems associated with structural defects. This thesis primarily focuses on the nanoscale electrical characterization and identification of these defects, their physical origin, and their effect on the electrical and optical properties of the material. Since these defects are nano-sized, the thesis deals with understanding the results obtained by nano- and micro-characterization techniques such as atomic force microscopy (AFM), current-AFM, scanning Kelvin probe microscopy (SKPM), electron-beam-induced current (EBIC), and scanning tunneling microscopy (STM). This allowed us to probe individual defects (dislocations and cracks) and unveil their electrical properties. Taking further advantage of these techniques, the conduction mechanism in two-dimensional electron gas (2DEG) heterostructures was understood and modeled. Secondly, the origin of photoluminescence was investigated in depth. A radiative transition related to confined electrons and photoexcited holes in 2DEG heterostructures was identified, and many-body effects in nitrides under strong optical excitation were elucidated.
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders of magnitude of speedup and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability, and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform; the OpenMP frontend is extended to interact with them.
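A minimal C/OpenMP sketch of the multi-level (nested) parallelism such a runtime must support efficiently (this uses only the standard OpenMP API, not the thesis' extended frontend or runtime):

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        omp_set_nested(1);  /* allow teams to be created inside teams */

        #pragma omp parallel num_threads(2)       /* outer team */
        {
            int outer = omp_get_thread_num();

            #pragma omp parallel num_threads(4)   /* inner team per outer thread */
            {
                /* On a scratchpad-based embedded many-core, creating and
                 * synchronizing these nested teams is where a lightweight
                 * runtime pays off. */
                printf("outer %d, inner %d\n", outer, omp_get_thread_num());
            }
        }
        return 0;
    }

Compiled with e.g. gcc -fopenmp, this spawns two outer threads, each forking a four-thread inner team: the pattern a multi-level-capable runtime must handle without excessive overhead.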