17 results for 2nd-order perturbation-theory
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Chalcogenides are chemical compounds containing at least one of the following three chemical elements: sulfur (S), selenium (Se), and tellurium (Te). As opposed to other materials, the atomic arrangement of chalcogenides can quickly and reversibly interchange between crystalline, amorphous and liquid phases; they are therefore also called phase change materials. As a result, the thermal, optical, structural, electronic and electrical properties of chalcogenides change pronouncedly with the phase they are in, leading to a host of applications in different areas. The noticeable optical reflectivity difference between the crystalline and amorphous phases has allowed optical storage devices to be made. Their very high thermal conductivity and heat of fusion provide remarkable benefits for thermal energy storage for heating and cooling in residential and commercial buildings. The outstanding resistivity difference between the crystalline and amorphous phases led to a significant improvement of solid state storage devices, from power consumption to re-writability, to say nothing of scalability. This work focuses on a better understanding, from a simulation standpoint, of the electronic, vibrational and optical properties of the crystalline phases (hexagonal and face-centered cubic). The electronic properties are calculated with density functional theory combined with pseudo-potentials, plane waves and the local density approximation. The phonon dispersion and spectrum are computed using density functional perturbation theory. As for the optical constants, the real part of the dielectric function is calculated through the Drude-Lorentz expression, and the imaginary part is obtained from the real part through the Kramers-Kronig transformation. The refractive index and the extinction and absorption coefficients are calculated analytically from the dielectric function. The transmission and reflection coefficients are calculated using the Fresnel equations. All calculated optical constants compare well with the experimental ones.
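As an aside (a minimal sketch, not the thesis code), the standard relations the abstract refers to are easy to state: with a complex dielectric function eps = eps1 + i*eps2, the complex refractive index is n + ik = sqrt(eps), the absorption coefficient is alpha = 4*pi*k/lambda, and the normal-incidence Fresnel reflectance is R = |(n+ik-1)/(n+ik+1)|^2. All Drude-Lorentz parameters below are hypothetical placeholders:

```python
# Minimal sketch: optical constants from a complex dielectric function.
import numpy as np

def optical_constants(eps, wavelength_m):
    """n, k, absorption coefficient (1/m) and normal-incidence reflectance."""
    n_complex = np.sqrt(eps)                 # principal branch gives n + i*k
    n, k = n_complex.real, n_complex.imag
    alpha = 4.0 * np.pi * k / wavelength_m   # absorption coefficient
    r = (n_complex - 1.0) / (n_complex + 1.0)  # Fresnel amplitude, normal incidence
    return n, k, alpha, np.abs(r) ** 2

# Single Drude-Lorentz oscillator at one photon energy (hypothetical values, eV).
photon_ev = 1.5
eps_inf, wp, w0, gamma = 1.0, 5.0, 2.0, 0.1
eps = eps_inf + wp**2 / (w0**2 - photon_ev**2 - 1j * gamma * photon_ev)
wavelength_m = 1.2398e-6 / photon_ev         # lambda(m) from photon energy (eV)
print(optical_constants(eps, wavelength_m))
```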
Abstract:
In this work we investigate the existence of resonances for two-center Coulomb systems with arbitrary charges in two and three dimensions, defining them in terms of generalized complex eigenvalues of a non-selfadjoint deformation of the two-center Schrödinger operator. After describing the bifurcation of the classical system for positive energies, we construct the resolvent kernels of the operators and prove that they can be extended analytically to the second Riemann sheet. The resonances are then defined and studied with numerical methods and perturbation theory.
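For orientation, the definition the abstract alludes to can be written schematically (the textbook formulation, not the thesis's exact statement): the matrix elements of the resolvent
\[
R(E) = (H - E)^{-1}
\]
are analytic for \(\Im E > 0\) and, for suitable \(f, g\), the maps \(E \mapsto \langle f, R(E) g\rangle\) extend meromorphically across the positive real axis onto the second Riemann sheet; resonances are the poles \(E = E_r - \tfrac{i}{2}\Gamma\), \(\Gamma > 0\), of this continuation, equivalently generalized complex eigenvalues of a non-selfadjoint (e.g., complex-dilated) deformation \(H_\theta\) of the two-center operator.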
Abstract:
In this thesis I present a new three-way connection we found between quantum integrability, N=2 supersymmetric gauge theories and black hole perturbation theory. I use the ODE/IM correspondence between Ordinary Differential Equations (ODE) and Integrable Models (IM), first to connect basic integrability functions - Baxter's Q, T and Y functions - to the gauge theory periods. This fundamental identification yields several new results for both theories, for example: an exact nonlinear integral equation (Thermodynamic Bethe Ansatz, TBA) for the gauge periods; an interpretation of the integrability functional relations as new exact R-symmetry relations for the periods; and new formulas for the local integrals of motion in terms of gauge periods. I develop this in full detail for the SU(2) gauge theory with Nf=0,1,2 matter flavours. Again through the ODE/IM correspondence, I connect the mathematically precise definition of quasinormal modes of black holes (which play an important role in gravitational-wave observations) with quantization conditions on the Q and Y functions. In this way I also give a mathematical explanation of the recently found connection between quasinormal modes and N=2 supersymmetric gauge theories. Moreover, a new simple and effective method follows for numerically computing the quasinormal modes - the TBA - which I compare with other standard methods. The spacetimes for which I work this out in full detail are the D3 brane in the simplest Nf=0 case and, for Nf=1,2, a generalization of extremal Reissner-Nordström (charged) black holes. I then begin to treat the Nf=3,4 theories as well, and argue how our integrability-gauge-gravity correspondence can generalize to other types of black holes in either asymptotically flat (Nf=3) or Anti-de-Sitter (Nf=4) spacetime. Finally, I begin to show the extension to a 4-fold correspondence including Conformal Field Theory (CFT), through the renowned AdS/CFT correspondence.
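For context, equations of the TBA type mentioned here generically take the schematic form (standard in integrable models; the thesis's exact kernels and driving terms differ case by case):
\[
\epsilon(\theta) = r\cosh\theta - \int_{-\infty}^{+\infty} \frac{d\theta'}{2\pi}\,\varphi(\theta - \theta')\,\ln\!\left(1 + e^{-\epsilon(\theta')}\right),
\]
a nonlinear integral equation solved efficiently by iteration; the Q and Y functions are reconstructed from the pseudo-energy \(\epsilon(\theta)\), and quantization conditions on them then select the quasinormal frequencies.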
Excitonic properties of transition metal oxide perovskites and workflow automatization of GW schemes
Abstract:
The Many-Body Perturbation Theory approach is among the most successful theoretical frameworks for the study of excited-state properties. It makes it possible to describe excitonic interactions, which play a fundamental role in the optical response of insulators and semiconductors. The first part of the thesis focuses on the study of the quasiparticle, optical and excitonic properties of \textit{bulk} Transition Metal Oxide (TMO) perovskites using a G$_0$W$_0$+Bethe-Salpeter Equation (BSE) approach. A representative set of 14 compounds has been selected, including 3d, 4d and 5d perovskites. An approximation of the BSE scheme, based on an analytic diagonal expression for the inverse dielectric function, is used to compute the exciton binding energies and is carefully benchmarked against the standard BSE results. In 2019 an important breakthrough was achieved with the synthesis of ultrathin SrTiO3 films down to the monolayer limit. This allows us to explore how the quasiparticle and optical properties of SrTiO3 evolve from the bulk to the two-dimensional limit. The electronic structure is computed with the G$_0$W$_0$ approach: we prove that the inclusion of the off-diagonal self-energy terms is required to avoid unphysical band dispersions. The excitonic properties are investigated beyond the optical limit, at finite momenta. Lastly, a study of the optical response under pressure of the topological nodal-line semimetal ZrSiS is presented, in conjunction with experimental results from the group of Prof. Dr. Kuntscher at the University of Augsburg. The second part of the thesis discusses the implementation of a workflow to automate G$_0$W$_0$ and BSE calculations with the VASP software. The workflow adopts a convergence scheme based on an explicit basis-extrapolation approach [J. Klimeš \textit{et al.}, Phys. Rev. B 90, 075125 (2014)], which reduces the number of intermediate calculations required to reach convergence and gives an explicit estimate of the error associated with the basis-set truncation.
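To illustrate the extrapolation idea (a minimal sketch following the 1/N basis-set dependence discussed in the cited Klimeš et al. paper, not the workflow's actual code; all numbers below are hypothetical):

```python
# Minimal sketch: fit E(N) ~ E_inf + A/N to a few calculations at increasing
# basis size N, then report the extrapolated value and the finite-basis error.
import numpy as np

def extrapolate_qp_energy(n_basis, energies):
    """Linear least-squares fit of energy vs 1/N; returns (E_inf, slope A)."""
    x = 1.0 / np.asarray(n_basis, dtype=float)
    A = np.vstack([np.ones_like(x), x]).T
    coef, *_ = np.linalg.lstsq(A, np.asarray(energies, dtype=float), rcond=None)
    return coef[0], coef[1]

# Hypothetical QP gaps (eV) from three runs with growing plane-wave basis:
n_pw = [800, 1200, 1600]
gaps = [3.42, 3.51, 3.55]
e_inf, slope = extrapolate_qp_energy(n_pw, gaps)
print(f"extrapolated gap: {e_inf:.3f} eV "
      f"(estimated finite-basis error at N=1600: {slope / 1600:.3f} eV)")
```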
Abstract:
A 2D Unconstrained Third-Order Shear Deformation Theory (UTSDT) is presented for the evaluation of tangential and normal stresses in moderately thick functionally graded conical and cylindrical shells subjected to mechanical loadings. Several types of graded materials are investigated. The functionally graded material consists of ceramic and metallic constituents, described by a four-parameter power-law function. The UTSDT allows a finite transverse shear stress at the top and bottom surfaces of the graded shell. In addition, the initial curvature effect included in the formulation leads to a generalization of the present theory (GUTSDT). The Generalized Differential Quadrature (GDQ) method is used to discretize the derivatives in the governing equations, the external boundary conditions and the compatibility conditions. Transverse shear and normal stresses are also calculated by integrating the three-dimensional equilibrium equations in the thickness direction. In this way, the six components of the stress tensor at a point of the conical or cylindrical shell or panel can be given. The initial curvature effect and the role of the power-law functions are shown for a wide range of functionally graded conical and cylindrical shells under various loading and boundary conditions. Finally, numerical examples from the available literature are worked out.
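For readers unfamiliar with GDQ, a minimal sketch of its core ingredient, the closed-form differentiation weights on an arbitrary 1D grid (the textbook construction, not the thesis implementation):

```python
# Minimal sketch: first-order GDQ differentiation matrix on an arbitrary grid,
# so that d/dx f(x_i) ~ sum_j D[i, j] f(x_j).
import numpy as np

def gdq_first_derivative_matrix(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    a = diff.prod(axis=1)              # a[i] = prod_{k != i} (x_i - x_k)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = a[i] / (a[j] * (x[i] - x[j]))
        D[i, i] = -D[i].sum()          # rows of D sum to zero
    return D

# Quick check on a Chebyshev grid: differentiate sin(x) on [-1, 1].
x = np.cos(np.pi * np.arange(11) / 10)
D = gdq_first_derivative_matrix(x)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))  # tiny: spectral accuracy
```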
Abstract:
Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step - beta-reduction - involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory for calculi with basic (higher-order) primitives, this indirect approach falls short for higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We therefore observe that foundational studies for higher-order process calculi must be carried out directly on them, exploiting their peculiarities. This dissertation contributes to such foundational studies. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of features universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
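The basic computational step in question can be displayed schematically (standard higher-order pi-calculus notation, given as a generic illustration rather than the dissertation's exact core calculus):
\[
\overline{a}\langle P \rangle.\,Q \;\mid\; a(x).\,R \;\longrightarrow\; Q \;\mid\; R\{P/x\},
\]
where the process \(P\) itself is sent along the name \(a\) and instantiated for the variable \(x\) in the receiver, mirroring beta-reduction in the lambda-calculus.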
Abstract:
The research work concerns an analysis of the foundations of Quantum Field Theory carried out from an educational perspective. The whole research has been driven by two questions:
• How does the concept of object change when moving from classical to contemporary physics?
• How are the concepts of field and interaction shaped and conceptualized within contemporary physics? What makes quantum fields and interactions similar to, and what makes them different from, the classical ones?
The work has been developed through several studies:
1. A study aimed at analyzing the formal and conceptual structures characterizing the description of continuous systems that remain invariant in the transition from classical to contemporary physics.
2. A study aimed at analyzing the changes in the meanings of the concepts of field and interaction in the transition to quantum field theory.
3. A detailed study of the Klein-Gordon equation aimed at analyzing, in a case considered emblematic, some interpretative (conceptual and didactical) problems in the concept of field that university textbooks do not address explicitly.
4. A study concerning the application of the “Discipline-Culture” Model elaborated by I. Galili to the analysis of the Klein-Gordon equation, in order to reconstruct the meanings of the equation from a cultural perspective.
5. A critical analysis, in the light of the results of the studies mentioned above, of the existing proposals for teaching basic concepts of Quantum Field Theory and particle physics at the secondary school level or in introductory university physics courses.
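For reference, the equation at the centre of studies 3 and 4 is the free Klein-Gordon equation, which in natural units (\(\hbar = c = 1\)) reads
\[
\left(\partial_t^2 - \nabla^2 + m^2\right)\phi(x,t) = 0;
\]
the interpretative problems mentioned above stem from reading \(\phi\) either as a relativistic single-particle wave function or as a classical field to be quantized.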
Abstract:
The importance of organizational issues in assessing the success of international development projects has not yet been fully considered. After a brief overview, in the 1st chapter, of the main actors involved in international cooperation, in the 2nd chapter an analysis of the literature on the definition of project success, focused on success criteria and success factors, is carried out by surveying the contributions of different authors and approaches. Traditionally, projects were perceived as successful when they met time, budget and performance goals, assuming a basic similarity among projects (universalistic approach). However, starting from a non-universalistic approach, the importance of an organization's effectiveness, in terms of Relations Sustainability, emerged as a dimension able to define and assess project success. The identification of the factors influencing the relationships between and inside organizations consequently becomes a priority. In the 3rd chapter, starting from a literature survey, the different analytical approaches to inter- and intra-organization relationships are analysed. They fall into two groups: the first includes studies focused on the type of organizational relationship structure (Supply Chains, Networks, Clusters and Industrial Districts); the second includes approaches related to general theories of firm-relationship interpretation (Transaction Cost Economics, Resource-Based View, Organization Theory). The variables and logical frameworks provided by these different theoretical contributions are compared and classified in order to find possible connections and/or juxtapositions. Since an exhaustive collection of the literature on the subject is impossible, the main goal is to underline the existence of potentially overlapping and/or integrating approaches by examining the contributions of representative authors. The survey showed, first of all, many variables in common between approaches coming from different disciplines; furthermore, the non-overlapping variables can be integrated, contributing to a broader picture of the variables influencing organizational relations; in particular, a theoretical design for the identification of connections between inter- and intra-organizational relations was made possible. The results obtained in the 3rd chapter help to define a general theoretical framework linking the different interpretative variables. Based on extensive research contributions on the factors influencing relations between organizations, the 4th chapter expands the analysis of the influence of variables such as Human Resource Management, Organizational Climate, Psychological Contract and KSA (Knowledge, Skills, Abilities) on relation sustainability. A detailed analysis of these relations is provided and research hypotheses are built. According to this new framework, in the 5th chapter a statistical analysis is performed to qualify and quantify the influence of Organizational Climate on Relations Sustainability. To this end, Structural Equation Modeling (SEM) is adopted as the method for the definition of the latent variables and the measurement of their relations. The results obtained are satisfactory.
Finding an effective strategy to motivate respondents to participate in the survey currently seems to be one of the major obstacles to implementing the analysis, since organizational performance is not specifically required by project evaluation guidelines and represents an increase in project-related transaction costs. Its explicit introduction into project presentation guidelines should be explored as an opportunity to increase the chances of success of these projects.
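For reference, the class of models adopted in the 5th chapter can be summarized in standard LISREL-type notation (a generic formulation, not the thesis's exact specification):
\[
\mathbf{x} = \Lambda_x \boldsymbol{\xi} + \boldsymbol{\delta}, \qquad
\mathbf{y} = \Lambda_y \boldsymbol{\eta} + \boldsymbol{\varepsilon}, \qquad
\boldsymbol{\eta} = B\boldsymbol{\eta} + \Gamma\boldsymbol{\xi} + \boldsymbol{\zeta},
\]
where the latent exogenous variables \(\boldsymbol{\xi}\) (here, Organizational Climate) and the latent endogenous variables \(\boldsymbol{\eta}\) (here, Relations Sustainability) are measured through the indicators \(\mathbf{x}\) and \(\mathbf{y}\), and the coefficient matrix \(\Gamma\) carries the structural effect being quantified.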
Abstract:
Most electronic systems can be described, in a very simplified way, as an assemblage of analog and digital components put together to perform a certain function. Nowadays, there is an increasing tendency to reduce the analog components and to replace them by operations performed in the digital domain. This tendency has led to the emergence of new electronic systems that are more flexible, cheaper and more robust. However, no matter how much digital processing is implemented, there will always be an analog part to deal with, and thus the step of converting digital signals into analog signals and vice versa cannot be avoided. This conversion can be more or less complex depending on the characteristics of the signals. Thus, even if it is desirable to replace functions carried out by analog components with digital processes, it is equally important to do so in a way that simplifies the conversion from digital to analog signals and vice versa. In the present thesis, we have studied strategies based on increasing the amount of processing in the digital domain in such a way that the implementation of the analog hardware stages can be simplified. To this aim, we have proposed the use of very coarsely quantized signals, i.e. 1-bit, for the acquisition and generation of particular classes of signals.
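The abstract does not name a specific modulation scheme; as one plausible illustration, a first-order delta-sigma modulator is the textbook way to produce a 1-bit stream whose low-frequency content tracks an oversampled input. A minimal sketch under that assumption (not the thesis's architecture):

```python
# Minimal sketch: first-order delta-sigma modulation to a +/-1 bitstream,
# followed by a crude moving-average low-pass to recover the input.
import numpy as np

def delta_sigma_1bit(x):
    """Integrate the input-minus-feedback error, quantize to +/-1."""
    y = np.empty_like(x)
    integrator = 0.0
    for i, sample in enumerate(x):
        feedback = y[i - 1] if i else 0.0
        integrator += sample - feedback
        y[i] = 1.0 if integrator >= 0.0 else -1.0
    return y

osr = 64                              # assumed oversampling ratio
t = np.arange(4096) / 4096.0          # 1 s of signal at 4096 Hz
x = 0.5 * np.sin(2 * np.pi * 5.0 * t)
bits = delta_sigma_1bit(x)
recovered = np.convolve(bits, np.ones(osr) / osr, mode="same")
print(np.sqrt(np.mean((recovered - x) ** 2)))  # small residual vs. input
```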
Abstract:
This thesis presents several techniques designed to drive a swarm of robots in an a-priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. Each theory, from its own point of view, exploits the emergent behaviour that comes from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent up to date across the swarm. Swarm Intelligence has been applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of keeping the units in a desired, controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm like Consensus.
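As a concrete illustration of the combination (a hypothetical sketch, not the thesis's exact control law: gains, names and the formation-offset mechanism are assumptions):

```python
# Minimal sketch: one update step combining a PSO velocity rule (navigation)
# with a consensus term (formation keeping) for agents with positions X,
# velocities V, adjacency matrix Adj and desired formation offsets.
import numpy as np

def swarm_step(X, V, pbest, gbest, Adj, offsets,
               w=0.7, c1=1.5, c2=1.5, c3=0.3):
    n, d = X.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    # PSO: inertia plus attraction to personal and swarm-wide best positions.
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    # Consensus (agreement protocol) on formation references r_i = x_i - o_i:
    # each agent adds c3 * sum_j a_ij (r_j - r_i).
    r = X - offsets
    V += c3 * (Adj @ r - Adj.sum(axis=1, keepdims=True) * r)
    return X + V, V
```

With a connected, symmetric Adj the consensus term alone would drive all references r_i to a common value, i.e., hold the prescribed formation, while the PSO term steers the group toward the goal.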
Abstract:
This work deals with the theory of Relativity and its diffusion in Italy in the first decades of the 20th century. Not many scientists in Italian universities were active in understanding Relativity, but two of them, Max Abraham and Tullio Levi-Civita, left a deep mark. Max Abraham engaged in a substantial debate with Einstein between 1912 and 1914 about the electromagnetic and gravitational aspects of the theories. Levi-Civita played a fundamental role in giving Einstein the correct mathematical instruments for the formulation of General Relativity from 1915 onward. This work, which does not aim at a mere historical chronicle of the events, highlights two particular perspectives: on the one hand, the importance of the Abraham-Einstein debate for clarifying the basis of Special Relativity, for observing the rigorous logical structure resulting from a fragmentary reasoning sequence, and for understanding Einstein's thinking; on the other hand, the originality of Levi-Civita's approach, quite different from Einstein's, characterized by applying to Special Relativity a method typical of General Relativity and by the attempt to set aside Einstein's two Special Relativity postulates.
Abstract:
In chapters 1 and 2, calcium hydroxide as an impregnation agent before steam explosion of sugarcane bagasse and switchgrass, respectively, was compared with auto-hydrolysis, assessing the effects on enzymatic hydrolysis and on simultaneous saccharification and fermentation (SSF) at high solid concentration of the pretreated solid fraction. In addition, anaerobic digestion of the pretreated liquid fraction was carried out, in order to appraise the effectiveness of calcium hydroxide before steam explosion in a more comprehensive way. As water is an expensive input both in the cultivation of biomass crops and in the subsequent pretreatment, chapter 3 addressed the effects of variable soil moisture on the growth and composition of biomass sorghum. Moreover, the effect of water stress was related to the characteristics of the stem juice for 1st-generation ethanol and of the structural carbohydrates for 2nd-generation ethanol. In chapter 1, calcium hydroxide proved to be a suitable catalyst for sugarcane bagasse before steam explosion, enhancing fibre deconstruction. In chapter 2, the effect of calcium hydroxide on switchgrass showed great potential when ethanol was the focus, whereas acid addition produced a higher methane yield. Regarding chapter 3, during the crop cycle the amounts of cellulose, hemicellulose and AIL changed, causing a decrease in the amount of 2G ethanol. The physical and chemical properties of the biomass led to a lower glucose yield and concentration at the end of enzymatic hydrolysis and, consequently, a lower 2G ethanol concentration at the end of simultaneous saccharification and fermentation, proving that there is a strong relationship between structure, chemical composition, and fermentable sugar yield. The significantly higher concentration of ethanol at the early crop stage could be an important incentive to consider biomass sorghum as a second crop in the season, to be introduced into some agricultural systems, potentially benefiting farmers and, above all, avoiding an exacerbation of the fuel-vs-food debate.
Abstract:
Over the past 30 years, unhealthy diets and lifestyles have increased the incidence of noncommunicable diseases and have driven the spread across the world's population of syndromes such as obesity and other metabolic disorders, reaching pandemic proportions. To cope with this scenario, the food industry has tackled these challenges with different approaches, such as the reformulation of foods, fortification of foods, substitution of ingredients and supplements with healthier ones, reduced animal protein, reduced fats and improved fibre applications. Although the technological quality of these emerging food products is known, the impact they have on the gut microbiota of consumers remains unclear. In the present PhD thesis, work was conducted to study different foods in which conventional industrial and market ingredients were substituted with novel, green-oriented and sustainable ones. The thesis includes eight representative case studies of the most common substitutions/additions/fortifications in dairy, meat, and vegetable products. The products studied were: (i) a set of breads fortified with polyphenol-rich olive fiber, to replace synthetic antioxidants and preservatives; (ii) a set of gluten-free breads fortified with algae powder, to raise the protein content of standard GF products; (iii) different formulations of salami in which nitrates were replaced by ascorbic acid, vegetal-extract antioxidants and nitrate-reducing starter cultures; (iv) a chocolate fiber plus D-Limonene food supplement, as a novel prebiotic formula; (v) hemp seed bran and its alcalase hydrolysate, to be introduced as a supplement; (vi) milk with and without lactose, to evaluate the different impact on the human colonic microbiota of healthy or lactose-intolerant subjects; (vii) lactose-free whey, fermented and/or with added probiotics, to be introduced as an alternative beverage, exploring its impact on the human colonic microbiota of healthy or lactose-intolerant subjects; and (viii) antibiotics, to assess whether maternal amoxicillin affects the colon microbiota of piglets.
Abstract:
Today we live in an age where the internet and artificial intelligence allow us to search for information through impressive amounts of data, opening up revolutionary new ways to make sense of reality and understand our world. However, there is still room for improvement in exploiting the full potential of large amounts of explainable information by distilling it automatically into an intuitive and user-centred explanation. For instance, different people (or artificial agents) may search for and request different types of information in a different order, so it is unlikely that a short explanation can suffice for all needs in the most generic case. Moreover, dumping a large portion of explainable information into a one-size-fits-all representation may also be sub-optimal, as the needed information may be scarce and dispersed across hundreds of pages. The aim of this work is to investigate how to automatically generate (user-centred) explanations from heterogeneous and large collections of data, focusing on the concept of explanation in a broad sense, as a critical artefact for intelligence, whether human or robotic. Our approach builds on and extends Achinstein's philosophical theory of explanations, where explaining is an illocutionary (i.e., broad yet pertinent) act of usefully answering questions. Specifically, we provide the theoretical foundations of Explanatory Artificial Intelligence (YAI), formally defining a user-centred explanatory tool and the space of all possible explanations, or explanatory space, it generates. We present empirical results in support of our theory, showcasing the implementation of YAI tools and strategies for assessing explainability. To justify and evaluate the proposed theories and models, we considered case studies at the intersection of artificial intelligence and law, particularly European legislation. Our tools helped produce better explanations of software documentation and legal texts for humans, and of complex regulations for reinforcement learning agents.
Abstract:
The main topic of this thesis is confounding in linear regression models. It arises when the relationship between an observed process, the covariate, and an outcome process, the response, is influenced by an unmeasured process, the confounder, associated with both. As a consequence, the estimators of the regression coefficients of the measured covariates may be severely biased and less efficient, and may lead to misleading interpretations. Confounding is an issue when the primary target of the work is the estimation of the regression parameters, and the central point of the dissertation is the evaluation of the sampling properties of the parameter estimators. This work aims to extend the spatial confounding framework to general structured settings and to understand the behaviour of confounding as a function of the structural parameters of the data generating process in several scenarios, focusing on the joint covariate-confounder structure. In line with the spatial statistics literature, our purpose is to quantify the sampling properties of the regression coefficient estimators and, in turn, to identify the quantities of the generative mechanism that most strongly impact confounding. Once the sampling properties of the estimator conditional on the covariate process are derived as ratios of dependent quadratic forms in Gaussian random variables, we provide an analytic expression for the marginal sampling properties of the estimator using Carlson's R function. Additionally, we propose a representative quantity for the magnitude of confounding as a proxy for the bias: its first-order Laplace approximation. To conclude, we work under several frameworks considering spatial and temporal data, with specific assumptions on the covariance and cross-covariance functions used to generate the processes involved. This study allows us to claim that the variability of the confounder-covariate interaction and of the covariate plays the most relevant role in determining the principal marker of the magnitude of confounding.
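The mechanism can be made concrete in the simplest unstructured case (a textbook illustration for orientation; the thesis works with general structured covariance models): if the data follow
\[
y = x\beta + z\gamma + \varepsilon
\]
but the confounder \(z\) is omitted from the fitted model, the OLS estimator of \(\beta\) satisfies
\[
\mathbb{E}\big[\hat{\beta} \mid x, z\big] = \beta + (x^{\top}x)^{-1}x^{\top}z\,\gamma,
\]
so the bias is governed by the covariate-confounder association \(x^{\top}z\), precisely the joint covariate-confounder structure whose role the thesis quantifies in spatial and temporal settings.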