12 results for Classical orthogonal polynomials of a discrete variable
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This PhD thesis focuses on studying the classical scattering of massive/massless particles off black holes, and on investigating double copy relations between classical observables in gauge theories and gravity. This is done in the Post-Minkowskian approximation, i.e. a perturbative expansion of observables controlled by the gravitational coupling constant κ = √(32πG_N), with G_N being the Newtonian coupling constant. The investigation is performed by using the Worldline Quantum Field Theory (WQFT), displaying a worldline path integral describing the scattering objects and a QFT path integral in the Born approximation describing the intermediate bosons exchanged in the scattering event by the massive/massless particles. We introduce the WQFT by deriving a relation between the Kosower-Maybee-O'Connell (KMOC) limit of amplitudes and worldline path integrals; we then use it to study the classical Compton amplitude and higher-point amplitudes. We also present an application of our formulation to the case of Hard Thermal Loops (HTL), by explicitly evaluating hard thermal currents in gauge theory and gravity. Next we move to the investigation of the classical double copy (CDC), which is a powerful tool to generate integrands for classical observables related to the binary inspiral problem in General Relativity. In order to use a Bern-Carrasco-Johansson (BCJ) like prescription directly at the classical level, one has to identify a double copy (DC) kernel encoding the locality structure of the classical amplitude. Such a kernel is evaluated by using a theory in which scalar particles interact through bi-adjoint scalars. We show here how to push forward the classical double copy so as to account for spinning particles, in the framework of the WQFT. Here the quantization procedure on the worldline allows us to fully reconstruct the quantum theory on the gravitational side. Finally, we investigate how to describe the scattering of massless particles off black holes in the WQFT.
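For orientation, the Post-Minkowskian expansion referred to above is usually organized around the weak-field split of the metric; in a common convention (which may differ in details from the one adopted in the thesis),

\[ g_{\mu\nu} = \eta_{\mu\nu} + \kappa\, h_{\mu\nu}, \qquad \kappa^2 = 32\pi G_N , \]

so that the n-th Post-Minkowskian order of a classical observable scales as G_N^n, each graviton exchange between the worldlines contributing one power of G_N.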
Abstract:
We analyze the Waring decompositions of the powers of any quadratic form over the field of complex numbers. Our main objective is to provide detailed information about their rank and border rank. These forms are of significant importance because of the classical decomposition expressing the space of polynomials of a fixed degree as a direct sum of the spaces of harmonic polynomials multiplied by a power of the quadratic form. Using the fact that the spaces of harmonic polynomials are irreducible representations of the special orthogonal group over the field of complex numbers, we show that the apolar ideal of the s-th power of a non-degenerate quadratic form in n variables is generated by the set of harmonic polynomials of degree s+1. We also generalize and improve upon some of the results about real decompositions provided by B. Reznick in his notes from 1992, focusing on possibly minimal decompositions and providing new ones, both real and complex. We investigate the rank of the second power of a non-degenerate quadratic form in n variables, which is equal to (n^2+n+2)/2 in most cases. We also study the border rank of any power of an arbitrary ternary non-degenerate quadratic form, which we determine explicitly using techniques of apolarity and a specific subscheme contained in its apolar ideal. Based on results about smoothability, we prove that the smoothable rank of the s-th power of such a form corresponds exactly to its border rank and to the rank of its middle catalecticant matrix, which is equal to (s+1)(s+2)/2.
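The classical decomposition mentioned above is, for a non-degenerate quadratic form q in n variables, the standard splitting (recalled here for context)

\[ \mathbb{C}[x_1,\dots,x_n]_d \;=\; \bigoplus_{0 \le j \le \lfloor d/2 \rfloor} q^{\,j}\, \mathcal{H}_{d-2j}, \]

where \mathcal{H}_k denotes the space of harmonic polynomials of degree k with respect to the Laplacian associated with q.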
Abstract:
By using a symbolic method, known in the literature as the classical umbral calculus, a symbolic representation of Lévy processes is given and a new family of time-space harmonic polynomials with respect to such processes, which includes and generalizes the exponential complete Bell polynomials, is introduced. The usefulness of time-space harmonic polynomials with respect to Lévy processes lies in the fact that the stochastic process obtained by replacing the indeterminate x of the polynomials with a Lévy process is a martingale, whereas the Lévy process itself does not necessarily have this property. Finding such polynomials can therefore be particularly meaningful for applications. This new family includes Hermite polynomials, which are time-space harmonic with respect to Brownian motion; Poisson-Charlier polynomials, with respect to Poisson processes; Laguerre and actuarial polynomials, with respect to Gamma processes; Meixner polynomials of the first kind, with respect to Pascal processes; and Euler, Bernoulli, Krawtchuk, and pseudo-Narumi polynomials, with respect to suitable random walks. The role played by cumulants is stressed and brought to light, both in the symbolic representation of Lévy processes and their infinite divisibility property, and in the generalization, via the umbral Kailath-Segall formula, of the well-known formulae giving elementary symmetric polynomials in terms of power sum symmetric polynomials. The expression of the family of time-space harmonic polynomials introduced here has some connections with the so-called moment representation of various families of multivariate polynomials. Such a moment representation is studied here for the first time in connection with the time-space harmonic property with respect to suitable symbolic multivariate Lévy processes. In particular, multivariate Hermite polynomials and their properties are studied in connection with a symbolic version of multivariate Brownian motion, while multivariate Bernoulli and Euler polynomials are represented as powers of multivariate polynomials which are time-space harmonic with respect to suitable multivariate Lévy processes.
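The martingale property invoked above corresponds to the following standard definition, stated here for clarity: a family of polynomials P(x,t) is time-space harmonic with respect to a Lévy process \{X_t\}_{t \ge 0} if

\[ \mathbb{E}\bigl[\,P(X_t,t) \mid \mathcal{F}_s\,\bigr] \;=\; P(X_s,s), \qquad 0 \le s \le t, \]

where \mathcal{F}_s is the natural filtration of the process.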
Abstract:
My PhD project has been focused on the study of the pulsating variable stars in two ultra-faint dwarf spheroidal satellites of the Milky Way, namely Leo IV and Hercules, and in two fields of the Large Magellanic Cloud (namely, the Gaia South Ecliptic Pole calibration field and the 30 Doradus region) that were repeatedly observed in the K_s band by the VISTA Magellanic Cloud (VMC, PI M.R. Cioni) survey of the Magellanic System.
Abstract:
Model misspecification affects the classical test statistics used to assess the fit of Item Response Theory (IRT) models. Robust tests, such as the Generalized Lagrange Multiplier and Hausman tests, have been derived under model misspecification, but their use has not been extensively explored in the IRT framework. In the first part of the thesis, we introduce the Generalized Lagrange Multiplier test to detect differential item response functioning in IRT models for binary data under model misspecification. By means of a simulation study and a real data analysis, we compare its performance with the classical Lagrange Multiplier test, computed using the Hessian and the cross-product matrix, and the Generalized Jackknife Score test. The power of these tests is computed empirically and asymptotically. The misspecifications considered are local dependence among items and a non-normal distribution of the latent variable. The results highlight that, under mild model misspecification, all tests perform well while, under strong model misspecification, their performance deteriorates. None of the tests considered shows an overall superior performance to the others. In the second part of the thesis, we extend the Generalized Hausman test to detect non-normality of the latent variable distribution. To build the test, we consider a semi-nonparametric IRT model, which assumes a more flexible latent variable distribution. By means of a simulation study and two real applications, we compare the performance of the Generalized Hausman test with the M2 limited-information goodness-of-fit test and the Likelihood-Ratio test. Additionally, information criteria are computed. The Generalized Hausman test performs better than the Likelihood-Ratio test in terms of Type I error rates and better than the M2 test in terms of power. The performance of the Generalized Hausman test and of the information criteria deteriorates when the sample size is small and the number of items is low.
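For reference, a Hausman-type test of the kind extended here compares two estimators of the same parameter vector that should agree under correct specification; in its generic form (not necessarily the exact variant developed in the thesis) the statistic reads

\[ H \;=\; (\hat{\theta}_1 - \hat{\theta}_2)^{\top}\, \widehat{V}^{-1}\, (\hat{\theta}_1 - \hat{\theta}_2), \qquad \widehat{V} \approx \widehat{\mathrm{Var}}(\hat{\theta}_1 - \hat{\theta}_2), \]

and is asymptotically chi-squared under the null; generalized versions estimate \widehat{V} robustly, without requiring either estimator to be efficient.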
Abstract:
Deformability is often crucial to the conception of many civil-engineering structural elements. Moreover, design is all the more burdensome if both long- and short-term deformability have to be considered. In this thesis, long- and short-term deformability has been studied from both the material and the structural modelling point of view. Two materials have been handled: pultruded composites and concrete. A new finite element model for thin-walled beams has been introduced. As a main assumption, cross-sections are considered rigid in their plane; this hypothesis replaces the classical beam-theory hypothesis of plane cross-sections in the deformed state. It also allows the total number of degrees of freedom to be reduced, making the analysis faster compared with two-dimensional finite elements. Warping in the longitudinal direction is left free, allowing phenomena such as shear lag to be described. The new finite element model has first been applied to concrete thin-walled beams (such as high-span roof girders or bridge girders) subject to instantaneous service loadings. Concrete in its cracked state has been considered through a smeared crack model for beams under bending. At a second stage, the FE model has been extended to the viscoelastic field and applied to pultruded composite beams under sustained loadings. The generalized Maxwell model has been adopted. As far as materials are concerned, long-term creep tests have been carried out on pultruded specimens. Both tension and shear tests have been executed. Some specimens have been strengthened with carbon fibre plies to reduce short- and long-term deformability. Tests were performed in a climate-controlled room, with the specimens kept under constant load for two years. As for concrete, a model for tertiary creep has been proposed. The basic idea is to couple the UMLV linear creep model with a damage model in order to describe nonlinearity. An effective strain tensor, weighting the total and the elasto-damaged strain tensors, controls damage evolution through the damage loading function. Creep strains are related to the effective stresses (defined by damage models) and so associated with the intact material.
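As a point of reference, the generalized Maxwell model adopted here for the viscoelastic extension represents the relaxation modulus as a Prony series (standard form; the number of branches and the parameters are to be calibrated on the creep tests):

\[ E(t) \;=\; E_\infty + \sum_{i=1}^{N} E_i \, e^{-t/\tau_i}, \]

where E_\infty is the long-term modulus and (E_i, \tau_i) are the stiffnesses and relaxation times of the individual Maxwell branches.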
Abstract:
The role of mitochondrial dysfunction in cancer has long been a subject of great interest. In this study, such dysfunction has been examined with regard to thyroid oncocytoma, a rare form of cancer accounting for less than 5% of all thyroid cancers. A peculiar characteristic of thyroid oncocytic cells is the presence of an abnormally large number of mitochondria in the cytoplasm. Such mitochondrial hyperplasia has also been observed in cells derived from patients suffering from mitochondrial encephalomyopathies, where mutations in the mitochondrial DNA (mtDNA) encoding the respiratory complexes result in oxidative phosphorylation dysfunction. An increase in the number of mitochondria occurs in the latter in order to compensate for the respiratory deficiency. This fact spurred the investigation into the presence of analogous mutations in thyroid oncocytic cells. In this study, the only available cell model of thyroid oncocytoma was utilised: the XTC-1 cell line, established from an oncocytic thyroid metastasis to the breast. In order to assess the energetic efficiency of these cells, they were incubated in a medium lacking glucose and supplemented instead with galactose. When subjected to such conditions, glycolysis is effectively inhibited and the cells are forced to use the mitochondria for energy production. Cell viability experiments revealed that XTC-1 cells were unable to survive in galactose medium. This was in marked contrast to the TPC-1 control cell line, a thyroid tumour cell line which does not display the oncocytic phenotype. In agreement with these findings, subsequent experiments assessing the levels of cellular ATP over incubation time in galactose medium showed a drastic and continual decrease in ATP levels only in the XTC-1 cell line. Furthermore, experiments on digitonin-permeabilised cells revealed that the respiratory dysfunction in the latter was due to a defect in complex I of the respiratory chain. Subsequent experiments using cybrids demonstrated that this defect could be attributed to the mitochondrially-encoded subunits of complex I as opposed to the nuclear-encoded subunits. Confirmation came with mtDNA sequencing, which detected the presence of a novel mutation in the ND1 subunit of complex I. In addition, a mutation in the cytochrome b subunit of complex III of the respiratory chain was detected. The fact that XTC-1 cells are unable to survive when incubated in galactose medium is consistent with the fact that many cancers are largely dependent on glycolysis for energy production. Indeed, numerous studies have shown that glycolytic inhibitors are able to induce apoptosis in various cancer cell lines. Subsequent experiments were therefore performed in order to identify the mode of XTC-1 cell death when subjected to the metabolic stress imposed by the forced use of the mitochondria for energy production. Cell shrinkage and mitochondrial fragmentation were observed in the dying cells, which would indicate an apoptotic type of cell death. Analysis of additional parameters, however, revealed a lack of both DNA fragmentation and caspase activation, thus excluding a classical apoptotic type of cell death. Interestingly, cleavage of the actin component of the cytoskeleton was observed, implicating the action of proteases in this mode of cell demise. However, experiments employing protease inhibitors failed to identify the specific protease involved.
It has been reported in the literature that overexpression of Bcl-2 is able to rescue cells presenting a respiratory deficiency. As the XTC-1 cell line is not only respiration-deficient but also exhibits a marked decrease in Bcl-2 expression, it is a perfect model with which to study the relationship between Bcl-2 and oxidative phosphorylation in respiratory-deficient cells. Contrary to the reported literature studies on various cell lines harbouring defects in the respiratory chain, Bcl-2 overexpression was not shown to increase cell survival or rescue the energetic dysfunction in XTC-1 cells. Interestingly, however, it had a noticeable impact on cell adhesion and morphology. Whereas XTC-1 cells shrank and detached from the growth surface under conditions of metabolic stress, Bcl-2-overexpressing XTC-1 cells appeared much healthier and were up to 45% more adherent. The target of Bcl-2 in this setting appeared to be the actin cytoskeleton, as the cleavage observed in XTC-1 cells expressing only endogenous levels of Bcl-2 was inhibited in Bcl-2-overexpressing cells. Thus, although unable to rescue XTC-1 cells in terms of cell viability, Bcl-2 is somehow able to stabilise the cytoskeleton, resulting in modifications in cell morphology and adhesion. The mitochondrial respiratory deficiency observed in cancer cells is thought not only to cause an increased dependency on glycolysis but also to blunt cellular responses to anticancer agents. Several therapeutic agents were thus assessed for their death-inducing ability in XTC-1 cells. Cell viability experiments clearly showed that the cells were more resistant to stimuli which generate reactive oxygen species (tert-butylhydroperoxide) and to mitochondrial calcium-mediated apoptotic stimuli (C6-ceramide), as opposed to stimuli inflicting DNA damage (cisplatin) and damage to protein kinases (staurosporine). Various studies in the literature have reported that the peroxisome proliferator-activated receptor coactivator 1 (PGC-1α), which plays a fundamental role in mitochondrial biogenesis, is also involved in protecting cells against apoptosis caused by the former two types of stimuli. In accordance with these observations, real-time PCR experiments showed that XTC-1 cells express higher mRNA levels of this coactivator than do the control cells, implicating its importance in drug resistance. In conclusion, this study has revealed that XTC-1 cells, like many cancer cell lines, are characterised by a reduced energetic efficiency due to mitochondrial dysfunction. Said dysfunction has been attributed to mutations in respiratory genes encoded by the mitochondrial genome. Although the mechanism of cell demise under conditions of metabolic stress is unclear, the potential of targeting thyroid oncocytic cancers using glycolytic inhibitors has been illustrated. In addition, the discovery of mtDNA mutations in XTC-1 cells has enabled the use of this cell line as a model with which to study the relationship between Bcl-2 overexpression and oxidative phosphorylation in cells harbouring mtDNA mutations and also to investigate the significance of such mutations in establishing resistance to apoptotic stimuli.
Abstract:
The present study carries out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advances in technology and modern practices in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two types of transformation dynamics mainly affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps that cover the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for the collection of data, the most suitable algorithm to be adopted in relation to the statistical theory and methods used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. The presence or absence of buildings can thus be adopted as an indicator of these driving conditions, since it represents the expression of the action of the driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test its validity. In particular, the study area for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics have occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume values, ranging from 0 to 1, of the probability of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated with the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
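In the logistic-regression form of the generalized linear model used for the calibration, the probability of building presence in a cell with explanatory variables x_1, ..., x_k takes the standard form (recalled here for clarity)

\[ \Pr(Y = 1 \mid \mathbf{x}) \;=\; \frac{1}{1 + \exp\!\bigl[-(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k)\bigr]}, \]

and the fitted values over the grid yield the 0-1 probability surface described above.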
Abstract:
Piezoelectric materials present an interactive electromechanical behaviour that, especially in recent years, has generated much interest, since it renders these materials suitable for use in a variety of electronic and industrial applications such as sensors, actuators, transducers, and smart structures. Both mechanical and electric loads are generally applied to these devices and can cause high concentrations of stress, particularly in the proximity of defects or inhomogeneities, such as flaws, cavities or included particles. A thorough understanding of their fracture behaviour is crucial in order to improve their performance and avoid unexpected failures. Therefore, a considerable number of research works have addressed this topic in recent decades. Most of the theoretical studies on this subject find their analytical background in the complex variable formulation of plane anisotropic elasticity. This theoretical approach has its origins in the pioneering works of Muskhelishvili and Lekhnitskii, who obtained the solution of the elastic problem in terms of independent analytic functions of complex variables. In the present work, the expressions of the stresses and of the elastic and electric displacements are obtained as functions of complex potentials through an analytical formulation which is the application, to the piezoelectric static case, of an approach introduced for orthotropic materials to solve elastodynamic problems. This method can be considered an alternative to other formalisms currently used, such as Stroh's formalism. The equilibrium equations are reduced to a first-order system involving a six-dimensional vector field. A similarity transformation is then applied to obtain three independent Cauchy-Riemann systems, thus justifying the introduction of the complex variable notation. Closed-form expressions of the near-tip stress and displacement fields are therefore obtained. In the theoretical study of cracked piezoelectric bodies, the issue of assigning consistent electric boundary conditions on the crack faces is of central importance and has been addressed by many researchers. Three different boundary conditions are commonly accepted in the literature: the permeable, the impermeable and the semipermeable ("exact") crack models. This thesis takes all three models into consideration, comparing the results obtained and analysing the effect of the choice of boundary condition on the solution. The influence of load biaxiality and of the application of a remote electric field has been studied, pointing out that both can affect, to varying extents, the stress fields and the angle of initial crack extension, especially when non-singular terms are retained in the expressions of the electro-elastic solution. Furthermore, two different fracture criteria are applied to the piezoelectric case, and their outcomes are compared and discussed. The work is organized as follows: Chapter 1 briefly introduces the fundamental concepts of Fracture Mechanics. Chapter 2 describes plane elasticity formalisms for an anisotropic continuum (Eshelby-Read-Shockley and Stroh) and introduces, for the simplified orthotropic case, the alternative formalism we propose. Chapter 3 outlines the Linear Theory of Piezoelectricity, its basic relations and the electro-elastic equations. Chapter 4 introduces the proposed method for obtaining the expressions of the stresses and of the elastic and electric displacements as functions of complex potentials. The solution is obtained in closed form and non-singular terms are retained as well.
Chapter 5 presents several numerical applications aimed at estimating the effects of load biaxiality, of the applied electric field, and of the assumed permittivity of the crack. Through the application of the fracture criteria, the influence of the above-listed conditions on the response of the system, and in particular on the direction of crack branching, is thoroughly discussed.
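For context, the retained non-singular contribution corresponds, in the purely elastic Mode-I analogue, to the T-stress term of the Williams expansion (the electro-elastic fields treated in the thesis have an analogous, more general structure):

\[ \sigma_{ij}(r,\theta) \;=\; \frac{K_I}{\sqrt{2\pi r}}\, f_{ij}(\theta) \;+\; T\,\delta_{1i}\,\delta_{1j} \;+\; O\!\bigl(r^{1/2}\bigr), \]

which is why load biaxiality, entering mainly through T, can modify the predicted direction of initial crack extension.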
Abstract:
Supramolecular chemistry is a multidisciplinary field which impinges on other disciplines, focusing on systems made up of a discrete number of assembled molecular subunits. The forces responsible for the spatial organization are reversible intermolecular interactions. The supramolecular architectures I was interested in are rotaxanes, mechanically interlocked architectures consisting of a "dumbbell-shaped molecule" threaded through a "macrocycle", where the stoppers at the ends of the dumbbell prevent dissociation of the components, and catenanes, two or more interlocked macrocycles which cannot be separated without breaking the covalent bonds. The aim is to introduce one or more paramagnetic units so that ESR spectroscopy can be used to investigate the complexation properties of these systems, since this technique works on the same time scale as supramolecular assemblies. Chapter 1 underlines the main concepts upon which supramolecular chemistry is based, clarifying the nature of supramolecular interactions and the principles of host-guest chemistry. Chapter 2 points out the use of ESR spectroscopy to investigate the properties of organic non-covalent assemblies in liquid solution by means of spin labels and spin probes. Chapter 3 deals with the synthesis of a new class of π-electron-deficient tetracationic cyclophane rings, carrying one or two paramagnetic side-arms based on the 2,2,6,6-tetramethylpiperidine-N-oxyl (TEMPO) moiety. In Chapter 4, the Huisgen 1,3-dipolar cycloaddition is exploited to synthesize rotaxanes having paramagnetic cyclodextrins as wheels. In Chapter 5, the catalysis of Huisgen's cycloaddition by CB[6] is exploited to synthesize paramagnetic CB[6]-based [3]-rotaxanes. In Chapter 6, I report the first preliminary studies of the actinoid series as a new class of templates in catenane synthesis. Being f-block elements, and thus able to expand their valence state, they are promising candidates as chemical templates, offering the possibility of creating complexes with coordination numbers beyond 6.
Abstract:
The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to supply a review of the main tools of spatial econometrics and to show an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of availability of the corresponding routines in statistical and econometric software. Spatiotemporal modelling represents one of the most recent developments in spatial econometric theory, and the finite-sample properties of the estimators that have been proposed are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed-effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, which are characterized either by fully stable or by quasi-unit-root series. We also investigate the extent of the bias that is caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland value in the Midwestern U.S.A. in the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
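A time-space dynamic specification of the kind estimated here can be written, in a common generic form (with W a spatial weights matrix; not necessarily the exact specification adopted in the thesis), as

\[ y_t \;=\; \tau\, y_{t-1} \;+\; \rho\, W y_t \;+\; \eta\, W y_{t-1} \;+\; X_t \beta \;+\; \mu \;+\; \varepsilon_t, \]

where \tau, \rho and \eta are the coefficients of the temporal, spatial and spatiotemporal lags of the dependent variable, \mu collects the fixed effects and \varepsilon_t is the error term.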
Abstract:
The objective of this thesis is the investigation of the Mode-I fracture mechanics parameters of quasi-brittle materials, to shed light on the influence of the width and size of the specimen on the fracture response of notched beams. To further the knowledge of the fracture process, 3D digital image correlation (DIC) was employed. A new method is proposed to determine experimentally the critical value of the crack opening, which is then used to determine the size of the fracture process zone (FPZ). In addition, the Mode-I fracture mechanics parameters are compared with the Mode-II interfacial properties of composite materials that feature as matrices the quasi-brittle materials studied under Mode-I conditions. To investigate the Mode-II fracture parameters, single-lap direct shear tests are performed. Notched concrete beams with six cross-sections have been tested using a three-point bending (TPB) test set-up (Mode-I fracture mechanics). Two depths and three widths of the beam are considered. In addition to the concrete beams, alkali-activated mortar beams (AAMs) that differ by the type and size of the aggregates have been tested using the same TPB set-up. Two dimensions of AAMs are considered. The load-deflection response obtained from DIC is compared with the load-deflection response obtained from the readings of two linear variable displacement transformers (LVDTs). Load responses, peak loads, strain profiles along the ligament from DIC, fracture energy, and failure modes of the TPB tests are discussed. The Mode-II problem is investigated by testing steel reinforced grout (SRG) composites bonded to masonry and concrete elements under single-lap direct shear tests. Two types of anchorage systems are proposed for SRG-reinforced masonry and concrete elements in order to study their effectiveness. An indirect method is proposed to find the interfacial properties, to compare them with the Mode-I fracture properties of the matrix, and to model the effect of the anchorage.
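For context, the fracture energy discussed for the TPB tests is typically obtained with a work-of-fracture (RILEM-type) definition, recalled here under the assumption of complete separation of the notched beam:

\[ G_F \;=\; \frac{W_f}{b\,(d - a_0)}, \]

where W_f is the work supplied up to failure (the area under the load-deflection curve, possibly corrected for self-weight), b is the specimen width, d the depth and a_0 the notch length, so that b(d - a_0) is the ligament area.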