17 results for semi-classical analysis
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this thesis we will investigate some properties of one-dimensional quantum systems. From a theoretical point of view, quantum models in one dimension are particularly interesting because they are strongly interacting: since particles cannot avoid each other in their motion, collisions can never be ignored. Yet integrable models often yield new and non-trivial solutions that could not be found perturbatively. In this dissertation we shall focus on two important aspects of integrable one-dimensional models: their entanglement properties at equilibrium and their dynamical correlators after a quantum quench. The first part of the thesis will therefore be devoted to the study of the entanglement entropy in one-dimensional integrable systems, with a special focus on the XYZ spin-1/2 chain, which, in addition to being integrable, is also an interacting model. We will derive its Rényi entropies in the thermodynamic limit and analyse their behaviour in different phases and for different values of the mass gap. In the second part of the thesis we will instead study the dynamics of correlators after a quantum quench, which represent a powerful tool to measure how perturbations and signals propagate through a quantum chain. The emphasis will be on the Transverse Field Ising Chain and the O(3) non-linear sigma model, both of which will be studied by means of a semi-classical approach. Moreover, in the last chapter we will demonstrate a general result about the dynamics of correlation functions of local observables after a quantum quench in integrable systems. In particular, we will show that if there are no long-range interactions in the final Hamiltonian, then the dynamics of the model (non-equal-time correlations) is described by the same statistical ensemble that describes its static properties (equal-time correlations).
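The quench protocol described above can be illustrated with a toy numerical experiment. The sketch below is not the thesis' semi-classical method: it is a plain exact-diagonalization example for a small transverse-field Ising chain (6 spins, with arbitrarily chosen pre- and post-quench fields), showing how a local observable evolves after the field is suddenly changed.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-spin chain."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def tfim(n, g):
    """Transverse-field Ising Hamiltonian (open chain):
    H = -sum_i sz_i sz_{i+1} - g sum_i sx_i"""
    H = sum(-op(sz, i, n) @ op(sz, i + 1, n) for i in range(n - 1))
    H += sum(-g * op(sx, i, n) for i in range(n))
    return H

n = 6
_, evecs0 = np.linalg.eigh(tfim(n, 0.5))
psi0 = evecs0[:, 0]                         # ground state of the pre-quench H
evals1, evecs1 = np.linalg.eigh(tfim(n, 2.0))
Mx = sum(op(sx, i, n) for i in range(n)) / n

c = evecs1.conj().T @ psi0                  # expand psi0 in post-quench eigenbasis
mx_t = []
for t in (0.0, 1.0, 2.0):
    psi_t = evecs1 @ (np.exp(-1j * evals1 * t) * c)
    mx_t.append(float((psi_t.conj() @ Mx @ psi_t).real))
print(mx_t)  # transverse magnetization oscillates after the quench
```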
Abstract:
This thesis describes studies on the development of physical analysis methods coupled with multivariate statistical techniques to assess the quality and authenticity of vegetable oils and dairy products. The application of physical instruments reduces the cost and time required by classical analyses and, at the same time, can provide a different set of information concerning both the quality and the authenticity of products. For these methods to work well, robust statistical models must be built from correctly collected data sets that are representative of the field of application. In this thesis work, vegetable oils and some types of cheese were analysed (in particular Pecorino cheeses for two research studies and Parmigiano-Reggiano for another). Several analytical instruments (physical methods) were used, in particular spectroscopy, differential thermal analysis and the electronic nose, in addition to traditional separation methods. The data obtained from the analyses were processed with several statistical techniques, above all: partial least squares; multiple linear regression; and linear discriminant analysis.
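As a rough illustration of the statistical side of such a workflow, the sketch below applies a hand-rolled Fisher linear discriminant to synthetic "spectra" of two hypothetical product classes; the data, dimensions and class separations are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Synthetic two-class "spectra" (e.g. two cheese types), 20 wavelengths each
mu_a, mu_b = np.zeros(20), np.linspace(0.0, 2.0, 20)
Xa = rng.normal(mu_a, 1.0, (n, 20))
Xb = rng.normal(mu_b, 1.0, (n, 20))

# Fisher linear discriminant: w = Sw^{-1} (mu_b - mu_a)
Sw = np.cov(Xa, rowvar=False) + np.cov(Xb, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, Xb.mean(0) - Xa.mean(0))
threshold = w @ (Xa.mean(0) + Xb.mean(0)) / 2.0

pred_a = Xa @ w > threshold           # should mostly be False (class A)
pred_b = Xb @ w > threshold           # should mostly be True (class B)
accuracy = (np.count_nonzero(~pred_a) + np.count_nonzero(pred_b)) / (2 * n)
print(f"training accuracy: {accuracy:.2f}")
```

In practice one would validate such a classifier on held-out samples; here the point is only the mechanics of the discriminant direction.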
Abstract:
The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix family of methods and the Semi Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state. The effect of prestress, residual stress and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersion spectrum due to material damping and geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
Abstract:
Physico-chemical characterization, structure-pharmacokinetic and metabolism studies of new semisynthetic analogues of natural bile acids (BAs), drug candidates, have been performed. Recent studies discovered a role of BAs as agonists of the FXR and TGR5 receptors, thus opening new therapeutic targets for the treatment of liver diseases and metabolic disorders. Up to twenty new semisynthetic analogues have been synthesized and studied in order to find promising novel drug candidates. In order to define the BAs' structure-activity relationship, their main physico-chemical properties (solubility, detergency, lipophilicity and affinity with serum albumin) have been measured with validated analytical methodologies. Their metabolism and biodistribution have been studied in the “bile fistula rat” model, where each BA is acutely administered through duodenal and femoral infusion and bile is collected at different time intervals, allowing the relationship between structure and intestinal absorption, hepatic uptake, metabolism and systemic spill-over to be defined. One of the studied analogues, 6α-ethyl-3α,7α-dihydroxy-5β-cholanic acid, an analogue of CDCA (INT 747, Obeticholic Acid (OCA)) recently under approval for the treatment of cholestatic liver diseases, requires additional studies to ensure its safety and lack of toxicity when administered to patients with strong liver impairment. For this purpose, an animal model of hepatic decompensation (cirrhosis) induced by CCl4 inhalation in the rat has been developed and used to characterize the biodistribution of OCA with respect to control animals, in order to define whether peripheral tissues might also be exposed to toxic plasma levels of OCA, evaluating the endogenous BAs' biodistribution as well. An accurate and sensitive HPLC-ES-MS/MS method has been developed to identify and quantify all BAs in biological matrices (bile, plasma, urine, liver, kidney, intestinal content and tissue), for which a sample pretreatment procedure has been optimized.
Abstract:
This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After having introduced the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then we focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization irrespective of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which has been made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
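As a concrete companion to the fBm discussion, the following sketch (not taken from the thesis) samples exact fBm paths from the Cholesky factor of the fBm covariance and checks the self-similar growth of the variance, Var[B_H(t)] = t^{2H}.

```python
import numpy as np

rng = np.random.default_rng(1)
H, n, n_paths = 0.8, 256, 400

# Exact fBm covariance on t = 1..n: C(s,t) = (s^2H + t^2H - |t-s|^2H) / 2
t = np.arange(1, n + 1, dtype=float)
cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
             - np.abs(t[:, None] - t[None, :])**(2 * H))
L = np.linalg.cholesky(cov)
paths = L @ rng.standard_normal((n, n_paths))   # each column is one fBm path

# Self-similarity check: Var[B_H(t)] = t^(2H), so doubling t scales it by 2^(2H)
ratio = paths[255].var() / paths[127].var()     # t = 256 vs t = 128
print(ratio, 2.0**(2 * H))                      # the two should be close
```

The Cholesky construction is exact but O(n^3); spectral methods such as Davies-Harte are preferred for long paths.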
Abstract:
One of the problems in the analysis of nucleus-nucleus collisions is to obtain information on the value of the impact parameter b. This work consists in the application of pattern recognition techniques aimed at associating values of b to groups of events. To this end, a support vector machine (SVM) classifier is adopted to analyze multifragmentation reactions. This method allows the values of b to be backtraced through a particular multidimensional analysis. The SVM classification consists of two main phases. In the first one, known as the training phase, the classifier learns to discriminate events generated by two different models, Classical Molecular Dynamics (CMD) and Heavy-Ion Phase-Space Exploration (HIPSE), for the reaction 58Ni + 48Ca at 25 AMeV. In the second one, known as the test phase, what has been learned is checked on new events generated by the same models. These results have been compared to those obtained through other impact-parameter backtracing techniques. Our tests show that, following this approach, central and peripheral collisions for the CMD events are always better classified than with the other backtracing techniques. We have finally performed the SVM classification on the experimental data measured by the NUCL-EX collaboration with the CHIMERA apparatus for the same reaction.
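A minimal numeric sketch of the two-phase SVM idea (training, then testing on fresh events) is given below; it uses a hand-rolled linear soft-margin SVM on invented two-dimensional "event features", not the actual CMD/HIPSE observables.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_events(n):
    """Invented two-feature events for two classes (stand-ins for, e.g.,
    central vs peripheral collisions)."""
    X = np.vstack([rng.normal(-1.5, 1.0, (n, 2)), rng.normal(+1.5, 1.0, (n, 2))])
    y = np.hstack([-np.ones(n), np.ones(n)])
    return X, y

X_train, y_train = make_events(200)   # "training phase"
X_test, y_test = make_events(200)     # "test phase": fresh events

# Linear soft-margin SVM trained by subgradient descent on the hinge loss
w, b, lam, lr = np.zeros(2), 0.0, 1e-3, 0.05
for _ in range(1000):
    margins = y_train * (X_train @ w + b)
    mask = (margins < 1.0).astype(float)          # margin violators
    w -= lr * (lam * w - (mask * y_train) @ X_train / len(y_train))
    b -= lr * (-(mask * y_train).mean())

test_accuracy = (np.sign(X_test @ w + b) == y_test).mean()
print(f"test accuracy: {test_accuracy:.2f}")
```

A real application would use a kernelized SVM library and many more observables per event; the point here is only the train-then-test structure.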
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a possibly infinite series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome this shortcoming of Fourier analysis. The fundamental idea behind this analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale; they are therefore especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp transitions.
Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel to extract information about the process. These two types of application of wavelets in geophysics are the object of study of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; then we apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
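The scale-localization property described above can be seen in a few lines of code. This toy example (not from the thesis, which works with seismic data) applies one level of the Haar discrete wavelet transform to a ramp with a jump: the detail coefficients pinpoint the discontinuity, which a global Fourier expansion would spread over all frequencies.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients for an even-length signal."""
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

# A signal with a sharp local feature: a step hidden in a smooth ramp
n = 64
signal = np.linspace(0.0, 1.0, n)
signal[41:] += 1.0                      # discontinuity between samples 40 and 41

approx, detail = haar_dwt(signal)
# The approximation carries the smooth trend; the detail coefficients are tiny
# everywhere except at the pair (40, 41) that straddles the jump.
print(int(np.argmax(np.abs(detail))))   # pair index of the strongest detail
```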
Abstract:
In this thesis two major topics inherent to medical ultrasound images are addressed: deconvolution and segmentation. In the first case, a deconvolution algorithm is described that allows statistically consistent maximum a posteriori estimates of the tissue reflectivity to be restored. These estimates are proven to provide a reliable source of information for achieving an accurate characterization of biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in myocardial contour delineation and is feasible and accurate even on clinical data.
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Due to both the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves.
In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light curves production and analysis.
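As a hedged illustration of the aperture-photometry step in such a pipeline (the actual pipeline and its parameters are not reproduced here), the sketch below injects a synthetic Gaussian star into a flat sky and recovers its flux from a circular aperture with an annulus sky estimate.

```python
import numpy as np

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]

# Synthetic frame: flat sky plus one Gaussian "star" of known total flux
sky, true_flux, sigma = 10.0, 500.0, 2.0
cy = cx = 32.0
r2 = (yy - cy)**2 + (xx - cx)**2
image = sky + true_flux * np.exp(-r2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# Aperture photometry: flux inside a circular aperture minus the sky level
# estimated from a surrounding annulus
r = np.sqrt(r2)
aperture = r <= 3 * sigma               # ~98.9% of a Gaussian's flux
annulus = (r >= 10.0) & (r <= 15.0)
sky_level = np.median(image[annulus])
flux = image[aperture].sum() - sky_level * aperture.sum()
print(round(flux, 1))                   # close to the injected flux of 500
```

Real pipelines add bias/flat pre-reduction, centroiding, aperture corrections and error propagation on top of this basic measurement.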
Abstract:
The present work is devoted to the assessment of the physics of energy fluxes in the space of scales and in the physical space of wall-turbulent flows. The generalized Kolmogorov equation will be applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description will be shown to be crucial to understanding the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space where the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes, one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized to filtered velocity fields is derived and discussed.
The results will show the effects that the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between production-dominated scales and the inertial range, lc, and to the reverse energy cascade region, lb. The systematic characterization of the resolved and subgrid physics as a function of the filter scale and of the wall distance will be shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models will also be proposed. Finally, the generalized Kolmogorov equation specialized to filtered velocity fields will be shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy-viscosity models are analyzed via an a priori procedure.
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effect model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
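To make the hedonic baseline concrete, here is a toy version with invented data (not the thesis' Tribal-art database): a hedonic regression is just OLS of log-price on object characteristics, with dummy variables for each categorical level, which is where the parameter count blows up when categories are many.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for auction data: log-price driven by a continuous trait
# (e.g. object size) plus a categorical effect (e.g. auction house).
n = 300
size = rng.uniform(0.0, 1.0, n)
house = rng.integers(0, 3, n)                # 3 hypothetical categories
house_fx = np.array([0.0, 0.5, -0.3])        # true (invented) level effects
logp = 2.0 + 1.2 * size + house_fx[house] + rng.normal(0.0, 0.1, n)

# Hedonic regression = OLS with dummy variables for the categorical levels
dummies = np.eye(3)[house][:, 1:]            # drop one level as baseline
X = np.column_stack([np.ones(n), size, dummies])
beta, *_ = np.linalg.lstsq(X, logp, rcond=None)
print(beta.round(2))                         # close to [2.0, 1.2, 0.5, -0.3]
```

With many categorical variables, each adds (levels - 1) columns to X, which motivates the thesis' multilevel alternative with random effects over time points.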
Abstract:
Italy is registering a fast increase in its low-income population. Academics and policy makers consider income inequality a key determinant of low or inadequate healthy-food consumption. The objective is thus to understand how to overcome the agrofood chain barriers to healthy food production, commercialisation and consumption for the population at risk of poverty (ROP) in Italy. The study adopts a market-oriented food chain approach, focusing the research on ROP consumers, processing industries and retailers. The empirical investigation adopts a qualitative methodology with an explorative approach. The actors are investigated through 4 focus groups for consumers and 27 face-to-face semi-structured interviews with industry and retailer representatives. The results provide the perceptions of each actor, integrated into an overall chain approach. The analysis shows that all agrofood actors lack an adequate level of knowledge of what constitutes healthy food. Food industries and retailers also show poor awareness of the ROP consumer segment. In addition, they perceive that the high costs of producing healthy food conflict with the low economic performance expected from the ROP consumer segment. These aspects induce scarce interest in investing in commercialisation strategies for healthy food for ROP consumers. Furthermore, ROP consumers show other notable barriers to adopting healthy diets caused, among other things, by a strong personal negative attitude and lack of motivation. These personal barriers are also negatively influenced by several external socio-economic factors. Solutions to overcome the barriers should rely on improving the internal relations of the agrofood chain in order to identify successful strategies for increasing interest in low-cost healthy food. In particular, the focus should be on improved collaboration on innovation adoption and marketing strategies, considering ROP consumers' preferences and needs.
An external political intervention is instead necessary to fill the knowledge and regulatory gaps on healthy food issues.
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for the non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry, whose basic element can be duplicated, the algorithm can handle any winding distribution. Compared with FEA, it reduces computing time and allows parameter values to be changed directly in a user interface without re-designing the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for radial force calculation in multiphase bearingless Surface-Mounted Permanent Magnet Synchronous Motors (SPMSM) is presented. It predicts the amplitude and direction of the force as a function of the torque current, the levitation current and the rotor position. It is based on the space vector method, enabling analysis of the machine during transients as well. The calculations are carried out by expanding the analytical functions in Fourier series, taking all the possible interactions between stator and rotor mmf harmonic components into account and, being parametrized, allowing the effects of the electrical and geometrical quantities of the machine to be analyzed. The model is implemented in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model, where one end of the motor shaft is constrained to simulate the presence of a mechanical bearing, while the other is free, supported only by the radial forces developed by the interacting magnetic fields, so as to realize a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
Abstract:
The clonal distribution of BRAFV600E in papillary thyroid carcinoma (PTC) has recently been debated. No information is currently available about precursor lesions of PTCs. My first aim was to establish whether the BRAFV600E mutation occurs as a subclonal event in PTCs. My second aim was to screen for BRAF mutations in histologically benign tissue of cases with BRAFV600E or BRAFwt PTCs in order to identify putative precursor lesions of PTCs. Highly sensitive semi-quantitative methods were used: Allele Specific LNA quantitative PCR (ASLNAqPCR) and 454 Next-Generation Sequencing (NGS). For the first aim, 155 consecutive formalin-fixed and paraffin-embedded (FFPE) specimens of PTCs were analyzed. The percentage of mutated cells obtained was normalized to the estimated number of neoplastic cells. Three groups of tumors were identified: a first group had a percentage of BRAF-mutated neoplastic cells > 80%; a second group showed a percentage of BRAF-mutated neoplastic cells < 30%; a third group had a distribution of BRAFV600E between 30% and 80%. The large presence of BRAFV600E-mutated neoplastic cell sub-populations suggests that BRAFV600E may be acquired early during tumorigenesis: therefore, BRAFV600E can be heterogeneously distributed in PTC. For the second aim, two groups were studied: one consisted of 20 cases with BRAFV600E-mutated PTC, the other of 9 BRAFwt PTCs. Seventy-five and 23 histologically benign FFPE thyroid specimens were analyzed from the BRAFV600E-mutated and BRAFwt PTC groups, respectively. The screening for BRAF mutations identified BRAFV600E in “atypical” cell foci from both groups of patients. “Unusual” BRAF substitutions were observed in histologically benign thyroid tissue associated with BRAFV600E PTCs. These mutations were very uncommon in the group with BRAFwt PTCs and in BRAFV600E PTCs. Therefore, lesions carrying BRAF mutations may represent “abortive” attempts at cancer development: only BRAFV600E boosts neoplastic transformation to PTC.
BRAFV600E-mutated “atypical foci” may represent precursor lesions of BRAFV600E-mutated PTCs.
Abstract:
In this work, the Generalized Beam Theory (GBT) is used as the main tool to analyze the mechanics of thin-walled beams. After an introduction to the subject and a quick review of some of the best-known approaches to describe the behaviour of thin-walled beams, a novel formulation of the GBT is presented. This formulation contains the classic shear-deformable GBT available in the literature and contributes an additional description of cross-section warping that is variable along the wall thickness in addition to along the wall midline. Shear deformation is introduced in such a way that the classical shear strain components of the Timoshenko beam theory are recovered exactly. According to the new kinematics proposed, a revised form of the cross-section analysis procedure is devised, based on a unique modal decomposition. Later, a procedure for the a posteriori reconstruction of all the three-dimensional stress components in the finite element analysis of thin-walled beams using the GBT is presented. The reconstruction is simple and based on the use of the three-dimensional equilibrium equations and of the RCP procedure. Finally, once the stress reconstruction procedure is presented, a study of several existing issues in the constitutive relations of the GBT is carried out. Specifically, a constitutive law based on mirroring the kinematic constraints of the GBT model into a specific stress field assumption is proposed. It is shown that this method is equally valid for isotropic and orthotropic beams and coincides with the conventional GBT approach available in the literature. Later on, an analogous procedure is presented for the case of laminated beams. Lastly, as a way to improve an inherently poor description of shear deformability in the GBT, the introduction of shear correction factors is proposed. Throughout this work, numerous examples are provided to demonstrate the validity of all the proposed contributions to the field.