967 results for: analisi non standard iperreali infinitesimi
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n now typical of many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
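For reference (standard notation, not necessarily that of Chapter 2), a latent class model with k classes induces exactly this kind of nonnegative PARAFAC-type factorization of the probability tensor for p categorical variables:

    P(Y_1 = c_1, \ldots, Y_p = c_p) \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},

where \nu = (\nu_1, \ldots, \nu_k) are the class weights and \lambda^{(j)}_{h\cdot} is the marginal distribution of variable j within class h; the smallest k for which such a representation exists is the nonnegative rank of the probability tensor, the quantity related to log-linear model support in the results described above.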
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and on other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
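In generic terms (the precise statement and the direction of the divergence are as given in Chapter 4), the approximation studied here is of the form

    \hat{q} \;=\; \operatorname*{arg\,min}_{q \,=\, \mathcal{N}(\mu,\Sigma)} \; \mathrm{KL}\bigl(\pi(\theta \mid y) \,\|\, q\bigr),

and for this direction of the Kullback-Leibler divergence the minimizer is the Gaussian whose mean and covariance match those of the exact posterior, which is what makes finite-sample bounds on the remaining divergence directly interpretable.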
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
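As background (standard notation, not necessarily that of Chapter 5), de Haan's spectral representation writes a simple max-stable process as

    X(s) \;=\; \max_{i \ge 1}\, \zeta_i\, W_i(s),

where the \zeta_i are points of a Poisson process on (0, \infty) with intensity \zeta^{-2} d\zeta and the W_i are i.i.d. nonnegative processes with E[W_i(s)] = 1; the construction described above endows these spectral components with velocities and lifetimes, so that the process becomes time-indexed and the waiting times between threshold exceedances carry the tail-dependence information.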
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
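As a minimal illustration of one of the approximations mentioned above, the sketch below runs Metropolis-Hastings with a random-subset estimate of the log-likelihood. The toy Gaussian target, the subsample size m, and the step size are placeholder choices for illustration only, not the constructions analysed in Chapter 6.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(1.0, 2.0, size=10_000)   # toy data set
    n, m = len(data), 500                      # full size and subsample size

    def approx_loglik(theta):
        """Subsampled estimate of the log-likelihood, rescaled by n/m."""
        idx = rng.choice(n, size=m, replace=False)
        x = data[idx]
        return (n / m) * np.sum(-0.5 * (x - theta) ** 2 / 4.0)

    def approx_mh(n_iter=2000, step=0.05):
        theta, ll = 0.0, approx_loglik(0.0)
        chain = np.empty(n_iter)
        for t in range(n_iter):
            prop = theta + step * rng.normal()
            ll_prop = approx_loglik(prop)
            if np.log(rng.uniform()) < ll_prop - ll:   # flat prior for simplicity
                theta, ll = prop, ll_prop
            chain[t] = theta
        return chain

    print(approx_mh().mean())

The chain targets an approximation of the posterior; the trade-off between the subsample size (computation per step) and the induced kernel error is exactly the kind of quantity the framework above is designed to balance.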
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
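For concreteness, here is a minimal sketch of the truncated-normal (Albert-Chib) data augmentation Gibbs sampler for a probit model under a N(0, tau^2 I) prior on the coefficients. The simulated rare-event data and the prior scale are placeholders; slow mixing of the intercept draws on output like this is the behaviour discussed above.

    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(1)
    n, p, tau2 = 2000, 3, 10.0
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    beta_true = np.array([-3.0, 0.5, -0.5])            # rare-event intercept
    y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)      # covariance of beta | z
    L = np.linalg.cholesky(V)

    def gibbs(n_iter=2000):
        beta = np.zeros(p)
        draws = np.empty((n_iter, p))
        for t in range(n_iter):
            mu = X @ beta
            # z_i | beta, y_i: truncated normal, positive if y_i = 1, negative otherwise
            lo = np.where(y == 1, -mu, -np.inf)
            hi = np.where(y == 1, np.inf, -mu)
            z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
            # beta | z: Gaussian with mean V X'z and covariance V
            m = V @ (X.T @ z)
            beta = m + L @ rng.normal(size=p)
            draws[t] = beta
        return draws

    print(gibbs()[:, 0].mean())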
Abstract:
This master's thesis deals with ruin theory, and more specifically with actuarial surplus models in which dividends are paid. We study in detail a model called the gamma-omega model, which allows one to vary both the times at which dividends are paid and a non-standard notion of ruin for the company. Several extensions of the literature are made, motivated by solvency considerations. The first consists of adapting results from a 2011 article to a new model modified by the addition of a solvency constraint. The second, more substantial, consists of proving the optimality of a barrier strategy for dividend payments in the gamma-omega model. The third concerns the adaptation of a 2003 theorem on the optimality of barriers under a solvency constraint, which had not been proved in the case of periodic dividends. We also give results analogous to the 2011 article for a barrier under the solvency constraint. Finally, the last extension concerns two different approaches to adopt when the surplus falls below the ruin threshold. A forced liquidation of the surplus is put in place in the first case, alongside liquidation at the first opportunity in the event of poor dividend forecasts. A capital injection process is tested in the second case. We study the impact of these solutions on the expected amount of dividends. Numerical illustrations are provided for each section where relevant.
Abstract:
Research on the relationship between reproductive work and women's life trajectories, including the experience of labour migration, has mainly focused on the case of relatively young mothers who leave behind, or later re-join, their children. While it is true that most women migrate at a younger age, there are a significant number of cases of men and women who move abroad for labour purposes at a more advanced stage, undertaking a late-career migration. This is still an under-estimated and under-researched sub-field that uncovers a varied range of issues, including the global organization of reproductive work and the employment of migrant women as domestic workers late in their lives. By pooling the findings of two qualitative studies, this article focuses on Peruvian and Ukrainian women who seek employment in Spain and Italy when they are well into their forties, or older. A commonality the two groups of women share is that, independently of their level of education and professional experience, more often than not they end up as domestic and care workers. The article initially discusses the reasons for late-career female migration, taking into consideration the structural and personal determinants that have affected Peruvian and Ukrainian women's careers in their countries of origin and settlement. After this, the focus is set on the characteristics of domestic employment in later life, on the impact on their current lives, including the transnational family organization, and on future labour and retirement prospects. Apart from an evaluation of objective working and living conditions, we discuss women's personal impressions of being domestic workers in the context of their occupational experiences and family commitments. In this regard, women report varying levels of personal and professional satisfaction, as well as different patterns of continuity-discontinuity in their work and family lives, and of optimism towards the future. Divergences could be, to some extent, explained by the effect of migrants' transnational social practices and by state policies.
Abstract:
Context: Model atmosphere analyses have been previously undertaken for both Galactic and extragalactic B-type supergiants. By contrast, little attention has been given to a comparison of the properties of single supergiants and those that are members of multiple systems.
Aims: Atmospheric parameters and nitrogen abundances have been estimated for all the B-type supergiants identified in the VLT-FLAMES Tarantula survey. These include both single targets and binary candidates. The results have been analysed to investigate the role of binarity in the evolutionary history of supergiants.
Methods: TLUSTY non-local thermodynamic equilibrium (non-LTE) model atmosphere calculations have been used to determine atmospheric parameters and nitrogen abundances for 34 single and 18 binary supergiants. Effective temperatures were deduced using the silicon balance technique, complemented by the helium ionisation balance in the hotter spectra. Surface gravities were estimated using Balmer line profiles, and microturbulent velocities were deduced using the silicon spectrum. Nitrogen abundances or upper limits were estimated from the N II spectrum. The effects of a flux contribution from an unseen secondary were considered for the binary sample.
Results: We present the first systematic study of the incidence of binarity for a sample of B-type supergiants across the theoretical terminal-age main sequence (TAMS). To account for the distribution of effective temperatures of the B-type supergiants it may be necessary to extend the TAMS to lower temperatures. This is also consistent with the derived distribution of mass discrepancies, projected rotational velocities, and nitrogen abundances, provided that stars cooler than this temperature are post-red-supergiant objects. For all the supergiants in the Tarantula and in a previous FLAMES survey, the majority have small projected rotational velocities. The distribution peaks at about 50 km s⁻¹, with 65% in the range 30 km s⁻¹ ≤ ve sin i ≤ 60 km s⁻¹. About ten per cent have larger ve sin i (≥ 100 km s⁻¹), but surprisingly these show little or no nitrogen enhancement. All the cooler supergiants have low projected rotational velocities of ≤ 70 km s⁻¹ and high nitrogen abundance estimates, implying that either bi-stability braking or evolution on a blue loop may be important. Additionally, there is a lack of cooler binaries, possibly reflecting the small sample sizes. Single-star evolutionary models, which include rotation, can account for all of the nitrogen enhancement in both the single and binary samples. The detailed distribution of nitrogen abundances in the single and binary samples may be different, possibly reflecting differences in their evolutionary history.
Conclusions: The first comparative study of single and binary B-type supergiants has revealed that the main sequence may be significantly wider than previously assumed, extending to Teff = 20 000 K. Some marginal differences in single and binary atmospheric parameters and abundances have been identified, possibly implying non-standard evolution for some of the sample. This sample as a whole has implications for several aspects of our understanding of the evolutionary status of blue supergiants.
Abstract:
The non-standard decoding of the CUG codon in Candida cylindracea raises a number of questions about the evolutionary process of this organism and of other species of the Candida clade for which this codon is ambiguous. In order to find some answers, we studied the transcriptome of C. cylindracea, comparing its behavior with that of Saccharomyces cerevisiae (a standard decoder) and Candida albicans (an ambiguous decoder). The transcriptome characterization was performed using RNA-seq, an approach with several advantages over microarrays and whose use is expanding rapidly. TopHat and Cufflinks were the software used to build the protocol that allowed gene quantification. About 95% of the reads were mapped on the genome. 3693 genes were analyzed, of which 1338 had a non-standard start codon (TTG/CTG), and the percentage of expressed genes was 99.4%. Most genes have intermediate levels of expression, some have little or no expression, and a minority are highly expressed. The distribution profile of the CUG codon differs between the three species, but it can be significantly associated with gene expression levels: genes with fewer CUGs are the most highly expressed. However, CUG content is not related to the level of conservation: more and less conserved genes have, on average, an equal number of CUGs. The most conserved genes are the most expressed. The lipase genes corroborate the results obtained for most genes of C. cylindracea, since they are very rich in CUGs and poorly conserved. The reduced number of CUG codons observed in highly expressed genes may be due to an insufficient number of tRNA genes to cope with more CUGs without compromising translational efficiency. The enrichment analysis confirmed that the most conserved genes are associated with basic functions such as translation, pathogenesis, and metabolism. Within this set, genes with more or fewer CUGs seem to have different functions. The key issues in the evolutionary phenomenon remain unclear. However, the results are consistent with previous observations and yield a variety of conclusions that future analyses should take into consideration, since this is the first time such a study has been conducted.
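A minimal sketch of the kind of per-gene CUG counting and expression correlation described above, using Biopython and scipy. The file names "cds.fasta" and "genes.fpkm_tracking" and the FPKM column layout are hypothetical placeholders, not the thesis's actual pipeline (which used TopHat and Cufflinks for quantification).

    from Bio import SeqIO                 # Biopython
    from scipy.stats import spearmanr
    import pandas as pd

    # Hypothetical inputs: coding sequences and a Cufflinks-style FPKM table
    cds = {rec.id: str(rec.seq).upper() for rec in SeqIO.parse("cds.fasta", "fasta")}
    fpkm = pd.read_csv("genes.fpkm_tracking", sep="\t", index_col="tracking_id")

    def cug_count(seq):
        """Count in-frame CTG codons (CUG appears as CTG in the DNA alphabet)."""
        return sum(1 for i in range(0, len(seq) - 2, 3) if seq[i:i + 3] == "CTG")

    counts = {g: cug_count(s) for g, s in cds.items() if g in fpkm.index}
    expr = fpkm.loc[list(counts), "FPKM"]
    rho, pval = spearmanr(list(counts.values()), expr)
    print(f"Spearman rho = {rho:.2f} (p = {pval:.1e})")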
Abstract:
We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet eta, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle theta(23) = 45 degrees and to a CP-violating phase delta = +/-pi/2, while the mixing angle theta(13) remains arbitrary. The symmetries consist of softly broken lepton numbers L-alpha (alpha = e, mu, tau), a non-standard CP symmetry, and three L-2 symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides eta, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.
Abstract:
Planar <110> GaAs nanowires and quantum dots grown by atmospheric-pressure MOCVD have been subjected to non-standard growth conditions, such as Zn incorporation and growth on free-standing suspended films and on 10° off-cut substrates. Zn-doped nanowires exhibited periodic notching along the axis of the wire that is dependent on the Zn/Ga gas-phase molar ratio. Planar nanowires grown on suspended thin films give insight into the mobility of the seed particle and changes in growth direction. Nanowires grown on the off-cut sample exhibit anti-parallel growth-direction changes. Quantum dots were grown on suspended thin films and show preferential growth at certain temperatures. Envisioned nanowire applications include twin-plane superlattices, axial p-n junctions, nanowire lasers, and the modulation of nanowire growth direction against an impeding barrier and varying substrate conditions.
A new age of fuel performance code criteria studied through advanced atomistic simulation techniques
Abstract:
A fundamental step in understanding the effects of irradiation on metallic uranium and uranium dioxide ceramic fuels, or any material, must start with the nature of radiation damage at the atomic level. The atomic displacement damage results in a multitude of defects that influence fuel performance. Nuclear reactions are coupled, in that changing one variable will alter others through feedback. In the field of fuel performance modeling, these difficulties are addressed through the use of empirical models rather than models based on first principles. Empirical models can be used as a predictive code through the careful manipulation of input variables for the limited circumstances that are closely tied to the data used to create the model. While empirical models are efficient and give acceptable results, these results are only applicable within the range of the existing data. This narrow window prevents modeling changes in operating conditions, as the new operating conditions would not be within the calibration data set and would invalidate the model. This work is part of a larger effort to correct for this modeling deficiency. Uranium dioxide and metallic uranium fuels are analyzed with a kinetic Monte Carlo (kMC) code as part of an overall effort to generate a stochastic and predictive fuel code. The kMC investigations include sensitivity analysis of point defect concentrations, thermal gradients implemented through a temperature-variation mesh grid, and migration energy values. In this work, fission damage is primarily represented through defects on the oxygen anion sublattice. Results were also compared between the various models. Past studies of kMC point defect migration have not adequately addressed non-standard migration events such as clustering and dissociation of vacancies. As such, the General Utility Lattice Program (GULP) code was utilized to generate new migration energies so that additional non-migration events could be included in the kMC code in the future for more comprehensive studies. Defect energies were calculated to generate barrier heights for single-vacancy migration, clustering and dissociation of two vacancies, and vacancy migration while under the influence of both an additional oxygen and uranium vacancy.
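A minimal sketch of the residence-time kinetic Monte Carlo loop that underlies studies like this one, with hop rates taken from an Arrhenius expression over migration barriers. The barrier values, temperature, attempt frequency, and 1-D periodic lattice are placeholders for illustration, not the thesis's defect model.

    import numpy as np

    rng = np.random.default_rng(0)
    kB = 8.617e-5                      # Boltzmann constant, eV/K
    T = 1200.0                         # temperature in K (placeholder)
    nu0 = 1e13                         # attempt frequency, 1/s (placeholder)
    barriers = {"hop_left": 0.55, "hop_right": 0.55}   # migration energies, eV (placeholders)

    def kmc_vacancy(n_steps=100_000, L=100):
        """Residence-time kMC for a single vacancy hopping on a 1-D periodic lattice."""
        site, t = 0, 0.0
        for _ in range(n_steps):
            rates = np.array([nu0 * np.exp(-Ea / (kB * T)) for Ea in barriers.values()])
            total = rates.sum()
            # pick an event with probability proportional to its rate
            event = rng.choice(len(rates), p=rates / total)
            site = (site + (1 if event else -1)) % L
            # advance the clock by an exponentially distributed waiting time
            t += rng.exponential(1.0 / total)
        return site, t

    print(kmc_vacancy())

Adding clustering or dissociation events amounts to extending the event catalogue (and its barrier heights, e.g. from GULP calculations) while keeping the same selection-and-clock-advance loop.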
Abstract:
Part 20: Health and Care Networks
Abstract:
The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particles' masses. However, some aspects of this theory remain unresolved, suggesting the presence of new physics Beyond the Standard Model (BSM) with the production of new particles at an energy scale higher than the current experimental limits. The search for additional Higgs bosons is, in fact, predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons $H^{\pm}$, two neutral scalars $h$ and $H$, and one pseudoscalar $A$. The work presented in this thesis is dedicated to the search for neutral non-Standard Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. The search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a $\text{b}\bar{\text{b}}$ quark pair. The extensive use of Machine and Deep Learning techniques is a fundamental element in the discrimination between simulated signal and background events. A new network structure called a parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks trained at specific mass hypothesis values with a single neural network able to generalise well and interpolate across the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95\% confidence level expected upper limit on the production cross section times branching ratio, for a generic $\phi$ boson decaying into a muon pair in the 130 to 1000 GeV range.
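A minimal sketch of the parameterised-network idea described above: the mass hypothesis is appended to the event features as one extra input, so a single network can interpolate across the whole mass range. The layer sizes, feature count, mass scaling, and toy training data are illustrative assumptions, not the configuration used in the analysis.

    import torch
    import torch.nn as nn

    class ParameterisedNN(nn.Module):
        """Binary classifier whose input is (event features, mass hypothesis)."""
        def __init__(self, n_features=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features + 1, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, x, mass):
            # the mass hypothesis enters as one extra input column (scaled, e.g. mass / 1000 GeV)
            return self.net(torch.cat([x, mass.unsqueeze(1)], dim=1)).squeeze(1)

    model = ParameterisedNN()
    loss_fn = nn.BCEWithLogitsLoss()
    x = torch.randn(256, 10)                        # toy event features
    mass = torch.randint(130, 1000, (256,)).float() / 1000.0
    y = torch.randint(0, 2, (256,)).float()         # toy signal/background labels
    loss = loss_fn(model(x, mass), y)
    loss.backward()
    print(float(loss))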
Abstract:
Data from surveys of student learning in Italy reveal weaknesses in the acquisition of essential competences and differences between the results achieved. To innovate teaching and adapt it to students' needs, experts in docimology advocate the use of formative assessment (FA) practices. Internationally, several studies have shown the effectiveness of such practices, whereas in Italy there is no experimental research aimed at studying their impact on learning. This project entered this field of study to test the effectiveness of a set of FA practices in improving students' text comprehension skills. This goal was pursued through an experiment in a lower secondary school involving the students of two first-year classes, who were split in half by randomisation to form the two groups, experimental and control. After an initial measurement of students' text comprehension skills (pre-test), the experimental group took part in an intervention consisting of 15 FA sessions of two hours each. At the end, two final measurements (post-test) were carried out, using both the same test administered as the pre-test and a parallel test. The post-test minus pre-test difference was computed for each group, and the effect of participation in the experimental intervention on this difference was assessed with nonparametric tests. The results showed a slightly larger gain in the experimental group than in the control group, although the difference between the two groups is not statistically significant. Although the analyses did not allow the null hypothesis to be rejected, the relevance of this project lies in its attempt to open the debate on the effectiveness of FA practices on student learning in Italy.
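A minimal sketch of the kind of nonparametric comparison described above (gain scores compared across the two groups with a Mann-Whitney U test); the score arrays are placeholders, not the study's measurements.

    import numpy as np
    from scipy.stats import mannwhitneyu

    # Placeholder pre/post comprehension scores for the two groups
    pre_exp,  post_exp  = np.array([12, 15, 9, 14, 11]), np.array([15, 17, 11, 16, 12])
    pre_ctrl, post_ctrl = np.array([13, 14, 10, 12, 11]), np.array([14, 15, 10, 13, 12])

    gain_exp  = post_exp - pre_exp     # post-test minus pre-test, experimental group
    gain_ctrl = post_ctrl - pre_ctrl   # same for the control group

    u, p = mannwhitneyu(gain_exp, gain_ctrl, alternative="two-sided")
    print(f"U = {u}, p = {p:.3f}")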
Abstract:
The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP, etc.). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the economies of scale of cloud computing, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of the possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable to enable the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
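Purely as an illustration of what an acceleration-agnostic I/O layer could look like (the class and method names here are hypothetical, not the APIs proposed in the thesis): the application codes against one abstract interface, while concrete backends map it onto kernel sockets, RDMA, DPDK, or XDP.

    import abc

    class AcceleratedChannel(abc.ABC):
        """Hypothetical backend-agnostic, zero-copy-friendly I/O channel."""

        @abc.abstractmethod
        def alloc_buffer(self, size: int) -> memoryview:
            """Return a registered buffer usable for zero-copy transfers."""

        @abc.abstractmethod
        async def send(self, buf: memoryview) -> None: ...

        @abc.abstractmethod
        async def recv(self, buf: memoryview) -> int: ...

    class SocketChannel(AcceleratedChannel):
        """Fallback backend over asyncio stream sockets; an RDMA or DPDK backend would plug in the same way."""
        def __init__(self, reader, writer):
            self._reader, self._writer = reader, writer

        def alloc_buffer(self, size):
            return memoryview(bytearray(size))

        async def send(self, buf):
            self._writer.write(bytes(buf))
            await self._writer.drain()

        async def recv(self, buf):
            data = await self._reader.read(len(buf))
            buf[:len(data)] = data
            return len(data)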
Abstract:
In this work, we develop a randomized bounded arithmetic for probabilistic computation, following the approach adopted by Buss for non-randomized computation. This work relies on a notion of representability inspired by Buss's, but depending on a non-standard, quantitative and measurable semantics. We then establish that the representable functions are exactly those in PPT. Finally, we extend the language of our arithmetic with a measure quantifier, which is true if and only if the semantics of the quantified formula has measure greater than a given threshold. This allows us to define purely logical characterizations of standard probabilistic complexity classes such as BPP, RP, co-RP and ZPP.
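Schematically, and only as an illustration of the idea (the notation here is not the one used in the work itself), such a measure quantifier can be read as

    \models \mathbf{C}^{q}\,\varphi \quad\Longleftrightarrow\quad \mu\bigl(\{\omega \in \Omega : \omega \models \varphi\}\bigr) > q,

where \Omega is the space of random choices (e.g., infinite sequences of coin flips), \mu is the underlying probability measure, and q \in [0,1] is the threshold; thresholds such as 2/3 (the standard BPP acceptance probability) then enter the logical characterizations of the classes listed above.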
Abstract:
The initial objective of this work was to study the phenomenon of proteolysis in cheese, as a function of ripening time and salting, by studying the state of water with a non-destructive technique that is new to this field: TD-NMR. The cheeses studied were produced with a pilot plant in the Department, from the same milk, on the same day and under the same cheesemaking conditions. The first step was to assign a name to the 4 proton populations corresponding to the 4 exponential curves into which the T2 results were resolved. Since the literature consulted did not agree on this point and no experiments had been carried out that could confirm the hypotheses put forward, we analysed a cheese similar to our samples, doped with a solution based on Fe(III)Cl2. This step allowed us to identify the types of molecules represented by the 4 populations. We were then able to formulate concrete hypotheses on how the T2 and relative intensity results evolve with ripening time and salting. Our observations showed that the trends of T2 and I can be correlated with those of several parameters that characterise the cheese. The hypotheses that emerged from this study are only preliminary insights into the use of this analytical method for cheese, which nevertheless has the potential to be very useful both in research and in industry. It is in fact a method characterised by extremely simple sample preparation, which can be adapted to non-destructive analyses and which uses a very inexpensive instrument compared with other types of NMR analysis. We can also conclude that, having settled these basic aspects, data interpretation will certainly be simpler than for other NMR analyses.
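As an illustration of the kind of decomposition mentioned above (a relaxation decay resolved into discrete exponential components), a minimal least-squares fit of a four-population curve; the synthetic echo times, T2 values, and amplitudes are placeholders, not the thesis's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    t = np.linspace(0.1, 500.0, 400)                  # echo times, ms (placeholder)

    def four_exp(t, a1, T1, a2, T2, a3, T3, a4, T4):
        """Sum of four exponential decays, one per proton population."""
        return (a1 * np.exp(-t / T1) + a2 * np.exp(-t / T2)
                + a3 * np.exp(-t / T3) + a4 * np.exp(-t / T4))

    # Synthetic signal with four relaxation components plus noise
    true = (0.4, 5.0, 0.3, 30.0, 0.2, 120.0, 0.1, 350.0)
    signal = four_exp(t, *true) + rng.normal(0, 0.002, t.size)

    popt, _ = curve_fit(four_exp, t, signal,
                        p0=(0.5, 3, 0.3, 40, 0.2, 100, 0.1, 300),
                        bounds=(0, np.inf))
    print(np.round(popt, 2))   # fitted relative intensities (a_i) and T2 values (ms)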