12 results for flavor symmetries
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The scope of this thesis is to propose CNN architectures that model the early visual pathway, including the Lateral Geniculate Nucleus (LGN) and the horizontal connectivity of the primary visual cortex (V1). Moreover, we show how cortically inspired architectures allow us to perform contrast perceptual invariance as well as grouping and the emergence of visual percepts. In particular, the LGN is modeled with a first layer l0 containing a single filter Ψ0 that pre-filters the image I. Since the receptive profiles (RPs) of LGN cells can be modeled as a Laplacian of Gaussian (LoG), we expect to obtain a radially symmetric filter with a similar shape; to this end, we prove the rotational invariance of Ψ0 and study the influence of this filter on the subsequent layer. Indeed, we compare the statistical distribution of the filters in the second layer l1 of our architecture with the statistical distribution of the RPs of V1 cells of a macaque. Then, we model the horizontal connectivity of V1 by implementing a transition kernel K1 on the layer l1. In this setting, we study the vector fields and the association fields induced by the connectivity kernel K1. To this end, we first approximate the filter bank in l1 with a Gabor function and use the fitted parameters to re-parameterize the kernel. Thanks to this step, the kernel is re-parameterized into the sub-Riemannian space R2 × S1. We are then able to compare the vector and association fields induced by K1 with models of the horizontal connectivity.
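As a minimal sketch of the kind of radially symmetric pre-filter discussed above (assuming a NumPy environment; the helper name and normalization are illustrative, not the thesis code), a Laplacian-of-Gaussian kernel against which a learned first-layer filter Ψ0 could be compared looks like this:

```python
import numpy as np

def log_kernel(size=7, sigma=1.0):
    """Laplacian-of-Gaussian kernel, the classical model of LGN receptive
    profiles; a learned first-layer filter Psi_0 can be compared against
    this radially symmetric shape (overall normalization constant omitted)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2.0 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2.0 * sigma ** 2))
    return k - k.mean()  # zero mean, as expected for a contrast-sensitive cell
```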
Abstract:
In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several reasons: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard because they generalize MILP problems, which are NP-hard themselves. However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e. those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different possible approaches that one can study to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs and the most common methods used to solve them. These techniques play a fundamental role in the resolution of general MINLP problems. We then describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, in Part 2 four different methods aimed at solving different classes of MINLP problems are presented. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design problems. The results show that each of these methods has advantages and disadvantages; thus, the method adopted to solve a real-world problem should typically be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software). We present the main characteristics of solvers for each special case of MINLP.
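For reference, a general MINLP can be written in the standard form below; when the objective f or any constraint g_i is non-convex in the continuous variables, the continuous relaxation itself is non-convex, which is the setting addressed in the thesis.

```latex
\min_{x,\,y}\; f(x, y)
\quad \text{s.t.} \quad g_i(x, y) \le 0,\; i = 1, \dots, m,
\qquad x \in \mathbb{R}^{n}, \;\; y \in \mathbb{Z}^{p}.
```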
Abstract:
Aims: ripening evaluation of two different Pecorino cheese varieties ripened either according to a traditional method in plant or in a cave. Different ripening features have been analyzed in order to evaluate the cave as a possible ripening environment, with the aim of obtaining a peculiar product which could also add value to the cultural heritage of the place in which it was originally manufactured. Methods and Results: chemical-physical features of Pecorino cheese were initially analyzed in the two different ripening environments and experimentations, among them pH, weight reduction and water activity. Furthermore, the microbial composition was characterized in relation to the two different ripening environments, covering a variety of microbial groups such as lactic acid bacteria, staphylococci, yeasts, lactococci, enterobacteria and enterococci. An additional analysis for the evaluation of in-cave adaptability was the identification of biogenic amines in the Pecorino cheese (2-phenylethylamine, putrescine, cadaverine, histidine, tyramine, spermine and spermidine). Further analyses were undertaken in order to track the evolution of the lipid profile, reporting the concentration of free fatty acids in the cheeses under study in relation to ripening time, environment and production. In order to analyse the flavour compounds present in Pecorino cheese, the SPME-GC-MS technique was employed. As a result, the trend shown by the short-chain free fatty acids, i.e. the fatty acids mostly involved in conveying a stronger flavor to the cheese, was confirmed. With the purpose of assessing the proteolytic patterns of the above-mentioned Pecorino cheese in the two different ripening environments and testing methods, the SDS-PAGE technique was applied to both the insoluble and the soluble fractions of the cheese. Furthermore, different isolates belonging to various microbial groups were genotypically characterized through the ITS-PCR technique with the aim of identifying the species they belong to. Among lactobacilli the characterized species are Lactobacillus brevis, Lactobacillus curvatus and Lactobacillus paraplantarum. Among lactococci the predominant species is Lactococcus lactis, coming from the starter employed in the cheese manufacturing. Among enterococci, the predominant species are Enterococcus faecium and Enterococcus faecalis. Moreover, Streptococcus thermophilus and Streptococcus macedonicus were identified too. For staphylococci the identified species are Staphylococcus equorum, Staphylococcus saprophyticus and Staphylococcus xylosus. Finally, a sensory analysis was undertaken, on the one hand through a consumer test with inexperienced consumers and on the other hand through a panel test with expert assessors. From these tests, Pecorino cheeses ripened in the cave were found to be more pleasant than Pecorino cheeses ripened in plant. Conclusions: the proposed approach and the analyses undertaken showed the cave to be a preferential ripening environment for Pecorino cheese and for the development of a more palatable product that is also safer for consumers' health.
Abstract:
The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, Service-Oriented and Multi-Agent Systems. It is largely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
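To give a flavor of such a translation (a simplified sketch, not the exact CLIMB syntax): the ConDec response(a, b) constraint, stating that every execution of activity a must eventually be followed by an execution of b, can be rendered as a forward rule relating happened events (H) to expectations (E):

```latex
\mathbf{H}(exec(a), T) \;\rightarrow\; \mathbf{E}(exec(b), T') \,\wedge\, T' > T .
```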
Abstract:
Pig meat quality is determined by several parameters, such as lipid content, tenderness, water-holding capacity, pH, color and flavor, that affect consumers' acceptance and the technological properties of meat. Carcass quality parameters are important for the production of fresh and dry-cured high-quality products, in particular fat deposition and lean cut yield. The identification of genes and markers associated with meat and carcass quality traits is of prime interest, given the possibility of improving these traits through marker-assisted selection (MAS) schemes. Therefore, the aim of this thesis was to investigate seven candidate genes for meat and carcass quality traits in pigs. In particular, we focused on genes belonging to the family of the lipid droplet coat proteins, the perilipins (PLIN1 and PLIN2), and to the calpain/calpastatin system (CAST, CAPN1, CAPN3, CAPNS1), as well as on the gene encoding PPARg-coactivator 1A (PPARGC1A). In general, the candidate gene investigation included protein localization, detection of polymorphisms, association analysis with meat and carcass traits and analysis of expression levels, in order to assess the involvement of each gene in pork quality. Some of the analyzed genes showed effects on various pork traits that are subject to selection in genetic improvement programs, suggesting a possible involvement of these genes in controlling trait variability. In particular, significant association results were obtained for the PLIN2, CAST and PPARGC1A genes, which warrant further validation. The results obtained contribute to a better understanding of biological mechanisms important for pig production, as well as to the possible use of the pig as an animal model for studies of obesity in humans.
Abstract:
The first part of the thesis concerns the study of inflation in the context of a theory of gravity called "Induced Gravity", in which the gravitational coupling varies in time according to the dynamics of the very same scalar field (the "inflaton") driving inflation, while settling to the value measured today after the end of inflation. Through the analytical and numerical analysis of scalar and tensor cosmological perturbations, we show that the model leads to consistent predictions for a broad variety of symmetry-breaking inflaton potentials, once a dimensionless parameter entering the action is properly constrained. We also discuss the average expansion of the Universe after inflation (when the inflaton undergoes coherent oscillations about the minimum of its potential) and determine the effective equation of state. Finally, we analyze the resonant and perturbative decay of the inflaton during (p)reheating. The second part is devoted to the study of a proposal for a quantum theory of gravity dubbed "Horava-Lifshitz (HL) Gravity", which relies on power-counting renormalizability while explicitly breaking Lorentz invariance. We test two variants of the theory ("projectable" and "non-projectable") on a cosmological background and with the inclusion of scalar field matter. By inspecting the quadratic action for the linear scalar cosmological perturbations, we determine the actual number of propagating degrees of freedom and find that the theory, being endowed with fewer symmetries than General Relativity, admits an extra gravitational degree of freedom which is potentially unstable. More specifically, we conclude that in the case of projectable HL Gravity the extra mode is either a ghost or a tachyon, whereas in the case of non-projectable HL Gravity the extra mode can be made well behaved for suitable choices of a pair of free dimensionless parameters and, moreover, turns out to decouple from low-energy physics.
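For orientation, the prototypical induced-gravity action has the form below, where σ is the inflaton, γ the dimensionless parameter to be constrained, and V(σ) a symmetry-breaking potential; the exact conventions used in the thesis may differ.

```latex
S = \int d^{4}x \, \sqrt{-g}\left[ \frac{\gamma}{2}\,\sigma^{2} R
  - \frac{1}{2}\, g^{\mu\nu}\,\partial_{\mu}\sigma\,\partial_{\nu}\sigma
  - V(\sigma) \right]
```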
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and thus avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to increase the convergence basin of the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
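As a minimal sketch of the 2D/3D registration step discussed above (assuming NumPy/SciPy; the project function, data layout and cost metric are hypothetical simplifications of the thesis pipeline), the pose is found by minimizing a silhouette-to-contour distance, which is exactly where the narrow convergence basin of local optimizers becomes the dominant issue:

```python
import numpy as np
from scipy.optimize import minimize

def pose_cost(pose, model_points, image_contour, project):
    """Distance between the projected 3D bone-model silhouette and the
    2D contour extracted from the fluoroscopic image."""
    projected = project(model_points, pose)  # hypothetical camera model, returns (N, 2)
    d = np.linalg.norm(projected[:, None, :] - image_contour[None, :, :], axis=-1)
    # symmetric nearest-neighbour distance as a simple contour similarity metric
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def estimate_pose(initial_pose, model_points, image_contour, project):
    """Local refinement of the 6-DOF pose (3 translations + 3 rotations);
    a global/memetic search, as described in the thesis, would wrap this
    step to escape the narrow convergence basin of the local optimizer."""
    res = minimize(pose_cost, np.asarray(initial_pose, dtype=float),
                   args=(model_points, image_contour, project),
                   method="Nelder-Mead")
    return res.x
```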
Abstract:
This Thesis focuses on the X-ray study of the inner regions of Active Galactic Nuclei, in particular on the formation of high-velocity winds by the accretion disk itself. Constraining the physical parameters of AGN winds is of paramount importance both for understanding the physics of the accretion/ejection flow onto supermassive black holes, and for quantifying the amount of feedback between the SMBH and its environment across cosmic time. The sources selected for the present study are BAL, mini-BAL, and NAL QSOs, known to host high-velocity winds associated with the AGN nuclear regions. Observationally, a three-fold strategy has been adopted: - substantial samples of distant sources have been analyzed through spectral, photometric, and statistical techniques, to gain insight into their mean properties as a population; - a moderately sized sample of bright sources has been studied through detailed X-ray spectral analysis, to give a first flavor of the general spectral properties of these sources, also from a temporally resolved point of view; - the best nearby candidate has been thoroughly studied using the most sophisticated spectral analysis techniques applied to a large dataset with a high S/N ratio, to understand the details of the physics of its accretion/ejection flow. There are three main channels through which this Thesis has been developed: - [Archival Studies]: the XMM-Newton public archival data have been extensively used to analyze both a large sample of distant BAL QSOs and several individual bright sources, either BAL, mini-BAL, or NAL QSOs. - [New Observational Campaign]: I proposed and was awarded new X-ray pointings of the mini-BAL QSOs PG 1126-041 and PG 1351+640 during the XMM-Newton AO-7 and AO-8. These produced the largest X-ray observational campaign ever performed on a mini-BAL QSO (PG 1126-041), including the longest exposure so far. Thanks to this exceptional dataset, a wealth of information has been obtained on both the intrinsic continuum and the complex reprocessing media present in the inner regions of this AGN. Furthermore, the field of temporally resolved X-ray spectral analysis has finally been opened for mini-BAL QSOs. - [Theoretical Studies]: some issues regarding the connection between theories and observations of AGN accretion disk winds have been investigated, through theoretical arguments and studies of synthetic absorption line profiles.
Abstract:
Non-Equilibrium Statistical Mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, or with systems which are in a steady non-equilibrium state, or with more general situations. These systems are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict with networks entirely analogous to electrical networks. We define suitable observables and derive their linear-regime relationships, we discuss a duality between external and internal observables that reverses the roles of the system and of the environment, and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects regarding linear response determinants, which are related to spanning tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains, give a physical interpretation of observables in terms of locally detailed balanced rates, prove many variants of the fluctuation theorem, and show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
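For reference, Schnakenberg's expression for the entropy production rate of a continuous-time Markov chain with transition rates w_{ij} (from state j to state i) and probability distribution p reads as below; conventions (e.g. factors of k_B) may differ from those used in the thesis.

```latex
\sigma \;=\; \frac{1}{2} \sum_{i,j} \bigl( w_{ij}\, p_j - w_{ji}\, p_i \bigr)
            \,\ln \frac{w_{ij}\, p_j}{w_{ji}\, p_i} \;\ge\; 0 .
```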
Abstract:
One of the main targets of the CMS experiment is the search for the Standard Model Higgs boson. The 4-lepton channel (from the Higgs decay h->ZZ->4l, l = e,mu) is one of the most promising. The analysis is based on the identification of two opposite-sign, same-flavor lepton pairs: the leptons are required to be isolated and to come from the same primary vertex. The Higgs would be statistically revealed by the presence of a resonance peak in the 4-lepton invariant mass distribution. The 4-lepton analysis at CMS is presented, covering its most important aspects: lepton identification, isolation variables, impact parameter, kinematics, event selection, background control and statistical analysis of the results. The search leads to evidence for the presence of a signal with a statistical significance of more than four standard deviations. The excess of data with respect to the background-only predictions indicates the presence of a new boson, with a mass of about 126 GeV/c2, decaying to two Z bosons, whose characteristics are compatible with those of the SM Higgs.
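As a highly simplified sketch of the pairing logic described above (hypothetical data layout, e.g. leptons as dictionaries like {"p4": (E, px, py, pz), "charge": -1, "flavor": "mu"}; isolation, vertex and mass-window requirements of the real analysis are omitted), the selection reduces to finding a quadruplet that splits into two opposite-sign, same-flavor pairs and computing its invariant mass:

```python
import itertools
import numpy as np

def invariant_mass(leptons):
    """Invariant mass from summed (E, px, py, pz) four-vectors."""
    E, px, py, pz = np.sum([lep["p4"] for lep in leptons], axis=0)
    return float(np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0)))

def select_4l_mass(leptons):
    """Return the 4-lepton invariant mass of the first quadruplet that
    splits into two opposite-sign, same-flavor pairs, else None."""
    for quad_idx in itertools.combinations(range(len(leptons)), 4):
        quad = [leptons[i] for i in quad_idx]
        for i, j in itertools.combinations(range(4), 2):
            k, m = [n for n in range(4) if n not in (i, j)]
            if (quad[i]["flavor"] == quad[j]["flavor"]
                    and quad[i]["charge"] == -quad[j]["charge"]
                    and quad[k]["flavor"] == quad[m]["flavor"]
                    and quad[k]["charge"] == -quad[m]["charge"]):
                return invariant_mass(quad)
    return None
```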
Abstract:
The objectives of this PhD research were: i) to evaluate the use of the bread-making process to increase the content of β-glucans, resistant starch, fructans, dietary fiber and phenolic compounds in Kamut khorasan and wheat breads made with flours obtained from kernels at different maturation stages (milky stage and fully ripe), and ii) to study the impact of whole grain consumption on the human gut. The fermentation and the stage of kernel development or maturation had a great impact on the amounts of resistant starch, fructans and β-glucans, and their interactions were highly statistically significant. The amount of fructans was high in Kamut bread (2.1 g/100 g) at the fully ripe stage compared to wheat during industrial fermentation (baker's yeast). Sourdough fermentation increased the polyphenol content more than industrial fermentation, especially in bread made from flour at the milky stage. The analysis of volatile compounds showed that both the electronic nose sensors and SPME-GC-MS detected more aromatic compounds in Kamut products; we can therefore assume that Kamut is more aromatic than wheat, so using it in a sourdough process can be a successful approach to improving bread taste and flavor. The determination of whole grain biomarkers such as alkylresorcinols and others using FIE-MS and GC-TOF-MS is a valuable alternative for further metabolic investigations. The decrease of N-acetyl-glucosamine and 3-methyl-hexanedioic acid in Kamut faecal samples suggests that Kamut can play a role in modulating mucus production/degradation or even gut inflammation. This work provides a new approach to innovation strategies in functional bakery foods, helping to choose the best combination of kernel maturation stage, fermentation process and baking temperature.
Abstract:
Pig meat and carcass quality is a complex concept determined by environmental and genetic factors concurring to the phenotypic variation in the qualitative characteristics of meat (fat content, tenderness, juiciness, flavor, etc.). This thesis presents the results of different investigations studying and analyzing pig meat and carcass quality, focusing mainly on genomics; a proteomic approach has also been used. The aim was to analyze data from association studies between candidate genes and meat and carcass quality traits in different pig breeds. The approach was used to detect new SNPs in genes functionally associated with the studied traits and to confirm other genes already known as candidates. Five polymorphisms (one new SNP in the Calponin 1 gene and four additional polymorphisms already known in other genes) were considered on chromosome 2 (SSC2). Calponin 1 (CNN1) was associated with the studied traits, and the results confirmed the data already known for Lactate dehydrogenase A (LDHA), Low density lipoprotein receptor (LDLR), Myogenic differentiation 1 (MYOD1) and Ubiquitin-like 5 (UBL5) in Italian Large White pigs. Using an in silico search it was possible to detect on SSC2 a new SNP in the Deoxyhypusine synthase (DHPS) gene, partially overlapping with the WD repeat domain 83 (WDR83) gene, significant for meat pH variation in Italian Large White (ILW) pigs. Perilipin 1 (PLIN1), mapping on chromosome 7, and Perilipin 2 (PLIN2), mapping on chromosome 1, were studied, and the results obtained in the Duroc breed showed significant associations with carcass traits. Moreover, a study of the protein composition of porcine LD muscle indicated an effect of the temperature treatment of the carcass on proteins of the sarcoplasmic fraction, and in particular on PGM1 phosphorylation. Future studies on pig meat quality should be based on the integration of different experimental approaches (genomics, proteomics, transcriptomics, etc.).