125 results for cog humanoid robot embodied learning phd thesis metaphor pancake reaching vision


Relevance:

100.00%

Publisher:

Abstract:

In this thesis, we investigate the role of applied physics in epidemiological surveillance through the application of mathematical models, network science and machine learning. The spread of a communicable disease depends on many biological, social, and health factors. The large masses of data available make it possible, on the one hand, to monitor the evolution and spread of pathogenic organisms; on the other hand, to study the behavior of people, their opinions and habits. Presented here are three lines of research in which an attempt was made to solve real epidemiological problems through data analysis and the use of statistical and mathematical models. In Chapter 1, we applied language-inspired Deep Learning models to transform influenza protein sequences into vectors encoding their information content. We then attempted to reconstruct the antigenic properties of different viral strains using regression models and to identify the mutations responsible for vaccine escape. In Chapter 2, we constructed a compartmental model to describe the spread of a bacterium within a hospital ward. The model was informed and validated on time series of clinical measurements, and a sensitivity analysis was used to assess the impact of different control measures. Finally (Chapter 3) we reconstructed the network of retweets among COVID-19 themed Twitter users in the early months of the SARS-CoV-2 pandemic. By means of community detection algorithms and centrality measures, we characterized users’ attention shifts in the network, showing that scientific communities, initially the most retweeted, lost influence over time to national political communities. In the Conclusion, we highlighted the importance of the work done in light of the main contemporary challenges for epidemiological surveillance. 
In particular, we present reflections on the importance of nowcasting and forecasting, the relationship between data and scientific research, and the need to unite the different scales of epidemiological surveillance.
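A compartmental model of the kind used in Chapter 2 can be sketched in a few lines. The SIR structure, parameter values, and time step below are illustrative assumptions for exposition, not the hospital-ward model of the thesis:

```python
# Minimal SIR compartmental model integrated with forward Euler.
# All parameters (beta, gamma, initial fractions) are illustrative,
# not taken from the thesis.

def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    """Return the trajectory of (S, I, R) population fractions."""
    s, i, r = s0, i0, 0.0
    traj = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # force of infection
        new_rec = gamma * i * dt      # recoveries
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        traj.append((s, i, r))
    return traj

traj = simulate_sir()
peak_i = max(i for _, i, _ in traj)
```

Informing such a model on clinical time series then amounts to fitting the transmission and recovery rates to the observed counts, and a sensitivity analysis varies those rates to mimic control measures.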


The integration of distributed and ubiquitous intelligence has emerged over the last years as the mainspring of transformative advancements in mobile radio networks. As we approach the era of “mobile for intelligence”, next-generation wireless networks are poised to undergo significant and profound changes. Notably, the overarching challenge that lies ahead is the development and implementation of integrated communication and learning mechanisms that will enable the realization of autonomous mobile radio networks. The ultimate pursuit of eliminating human-in-the-loop constitutes an ambitious challenge, necessitating a meticulous delineation of the fundamental characteristics that artificial intelligence (AI) should possess to effectively achieve this objective. This challenge represents a paradigm shift in the design, deployment, and operation of wireless networks, where conventional, static configurations give way to dynamic, adaptive, and AI-native systems capable of self-optimization, self-sustainment, and learning. This thesis aims to provide a comprehensive exploration of the fundamental principles and practical approaches required to create autonomous mobile radio networks that seamlessly integrate communication and learning components. The first chapter of this thesis introduces the notion of Predictive Quality of Service (PQoS) and adaptive optimization and expands upon the challenge to achieve adaptable, reliable, and robust network performance in dynamic and ever-changing environments. The subsequent chapter delves into the revolutionary role of generative AI in shaping next-generation autonomous networks. This chapter emphasizes achieving trustworthy uncertainty-aware generation processes with the use of approximate Bayesian methods and aims to show how generative AI can improve generalization while reducing data communication costs. Finally, the thesis embarks on the topic of distributed learning over wireless networks. 
Distributed learning and its variants, including multi-agent reinforcement learning systems and federated learning, have the potential to meet the scalability demands of modern data-driven applications, enabling efficient and collaborative model training across dynamic scenarios while ensuring data privacy and reducing communication overhead.
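The federated learning setting mentioned above can be illustrated with a minimal federated-averaging loop: clients train locally on private data and the server only averages model parameters. The linear model, gains, and synthetic clients are assumptions for the sketch, not anything from the thesis:

```python
# Sketch of federated averaging (FedAvg) on a toy linear model y = w*x.
# Clients never share raw data, only locally updated weights.
import random

def local_step(w, data, lr=0.1):
    """One local epoch of SGD on one client's private data."""
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=50):
    for _ in range(rounds):
        local_ws = [local_step(global_w, d) for d in client_datasets]
        global_w = sum(local_ws) / len(local_ws)  # server-side average
    return global_w

random.seed(0)
true_w = 2.0
clients = [[(x, true_w * x) for x in [random.uniform(-1, 1) for _ in range(20)]]
           for _ in range(5)]
w = fed_avg(0.0, clients)
```

Only the scalar weight crosses the network each round, which is the communication-cost argument made in the chapter.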


Although hysteroscopy with endometrial biopsy is the gold standard for diagnosing intracavitary uterine pathology, the hysteroscopist's experience is essential for a correct diagnosis. Deep Learning (DL), as an artificial intelligence technique, could help overcome this limitation. Few studies with preliminary results are available, and research evaluating the performance of DL models in identifying intrauterine lesions, and the possible contribution of clinical factors, is lacking. Objective: To develop a DL model to identify and classify intracavitary uterine pathologies from hysteroscopic images. Methods: A single-center retrospective observational cohort study was performed on a consecutive series of hysteroscopic cases, carried out at the Policlinico S. Orsola, of patients with intracavitary uterine pathology confirmed by histological examination. The hysteroscopic images were used to build a DL model for the classification and identification of intracavitary lesions, with and without the aid of clinical factors (age, menopause, AUB, hormone therapy and tamoxifen). As study outcomes, we computed the diagnostic metrics of the DL model in classifying and identifying intracavitary uterine lesions, with and without the aid of the clinical factors. Results: We reviewed 1,500 images from 266 cases: 186 patients had benign focal lesions, 25 benign diffuse lesions and 55 preneoplastic/neoplastic lesions. For both classification and identification, the best performance was achieved with the aid of clinical factors: overall, for classification, precision of 80.11%, recall of 80.11%, specificity of 90.06%, F1 score of 80.11% and accuracy of 86.74%. For identification, we obtained an overall detection rate of 85.82%, precision of 93.12%, recall of 91.63% and F1 score of 92.37%. 
Conclusions: The DL model achieved low performance in identifying and classifying intracavitary uterine lesions from hysteroscopic images. Although the best diagnostic performance was obtained with the aid of specific clinical factors, the improvement was modest.
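One simple way to combine image features with clinical factors, as studied above, is late fusion: concatenating an image embedding with an encoded clinical vector ahead of the classifier head. The encoding, feature sizes, and names below are hypothetical illustrations, not the thesis architecture:

```python
# Late-fusion sketch: image embedding + encoded clinical factors.
# Shapes and the stand-in embedding are illustrative assumptions.

def encode_clinical(age, menopause, aub, hormone_therapy, tamoxifen):
    """Normalize age and keep the binary factors as 0/1 flags."""
    return [age / 100.0, float(menopause), float(aub),
            float(hormone_therapy), float(tamoxifen)]

def fuse(image_embedding, clinical):
    """Concatenate into one feature vector for the classifier head."""
    return list(image_embedding) + list(clinical)

img_feat = [0.12, -0.35, 0.88]  # stand-in for a CNN image embedding
clin = encode_clinical(age=54, menopause=True, aub=True,
                       hormone_therapy=False, tamoxifen=False)
features = fuse(img_feat, clin)
```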


Spiking Neural Networks (SNNs) are bio-inspired Artificial Neural Networks (ANNs) utilizing discrete spiking signals, akin to neuron communication in the brain, making them ideal for real-time and energy-efficient Cyber-Physical Systems (CPSs). This thesis explores their potential in Structural Health Monitoring (SHM), leveraging low-cost MEMS accelerometers for early damage detection in motorway bridges. The study focuses on Long Short-Term SNNs (LSNNs), although their complex learning processes pose challenges. A comparison of LSNNs with other ANN models and training algorithms for SHM indicates that LSNNs are effective in damage identification, performing comparably to ANNs trained using traditional methods. Additionally, an optimized embedded LSNN implementation demonstrates a 54% reduction in execution time, but with longer pre-processing due to spike-based encoding. Furthermore, SNNs are applied to UAV obstacle avoidance, trained directly using a Reinforcement Learning (RL) algorithm with event-based input from a Dynamic Vision Sensor (DVS). Performance evaluation against Convolutional Neural Networks (CNNs) highlights SNNs' superior energy efficiency, showing a 6x decrease in energy consumption. The study also investigates the latency and throughput of embedded SNN implementations in real-world deployments, emphasizing their potential for energy-efficient monitoring systems. This research contributes to advancing SHM and UAV obstacle avoidance through SNNs' efficient information processing and decision-making capabilities within CPS domains.
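The spiking dynamics underlying SNNs can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic building block of such networks. The time constant, threshold, and input trace are illustrative values, not parameters from the thesis:

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Constants are
# illustrative; real SNN layers wire many such units together.

def lif_run(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Integrate an input current trace; return emitted spike times."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)  # leaky integration
        if v >= v_th:                # threshold crossing -> spike
            spikes.append(t)
            v = v_reset              # hard reset after firing
    return spikes

spikes = lif_run([0.3] * 50)
```

Information is carried by the discrete spike times rather than continuous activations, which is what makes event-based sensors such as a DVS a natural input for these models.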


Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are thought to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter into the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. 
The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena, while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present modelling. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions: • Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters? • How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency? • How many Radio Halos are expected to form in the Universe? 
At which redshift is the bulk of these sources expected? • Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters? • Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations? Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are of interest for achieving our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. 
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. 
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis. 
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last "geometrical" M_H–R_H correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (P_R–R_H, P_R–M_H, P_R–T, P_R–L_X, ...) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
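The cut-off argument invoked above can be written compactly. The notation here is schematic and assumed for illustration, not quoted from the thesis:

```latex
% Schematic acceleration-loss balance behind the electron spectral
% cut-off (symbols assumed for illustration):
\frac{d\gamma}{dt} \;\simeq\; \frac{\gamma}{\tau_{\rm acc}}
  \;-\; \beta_{\rm loss}\,\gamma^{2},
\qquad
\gamma_{c} \;\simeq\; \frac{1}{\beta_{\rm loss}\,\tau_{\rm acc}},
\qquad
\nu_{c} \;\propto\; B\,\gamma_{c}^{2},
```

where τ_acc is the turbulent acceleration time and β_loss collects the synchrotron and inverse Compton losses. A more efficient merger (smaller τ_acc) pushes the cut-off Lorentz factor γ_c, and hence the maximum synchrotron frequency ν_c, upward, which is the basis of the frequency-dependent occurrence of Radio Halos discussed above.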


Thanks to the Chandra and XMM–Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2–8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by "peculiar" optical properties. In my PhD thesis, I will focus on the nature of two classes of hard X-ray selected "elusive" sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of "elusive" sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than those generally observed for unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (∼10^−16 erg cm^−2 s^−1), high X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. 
The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high X/O population. The results of the spectral analysis clearly indicate that the high X/O sources represent the most obscured component of the X-ray background: their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields, and also than the XRB spectrum (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R − K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED-fitting procedure, the nuclear and host galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extremely red colors is highly efficient in picking up highly obscured, luminous sources at high redshift. Although XBONGs do not represent a new source population, interest in the nature of these sources has gained renewed attention after the discovery of several examples in recent Chandra and XMM–Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (L_X = 10^42–10^43 erg s^−1) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. 
Good-quality photometric near-infrared data (ISAAC/VLT) for 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus not able to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
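For reference, the X/O ratio used for the selection above is commonly defined as below; the zero-point constant depends on the optical band and survey calibration, so the R-band value shown is indicative rather than the thesis's exact convention:

```latex
X/O \;\equiv\; \log_{10}\!\left(\frac{f_{X}}{f_{\rm opt}}\right)
\;=\; \log_{10} f_{X} \;+\; \frac{m_{R}}{2.5} \;+\; 5.50
```

with f_X the X-ray flux in erg cm^−2 s^−1 and m_R the R-band magnitude. Unobscured AGN typically fall within −1 < X/O < 1, so sources with X/O > 1 flag the "elusive", likely obscured population discussed here.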


Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most credited Service Oriented Computing technology is that of Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergistically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing such an issue from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL which captures the essentials of Service Oriented Computing. From the definition of SOCK we are able to define a general model for dealing with Service Oriented Computing, where services and systems of services are related to the design of finite state automata and process algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. Such a language is equipped with a formal semantics and it forms, together with a subset of the SOCK calculus, the bipolar framework. 
Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus, which is part of the bipolar framework we intend to promote.
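The two viewpoints can be contrasted in a toy sketch: an orchestrator drives partner services from a single locus of control, while a choreography declares the admissible global interaction order, against which observed behaviour can be checked for a naive form of conformance. The service names and the prefix-based check are illustrative inventions, unrelated to SOCK's formal semantics:

```python
# Toy contrast: local orchestration vs. a global choreography.

def credit_service(amount):
    return {"approved": amount <= 1000}

def shipping_service(order_id):
    return {"shipped": True, "order": order_id}

def orchestrate(order_id, amount, log):
    """Local view: one process invokes the partners in sequence."""
    log.append(("invoke", "credit"))
    if credit_service(amount)["approved"]:
        log.append(("invoke", "shipping"))
        return shipping_service(order_id)
    return {"shipped": False, "order": order_id}

# Global view: the choreography as an allowed interaction sequence.
CHOREOGRAPHY = [("invoke", "credit"), ("invoke", "shipping")]

def conforms(log, choreography):
    """Naive conformance: the observed log must be a prefix of the
    declared global interaction order."""
    return log == choreography[:len(log)]

log = []
result = orchestrate("ord-42", 250, log)
ok = conforms(log, CHOREOGRAPHY)
```

The thesis's conformance relation is of course far richer than this prefix check, but the sketch shows the two perspectives the bipolar approach relates.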


This PhD thesis presents the results, achieved at the Aerospace Engineering Department Laboratories of the University of Bologna, concerning the development of small-scale Rotary-wing UAVs (RUAVs). In the first part of the work, a mission simulation environment for rotary-wing UAVs was developed, as the main outcome of the University of Bologna partnership in the CAPECON program (an EU-funded research program aimed at studying UAV civil applications and the economic effectiveness of the potential configuration solutions). The results achieved in cooperation with DLR (German Aerospace Centre) and with a helicopter industry partner will be described. In the second part of the work, the set-up of a real small-scale rotary-wing platform was performed. The work was carried out following a series of subsequent logical steps, from hardware selection and set-up to final autonomous flight tests. This thesis will focus mainly on the RUAV avionics package set-up, the onboard software development and the final experimental tests. The set-up of the electronic package allowed recording of helicopter responses to pilot commands and provided deep insight into small-scale rotorcraft dynamics, facilitating the development of helicopter models and control systems in a Hardware In the Loop (HIL) simulator. A nested PI velocity controller was implemented on the onboard computer and autonomous flight tests were performed. Comparison between HIL simulation and experimental results showed good agreement.
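A nested PI velocity controller of the general kind described can be sketched as two cascaded PI loops: the outer loop turns velocity error into an attitude set-point, and the inner loop tracks that attitude. The toy plant (tilt produces forward acceleration, with linear drag) and all gains are illustrative assumptions, not the thesis's rotorcraft model:

```python
# Nested (cascaded) PI sketch: velocity loop wrapped around an
# attitude loop. Plant dynamics and gains are illustrative only.

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integ = kp, ki, dt, 0.0

    def step(self, err):
        self.integ += err * self.dt
        return self.kp * err + self.ki * self.integ

def run(v_ref=1.0, steps=600, dt=0.01):
    outer, inner = PI(0.8, 0.4, dt), PI(6.0, 2.0, dt)
    v = theta = 0.0                          # forward velocity, pitch
    for _ in range(steps):
        theta_ref = outer.step(v_ref - v)    # velocity loop -> attitude cmd
        u = inner.step(theta_ref - theta)    # attitude loop -> actuator
        theta += dt * u                      # crude attitude dynamics
        v += dt * (9.81 * theta - 0.5 * v)   # tilt accelerates, drag brakes
    return v

v_final = run()
```

The integral terms drive the steady-state velocity error to zero, while the faster inner loop lets the slower outer loop treat attitude as an ideal actuator, which is the usual rationale for nesting the two.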


This volume is a collection of the work done during a three-year PhD focused on the analysis of Central and Southern Adriatic marine sediments from a borehole and many cores, collected thanks to the good seismic-stratigraphic knowledge of the study area. The work was carried out within the European projects EC-EURODELTA (coordinated by Fabio Trincardi, ISMAR-CNR), EC-EUROSTRATAFORM (coordinated by Phil P. E. Weaver, NOC, UK), and PROMESS1 (coordinated by Serge Bernè, IFREMER, France). The analysed sedimentary successions presented highly expanded stratigraphic intervals, particularly for the last 400 kyr, 60 kyr and 6 kyr BP. These three different time intervals resulted in a tri-partition of the PhD thesis. The study consisted of the analysis of planktic and benthic foraminifers' assemblages (more than 560 samples analysed), as well as of preparing the material for oxygen and carbon stable isotope analyses, and of interpreting and discussing the resulting dataset. The chronologic framework of the last 400 kyr was established for borehole PRAD1-2 (within work-package WP6 of the PROMESS1 project), collected in 186.5 m water depth. The proposed chronology derives from a multi-disciplinary approach consisting of the integration of numerous independent proxies, some of which were analysed by other specialists within the project. The final framework is based on: micropaleontology (calcareous nannofossil and foraminifer bioevents), climatic cyclicity (foraminifers' assemblages), geochemistry (oxygen stable isotopes, measured on planktic and benthic records), paleomagnetism, radiometric ages (14C AMS), tephrochronology, and the identification of sapropel-equivalent levels (Se). It is worth noting the good consistency between the oxygen stable isotope curve obtained for borehole PRAD1-2 and other, deeper Mediterranean records. 
The studied proxies allowed the recognition of all the isotopic intervals from MIS10 to MIS1 in the PRAD1-2 record, and the base of the borehole has been ascribed to the early MIS11. Glacial and interglacial intervals identified in the Central Adriatic record have been analysed in detail for the paleo-environmental reconstruction as well. For instance, glacial stages MIS6, MIS8 and MIS10 present peculiar foraminifers' assemblages, composed of benthic species typical of polar regions and no longer living in the Central Adriatic nowadays. Moreover, a deepening trend in the paleo-bathymetry during glacial intervals was observed, from MIS10 (inner-shelf environment) to MIS4 (mid-shelf environment). Ten sapropel-equivalent levels have been recognised in the PRAD1-2 Central Adriatic record. They showed different planktic foraminifers' assemblages, which allowed the first distinction between events that occurred during warm-climate (Se5, Se7), cold-climate (Se4, Se6 and Se8) and temperate-intermediate-climate (Se1, Se3, Se9, Se', Se10) conditions, consistently with the literature. Cold-climate sapropel equivalents are characterised by the absence of an oligotrophic phase, whereas warm-temperate-climate sapropel equivalents present both the oligotrophic and the eutrophic phases (except for Se1). Sea-floor conditions vary, according to the benthic foraminifers' assemblages, from relatively well oxygenated (Se1, Se3), to dysoxic (Se9, Se', Se10), to highly dysoxic (Se4, Se6, Se8), to events during which benthic foraminifers are absent (Se5, Se7). These two latter levels are also characterised by lamination of the sediment, a feature never before observed in the literature in such shallow records. 
The enhanced stratification of the water column during the events Se8, Se7, Se6, Se5, Se4, and the concurring strong dilution of shallow water, pointed out by the isotope record, lead to the hypothesis of a period of intense precipitation in the Central Adriatic region, possibly due to a northward shift of the African Monsoon. Finally, the expression of Central Adriatic PRAD1-2 Se5 equivalent was compared with the same event, as registered in other Eastern Mediterranean areas. The sequence of substantially the same planktic foraminifers’ bioevents has been consistently recognised, indicating a similar evolution of the water column all over the Eastern Mediterranean; yet, the synchronism of these events cannot be demonstrated. A high resolution analysis of late Holocene (last 6000 years BP) climate change was carried out for the Adriatic area, through the recognition of planktic and benthic foraminifers’ bioevents. In particular, peaks of planktic Globigerinoides sacculifer (four during the last 5500 years BP in the most expanded core) have been interpreted, based on the ecological requirements of this species, as warm-climate, arid intervals, correspondent to periods of relative climatic optimum, such as, for instance, the Medieval Warm Period, the Roman Age, the Late Bronze Age and the Copper Age. Consequently, the minima in the abundance of this biomarker could correspond to relatively cooler and more rainy periods. These conclusions are in good agreement with the isotopic and the pollen data. The Last Occurrence (LO) of G. sacculifer has been dated in this work at an average age of 550 years BP, and it is the best bioevent approximating the base of the Little Ice Age in the Adriatic. Recent literature reports the same bioevent in the Levantine Basin, showing a rather consistent age. Therefore, the LO of G. sacculifer has the potential to be extended to all the Eastern Mediterranean. Within the Little Ice Age, benthic foraminifer V. 
complanata shows two distinct peaks in the shallower Adriatic cores analysed, collected hundreds of kilometres apart within the mud-belt environment. Based on the ecological requirements of this species, these two peaks have been interpreted as the most intense (cold and rainy) oscillations within the LIA. The chronologic framework of the analysed cores is robust, being based on several range-finding 14C AMS ages, on estimates of the secular variation of the magnetic field, and on geochemical estimates of the activity depth of the short-lived radionuclide 210Pb (for the core-top ages), and is in good agreement with tephrochronologic, pollen and foraminiferal data. The intra-Holocene climate oscillations found in the Adriatic have been compared with those reported in the literature from other records of the Northern Hemisphere, and the chronologic agreement appears quite good. Finally, the sedimentary successions analysed allowed a review and update of the foraminiferal ecobiostratigraphy available in the literature for the Adriatic region, through the definition of 16 ecobiozones for the last 60 kyr BP. Some bioevents are restricted to the Central Adriatic (for instance the LO of benthic Hyalinea balthica, approximating the MIS3/MIS2 boundary), while others occur all over the Adriatic basin (for instance the LO of planktic Globorotalia inflata during MIS3, marking Dansgaard-Oeschger cycle 8 (Denekamp)).

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain.
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique has been applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the large geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) is considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced.
The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (with and without the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings: the pickings reported by the IDC are probably good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the poor performance of the cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that it does not require much time to process the data, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
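The lag-estimation step at the heart of both applications can be sketched in a few lines: given two digitized waveforms, the lag that maximizes their cross-correlation estimates the relative delay without relying on manual picks. A minimal pure-Python sketch (the function name and the test signals are illustrative, not taken from the thesis):

```python
def cross_correlation_lag(a, b):
    """Estimate the integer-sample delay d such that b[i] ~ a[i - d],
    by maximizing the (unnormalized) cross-correlation over all lags.
    Aligning the same phase on two waveforms this way reduces the
    random errors of manual phase picking."""
    n = len(a)
    best_lag, best_cc = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # sum over the samples where both a[i] and b[i + lag] are defined
        cc = sum(a[i] * b[i + lag]
                 for i in range(max(0, -lag), min(n, n - lag)))
        if cc > best_cc:
            best_cc, best_lag = cc, lag
    return best_lag

# b is a copy of a delayed by 3 samples: the estimated lag is 3
a = [0.0] * 20
a[5], a[6], a[7] = 1.0, 0.6, 0.2
b = [0.0] * 20
b[8], b[9], b[10] = 1.0, 0.6, 0.2
print(cross_correlation_lag(a, b))  # 3
```

In practice one would correlate windowed segments around the same seismic phase and normalize by the signal energies, as conclusion (a) of the thesis suggests; the brute-force search above is only the conceptual core.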

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The theory of aging postulates that aging is a remodeling process in which the body of survivors progressively adapts to the internal and external damaging agents it is exposed to over several decades. Thus, stress response and adaptation mechanisms play a fundamental role in the aging process, and the capability to adapt is certainly also related to the lifespan of each individual. A key gene linking aging to stress response is p21, a cyclin-dependent kinase inhibitor whose induction triggers the cell growth arrest associated with senescence and damage response; notably, it is involved in the up-regulation of multiple genes that have been associated with senescence or implicated in age-related diseases. This PhD thesis project, performed in collaboration with the Roninson Lab at the Ordway Research Institute in Albany, NY, had two main aims: (i) testing the hypothesis that p21 polymorphisms are involved in longevity; (ii) evaluating age-associated differences in gene expression and in the transcriptional response to p21 and DNA damage. In the first project, through PCR sequencing and Sequenom strategies, we found about 30 polymorphic variants in the p21 gene. In addition, we found a haplotype located in the -5 kb region of the p21 promoter whose frequency is ~2-fold higher in centenarians than in the general population (large-scale analysis of haplotype frequencies is currently in progress). Functional studies carried out on the promoter highlighted that the "centenarian" haplotype does not affect the basal p21 promoter activity or its response to p53. However, there are many other physiological conditions under which the centenarian allele of the p21 promoter may potentially show a different response (IL-6, IFN, progesterone, vitamin E, vitamin D, etc.).
In the second project, we used microarrays to evaluate differences in gene expression among dermal fibroblast cultures from centenarians, elderly and young individuals, and their response to p21 and DNA damage. Microarray analysis of gene expression in dermal fibroblast cultures of individuals of different ages yielded a tentative "centenarian signature". A subset of genes that were up- or down-regulated in centenarians showed the same response to ectopic expression of p21, yielding a putative "p21-centenarian" signature. Through RQ-PCR (as well as microarray studies whose analysis is in progress) we tested the DNA damage response of the p21-centenarian signature genes in additional sets of young and old samples treated with the p21-inducing drug doxorubicin, finding for a subset of them an age-related stress response and thus a correlation between stress and aging.
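Operationally, extracting the "p21-centenarian" signature amounts to intersecting two lists of differentially expressed genes: those regulated in centenarian fibroblasts and those regulated after ectopic p21 expression. A minimal sketch of that step (the gene labels below are placeholders, not the thesis data):

```python
def p21_centenarian_signature(centenarian_regulated, p21_regulated):
    """Genes regulated in the same direction both in centenarian
    fibroblasts and after ectopic p21 expression: candidates for the
    'p21-centenarian' signature."""
    return sorted(set(centenarian_regulated) & set(p21_regulated))

# Placeholder gene lists for illustration only
up_in_centenarians = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
up_after_p21 = ["GENE_B", "GENE_D", "GENE_E"]
print(p21_centenarian_signature(up_in_centenarians, up_after_p21))
# ['GENE_B', 'GENE_D']
```

Up- and down-regulated lists would be intersected separately so that only genes moving in the same direction enter the signature.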

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This PhD thesis describes the set-up of technological models for obtaining high-health-value foods and ingredients that preserve the final product characteristics and are enriched with nutritional components. In particular, the main object of my research has been Virgin Olive Oil (VOO) and its important antioxidant compounds, which differentiate it from all other vegetable oils. It is well known that the qualitative and quantitative presence of phenolic molecules extracted from olives during oil production is fundamental for its oxidative and nutritional quality. For this purpose, the agronomic and technological conditions of its production have been investigated, as well as how this fraction can be better preserved during storage. Moreover, its relation with VOO sensorial characteristics and its interaction with a protein in emulsion foods have also been studied. Finally, an experimental work was carried out to determine the antioxidative and heat-resistance properties of a new antioxidant (EVS-OL) when used for high-temperature frying such as is typically employed for the preparation of french fries. The results of this research have been submitted for publication, and some data have already been published in national and international scientific journals.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This PhD thesis set out to validate and then apply innovative analytical methodologies for the determination of compounds with a harmful impact on human health, such as biogenic amines and ochratoxin A in wines. The influence of production technology (pH, amino acid precursors and the use of different malolactic starters) on the biogenic amine content of wines was then evaluated. An HPLC method for the simultaneous determination of amino acids and amines, with pre-column derivatization with 9-fluorenylmethoxycarbonyl chloride (FMOC-Cl) and UV detection, was developed. Initially, the influence of pH, derivatization time and gradient profile was studied. In order to improve the separation of amino acids and amines and reduce the analysis time, the influence of different flow rates and columns on the chromatographic method was studied: first a C18 Luna column, and later two monolithic Chromolith columns in series. The method proved suitable for an easy, precise and accurate determination of a relatively large number of amino acids and amines in wines. It was then applied to different wines produced in the Emilia-Romagna region. The investigation made it possible to discriminate between red and white wines, and showed that amino acid content is related to the winemaking process. The biogenic amine content of these wines does not represent a toxicological problem for human health. The study of the influence of technology and wine composition demonstrated that wine pH and amino acid content are the most important factors: in particular, wines with pH > 3.5 show higher concentrations of biogenic amines than wines with lower pH. The enrichment of wines with nutrients also influences the content of some biogenic amines, which are higher in wines supplemented with amino acid precursors.
In this study, amino acids and biogenic amines were not statistically affected by the strain of lactic acid bacteria inoculated as a starter for malolactic fermentation. Different clean-up methods (SPE-MycoSep, IACs and LLE) and determination methods (HPLC and ELISA) for ochratoxin A were also evaluated. The results proved that the SPE clean-up methods are equally reliable, while the LLE procedure shows the lowest recovery. The ELISA method gave lower values and lower reproducibility than the HPLC method.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the considered spaces causes a problem of non-continuity of the solution and hence of inconsistency, from a frequentist point of view, of the posterior distribution (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object, which I call the regularized posterior distribution, that I propose as a solution to the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which this prior distribution is able to correct the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a frequentist sense. Once the general theory is set, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation.
Then, I consider the estimation of an instrumental regression, which is useful in microeconometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator of the equilibrium asset pricing functional by using the Euler equation of Lucas' (1978) tree-type models.
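The stabilizing effect of a Tikhonov scheme on an ill-posed problem can be illustrated in the SVD basis of a finite-dimensional linear inverse problem, where each solution coefficient is damped by the regularization parameter. This is only a finite-dimensional caricature of the functional-space construction in the thesis (the spectrum and data values are illustrative):

```python
def tikhonov_filter(singular_values, data_coeffs, alpha):
    """Tikhonov-regularized solution of a linear inverse problem
    y = K x, written in the SVD basis: x_i = s_i * y_i / (s_i**2 + alpha).
    With alpha = 0 this is the naive inverse x_i = y_i / s_i, which
    blows up on components with small singular values (ill-posedness);
    alpha > 0 damps those components and restores stability."""
    return [s * y / (s * s + alpha)
            for s, y in zip(singular_values, data_coeffs)]

s = [1.0, 0.1, 0.001]           # rapidly decaying singular spectrum
y = [1.0, 1.0, 1.0]             # (noisy) data coefficients
naive = tikhonov_filter(s, y, 0.0)     # last component explodes to ~1000
regular = tikhonov_filter(s, y, 0.01)  # last component stays below 0.1
```

The same trade-off governs the choice of the regularization parameter in the regularized posterior: larger alpha means more stability but more bias.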

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Objects with complex shapes and functions have always attracted attention and interest. The morphological diversity and complexity of naturally occurring forms and patterns have motivated humans to copy and adopt ideas from Nature to achieve functional, aesthetic and social value. Biomimetics addresses the design and development of new synthetic materials using strategies adopted by living organisms to produce biological materials. In particular, biomineralized tissues are often sophisticated composite materials, in which the components and the interfaces between them have been defined and optimized, and which present unusual and optimal physico-chemical, morphological and mechanical properties. Moreover, biominerals are generally produced from readily available raw materials, in aqueous media and at ambient pressure and temperature, that is, through cheap processes and materials. Thus, it is not surprising that the idea of mimicking these strategies of Nature has been employed in several areas of applied science, such as the preparation of liquid crystals, ceramic thin films, computer switches and many other advanced materials. On this basis, this PhD thesis focuses on the interaction of biologically active ions and molecules with calcium phosphates, with the aim of developing new materials for the substitution and repair of skeletal tissue, along the following lines: I. Modified calcium phosphates. A relevant part of this PhD thesis has been devoted to the interaction of strontium with calcium phosphates. It was demonstrated that the strontium ion can substitute for calcium in hydroxyapatite, causing appreciable structural and morphological modifications. The detailed structural analysis carried out on nanocrystals at different strontium contents provided new insight into its interaction with the structure of hydroxyapatite.
At variance with its behaviour towards HA, it was found that Sr inhibits the synthesis of octacalcium phosphate. However, it can substitute for calcium in this structure up to 15 atom %, in agreement with the increase of the cell parameters observed with increasing ion concentration. A similar behaviour was found for the magnesium ion, whereas manganese inhibits the synthesis of octacalcium phosphate and promotes the precipitation of dicalcium phosphate dihydrate. Strontium was also found to affect the kinetics of the hydrolysis of α-TCP, inhibiting its conversion to hydroxyapatite. However, the resulting apatitic phase contains significant amounts of Sr2+, suggesting that the addition of Sr2+ to the composition of α-TCP bone cements could be successfully exploited for its local delivery in bone defects. The hydrolysis of α-TCP was also investigated in the presence of increasing amounts of gelatin: the results indicated that this biopolymer accelerates the hydrolysis reaction and promotes the conversion of α-TCP into OCP, suggesting that its addition to the composition of calcium phosphate cements can be employed to modulate the OCP/HA ratio, and as a consequence the solubility, of the set cement. II. Deposition of modified calcium phosphates on metallic substrates. Coating with a thin film of calcium phosphates is frequently applied to the surface of metallic implants in order to combine the high mechanical strength of the metal with the excellent bioactivity of calcium phosphate surface layers. During this PhD thesis, thanks to the collaboration with Prof. I.N. Mihailescu, head of the Laser-Surface-Plasma Interactions Laboratory (National Institute for Lasers, Plasma and Radiation Physics, Laser Department, Bucharest), Pulsed Laser Deposition was successfully applied to deposit thin films of Sr-substituted HA on titanium substrates.
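The reported growth of the cell parameters with ion content is the kind of trend often summarized by Vegard's law, i.e. linear interpolation between the lattice parameters of the two end members. As an illustration only (the end-member values below are assumed, approximate literature figures, not results from the thesis):

```python
def vegard_parameter(x, a_host, a_guest):
    """Vegard's-law estimate of a lattice parameter for a substitutional
    solid solution with guest-ion fraction x (0 <= x <= 1)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("fraction x must lie in [0, 1]")
    return (1.0 - x) * a_host + x * a_guest

# Assumed end-member a-axis values (angstroms), for illustration only
a_ca = 9.42   # calcium hydroxyapatite (approximate literature value)
a_sr = 9.76   # strontium hydroxyapatite (approximate literature value)
print(round(vegard_parameter(0.15, a_ca, a_sr), 3))  # 9.471
```

Real substitutions can deviate from strict linearity, so a measured trend is compared against this line rather than assumed to follow it.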
The synthesized coatings displayed a uniform Sr distribution, a granular surface and a good degree of crystallinity, which slightly decreased with increasing Sr content. The results of in vitro tests carried out on osteoblast-like and osteoclast cells suggested that the presence of Sr in HA thin films can enhance the positive effect of HA coatings on osteointegration and bone regeneration, and prevent undesirable bone resorption. The possibility of introducing an active molecule at the implant site was explored using Matrix Assisted Pulsed Laser Evaporation to deposit hydroxyapatite nanocrystals with different contents of alendronate, a bisphosphonate widely employed in the treatment of pathologies associated with bone loss. The coatings displayed a good degree of crystallinity, and the results of in vitro tests indicated that alendronate promotes the proliferation and differentiation of osteoblasts even when incorporated into hydroxyapatite. III. Synthesis of drug carriers with a delayed release modulated by a calcium phosphate coating. A core-shell system for modulated drug delivery and release was developed through optimization of the experimental conditions to cover gelatin microspheres with a uniform layer of calcium phosphate. The kinetics of release from uncoated and coated microspheres was investigated using aspirin as a model drug. It was shown that the presence of the calcium phosphate shell delays the release of aspirin and makes it possible to modulate its action.
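The delaying effect of the calcium phosphate shell can be caricatured by a first-order release model with a lag time. Both the model and the parameter values below are assumptions for illustration; the thesis does not specify the release kinetics it fitted:

```python
import math

def fraction_released(t, k, lag=0.0):
    """First-order drug release with an onset delay:
    F(t) = 0 for t <= lag, then F(t) = 1 - exp(-k * (t - lag)).
    The coating is modeled only as a lag time before release starts."""
    if t <= lag:
        return 0.0
    return 1.0 - math.exp(-k * (t - lag))

# Uncoated microspheres release immediately; coated ones start later
k = 0.5                                      # assumed rate constant (1/h)
uncoated = fraction_released(4.0, k)             # ~0.86 after 4 h
coated = fraction_released(4.0, k, lag=2.0)      # ~0.63 after 4 h
```

At any given time the coated carrier has released less drug, which is the qualitative behaviour the aspirin experiments demonstrate.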