Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power and cost of integrated circuits while increasing their computational performance. This trend, known as technology scaling, is approaching the nanometer scale. The uncertainty of the lithographic process in the manufacturing stage grows as transistor sizes shrink, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage limits the scaling of the threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions reduce transistor robustness which, combined with high temperatures and frequent thermal cycles, accelerates wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard CAD automated design flow: i) new analysis algorithms able to predict the thermal behavior of the system and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. These new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing the system reliability. The thesis therefore advocates the integration of thermal analysis into the first design stages of embedded NoC design.
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variation. In this case we found that low-swing links have superior robustness to systematic process variation and respond well to compensation techniques such as ASV and ABB. Hence, low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
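As a rough illustration of the kind of statistical (Monte Carlo) performance analysis described above, the Python sketch below propagates random and systematic threshold-voltage variation through a simple path-delay model and estimates a parametric timing yield. The delay model, variation magnitudes and timing target are illustrative assumptions, not the tool developed in the thesis.

```python
# Minimal Monte Carlo sketch of statistical timing under process variation.
# NOT the thesis tool: delay model, variation magnitudes and the timing
# target below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_MC = 10_000          # Monte Carlo samples (assumed)
N_STAGES = 20          # gates on the critical path (assumed)
VDD, VTH0 = 1.0, 0.30  # nominal supply / threshold voltage [V] (assumed)
ALPHA = 1.3            # velocity-saturation exponent (assumed)
T_NOM = 1.0            # nominal per-stage delay, arbitrary units

def stage_delay(vth, vdd=VDD):
    """Alpha-power-law style delay: grows as (Vdd - Vth) shrinks."""
    return T_NOM * (VDD - VTH0) ** ALPHA / (vdd - vth) ** ALPHA

# Systematic variation: one correlated offset per die; random: per-gate.
vth_systematic = rng.normal(0.0, 0.015, size=(N_MC, 1))          # sigma 15 mV
vth_random     = rng.normal(0.0, 0.020, size=(N_MC, N_STAGES))   # sigma 20 mV
vth = VTH0 + vth_systematic + vth_random

path_delay = stage_delay(vth).sum(axis=1)         # sum of stage delays
target = 1.10 * N_STAGES * T_NOM                  # 10 % timing margin (assumed)
yield_est = np.mean(path_delay <= target)

print(f"mean delay = {path_delay.mean():.2f}, sigma = {path_delay.std():.2f}")
print(f"estimated parametric timing yield = {yield_est:.1%}")
```

Compensation techniques such as ASV or ABB could be explored in the same framework by sweeping the supply voltage or shifting the threshold voltage of the slowest samples.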
Abstract:
The AMANDA-II detector is primarily designed for the direction-resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective increase in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae as well as experimental data from supernova SN1987A were studied. In addition, the sensitivities of the optical modules were redetermined. For this purpose, the energy losses of charged particles in South Polar ice had to be investigated and a simulation of photon propagation had to be developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. As part of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It includes various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimization for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules showing unsuitable behavior takes place in real time. However, the behavior of the modules has to be assessed for this purpose on the basis of buffered data, so the analysis of the data from the qualified modules cannot take place without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals over a period of several minutes for later evaluation. Since the data on the noise behavior of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis can be freely chosen in units of 500 ms. In this work, three analyses of this kind were activated at the South Pole: one with the 500 ms time base of the data acquisition, one with a time base of 4 s and one with a time base of 10 s. This maximizes the sensitivity for signals with a characteristic exponential decay time of 3 s while maintaining good sensitivity over a wide range of exponential decay times. These analyses were investigated in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and are correspondingly well understood. Based on the measured data, the expected supernova signals were simulated. From a comparison between this simulation, the measured data of the years 2000 to 2003 and the simulation of the expected statistical background, it can be concluded with a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. At this level, the expected number of events from the statistical background amounts to less than one millionth. Nevertheless, one such event was measured.
With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the last result published by the AMANDA collaboration, this even yields an upper limit of only 2.6 supernovae per year. Within the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate excess before a message announcing the detection of a supernova candidate is sent. With this threshold, the monitored fraction of stars in the Galaxy is 81%, but the rate of false alarms rises to about 2 events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and are soon to contribute to SNEWS, the worldwide network for the early detection of supernovae.
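As a rough illustration of the detection principle described above (a collective rise of the summed noise rate, tested against the background in a chosen analysis time base), the following Python sketch rebins simulated 500 ms counts to a 4 s time base and flags excesses above the 5.5 sigma real-time threshold. The background model, window lengths, module count and injected signal are illustrative assumptions, not the AMANDA-II implementation.

```python
# Minimal sketch of a collective-rate-excess search in one analysis time base.
# Not the AMANDA-II supernova trigger: background model, window lengths and
# simulated rates are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

BIN = 0.5                     # native time base of the noise data [s]
TIME_BASE = 4.0               # analysis time base [s], e.g. 0.5, 4 or 10
N_MODULES = 400               # optical modules (assumed)
RATE_PER_MODULE = 300.0       # mean noise rate per module [Hz] (assumed)

# Simulated background: Poisson counts per 500 ms bin, summed over all modules.
n_bins = 4000
counts = rng.poisson(RATE_PER_MODULE * BIN * N_MODULES, size=n_bins).astype(float)

# Inject a supernova-like excess decaying with a 3 s time constant (assumed).
t = np.arange(n_bins) * BIN
excess = np.zeros(n_bins)
mask = t >= 800.0
excess[mask] = 4000.0 * np.exp(-(t[mask] - 800.0) / 3.0)
counts += excess

# Rebin to the analysis time base.
k = int(TIME_BASE / BIN)
rebinned = counts[: n_bins // k * k].reshape(-1, k).sum(axis=1)

# Estimate background mean/sigma from a trailing window and compute significance.
BKG_WIN = 75                  # trailing background bins (assumed)
for i in range(BKG_WIN, len(rebinned)):
    bkg = rebinned[i - BKG_WIN:i]
    sigma = (rebinned[i] - bkg.mean()) / bkg.std(ddof=1)
    if sigma >= 5.5:          # real-time alarm threshold quoted in the abstract
        print(f"candidate at t = {i * TIME_BASE:.0f} s, significance = {sigma:.1f}")
```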
Abstract:
In this study, the conditions of deposition and the stratigraphic architecture of Neogene (Tortonian, 11-6.7 Ma) sediments of southern central Crete were analysed. In order to improve the resolution of paleoclimatic data, new methods were applied to quantify environmental parameters and to increase the chronostratigraphic resolution in shallow-water sediments. A relationship between the paleoenvironmental change observed on Crete and global processes was established, and a depositional model was developed. Based on a detailed analysis of the distribution of non-geniculate coralline red algae, index values for water temperature and water depth were established and tested against the distribution patterns of benthic foraminifera and symbiont-bearing corals. Calcite-shelled bivalves were sampled from the Algarve coast (southern Portugal) and central Crete, and 87Sr/86Sr was measured. A high-resolution chronostratigraphy was developed based on the correlation between fluctuations in Sr ratios in the measured sections and a late Miocene global seawater Sr isotope reference curve. Applying this method, a time frame was established to compare paleoenvironmental data from southern central Crete with global information on climate change reflected in oxygen isotope data. The comparison between paleotemperature data based on red algae and global oxygen isotope data showed that the employed index values reflect global change in temperature. The data indicate a warm interval during the earliest Tortonian, a second short warm interval between 10 and 9.5 Ma, a longer climatic optimum between 9 and 8 Ma, and an interval of increasing temperatures in the latest Tortonian. The distribution of coral reefs and carpets shows that during the warm intervals the depositional environment became tropical, while temperate climates prevailed during the cold interval. Since relative tectonic movements after the initial half-graben formation in the early Tortonian were low in southern central Crete, the sedimentary successions strongly respond to global sea-level fluctuation. A characteristic sedimentary succession formed during a 3rd-order sea-level cycle: it comprises mixed siliciclastic-limestone deposits formed during sea-level fall and lowstand, homogeneous red algal deposits formed during sea-level rise, and coral carpets formed during late rise and highstand. Individual beds in the succession reflect glacioeustatic fluctuations that are most prominent in the mixed siliciclastic-limestone interval. These results confirm that sedimentary successions deposited at the critical threshold between temperate and tropical environments develop characteristic changes in depositional systems and biotic associations that can be used to assemble paleoclimatic datasets.
Abstract:
Nanotechnology entails the manufacturing and manipulation of matter at length scales ranging from single atoms to micron-sized objects. The ability to address properties on the biologically relevant nanometer scale has made nanotechnology attractive for nanomedicine. This is perceived as a great opportunity in healthcare, especially in diagnostics and therapeutics and, more generally, for the development of personalized medicine. Nanomedicine has the potential to enable early detection and prevention, and to improve diagnosis, mass screening, treatment and follow-up of many diseases. From the biological standpoint, nanomaterials match the typical size of naturally occurring functional units or components of living organisms and, for this reason, enable more effective interaction with biological systems. Nanomaterials have the potential to influence functionality and cell fate in the regeneration of organs and tissues. To this aim, nanotechnology provides an arsenal of techniques for intervening in, fabricating, and modulating the environment where cells live and function. Unconventional micro- and nano-fabrication techniques allow patterning of biomolecules and biocompatible materials down to feature sizes of a few nanometers. Patterning is not simply the deterministic placement of a material; in a broader sense, it allows the controlled fabrication of structures and gradients of different natures. Gradients are emerging as one of the key factors guiding cell adhesion, proliferation, migration and even differentiation in the case of stem cells. The main goal of this thesis has been to devise a nanotechnology-based strategy and tools to spatially and temporally control biologically relevant phenomena in vitro that are important in some fields of medical research.
Abstract:
The success of schizophrenia treatment depends largely on the patient's response to his or her antipsychotic medication. Which drug and which dose are effective in an individual patient can currently be judged only after several weeks of treatment. One reason for variable treatment response is variable plasma concentrations of the antipsychotics. The aim of this work was to investigate to what extent treatment outcome can be predicted at an early stage of treatment by objective symptom assessment, and which factors influence the high variability of antipsychotic blood levels.

An 18-month naturalistic clinical study of schizophrenic patients was conducted to answer the following questions: can treatment response be predicted, and which instruments are suitable for this purpose? Psychopathology was assessed weekly using two rating scales (the Brief Psychiatric Rating Scale, BPRS, and Clinical Global Impressions, CGI) in order to evaluate the improvement of disease symptoms over the course of 8 weeks. In parallel with therapy, the serum concentrations of the antipsychotics were measured. Objective symptom assessment with the BPRS or CGI proved suitable for predicting treatment response. Relative to the start of treatment, the reduction of symptoms was highly predictive of later treatment failure or response. A reduction of more than 36.5% on the BPRS scale at week 2 was identified as a significant threshold for non-response. Patients whose symptom improvement lay below this threshold were 11.2 times more likely not to respond to their drug therapy at the end of the study than patients who improved by at least 36.5%. Other factors, such as age, sex, duration of illness or number of hospitalizations, had no influence on the prediction of treatment response. Therapeutic antipsychotic levels had a positive influence on the response rate: in patients with therapeutic levels, response was faster and the response rate was higher than in those whose levels lay outside the usual therapeutic ranges.

An important prerequisite for the use of therapeutic drug monitoring (TDM) is the availability of a precise, reproducible, time- and cost-saving analytical method for the quantitative determination of the substances under investigation. Such a method was developed and validated for the determination of haloperidol; an HPLC method with column switching proved suitable for TDM.

Based on the results of the clinical study on response prediction, it was investigated which factors influence the variability of the pharmacokinetics of antipsychotics, since pharmacokinetic variability is one reason for absent or insufficient response. The influence of the galenic formulation on drug release and the influence of inflammatory processes on the metabolism of an antipsychotic were examined, using retrospectively analyzed patient data.

The analysis of 247 serum levels from patients treated with paliperidone in the OROS® formulation, a newly introduced extended-release form, showed that the intraindividual variability (coefficient of variation) of paliperidone trough levels was 35%. It was thus comparable to that of non-retarded risperidone (32%, p = n.s.).
The extended-release formulation therefore had no variance-reducing effect on the trough levels of the antipsychotic. The drug concentration range was 21-55 ng/ml and likewise corresponded almost to the therapeutic range of risperidone (20-60 ng/ml).

Inflammatory processes can alter the metabolism of drugs. This had previously been demonstrated for drugs metabolized via CYP1A2. The analysis of 84 patient serum levels showed that the metabolism of quetiapine was impaired during an inflammatory process, probably through inhibition of CYP3A4. This indicates that the pharmacokinetics of drugs metabolized via CYP3A4 can also be affected during an inflammatory process in the body. For this reason, particular attention should be paid to side effects during an infection under quetiapine therapy, and the serum level should be monitored during this time in order to protect the patient from possible side effects or even intoxication.

The findings of this work show that, in the treatment of schizophrenic patients with antipsychotics, measurement of psychopathology is suitable for predicting treatment response, and measurement of blood levels is suitable for identifying factors underlying pharmacokinetic variability. Objective symptom assessment and therapeutic drug monitoring are therefore instruments that should be used to guide antipsychotic pharmacotherapy.
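The week-2 decision rule reported above can be written as a simple percentage-reduction check. The following Python sketch is only an illustration of that rule; the example scores are hypothetical and any baseline correction used in the study (e.g. subtraction of the scale minimum) is not modelled here.

```python
# Minimal sketch of the week-2 response-prediction rule described above:
# percentage BPRS reduction from baseline, with the 36.5 % cutoff.
# Example scores are hypothetical.

CUTOFF = 36.5  # % BPRS reduction at week 2 separating predicted responders

def bprs_reduction(baseline: float, week2: float) -> float:
    """Percent reduction of the BPRS total score relative to baseline."""
    return 100.0 * (baseline - week2) / baseline

def predicted_non_responder(baseline: float, week2: float) -> bool:
    """True if the week-2 improvement falls below the 36.5 % threshold."""
    return bprs_reduction(baseline, week2) < CUTOFF

# Hypothetical patients: (baseline score, week-2 score)
for baseline, week2 in [(62, 35), (58, 50)]:
    red = bprs_reduction(baseline, week2)
    flag = "non-response predicted" if predicted_non_responder(baseline, week2) else "response predicted"
    print(f"baseline {baseline}, week 2 {week2}: {red:.1f} % reduction -> {flag}")
```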
Abstract:
This thesis is about plant breeding in early 20th-century Italy. The stories of the two most prominent Italian plant breeders of the time, Nazareno Strampelli and Francesco Todaro, are used to explore a fragment of the often-neglected history of Italian agricultural research. While Italy was not at the forefront of agricultural innovation, research programs aimed at varietal innovation did emerge in the country, along with an early diffusion of Mendelism. Using philosophical as well as historical analysis, plant breeding is analysed throughout this thesis as a process: a sequence of steps that rests on practical skills and theoretical assumptions, acting on various elements of production. Systematic plant-breeding programs in Italy started from small individual efforts, attracting more and more resources until they became a crucial part of the fascist regime's infamous agricultural policy. Hybrid varieties developed in the early 20th century survived World War II and are now ancestors of the varieties that are still cultivated today. Despite this relevance, the history of Italian wheat hybrids is today largely forgotten: this thesis is an effort to re-evaluate a part of it. The research allowed previously unknown or neglected facts to emerge, giving a new perspective on the infamous alliance between plant-breeding programs and the fascist regime. The thesis thus analyses Italian plant-breeding programs as processes that had a practical as well as a theoretical side and involved various elements of production. Although a complete history of Italian plant breeding still remains to be written, the Italian case can now be considered alongside the other case studies that scholars have developed in the history of plant breeding. The hope is that this historical and philosophical analysis will contribute to the ongoing effort to understand the history of plants.
Abstract:
Complement deficiencies are associated with an increased susceptibility to infection with certain pathogens in the first years of life (MBL deficiency) and beyond (C1q and other complement deficiencies). This underlines the role of the complement system as an effective defence mechanism in the transition phase between the loss of the maternal "nest protection" and the maturation of the individual's own "acquired" immunity. The occurrence of autoimmune diseases such as the SLE-like syndrome in deficiencies of the classical pathway highlights additional functions of the complement system during the maturation of adaptive immunity and as an essential effector in the recognition of apoptotic cells and their elimination from the system.

Hereditary C1q deficiencies are associated with a high probability of an SLE-like syndrome. Among the deficiencies of the complement system they are a rarity, yet their clinical picture is all the more striking. They must be distinguished from functional C1q deficiency occurring in the context of increased turnover and subsequent formation of C1q autoantibodies. They are caused by a mutation in one of the three C1q genes, which are located on chromosome 1. Homozygous mutation carriers cannot compensate for the defect and show C1q deficiency with loss of total haemolytic activity (CH50). Clusters occur in the offspring of sibling and consanguineous marriages.

In this work, the case of a patient with a severe SLE-like syndrome with onset in early childhood is presented. The cause of the disease was identified as a hereditary C1q defect, without immunological detection of C1q or low-molecular-weight C1q. Since none of the previously described mutation patterns could be detected in the patient, all three C1q genes were sequenced, revealing a new mutation pattern.

The mutation presented in this work differs from the mutations described so far in that it is not a point mutation but a deletion of 29 bases (c283_311) in exon 2 of the C1q B-chain gene, with a resulting frameshift and premature stop codon (p.Met95TrpfsX8). By analysing the parents and siblings of the affected patient, the mode of inheritance could be established. In addition, the mutation was excluded in an unborn sibling in the course of prenatal diagnostics.
Abstract:
Early-type galaxies (ETGs) are embedded in hot (10^6-10^7 K), X-ray emitting gaseous haloes, produced mainly by stellar winds and heated by Type Ia supernova explosions, by the thermalization of stellar motions and occasionally by the central super-massive black hole (SMBH). In particular, the thermalization of stellar motions is due to the interaction between the stellar and SNIa ejecta and the hot interstellar medium (ISM) already residing in the ETG. A number of different astrophysical phenomena determine the X-ray properties of the hot ISM, such as stellar population formation and evolution, galaxy structure and internal kinematics, the presence of an Active Galactic Nucleus (AGN), and environmental effects. With the aid of high-resolution hydrodynamical simulations performed on state-of-the-art galaxy models, in this thesis we focus on the effects of galaxy shape, stellar kinematics and star formation on the evolution of the X-ray coronae of ETGs. Numerical simulations show that the relative importance of flattening and rotation is a function of galaxy mass: at low galaxy masses, adding flattening and rotation induces a galactic wind, thus lowering the X-ray luminosity; at high galaxy masses, angular momentum conservation keeps the central regions of rotating galaxies at low density, whereas in non-rotating models a denser and brighter atmosphere is formed. The same dependence on galaxy mass is present in the effects of star formation (SF): in light galaxies SF contributes to increasing the spread in Lx, while at high galaxy masses the X-ray properties of the halo are only marginally sensitive to SF. In every case, the star formation rate at the present epoch agrees fairly well with observations, and the massive, cold gaseous discs are partially or completely consumed by SF on a time-scale of a few Gyr, excluding the presence of young stellar discs at the present epoch.
Abstract:
In the present thesis, a new diagnosis methodology based on the advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index that allows individual fault components to be tracked in a single frequency band is defined. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each fault frequency component is shifted into a prefixed single frequency band. The discrete Wavelet Transform is then applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following: - capability of monitoring the fault evolution continuously over time under any transient operating condition; - no need for speed/slip measurement or estimation; - higher accuracy in filtering frequency components around the fundamental in the case of rotor faults; - reduced likelihood of false indications by avoiding confusion with other fault harmonics (the contributions of the most relevant fault frequency components under speed-varying conditions are clamped in a single frequency band); - low memory requirements due to the low sampling frequency; - reduced latency of time processing (no repeated sampling operations required).
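As a rough illustration of the procedure described above (frequency sliding of a fault component into a prefixed band, followed by wavelet decomposition and an energy-based index), the following Python sketch processes a simulated stator current. The signal parameters, the choice of wavelet ("db8"), the decomposition level and the use of the analytic signal for the sliding step are illustrative assumptions, not the exact implementation of the thesis.

```python
# Minimal sketch: slide a chosen fault frequency component into the lowest
# wavelet band and quantify it by that band's energy.
import numpy as np
import pywt
from scipy.signal import hilbert

FS = 2000.0                        # sampling frequency [Hz] (assumed)
t = np.arange(0, 2.0, 1.0 / FS)    # 2 s of signal

f_supply, f_fault = 50.0, 44.0     # fundamental and a fault sideband [Hz] (assumed)
current = np.sin(2 * np.pi * f_supply * t) + 0.05 * np.sin(2 * np.pi * f_fault * t)

# Frequency sliding: form the analytic signal and mix it down by f_fault, so the
# fault component sits at 0 Hz while the fundamental moves to 6 Hz.
baseband = hilbert(current) * np.exp(-2j * np.pi * f_fault * t)

def band_energies(x, wavelet="db8", level=8):
    """Energy of the deepest approximation band and total energy, for one real signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = [float(np.sum(c ** 2)) for c in coeffs]
    return energies[0], sum(energies)

# Decompose real and imaginary parts separately and sum their energies.
approx_energy = total_energy = 0.0
for part in (baseband.real, baseband.imag):
    a, tot = band_energies(part)
    approx_energy += a
    total_energy += tot

print(f"fault index (lowest-band energy): {approx_energy:.2f}")
print(f"fraction of total signal energy : {approx_energy / total_energy:.3%}")
```

With 8 decomposition levels at 2 kHz sampling, the deepest approximation covers roughly 0-3.9 Hz, so only the slid fault component contributes to the index while the shifted fundamental falls into the detail bands.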
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter. The difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation of the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first one is a threshold-based method that uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
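As a rough illustration of a generic threshold-based approach of the kind mentioned above, the following Python sketch flags stations whose early-warning parameter (here a hypothetical peak displacement measured over the first seconds of the P wave) exceeds a fixed threshold, outlining a potentially damaged zone. The parameter name, threshold value and station readings are assumptions, not the calibrated method of the thesis.

```python
# Minimal sketch of a generic threshold-based alert for damage-zone identification.
from dataclasses import dataclass

PD_THRESHOLD_CM = 0.2   # peak displacement threshold over the early P window (assumed)

@dataclass
class StationReading:
    name: str
    pd_cm: float        # peak displacement in the first seconds of the P wave [cm]

def potentially_damaged(readings, threshold=PD_THRESHOLD_CM):
    """Return the stations whose early-warning parameter exceeds the threshold."""
    return [r.name for r in readings if r.pd_cm >= threshold]

# Hypothetical real-time readings from a small network.
readings = [
    StationReading("STA1", 0.45),
    StationReading("STA2", 0.08),
    StationReading("STA3", 0.31),
]

alerted = potentially_damaged(readings)
print("stations inside the predicted damage zone:", alerted)
```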
Abstract:
Plant communities on weathered rock and outcrops are characterized by high species richness (Dengler 2006) and often persist on small and fragmented surfaces. Yet very few studies have examined the relationships between heterogeneity and plant diversity at small scales, in particular in nutrient-poor and low-productivity environments (Shmida and Wilson 1985, Lundholm 2003). In order to assess these relationships both in space and in time, two different approaches were employed in the present study, in two gypsum outcrops of the Northern Apennines. Diachronic and synchronic samplings were performed from April 2012 to March 2013. A 50x50 cm plot was used as the base sampling unit in both samplings. The diachronic survey aims to investigate the seasonal patterning of plant diversity by means of image analysis techniques integrated with field data, also considering the seasonal climatic trend and the substrate quality and its variation in time. The purpose of the synchronic sampling was to describe the plant diversity pattern as a function of environmental heterogeneity in terms of substrate typologies, soil depth and topographic features. The results showed that the responses of the diversity pattern depend on resource availability, on environmental heterogeneity and on the manner in which the different taxonomic groups access these resources during the year. Species richness and Shannon diversity were positively affected by increasing substrate heterogeneity. Furthermore, a good turnover in seasonal species occurrence was detected. This vegetation may be described by the coexistence of three groups of species which create a gradient from early colonization stages, characterized by steeper slopes and a predominance of bare rock, to situations with more developed soil.
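For reference, the two diversity measures mentioned above can be computed from per-plot abundance counts as in the short Python sketch below; the plot counts used are hypothetical and do not come from the study.

```python
# Minimal sketch of the two diversity measures mentioned above: species richness
# and the Shannon diversity index H' = -sum(p_i * ln p_i). Counts are hypothetical.
import math

def shannon_diversity(abundances):
    """Shannon index H' from a list of per-species abundances in one plot."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

plot_counts = [12, 5, 5, 3, 1]                 # individuals per species (hypothetical)
richness = sum(1 for a in plot_counts if a > 0)
print(f"species richness = {richness}")
print(f"Shannon H'       = {shannon_diversity(plot_counts):.3f}")
```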
Fault detection, diagnosis and active fault tolerant control for a satellite attitude control system
Abstract:
Modern control systems are becoming more and more complex, and control algorithms more and more sophisticated. Consequently, Fault Detection and Diagnosis (FDD) and Fault Tolerant Control (FTC) have gained central importance over the past decades, due to the increasing requirements of availability, cost efficiency, reliability and operating safety. This thesis deals with the FDD and FTC problems in a spacecraft Attitude Determination and Control System (ADCS). Firstly, the detailed nonlinear models of the spacecraft attitude dynamics and kinematics are described, along with the dynamic models of the actuators and of the main external disturbance sources. The considered ADCS is composed of an array of four redundant reaction wheels. A set of sensors provides satellite angular velocity, attitude and flywheel spin rate information. Then, general overviews of the Fault Detection and Isolation (FDI), Fault Estimation (FE) and Fault Tolerant Control (FTC) problems are presented, and the design and implementation of a novel diagnosis system are described. The system consists of an FDI module composed of properly organized model-based residual filters, exploiting the available input and output information for the detection and localization of a fault that has occurred. A proper fault-mapping procedure and the nonlinear geometric approach are exploited to design residual filters explicitly decoupled from the external aerodynamic disturbance and sensitive to specific sets of faults. The subsequent use of suitable adaptive FE algorithms, based on radial basis function neural networks, makes it possible to obtain accurate fault estimates. Finally, this estimation is actively exploited in an FTC scheme to achieve suitable fault accommodation and guarantee the desired control performance. A standard sliding mode controller is implemented for attitude stabilization and control. Several simulation results are given to highlight the performance of the overall designed system in case of different types of faults affecting the ADCS actuators and sensors.
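As a rough illustration of the model-based residual idea underlying the FDI module described above, the following Python sketch compares the measured spin rate of one reaction wheel with a prediction from the commanded torque and raises an alarm when the residual exceeds a threshold. The wheel inertia, noise level, threshold and injected fault are illustrative assumptions, not the nonlinear geometric filter design of the thesis.

```python
# Minimal sketch of model-based residual generation for one reaction wheel:
# compare the measured spin rate with a prediction from the commanded torque and
# flag a fault when the residual exceeds a threshold.
import numpy as np

rng = np.random.default_rng(2)

DT = 0.1            # sample time [s] (assumed)
J_WHEEL = 0.01      # reaction wheel inertia [kg m^2] (assumed)
THRESH = 2.0        # residual threshold [rad/s] (assumed)
N = 600             # samples

torque_cmd = 0.02 * np.sin(0.05 * np.arange(N))   # commanded wheel torque [N m] (assumed)

# "True" wheel: from t = 30 s an actuator fault removes 50 % of the torque.
omega_meas = np.zeros(N)
for k in range(1, N):
    eff = 0.5 if k * DT >= 30.0 else 1.0           # fault: torque effectiveness loss
    omega_meas[k] = omega_meas[k - 1] + DT * eff * torque_cmd[k - 1] / J_WHEEL
omega_meas += rng.normal(0.0, 0.05, size=N)        # spin-rate sensor noise

# Model-based prediction assumes a healthy wheel driven by the same command.
omega_pred = np.cumsum(np.r_[0.0, torque_cmd[:-1]]) * DT / J_WHEEL
residual = np.abs(omega_meas - omega_pred)

alarms = np.nonzero(residual > THRESH)[0]
if alarms.size:
    print(f"fault detected at t = {alarms[0] * DT:.1f} s "
          f"(residual = {residual[alarms[0]]:.2f} rad/s)")
else:
    print("no fault detected")
```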
Abstract:
Breast cancer (BC) is the most frequently diagnosed cancer entity in women worldwide. No molecular biomarkers are usable in the clinical routine for the early detection of BC. Proteomics is one of the dynamic tools for the successful examination of changes at the protein level. In this thesis, different proteomics-based investigations were performed for the detection of protein and autoantibody biomarkers in serum samples of BC and healthy (CTRL) subjects. First, protein levels of candidates from previous profiling studies were investigated via an antibody-microarray platform. Three proteins were found at different levels in the two groups: secretoglobin family 1D member 1, alpha-2 macroglobulin and inter-alpha-trypsin inhibitor heavy chain family member 4. The second part was dedicated to the de novo exploration of potentially immunogenic tumor antigens (TAs) by immunoprecipitation and Western immunoblotting followed by identification via mass spectrometry. Autoantibody levels were verified by individual serum profiling on a protein microarray platform. Two autoantibodies (anti-Histone 2B and anti-Recoverin) were found at different levels in the two groups. The findings of this PhD thesis underline deregulated serum protein and autoantibody levels in the presence of BC. Further investigations are needed to confirm the results in an independent study population.
Abstract:
In this thesis two approaches were applied to achieve a double general objective. The first chapter was dedicated to the study of the distribution of the expression of several bitter and fat receptor genes in different gastrointestinal tracts. A set of 7 genes for bitter taste and 3 genes for fat taste was amplified with real-time PCR from mRNA extracted from 5 gastrointestinal segments of weaned pigs. The presence of gene expression of several chemosensing receptors for bitter and fat taste in different compartments of the stomach confirms that this organ should be considered a player in the early detection of bolus composition. In the second chapter we investigated in young pigs the distribution of the butyrate-sensing olfactory receptor (OR51E1) along the GIT, its relation with some endocrine markers, its variation with age, and its changes after interventions affecting the gut environment and intestinal microbiota in piglets, across different tissues. Our results indicate that OR51E1 is strictly related to normal GIT enteroendocrine activity. In the third chapter we investigated the differential gene expression between the oxyntic and pyloric mucosa in seven starter pigs. The obtained data indicate that there is significant differential gene expression between the oxyntic and pyloric mucosa of the young pig, and further functional studies are needed to confirm its physiological importance. In the last chapter, thymol, which has been proposed as an oral alternative to antibiotics in the feed of pigs and broilers, was introduced directly into the stomach of 8 weaned pigs, and samples of gastric oxyntic and pyloric mucosa were taken. The analysis of whole-transcript expression shows that the stimulation of gastric proliferative activity and the control of digestive activity by thymol can positively influence gastric maturation and function in weaned pigs.
Abstract:
Since historical times, coastal areas throughout the eastern Mediterranean have been exposed to tsunami hazard. For many decades, knowledge about palaeotsunamis was based solely on historical accounts. However, results from timeline analyses reveal different characteristics affecting the quality of the dataset (i.e. distribution of data, temporal thinning backward of events, local periodization phenomena) that emphasize the fragmentary character of the historical data. As an increasing number of geo-scientific studies give convincing examples of well-dated tsunami signatures not reported in catalogues, the non-existing record is a major problem for palaeotsunami research. While the compilation of historical data allows a first approach to the identification of areas vulnerable to tsunamis, it must not be regarded as reliable for hazard assessment. Considering the increasing economic significance of coastal regions (e.g. for mass tourism) and the constantly growing coastal population, our knowledge of the local, regional and supraregional tsunami hazard along Mediterranean coasts has to be improved. For setting up a reliable tsunami risk assessment and developing risk mitigation strategies, it is of major importance (i) to identify areas under risk and (ii) to estimate the intensity and frequency of potential events. This approach is most promising when based on palaeotsunami research seeking to detect areas of high palaeotsunami hazard, to calculate recurrence intervals and to document palaeotsunami destructiveness in terms of wave run-up, inundation and long-term coastal change. Within the past few years, geo-scientific studies on palaeotsunami events have provided convincing evidence that throughout the Mediterranean ancient harbours were subject to strong tsunami-related disturbance or destruction. Constructed to protect ships from storm and wave activity, harbours provide especially sheltered and quiescent environments and thus turn out to be valuable geo-archives for tsunamigenic high-energy impacts on coastal areas. Directly exposed to the Hellenic Trench and extensive local fault systems, coastal areas in the Ionian Sea and the Gulf of Corinth hold a considerably high risk of tsunami events. Geo-scientific and geoarchaeological studies carried out in the environs of the ancient harbours of Krane (Cefalonia Island), Lechaion (Corinth, Gulf of Corinth) and Kyllini (western Peloponnese) comprised on-shore and near-shore vibracoring and subsequent sedimentological, geochemical and microfossil analyses of the recovered sediments. Geophysical methods such as electrical resistivity tomography and ground penetrating radar were applied in order to detect subsurface structures and to verify stratigraphical patterns derived from vibracores over long distances. The overall geochronological framework of each study area is based on radiocarbon dating of biogenic material and age determination of diagnostic ceramic fragments. The results presented within this study provide distinct evidence of multiple palaeotsunami landfalls in the investigated areas. Tsunami signatures encountered in the environs of Krane, Lechaion and Kyllini include (i) coarse-grained allochthonous marine sediments intersecting silt-dominated quiescent harbour deposits and/or shallow marine environments, (ii) disturbed microfaunal assemblages and/or (iii) distinct geochemical fingerprints as well as (iv) geo-archaeological destruction layers and (v) extensive units of beachrock-type calcarenitic tsunamites.
For Krane, geochronological data yielded termini ad or post quem (maximum ages) for tsunami event generations dated to 4150 ± 60 cal BC, ~ 3200 ± 110 cal BC, ~ 650 ± 110 cal BC, and ~ 930 ± 40 cal AD, respectively. Results for Lechaion suggest that the harbour was hit by strong tsunami impacts in the 8th-6th century BC, the 1st-2nd century AD and the 6th century AD. At Kyllini, the harbour site was affected by tsunami impact between the late 7th and early 4th century BC and between the 4th and 6th century AD. In the case of Lechaion and Kyllini, the final destruction of the harbour facilities also seems to be related to the tsunami impact. Comparing the tsunami signals obtained for each study area with geo-scientific data on palaeotsunami events from other sites indicates that the investigated harbour sites represent excellent geo-archives for supra-regional mega-tsunamis.