780 results for EXPLOSION


Relevance:

10.00%

Publisher:

Abstract:

Water scarcity does not necessarily mean poverty, as can be deduced from an analysis of geographical areas: there are relatively rich countries with scarce water resources and poor countries with abundant freshwater. A developed human society has the scientific, technical, economic, institutional and political resources to adjust water availability to demand, and vice versa, in a way that tends towards sustainability, provided that economic activities are modified accordingly and that sustainability is a desired and participatory social goal. The Canary Islands archipelago lies in the arid Saharan region, although there are areas of relatively high rainfall on its northern slopes, which are affected by the circulation of the trade winds and humid Atlantic air masses. Water scarcity is well assumed and internalised in many of the island areas of the Canaries, especially after the demographic explosion of the twentieth century. Even so, it remains a European region with an acceptable economy, notably rich relative to its immediate geographical surroundings. The attainment of freshwater is the accumulated result of a great, centuries-long economic and imaginative effort, with different nuances on each island and in each part of the same island. Nevertheless, serious dysfunctions persist or have appeared because of rapid development, the entrenchment of unsustainable agricultural activities, institutional weakness and scarce citizen participation in long-term water policy, in a scientific and technical environment still to be consolidated. Even so, the achievements in groundwater abstraction are spectacular and the progress in desalination and reuse is very notable.
ABSTRACT: Water scarcity does not necessarily mean poverty: there are countries that are relatively rich with scarce water resources and poor countries that have plenty of freshwater. A developed human society has scientific, technical, economic, institutional and policy resources to adapt water availability to demand, and vice versa, in a way that tends to sustainability. This requires conveniently modifying economic activities and making sustainability a wanted and participated social goal. The Archipelago of the Canaries is placed in the Saharan dry belt, although there are some areas of relatively high rainfall on the north-facing slopes of the islands, which intersect the circulation of the trade winds and Atlantic humid air masses. Water scarcity is something well assumed and internalised in many of the areas of the Canaries, especially after the demographic explosion of the XX century. But this does not imply poverty; actually it is a European region with an acceptable economic level, notably rich with respect to the nearby geographical area. Freshwater winning is the accumulated result of secular economic and imaginative efforts, which present differences from island to island and even inside the same island. However, some serious malfunctions remain or have appeared due to the fast evolution, the persistence of unsustainable agricultural activities and the still scarce public participation in long-term water policies. This happens in a scientific and technical environment which is still to be consolidated. Nevertheless, there are spectacular achievements in groundwater winning, and there is notable progress in desalination and water reuse.

Relevance:

10.00%

Publisher:

Abstract:

The world of communication has changed quickly in the last decade, resulting in a rapid increase in the pace of people's lives. This is due to the explosion of mobile communication and the internet, which has now reached all levels of society. With such pressure for access to communication there is increased demand for bandwidth. Photonic technology is the right solution for high-speed networks that have to supply wide bandwidth to new communication service providers. In particular, this Ph.D. dissertation deals with DWDM optical packet-switched networks. The topic raises a large number of problems, from the physical layer up to the transport layer. Here the subject is tackled from the network-level perspective. The long-term solution represented by optical packet switching has been fully explored in recent years together with the Network Research Group at the Department of Electronics, Computer Science and Systems of the University of Bologna. Some national as well as international projects supported this research, such as the Network of Excellence (NoE) e-Photon/ONe, funded by the European Commission in the Sixth Framework Programme, and the INTREPIDO project (End-to-end Traffic Engineering and Protection for IP over DWDM Optical Networks), funded by the Italian Ministry of Education, University and Scientific Research. Optical packet switching for DWDM networks is studied at the single-node level as well as at the network level. In particular, the techniques discussed are intended for a long-haul transport network that connects local and metropolitan networks around the world. The main issues faced are contention resolution in an asynchronous, variable-packet-length environment, adaptive routing, wavelength conversion and node architecture. Characteristics that a network must assure, such as quality of service and resilience, are also explored at both the node and network level. Results are mainly evaluated via simulation and through analysis.
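The abstract notes that results are evaluated mainly via simulation and analysis. As a rough illustration of the kind of analytic baseline often used for wavelength-domain contention (my own sketch, not taken from the thesis), the Erlang-B formula treats the W wavelengths of an output fibre as W servers and gives the packet-loss probability under Poisson traffic assumptions; the load and wavelength count below are hypothetical.

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking (loss) probability for `servers` wavelengths and `offered_load`
    in Erlangs, using the numerically stable recursion
    B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Hypothetical example: 32 wavelengths per output fibre, 0.8 Erlang per wavelength.
wavelengths = 32
load = 0.8 * wavelengths
print(f"Estimated loss probability: {erlang_b(load, wavelengths):.2e}")
```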

Relevance:

10.00%

Publisher:

Abstract:

This Thesis is devoted to the study of the optical companions of Millisecond Pulsars in Galactic Globular Clusters (GCs), as part of a large project started at the Department of Astronomy of the Bologna University, in collaboration with other institutions (Astronomical Observatory of Cagliari and Bologna, University of Virginia), specifically dedicated to the study of the environmental effects on passive stellar evolution in Galactic GCs. Globular Clusters are very efficient "kilns" for generating exotic objects, such as Millisecond Pulsars (MSPs), low-mass X-ray binaries (LMXBs) or Blue Straggler Stars (BSSs). In particular, MSPs are formed in binary systems containing a Neutron Star which is spun up through mass accretion from the evolving companion (e.g. Bhattacharya & van den Heuvel 1991). The final stage of this recycling process is either the core of a peeled star (generally a helium white dwarf) or a very light, almost exhausted star, orbiting a very fast rotating Neutron Star (an MSP). Despite the large difference in total mass between the disk of the Galaxy and the Galactic GC system (up to a factor of 10^3), the percentage of fast rotating pulsars in binary systems found in the latter is much higher. MSPs in GCs show spin periods in the range 1.3-30 ms, spin-down rates Ṗ of the order of 10^-19 s/s and a lower magnetic field, with respect to "normal" radio pulsars, of B ≈ 10^8 gauss. The high probability of disruption of a binary system after a supernova explosion explains why we expect only a low percentage of recycled millisecond pulsars with respect to the whole pulsar population. In fact, only 10% of the 1800 known radio pulsars are radio MSPs. It is not surprising that MSPs are overabundant in GCs with respect to the Galactic field, since in the Galactic disk MSPs can only form through the evolution of primordial binaries, and only if the binary survives the supernova explosion which leads to the formation of the neutron star. On the other hand, the extremely high stellar density in the cores of GCs, relative to most of the rest of the Galaxy, favors the formation of several different binary systems suitable for the recycling of NSs (Davies et al. 1998). In this thesis we present the properties of two millisecond pulsar companions discovered in two globular clusters: the helium white dwarf orbiting the MSP PSR J1911-5958A in NGC 6752, and the second case of a tidally deformed star orbiting an eclipsing millisecond pulsar, PSR J1701-3006B in NGC 6266.
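The spin period, spin-down rate and magnetic field quoted above are tied together by the standard magnetic-dipole spin-down estimate. As a hedged illustration (not part of the thesis abstract), inserting typical MSP values into it recovers the ~10^8 gauss field mentioned:

```latex
% Characteristic surface dipole field from P and \dot{P} (cgs estimate)
B \;\simeq\; 3.2\times10^{19}\,\sqrt{P\,\dot{P}}\ \mathrm{G}
\;\;\Longrightarrow\;\;
B \;\simeq\; 3.2\times10^{19}\sqrt{(3\times10^{-3}\,\mathrm{s})\,(10^{-19})}
\;\approx\; 6\times10^{8}\ \mathrm{G}.
```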

Relevance:

10.00%

Publisher:

Abstract:

The present study investigated the production of second-generation bioethanol from the lignocellulosic residues of sugar cane (bagasse), using the enzymatic process. The research activity was carried out at the Department of Chemical Engineering of Lund University (Sweden), within an exchange agreement with the University of Bologna. The main aim was to evaluate ethanol production as a function of the operating conditions under which the enzymatic saccharification and fermentation (SSF) of the bagasse was carried out, the raw material having been subjected to Steam Explosion (STEX) pre-treatment with the addition of SO2 as an acid catalyst. The data obtained in the laboratory from the SSF were then used to implement, in the AspenPlus® environment, the flowsheet of a plant simulating all aspects of ethanol production, in order to study the energy efficiency of the whole process. The production of fuels alternative to fossil sources is nowadays of primary importance, both in limiting the greenhouse effect and in minimising the effects of geopolitical shocks on a country's strategic supplies. The constantly growing transport sector consumes, in industrialised countries, about one third of the demand for fossil sources. In this context the production of bioethanol can bring benefits both to the environment and to the economy, provided that life-cycle assessments of the fuel certify its energy effectiveness and its greenhouse-mitigation potential. Numerous studies highlight the environmental merits of bioethanol; however, distinctions must be made about the production process and the starting material used, in order to fully understand the real potential of the biofuel's well-to-wheel system. First-generation bioethanol, obtained from the conversion of starch (maize) and molasses (sugar beet and sugar cane), has shown several drawbacks: first, because of the competition between the food and biofuel industries; second, because plantations alone do not have the potential to satisfy the growing demand for bioethanol. In addition, strong doubts have been raised about the energy and life-cycle efficiency of bioethanol from maize, from which almost half of the world's ethanol production (27 billion litres/year) is obtained. The use of lignocellulosic materials such as agricultural and forestry-industry residues, municipal waste, softwood and hardwood, unlike the previous crops, does not present the drawbacks mentioned above, which is why bioethanol produced from lignocellulose is called second generation. However, the methods for producing it are more complex than the previous ones, because of the difficulty of making the sugars contained in the lignocellulose bioavailable; for this reason both a pre-treatment and enzymatic hydrolysis are required. Bagasse is an optimal substrate for the production of second-generation bioethanol, since it is available in large quantities and has already shown good ethanol yields when subjected to SSF. The raw bagasse was first air-dried and its water content adjusted to 60%; it was then placed in contact for 30 minutes with the acid catalyst SO2 (2%), after which it was pre-treated in the STEX reactor (10 L, 200 °C and 5 minutes) in 6 batches of 1.638 kg on a wet-weight basis.
The resulting slurry was subjected to batch SSF (35 °C and pH 5) using cellulolytic enzymes for the hydrolysis and ordinary brewer's yeast (Saccharomyces cerevisiae) as the microbial consortium for the fermentation. One objective of the investigation was to study SSF performance while varying the nutrient medium, the solids concentration (WIS 5%, 7.5%, 10%) and the sugar load. The results showed both a good enzymatic cellulose-depolymerisation activity and a high fermentation yield, also owing to the low concentration of inhibitors produced in the pre-treatment stage, such as acetic acid, furfural and HMF. However, the ethanol concentration reached was not judged high enough to carry out, at pilot scale, a possible distillation at low energy cost. Therefore, further batch SSF experiments were carried out with the addition of sugar-beet molasses (Beta vulgaris), whose yields were first studied through fermentations under the same conditions as the SSF. The results obtained suggested that, with further adjustments, the intended targets can be reached. The energy efficiency of the bioethanol production process by SSF of bagasse with added molasses was also investigated as a function of the most significant variables. The modelling was performed with the AspenPlus® software, carrying out a sensitivity analysis of the energy mix leaving the plant as the SSF yield and the sucrose addition were varied. The simulations showed that, net of the enthalpy requirement for self-sustainment, the energy efficiency of the process ranges between 0.20 and 0.53 depending on the conditions; in addition, the curve of distillation energy cost per litre of ethanol produced was constructed as a function of the ethanol concentration leaving the fermentation. Finally, factors have already been identified on which it is possible to act to obtain further improvements, both in the laboratory and in the process modelling, and consequently to produce, with high energy efficiency, bioethanol with a high greenhouse-mitigation potential.
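As a small illustrative calculation (not from the thesis, and using a hypothetical glucan content for the pretreated solids), the theoretical ethanol concentration of an SSF run can be estimated from the water-insoluble solids (WIS) loading via the standard stoichiometric factors: 1.111 g glucose per g glucan on hydrolysis and 0.511 g ethanol per g glucose on fermentation.

```python
def theoretical_ethanol_g_per_l(wis_fraction: float,
                                glucan_fraction: float,
                                broth_density_g_per_l: float = 1000.0) -> float:
    """Maximum ethanol concentration (g/L) if all glucan in the WIS were
    hydrolysed and fermented.
    wis_fraction: WIS mass fraction of the broth (e.g. 0.05 for 5%).
    glucan_fraction: glucan mass fraction of the WIS (hypothetical value)."""
    glucan = broth_density_g_per_l * wis_fraction * glucan_fraction  # g glucan per L broth
    glucose = glucan * 1.111   # hydrolysis of glucan to glucose
    return glucose * 0.511     # fermentation of glucose to ethanol

# Example: 10% WIS, assuming ~40% glucan in the pretreated bagasse solids.
print(f"{theoretical_ethanol_g_per_l(0.10, 0.40):.1f} g/L ethanol (theoretical maximum)")
```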

Relevance:

10.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first one, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For the two above-mentioned techniques, we have used the cross-correlation technique among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we have applied both methodologies to real seismic events. The DDJHD technique has been applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the relevant geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) has been considerably reduced by the application of our technique.
This is what we expected, since the methodology has been applied to a sequence of events for which we can assume a real closeness among the hypocenters, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of the cross-correlation has not brought evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of the cross-correlation has not substantially improved the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further justification for the scarce quality of the results given by the cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we have performed a signal interpolation in order to improve the time resolution. The algorithm so developed has been applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long time to process the data, and therefore the user can immediately check the results. During a field survey, such a feature will make a quasi real-time check possible, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
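The cross-correlation and signal-interpolation steps described above can be illustrated with a minimal sketch (my own, not code from the thesis): the relative delay between two waveform segments is taken at the cross-correlation maximum and refined to sub-sample resolution with a three-point parabolic fit around the peak.

```python
import numpy as np

def relative_delay(x: np.ndarray, y: np.ndarray, dt: float) -> float:
    """Delay of y relative to x (seconds) from the cross-correlation maximum,
    refined with a parabolic interpolation around the peak."""
    cc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    k = int(np.argmax(cc))
    lag = k - (len(x) - 1)                       # integer-sample lag
    if 0 < k < len(cc) - 1:                      # parabolic sub-sample refinement
        c_m, c_0, c_p = cc[k - 1], cc[k], cc[k + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom != 0.0:
            lag += 0.5 * (c_m - c_p) / denom
    return lag * dt

# Synthetic check: a pulse shifted by 7.3 ms, sampled at 100 Hz.
dt = 0.01
t = np.arange(0, 10, dt)
pulse = lambda tau: np.exp(-((t - 5.0 - tau) / 0.2) ** 2)
print(f"Estimated delay: {relative_delay(pulse(0.0), pulse(0.0073), dt) * 1e3:.2f} ms")
```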

Relevance:

10.00%

Publisher:

Abstract:

The need for high bandwidth, due to the explosion of new multimedia-oriented IP-based services as well as increasing broadband access requirements, is leading to the need for flexible and highly reconfigurable optical networks. While transmission bandwidth does not represent a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network represent the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. In this Ph.D. thesis, solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks, are proposed. In particular, solutions based on devices and components that are expected to mature in the near future are proposed, with the aim of limiting the employment of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching has been carried out within three relevant research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union in the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and to contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in chapter 2. Then, chapter 4 presents multi-fiber switches, which jointly employ the wavelength and space domains to solve contention. Chapter 5 shows buffered switches, which solve contention in the time domain in addition to the wavelength domain. Finally, chapter 6 presents a cost model to compare different switch architectures in terms of cost.
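As a purely illustrative sketch (not the architectures proposed in the thesis), the contention-resolution order discussed in chapters 2-5 can be summarised as a scheduling decision that tries the wavelength domain first, then the space domain (an alternative fibre of a multi-fibre interface), then the time domain (a buffer), and drops the packet only as a last resort; all names and values below are hypothetical.

```python
from typing import Tuple

def schedule_packet(preferred_wl: int,
                    busy: dict,
                    fibers: int,
                    wavelengths: int,
                    buffer_free: bool) -> str:
    """Return the action for one packet arriving at one output port.
    busy[(fiber, wl)] is True if that channel is already transmitting."""
    # 1) Wavelength domain: same fibre, preferred wavelength first, then any free one.
    for wl in [preferred_wl] + [w for w in range(wavelengths) if w != preferred_wl]:
        if not busy.get((0, wl), False):
            suffix = "" if wl == preferred_wl else " (converted)"
            return f"send on fibre 0, wavelength {wl}{suffix}"
    # 2) Space domain: any wavelength on an alternative fibre.
    for f in range(1, fibers):
        for wl in range(wavelengths):
            if not busy.get((f, wl), False):
                return f"send on fibre {f}, wavelength {wl} (space + conversion)"
    # 3) Time domain: delay the packet if a buffer position is available.
    if buffer_free:
        return "buffer packet (time domain)"
    # 4) No resource left: the packet is lost.
    return "drop packet"

# Hypothetical example: 2 fibres x 4 wavelengths, fibre 0 fully busy.
busy = {(0, w): True for w in range(4)}
print(schedule_packet(preferred_wl=1, busy=busy, fibers=2, wavelengths=4, buffer_free=True))
```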

Relevance:

10.00%

Publisher:

Abstract:

The AMANDA-II detector is primarily designed for the direction-resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective rise of the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae as well as experimental data from supernova SN1987A were studied. In addition, the sensitivities of the optical modules were re-determined. For this purpose, the energy losses of charged particles in the South Polar ice had to be investigated and a simulation of photon propagation had to be developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. As part of this work, an algorithm for the real-time search for supernova signals was implemented as a sub-module of the data acquisition. It contains various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimisation for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules showing unsuitable behaviour is done in real time; however, their behaviour has to be judged on the basis of buffered data for this purpose, so the analysis of the data from the qualified modules cannot proceed without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals for a period of several minutes for later evaluation. Since the noise data of the optical modules are otherwise available in 500 ms intervals, the time base of the analysis can be chosen freely in units of 500 ms. Within this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s and one with a time base of 10 s. This maximises the sensitivity to signals with a characteristic exponential decay time of 3 s while preserving good sensitivity over a wide range of exponential decay times. These analyses were studied in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and are correspondingly well understood. On the basis of the measured data, the expected supernova signals were simulated. From a comparison between this simulation, the measured data of the years 2000 to 2003 and the simulation of the expected statistical background, it can be concluded at a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. The number of events expected from the statistical background at this level is less than one in a million; nevertheless, one such event was measured.
With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the last result published by the AMANDA collaboration, this even yields an upper limit of only 2.6 supernovae per year. In the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate excess before a message about the detection of a supernova candidate is sent. The monitored fraction of stars in the Galaxy is then 81%, but the false-alarm rate also rises, to about 2 events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and are soon to contribute to SNEWS, the worldwide network for the early detection of supernovae.
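A minimal sketch of the collective-rate-increase statistic described above (my own simplification, not the AMANDA analysis code): for a chosen analysis time base, the deviations of the per-module rates from their running means are summed and compared with the expected spread, giving the significance in standard deviations.

```python
import numpy as np

def collective_significance(rates: np.ndarray,
                            means: np.ndarray,
                            sigmas: np.ndarray) -> float:
    """Significance (in standard deviations) of a collective rate excess.
    rates  : measured rate of each qualified optical module in one time bin
    means  : running mean rate of each module
    sigmas : running standard deviation of each module's rate"""
    excess = np.sum(rates - means)                  # summed excess over all modules
    return float(excess / np.sqrt(np.sum(sigmas ** 2)))

# Hypothetical example: 500 modules, each ~1 Hz above its mean, sigma = 30 Hz.
n = 500
sig = collective_significance(np.full(n, 301.0), np.full(n, 300.0), np.full(n, 30.0))
print(f"{sig:.1f} sigma")
```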

Relevance:

10.00%

Publisher:

Abstract:

Stars with an initial mass between about 8 and 25 solar masses end their existence in a tremendous explosion, a Type II supernova. The high-entropy bubble created in this event is a region at the edge of the forming neutron star and is considered a possible site of the r-process. Because of the high temperature T inside the bubble, the matter there is completely photodisintegrated. The ratio of neutrons to protons is described by the electron abundance Ye. The thermodynamic evolution of the system is given by the entropy S. Since the expansion of the bubble proceeds quickly, it can be treated as adiabatic; the entropy S is then proportional to T^3/rho, where rho denotes the density. The explicit time evolution of T and rho, as well as the duration of the process, depend on Vexp, the expansion velocity of the bubble. The first part of this dissertation deals with the process of charged-particle reactions, the alpha-process. This process ends at temperatures of about 3 x 10^9 K, the so-called "alpha-rich" freeze-out, at which point mainly alpha particles and free neutrons, together with a small fraction of intermediate-mass "seed" nuclei in the mass range around A = 100, have been formed. The ratio of free neutrons to seed nuclei, Yn/Yseed, is decisive for whether an r-process can take place. The second part of this work deals with the r-process itself, which occurs at neutron number densities of up to 10^27 neutrons per cm^3 and forms, within at most 400 ms, very neutron-rich "progenitor" isotopes of elements up to thorium and uranium. During the subsequent freeze-out of the neutron-capture reactions at 10^9 K and 10^20 neutrons per cm^3, the original r-process nuclei then beta-decay back towards the valley of stability. This non-equilibrium phase is investigated in detail in the present work by means of a parameter study. Finally, astrophysical conditions are defined under which the entire distribution of the solar r-process isotopic abundances can be reproduced.
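The adiabatic relation mentioned above can be written compactly; the lines below are only a restatement of the standard radiation-dominated expressions (with k_B = 1) and of the quantities named in the abstract, not formulas quoted from the thesis:

```latex
% Radiation-dominated, adiabatic expansion of the high-entropy bubble
S \;\propto\; \frac{T^{3}}{\rho} \;=\; \text{const.},
\qquad
Y_e \;=\; \frac{n_{e^-}-n_{e^+}}{n_\mathrm{b}},
\qquad
\frac{Y_n}{Y_\mathrm{seed}}\ \text{controls whether an r-process can develop,}
```
with the explicit time evolution of T(t) and rho(t), and hence the process duration, set by the expansion velocity Vexp.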

Relevance:

10.00%

Publisher:

Abstract:

Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios were not considered in the respective site safety cases. Furthermore, the rapid renewal of industrial technology has brought about the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are still not properly identified, may remain unidentified until they take place for the first time. The consideration of atypical scenarios deviating from normal expectations of unwanted events or from worst-case reference scenarios is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide an easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened the knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool to obtain a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying causes of atypical accidents, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method. REWI was found to be the most complementary and effective of the three, demonstrating that its synergy with DyPASI would be an adequate strategy to improve hazard identification methodologies towards the capture of atypical accident scenarios.

Relevance:

10.00%

Publisher:

Abstract:

Bioconversion of ferulic acid to vanillin represents an attractive opportunity for replacing synthetic vanillin with a bio-based product that can be labelled "natural" according to current food regulations. Ferulic acid is an abundant phenolic compound in cereal-processing by-products, such as wheat bran, where it is linked to the cell-wall constituents. In this work, the possibility of producing vanillin from ferulic acid released enzymatically from wheat bran was investigated using resting cells of Pseudomonas fluorescens strain BF13-1p4, which carries an insertional inactivation of the vdh gene and the BF13 ech and fcs genes on a low-copy-number plasmid. Process parameters were optimized both for the biomass production phase and for the bioconversion phase, using food-grade ferulic acid as substrate and following the approach of changing one variable while fixing the others at a certain level, followed by response surface methodology (RSM). Under optimized conditions, vanillin up to 8.46 mM (1.4 g/L) was achieved, whereas the highest productivity was 0.53 mmol vanillin L-1 h-1. Cocktails of a number of commercial enzymes (amylases, xylanases, proteases, feruloyl esterases), combined with bran pre-treatment by steam explosion and instant controlled pressure drop technology, were then tested for the release of ferulic acid from wheat bran. The highest ferulic acid release was limited to 15-20% of the ferulic acid occurring in bran, depending on the treatment conditions. Ferulic acid at 1 mM in enzymatic hydrolyzates could be bioconverted into vanillin with a molar yield (55.1%) and selectivity (68%) comparable to those obtained with food-grade ferulic acid, after purification from reducing sugars with a non-polar adsorption resin. Further improvement of ferulic acid recovery from wheat bran is, however, required to make the production of natural vanillin from this by-product more attractive.
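As a small back-of-the-envelope illustration (mine, not from the study), the molar yield and selectivity quoted above follow directly from the measured concentrations; the helper below uses hypothetical input values chosen only to show the arithmetic.

```python
def molar_yield_and_selectivity(ferulic_initial_mM: float,
                                ferulic_residual_mM: float,
                                vanillin_mM: float) -> tuple:
    """Molar yield  = vanillin formed / ferulic acid fed;
    selectivity = vanillin formed / ferulic acid actually consumed."""
    consumed = ferulic_initial_mM - ferulic_residual_mM
    return vanillin_mM / ferulic_initial_mM, vanillin_mM / consumed

# Hypothetical example: 1 mM ferulic acid fed, 0.19 mM left, 0.55 mM vanillin formed.
y, s = molar_yield_and_selectivity(1.0, 0.19, 0.55)
print(f"molar yield {y:.1%}, selectivity {s:.1%}")
```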

Relevance:

10.00%

Publisher:

Abstract:

Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works with the more accurate calculation of optical absorption and shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
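A minimal sketch of the kind of classical power balance the one-dimensional steady-state model builds on (my own illustration of the textbook form, with multi-layer absorption folded into an effective absorptivity A; these are not the thesis equations):

```latex
% Steady-state energy balance for a kerf of width w and depth d cut at speed v
A\,P \;=\; \rho\, v\, w\, d\,\bigl[c_p\,(T_v - T_0) + L_v\bigr] \;+\; P_\mathrm{cond},
```
where P is the incident beam power, T_v the removal (vaporisation) temperature, L_v the latent heat of vaporisation and P_cond the heat conducted into the surrounding material.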

Relevance:

10.00%

Publisher:

Abstract:

Supernovae are among the most energetic events occurring in the universe and are so far the only verified extrasolar source of neutrinos. As the explosion mechanism is still not well understood, recording a burst of neutrinos from such a stellar explosion would be an important benchmark for particle physics as well as for the core-collapse models. The neutrino telescope IceCube is located at the geographic South Pole and monitors the Antarctic glacier for Cherenkov photons. Even though it was conceived for the detection of high-energy neutrinos, it is capable of identifying a burst of low-energy neutrinos ejected from a supernova in the Milky Way by exploiting the low photomultiplier noise in the Antarctic ice and extracting a collective rate increase. A signal Monte Carlo specifically developed for water Cherenkov telescopes is presented. With its help, we will investigate how well IceCube can distinguish between core-collapse models and oscillation scenarios. In the second part, nine years of data taken with the IceCube precursor AMANDA will be analyzed. Intensive data cleaning methods will be presented along with a background simulation. From the result, an upper limit on the expected occurrence of supernovae within the Milky Way will be determined.

Relevance:

10.00%

Publisher:

Abstract:

Core-collapse supernovae are accompanied by a massive burst of low-energy neutrinos. They are among the most energetic phenomena in the universe and are at present the only known source of extrasolar neutrinos. The detection of such a neutrino signature would lead to a deeper understanding of the stellar explosion mechanism, which is so far insufficiently understood. In addition, it would provide new insights into particle physics and supernova modelling. The neutrino telescope IceCube, currently under construction at the geographic South Pole, will be completed in 2011. In its final configuration IceCube consists of 5160 photomultipliers arranged in a grid at depths between 1450 m and 2450 m below the ice surface. By detecting Cherenkov photons in the Antarctic glacier, it is able to identify galactic supernovae through a collective rise of the noise rates of its photomultipliers. In this work, several studies on the implementation of an artificial dead time are presented, which would suppress correlated noise and thus maximise the signal-to-background ratio. A further part of this dissertation consisted of integrating the supernova data acquisition into a new experiment control software. For the analysis part of the work, a Monte Carlo for IceCube was developed, and neutrino oscillation mechanisms as well as a set of signal models were integrated. A likelihood hypothesis test was used to investigate how well different supernova and neutrino oscillation scenarios can be distinguished. Furthermore, it was analysed to what extent shock excitations and a QCD phase transition can be detected in the course of the explosion process.
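The likelihood hypothesis test mentioned above can be illustrated with a minimal Poisson sketch (mine, not the analysis code of the thesis): given binned counts, the log-likelihood ratio between two signal-plus-background templates indicates which supernova or oscillation scenario describes the observation better; all templates and numbers below are hypothetical.

```python
import numpy as np
from math import lgamma

def poisson_loglik(counts, expected) -> float:
    """Poisson log-likelihood of observed counts given an expected template."""
    return float(sum(k * np.log(mu) - mu - lgamma(k + 1)
                     for k, mu in zip(counts, expected)))

def preferred_model(counts, background, signal_a, signal_b) -> str:
    """Compare two signal hypotheses on top of a common background."""
    lr = poisson_loglik(counts, background + signal_a) - \
         poisson_loglik(counts, background + signal_b)
    return f"log-likelihood ratio {lr:+.1f} -> model {'A' if lr > 0 else 'B'} preferred"

# Hypothetical example: 20 time bins with two different burst shapes.
rng = np.random.default_rng(1)
bins = np.arange(20)
background = np.full(20, 100.0)
signal_a = 50.0 * np.exp(-bins / 3.0)          # fast-decaying burst
signal_b = 25.0 * np.exp(-bins / 10.0)         # slower, flatter burst
observed = rng.poisson(background + signal_a)  # pretend nature chose model A
print(preferred_model(observed, background, signal_a, signal_b))
```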

Relevance:

10.00%

Publisher:

Abstract:

The main goal of this thesis is to facilitate the process of developing industrial automated systems by applying formal methods to ensure the reliability of such systems. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system rather than just verifying it. This approach tackles the state explosion problem with modeling patterns and new algorithms aimed at the verification of the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
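As a toy illustration of the automata machinery involved (not the algorithms developed in the thesis), the classical diagnoser construction propagates fault labels through unobservable transitions and performs a subset construction over the observable events; a diagnoser state that permanently mixes 'N' and 'F' labels along a cycle signals that the fault is not diagnosable. The plant below is hypothetical.

```python
# Toy diagnoser construction for a partially observed discrete event system.
# transitions: dict mapping (state, event) -> set of successor states.

def unobs_closure(transitions, observable, fault, pairs):
    """Close a set of (state, label) pairs under unobservable transitions,
    switching the label to 'F' once the fault event has fired."""
    stack, closed = list(pairs), set(pairs)
    while stack:
        s, lab = stack.pop()
        for (q, e), successors in transitions.items():
            if q == s and e not in observable:
                new_lab = 'F' if (lab == 'F' or e == fault) else 'N'
                for n in successors:
                    if (n, new_lab) not in closed:
                        closed.add((n, new_lab))
                        stack.append((n, new_lab))
    return frozenset(closed)

def build_diagnoser(transitions, observable, fault, initial_state):
    """Subset construction over observable events; returns the initial
    diagnoser state and the diagnoser transition map."""
    q0 = unobs_closure(transitions, observable, fault, {(initial_state, 'N')})
    dtrans, frontier, visited = {}, [q0], {q0}
    while frontier:
        Q = frontier.pop()
        for e in observable:
            moved = {(n, lab) for (s, lab) in Q
                     for n in transitions.get((s, e), set())}
            if not moved:
                continue
            Q2 = unobs_closure(transitions, observable, fault, moved)
            dtrans[(Q, e)] = Q2
            if Q2 not in visited:
                visited.add(Q2)
                frontier.append(Q2)
    return q0, dtrans

# Hypothetical plant: fault 'f' is unobservable; 'a' and 'b' are observable.
t = {('s0', 'a'): {'s1'}, ('s0', 'f'): {'s2'}, ('s2', 'a'): {'s3'},
     ('s1', 'b'): {'s1'}, ('s3', 'a'): {'s3'}}
q0, d = build_diagnoser(t, observable={'a', 'b'}, fault='f', initial_state='s0')
print(len(d), "diagnoser transitions; initial state:", sorted(q0))
```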

Relevance:

10.00%

Publisher:

Abstract:

According to the latest statistical projections formulated by Eurostat, the proportion of the EU-27 population aged over 65 years is predicted to increase from 17.5% in 2011 to 29.5% by 2060. This "population explosion" makes it extremely important to identify the different genetic and molecular mechanisms which underpin morbidity and mortality, along with new strategies able to counteract or slow down its progress. Within this scenario fits the European project MARK-AGE, whose aim was to identify a robust set of biomarkers of human ageing able to discriminate between chronological and biological ageing, and to derive a model for healthy ageing, through the analysis of three populations from different European countries, supposed to be characterized by different ageing rates: 1. subjects representing "normal" or "physiological" ageing; 2. subjects representing "successful" or "decelerated" ageing; 3. subjects representing "accelerated" ageing. The aim of this work was to recruit and characterize volunteers, to perform an accurate analysis of the health status of the recruited elderly subjects (60-79 years), verifying any possible dissimilarity in their ageing trajectories, to identify a set of robust ageing biomarkers, and to investigate possible correlations between ageing biomarkers and the health status of the recruited volunteers. The model proposed by the MARK-AGE project regarding different ageing trajectories has been confirmed and several ageing biomarkers have been identified.