Abstract:
The atmosphere is a global influence on the movement of heat and humidity between the continents, and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for the understanding of different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis, two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, dated to 12,900 years before present. Taken together, the cores cover the last 60,000 years: SM3 the Holocene, and DE3 the marine isotope stages MIS-3 and MIS-2, respectively. The frequencies of glacial dust storm events and their paleo wind directions are detected by high-resolution grain size and provenance analysis of the lake sediments. Two different methods are applied: geochemical measurement of the sediment using µXRF scanning, and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections).

It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel-North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar is located on the western side of the Eifel-North-South zone, so carbonate-rich aeolian sediment is most likely transported towards the Dehner dry maar by easterly winds. A methodology, the RADIUS-carbonate module, is developed that restricts detection to the aeolian-transported carbonate particles in the sediment.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east wind frequency are increased in comparison to MIS-2. These results suggest that atmospheric circulation was affected by more turbulent conditions during MIS-3, in comparison to the more stable atmospheric circulation during the full glacial conditions of MIS-2.

The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east wind conditions and of east wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also reflected in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea surface temperature patterns. Furthermore, the analysis of long-persisting east wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all the experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer. The different glacial experiments consistently show a shift of a long-lasting high from over the Baltic Sea towards the northwest, directly above the Scandinavian Ice Sheet, together with a contemporaneously enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe. While the proxy data from the sediment cores yield only a binary signal of wind direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models is successful: it shows a possible distribution of high and low pressure areas, and thus the direction and strength of the wind fields that have the capacity to transport dust. In conclusion, the combination of numerical models, to enhance understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.
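As a rough illustration of the kind of image-analysis step a thin-section particle module such as RADIUS performs, the following Python sketch labels bright grains and keeps those whose calcium signal (standing in for a co-registered µXRF map) flags them as carbonate. The function name, thresholds and the random stand-in data are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch only: a simplified particle-counting step of the kind
# a thin-section image-analysis module (such as RADIUS) might perform.
# Thresholds and the coupling to a Ca intensity map are assumptions.
import numpy as np
from scipy import ndimage

def count_carbonate_grains(gray_image, ca_map, grain_thresh=0.5, ca_thresh=0.7):
    """Label connected bright grains, then keep those whose mean Ca
    intensity marks them as carbonate. Returns the number of carbonate
    grains and their pixel sizes."""
    grains, n = ndimage.label(gray_image > grain_thresh)      # connected particles
    sizes = ndimage.sum(np.ones_like(grains), grains, range(1, n + 1))
    ca_means = ndimage.mean(ca_map, grains, range(1, n + 1))  # Ca signal per grain
    carbonate = ca_means > ca_thresh
    return int(carbonate.sum()), sizes[carbonate]

# Toy usage with random data standing in for a scanned thin section.
rng = np.random.default_rng(0)
img, ca = rng.random((256, 256)), rng.random((256, 256))
n_carb, sizes = count_carbonate_grains(img, ca)
print(n_carb, sizes[:5])
```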
Abstract:
The study defines a new farm classification and identifies arable land managements. These aspects and several indicators are taken into account to estimate the sustainability level of farms under organic and conventional regimes. The data source is the Italian Farm Accountancy Data Network (RICA) for the years 2007-2011, which samples structural and economic information. Environmental data have been added to better describe the farm context. The new farm classification describes holdings by general information and farm structure. The general information comprises the adopted regime and the farm location in terms of administrative region, slope and phyto-climatic zone. The farm structures describe the presence of the main productive processes and land covers recorded in the FADN database. The farms, grouped by homogeneous farm structure or farm typology, are evaluated in terms of sustainability. The farm model MAD has been used to estimate a list of indicators, which mainly describe the environmental and economic areas of sustainability. Finally, arable lands are taken into account to identify arable land managements and crop rotations. Each arable land has been classified by crop pattern, and crop rotation management has then been analysed by spatial and temporal approaches. The analysis reports a high variability within regimes. The farm structure influences indicator levels more than the regime does, and it is not always possible to compare the two regimes. However, some differences between organic and conventional agriculture have been found: organic farm structures show different frequencies and geographical locations than conventional ones, and different connections among arable lands and farm structures have been identified.
Abstract:
The Ivrea Zone in northern Italy has been the focus of numerous petrological, geochemical and structural studies. It is commonly inferred to represent an almost complete section through the mid to lower continental crust, in which metamorphism and partial melting of the abundant metapelites were the result of magmatic underplating by a large volume of mantle-derived magma. This study concerns amphibolite and granulite facies metamorphism in the Ivrea Zone, with a focus on metapelites and metapsammites/metagreywackes from Val Strona di Omegna and metapelites from Val Sesia and Val Strona di Postua, with the aim of better constraining their metamorphic evolution as well as their pressure and temperature conditions via phase equilibria modelling.

In Val Strona di Omegna, the metapelites show a structural and mineralogical change from mica-schists with the common assemblage bi-mu-sill-pl-q-ilm ± liq at the lowest grades, through metatexitic migmatites (g-sill-bi-ksp-pl-q-ilm-liq) at intermediate grades, to complex diatexitic migmatites (g-sill-ru-bi-ksp-pl-q-ilm-liq) at the highest grades. Within this section several mappable isograds occur, including the first appearance of K-feldspar in the metapelites, the first appearance of orthopyroxene in the metabasites and the disappearance of prograde biotite from the metapelites. The inferred onset of partial melting in the metapelites occurs around Massiola. The prograde suprasolidus evolution of the metapelites is consistent with melting via the breakdown of first muscovite, then biotite. Maximum modelled melt fractions of 30–40 % are predicted at the highest grade. The regional metamorphic field gradient in Val Strona di Omegna is constrained to range from conditions of 3.5–6.5 kbar at T = 650–730 °C to P > 9 kbar at T > 900 °C. The peak P–T estimates, particularly for granulite facies conditions, are significantly higher (by around 100 °C) than those of most previous studies. In Val Sesia and Val Strona di Postua to the south the exposure is more restricted. P–T estimates for the metapelites are 750–850 °C and 5–6.5 kbar in Val Sesia and approximately 800–900 °C and 5.5–7 kbar in Val Strona di Postua. These results indicate similar temperatures but lower pressures than for the metapelites in Val Strona di Omegna. Metapelites in Val Sesia in contact with the Mafic Complex exhibit a metatexitic structure, while in Val Strona di Postua diatexitic structures occur. Further, metapelites at the contact with the Mafic Complex contain cordierite (± spinel) that overprints the regional metamorphic assemblages and is interpreted to have formed during contact metamorphism related to the intrusion of the Mafic Complex. The lower pressures in the high-grade rocks in Val Sesia and Val Strona di Postua are consistent with some decompression from the regional metamorphic peak prior to the intrusion of the Mafic Complex, suggesting the rocks followed a clockwise P–T path. In contrast, the metapelites in Val Strona di Omegna, especially in the granulite facies, do not contain any cordierite or any evidence for a contact metamorphic overprint. The extrapolated granulite facies mineral isograds are cut by the rocks of the Mafic Complex to the south. Therefore, the Mafic Complex cannot have caused the regional metamorphism, and it is unlikely that the Mafic Complex occurs in Val Strona di Omegna.
Abstract:
Tonalite-trondhjemite-granodiorite (TTG) gneisses form up to two-thirds of the preserved Archean continental crust, and there is considerable debate regarding the primary magmatic processes that generated these rocks. The popular theories hold that these rocks were formed by partial melting of basaltic oceanic crust that had previously been metamorphosed to garnet-amphibolite and/or eclogite facies conditions, either at the base of thick oceanic crust or by subduction processes.

This study investigates a new aspect of the source rock for Archean continental crust, which is inferred to have had a bulk composition richer in magnesium (picrite) than present-day basaltic oceanic crust. This difference is thought to originate from a higher geothermal gradient in the early Archean, which may have induced higher degrees of partial melting in the mantle and thus resulted in a thicker and more magnesian oceanic crust.

The methods used to investigate the role of a more MgO-rich source rock in the formation of TTG-like melts are mineral equilibria calculations with the software THERMOCALC and high-pressure experiments conducted at 10–20 kbar and 900–1100 °C, both combined in a forward modelling approach. Initially, P–T pseudosections for natural rock compositions with increasing MgO contents were calculated in the system NCFMASHTO (Na2O–CaO–FeO–MgO–Al2O3–SiO2–H2O–TiO2) to ascertain the metamorphic products of rocks with MgO contents increasing from a MORB up to a komatiite. A small number of previous experiments on komatiites showed the development of pyroxenite instead of eclogite and garnet-amphibolite during metamorphism, and established that melts of these pyroxenites are of basaltic composition, thus again building oceanic crust instead of continental crust.

The calculated P–T pseudosections represent a continuous development of the metamorphic products from amphibolites and eclogites towards pyroxenites. On the basis of these calculations and the changes within the range of compositions, three picritic Models of Archean Oceanic Crust (MAOC) were established with different MgO contents (11, 13 and 15 wt%) ranging between basalt and komatiite. The thermodynamic modelling for MAOC 11, 13 and 15 at supersolidus conditions is imprecise, since no appropriate melt model for metabasic rocks is currently available and the melt model for metapelitic rocks yielded unsatisfactory calculations. The partially molten region is therefore covered by high-pressure experiments. The experimental results show a transition from predominantly tonalitic melts in MAOC 11 to basaltic melts in MAOC 15, and a solidus moving towards higher temperatures with increasing magnesium in the bulk composition. Tonalitic melts were generated in MAOC 11 and 13 at pressures up to 12.5 kbar in the presence of garnet, clinopyroxene and plagioclase ± quartz (± orthopyroxene in the presence of quartz and at lower pressures) in the absence of amphibole, but it could not be unambiguously determined whether the tonalitic melts coexisting with an eclogitic residue and rutile at 20 kbar belong to the Archean TTG suite. Basaltic melts were generated predominantly in the presence of granulite facies residues, such as amphibole ± garnet, plagioclase and orthopyroxene lacking quartz, in all MAOC compositions at pressures up to 15 kbar.

The tonalitic melts generated in MAOC 11 and 13 indicate that a thicker oceanic crust with more magnesium than a modern basalt is also a viable source for the generation of TTG-like melts, and therefore of continental crust, in the Archean. The experimental results are related to different geologic settings as a function of pressure. The favoured setting for the generation of early TTG-like melts at 15 kbar is the base of an oceanic crust thicker than exists today, or the melting of slabs in shallow subduction zones, both without interaction of the tonalitic melts with the mantle. Tonalitic melts at 20 kbar may have been generated below the plagioclase stability field by slab melting in deeper subduction zones that developed with time during the progressive cooling of the Earth, but it is unlikely that those melts reached lower pressure levels without further mantle interaction.
Abstract:
During the PhD program in chemistry (curriculum in environmental chemistry) at the University of Bologna, the sustainability of industry was investigated through the application of the LCA methodology. The efforts were focused on the chemical sector, in order to investigate reactions complying with the Green Chemistry and Green Engineering principles and to evaluate their sustainability in comparison with traditional pathways from a life cycle perspective. The environmental benefits associated with a reduction in the number of synthesis steps and with the use of renewable feedstocks were assessed through a holistic approach, selecting two case studies of high industrial relevance: the synthesis of acrylonitrile and the production of acrolein. The present approach is intended to represent a standardized application of the LCA methodology to the chemical sector, which could be extended to several case studies, and also an improvement of the current databases, since the lack of data to fill the inventories of chemical productions represents a major limitation that is difficult to overcome and that can negatively affect the results of the studies. The results of the analyses confirm that sustainability in the chemical sector should be evaluated with a cradle-to-gate approach, considering all the stages and flows involved in each pathway in order to avoid shifting the environmental burdens from one step to another. Moreover, where possible, LCA should be supported by other tools able to investigate the other two dimensions of sustainability, namely social and economic issues.
Abstract:
The investigation of the phylogenetic diversity and functionality of complex microbial communities in relation to changes in environmental conditions represents a major challenge of microbial ecology research. Nowadays, particular attention is paid to microbial communities occurring at environmental sites contaminated by recalcitrant and toxic organic compounds. Extensive research has shown that such communities evolve metabolic abilities leading to the partial degradation or complete mineralization of the contaminants. Determination of this biodegradation potential can be the starting point for the development of cost-effective biotechnological processes for the bioremediation of contaminated matrices. This work shows how metagenomics-based microbial ecology investigations supported the choice or development of three different bioremediation strategies. First, PCR-DGGE and PCR-cloning approaches served for the molecular characterization of microbial communities enriched through sequential development stages of an aerobic cometabolic process for the treatment of groundwater contaminated by chlorinated aliphatic hydrocarbons inside an immobilized-biomass packed bed bioreactor (PBR). In this case the analyses revealed homogeneous growth and structure of the immobilized communities throughout the PBR and the occurrence of dominant microbial phylotypes of the genera Rhodococcus, Comamonas and Acidovorax, which probably drive the biodegradation process. The same molecular approaches were employed to characterize sludge microbial communities selected and enriched during the treatment of municipal wastewater coupled with the production of polyhydroxyalkanoates (PHA). The known PHA-accumulating microorganisms identified were affiliated with the genera Zooglea, Acidovorax and Hydrogenophaga. Finally, the molecular investigation concerned communities of a polycyclic aromatic hydrocarbon (PAH) contaminated soil subjected to rhizoremediation with willow roots or to fertilization-based treatments. The metabolic ability to biodegrade naphthalene, as a representative model for PAHs, was assessed by means of stable isotope probing in combination with high-throughput sequencing analysis. The phylogenetic diversity of the microbial populations able to derive carbon from naphthalene was evaluated as a function of the type of treatment.
Abstract:
In technical design processes in the automotive industry, digital prototypes rapidly gain importance, because they allow for the detection of design errors in early development stages. The technical design process includes the computation of swept volumes for maintainability analyses and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With the explicit construction of the swept volume, an engineer gets evidence on how the shape of components that come too close has to be modified.

In this thesis a concept for the approximation of the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation is conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. In this work, we show that the one-sided Hausdorff distance is the adequate measure for the error of the approximation when the intended usage is clearance checks, continuous collision detection and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms are two-phased: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness.

The benchmarks for our tests are, among others, real-world scenarios from the automotive industry. Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize the meshes resulting from our implementations.
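To make the error measure concrete, here is a minimal sketch that estimates the one-sided Hausdorff distance on point samples of two surfaces. The sampling-based estimate and the k-d tree lookup are illustrative assumptions, not the algorithms developed in the thesis.

```python
# Illustrative sketch: one-sided Hausdorff distance between two surfaces,
# estimated on point samples.
import numpy as np
from scipy.spatial import cKDTree

def one_sided_hausdorff(samples_a, samples_b):
    """Max over points in samples_a of the distance to the nearest point in
    samples_b. With the approximation as the first argument and the exact
    boundary as the second, this bounds how far the conservative
    approximation deviates outward."""
    dists, _ = cKDTree(samples_b).query(samples_a)  # nearest neighbour per sample
    return dists.max()

# Toy usage: a unit sphere versus a slightly inflated (conservative) copy.
rng = np.random.default_rng(1)
a = rng.normal(size=(5000, 3))
a /= np.linalg.norm(a, axis=1, keepdims=True)
b = 1.05 * a
print(one_sided_hausdorff(b, a))  # ~0.05
```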
Abstract:
Since its discovery, the top quark has been one of the most intensively investigated subjects in particle physics. The aim of this thesis is the reconstruction of hadronically decaying top quarks with high transverse momentum (boosted tops) using the Template Overlap Method (TOM). Because of the high energy, the decay products of boosted tops overlap partially or totally, and are thus contained in a single large-radius jet (fat-jet). TOM compares the internal energy distribution of the candidate fat-jet to a sample of tops obtained from MC simulation (the templates). The algorithm is based on the definition of an overlap function, which quantifies the level of agreement between the fat-jet and the template, allowing for an efficient discrimination of signal from background contributions. A working point has been chosen to obtain a signal efficiency close to 90% with a corresponding background rejection of 70%. TOM's performance has been tested on MC samples in the muon channel and compared with previous methods in the literature. All the methods will be merged into a multivariate analysis to give a global top tagging, which will be included in a ttbar production differential cross-section measurement performed on the data acquired in 2012 at sqrt(s) = 8 TeV in the high-momentum region of phase space, where new physics processes could appear. Because its performance improves with increasing pT, the Template Overlap Method will play a crucial role in the next data taking at sqrt(s) = 13 TeV, where almost all tops will be produced at high energy, making the standard reconstruction methods inefficient.
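As a toy illustration of the overlap-function idea (a Gaussian measure of agreement between the energies found in cones around the template parton directions and the template energies, maximized over all templates), consider the sketch below; the three-parton binning, the value of sigma and the numbers are assumptions, not the analysis's actual configuration.

```python
# Illustrative sketch of a template-overlap score in the spirit of TOM.
import numpy as np

def overlap(fatjet_energies, templates, sigma):
    """Gaussian agreement between the energy collected in cones around the
    template parton directions and the template parton energies; the event
    score is the maximum over all templates (axis 0)."""
    chi2 = ((fatjet_energies - templates) ** 2 / (2.0 * sigma ** 2)).sum(axis=1)
    return np.exp(-chi2).max()

# Toy usage: 3 templates of 3 "partons" each, one matching the fat-jet well.
fatjet = np.array([120.0, 80.0, 60.0])         # GeV collected around template axes
templates = np.array([[115.0, 85.0, 55.0],
                      [200.0, 30.0, 30.0],
                      [90.0, 90.0, 80.0]])
print(overlap(fatjet, templates, sigma=20.0))  # close to 1 -> top-like
```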
Abstract:
Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, one knows from astronomical observations that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic ray observations and by attempts to solve open questions of the SM, like the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In these U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM by kinetic mixing. This allows for a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, such as electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated, and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation to calculate the signal cross section of the process, which is widely used to design such experimental setups, is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to obtain predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of the complementary experiments. In the last part, a feasibility study for probing the hidden photon model by rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
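For reference, the kinetic-mixing interaction referred to above is conventionally written as follows (a textbook form whose signs and normalization vary between papers; it is not quoted from the thesis):

```latex
\mathcal{L} \supset
  -\tfrac{1}{4} F'_{\mu\nu} F'^{\mu\nu}
  + \tfrac{\varepsilon}{2}\, F'_{\mu\nu} F^{\mu\nu}
  + \tfrac{1}{2} m_{\gamma'}^{2} A'_{\mu} A'^{\mu}
```

After diagonalizing the kinetic terms, the hidden photon couples to the SM electromagnetic current with effective strength epsilon*e, which is what makes the bremsstrahlung and resonance-search strategies described above possible.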
Abstract:
In the oil and gas industry, pore-scale imaging techniques and simulations are about to become routine applications. Their further potential can be applied in the environmental field, e.g. to the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) provides a non-destructive 3D imaging technique that is also frequently used to investigate the internal structure of geological samples. The first aim of this dissertation was to implement an image-processing technique that removes the beam-hardening artefacts of XCT and simplifies the segmentation of its data. The second aim was to investigate the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modelling in pore spaces, using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the centre, and the different phases could be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least-squares support vector machine (LSSVM) method is characterized by a modular structure and is very well suited for pattern recognition and classification. For this reason, the LSSVM was implemented as a pixel-based classification method. This algorithm is able to correctly classify complex geological samples, but then requires longer computation times, so that multidimensional training data sets must be used. The dynamics of the immiscible phases air and water are investigated for drainage and imbibition processes by a combination of pore morphology and the lattice Boltzmann method in 3D data sets of soils obtained by synchrotron-based XCT. Although pore morphology is a simple method of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused by the predominantly connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water-saturation level and is compared with macroscopic calculations of empirical models; the data agree well, especially for high water saturations. To be able to predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry and fluid-flow velocity was studied in a soil aggregate, e.g. with the microorganism Escherichia coli. The asymmetric, long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and by the heterogeneity of the flow field. The biocolloid residence time was observed to be a function of both the pressure gradient and the colloid size. Our modelling results agree very well with previously published data.
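Since the lattice Boltzmann method is central to the flow simulations described above, the following minimal single-phase D2Q9 BGK sketch shows the method's collide-and-stream structure; the grid size, relaxation time and crude velocity-shift forcing are illustrative assumptions, and the thesis's actual simulations are two-phase and 3D.

```python
# Minimal single-phase D2Q9 lattice Boltzmann (BGK) sketch -- a toy
# illustration of the collide-and-stream scheme, not the thesis's code.
import numpy as np

# D2Q9 velocity set and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                                     # relaxation time (sets viscosity)
nx, ny = 64, 32
f = np.ones((9, nx, ny)) * w[:, None, None]   # start at rest with rho = 1

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for step in range(100):
    rho = f.sum(axis=0)
    ux = (c[:, 0, None, None]*f).sum(axis=0)/rho + 1e-6  # crude body force
    uy = (c[:, 1, None, None]*f).sum(axis=0)/rho
    f += -(f - equilibrium(rho, ux, uy))/tau             # BGK collision
    for i in range(9):                                   # periodic streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mean density:", rho.mean())
```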
Abstract:
The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics, and another searching for phenomena beyond the Standard Model.

The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section, which can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb-1. The differential cross-section of pp -> Z/gamma + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and given at different levels of final state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. A good agreement, within the uncertainties, between measured cross-sections and Standard Model predictions is observed.

Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb-1, is used. The events are separated into different categories by object multiplicity. The data-driven background method, already used for the cross-section measurement, was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
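In each invariant-mass bin, a differential cross-section of this kind is conventionally extracted bin by bin as (a standard form, not quoted from the thesis):

```latex
\frac{d\sigma}{dm_{ee}} =
  \frac{N_{\mathrm{data}} - N_{\mathrm{bkg}}}
       {C \cdot L_{\mathrm{int}} \cdot \Delta m_{ee}}
```

where N_data and N_bkg are the observed and estimated background yields, C corrects for detector efficiency and resolution (and defines the level of final state radiation correction), L_int is the integrated luminosity and Delta m_ee is the bin width.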
Abstract:
EBPR (Enhanced Biological Phosphorus Removal) is a type of secondary treatment in WWTPs (WasteWater Treatment Plants), widely used in full-scale plants worldwide. Phosphorus occurring in high amounts in aquatic systems can cause eutrophication and, consequently, the death of fauna and flora. A specific biomass, the so-called PAOs (Polyphosphate Accumulating Organisms), is used to remove the phosphorus; these organisms accumulate phosphorus in the form of polyphosphate in their cells. Some of these organisms, the so-called DPAOs (Denitrifying Polyphosphate Accumulating Organisms), use nitrate or nitrite as electron acceptor, contributing in this way also to the removal of these compounds from the wastewater; however, there can be side reactions leading to the formation of nitrous oxide. The aim of this project was to simulate an EBPR process at laboratory scale, acclimatizing and enriching the specialized biomass. Two bioreactors were operated as Sequencing Batch Reactors (SBRs), one enriched in Accumulibacter, the other in Tetrasphaera (both PAOs): Tetrasphaera microorganisms are able to take up amino acids as carbon source, whereas Accumulibacter take up organic carbon (volatile fatty acids, VFA). In order to measure the removal of COD, phosphorus and nitrogen-derived compounds, different analyses were performed: spectrophotometric measurement of phosphorus, nitrate, nitrite and ammonia concentrations; TOC (Total Organic Carbon, measuring the carbon consumption); VFA via HPLC (High Performance Liquid Chromatography); total and volatile suspended solids following APHA standard methods; and qualitative characterization of the microbial population via FISH (Fluorescence In Situ Hybridization). Batch tests were also performed to monitor the NOx production. Both specialized populations accumulated as a result of the SBR operation; however, Accumulibacter were found to take up phosphate to a greater extent. Both populations were able to efficiently remove the nitrates and organic compounds occurring in the feed. The experimental work was carried out at the FCT of Universidade Nova de Lisboa (FCT-UNL) from February to July 2014.
Abstract:
Future experiments in nuclear and particle physics are moving towards the high luminosity regime in order to access rare processes. In this framework, particle detectors require high rate capability together with excellent timing resolution for precise event reconstruction. To achieve this, the development of dedicated FrontEnd Electronics (FEE) for detectors has become increasingly challenging and expensive. Thus, a current trend in R&D is towards flexible FEE that can be easily adapted to a great variety of detectors without impairing the required high performance. This thesis reports on a novel FEE for two different detector types: imaging Cherenkov counters and plastic scintillator arrays. The former requires high sensitivity and precision for the detection of single photon signals, while the latter is characterized by the slower and larger signals typical of scintillation processes. The FEE design was developed using high-bandwidth preamplifiers and fast discriminators which provide Time-over-Threshold (ToT). The use of discriminators allows for low power consumption, minimal dead times and self-triggering capabilities, all fundamental aspects for high rate applications. The output signals of the FEE are read out by a high precision TDC system based on FPGAs. A full characterization of the analogue signals under realistic conditions proved that the ToT information can be used in a novel way for charge measurements or walk corrections, thus improving the obtainable timing resolution. Detailed laboratory investigations proved the feasibility of the ToT method. The full readout chain was investigated in test experiments at the Mainz Microtron: counting rates of several MHz per channel were achieved, and a timing resolution of better than 100 ps after the ToT-based walk correction was obtained. Ongoing applications to fast Time-of-Flight counters and future developments of the FEE have also recently been investigated.
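To illustrate the idea of a ToT-based walk correction, the sketch below builds a toy pulse model in which larger pulses cross the discriminator threshold earlier (walk) and stay above it longer (ToT), then removes the amplitude-dependent walk with a polynomial calibration against ToT. The pulse model, jitter value and polynomial form are assumptions for illustration, not the thesis's calibration.

```python
# Illustrative sketch: correcting discriminator time-walk using
# Time-over-Threshold (ToT) with a polynomial calibration.
import numpy as np

rng = np.random.default_rng(2)
amp = rng.uniform(0.2, 2.0, 10000)             # pulse amplitudes (a.u.)
thr = 0.1                                      # discriminator threshold

# Toy model: the true hit time is 0; larger pulses cross the threshold
# earlier (less walk) and stay above it longer; add 20 ps intrinsic jitter.
walk = 0.5 / amp                               # ns
tot = 5.0 * np.log(amp / thr)                  # ns
t_meas = walk + rng.normal(0.0, 0.02, amp.size)

# Calibrate walk as a function of ToT, then correct each hit.
coeff = np.polyfit(tot, t_meas, 4)
t_corr = t_meas - np.polyval(coeff, tot)

print("sigma before: %.0f ps, after: %.0f ps"
      % (t_meas.std() * 1e3, t_corr.std() * 1e3))
```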
Abstract:
Perceptual closure refers to the coherent perception of an object under circumstances when the visual information is incomplete. Although the perceptual closure index observed in electroencephalography reflects that an object has been recognized, the full spatiotemporal dynamics of cortical source activity underlying perceptual closure processing remain unknown so far. To address this question, we recorded magnetoencephalographic activity in 15 subjects (11 females) during a visual closure task and performed beamforming over a sequence of successive short time windows to localize high-frequency gamma-band activity (60–100 Hz). Two-tone images of human faces (Mooney faces) were used to examine perceptual closure. Event-related fields exhibited a magnetic closure index between 250 and 325 ms. Time-frequency analyses revealed sustained high-frequency gamma-band activity associated with the processing of Mooney stimuli; closure-related gamma-band activity was observed between 200 and 300 ms over occipitotemporal channels. Time-resolved source reconstruction revealed an early (0–200 ms) coactivation of caudal inferior temporal gyrus (cITG) and regions in posterior parietal cortex (PPC). At the time of perceptual closure (200–400 ms), the activation in cITG extended to the fusiform gyrus, if a face was perceived. Our data provide the first electrophysiological evidence that perceptual closure for Mooney faces starts with an interaction between areas related to processing of three-dimensional structure from shading cues (cITG) and areas associated with the activation of long-term memory templates (PPC). Later, at the moment of perceptual closure, inferior temporal cortex areas specialized for the perceived object are activated, i.e., the fusiform gyrus related to face processing for Mooney stimuli.
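For reference, time-windowed beamforming of this kind typically uses LCMV-type spatial filter weights of the textbook form below (the study's exact variant is not specified in the abstract):

```latex
\mathbf{w}(\mathbf{r}) =
  \frac{\mathbf{C}^{-1}\,\mathbf{l}(\mathbf{r})}
       {\mathbf{l}(\mathbf{r})^{\mathsf{T}}\,\mathbf{C}^{-1}\,\mathbf{l}(\mathbf{r})}
```

where C is the sensor covariance estimated in each time window and frequency band, and l(r) is the leadfield of a fixed-orientation source at location r; scanning r over a source grid yields the maps of gamma-band activity.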
Abstract:
In the past few decades the impacts of climate warming have been significant in alpine glaciated regions. Many valley glaciers formerly linked as distributary glaciers to high-level icecaps have decoupled at their icefalls, exposing major escarpments and generating a suite of dynamic landforms dominated by mass wasting. Ice-dominated landforms, here termed icy debris fans, develop rapidly by ice avalanching, rockfall, and icy debris flow. Field-based reconnaissance studies at two alpine settings, the Wrangell Mountains of Alaska and the Southern Alps of New Zealand, provide a preliminary morphogenetic model of the spatial and temporal evolution of icy debris fans in a range of alpine settings. The influence of these processes on landform evolution is largely unrecognized in the literature dealing with post-glacial landform adjustment, known as the paraglacial. A better understanding of these dynamic processes will be increasingly important because of the extreme geohazards characterizing these areas. Our field studies show that after glacier decoupling, icy debris fans begin to form along the base of bedrock escarpments at the mouths of catchments and prograde over valley glaciers. The presence of a distinct catchment, apex, and fan morphology distinguishes these landforms from other landforms common in periglacial hillslope settings receiving abundant clastic debris and ice. Ice avalanching is the most abundant process involved in icy debris fan formation. Fans developed below weakly incised catchments are dominated by ice avalanching and are composed primarily of ice with minor lithic detritus. Typically, avalanches fall into the fan catchments, where sediments transform into grainflows that flow onto the fans. Once on the fans, avalanche deposits ablate rapidly, flattening and concentrating lithic fragments at the surface. Icy debris fans may become thick enough to become glaciers with splay crevasse systems. Fans developed below larger, more complex catchments are composed of higher proportions of lithic detritus resulting from temporary storage of ice and lithic detritus within the catchment. Episodic outbursts of meltwater from the icecap may mix with the stored sediments and mobilize icy debris flows (mixtures of ice and lithic clasts) onto the fans. Our observations indicate that the entire evolutionary cycle of icy debris fans probably occurs during an early paraglacial interval (i.e., decades to 100 years). Observations comparing avalanche frequency, volume, and fan morphologic evolution at the Alaska site between 2006 and 2010 illustrate a complex response among icy debris fans even within the same cirque, where one fan may be growing while others are downwasting because of differences in ice supply controlled by their respective catchments and icecap contributions. As ice supply from the icecap diminishes through time, icy debris fans rapidly downwaste and eventually evolve into talus cones that receive occasional but ephemeral ice avalanches.