866 results for Technological developments


Relevance: 20.00%

Abstract:

This thesis work was divided into three parts. The main topic was the "Study of the antioxidant component of oils obtained from olives using different technological systems and parameters". It is well known that the oxidative quality of an olive oil depends not only on its fatty acid composition but also on the presence of compounds characterized by high antioxidant activity, namely the phenolic substances. Phenolic compounds therefore make a major contribution to the shelf life of extra virgin olive oil. Moreover, strong correlations have been found between some of these substances and the positive sensory attributes of bitterness and pungency. It should also be stressed that the antioxidant power of the phenolic compounds of virgin olive oils has attracted considerable interest in recent years, since it is related to protection against certain pathologies such as vascular, degenerative and tumoral diseases. The phenolic content of olive oils depends on several factors: cultivar, cultivation method, ripeness of the olives and, of course, the technological operations, since these can change the amount of these compounds that is extracted. In light of the above, we evaluated the influence of agronomic factors (organic, integrated and conventional farming methods) and technological factors (lowering the temperature of the raw material, addition of processing aids during crushing and malaxation, comparison of three extra virgin olive oils obtained with different technological systems) on the phenolic content of edible oils obtained from olives (papers 1, 3 and 4). Besides phenolic substances, olive oils contain other compounds with important chemical and nutritional properties; among these are the phytosterols, i.e. the sterols typical of the plant kingdom, which represent the quantitatively most important fraction of the unsaponifiable matter after the hydrocarbons. The qualitative and quantitative sterol composition of an olive oil is one of the most important analytical characteristics in the assessment of its genuineness; in fact, the sterol fraction differs significantly according to botanical origin and is therefore used to distinguish oils and their blends from one another. The main sterol in olive oil is β-sitosterol; a content of this compound below 90% is an approximate indicator of the addition of some other oil. β-Sitosterol is also important from a health point of view, since it counteracts the absorption of cholesterol. While the literature contains numerous studies on the antioxidant power of a series of compounds present in virgin olive oil (the already mentioned polyphenols, but also carotenoids and tocopherols), as well as research showing that other compounds can instead promote lipid oxidation, little information is still available on the antioxidant power of sterols and 4-methylsterols. For this reason we evaluated the sterol composition of extra virgin olive oils obtained with different extraction technologies and the influence of these substances on their oxidative stability (paper 2). It has recently been reported in the literature that cellular lipids detected by nuclear magnetic resonance (NMR) spectroscopy are of strategic importance from a functional and metabolic point of view. On the one hand, these lipids have been associated with the development of malignant neoplastic cells and with cell death; on the other hand, they have also proved to be messengers of benign processes such as the activation and proliferation of normal cell growth. Within this research, a collaboration was established between the Department of Biochemistry "G. Moruzzi" and the Department of Food Science of the University of Bologna. The lipochemistry group of the Department of Food Science, headed by Prof. Giovanni Lercker, has long been studying lipid fractions by means of the main chromatographic techniques. The aim of this collaboration was to characterize the total lipid fraction extracted from healthy and neoplastic human kidney tissues through the combined use of different analytical techniques: nuclear magnetic resonance (1H and 13C NMR), thin layer chromatography (TLC), high performance liquid chromatography (HPLC) and gas chromatography (GC) (papers 5, 6 and 7).

Relevance: 20.00%

Abstract:

Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment

Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material. The cap and containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on site bioremediation, have been developed in recent years. In situ bioremediation is a technique applied to soil and groundwater at the site, without removing the contaminated soil or groundwater, based on the provision of optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater that have been removed from the site via excavation (soil) or pumping (water); hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner.

1.4 Bioavailability of PAH in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than in a separate phase (NAPL, non-aqueous phase liquids). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the soil solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1. As shown in Figure 1, biodegradation processes take place in the soil solution, while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, several other factors influence the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAH in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of synthetic surfactants may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
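To make the rate-limiting role of desorption described in section 1.4 concrete, the following is a minimal sketch (not taken from the thesis; the function name, rate constants and concentrations are illustrative assumptions) of a two-compartment model in which a PAH desorbs from the sorbed pool into solution with a first-order rate constant k_des and is biodegraded from solution with a first-order rate constant k_bio:

import numpy as np

def two_compartment_model(S0, C0, k_des, k_bio, t_end, dt=0.01):
    """Explicit Euler integration of a sorbed/dissolved PAH pool (illustrative only).

    S0, C0  : initial sorbed and dissolved amounts (common mass basis)
    k_des   : first-order desorption rate constant (1/day)
    k_bio   : first-order biodegradation rate constant in solution (1/day)
    """
    n = int(t_end / dt)
    S, C, degraded = S0, C0, 0.0
    for _ in range(n):
        desorbed = k_des * S * dt          # mass moving from soil to solution
        removed = k_bio * C * dt           # mass degraded by microorganisms
        S -= desorbed
        C += desorbed - removed
        degraded += removed
    return S, C, degraded

# Illustrative run: slow desorption (bioavailability-limited) vs fast desorption
for k_des in (0.01, 1.0):
    S, C, d = two_compartment_model(S0=100.0, C0=0.0, k_des=k_des, k_bio=0.5, t_end=30)
    print(f"k_des={k_des}: degraded after 30 d = {d:.1f} (still sorbed: {S:.1f})")

When k_des is much smaller than k_bio, the overall removal is controlled by desorption rather than by microbial activity, which is precisely the bioavailability limitation discussed above.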

Relevance: 20.00%

Abstract:

This PhD thesis describes the development of technological models for obtaining foods and ingredients of high health value that preserve the characteristics of the final product while enriching it with nutritional components. In particular, the main object of my research has been Virgin Olive Oil (VOO) and its important antioxidant compounds, which differentiate it from all other vegetable oils. It is well known that the qualitative and quantitative presence of phenolic molecules extracted from olives during oil production is fundamental for the oil's oxidative and nutritional quality. For this purpose, the agronomic and technological conditions of its production have been investigated, and how this fraction can be better preserved during storage has also been examined. Moreover, its relation to the sensory characteristics of VOO and its interaction with a protein in food emulsions have also been studied. Finally, experimental work was carried out to determine the antioxidant and heat-resistance properties of a new antioxidant (EVS-OL) when used for high-temperature frying, such as is typically employed in the preparation of french fries. The results of this research have been submitted for publication, and some data have already been published in national and international scientific journals.

Relevance: 20.00%

Abstract:

This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a widespread practice in most real markets, which are mostly characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades due to technological advances that have allowed companies to gather more and more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination. The theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency. The different approaches to modelling non-linear pricing are then reviewed; in particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes. Information about consumers' reservation prices is incomplete and the production technology is characterized by size economies. The model provides insights into the sizes of the product that one finds in the market. Four equilibrium regions are identified, depending on the relative intensity of size economies with respect to consumers' evaluation of the good: regions in which the product is supplied in a single unit, in several different sizes, or only in a very large one. Both the private and the social desirability of non-linear pricing vary across the different equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues seem to be at the core of the recent debate on whether or not the internet should be regulated. One of the main questions posed is whether the telecom companies, which own the networks constituting the internet, should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices and the participation of providers and final users in the internet. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.
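As a compact point of reference for the discussion of price discrimination above, the following is a standard textbook screening (second-degree price discrimination) setup for a single seller with two consumer types; it is a hedged illustration of what a menu of quantity-payment pairs means, not a model taken from the thesis. With privately known taste parameters \(\theta_L < \theta_H\), a share \(\lambda\) of low types, gross utility \(\theta_i v(q)\) with \(v\) increasing and concave, and unit cost \(c\), the seller chooses a menu \((q_L,T_L),(q_H,T_H)\) solving

\[
\max_{(q_L,T_L),\,(q_H,T_H)} \; \lambda\,(T_L - c\,q_L) + (1-\lambda)\,(T_H - c\,q_H)
\]

subject to, for \(i,j \in \{L,H\}\),

\[
\theta_i\,v(q_i) - T_i \;\ge\; 0 \quad \text{(participation)}, \qquad
\theta_i\,v(q_i) - T_i \;\ge\; \theta_i\,v(q_j) - T_j \quad \text{(incentive compatibility)}.
\]

Under oligopoly, as the first essay stresses, each consumer's outside option is no longer zero but the best menu offered by rival firms, which is the competitive externality that ties the scope for discrimination to product differentiation.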

Relevance: 20.00%

Abstract:

The theory of the 3D multipole probability tomography method (3D GPT), used to image the source poles, dipoles, quadrupoles and octopoles of a geophysical vector or scalar field dataset, is developed. A geophysical dataset is assumed to be the response of an aggregation of poles, dipoles, quadrupoles and octopoles. These physical sources are used to reconstruct, without a priori assumptions, the most probable position and shape of the true buried geophysical sources, by determining the location of their centres and the critical points of their boundaries, such as corners, wedges and vertices. This theory is then adapted to the geoelectrical, gravity and self-potential methods. A few synthetic examples using simple geometries and three field examples are discussed in order to demonstrate the notably enhanced resolution power of the new approach. First, the application to a field example related to a dipole-dipole geoelectrical survey carried out in the archaeological park of Pompei is presented. The survey aimed to recognize remains of the ancient Roman urban network, including roads, squares and buildings, buried under the thick pyroclastic cover that fell during the 79 AD Vesuvius eruption. The revealed anomaly structures are ascribed to well-preserved remnants of some aligned walls of Roman edifices, buried and partially destroyed by the 79 AD Vesuvius pyroclastic fall. Then, a field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging as accurately as possible the differential mass density structure within the first few km of depth inside the volcanic apparatus. An assemblage of vertical prismatic blocks appears to be the most probable gravity model of the Etna apparatus within the first 5 km of depth below sea level. Finally, an experimental SP dataset collected in the Mt. Somma-Vesuvius volcanic district (Naples, Italy) is elaborated in order to define the location and shape of the sources of two SP anomalies of opposite sign detected in the northwestern sector of the surveyed area. The modelled sources are interpreted as the polarization state induced by an intense hydrothermal convective flow mechanism within the volcanic apparatus, from the free surface down to a depth of about 3 km b.s.l.
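To make the notion of "probability" used here concrete, the occurrence probability attached to an elementary source (pole, dipole, quadrupole or octopole) placed at a trial point r_q is generally computed as a normalized cross-correlation between the observed anomaly field A(r) and the theoretical field s_p(r; r_q) of that elementary source over the survey surface S. The expression below is a hedged sketch of this general form, with notation assumed here rather than taken from the thesis:

\[
\eta_p(\mathbf{r}_q) \;=\;
\frac{\displaystyle\int_S A(\mathbf{r})\, s_p(\mathbf{r};\mathbf{r}_q)\, \mathrm{d}S}
     {\left[\displaystyle\int_S A^2(\mathbf{r})\, \mathrm{d}S \;\int_S s_p^2(\mathbf{r};\mathbf{r}_q)\, \mathrm{d}S\right]^{1/2}} .
\]

By the Schwarz inequality, \(-1 \le \eta_p \le 1\); values close to \(\pm 1\) mark the most probable positions of source centres (lowest order) or of boundary critical points such as corners, wedges and vertices (higher orders).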

Relevance: 20.00%

Abstract:

Communication and coordination are two key aspects in open distributed agent systems, both being responsible for the integrity of the system's behaviour. An infrastructure capable of handling these issues, like TuCSoN, should be able to exploit the modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the ability of the TuCSoN infrastructure to cope with the new possibilities, in hardware and software, offered by mobile technology. The scenarios we are going to configure are related to the distributed nature of multi-agent systems, where an agent should be located and run directly on a mobile device. We address the new mobile technology frontier represented by smartphones running Google's Android operating system. The analysis and deployment of a distributed agent-based system of this kind first has to face qualitative and quantitative considerations about the available resources. The engineering issue at the base of our research is to use TuCSoN within the reduced memory and computing capability of a smartphone, without loss of functionality, efficiency and integrity for the infrastructure. The thesis work is organized on two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, totally orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad hoc client-side release.

Relevance: 20.00%

Abstract:

The hydrologic risk (and the hydro-geologic one, closely related to it) is, and has always been, a very relevant issue, due to the severe consequences that may be provoked by a flooding or by waters in general in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, the flood forecasting plays an essential role in the hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, in recent decades the flood forecasting has made a significant progress, nonetheless, models are imperfect, which means that we are still left with a residual uncertainty on what will actually happen. In this thesis, this type of uncertainty is what will be discussed and analyzed. In operational problems, it is possible to affirm that the ultimate aim of forecasting systems is not to reproduce the river behavior, but this is only a means through which reducing the uncertainty associated to what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature confusion is often made on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should be the choice of the intervention strategy to adopt based on the evaluation of the model prediction based on its ability to represent the reality or on the evaluation of what actually will happen on the basis of the information given by the model forecast? Once the previous idea is made unambiguous, the other main concern of this work is to develope a tool that can provide an effective decision support, making possible doing objective and realistic risk evaluations. In particular, such tool should be able to provide an uncertainty assessment as accurate as possible. This means primarily three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability will have to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a horizon time related to that required to implement the intervention strategy and it is also necessary to assess the probability of the flooding time.
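As a minimal, hedged sketch of the kind of quantity such a decision-support tool has to deliver (the function name, the synthetic ensemble and the threshold below are illustrative assumptions, not the thesis's actual model), one can estimate the probability of exceeding a flooding threshold within a given time horizon from an ensemble of water-level forecasts:

import numpy as np

def flooding_probability(ensemble, threshold, horizon_steps):
    """Estimate P(water level exceeds threshold within the intervention horizon).

    ensemble      : array of shape (n_members, n_lead_times) with forecast water levels
    threshold     : flooding/warning level, in the same units as the forecasts
    horizon_steps : number of lead times covered by the intervention horizon
    """
    peak_in_horizon = ensemble[:, :horizon_steps].max(axis=1)   # peak level per member
    exceed = peak_in_horizon > threshold                        # members that flood
    p_flood = exceed.mean()                                     # fraction of members flooding
    # empirical distribution of the first exceedance time, for flooding members only
    first_times = np.array([np.argmax(m[:horizon_steps] > threshold)
                            for m in ensemble if (m[:horizon_steps] > threshold).any()])
    return p_flood, first_times

# Illustrative use with a synthetic 50-member ensemble of hourly water levels
rng = np.random.default_rng(0)
ens = 2.0 + np.cumsum(rng.normal(0.02, 0.05, size=(50, 48)), axis=1)
p, t_first = flooding_probability(ens, threshold=3.0, horizon_steps=24)
print(f"P(flooding within 24 h) = {p:.2f}")

The returned exceedance probability and the empirical distribution of the flooding time are exactly the two pieces of information that, according to the abstract, must be matched to the time needed to implement the intervention strategy.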

Relevance: 20.00%

Abstract:

Lipids are components that contribute very significantly to the nutritional and technological quality of foods; at the same time, they are the least stable macro-components in foods, due to their high susceptibility to oxidation. When rancidity takes place, it makes food unhealthy and unacceptable to consumers. Thus, antioxidants, either naturally present in or added to foods, are required to enhance the shelf life of foods. Moreover, antioxidants such as phenolic compounds play an important role in human health, enhancing the functionality of foods. The aim of this PhD project was the study of lipid quality and lipid oxidation in different vegetable foods, focusing on analytical and technological aspects in order to determine the effects of lipid composition and of the addition of bioactive compounds (phenolic compounds, omega-3 fatty acids and dietary fiber) on their shelf life. In addition, the bioavailability and antioxidant effects of phenolic compounds were evaluated in humans and animals, respectively, after consumption of vegetable foods. The first section of the work focused on the evaluation of the impact of lipid quality on the technological behaviour of vegetable foods. To this end, cocoa butters with different melting points were evaluated by chromatographic techniques (GC, TLC); the sample with the higher melting point showed fatty acid, triglyceride, 2-monoglyceride and FT-IR profiles different from those of genuine cocoa butter, indicating the addition of a foreign fat (lauric fat) not allowed by law. Regarding the lipid quality of other vegetable foods, an accelerated shelf life test (OXITEST®) was used to evaluate the oxidative stability of lipids in tarallini snacks made with different lipid matrices (sunflower oil, extra virgin olive oil, and a blend of extra virgin olive oil and lard). The results showed a good ability of OXITEST® to discriminate between degrees of lipid unsaturation and different cooking times, without any fat extraction from the samples. In the second section, the role of bioactive compounds in the shelf life of cereal-based foods was studied in different bakery products by GC, spectrophotometric methods and capillary electrophoresis. The relationships between phenolic compounds, added with the flour, and the lipid oxidation of tarallini and frollini were examined. Both products showed an increase in lipid oxidation during storage, and the antioxidant effects on lipid oxidation were not as expected. Furthermore, the influence of enrichment in polyunsaturated fatty acids on the lipid oxidation of pasta was evaluated. The results showed that LC n-3 PUFA were not significantly implicated in the onset of oxidation in spaghetti stored under daylight or subjected to accelerated oxidation in a laboratory heater. The importance of phenolic compounds as antioxidants in humans and rats was also studied by HPLC/MS in the last section. For this purpose, the excretion of apigenin and apigenin glycosides was investigated in the urine of six women over 24 hours. After a single dose of steamed artichokes, both aglycone and glucuronide metabolites were recovered in 24 h urine. Moreover, the effects of whole grain durum wheat bread and whole grain Kamut® khorasan bread were evaluated in rats. Both cereals were good sources of antioxidants, but animals fed Kamut® bread had a better response to stress than those fed durum wheat, especially when sourdough bread was supplied.

Relevance: 20.00%

Abstract:

The present study aims at assessing the innovation strategies adopted within a regional economic system, the Italian region Emilia-Romagna, as it faced the challenges of a changing international scenario. As the strengthening of regional innovative capabilities is regarded as a keystone for fostering a new phase of economic growth, it is also important to understand how the local industrial, institutional, and academic actors have tackled the problem of innovation in the recent past. In this study we explore the approaches to innovation and the strategies adopted by the main regional actors through three different case studies. Chapter 1 provides a general survey of the innovative performance of the regional industries over the past two decades, as it emerges from statistical data and systematic comparisons at the national and European levels. The chapter also discusses the innovation policies that the regional government has set up since 2001 in order to strengthen collaboration among local economic actors, including universities and research centres. As mechanics is the most important regional industry, chapter 2 analyses the combination of knowledge and practices utilized from the 1960s to the 1990s in the design of a particular kind of machinery produced by G.D S.p.A., a world leader in the market for tobacco packaging machines. G.D is based in Bologna, the region's capital, and is at the centre of the most important Italian packaging district. In chapter 3 the attention turns to the institutional level, focusing on how the local public administrations and the local publicly owned utility companies have dealt with the creation of new telematic networks on the regional territory during the 1990s and 2000s. Finally, chapter 4 assesses the technology transfer carried out by the main university of the region, the University of Bologna, by focusing on the patenting activities involving its research personnel in the period 1960-2010.

Relevance: 20.00%

Abstract:

The main objective of the dissertation is to illustrate how the social and educational aspects (in close interaction with other multifunctional aspects of organic agriculture) developed on different multifunctional organic farms in Italy and the Netherlands, as well as the established agricultural policy frameworks in these countries, can be compared with the situation of the Croatian organic sector and can contribute to the further development of organic agriculture in the Republic of Croatia. Through its different chapters, the dissertation describes the performance of the organic agriculture sectors in Italy, the Netherlands and Croatia within the national agricultural policy frameworks; it analyzes the role of national institutions and policy in Croatia in connection with Croatia's status as a candidate country for entrance into the EU and the harmonization of its legislation with the CAP; and it analyzes the role of national authorities, universities and research centres, as well as of private initiatives, NGOs and cooperatives, in organic agriculture in the Netherlands, Italy and Croatia. Its main part describes how social and educational aspects interact with other multifunctional aspects of organic agriculture and analyzes the benefits and contribution of multifunctional activities performed on organic farms to education, healthy nourishment, environmental protection and health care. It also assesses the strengths and weaknesses of organic agriculture in all the countries studied. The dissertation concludes with development opportunities for multifunctional organic agriculture in Croatia, as well as perspectives and recommendations for different approaches based on lessons learned from successful EU models, accompanied by some personal ideas and proposals.

Relevance: 20.00%

Abstract:

The subject of this work is the revision of Council Directive 89/552/EEC on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the pursuit of television broadcasting activities, which for practical reasons is usually referred to as the "(EC) Television Directive". It forms the cornerstone of the EU's audiovisual policy. Since the Television Directive was adopted in 1989, however, technological progress has increasingly brought about enormous changes, not only in the field of traditional television but also, and above all, in the field of the new media. The starting point for this is the improvement of digital technology, which in turn favours processes of technical convergence. These developments not only lead to a multiplication of transmission capacities and techniques, but also enable, alongside new forms of audiovisual offerings, the emergence of new services. Our media landscape is facing "epochal upheavals". In view of these processes, a revision of the EC Television Directive has been pursued for some time, in order to do justice to technological progress in regulatory terms as well. This work is dedicated to that revision process: in a first part it explains the Television Directive with regard to its content, its legislative history and the decisions of the European Court of Justice relating to it. Subsequently, all revision procedures of the Television Directive since 1997 are presented, so that the current reform approaches can then be analyzed and evaluated. For reasons of time (the Commission's new proposal for a directive of 13 December 2005 was adopted about two weeks before the submission deadline of this work), the discussion of the draft of the new "Audiovisual Media Services Directive" is, however, kept relatively brief.

Relevance: 20.00%

Abstract:

This work concerned the synthesis and characterization of innovative crystals for biomedical and technological applications. Different types of syntheses were developed in order to obtain crystals with high photocatalytic properties. A hydrothermal synthesis was also carried out in order to correlate the chemical-physical characteristics with the synthesis parameters, obtaining titanium dioxide nanoparticles with different morphologies, sizes and crystalline phases depending on the variation of the synthesis parameters. A synthesis in water at 80 °C and low pressure was also developed, from which anatase nanoparticles containing a small percentage of brookite were obtained, presenting high photocatalytic activity. These particles were used to obtain microcrystals formed by an inorganic core of hydroxyapatite covered on the surface by TiO2 nanoparticles; a micrometric material with higher photocatalytic activity was thus produced. The same nanoparticles were functionalized with oxidized resorcinol in order to increase the photocatalytic efficiency, and the results of photodegradation tests confirmed this increase. Finally, nanoparticles were prepared by a water-free synthesis using formic acid and octanol, through in situ esterification; nanoparticles superficially covered by carboxylic residues, able to bind a wide range of molecules and thus acquire further photocatalytic properties, were obtained.

Relevance: 20.00%

Abstract:

Reactive halogen compounds are known to play an important role in a wide variety of atmospheric processes, such as the atmospheric oxidation capacity and coastal new particle formation. In this work, novel analytical approaches combining diffusion denuder/impinger sampling techniques with gas chromatographic–mass spectrometric (GC–MS) determination are developed to measure activated chlorine compounds (HOCl and Cl2), activated bromine compounds (HOBr, Br2, BrCl, and BrI), activated iodine compounds (HOI and ICl), and molecular iodine (I2). The denuder/GC–MS methods have been applied to field measurements in the marine boundary layer (MBL). High mixing ratios (of the order of 100 ppt) of activated halogen compounds and I2 are observed in the coastal MBL in Ireland, which explains the ozone destruction observed there. The emission of I2 is found to correlate inversely with tidal height and positively with the levels of O3 in the surrounding air. In addition, the release is found to be dominated by algal species composition and biomass density, which supports the "hot-spot" hypothesis of atmospheric iodine chemistry. The observations of elevated I2 concentrations substantially support the existence of higher concentrations of littoral iodine oxides and thus the connection to the strong ultra-fine particle formation events in the coastal MBL.
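As a minimal sketch of the unit conversion behind a statement such as "mixing ratios of the order of 100 ppt" (the collected mass, sampled air volume and conditions below are illustrative assumptions, not measurement data from the thesis), the mixing ratio follows from the moles of analyte collected per mole of air sampled:

R = 8.314  # J mol^-1 K^-1, ideal gas constant

def mixing_ratio_ppt(analyte_mass_ng, molar_mass_g, air_volume_L, T_K=288.0, p_Pa=101325.0):
    """Convert an analyte mass collected from a sampled air volume to a mixing ratio in ppt."""
    n_analyte = (analyte_mass_ng * 1e-9) / molar_mass_g    # mol of analyte collected
    n_air = p_Pa * (air_volume_L * 1e-3) / (R * T_K)        # mol of air sampled (ideal gas law)
    return n_analyte / n_air * 1e12                          # parts per trillion (mol/mol)

# Illustrative example: 25 ng of I2 (M = 253.8 g/mol) collected from 20 L of air
print(f"{mixing_ratio_ppt(25, 253.8, 20):.0f} ppt")  # roughly 100 ppt

With these assumed numbers the result is of the order of 100 ppt, the magnitude reported for the coastal MBL measurements above.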

Relevance: 20.00%

Abstract:

This dissertation deals with the whole-rock analysis of stable silicon isotopes by means of multi-collector ICP-MS. The analyses were carried out in cooperation with the Royal Museum for Central Africa in Belgium. One of the focal points of the first chapter is the first analysis of the δ30Si value on a conventional Nu Plasma™ multi-collector ICP-MS instrument, achieved by eliminating the 14N16O interference that overlaps the 30Si peak. The analysis of δ30Si was made possible by technical modifications of the instrument that allowed a higher mass resolution. The careful characterization of an adequate reference material is indispensable for estimating the accuracy of a measurement, and the determination of U.S. Geological Survey reference materials constitutes the second focal point of this chapter. The analysis of two Hawaiian standards (BHVO-1 and BHVO-2) demonstrates precise and accurate δ30Si determination and provides comparative data for quality control in other laboratories. The second chapter deals with combined silicon and oxygen isotopes for the investigation of the origin of the silicification of volcanic rocks of the Barberton Greenstone Belt, South Africa. In contrast to today, the silicification of near-surface layers, including chert formation, was a widespread process on the Precambrian ocean floor. These horizons bear witness to an extreme mobilization of silicon in the early history of the Earth. This chapter presents the analysis of silicon and oxygen isotopes in three different rock profiles with variably silicified basalts and overlying bedded cherts of the 3.54, 3.45 and 3.33 Ga old Theespruit, Kromberg and Hooggenoeg Formations. Silicon isotopes, oxygen isotopes and SiO2 contents show a positive correlation with the degree of silicification in all three rock profiles, but with different slopes of the δ30Si-δ18O relationships. Seawater is regarded as the source of the silicon for the silicification process. Calculations have shown that classical water-rock interaction cannot account for the silicon isotope variation, since the concentration of Si in seawater is too low (49 ppm). The data are consistent with a two-endmember mixing, with basalt and chert as the respective endmembers. Our present data on the cherts confirm an increase in the isotopic composition over time. Possible factors that could be responsible for the different slopes of the δ30Si-δ18O relationships are changes in the seawater isotopic composition, changes in the water temperature, or secondary alteration effects. The last chapter addresses potential variations in the source region of Archean granitoids: the Si isotope perspective. Sodium-rich tonalite-trondhjemite-granodiorite (TTG) intrusives represent large parts of the Archean crust; in contrast, the present-day crust is more potassium-rich (GMS group: granite-monzonite-syenite). The processes that led to the change from sodium-rich to potassium-rich crust are the topic of this chapter. Silicon isotope measurements were combined here with major and trace element analyses on different generations of the 3.55 to 3.10 Ga old TTG and GMS intrusives from the study area. The δ30Si values in the different pluton generations show a slight increase in the isotopic composition with time, with sodium-rich intrusives exhibiting the lowest Si isotopic composition. This slight increase in the silicon isotopic composition over time could point to different temperature conditions in the source region of the granitoids; the formation of Na-rich granitoids with light δ30Si would accordingly take place at higher temperatures. The similarity of the δ30Si values in Archean K-rich plutonic rocks and Phanerozoic K-rich plutonic rocks is also evident.
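For reference, the two-endmember mixing invoked above is normally written as a standard isotope mass balance (a generic formula, not copied from the dissertation). For a basalt endmember B and a chert endmember C with Si concentrations \(C_B\) and \(C_C\), isotopic compositions \(\delta^{30}\mathrm{Si}_B\) and \(\delta^{30}\mathrm{Si}_C\), and a basalt mass fraction \(f\) in the mixture:

\[
\delta^{30}\mathrm{Si}_{\mathrm{mix}} \;=\;
\frac{f\,C_B\,\delta^{30}\mathrm{Si}_B + (1-f)\,C_C\,\delta^{30}\mathrm{Si}_C}
     {f\,C_B + (1-f)\,C_C} .
\]

With the measured SiO2 contents, \(f\) can be estimated sample by sample and the predicted δ30Si of the mixture compared with the measured whole-rock value; the analogous relation holds for δ18O with the corresponding oxygen concentrations.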

Relevance: 20.00%

Abstract:

This work investigates the influence of chemical reactions on the release of elements from target-ion source units for ISOL facilities. The methods employed are thermochromatography and yield and hold-up time measurements; adsorption enthalpies have been determined for Ag and In. The results obtained with these methods are consistent. Elements exhibit reversible or irreversible reactions on different surfaces (tantalum, quartz, sapphire). The interactions with surfaces inside the target-ion source unit can be used to improve the quality of radioactive ion beams. Spectroscopic data obtained at CERN-ISOLDE using a medium-temperature quartz transfer line show the effectiveness of selective adsorption for beam purification. New gamma lines of 131Cd have been observed and a tentative decay scheme is presented.