Abstract:
Ambulatory electrocardiogram (ECG) monitoring makes it possible to follow the patient's everyday activities over periods of 24 hours (or even longer), enabling the study of cases that could present fatal arrhythmic episodes. However, the greatest technological challenge faced by this type of monitoring is the loss of information caused by noise and artifacts when the patient moves. The analysis of the QT interval of ventricular depolarization and repolarization in the surface electrocardiogram is a non-invasive technique of great value for the diagnosis and prognosis of cardiopathies and neuropathies, as well as for the prediction of sudden cardiac death. The analysis of the standard deviation of the QT interval provides information about the (temporal or spatial) dispersion of ventricular repolarization; however, the influence of noise causes errors in the detection of the T-wave end that are appreciable given the small values of the QT standard deviation in both pathological and healthy subjects. The general objective of this thesis is to improve methods for processing the ambulatory ECG signal using computational intelligence, specifically the methods related to the detection of the T-wave end and to the morphological recognition of beats that invalidate the analysis of QT interval variability. A new method and algorithm for estimating the T-wave end, based on the computation of trapezoid areas, is proposed and validated (in terms of accuracy and precision) using signals from the Physionet QT database. The performance of the proposed method was tested and compared with one of the most widely used methods for detecting the T-wave end: the method based on a threshold on the first derivative. The suggested computational intelligence method combines feature extraction using nonlinear principal component analysis with a multilayer perceptron neural network. The trapezoid-area method performed well under noisy conditions and does not depend on any empirical threshold, making it suitable for situations with high levels of broadband noise. The morphological beat recognition method was evaluated with ambulatory signals with and without artifacts belonging to internationally recognized databases, and showed good performance.
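The abstract names the trapezoid-area criterion for locating the T-wave end but does not give the formula. Below is a minimal Python sketch of one common trapezium-area formulation of this idea, written for illustration only: the function name, the choice of reference point and the exact area expression are assumptions, not the thesis's algorithm.

import numpy as np

def t_wave_end_trapezoid(t, ecg, i_peak, i_ref):
    # t, ecg : time and amplitude arrays of one beat (T-peak already located)
    # i_peak : index of the T-wave peak; i_ref : index of a reference point
    #          chosen safely after the expected T-wave end.
    # Returns the index that maximizes the trapezoid area, taken as the T-wave end.
    xm, ym = t[i_peak], ecg[i_peak]
    xr = t[i_ref]
    best_i, best_area = i_peak, -np.inf
    for i in range(i_peak + 1, i_ref):
        xi, yi = t[i], ecg[i]
        # Area of the trapezoid built on the T-peak, the moving point (xi, yi)
        # and the reference abscissa xr; it reaches its maximum near the T-wave end.
        area = 0.5 * (ym - yi) * (2.0 * xr - xi - xm)
        if area > best_area:
            best_area, best_i = area, i
    return best_i

Because the criterion is a geometric maximum rather than an amplitude or slope threshold, it tends to degrade gracefully under broadband noise, which is consistent with the behaviour reported in the abstract.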
Abstract:
This dissertation takes as its object of study the Brazilian academic production on Human Trafficking, mainly the trafficking of women for sexual exploitation in Brazil. The guiding question of the investigation was: How has the methodological perspective of articulating the categories of social class, race/ethnicity and gender been employed in the dissertations and theses whose object of study is human trafficking? The categories of analysis of this research were: human trafficking, trafficking of women, gender, social class, race/ethnicity. The objective was to understand how the methodological perspective of articulating the categories of social class, race/ethnicity and gender has been employed in the dissertations and theses whose object of study is human trafficking and the trafficking of women. The research was anchored in a qualitative approach, with emphasis on the review and bibliographic analysis of works on these categories and on the content of the dissertations and theses. From a universe of 20 (twenty) master's dissertations and 01 (one) doctoral thesis, 13 (thirteen) dissertations and one thesis were selected. The methodological approach, grounded in historical-dialectical materialism and linked to the articulation of social class, gender and race/ethnicity, made it possible to observe the advances and the limits of this methodological proposal in the examination of studies on the trafficking of persons and of women in Brazil. The results led to the conclusion that, although the methodology of articulating social class, gender and race/ethnicity is acknowledged by the authors, there is no analytical deepening of the proposal, with a centrality observed in the category of gender and in the terminology of poverty as the main determinants of human trafficking, particularly the trafficking of women.
Abstract:
Graduate Program in Geography - IGCE
Abstract:
Graduate Program in Human Development and Technologies - IBRC
Abstract:
Graduate Program in Geography - FCT
Abstract:
Graduate Program in History - FCLAS
Abstract:
This paper presents some of the academic contributions of José Marques de Melo to the study and propagation of the Folkcomunication Theory developed by Luiz Beltrão in his 1967 PhD thesis. Marques de Melo, a disciple of Beltrão, is one of the most representative names in international scientific studies of communication. Divided into three parts, this text covers theoretical contributions (the conception and classification of genres, formats and types in the taxonomy of Folkcomunication); dialogues with other thinkers (McLuhan, Morin and Freire); and an empirical contribution (the internet as a Folk tool). We emphasize that the contributions of Marques de Melo are fundamental for new generations to understand the importance of Folkcomunication and to see it as a complex communicational system.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are thought to form by accretion of matter and merging of smaller units. During merger events, shocks driven by the gravity of the dark matter propagate through the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), a finding of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present modelling. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce within the re-acceleration scenario the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters, and in particular the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass contained within the Radio Halo region, M_H. In particular, this last “geometrical” M_H–R_H correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (P_R–R_H, P_R–M_H, P_R–T, P_R–L_X, . . . ) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster, and this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass and thus that the non-thermal component in clusters is not self-similar.
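As a hedged illustration of the spectral cut-off argument invoked in this abstract (a standard order-of-magnitude balance, not the thesis's full calculation), the maximum Lorentz factor of the re-accelerated electrons follows from equating a systematic acceleration rate, set by an acceleration timescale \tau_{\rm acc}, with the synchrotron plus inverse-Compton loss rate:

\left.\frac{d\gamma}{dt}\right|_{\rm acc} \simeq \frac{\gamma}{\tau_{\rm acc}},
\qquad
\left.\frac{d\gamma}{dt}\right|_{\rm loss} \simeq \frac{4}{3}\,\frac{\sigma_{\rm T}}{8\pi\, m_{\rm e} c}\,\gamma^{2}\left(B^{2}+B_{\rm CMB}^{2}\right),
\qquad
B_{\rm CMB} \simeq 3.2\,(1+z)^{2}\ \mu{\rm G}

\gamma_{\rm max} \simeq \frac{6\pi\, m_{\rm e} c}{\sigma_{\rm T}\,\tau_{\rm acc}\left(B^{2}+B_{\rm CMB}^{2}\right)},
\qquad
\nu_{\rm b} \propto \gamma_{\rm max}^{2}\, B

A shorter \tau_{\rm acc} (more efficient turbulent acceleration) pushes the synchrotron break frequency \nu_{\rm b} upward, which is the qualitative reason why surveys at lower frequencies are expected to detect Radio Halos in a larger fraction of merging clusters.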
Abstract:
Thanks to the Chandra and XMM–Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2–8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by “peculiar” optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected “elusive” sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of “elusive” sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than what is generally observed in unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high-X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (∼10^−16 erg cm^−2 s^−1), high-X/O sources are generally characterized by extremely faint optical magnitudes; hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high-X/O population. The results of the spectral analysis clearly indicate that the high-X/O sources represent the most obscured component of the X-ray background. Their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields, and also than the XRB spectrum (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R−K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED-fitting procedure, the nuclear and host-galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extreme red colors is highly efficient in picking up highly obscured, luminous sources at high redshift.
Although XBONGs do not represent a new source population, interest in the nature of these sources has been renewed by the discovery of several examples in recent Chandra and XMM–Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (L_X = 10^42–10^43 erg s^−1) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. Good-quality photometric near-infrared data (ISAAC/VLT) of 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus unable to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
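For context on the X/O selection discussed above, a commonly adopted definition of the X-ray-to-optical flux ratio combines the hard X-ray flux with the R-band magnitude; the numerical constant depends on the chosen band and zero point, so this particular form should be read as illustrative rather than as the thesis's exact definition:

X/O \equiv \log\!\left(\frac{f_{\rm X}}{f_{R}}\right) = \log f_{\rm X} + \frac{R}{2.5} + 5.5

Optically selected, unobscured AGN typically cluster within roughly −1 ≲ X/O ≲ 1, so the “high X/O” sources studied here lie an order of magnitude or more above that locus.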
Abstract:
Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most credited Service Oriented Computing technology is that of Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing the same issue from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL which captures the essentials of Service Oriented Computing. From the definition of SOCK we are able to define a general model for dealing with Service Oriented Computing, where services and systems of services are related to the design of finite state automata and of process algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. Such a language is equipped with a formal semantics and it forms, together with a subset of the SOCK calculus, the bipolar framework. Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus that is part of the bipolar framework we intend to promote.
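The conformance relation between choreography and orchestration is defined formally in the thesis; the following toy Python sketch is only meant to convey the intuition, under the simplifying assumption that both levels can be summarised as sets of global interaction traces. It is not SOCK or JOLIE syntax, and the service names are invented.

# Global contract: the interaction sequences the choreography allows.
choreography = {
    ("client->shop:order", "shop->bank:charge", "bank->shop:ok",   "shop->client:confirm"),
    ("client->shop:order", "shop->bank:charge", "bank->shop:fail", "shop->client:cancel"),
}

# Local design: the traces actually produced by composing the orchestrators.
orchestrated_system = {
    ("client->shop:order", "shop->bank:charge", "bank->shop:ok", "shop->client:confirm"),
}

def conforms(system_traces, choreography_traces):
    # Naive conformance as trace inclusion: every behaviour of the composed
    # orchestrators must also be a behaviour permitted by the choreography.
    return set(system_traces) <= set(choreography_traces)

print(conforms(orchestrated_system, choreography))  # True: the local view respects the global one

The actual relation in the thesis is defined over the formal semantics of the two languages rather than over finite trace sets, but the sketch conveys the flavour: local behaviour should be compatible with the global description.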
Abstract:
This PhD thesis presents the results, achieved at the Aerospace Engineering Department Laboratories of the University of Bologna, concerning the development of a small scale Rotary wing UAV (RUAV). In the first part of the work, a mission simulation environment for rotary wing UAVs was developed as the main outcome of the University of Bologna partnership in the CAPECON program (an EU funded research program aimed at studying the civil applications of UAVs and the economic effectiveness of the potential configuration solutions). The results achieved in cooperation with DLR (German Aerospace Centre) and with a helicopter industrial partner are described. In the second part of the work, the set-up of a real small scale rotary wing platform was performed. The work was carried out following a series of subsequent logical steps, from hardware selection and set-up to the final autonomous flight tests. This thesis focuses mainly on the RUAV avionics package set-up, on the onboard software development and on the final experimental tests. The set-up of the electronic package allowed recording of the helicopter responses to pilot commands and provided deep insight into small scale rotorcraft dynamics, facilitating the development of helicopter models and control systems in a Hardware In the Loop (HIL) simulator. A nested PI velocity controller was implemented on the onboard computer and autonomous flight tests were performed. Comparison between HIL simulation and experimental results showed good agreement.
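The abstract mentions a nested PI velocity controller without detailing its structure. The sketch below shows a generic cascaded PI arrangement of the kind commonly used on small helicopters (an outer PI loop mapping velocity error to an attitude reference, an inner PI loop mapping attitude error to a servo command); the gains, limits and the specific loop pairing are assumptions for illustration, not the controller implemented in the thesis.

class PI:
    def __init__(self, kp, ki, out_limit):
        self.kp, self.ki, self.out_limit = kp, ki, out_limit
        self.integral = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral
        # clamp the output as a crude anti-windup / safety limit
        return max(-self.out_limit, min(self.out_limit, u))

class NestedPIVelocityController:
    # Outer loop: forward velocity error -> pitch attitude reference [rad].
    # Inner loop: pitch attitude error  -> normalized longitudinal cyclic command.
    def __init__(self):
        self.outer = PI(kp=0.8, ki=0.2, out_limit=0.35)
        self.inner = PI(kp=2.0, ki=0.5, out_limit=1.0)

    def update(self, v_ref, v_meas, pitch_meas, dt):
        pitch_ref = self.outer.update(v_ref - v_meas, dt)
        return self.inner.update(pitch_ref - pitch_meas, dt)

# Example call at a 50 Hz control rate:
ctrl = NestedPIVelocityController()
cmd = ctrl.update(v_ref=1.0, v_meas=0.2, pitch_meas=0.0, dt=0.02)

In a HIL setting the same update() call would simply be driven by the simulated sensor outputs instead of flight data, which is what makes the cascaded structure convenient to tune before flight testing.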
Abstract:
This volume collects the work done during a three-year PhD focused on the analysis of Central and Southern Adriatic marine sediments, derived from a borehole and many cores collected thanks to the good seismic-stratigraphic knowledge of the study area. The work was carried out within the European projects EC-EURODELTA (coordinated by Fabio Trincardi, ISMAR-CNR), EC-EUROSTRATAFORM (coordinated by Phil P. E. Weaver, NOC, UK), and PROMESS1 (coordinated by Serge Bernè, IFREMER, France). The analysed sedimentary successions present highly expanded stratigraphic intervals, particularly for the last 400 kyr, 60 kyr and 6 kyr BP. These three different time intervals resulted in a tri-partition of the PhD thesis. The study consisted of the analysis of planktic and benthic foraminiferal assemblages (more than 560 samples analysed), as well as of the preparation of the material for oxygen and carbon stable isotope analyses and of the interpretation and discussion of the resulting dataset. The chronologic framework of the last 400 kyr was established for borehole PRAD1-2 (within work-package WP6 of the PROMESS1 project), collected in 186.5 m water depth. The proposed chronology derives from a multi-disciplinary approach, consisting of the integration of numerous independent proxies, some of which were analysed by other specialists within the project. The final framework is based on: micropaleontology (calcareous nannofossil and foraminiferal bioevents), climatic cyclicity (foraminiferal assemblages), geochemistry (oxygen stable isotopes, measured on planktic and benthic records), paleomagnetism, radiometric ages (14C AMS), tephrochronology, and the identification of sapropel-equivalent levels (Se). It is worth noting the good consistency between the oxygen stable isotope curve obtained for borehole PRAD1-2 and other, deeper Mediterranean records. The studied proxies allowed the recognition of all the isotopic intervals from MIS10 to MIS1 in the PRAD1-2 record, and the base of the borehole has been ascribed to the early MIS11. The glacial and interglacial intervals identified in the Central Adriatic record have also been analysed in detail for paleo-environmental reconstruction. For instance, glacial stages MIS6, MIS8 and MIS10 present peculiar foraminiferal assemblages, composed of benthic species typical of polar regions and no longer living in the Central Adriatic today. Moreover, a deepening trend in the paleo-bathymetry during glacial intervals was observed, from MIS10 (inner-shelf environment) to MIS4 (mid-shelf environment). Ten sapropel-equivalent levels have been recognised in the PRAD1-2 Central Adriatic record. They show different planktic foraminiferal assemblages, which allowed the first distinction between events occurring under warm-climate (Se5, Se7), cold-climate (Se4, Se6 and Se8) and temperate-intermediate-climate (Se1, Se3, Se9, Se', Se10) conditions, consistently with the literature. Cold-climate sapropel equivalents are characterised by the absence of an oligotrophic phase, whereas warm- and temperate-climate sapropel equivalents present both the oligotrophic and the eutrophic phases (except for Se1). Sea-floor conditions vary, according to the benthic foraminiferal assemblages, from relatively well oxygenated (Se1, Se3), to dysoxic (Se9, Se', Se10), to highly dysoxic (Se4, Se6, Se8), to events during which benthic foraminifers are absent (Se5, Se7). These two latter levels are also characterised by lamination of the sediment, a feature never previously reported in the literature for such shallow records.
The enhanced stratification of the water column during events Se8, Se7, Se6, Se5 and Se4, and the concurrent strong dilution of shallow waters indicated by the isotope record, led to the hypothesis of a period of intense precipitation in the Central Adriatic region, possibly due to a northward shift of the African Monsoon. Finally, the expression of the Central Adriatic PRAD1-2 Se5 equivalent was compared with the same event as registered in other Eastern Mediterranean areas. Substantially the same sequence of planktic foraminiferal bioevents was consistently recognised, indicating a similar evolution of the water column all over the Eastern Mediterranean; yet, the synchronism of these events cannot be demonstrated. A high-resolution analysis of late Holocene (last 6000 years BP) climate change was carried out for the Adriatic area through the recognition of planktic and benthic foraminiferal bioevents. In particular, peaks of planktic Globigerinoides sacculifer (four during the last 5500 years BP in the most expanded core) have been interpreted, based on the ecological requirements of this species, as warm-climate, arid intervals corresponding to periods of relative climatic optimum, such as, for instance, the Medieval Warm Period, the Roman Age, the Late Bronze Age and the Copper Age. Consequently, the minima in the abundance of this biomarker could correspond to relatively cooler and rainier periods. These conclusions are in good agreement with the isotopic and pollen data. The Last Occurrence (LO) of G. sacculifer has been dated in this work at an average age of 550 years BP, and it is the best bioevent approximating the base of the Little Ice Age in the Adriatic. Recent literature reports the same bioevent in the Levantine Basin, with a rather consistent age. Therefore, the LO of G. sacculifer has the potential to be extended to the whole Eastern Mediterranean. Within the Little Ice Age, the benthic foraminifer V. complanata shows two distinct peaks in the shallower Adriatic cores analysed, collected hundreds of kilometres apart, inside the mud-belt environment. Based on the ecological requirements of this species, these two peaks have been interpreted as the most intense (cold and rainy) oscillations within the LIA. The chronologic framework of the analysed cores is robust, being based on several range-finding 14C AMS ages, on estimates of the secular variation of the magnetic field, and on geochemical estimates of the activity depth of the short-lived radionuclide 210Pb (for the core-top ages), and it is in good agreement with the tephrochronologic, pollen and foraminiferal data. The intra-Holocene climate oscillations found in the Adriatic have been compared with those reported in the literature from other records of the Northern Hemisphere, and the chronologic agreement seems quite good. Finally, the sedimentary successions analysed allowed the review and update of the foraminiferal ecobiostratigraphy available in the literature for the Adriatic region, through the definition of 16 ecobiozones for the last 60 kyr BP. Some bioevents are restricted to the Central Adriatic (for instance the LO of benthic Hyalinea balthica, approximating the MIS3/MIS2 boundary), while others occur all over the Adriatic basin (for instance the LO of planktic Globorotalia inflata during MIS3, identifying Dansgaard-Oeschger cycle 8 (Denekamp)).
Abstract:
Reinforced concrete columns might fail because of buckling of the longitudinal reinforcing bars when exposed to earthquake motions. Depending on the hoop stiffness and the length-over-diameter ratio, the instability can be local (between two subsequent hoops) or global (the buckling length comprises several hoop spacings). To gain insight into the topic, an extensive literature review of 19 existing models has been carried out, covering different approaches and assumptions which yield different results. A finite element fiber analysis was carried out to study the local buckling behavior with varying length-over-diameter and initial imperfection-over-diameter ratios. The comparison of the analytical results with experimental results shows good agreement before the post-buckling behavior undergoes large deformations. Furthermore, different global buckling analysis cases were run considering the influence of different parameters; for certain hoop stiffnesses and length-over-diameter ratios, local buckling was encountered. A parametric study yields an adimensional critical stress as a function of a stiffness ratio characterized by the reinforcement configuration.
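For orientation on the adimensional critical stress mentioned above, the limiting case of local buckling between two subsequent hoops with negligible rotational restraint reduces to the classical Euler result for a circular bar of diameter d and hoop spacing s (the thesis's parametric result additionally depends on the hoop-to-bar stiffness ratio, which this simple bound ignores):

\sigma_{\rm cr} = \frac{\pi^{2} E_{r}}{\lambda^{2}},
\qquad
\lambda = \frac{s}{d/4}
\;\;\Longrightarrow\;\;
\frac{\sigma_{\rm cr}}{E_{r}} = \frac{\pi^{2}}{16\,(s/d)^{2}}

Here E_r is a reduced (tangent) modulus accounting for inelastic bar behaviour and d/4 is the radius of gyration of the solid circular section; stiffer hoops shorten the effective buckling length and raise this bound, while flexible hoops let the buckled shape span several hoop spacings, giving the global mode.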
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment
Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies
Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material. The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or to transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination, UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and the lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH contaminated soil & groundwater
Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for the cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on site bioremediation, were developed in recent years. In situ bioremediation is a technique applied to soil and groundwater at the site without removing the contaminated soil or groundwater, based on the provision of optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater which has been removed from the site via excavation (soil) or pumping (water). Hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner.

1.4 Bioavailability of PAH in the subsurface
Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a free phase (NAPL, non-aqueous phase liquids). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as the bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which then leads to very slow release rates of the contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1. As shown in Figure 1, biodegradation processes take place in the soil solution while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, there are several other factors influencing the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of the PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).
Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAH in soil
Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the extent of PAH mass transfer from the soil phase to the liquid might prove an efficient and environmentally low-risk alternative way of addressing the problem of slow PAH biodegradation in soil.
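The rate-limiting picture described in Section 1.4 (slow desorption feeding a dissolved pool that is degraded comparatively quickly) can be caricatured with a two-compartment first-order model. The Python sketch below is purely illustrative and is not a model from this text: the rate constants and the linear desorption law are assumptions chosen only to show that when mass transfer is much slower than biodegradation, desorption controls the overall removal rate.

def simulate_bioavailability(S0=100.0, C0=0.0, k_mt=0.01, k_bio=0.5,
                             dt=0.1, t_end=500.0):
    # S : PAH mass sorbed onto soil, C : PAH mass dissolved in the aqueous phase.
    # Only the dissolved pool is assumed to be metabolized by microorganisms.
    S, C, degraded, t = S0, C0, 0.0, 0.0
    while t < t_end:
        transfer = k_mt * S      # slow desorption / mass transfer from soil to water
        removal = k_bio * C      # comparatively fast biodegradation of dissolved PAH
        S -= transfer * dt
        C += (transfer - removal) * dt
        degraded += removal * dt
        t += dt
    return S, C, degraded

# With k_mt << k_bio the dissolved pool stays small and the overall removal
# proceeds at roughly the desorption rate, mirroring the bioavailability limitation.
print(simulate_bioavailability())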