939 results for Gravitational potential energy


Relevance: 30.00%

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was designed to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was designed to compare the properties of low-luminosity sources with those of higher-luminosity ones and, thus, was also used to test models of the emission mechanism; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to provide a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron-line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM–Newton and X-CfA samples, I also deduced that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW ∝ L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been found in both type I and type II sources. At first glance this supports thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, emerges naturally if the accretion disk penetrates the central corona to different depths depending on the accretion rate (Merloni et al. 2006): the more rapidly accreting systems host disks extending down to the last stable orbit, while the more slowly accreting systems host truncated disks. In contrast, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies spans values between 10^(38−43) erg s^−1, i.e. covers a huge range of accretion rates. The least efficient systems are thought to host ADAFs without an accretion disk.
However, the study of the X-CfA sample has also shown the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat contradictory indications is that the ADAF and the two-phase mechanism co-exist, with their relative importance changing from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to supermassive black holes, three case-study objects with sufficient count statistics have been analysed using deep X-ray observations taken with XMM–Newton. The results show that the accretion flow can differ significantly between objects when analysed in the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit of a Kerr system in IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk appears to form by spiralling in within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recurrent modulations have been measured in both the continuum emission and the broad emission-line component. The accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission-line component. Finally, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around supermassive black holes, there is matter which is not confined in the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, a handle on the formation of ejecta/jets, and constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest-velocity outflows.
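For reference, the hard X-ray continuum described above is usually modelled as a power law with an exponential cut-off plus a cold-reflection component; a schematic form (standard in the literature, not necessarily the exact parameterization adopted in the thesis) is

$F(E) \propto E^{-\Gamma}\,e^{-E/E_{c}} + R\,C(E)$,

where $C(E)$ is the Compton-reflection continuum scaled by the reflection fraction $R$, with the average values quoted above, $\Gamma \simeq 1.8$, $E_{c} \simeq 290$ keV and $R \simeq 1.0$, and the X-ray Baldwin effect expressed as $EW \propto L_{20-100}^{-0.22 \pm 0.05}$.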

Relevance: 30.00%

Abstract:

The first part of the thesis concerns the study of inflation in the context of a theory of gravity called "Induced Gravity", in which the gravitational coupling varies in time according to the dynamics of the same scalar field (the "inflaton") that drives inflation, while settling onto the value measured today after the end of inflation. Through the analytical and numerical analysis of scalar and tensor cosmological perturbations, we show that the model leads to consistent predictions for a broad variety of symmetry-breaking inflaton potentials, once a dimensionless parameter entering the action is properly constrained. We also discuss the average expansion of the Universe after inflation (when the inflaton undergoes coherent oscillations about the minimum of its potential) and determine the effective equation of state. Finally, we analyze the resonant and perturbative decay of the inflaton during (p)reheating. The second part is devoted to the study of a proposal for a quantum theory of gravity dubbed "Horava-Lifshitz (HL) Gravity", which relies on power-counting renormalizability while explicitly breaking Lorentz invariance. We test two variants of the theory ("projectable" and "non-projectable") on a cosmological background and with the inclusion of scalar-field matter. By inspecting the quadratic action for the linear scalar cosmological perturbations, we determine the actual number of propagating degrees of freedom and find that the theory, being endowed with fewer symmetries than General Relativity, admits an extra gravitational degree of freedom which is potentially unstable. More specifically, we conclude that in the case of projectable HL Gravity the extra mode is either a ghost or a tachyon, whereas in the case of non-projectable HL Gravity the extra mode can be made well behaved for suitable choices of a pair of free dimensionless parameters and, moreover, turns out to decouple from the low-energy physics.
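For orientation, the induced-gravity setup referred to above is commonly written with a non-minimal coupling of the inflaton $\sigma$ to the Ricci scalar; a schematic form of the action (standard in the induced-gravity literature, with $\gamma$ playing the role of the dimensionless parameter mentioned above, though conventions differ) is

$S = \int d^{4}x\,\sqrt{-g}\left[\frac{\gamma}{2}\,\sigma^{2}R - \frac{1}{2}\,g^{\mu\nu}\partial_{\mu}\sigma\,\partial_{\nu}\sigma - V(\sigma)\right]$,

with a symmetry-breaking potential such as $V(\sigma)=\frac{\lambda}{8}\left(\sigma^{2}-\sigma_{0}^{2}\right)^{2}$, so that the effective gravitational coupling is $8\pi G_{\rm eff}=1/(\gamma\sigma^{2})$ and takes its measured value once $\sigma$ settles at $\sigma_{0}$ after inflation.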

Relevance: 30.00%

Abstract:

Microalgae are sunlight-driven cell factories that convert carbon dioxide into biofuels, foods, feeds, and other bioproducts. The concept of microalgae cultivation as an integrated system in wastewater treatment has enhanced the potential of microalgae-based biofuel production. These microorganisms contain lipids, polysaccharides, proteins, pigments and other cell compounds, and their biomass can provide different kinds of biofuels such as biodiesel, biomethane and ethanol. The application of the algal biomass strongly depends on the cell composition, and the production of biofuels appears to be economically convenient only in conjunction with wastewater treatment. The aim of this research thesis was to investigate a laboratory-scale biological wastewater treatment system growing a newly isolated freshwater microalga, Desmodesmus communis, in effluents generated by a local wastewater reclamation facility in Cesena (Emilia-Romagna, Italy), in batch and semi-continuous cultures. This work showed the potential utilization of this microorganism in algae-based wastewater treatment: Desmodesmus communis had a great capacity to grow in the wastewater, competing with other microorganisms naturally present and adapting to various environmental conditions such as different irradiance levels and nutrient concentrations. The nutrient removal efficiency was characterized at different hydraulic retention times, as well as the algal growth rate and the biomass composition in terms of proteins, polysaccharides, total lipids and total fatty acids (TFAs), which are considered the substrate for biodiesel production. The biochemical analyses were coupled with elemental analysis of the biomass, which quantified the amount of carbon and nitrogen in the algal biomass. Furthermore, photosynthetic investigations were carried out to better correlate the environmental conditions with the physiological responses of the cells and, consequently, to obtain more information with which to optimize the growth rate and the accumulation of TFAs, the C/N ratio and the other cellular compounds and biomass parameters that are fundamental for biomass energy recovery.
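As a minimal illustration of two of the quantities characterized above, nutrient removal efficiency and specific growth rate are usually computed from influent/effluent concentrations and cell densities at two times; the sketch below uses these standard textbook definitions with made-up illustrative numbers, not data from the thesis.

import math

def removal_efficiency(c_in, c_out):
    """Percent nutrient removal from influent (c_in) and effluent (c_out) concentrations (mg/L)."""
    return 100.0 * (c_in - c_out) / c_in

def specific_growth_rate(x1, x2, t1, t2):
    """Specific growth rate mu (1/d) from biomass densities x1, x2 (g/L) at times t1, t2 (days)."""
    return math.log(x2 / x1) / (t2 - t1)

# Hypothetical example values, for illustration only
print(removal_efficiency(c_in=40.0, c_out=4.0))           # ~90% removal
print(specific_growth_rate(x1=0.2, x2=0.8, t1=0, t2=4))   # ~0.35 per day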

Relevance: 30.00%

Abstract:

In this work we investigate the influence of dark energy on structure formation within five different cosmological models, namely a concordance $\Lambda$CDM model, two models with dynamical dark energy viewed as a quintessence scalar field (with an RP and a SUGRA potential form), and two extended quintessence models (EQp and EQn) in which the quintessence scalar field interacts non-minimally with gravity (scalar-tensor theories). For all models we adopted a normalization of the matter power spectrum $\sigma_{8}$ that matches the CMB data. For each model, we perform hydrodynamical simulations in a cosmological box of $(300 \ {\rm{Mpc}} \ h^{-1})^{3}$ including baryons and allowing for cooling and star formation. We find that, in models with dynamical dark energy, the evolving cosmological background leads to different star formation rates and different formation histories of galaxy clusters, but the baryon physics is not affected in a relevant way. We investigate several proxies for the cluster mass function based on X-ray observables like temperature, luminosity, $M_{gas}$, and $Y_{X}$. We confirm that the overall baryon fraction is almost independent of the dark energy model, to within a few per cent. The same is true for the gas fraction. This evidence reinforces the use of galaxy clusters as cosmological probes of the matter and energy content of the Universe. We also study the $c-M$ relation in the different cosmological scenarios, using both dark-matter-only and hydrodynamical simulations. We find that the normalization of the $c-M$ relation is directly linked to $\sigma_{8}$ and the evolution of the density perturbations for $\Lambda$CDM, RP and SUGRA, while for EQp and EQn it also depends on the evolution of the linear density contrast. These differences in the $c-M$ relation provide another way to use galaxy clusters to constrain the underlying cosmology.
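For context, the $c-M$ relation discussed above is commonly fitted with a redshift-dependent power law; a typical parameterization (a standard form from the literature, not necessarily the exact one adopted in the thesis) is

$c(M, z) = A \left(\frac{M}{M_{\rm pivot}}\right)^{B} (1+z)^{C}$,

where the normalization $A$ is the quantity that the thesis links to $\sigma_{8}$ and to the growth of density perturbations in the different dark energy scenarios.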

Relevance: 30.00%

Abstract:

In this thesis, we focus on the preparation of energy transfer-based quantum dot (QD)-dye hybrid systems. Two kinds of QD-dye hybrid systems have been successfully synthesized: QD-silica-dye and QD-dye hybrid systems.

In the QD-silica-dye hybrid system, multishell CdSe/CdS/ZnS QDs were adsorbed onto monodisperse Stöber silica particles with an outer silica shell 2-24 nm thick containing organic dye molecules (Texas Red). The thickness of this dye layer has a strong effect on the total sensitized acceptor emission, which is explained by the increase in the number of dye molecules homogeneously distributed within the silica shell, in combination with an enhanced surface adsorption of QDs with increasing dye amount. Our conclusions were supported by comparison of the experimental results with Monte Carlo simulations, and by control experiments confirming attractive interactions between QDs and Texas Red freely dissolved in solution.

A new QD-dye hybrid system consisting of multishell QDs and organic perylene dyes has also been synthesized. We developed a versatile approach to assemble extraordinarily stable QD-dye hybrids, which uses dicarboxylate anchors to bind rylene dyes to the QD. This system provides a good basis for studying the energy transfer between QD and dye because of its simple and compact design: there is no third kind of molecule linking QD and dye, no spacer, and the affinity of the functional group to the QD surface is strong. The fluorescence resonance energy transfer (FRET) signal was measured for these complexes as a function of both the dye-to-QD ratio and the center-to-center distance between QD and dye, the latter controlled through the number of ZnS shell layers. The data showed that FRET was the dominant energy transfer mechanism in our QD-dye hybrid system. The FRET efficiency can be controlled not only by adjusting the number of dyes on the QD surface or the QD-to-dye distance, but also by properly choosing different dye and QD components. Owing to their high stability, our QD-dye complexes can also be easily transferred into water. Our approach applies not only to dye molecules but also to other organic molecules. As an example, the QDs have been complexed with calixarene molecules, and the QD-calixarene complexes also have potential for QD-based energy transfer studies.
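For reference, the dependence of the FRET efficiency on the donor-acceptor distance and on the number of acceptors described above follows the standard Förster expression; for a single QD donor with $n$ identical dye acceptors at a center-to-center distance $r$ (an idealized textbook form, not a fit from the thesis),

$E = \frac{n R_{0}^{6}}{n R_{0}^{6} + r^{6}}$,

where $R_{0}$ is the Förster radius of the particular QD-dye pair; this is why both the dye-to-QD ratio and the ZnS shell thickness (which sets $r$) tune the measured efficiency.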

Relevance: 30.00%

Abstract:

The last decade has witnessed the establishment of a Standard Cosmological Model, which is based on two fundamental assumptions: the first is the existence of a new non-relativistic kind of particle, i.e. the Dark Matter (DM), which provides the potential wells in which structures form; the second is the presence of Dark Energy (DE), the simplest form of which is the Cosmological Constant Λ, which sources the accelerated expansion of our Universe. These two features are summarized by the acronym ΛCDM, the abbreviation used to refer to the present Standard Cosmological Model. Although the Standard Cosmological Model shows a remarkably successful agreement with most of the available observations, it presents some long-standing unsolved problems. A possible way to address these problems is the introduction of a dynamical Dark Energy, in the form of a scalar field ϕ. In coupled DE models, the scalar field ϕ features a direct interaction with matter in different regimes. Cosmic voids are large under-dense regions of the Universe devoid of matter. Being nearly empty of matter, their dynamics is expected to be dominated by DE, so the properties of cosmic voids should be very sensitive to its nature. This thesis work is devoted to the statistical and geometrical analysis of cosmic voids in large N-body simulations of structure formation in the context of alternative competing cosmological models. In particular, we used the ZOBOV code (Neyrinck 2008), a publicly available void-finder algorithm, to identify voids in the halo catalogues extracted from the CoDECS simulations (Baldi 2012), the largest N-body simulations to date of interacting Dark Energy (DE) models. We identify suitable criteria to produce void catalogues with the aim of comparing the properties of these objects in interacting DE scenarios to the standard ΛCDM model at different redshifts. This thesis is organized as follows: in chapter 1, the Standard Cosmological Model as well as the main properties of cosmic voids are introduced. In chapter 2 we present the scalar field scenario. In chapter 3 the tools, the methods and the criteria by which a void catalogue is created are described, while in chapter 4 we discuss the statistical properties of the cosmic voids included in our catalogues. In chapter 5 the geometrical properties of the catalogued cosmic voids are presented by means of their stacked profiles. In chapter 6 we summarize our results and propose further developments of this work.
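As a minimal sketch of the profile stacking mentioned for chapter 5 (illustrative only; the catalogue layout, array names and bin choices are assumptions, not the thesis pipeline), the stacked void density profile is obtained by counting tracers in shells of rescaled radius r/R_v around all void centres and normalizing by the mean density:

import numpy as np

def stacked_void_profile(tracer_pos, void_centres, void_radii, box_size, n_bins=20, r_max=3.0):
    """Mean tracer density in shells of r/R_v around void centres, in units of the mean density."""
    mean_density = len(tracer_pos) / box_size**3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    volumes = np.zeros(n_bins)
    for centre, r_v in zip(void_centres, void_radii):
        d = tracer_pos - centre
        d -= box_size * np.round(d / box_size)            # periodic boundary conditions
        x = np.linalg.norm(d, axis=1) / r_v               # distances in units of the void radius
        counts += np.histogram(x, bins=edges)[0]
        volumes += 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3) * r_v**3
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, counts / volumes / mean_density       # rho(r/R_v) / rho_mean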

Relevance: 30.00%

Abstract:

Waste management is an important issue in our society, and Waste-to-Energy incineration plants have played a significant role in recent decades, with increasing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Particular concern surrounds acid gases, mainly hydrogen chloride and sulfur oxides, due to their potential impact on the environment and on human health. Therefore, in the present study the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate in reactors followed by fabric filters. HCl and SO2 conversions were expressed as a function of reactant flow rates, with model parameters calculated from literature and plant data. Implementation in a process simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was then extended by developing a fundamental approach to the issue: a predictive model was developed, describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the case study analyzed. A laboratory device was also designed and started up to determine the required model parameters.
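As a purely illustrative sketch of the kind of operational relation described above, acid gas conversion in dry sorbent injection is often correlated with the sorbent feed rate normalized by the stoichiometric requirement. The exponential-approach form, the parameter values and the function names below are assumptions for illustration, not the model or data of the thesis.

import math

def conversion(sorbent_feed_kmol_h, acid_gas_kmol_h, stoich_coeff, k=1.5, x_max=0.99):
    """Hypothetical conversion vs. normalized sorbent feed: X = x_max * (1 - exp(-k * SR)),
    where SR is the sorbent feed divided by the stoichiometric requirement."""
    sr = sorbent_feed_kmol_h / (stoich_coeff * acid_gas_kmol_h)
    return x_max * (1.0 - math.exp(-k * sr))

# Example: Ca(OH)2 on HCl (0.5 kmol Ca(OH)2 per kmol HCl), made-up flow rates
print(conversion(sorbent_feed_kmol_h=6.0, acid_gas_kmol_h=10.0, stoich_coeff=0.5))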

Relevance: 30.00%

Abstract:

Methane yield of ligno-cellulosic substrates (i.e. dedicated energy crops and agricultural residues) may be limited by their composition and structural features. Hence, biomass pre-treatments are envisaged to overcome this constraint. This thesis aimed at: i) assessing the biomass and methane yield of dedicated energy crops; ii) evaluating the effects of hydrothermal pre-treatments on the methane yield of Arundo; iii) investigating the effects of NaOH pre-treatments and iv) of acid pre-treatments on the chemical composition, physical structure and methane yield of two dedicated energy crops and one agricultural residue. Three multi-annual species (Arundo, Switchgrass and Sorghum Silk), three sorghum hybrids (Trudan Headless, B133 and S506) and a maize, as a reference for anaerobic digestion (AD), were studied within point i). The results show remarkable variation in biomass yield, chemical characteristics and potential methane yield. The six species alternative to maize deserve attention in view of their low need for external inputs, but require improvements in biodegradability. Within point ii), Arundo was subjected to hydrothermal pre-treatments at different temperatures and times, with and without an acid catalyst (H2SO4). The pre-treatments had a variable effect on methane yield: pre-treatments without the acid catalyst achieved up to +23% CH4 output, while pre-treatments with the H2SO4 catalyst caused methanogenic inhibition. Two biomass crops (Arundo and B133) and an agricultural residue (barley straw) were subjected to NaOH and acid pre-treatments within points iii) and iv), respectively. The different pre-treatments changed the chemical and physical structure and increased the methane yield: up to +30% and up to +62% CH4 output in Arundo with NaOH and acid pre-treatments, respectively. It is thereby demonstrated that pre-treatments can actually enhance the biodegradability and subsequent CH4 output of ligno-cellulosic substrates, although the viability of pre-treatments needs to be evaluated at the level of full-scale biogas plants in a perspective of profitable implementation.

Relevance: 30.00%

Abstract:

Nuclear medicine is a modern and effective tool for the detection and treatment of oncological diseases. Molecular imaging based on the use of radiopharmaceuticals comprises single-photon emission computed tomography (SPECT) and positron emission tomography (PET) and enables the non-invasive visualization of tumors at the nano- and picomolar level.

Currently, many new tracers are being introduced and evaluated for their suitability for the more precise localization of small tumors and metastases. Most of them are protein-based biomolecules that nature itself produces as antigens for tumor cells. Antibodies and antibody fragments play an important role in tumor diagnostics and treatment. PET imaging with antibodies and antibody fragments is referred to as immuno-PET. An important aspect here is that suitable radiopharmaceuticals are needed whose half-life matches the half-life of the biomolecules.

In recent work, 90Nb has been proposed as a potential candidate for application in immuno-PET. Its half-life of 14.6 hours is suitable for use with antibody fragments and some intact antibodies. 90Nb has a relatively high positron branching ratio of 53% and an optimal β+ emission energy of 0.35 MeV, which allows both high imaging quality and a low administered activity of the radionuclide.

Initial fundamental studies showed that 90Nb i) can be produced in sufficient quantity and purity by proton bombardment of a natural zirconium target, ii) can be isolated from the target material with adequate radiochemical purity, iii) can be used for labelling the monoclonal antibody rituximab, and iv) that this 90Nb-labelled mAb possesses high in vitro stability. Furthermore, an alternative and fast separation method was developed that allows 90Nb to be purified within one hour with a radiochemical and radionuclidic purity suitable for subsequent labelling of biomolecules. Finally, 90Nb-labelled biomolecules were investigated in vivo for the first time. In addition, experiments were carried out to find the optimal bifunctional chelator (BFC) for 90Nb. Several BFCs were examined with respect to complex formation with Nb(V). Desferrioxamine (Df) proved to be the most suitable chelator for 90Nb. The monoclonal antibody bevacizumab (Avastin®) was labelled with 90Nb, and a biodistribution study and a PET scan were performed. All these results showed that 90Nb is a promising radionuclide for immuno-PET, which even appears suitable for further commercial applications in clinical routine.
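As a minimal numerical illustration of why the 14.6 h half-life quoted above matches antibody-fragment pharmacokinetics, the fraction of 90Nb activity remaining after a given time follows the standard decay law; the time points below are arbitrary examples, not values from the study.

import math

def remaining_fraction(t_hours, half_life_hours=14.6):
    """Fraction of the initial 90Nb activity left after t_hours: A/A0 = exp(-ln2 * t / T_half)."""
    return math.exp(-math.log(2.0) * t_hours / half_life_hours)

for t in (6, 12, 24, 48):   # hours post-injection (illustrative)
    print(t, round(remaining_fraction(t), 3))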

Relevance: 30.00%

Abstract:

This work considers the reconstruction of strong gravitational lenses from their observed effects on the light distribution of background sources. After reviewing the formalism of gravitational lensing and the most common and relevant lens models, new analytical results on the elliptical power-law lens are presented, including new expressions for the deflection, potential, shear and magnification, which naturally lead to a fast numerical scheme for practical calculation. The main part of the thesis investigates lens reconstruction with extended sources by means of the forward reconstruction method, in which the lenses and sources are given by parametric models. The numerical realities of the problem make it necessary to find targeted optimisations for the forward method, in order to make it feasible for general applications to modern, high-resolution images. The result of these optimisations is presented in the Lensed algorithm. Subsequently, a number of tests for general forward reconstruction methods are created to decouple the influence of source from lens reconstructions, in order to objectively demonstrate the constraining power of the reconstruction. The final chapters on lens reconstruction contain two sample applications of the forward method. One is the analysis of images from a strong-lensing survey. Such surveys today contain $\sim 100$ strong lenses, and much larger sample sizes are expected in the future, making it necessary to quickly and reliably analyse catalogues of lenses with a fixed model. The second application deals with the opposite situation of a single observation that is to be confronted with different lens models, where the forward method allows for natural model building. This is demonstrated using an example reconstruction of the "Cosmic Horseshoe". An appendix presents an independent work on the use of weak gravitational lensing to investigate theories of modified gravity which exhibit screening in the non-linear regime of structure formation.
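For context, the forward reconstruction described above is built on the lens equation mapping image-plane positions $\theta$ to source-plane positions $\beta$,

$\beta = \theta - \alpha(\theta)$,

where $\alpha$ is the scaled deflection angle of the lens model; for the elliptical power-law lens, one common convention (one of several in the literature, not necessarily the thesis' exact notation) writes the convergence as $\kappa(R) = \frac{2-t}{2}\left(\frac{b}{R}\right)^{t}$ with elliptical radius $R=\sqrt{q^{2}x^{2}+y^{2}}$, axis ratio $q$, scale $b$ and slope $t$.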

Relevance: 30.00%

Abstract:

In 2013, the detection of a diffuse astrophysical neutrino flux with the IceCube neutrino telescope, constructed at the geographic South Pole, was announced by the IceCube collaboration. However, the origin of these neutrinos is still unknown, as no sources have been identified to this day. Promising neutrino source candidates are blazars, a subclass of active galactic nuclei with radio jets pointing towards the Earth. In this thesis, the neutrino flux from blazars is tested with a maximum-likelihood stacking approach, analyzing the combined emission from uniform groups of objects. The stacking enhances the sensitivity with respect to the so far unsuccessful single-source searches. The analysis uses four years of IceCube data, including one year from the completed detector. As all results presented in this work are compatible with background, upper limits on the neutrino flux are given. It is shown that, under certain conditions, some hadronic blazar models can be challenged or even rejected. Moreover, the sensitivity of this analysis, and of any future IceCube point-source search, was enhanced by the development of a new angular reconstruction method based on a detailed simulation of photon propagation in the Antarctic ice. The median resolution for muon tracks induced by high-energy neutrinos is improved for all neutrino energies above IceCube's lower threshold of 0.1 TeV. By reprocessing the detector data and simulation from the year 2010, it is shown that the new method improves IceCube's discovery potential by 20% to 30%, depending on the declination.
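For reference, stacking searches of this kind typically maximize the standard unbinned point-source likelihood (the generic form used in IceCube analyses; the precise signal PDF constructed for the blazar groups in this thesis is not reproduced here),

$\mathcal{L}(n_{s}) = \prod_{i=1}^{N}\left[\frac{n_{s}}{N}\,S_{i} + \left(1-\frac{n_{s}}{N}\right)B_{i}\right]$,

where $N$ is the total number of events, $n_{s}$ the fitted number of signal events, and $S_{i}$, $B_{i}$ the signal and background probability densities of event $i$; in a stacking analysis $S_{i}$ is a weighted sum over the sources in the group, and the significance is assessed from the likelihood ratio to the background-only hypothesis ($n_{s}=0$).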

Relevance: 30.00%

Abstract:

Purpose: To further evaluate the use of microbeam irradiation (MBI) as a potential means of non-invasive brain tumor treatment by investigating the induction of a bystander effect in non-irradiated tissue. Methods: Adult rats were irradiated with 35 or 350 Gy at the European Synchrotron Radiation Facility (ESRF), using homogeneous (broad-beam) irradiation (HI) or a high-energy microbeam delivered to the right brain hemisphere only. The proteome of the frontal lobes was then analyzed using two-dimensional electrophoresis (2-DE) and mass spectrometry. Results: HI resulted in proteomic responses indicative of tumourigenesis: increased albumin, aconitase and triosephosphate isomerase (TPI), and decreased dihydrolipoyl dehydrogenase (DLD). The MBI bystander-effect proteomic changes were indicative of reactive oxygen species-mediated apoptosis: reduced TPI, prohibitin and tubulin, and increased glial fibrillary acidic protein (GFAP). These potentially anti-tumourigenic apoptotic proteomic changes are also associated with neurodegeneration. However, the bystander effect also increased heat shock protein (HSP) 71 turnover. HSP 71 is known to protect against all of the neurological disorders characterized by the bystander-effect proteome changes. Conclusions: These results indicate that the collective interaction of these MBI-induced bystander-effect proteins and their mediation by HSP 71 may confer a protective effect which now warrants additional experimental attention.

Relevance: 30.00%

Abstract:

Land surface temperature (LST) plays a key role in governing the land surface energy budget, and measurements or estimates of LST are an integral part of many land surface models and methods to estimate the land surface sensible heat (H) and latent heat fluxes. In particular, the LST anchors the potential temperature profile in Monin-Obukhov similarity theory, from which H can be derived. Brutsaert has made important contributions to our understanding of the nature of surface temperature measurements as well as to the practical but theoretically sound use of LST in this framework. His work has coincided with the widespread availability of remotely sensed LST measurements. The use of remotely sensed LST estimates inevitably involves complicating factors, such as: varying spatial and temporal scales in measurements, theory, and models; spatial variability of LST and H; the relationship between measurements of LST and the temperature felt by the atmosphere; and the need to correct satellite-based radiometric LST measurements for the radiative effects of the atmosphere. This paper reviews the progress made in research in these areas by tracing and commenting on Brutsaert's contributions.
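For orientation, the link between LST and H in Monin-Obukhov similarity theory referred to above is usually written in the standard textbook form (not a formulation specific to this review),

$H = -\rho\,c_{p}\,u_{*}\,\theta_{*}$, with $\theta_{a} - \theta_{s} = \frac{\theta_{*}}{k}\left[\ln\frac{z}{z_{0h}} - \Psi_{h}\!\left(\frac{z}{L}\right)\right]$,

where $\theta_{s}$ is the surface potential temperature anchored by the LST, $\theta_{a}$ the air potential temperature at height $z$, $u_{*}$ and $\theta_{*}$ the friction velocity and temperature scale, $z_{0h}$ the roughness length for heat, $\Psi_{h}$ the stability correction function and $L$ the Obukhov length.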

Relevance: 30.00%

Abstract:

Glucocorticoids play an essential role in the regulation of key physiological processes, including immunomodulation, brain function, energy metabolism, electrolyte balance and blood pressure. Exposure to naturally occurring compounds or industrial chemicals that impair glucocorticoid action may contribute to the increasing incidence of cognitive deficits, immune disorders and metabolic diseases. Potentially, "glucocorticoid disruptors" can interfere with various steps of hormone action, e.g. hormone synthesis, binding to plasma proteins, delivery to target cells, pre-receptor regulation of the ratio of active versus inactive hormones, glucocorticoid receptor (GR) function, or export and degradation of glucocorticoids. Several recent studies indicate that such chemicals exist and that some of them can cause multiple toxic effects by interfering with different steps of hormone action. For example, increasing evidence suggests that organotins disturb glucocorticoid action by altering the function of factors that regulate the expression of 11beta-hydroxysteroid dehydrogenase (11beta-HSD) pre-receptor enzymes, by direct inhibition of 11beta-HSD2-dependent inactivation of glucocorticoids, and by blocking GR activation. These observations emphasize the complexity of the toxic effects caused by such compounds and the need for suitable test systems to assess their effects on each relevant step.

Relevance: 30.00%

Abstract:

A transmission electron microscope (TEM) accessory, the energy filter, enables a method for elemental microanalysis: electron energy-loss spectroscopy (EELS). In conventional TEM, unscattered, elastically scattered and inelastically scattered electrons all contribute to the image information. Energy-filtering TEM (EFTEM) allows elemental analysis at the ultrastructural level by using selected inelastically scattered electrons. EELS is an excellent method for elemental microanalysis and nanoanalysis with good sensitivity and accuracy. However, it is a complex method whose potential is seldom fully exploited, especially for biological specimens. In addition to spectral analysis (parallel EELS), we present two different imaging techniques in this chapter, namely electron spectroscopic imaging (ESI) and image-EELS. We introduce these techniques with the elemental microanalysis of titanium: ultrafine, 22-nm titanium dioxide particles are used in an inhalation study in rats to investigate the distribution of nanoparticles in lung tissue.
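For context, ESI elemental maps of an element such as titanium are commonly obtained with the three-window method (a standard EFTEM procedure, not necessarily the exact protocol of this chapter): two images recorded at energy losses below the ionization edge are used to fit the background as a power law,

$I_{b}(E) = A\,E^{-r}$,

which is extrapolated under a third, post-edge window; the background-subtracted intensity in that window is then proportional to the local amount of the element (for titanium, the L$_{2,3}$ edge near 456 eV is typically used).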