964 results for source and sink
Abstract:
Contrafreeloading occurs when animals spend time and effort to obtain food in the presence of freely available food. There are several interpretations for such an apparent contradiction to optimal foraging models, with an emphasis either on the need to gather and update information about the environment or on the value of performing species-typical responses. Evidence suggests that both gathering information about the environment and the expression of species-typical behaviour are important for the welfare of captive animals. The aim of the present study was to assess the existence of contrafreeloading in maned wolves (Chrysocyon brachyurus) in a situation where animals could get food directly from a "free" source and/or search for and handle hidden food items, an alternative that requires more effort and is probably more similar to natural foraging conditions. Eight captive, pair-housed maned wolves were given weekly choice tests in which they could obtain food by approaching the usual food tray in one section of the enclosure (Tray) and/or by searching for food at variable sites amongst the vegetation in the other section of the enclosure (Scattered). Results indicate that maned wolves spent more time in the Scattered than in the Tray section of the enclosure (P = 0.02) and that they obtained about half of the food from that section (48.54% ± SE 0.69). Our results, the first to demonstrate contrafreeloading in maned wolves, have implications for the husbandry and welfare of this endangered species. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Achira (Canna indica L.) is a plant native to the Andes in South America and a starchy food source, and its cultivation has expanded to different tropical countries, such as Brazil. In order to evaluate the potential of this species, starch and flours with different particle sizes were obtained from Brazilian achira rhizomes. Proximate analyses, size distribution, SEM, swelling power, solubility, DSC, XRD analysis, and FTIR were performed to characterize these materials. Flours showed high dietary fiber content (16.5-32.2% db) and a high concentration of starch in the smaller particle size fraction. Significant differences in protein and starch content, swelling power, solubility, and thermal properties were observed between the Brazilian and the Colombian starch. All the studied materials displayed the B-type XRD pattern, with a relative crystallinity of 20.1% for the flour and between 27.0 and 28.0% for the starches. Results showed that the starch and flour produced from achira rhizomes have great technological potential for use as functional ingredients in the food industry.
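For reference, the relative crystallinity values quoted above are conventionally estimated from the XRD diffractogram as the ratio of the crystalline peak area to the total diffraction area; a minimal statement of that convention follows (the exact baseline-subtraction method used in the study is not specified in this abstract):

```latex
% Conventional definition of relative crystallinity from an XRD pattern
RC\,(\%) = \frac{A_{\text{crystalline}}}{A_{\text{crystalline}} + A_{\text{amorphous}}} \times 100
```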
Abstract:
Discussions on the future of cataloging have received increased attention in the last ten years, mainly due to the impact of the rapid development of information and communication technologies in the same period, which has provided access to the Web anytime, anywhere. These discussions revolve around the need for a new bibliographic framework to meet the demands of this new reality in the digital environment, i.e., how can libraries process, store, deliver, share and integrate their collections (physical, digital or scanned) in the current post-PC era? Faced with this question, Open Access, Open Source and Open Standards are three concepts that need to receive greater attention in the field of Library and Information Science, as they are believed to be fundamental elements for a paradigm shift in descriptive representation, which is currently based conceptually on the physical item rather than on the intellectual work. This paper aims to raise and discuss such issues and to encourage information professionals, especially librarians, to think about, discuss and propose initiatives for these problems, contributing and sharing ideas and possible solutions in multidisciplinary teams. Finally, the creation of multidisciplinary and inter-institutional study groups on the future of cataloging and its impact on national collections is suggested, in order to contribute to the area of descriptive representation at the national and international levels.
Abstract:
Various organisms have been characterized by molecular methods, including fungi of the genus Cryptococcus. The purposes of this study were to determine the discriminatory potential of RAPD (Random Amplified Polymorphic DNA) primers and the pattern of similarity among Cryptococcus species, and to discuss their useful application in epidemiological studies. We analyzed 10 isolates of each species/group: C. albidus, the C. laurentii complex, and C. neoformans var. grubii, all from environmental sources, plus two ATCC strains, C. neoformans var. grubii ATCC 90112 and C. neoformans var. neoformans ATCC 28957, by RAPD-PCR using the primers CAV1, CAV2, ZAP19, ZAP20, OPB11 and SEQ6. The primers showed good discriminatory power, revealing important differences between them and between species; the SEQ6 primer discriminated the largest number of isolates of the three species. Isolates of C. laurentii showed greater genetic diversity than the other species, as revealed by all six primers. Isolates of C. neoformans were more homogeneous. Only the primer CAV2 showed no amplification of DNA bands for C. albidus. It was concluded that the use of a limited number of carefully selected primers allowed the discrimination of different isolates, and that some primers (e.g., CAV2 for C. albidus) may not be applicable to some species.
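As an illustration of how RAPD band patterns are typically compared, the sketch below computes a band-sharing (Dice) similarity between two binary presence/absence profiles; this is a generic convention, and the abstract does not state which similarity coefficient was actually used in the study.

```python
def dice_similarity(profile_a, profile_b):
    """Dice band-sharing coefficient between two binary RAPD profiles.

    profile_a, profile_b: sequences of 0/1 flags, one per scored band position.
    Returns 2*shared / (bands_a + bands_b); 1.0 means identical band patterns.
    """
    shared = sum(a and b for a, b in zip(profile_a, profile_b))
    total = sum(profile_a) + sum(profile_b)
    return 2 * shared / total if total else 1.0

# Example: two isolates scored at six band positions for one primer.
print(dice_similarity([1, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 1]))  # 0.75
```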
Abstract:
In the present thesis a thorough multiwavelength analysis of a number of galaxy clusters known to be experiencing a merger event is presented. The bulk of the thesis consists of the analysis of deep radio observations of six merging clusters, which host extended radio emission on the cluster scale. A composite optical and X-ray analysis is performed in order to obtain a detailed and comprehensive picture of the cluster dynamics and possibly derive hints about the properties of the ongoing merger, such as the involved mass ratio, geometry and time scale. The combination of the high quality radio, optical and X-ray data allows us to investigate the implications of the ongoing merger for the cluster radio properties, focusing on the phenomenon of cluster-scale diffuse radio sources, known as radio halos and relics. A total of six merging clusters were selected for the present study: A3562, A697, A209, A521, RXCJ 1314.4-2515 and RXCJ 2003.5-2323. All of them were known, or suspected, to possess extended radio emission on the cluster scale, in the form of a radio halo and/or a relic. High sensitivity radio observations were carried out for all clusters using the Giant Metrewave Radio Telescope (GMRT) at low frequency (i.e. ≤ 610 MHz), in order to test for the presence of a diffuse radio source and/or to analyse in detail the properties of the hosted extended radio emission. For three clusters, the GMRT information was combined with higher frequency data from Very Large Array (VLA) observations. A re-analysis of the optical and X-ray data available in the public archives was carried out for all sources. Proprietary deep XMM-Newton and Chandra observations were used to investigate the merger dynamics in A3562. Thanks to our multiwavelength analysis, we were able to confirm the existence of a radio halo and/or a relic in all clusters, and to connect their properties and origin to the reconstructed merging scenario for most of the investigated cases.
• The existence of a small size, low power radio halo in A3562 was successfully explained within the theoretical framework of the particle re-acceleration model for the origin of radio halos, which invokes the re-acceleration of pre-existing relativistic electrons in the intracluster medium by merger-driven turbulence.
• A giant radio halo was found in the massive galaxy cluster A209, which has likely undergone a past major merger and is currently experiencing a new merging process in a direction roughly orthogonal to the old merger axis. A giant radio halo was also detected in A697, whose optical and X-ray properties may be suggestive of a strong merger event along the line of sight. Given the cluster mass and the kind of merger, the existence of a giant radio halo in both clusters is expected in the framework of the re-acceleration scenario.
• A radio relic was detected at the outskirts of A521, a highly dynamically disturbed cluster which is accreting a number of small mass concentrations. A possible explanation for its origin requires the presence of a merger-driven shock front at the location of the source. The spectral properties of the relic may support such an interpretation and require a Mach number M ≲ 3 for the shock.
• The galaxy cluster RXCJ 1314.4-2515 is exceptional and unique in hosting two peripheral relic sources, extending on the Mpc scale, and a central small size radio halo. The existence of these sources requires the presence of an ongoing energetic merger. Our combined optical and X-ray investigation suggests that a strong merging process between two or more massive subclumps may be ongoing in this cluster. Thanks to forthcoming optical and X-ray observations, we will reconstruct in detail the merger dynamics and derive its energetics, to be related to the energy necessary for the particle re-acceleration in this cluster.
• Finally, RXCJ 2003.5-2323 was found to possess a giant radio halo. This source is among the largest, most powerful and most distant (z = 0.317) halos imaged so far. Unlike other radio halos, it shows a very peculiar morphology with bright clumps and filaments of emission, whose origin might be related to the relatively high redshift of the hosting cluster. Although very little optical and X-ray information is available about the cluster dynamical stage, the results of our optical analysis suggest the presence of two massive substructures which may be interacting with the cluster. Forthcoming observations in the optical and X-ray bands will allow us to confirm the expected high merging activity in this cluster.
Throughout the present thesis a cosmology with H0 = 70 km s^-1 Mpc^-1, Ωm = 0.3 and ΩΛ = 0.7 is assumed.
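As an illustration of the adopted cosmology, here is a minimal sketch (assuming astropy is available; it is not part of the thesis tooling described above) that converts the quoted redshift of RXCJ 2003.5-2323 into a luminosity distance and an angular scale:

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology stated in the thesis: H0 = 70 km/s/Mpc, Omega_m = 0.3 (flat, Omega_Lambda = 0.7).
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

z = 0.317  # redshift of RXCJ 2003.5-2323 quoted in the abstract
print(cosmo.luminosity_distance(z))    # distance used to convert radio flux to radio power
print(cosmo.kpc_proper_per_arcmin(z))  # scale for sizing a ~Mpc halo on the sky
```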
Abstract:
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on chip doubles every 24 months. This trend has been made possible by the downsizing of the MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is developing different solutions that need to be assessed. Possible solutions currently under scrutiny include:
• devices incorporating materials with properties different from those of silicon, for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep Short-Channel-Effects under control without adopting a high doping level in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting for the source/drain regions materials with a band gap different from that of the channel material. This solution allows the injection velocity of the particles travelling from the source into the channel to be increased, and therefore improves the performance of the transistor in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; moreover, the modifications introduced in the Monte Carlo code in order to simulate conduction-band discontinuities are described, as well as the simulations performed on one-dimensional simplified structures to validate them. Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction-band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered.
The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed to model these phenomena. In order to understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences on self-heating of technological solutions such as raised S/D extension regions or reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
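As a back-of-the-envelope illustration of the self-heating quantities discussed above, the sketch below applies the first-order relation between thermal resistance, dissipated power and channel temperature rise; the numbers are purely illustrative and are not taken from the thesis.

```python
def channel_temperature_rise(drain_current_a, drain_voltage_v, r_th_k_per_w):
    """First-order self-heating estimate: Delta_T = R_th * dissipated power.

    drain_current_a : drain current in amperes
    drain_voltage_v : drain-source voltage in volts
    r_th_k_per_w    : device thermal resistance in kelvin per watt
    """
    power_w = drain_current_a * drain_voltage_v
    return r_th_k_per_w * power_w

# Illustrative numbers only (not from the thesis): 1 mA at 1 V through 1e5 K/W.
print(channel_temperature_rise(1e-3, 1.0, 1e5))  # ~100 K temperature rise
```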
Abstract:
This thesis proposes design methods and test tools for optical systems which may be used in an industrial environment, where not only precision and reliability but also ease of use is important. The approach to the problem has been conceived to be as general as possible, although in the present work the design of a portable device for automatic identification applications has been studied, because this doctorate has been funded by Datalogic Scanning Group s.r.l., a world-class producer of barcode readers. The main functional components of the complete device are the electro-optical imaging, illumination and pattern generator systems. As far as the electro-optical imaging system is concerned, a characterization tool and an analysis tool have been developed to check whether the desired performance of the system has been achieved. Moreover, two design tools for optimizing the imaging system have been implemented. The first optimizes just the core of the system, the optical part, improving its performance while ignoring all other contributions and generating a good starting point for the optimization of the whole complex system. The second tool optimizes the system taking into account its behavior with a model as close as possible to reality, including optics, electronics and detection. As for the illumination and pattern generator systems, two tools have been implemented. The first allows the design of free-form lenses described by an arbitrary analytical function, excited by an incoherent source, and is able to provide custom illumination conditions for all kinds of applications. The second tool consists of a new method to design Diffractive Optical Elements excited by a coherent source for large pattern angles using the Iterative Fourier Transform Algorithm. Validation of the design tools has been obtained, whenever possible, by comparing the performance of the designed systems with those of fabricated prototypes. In other cases simulations have been used.
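As an illustration of the Iterative Fourier Transform Algorithm mentioned above, here is a minimal Gerchberg-Saxton-style sketch in Python/NumPy for a phase-only DOE; it shows only the generic iteration, not the large-pattern-angle variant developed in the thesis.

```python
import numpy as np

def ifta_phase(target_amplitude, iterations=50, seed=0):
    """Minimal Iterative Fourier Transform Algorithm (Gerchberg-Saxton style).

    Alternates between the DOE plane (unit amplitude, free phase) and the
    far-field plane (amplitude forced to the target, phase kept), returning
    the phase-only DOE profile.
    """
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amplitude.shape))
    for _ in range(iterations):
        far = np.fft.fft2(field)                             # propagate to the far field
        far = target_amplitude * np.exp(1j * np.angle(far))  # impose the target amplitude
        field = np.fft.ifft2(far)                            # propagate back to the DOE plane
        field = np.exp(1j * np.angle(field))                 # enforce the phase-only constraint
    return np.angle(field)

# Example: design a phase mask that projects a simple cross pattern.
target = np.zeros((64, 64))
target[32, :] = 1.0
target[:, 32] = 1.0
phase = ifta_phase(target)
```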
Abstract:
This study is focused on radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First the state of the art of nanoparticle production through conventional and plasma routes is summarized, then results for the characterization of the plasma source and on the investigation of the nanoparticle synthesis phenomenon, aiming at highlighting fundamental process parameters while adopting a design oriented modelling approach, are presented. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results for three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy probe measurements to validate the temperature field predicted by the model and used to characterize the ICP system under powder-free conditions. Moreover, results from the modeling of critical phases of ICP synthesis process, such as precursor evaporation, vapour conversion in nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and on the underlying physical phenomena. Indeed, precursor evaporation, one of the phases holding the highest impact on industrial feasibility of the process, is discussed; by employing models to describe particle trajectories and thermal histories, adapted from the ones originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spherodization, the evaporation of micro-sized Si solid precursor in a laboratory scale ICP system is investigated. Finally, a discussion on the role of thermo-fluid dynamic fields on nano-particle formation is presented, as well as a study on the effect of the reaction chamber geometry on produced nanoparticle characteristics and process yield.
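As an illustration of the calorimetric energy balance mentioned above, the sketch below estimates the power removed by a cooling circuit and the resulting torch coupling efficiency; the function names and numbers are illustrative assumptions, not measurements from the SIMBA project.

```python
CP_WATER = 4186.0  # J/(kg K), specific heat of the cooling water

def coolant_power(mass_flow_kg_s, t_out_c, t_in_c):
    """Power removed by a cooling circuit: P = m_dot * c_p * (T_out - T_in)."""
    return mass_flow_kg_s * CP_WATER * (t_out_c - t_in_c)

def torch_efficiency(plate_power_w, torch_coolant_w):
    """Fraction of the input power not lost to the torch coolant (simplified balance)."""
    return 1.0 - torch_coolant_w / plate_power_w

# Illustrative numbers only: 0.05 kg/s of water heated from 20 to 38 degC,
# with 10 kW of plate power delivered to the generator.
p_torch = coolant_power(0.05, 38.0, 20.0)   # ~3.8 kW removed by the torch coolant
print(torch_efficiency(10_000.0, p_torch))  # ~0.62 coupling efficiency
```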
Abstract:
Copper and Zn are essential micronutrients for plants, animals, and humans; however, they may also be pollutants if they occur at high concentrations in soil. Therefore, knowledge of Cu and Zn cycling in soils is required both to guarantee proper nutrition and to control possible risks arising from pollution.

The overall objective of my study was to test whether Cu and Zn stable isotope ratios can be used to investigate the biogeochemistry, sources and transport of these metals in soils. The use of stable isotope ratios might be especially suitable to trace long-term processes occurring during soil genesis and the transport of pollutants through the soil. In detail, I aimed to answer the questions whether (1) Cu stable isotopes are fractionated during complexation with humic acid, (2) δ65Cu values can be a tracer for soil genetic processes in redoximorphic soils, (3) δ65Cu values can help to understand soil genetic processes under oxic weathering conditions, and (4) δ65Cu and δ66Zn values can act as tracers of sources and transport of Cu and Zn in polluted soils.

To answer these questions, I ran adsorption experiments at different pH values in the laboratory and modelled Cu adsorption to humic acid. Furthermore, eight soils were sampled representing different redox and weathering regimes, of which two were influenced by stagnic water, two by groundwater, two by oxic weathering (Cambisols), and two by podzolation. In all horizons of these soils, I determined selected basic soil properties, partitioned Cu into seven operationally defined fractions, and determined Cu concentrations and Cu isotope ratios (δ65Cu values). Finally, three additional soils were sampled along a deposition gradient at different distances from a Cu smelter in Slovakia and analyzed, together with bedrock and waste material from the smelter, for selected basic soil properties, Cu and Zn concentrations, and δ65Cu and δ66Zn values.

My results demonstrated that (1) copper was fractionated during adsorption on humic acid, resulting in an isotope fractionation between the immobilized humic acid (IHA) and the solution (Δ65Cu IHA-solution) of 0.26 ± 0.11‰ (2 SD), and that the extent of fractionation was independent of pH and of the involved functional groups of the humic acid. (2) Soil genesis and plant cycling cause measurable Cu isotope fractionation in hydromorphic soils. The results suggested that an increasing number of redox cycles depleted 63Cu with increasing depth, resulting in heavier δ65Cu values. (3) Organic horizons usually had isotopically lighter Cu than mineral soils, presumably because of the preferred uptake and recycling of 63Cu by plants. (4) In a strongly developed Podzol, eluviation zones had lighter and illuviation zones heavier δ65Cu values because of the higher stability of organo-65Cu complexes compared to organo-63Cu complexes. In the Cambisols and a weakly developed Podzol, oxic weathering caused increasingly lighter δ65Cu values with increasing depth, the opposite depth trend to that in redoximorphic soils, because of the preferential vertical transport of 63Cu. (5) The δ66Zn values were fractionated during the smelting process and isotopically light Zn was emitted, allowing source identification of Zn pollution, while δ65Cu values were unaffected by the smelting and Cu emissions were isotopically indistinguishable from soil. The δ65Cu values in polluted soils became lighter down to a depth of 0.4 m, indicating isotope fractionation during transport and a transport depth of 0.4 m in 60 years. The δ66Zn values showed the opposite depth trend, becoming heavier with depth because of fractionation by plant cycling, speciation changes, and mixing of native and smelter-derived Zn.

Copper showed measurable isotope fractionation of approximately 1‰ in unpolluted soils, allowing conclusions to be drawn on plant cycling, transport, and redox processes occurring during soil genesis, and δ65Cu and δ66Zn values in contaminated soils allow conclusions on sources (in my study only possible for Zn), biogeochemical behavior, and depth of dislocation of Cu and Zn pollution in soil. I conclude that stable Cu and Zn isotope ratios are a suitable novel tool to trace long-term processes in soils which are difficult to assess otherwise.
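For reference, the δ65Cu and δ66Zn values discussed above follow the standard delta notation, i.e. per-mil deviations of the sample isotope ratio from that of a reference standard (the specific reference materials are not named in this abstract); the fractionation Δ65Cu IHA-solution is the difference of two such values:

```latex
\delta^{65}\mathrm{Cu} = \left( \frac{(^{65}\mathrm{Cu}/^{63}\mathrm{Cu})_{\mathrm{sample}}}{(^{65}\mathrm{Cu}/^{63}\mathrm{Cu})_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰},
\qquad
\Delta^{65}\mathrm{Cu}_{\mathrm{IHA\text{-}solution}} = \delta^{65}\mathrm{Cu}_{\mathrm{IHA}} - \delta^{65}\mathrm{Cu}_{\mathrm{solution}}
```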
Abstract:
A critical point in the analysis of ground displacement time series is the development of data-driven methods that allow the different sources that generate the observed displacements to be discerned and characterised. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which allows the dimensionality of the data space to be reduced while retaining most of the explained variance of the dataset. However, PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Independent Component Analysis (ICA) is a popular technique adopted to approach this problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, I use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here I present the application of the vbICA technique to GPS position time series. First, I use vbICA on synthetic data that simulate a seismic cycle (interseismic + coseismic + postseismic + seasonal + noise) and a volcanic source, and I study the ability of the algorithm to recover the original (known) sources of deformation. Secondly, I apply vbICA to different tectonically active scenarios, such as the 2009 L'Aquila (central Italy) earthquake, the 2012 Emilia (northern Italy) seismic sequence, and the 2006 Guerrero (Mexico) Slow Slip Event (SSE).
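To make the BSS setup concrete, the sketch below mixes two synthetic "GPS-like" sources into three stations and recovers them; scikit-learn's FastICA is used purely as an illustrative stand-in, since the vbICA method described in the abstract is not part of scikit-learn.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

# Two synthetic source signals (a seasonal term and a postseismic relaxation),
# mixed into three "stations"; observation noise is added after mixing.
t = np.arange(1000)
sources = np.c_[np.sin(2 * np.pi * t / 365.25),          # seasonal signal
                1 - np.exp(-t / 120.0)]                   # postseismic relaxation
mixing = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.2]])   # station responses to each source
obs = sources @ mixing.T + 0.05 * np.random.default_rng(0).normal(size=(1000, 3))

recovered = FastICA(n_components=2, random_state=0).fit_transform(obs)
variance_kept = PCA(n_components=2).fit(obs).explained_variance_ratio_.sum()
print(recovered.shape, variance_kept)  # separated sources and variance retained by 2 PCs
```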
Abstract:
Biosensors find wide application in clinical diagnostics, bioprocess control and environmental monitoring. They should not only show high specificity and reproducibility but also a high sensitivity and stability of the signal. Therefore, I introduce a novel sensor technology based on plasmonic nanoparticles which overcomes both of these limitations. Plasmonic nanoparticles exhibit strong absorption and scattering in the visible and near-infrared spectral range. The plasmon resonance, the collective coherent oscillation mode of the conduction band electrons against the positively charged ionic lattice, is sensitive to the local environment of the particle. I monitor these changes in the resonance wavelength by a new dark-field spectroscopy technique. Due to a strong light source and a highly sensitive detector, a temporal resolution in the microsecond regime is possible in combination with a high spectral stability. This opens a window to investigate dynamics on the molecular level and to gain knowledge about fundamental biological processes.

First, I investigate adsorption at the non-equilibrium as well as at the equilibrium state. I show the temporal evolution of single adsorption events of fibrinogen on the surface of the sensor on a millisecond timescale. Fibrinogen is a blood plasma protein with a unique shape that plays a central role in blood coagulation and is always involved in cell-biomaterial interactions. Further, I monitor equilibrium coverage fluctuations of sodium dodecyl sulfate and demonstrate a new approach to quantify the characteristic rate constants which is independent of mass transfer interference and long term drifts of the measured signal. This method has been investigated theoretically by Monte-Carlo simulations but so far there has been no sensor technology with a sufficient signal-to-noise ratio.

Second, I apply plasmonic nanoparticles as sensors for the determination of diffusion coefficients. Thereby, the sensing volume of a single, immobilized nanorod is used as detection volume. When a diffusing particle enters the detection volume, a shift in the resonance wavelength is introduced. As no labeling of the analyte is necessary, the hydrodynamic radius and thus the diffusion properties are not altered and can be studied in their natural form. In comparison to the conventional Fluorescence Correlation Spectroscopy technique, a volume reduction by a factor of 5000-10000 is reached.
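The link between the measured diffusion coefficient and the unaltered hydrodynamic radius mentioned above is the Stokes-Einstein relation, quoted here for reference (kB: Boltzmann constant, T: temperature, η: solvent viscosity, r_h: hydrodynamic radius):

```latex
D = \frac{k_{\mathrm{B}} T}{6 \pi \eta \, r_{\mathrm{h}}}
```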
Abstract:
In recent years, a major step towards greatly increased efficiency has been achieved for spin-filter detectors. This is an important prerequisite for spin-resolved measurements with modern electron spectrometers and momentum microscopes. In this doctoral thesis, previous work on the parallel-imaging technique was further developed; the technique relies on the fact that, by exploiting k-parallel conservation in low-energy electron diffraction, an electron-optical image is preserved even after reflection from a crystalline surface. Earlier measurements based on specular reflection from a W(001) surface [Kolbe et al., 2011; Tusche et al., 2011] were extended to a much larger parameter range, and with Ir(001) a new system was investigated which offers a much longer lifetime of the cleaned crystal surface in UHV. The scattering-energy and incidence-angle "landscape" of the spin sensitivity S and the reflectivity I/I0 of scattered electrons was measured in the range of 13.7-36.7 eV scattering energy and 30°-60° scattering angle. The measurement setup newly built for this purpose comprises a spin-polarized GaAs electron source and a rotatable electron detector (delay-line detector) for position-resolved detection of the scattered electrons. The results show several regions with high asymmetry and a large figure of merit (FoM), defined as S² · I/I0. These regions open a route to a significant improvement of multichannel spin-filter techniques for electron spectroscopy and momentum microscopy. In practical use, the Ir(001) single-crystal surface proved very promising with respect to its longer lifetime in UHV (about one measurement day) combined with a high FoM. The Ir(001) detector was used in combination with a hemispherical analyzer in a femtosecond time-resolved experiment at the free-electron laser FLASH at DESY. Good working points were found at 45° scattering angle and 39 eV scattering energy, with a usable energy width of 5 eV, as well as at 10 eV scattering energy with a narrower profile of < 1 eV but an approximately 10× larger figure of merit. The spin asymmetry reaches values of up to 70%, which markedly reduces the influence of instrumental asymmetries. The resulting measurements and energy-angle landscape show rather good agreement with theory (relativistic layer-KKR SPLEED code [Braun et al., 2013; Feder et al., 2012]).
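To make the figure-of-merit definition above concrete, here is a minimal sketch; only S = 0.7 is quoted in the abstract, and the reflectivity value is a purely illustrative assumption.

```python
def figure_of_merit(spin_sensitivity, reflectivity):
    """Spin-filter figure of merit, FoM = S^2 * (I / I0)."""
    return spin_sensitivity ** 2 * reflectivity

# S = 0.7 as quoted above; I/I0 = 0.01 is an assumed, illustrative reflectivity.
print(figure_of_merit(0.7, 0.01))  # 0.0049
```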
Abstract:
The importance of β-amino nitroalkanes lies in their high versatility, which allows a straightforward entry to a variety of nitrogen-containing chiral building blocks; furthermore, obtaining them in enantiopure form allows their use in the synthesis of biologically active compounds or their utilization as chiral ligands for different purposes. In this work, a reaction for obtaining enantiopure β-amino nitroalkanes through asymmetric organocatalysis has been developed. The synthetic strategy adopted for obtaining these compounds was based on the asymmetric reduction of β-amino nitroolefins in a transfer hydrogenation reaction, involving a Hantzsch ester as hydrogen source and a chiral thiourea as organic catalyst. After optimizing the reaction conditions on the β-acyl-amino nitrostyrene, we tested the generality of the reaction on other aromatic compounds and on Boc-protected substrates, both aromatic and aliphatic. A scale-up of the reaction was also performed.
Abstract:
Although physiological and pharmacological evidence suggests a role for angiotensin II (Ang II) in the mammalian heart, the source and precise location of Ang II are unknown. To visualize and quantitate Ang II in the atria, ventricular walls and interventricular septum of the rat and human heart, and to explore the feasibility of local Ang II production and function, we investigated by different methods the expression of proteins involved in the generation and function of Ang II. We found mRNA of angiotensinogen (Ang-N), of angiotensin converting enzyme, of the angiotensin receptor types AT(1A) and AT(2) (AT(1B) not detected), as well as of cathepsin D in all parts of the hearts. No renin mRNA was traceable. Ang-N mRNA was visualized by in situ hybridization in atrial ganglial neurons. Ang II and dopamine-β-hydroxylase (DβH) were either colocalized inside the same neuronal cell or the neurons were specialized for Ang II or DβH. Within these neurons, the vesicular acetylcholine transporter (VAChT) was colocalized neither with Ang II nor with DβH, but VAChT staining was found in synapses en passant encircling these neuronal cells. The fibers containing Ang II formed supposedly angiotensinergic synapses en passant with blood vessels and with cardiomyocytes. In the rat heart, the median Ang II concentration in the right atrium appeared higher than the septal and ventricular Ang II concentrations. The distinct colocalization of neuronal Ang II with DβH in the heart may indicate that Ang II participates together with norepinephrine in the regulation of cardiac functions: produced as a cardiac neurotransmitter, Ang II may have inotropic, chronotropic or dromotropic effects in atria and ventricles and may contribute to blood pressure regulation.
Abstract:
This paper provides an analysis of the key term aidagara (“betweenness”) in the philosophical ethics of Watsuji Tetsurō (1889-1960), in response to and in light of the recent movement in Japanese Buddhist studies known as “Critical Buddhism.” The Critical Buddhist call for a turn away from “topical” or intuitionist thinking and towards (properly Buddhist) “critical” thinking, while problematic in its bipolarity, raises the important issue of the place of “reason” versus “intuition” in Japanese Buddhist ethics. In this paper, a comparison of Watsuji’s “ontological quest” with that of Martin Heidegger (1889-1976), Watsuji’s primary Western source and foil, is followed by an evaluation of a corresponding search for an “ontology of social existence” undertaken by Tanabe Hajime (1885-1962). Ultimately, the philosophico-religious writings of Watsuji Tetsurō allow for the “return” of aesthesis as a modality of social being that is truly dimensionalized, and thus falls prey neither to the verticality of topicalism nor the limiting objectivity of criticalism.