936 results for delayed match-to-sample
Abstract:
Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the source, with a given probability distribution. All these algorithms are based on a new class of random walks that we call Random Centrifugal Walks (RCW). An RCW is a random walk that starts at the source and always moves away from it. First, an algorithm to sample any connected network using RCW is proposed. The algorithm assumes that each node has a weight, and the sampling process must select a node with probability proportional to its weight. This algorithm requires a preprocessing phase before nodes can be sampled: a minimum diameter spanning tree (MDST) is built in the network, and node weights are then efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is obtained with an RCW whose length is bounded by the network diameter. Second, RCW algorithms that do not require preprocessing are proposed for grids and networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) they select a node with the exact probability distribution.
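The aggregation-plus-walk idea is compact enough to sketch. The following Python fragment is not the paper's distributed protocol (which operates by message passing over the MDST); it is a minimal, centralized illustration, under the assumption that the preprocessing phase has already stored each subtree's total weight at its root. All names (Node, aggregate, rcw_sample) are hypothetical.

```python
import random

class Node:
    def __init__(self, weight):
        self.weight = weight          # this node's own sampling weight
        self.children = []            # tree neighbors farther from the source
        self.subtree_weight = weight  # filled in by the aggregation pass

def aggregate(node):
    """Bottom-up pass: store the total weight of every subtree (preprocessing)."""
    node.subtree_weight = node.weight + sum(aggregate(c) for c in node.children)
    return node.subtree_weight

def rcw_sample(source):
    """Centrifugal walk: always move away from the source, stopping at a node
    with probability weight/subtree_weight, so node v is returned with
    probability exactly weight(v)/total_weight, in at most tree-depth hops."""
    node = source
    while True:
        r = random.uniform(0, node.subtree_weight)
        if r < node.weight:
            return node                       # stop here
        r -= node.weight
        for child in node.children:           # otherwise descend into one subtree
            if r < child.subtree_weight:
                node = child
                break
            r -= child.subtree_weight
```

Because the walk only ever descends, its length is bounded by the depth of the tree, which for a minimum diameter spanning tree is itself bounded by the network diameter; this mirrors the paper's claims that no warm-up is needed and that the sampled distribution is exact.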
Abstract:
The purpose of this final-year project is the characterization and instrumentation of an ultrasonic sensor modeled by the project supervisor, Mr. César Briso Rodríguez. Once the sensor had been modeled, simulating both its physical and its electrical characteristics, it was instrumented and put to use. The instrumentation comprises the electronics needed to excite the piezoelectric element in emission mode, as well as to receive the electrical pulses that the sensor generates in response to the received echoes and to condition them to signal levels suitable for acquisition in listening mode. After conditioning, the signals are digitized, processed, and displayed on a PC screen through a USB data acquisition card, which samples the conditioned response signals and sends them to the control and display software developed in this project. The user interface, the acquisition-card control software, and the processing and display software were developed with Visual Basic 2008 and the graphics utilities of the OpenGL libraries.
Abstract:
Energy conversion in solar cells incorporating ZnTeO base layers is presented. The ZnTeO base layers incorporate intermediate electronic states located approximately 0.4 eV below the conduction band edge as a result of the substitution of O on Te sites in the ZnTe lattice. Cells with ZnTeO base layers demonstrate an optical response at energies lower than the ZnTe band edge, a feature that is absent in reference cells with ZnTe base layers. Quantum efficiency is significantly improved by the incorporation of ZnSe emitter/window layers and by the transition from growth on GaAs substrates to GaSb substrates, which are nearly lattice-matched to ZnTe.
Abstract:
Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work focuses on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models, that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique is used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests were conducted with probability models that take into account noncollinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are functions of the crystal thickness and of the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: The new technique provides superior image quality in terms of signal-to-noise ratio compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control over the trade-off between speed and quality during reconstruction.
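To make the region-search idea concrete, here is a hypothetical, much-simplified sketch of the thresholded-kernel step: voxels (already expressed in a frame aligned with the ideal LOR) are scored with an elliptical Gaussian kernel and kept only if they clear a cut-off threshold. The function name, its arguments, and the choice of a Gaussian are illustrative; the actual algorithm searches the contour dynamically and supports arbitrary numerical kernels.

```python
import numpy as np

def ror_mask(coords, sigma_u, sigma_v, threshold):
    """coords: (N, 3) voxel centers in the LOR-aligned frame, with axes u and v
    transverse to the LOR. sigma_u, sigma_v model the elliptical section of the
    tube of response. Returns the ROR membership mask and the kernel values."""
    u, v = coords[:, 0], coords[:, 1]
    p = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return p >= threshold, p
```

Raising the threshold shrinks the ROR and speeds up the reconstruction at some cost in accuracy, which is the speed/quality trade-off the conclusions refer to.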
Abstract:
Recent technological developments allow the transition of observational oceanography from a ship-based concept to a networked one, in which the most efficient and effective way to observe the ocean is a fleet of spatially distributed autonomous platforms complemented by remote sensing. Due to their maneuverability, autonomy, and endurance at sea, underwater gliders are already playing a significant role in this networked observational approach. Underwater gliders were specifically designed to sample vast areas of the ocean. They are torpedo-shaped robots that use their hydrodynamic shape, wings, and buoyancy changes to induce horizontal and vertical motions through the water column. A sensor that measures conductivity, temperature, and depth (CTD) is a standard payload on these platforms, because certain ocean dynamic variables can be derived from temperature, depth, and salinity, the last of which can be inferred from the temperature and conductivity measurements. Integrating CTD sensors into glider platforms is not free of challenges. One of them concerns the accuracy of the salinity values derived from the sampled conductivity and temperature. Specifically, salinity estimates are significantly degraded by the thermal lag between the measured temperature and the real temperature inside the conductivity cell of the sensor. This deficiency depends on the particularities of the inflow to the sensor, on its geometry, and, it has also been hypothesized, on the heat accumulated in the sensor's coating layers. The effects of thermal lag are usually mitigated by controlling the inflow conditions through the sensor, generally by pumping water through it or by keeping its speed constant and known. Although pumping systems have recently been incorporated into CTD sensors on board gliders, there are still platforms with unpumped CTDs. In the latter case, salinity estimates rely on the assumption of reasonably controlled and unperturbed flow conditions at the CTD sensor.
This Thesis investigates the impact, if any, that glider hydrodynamics may have on the performance of onboard CTDs. Specifically, the location of the CTD sensor (external to the hull) relative to the boundary layer developed along the glider fuselage is investigated first. This is done initially with a coupled inviscid/boundary-layer model implemented by the author, and later with commercial computational fluid dynamics (CFD) software. Results indicate, in both cases, that the CTD sensor lies outside the boundary layer, so its inflow conditions are those of the free stream. Still, the inflow speed at the CTD sensor is the speed of the platform, which depends on its hydrodynamics. For this reason, the research was extended to investigate the effect of platform speed on the performance of the CTD sensor, and a finite element model of the hydrodynamic and thermal behavior of the flow inside the CTD was developed for this purpose. The numerical results suggest that the thermal lag, originally attributed to heat accumulation in the sensor structure, is mostly due to the interaction of the flow through the conductivity cell with the cell's internal geometry, and that this interaction differs at different glider speeds. Specifically, at low glider speeds (0.2 m/s), the mixing of incoming water with the water masses remaining inside the cell is slowed by the generation of coherent eddy structures, and significant departures between the real and estimated salinity are found. At higher glider speeds (0.4 m/s), mixing is enhanced by turbulence and instabilities; as a result, the thermal response of the CTD sensor is faster and the salinity estimates are more accurate than in the low-speed case. To complete the work, the numerical results were validated against model tests: a scaled model of the CTD sensor was built to obtain experimental confirmation of the numerical findings. Making use of the similarity principle governing incompressible flows, the experiments were carried out with air, which significantly simplifies the experimental setup and makes it feasible with limited resources. The model tests qualitatively confirm the numerical findings. Moreover, this Thesis suggests that the response of the CTD sensor would be significantly improved by adding small turbulators at suitable locations inside the conductivity cell.
Abstract:
A simple reading of the newspapers, in print or on the Internet, or of the news broadcast on radio or television, is enough to convey the magnitude of the social housing problem around the world. The present work is the result of scientific research carried out in support of a Doctoral Thesis to be presented at the Universidad Politécnica de Madrid. The research systematized readings and data on the housing question worldwide, and above all in Brazil, focusing on populations in northeastern Brazil with incomes below €300 per month. Building on this research, which addressed ergonomic, social, conceptual, design, environmental, architectural, and urban-planning aspects, as well as statistical, technological, infrastructural, economic, and commercial ones, the thesis proposes a methodology for the urban implantation of planned social housing settlements and for the serial production of single-family dwelling units. The urban question was the researcher's first concern, both in the strategic planning of cities and in the themes of spontaneity, intentionality, territoriality, and the centrality of human settlements in modernity and postmodernity. Building and housing-estate projects, as well as urban plans, were used as case studies to ground the analyses presented in the thesis. The housing designs were premised on the best possible fit to the bioclimatic characteristics of the chosen sites and on meeting the needs of the future users of the built architectural objects. The adopted architectural solutions sought to be coherent with the anthropological and cultural values of the populations served and to match the financial capacity of the communities receiving the buildings. The thesis also correlates Brazil's official social-housing policies with those found in developed countries, together with the historical background of their particular housing problems, which culminated in the production of specific management strategies. Finally, much of the North American and European experience with industrialization was drawn upon for the serial production of housing units in northeastern Brazil, so that the construction technology can be operated both by a formal manufacturing system and through the self-management and self-construction processes currently adopted on a large scale in Brazil.
Abstract:
Coincidence detection is important for functions as diverse as Hebbian learning, binaural localization, and visual attention. We show here that extremely precise coincidence detection is a natural consequence of the normal function of rectifying electrical synapses. Such synapses open to bidirectional current flow when presynaptic cells depolarize relative to their postsynaptic targets and remain open until well after completion of presynaptic spikes. When multiple input neurons fire simultaneously, the synaptic currents sum effectively and produce a large excitatory postsynaptic potential. However, when some inputs are delayed relative to the rest, their contributions are reduced because the early excitatory postsynaptic potential retards the opening of additional voltage-sensitive synapses, and the late synaptic currents are shunted by already opened junctions. These mechanisms account for the ability of the lateral giant neurons of crayfish to sum synchronous inputs, but not inputs separated by only 100 μsec. This coincidence detection enables crayfish to produce reflex escape responses only to very abrupt mechanical stimuli. In light of recent evidence that electrical synapses are common in the mammalian central nervous system, the mechanisms of coincidence detection described here may be widely used in many systems.
Abstract:
There is considerable evidence from animal studies that gonadal steroid hormones modulate neuronal activity and affect behavior. To study this in humans directly, we used H₂¹⁵O positron-emission tomography to measure regional cerebral blood flow (rCBF) in young women during three pharmacologically controlled hormonal conditions spanning 4–5 months: ovarian suppression induced by the gonadotropin-releasing hormone agonist leuprolide acetate (Lupron), Lupron plus estradiol replacement, and Lupron plus progesterone replacement. Estradiol and progesterone were administered in a double-blind cross-over design. On each occasion positron-emission tomography scans were performed during (i) the Wisconsin Card Sorting Test, a neuropsychological test that physiologically activates prefrontal cortex (PFC) and an associated cortical network including inferior parietal lobule and posterior inferolateral temporal gyrus, and (ii) a no-delay matching-to-sample sensorimotor control task. During treatment with Lupron alone (i.e., with virtual absence of gonadal steroid hormones), there was marked attenuation of the typical Wisconsin Card Sorting Test activation pattern even though task performance did not change. Most strikingly, there was no rCBF increase in PFC. When either progesterone or estrogen was added to the Lupron regimen, there was normalization of the rCBF activation pattern with augmentation of the parietal and temporal foci and return of the dorsolateral PFC activation. These data directly demonstrate that the hormonal milieu modulates cognition-related neural activity in humans.
Abstract:
Growth factors can influence lineage determination of neural crest stem cells (NCSCs) in an instructive manner in vitro. Because NCSCs are likely exposed to multiple signals in vivo, these findings raise the question of how stem cells would integrate such combined influences. Bone morphogenetic protein 2 (BMP2) promotes neuronal differentiation and glial growth factor 2 (GGF2) promotes glial differentiation; if NCSCs are exposed to saturating concentrations of both factors, BMP2 appears dominant. By contrast, if the cells are exposed to saturating concentrations of both BMP2 and transforming growth factor β1 (which promotes smooth muscle differentiation), the two factors appear codominant. Sequential addition experiments indicate that NCSCs require 48–96 hr in GGF2 before they commit to a glial fate, whereas the cells commit to a smooth muscle fate within 24 hr in transforming growth factor β1. The delayed response to GGF2 does not reflect a lack of functional receptors, however, because the growth factor induces rapid mitogen-activated protein kinase phosphorylation in naive cells. Furthermore, GGF2 can attenuate the induction of the neurogenic transcription factor mammalian achaete-scute homolog 1 by low doses of BMP2. This short-term antineurogenic influence of GGF2 is not sufficient for glial lineage commitment, however. These data imply that NCSCs exhibit cell-intrinsic biases in the timing and relative dosage sensitivity of their responses to instructive factors, which influence the outcome of lineage decisions in the presence of multiple factors. The relative delay in glial lineage commitment, moreover, apparently reflects successive short-term and longer-term actions of GGF2. Such a delay may help to explain why glia normally differentiate after neurons in vivo.
Abstract:
Genes that are characteristic of only certain strains of a bacterial species can be of great biologic interest. Here we describe a PCR-based subtractive hybridization method for efficiently detecting such DNAs and apply it to the gastric pathogen Helicobacter pylori. Eighteen DNAs specific to a monkey-colonizing strain (J166) were obtained by subtractive hybridization against an unrelated strain whose genome has been fully sequenced (26695). Seven J166-specific clones had no DNA sequence match to the 26695 genome, and 11 other clones were mixed, with adjacent patches that did and did not match any sequences in 26695. At the protein level, seven clones had homology to putative DNA restriction-modification enzymes, and two had homology to putative metabolic enzymes. Nine others had no database match with proteins of assigned function. PCR tests of 13 unrelated H. pylori strains by using primers specific for 12 subtracted clones and complementary Southern blot hybridizations indicated that these DNAs are highly polymorphic in the H. pylori population, with each strain yielding a different pattern of gene-specific PCR amplification. The search for polymorphic DNAs, as described here, should help identify previously unknown virulence genes in pathogens and provide new insights into microbial genetic diversity and evolution.
Abstract:
Dynamic importance weighting is proposed as a Monte Carlo method that has the capability to sample relevant parts of the configuration space even in the presence of many steep energy minima. The method relies on an additional dynamic variable (the importance weight) to help the system overcome steep barriers. A non-Metropolis theory is developed for the construction of such weighted samplers. Algorithms based on this method are designed for simulation and global optimization tasks arising from multimodal sampling, neural network training, and the traveling salesman problem. Numerical tests on these problems confirm the effectiveness of the method.
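As an illustration of what such a weighted sampler can look like, the sketch below implements one common formulation of a dynamically weighted transition (an R-type move), assuming a symmetric proposal so the Metropolis ratio reduces to pi(y)/pi(x). The density pi, the proposal, and the tuning constant theta are all illustrative; the paper's general non-Metropolis theory covers a broader family of moves.

```python
import random

def r_type_move(x, w, pi, propose, theta=1.0):
    """Advance the augmented state (x, w), where w is the importance weight."""
    y = propose(x)
    rd = w * pi(y) / pi(x)            # dynamic ratio (symmetric proposal assumed)
    a = rd / (rd + theta)             # acceptance probability
    if random.random() < a:
        return y, rd + theta          # accept: move, weight absorbs the correction
    return x, w * (rd + theta) / theta  # reject: stay put, inflate the weight
```

Because the weight can grow to compensate for an improbable move, the chain can cross steep barriers that would trap a standard Metropolis sampler; expectations are then recovered as weighted averages, sum(w_i * f(x_i)) / sum(w_i).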
Abstract:
The ability of antigen-presenting cells to sample distinct intracellular compartments is crucial for microbe detection. Major histocompatibility complex class I and class II molecules sample the cytosol or the late endocytic compartment, allowing detection of microbial peptide antigens that arise in distinct intracellular compartments. In contrast, CD1a and CD1b molecules mediate the presentation of lipid and glycolipid antigens and differentially sample early recycling endosomes or late endocytic compartments, respectively, which contain distinct sets of lipid antigens. Here we show that, unlike the other CD1 isoforms or major histocompatibility complex molecules, each of which samples only restricted intracellular compartments, CD1c is remarkable in that it distributes broadly throughout the endocytic system and is expressed in both recycling endosomes and late endocytic compartments. Further, in contrast to CD1b, which requires an acidic environment to function, antigen presentation by CD1c was able to overcome dependence on vesicular acidification. Because CD1c is expressed on essential antigen-presenting cells, such as epidermal Langerhans cells (in the absence of CD1b), or on B cells (without CD1a or -b), we suggest that CD1c molecules allow a comprehensive survey for lipid antigens throughout the endocytic system even in the absence of other CD1 isoforms.
Abstract:
Spectral analysis of climate data shows a strong narrow peak with period ≈100 kyr, attributed by the Milankovitch theory to changes in the eccentricity of the earth's orbit. The narrowness of the peak does suggest an astronomical origin; however, the shape of the peak is incompatible with both linear and nonlinear models that attribute the cycle to eccentricity or (equivalently) to the envelope of the precession. In contrast, the orbital inclination parameter gives a good match to both the spectrum and bispectrum of the climate data. Extraterrestrial accretion from meteoroids or interplanetary dust is proposed as a mechanism that could link inclination to climate, and experimental tests are described that could prove or disprove this hypothesis.
Abstract:
Recent major advances in x-ray imaging and spectroscopy of clusters have allowed the determination of their mass and mass profile out to ≈1/2 the virial radius. In rich clusters, most of the baryonic mass is in the gas phase, and the ratio of mass in gas to mass in stars varies by a factor of 2–4. The baryonic fractions vary by a factor of ≈3 from cluster to cluster and almost always exceed 0.09 h50^(-3/2), and thus are in fundamental conflict with the assumption of Ω = 1 and the results of big bang nucleosynthesis. The derived Fe abundances are 0.2–0.45 solar, and the abundances of O and Si for low-redshift systems are 0.6–1.0 solar. This distribution is consistent with an origin in pure type II supernovae. The amount of light and energy produced by these supernovae is very large, indicating their importance in influencing the formation of clusters and galaxies. The lack of evolution of Fe out to a redshift of z ≈ 0.4 argues for very early enrichment of the cluster gas. Groups show a wide range of abundances, 0.1–0.5 solar. The results of an x-ray survey indicate that the contribution of groups to the mass density of the universe is likely to be larger than 0.1 h50^(-2). Many of the very poor groups have large x-ray halos and are filled with small galaxies whose velocity dispersion is a good match to the x-ray temperatures.
Abstract:
To replicate, HIV-1 must integrate a cDNA copy of the viral RNA genome into a chromosome of the host. The integration system is a promising target for antiretroviral agents, but to date no clinically useful integration inhibitors have been identified. Previous screens for integrase inhibitors have assayed inhibition of reactions containing HIV-1 integrase purified from an Escherichia coli expression system. Here we compare action of inhibitors in vitro on purified integrase and on subviral preintegration complexes (PICs) isolated from lymphoid cells infected with HIV-1. We find that many inhibitors active against purified integrase are inactive against PICs. Using PIC assays as a primary screen, we have identified three new anthraquinone inhibitors active against PICs and also against purified integrase. We propose that PIC assays are the closest in vitro match to integration in vivo and, as such, are particularly appropriate for identifying promising integration inhibitors.