948 results for LOW-RESOLUTION STRUCTURES
Abstract:
Structural analyses of heterologously expressed mammalian membrane proteins remain a great challenge, given that microgram to milligram amounts of correctly folded and highly purified proteins are required. Here, we present a novel method for the expression and affinity purification of recombinant mammalian, and in particular human, transport proteins in Xenopus laevis frog oocytes. The method was validated for four human and one murine transporter. Negative stain transmission electron microscopy (TEM) and single particle analysis (SPA) of two of these transporters, i.e., the potassium-chloride cotransporter 4 (KCC4) and the aquaporin-1 (AQP1) water channel, revealed the expected quaternary structures within homogeneous preparations, and thus correct protein folding and assembly. This is the first time a cation-chloride cotransporter (SLC12) family member has been isolated and its shape, dimensions, low-resolution structure, and oligomeric state determined by TEM, i.e., by a direct method. Finally, we were able to grow 2D crystals of human AQP1. The ability of AQP1 to crystallize was a strong indicator of the structural integrity of the purified recombinant protein. This approach will open the way for the structure determination of many human membrane transporters, taking full advantage of the Xenopus laevis oocyte expression system, which generally yields robust functional expression.
Abstract:
OBJECTIVE: NoGo-stimuli during a Continuous Performance Test (CPT) activate prefrontal brain structures such as the anterior cingulate gyrus and lead to an anteriorisation of the positive electrical field of the NoGo-P300 relative to the Go-P300, the so-called NoGo-anteriorisation (NGA). The NGA during the CPT is regarded as a standard neurophysiological index of cognitive response control. While it is known that patients with chronic schizophrenia exhibit a significant reduction in NGA, it is unclear whether this also occurs in patients experiencing their first episode. Thus, the aim of the present study was to determine the NGA in a group of patients with first-episode schizophrenia using a CPT paradigm. METHODS: Eighteen patients with first-episode schizophrenia and 18 matched healthy subjects were investigated electrophysiologically during a cued CPT, and the parameters of the Go- and NoGo-P300 were determined using microstate analysis. Low resolution brain electromagnetic tomography (LORETA) was used for source determination. RESULTS: Owing to a more posterior Go- and a more anterior NoGo-centroid, the NGA was greater in patients than in healthy controls. LORETA indicated the same sources for both groups after Go-stimuli, but a more anterior source in patients after NoGo-stimuli. In patients, P300-amplitudes in response to both Go- and NoGo-stimuli were decreased, and P300-latency to NoGo-stimuli was increased. After Go-stimuli, false reactions and reaction times were increased in patients. CONCLUSIONS: Attention was reduced in patients with first-episode schizophrenia, as indicated by more false reactions, prolonged reaction times and P300-latencies, and decreased P300-amplitudes. Importantly, however, the NGA and the prefrontal LORETA sources indicate intact prefrontal brain structures in patients with first-episode schizophrenia. Previously described changes in this indicator of prefrontal function may be related to a progressive decay in chronic schizophrenia. SIGNIFICANCE: The results support the idea of a possible new biological marker of first-episode psychosis, which may be a useful parameter for the longitudinal measurement of changing prefrontal brain function in individual schizophrenia patients.
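For readers unfamiliar with the centroid measure behind the NGA, a minimal illustrative sketch follows (the electrode layout, amplitude values, and function names are invented for illustration and are not taken from the study): the centroid of the positive P300 field is the amplitude-weighted mean electrode position along the anterior-posterior axis, and the NGA is the anterior shift of the NoGo centroid relative to the Go centroid.

    import numpy as np

    def ap_centroid(amplitudes, y_positions):
        """Amplitude-weighted centroid along the anterior-posterior axis;
        only the positive part of the field contributes."""
        a = np.clip(amplitudes, 0.0, None)
        return float(np.sum(a * y_positions) / np.sum(a))

    # Hypothetical 4-electrode midline montage (Fz, Cz, Pz, Oz); larger y = more anterior.
    y = np.array([3.0, 2.0, 1.0, 0.0])
    go_map = np.array([1.0, 3.0, 6.0, 2.0])     # posterior Go-P300 maximum
    nogo_map = np.array([5.0, 6.0, 2.0, 1.0])   # anterior NoGo-P300 maximum

    nga = ap_centroid(nogo_map, y) - ap_centroid(go_map, y)
    print(f"NGA = {nga:.2f} (positive = NoGo field more anterior)")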
Abstract:
PURPOSE: To determine the feasibility of using a high resolution isotropic three-dimensional (3D) fast T1 mapping sequence for delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) to assess osteoarthritis in the hip. MATERIALS AND METHODS: T1 maps of the hip were acquired using both low and high resolution techniques following the administration of 0.2 mmol/kg Gd-DTPA²⁻ in 35 patients. Both T1 maps were generated from two separate spoiled GRE images. The high resolution T1 map was reconstructed in the plane anatomically equivalent to that of the low resolution map. T1 values from the equivalent anatomic regions containing femoral and acetabular cartilages were measured on the low and high resolution maps and compared using regression analysis. RESULTS: In vivo T1 measurements showed a statistically significant correlation between the low and high resolution acquisitions at 1.5 Tesla (R² = 0.958, P < 0.001). These results demonstrate the feasibility of using a fast two-angle T1 mapping (F2T1) sequence with isotropic spatial resolution (0.8 × 0.8 × 0.8 mm) for quantitative assessment of biochemical status in articular cartilage of the hip. CONCLUSION: The high resolution 3D F2T1 sequence provides accurate T1 measurements in femoral and acetabular cartilages of the hip, which enables the biochemical assessment of articular cartilage in any plane through the joint. It is a powerful tool for researchers and clinicians to acquire high resolution data in a reasonable scan time (< 30 min).
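Although the abstract does not spell out the fitting step, two-flip-angle spoiled-GRE T1 estimates are conventionally obtained from the linearized SPGR signal equation. The sketch below shows that standard two-point calculation (variable names and test values are illustrative, and the actual F2T1 implementation may differ): plotting S/sin(alpha) against S/tan(alpha) gives a line of slope E1 = exp(-TR/T1).

    import numpy as np

    def t1_from_two_angles(s1, s2, alpha1, alpha2, tr):
        """Estimate T1 from two spoiled-GRE signals via the linearized SPGR
        equation: y = E1 * x + M0*(1 - E1), with y = S/sin(a), x = S/tan(a),
        and E1 = exp(-TR/T1). Two points define the slope E1."""
        y1, y2 = s1 / np.sin(alpha1), s2 / np.sin(alpha2)
        x1, x2 = s1 / np.tan(alpha1), s2 / np.tan(alpha2)
        e1 = (y2 - y1) / (x2 - x1)          # slope of the two-point line
        return -tr / np.log(e1)

    # Illustrative check: simulate a voxel with T1 = 600 ms at TR = 15 ms.
    tr, t1_true, m0 = 15.0, 600.0, 1.0
    e1 = np.exp(-tr / t1_true)

    def spgr(a):
        return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

    a1, a2 = np.deg2rad(5.0), np.deg2rad(25.0)
    print(t1_from_two_angles(spgr(a1), spgr(a2), a1, a2, tr))  # ~600.0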
Abstract:
Despite efforts implicating the cationic channel transient receptor potential melastatin member 4 (TRPM4) in cardiac, nervous, and immunological pathologies, little is known about its structure and function. In this study, we optimized the requirements for the purification and extraction of functional human TRPM4 protein and investigated its supramolecular assembly. We selected the Xenopus laevis oocyte expression system because it lacks endogenous TRPM4 expression, is known to overexpress functional human membrane channels, can be used for structure-function analysis within the same system, and is easily scaled to improve yield and develop moderate-throughput capabilities through the use of robotics. Negative-stain electron microscopy (EM) revealed low-resolution particles of various sizes. Single particle analysis identified that the majority of the projections represented the monomeric form, with additional oligomeric structures potentially characterized as tetramers. Two-electrode voltage clamp electrophysiology demonstrated that human TRPM4 is functionally expressed at the oocyte plasma membrane. This study opens the door for medium-throughput screening and structure-function determination of this important, therapeutically relevant target.
Abstract:
Performing a prospective memory task repeatedly changes the nature of the task from episodic to habitual. The goal of the present study was to investigate the neural basis of this transition. In two experiments, we contrasted event-related potentials (ERPs) evoked by correct responses to prospective memory targets in the first, more episodic part of the experiment with those of the second, more habitual part of the experiment. Specifically, we tested whether the early, middle, or late ERP components, which are thought to reflect cue detection, retrieval of the intention, and post-retrieval processes, respectively, would be changed by routinely performing the prospective memory task. The results showed a differential ERP effect in the middle time window (450–650 ms post-stimulus). Source localization using low resolution brain electromagnetic tomography analysis (LORETA) suggests that the transition was accompanied by an increase of activation in the posterior parietal and occipital cortex. These findings indicate that habitual prospective memory involves retrieval processes guided more strongly by parietal brain structures. In brief, the study demonstrates that episodic and habitual prospective memory tasks recruit different brain areas.
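For context, LORETA-style source estimates belong to the family of weighted minimum-norm inverses; in generic form (this is the standard textbook statement, not the study's exact operator), with lead field K, scalp measurements Phi, and a smoothness weighting W built from a discrete spatial Laplacian:

    \hat{J} = \arg\min_{J} \lVert W J \rVert^{2} \quad \text{s.t.} \quad K J = \Phi
    \qquad \Longrightarrow \qquad
    \hat{J} = (W^{\top} W)^{-1} K^{\top} \bigl( K (W^{\top} W)^{-1} K^{\top} \bigr)^{+} \Phi ,

where the superscript + denotes the Moore-Penrose pseudo-inverse.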
Abstract:
The neodymium (Nd) isotopic composition (εNd) of seawater is a quasi-conservative tracer of water mass mixing and is assumed to hold great potential for paleoceanographic studies. Here we present a comprehensive approach for the simulation of the two neodymium isotopes ¹⁴³Nd and ¹⁴⁴Nd using the Bern3D model, a low-resolution ocean model. The high computational efficiency of the Bern3D model, in conjunction with our comprehensive approach, allows us to systematically and extensively explore the sensitivity of Nd concentrations and εNd to the parametrisation of sources and sinks. Previous studies have been restricted in doing so either by the chosen approach or by computational costs. Our study thus presents the most comprehensive survey of the marine Nd cycle to date. Our model simulates both Nd concentrations and εNd in good agreement with observations. εNd covaries with salinity, thus underlining its potential as a water mass proxy. Results confirm that the continental margins are required as a Nd source to simulate Nd concentrations and εNd consistent with observations. We estimate this source to be slightly smaller than reported in previous studies and find that, above a certain magnitude, the size of this source affects εNd only to a small extent. On the other hand, the parametrisation of reversible scavenging considerably affects the ability of the model to simulate both Nd concentrations and εNd. Furthermore, despite their small contribution, we find dust and rivers to be important components of the Nd cycle. In additional experiments, we systematically varied the diapycnal diffusivity as well as the Atlantic-to-Pacific freshwater flux to explore the sensitivity of Nd concentrations and the isotopic signature to the strength and geometry of the overturning circulation. These experiments reveal that Nd concentrations and εNd are comparatively little affected by variations in diapycnal diffusivity and the Atlantic-to-Pacific freshwater flux. In contrast, an adequate representation of Nd sources and sinks is crucial to simulate Nd concentrations and εNd consistent with observations. The good agreement of our results with observations paves the way for the evaluation of the paleoceanographic potential of εNd in further model studies.
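For reference, the εNd notation is the standard part-per-10⁴ deviation of a sample's ¹⁴³Nd/¹⁴⁴Nd ratio from the chondritic uniform reservoir (CHUR); the commonly used CHUR ratio is shown for concreteness:

    \varepsilon_{\mathrm{Nd}} = \left( \frac{(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{sample}}}
    {(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{CHUR}}} - 1 \right) \times 10^{4},
    \qquad (^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{CHUR}} \approx 0.512638 .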
Abstract:
Purpose: Ophthalmologists are confronted with a set of different image modalities to diagnose eye tumors, e.g., fundus photography, CT, and MRI. However, these images are often complementary and represent pathologies differently. Some aspects of tumors can only be seen in a particular modality. A fusion of modalities would improve the contextual information for diagnosis. The presented work attempts to register color fundus photography with MRI volumes. This would complement the low resolution 3D information in the MRI with high resolution 2D fundus images. Methods: MRI volumes were acquired from 12 infants under the age of 5 with unilateral retinoblastoma. The contrast-enhanced T1-FLAIR sequence was performed with an isotropic resolution of less than 0.5 mm. Fundus images were acquired with a RetCam camera. For healthy eyes, two landmarks were used: the optic disk and the fovea. The eyes were detected and extracted from the MRI volume using a 3D adaptation of the Fast Radial Symmetry Transform (FRST). The cropped volume was automatically segmented using the Split Bregman algorithm. The optic nerve was enhanced by a Frangi vessel filter. By intersecting the nerve with the retina, the optic disk was found. The fovea position was estimated by constraining the position with the angle between the optic and the visual axis as well as the distance from the optic disk. The optical axis was detected automatically by fitting a parabola onto the lens surface. On the fundus, the optic disk and the fovea were detected using the method of Budai et al. Finally, the image was projected onto the segmented surface using the lens position as the camera center. In tumor-affected eyes, the manually segmented tumors were used instead of the optic disk and macula for the registration. Results: In all 12 MRI volumes tested, the 24 eyes were found correctly, including healthy and pathological cases. In healthy eyes, the optic nerve head was found in all of the tested eyes with an error of 1.08 ± 0.37 mm. A successful registration can be seen in figure 1. Conclusions: The presented method is a step toward the automatic fusion of modalities in ophthalmology. The combination enhances the MRI volume with the higher resolution of the color fundus image on the retina. Tumor treatment planning is improved by avoiding critical structures, and disease progression monitoring is made easier.
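The fovea constraint described in the Methods can be pictured with a small geometric sketch. Everything below (function names, the 5-degree default angle, and the toy eye dimensions) is an illustrative assumption rather than the paper's code or parameters: tilt the optical axis by the optic/visual-axis angle within the plane containing the optic disk, then take the posterior intersection of the resulting visual axis with the retinal sphere.

    import numpy as np

    def rotate_in_plane(u, v, angle):
        """Rotate unit vector u by `angle` (rad) within the plane spanned by u and v."""
        w = v - np.dot(v, u) * u
        w = w / np.linalg.norm(w)
        return np.cos(angle) * u + np.sin(angle) * w

    def estimate_fovea(center, radius, optic_axis, disk, kappa_deg=5.0):
        """Tilt the (anterior-pointing) optical axis by kappa toward the disk
        side, so the posterior intersection of the visual axis with the eye
        sphere lands temporal to the posterior pole, opposite the optic disk.
        kappa_deg is an illustrative default, not a measured value."""
        u = optic_axis / np.linalg.norm(optic_axis)
        d = (disk - center) / np.linalg.norm(disk - center)
        visual = rotate_in_plane(u, d, np.deg2rad(kappa_deg))
        return center - radius * visual

    # Toy eye: 12 mm sphere at the origin, optical axis +z, disk nasal (+x), posterior.
    c, r = np.zeros(3), 12.0
    disk = c + r * (np.array([2.0, 0.0, -11.8]) / np.linalg.norm([2.0, 0.0, -11.8]))
    print(estimate_fovea(c, r, np.array([0.0, 0.0, 1.0]), disk))  # fovea at negative x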
Abstract:
Modern scleractinian corals are classical components of marine shallow warm-water ecosystems. Their occurrence and diversity patterns in the geological record have been widely used to infer past climates and environmental conditions. Coral skeletal composition data reflecting the nature of the coral environment are often affected by diagenetic alteration. Ghost structures of annual growth rhythms are, however, often well preserved in the transformed skeleton. We show that these relicts represent a valuable source of information on the growth conditions of fossil corals. Annual growth bands were measured in massive hemispherical Porites of late Miocene age from the island of Crete (Greece) that were found in patch reefs and level-bottom associations of attached mixed clastic environments as well as isolated carbonate environments. The Miocene corals grew slowly, about 2–4 mm/yr, compatible with present-day Porites from high-latitude reefs. The slow annual growth of the Miocene corals is in good agreement with the position of Crete at the margin of the Miocene reef belt. Within a given time slice, extension rates were lowest in level-bottom environments and highest in attached inshore reef systems. Because sea surface temperatures (SSTs) can be expected to be uniform within a time slice, spatial variations in extension rates must reflect local variations in light levels (low in the level-bottom communities) and nutrients (high in the attached reef systems). During the late Miocene (Tortonian-early Messinian), maximum linear extension rates remained remarkably constant within seven chronostratigraphic units, and if the relationship between SSTs and annual growth rates observed for modern massive Indo-Pacific Porites spp. applies to the Neogene, minimum (winter) SSTs were 20-21°C. Although our paleoclimatic record has a low resolution, it fits the trends revealed by global data sets. In the near future we expect this new and easy-to-use Porites thermometer to add important new information to our understanding of Neogene climate.
Abstract:
Respiratory motion is a major source of reduced quality in positron emission tomography (PET). In order to minimize its effects, the use of respiratory-synchronized acquisitions, leading to gated frames, has been suggested. Such frames, however, have a low signal-to-noise ratio (SNR) as they contain reduced statistics. Super-resolution (SR) techniques make use of the motion in a sequence of images in order to improve their quality. They aim at enhancing a low-resolution image belonging to a sequence of images representing different views of the same scene. In this work, a maximum a posteriori (MAP) super-resolution algorithm has been implemented and applied to respiratory gated PET images for motion compensation. An edge-preserving Huber regularization term was used to ensure convergence. Motion fields were recovered using a B-spline based elastic registration algorithm. The performance of the SR algorithm was evaluated through the use of both simulated and clinical datasets by assessing image SNR, as well as the contrast, position, and extent of the different lesions. Results were compared to summing the registered synchronized frames on both simulated and clinical datasets. The super-resolution image had higher SNR (by a factor of over 4 on average) and lesion contrast (by a factor of 2) than the single respiratory synchronized frame using the same reconstruction matrix size. In comparison to the motion corrected or the motion free images, a similar SNR was obtained, while improvements of up to 20% in the recovered lesion size and contrast were measured. Finally, the recovered lesion locations on the SR images were systematically closer to the true simulated lesion positions. These observations concerning the SNR, lesion contrast, and size were confirmed on two clinical datasets included in the study. In conclusion, the use of SR techniques applied to respiratory motion synchronized images leads to motion compensation combined with improved image SNR and contrast, without any increase in the overall acquisition times.
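To make the MAP formulation concrete, here is a minimal 1-D sketch (the operators, regularization weights, and toy signal are illustrative assumptions; the paper's implementation operates on 3-D PET volumes with B-spline motion fields): minimize a data-fidelity term summed over the gated frames plus an edge-preserving Huber penalty, by gradient descent.

    import numpy as np

    def huber_grad(d, delta):
        """Derivative of the Huber penalty: quadratic for |d| <= delta, linear beyond."""
        return np.where(np.abs(d) <= delta, d, delta * np.sign(d))

    def map_sr(frames, operators, lam=0.1, delta=0.05, steps=200, lr=0.5):
        """Minimal MAP super-resolution sketch: minimize
        sum_k ||A_k x - y_k||^2 + lam * Huber(D x), where each A_k warps and
        downsamples the high-resolution signal x and D is a finite difference."""
        n = operators[0].shape[1]
        x = np.zeros(n)
        for _ in range(steps):
            g = np.zeros(n)
            for a, y in zip(operators, frames):
                g += 2.0 * a.T @ (a @ x - y)    # data-fidelity gradient
            h = huber_grad(np.diff(x), delta)   # rho'(D x)
            g[:-1] -= lam * h                   # D^T rho'(D x), first half
            g[1:] += lam * h                    # D^T rho'(D x), second half
            x -= lr * g / len(operators)
        return x

    # Toy test: a step edge observed through 3 shifted 2x-downsampling operators.
    n, m = 32, 16
    truth = np.repeat([0.0, 1.0], n // 2)
    ops = []
    for shift in (0, 1, 2):
        a = np.zeros((m, n))
        for i in range(m):
            a[i, (2 * i + shift) % n] = 1.0     # shifted sampling as a stand-in for motion
        ops.append(a)
    frames = [a @ truth for a in ops]
    print(np.round(map_sr(frames, ops), 2))     # recovers the step edge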
Abstract:
In this paper we propose an innovative approach to the problem of traffic sign detection using a computer vision algorithm under real-time operation constraints, establishing intelligent strategies to simplify the algorithm as much as possible and to speed up the process. Firstly, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy, where the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Given the time constraints, efficiency is achieved in two ways: on the one hand, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs. Namely, the tracking of objects of interest allows inhibition areas to be generated, i.e., areas where no new traffic signs are expected to appear due to the existence of a traffic sign in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.
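A per-candidate tracker of the kind described can be as small as the following constant-velocity Kalman filter (the state layout, noise levels, and toy measurements are illustrative assumptions, not the authors' parameters); the predicted centroid is also the natural center for an inhibition area.

    import numpy as np

    class SignTrack:
        """Constant-velocity Kalman filter over a sign's image position.
        State: [x, y, vx, vy]; measurement: detected centroid [x, y]."""
        def __init__(self, xy, dt=1.0, q=1e-2, r=2.0):   # q, r: illustrative noise levels
            self.x = np.array([xy[0], xy[1], 0.0, 0.0])
            self.P = np.eye(4) * 10.0
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
            self.H = np.eye(2, 4)
            self.Q = np.eye(4) * q
            self.R = np.eye(2) * r

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]                  # predicted centroid (inhibition-area center)

        def update(self, z):
            y = np.asarray(z) - self.H @ self.x            # innovation
            s = self.H @ self.P @ self.H.T + self.R
            k = self.P @ self.H.T @ np.linalg.inv(s)
            self.x = self.x + k @ y
            self.P = (np.eye(4) - k @ self.H) @ self.P

    # Toy usage: a sign drifting across the image as the vehicle approaches.
    trk = SignTrack((100.0, 60.0))
    for t in range(1, 5):
        trk.predict()
        trk.update((100.0 + 3 * t, 60.0 + 1 * t))
    print(np.round(trk.x, 2))   # position and estimated velocity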
Abstract:
This work explores the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms were designed with a view to their possible implementation on specific-purpose processors of low complexity. The first chapter reviews the techniques for the detection and measurement of gamma radiation employed to construct the spectra used throughout the work. The basic concepts related to the nature of hard electromagnetic radiation are re-examined, together with the physical processes and the electronic treatment involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, which is considered as a classification of the number of individual photon detections as a function of the supposedly continuous energy associated with them. To this end, a brief description of the main matter-radiation interaction phenomena conditioning the detection and spectrum formation processes is given. The radiation detector is considered the critical element of the measurement system, as it strongly conditions the detection process. The main detector types are therefore examined, with special emphasis on semiconductor detectors, as these are the most widely used nowadays. Finally, the fundamental electronic subsystems for conditioning and pre-treating the signal delivered by the detector, traditionally referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of main interest for the present work is the multichannel analyzer, which carries out the qualitative treatment of the signal and builds a histogram of radiation intensity over the range of energies to which the detector is sensitive. This N-dimensional vector is what is generally known as a radiation spectrum. The different radionuclides contributing to a non-pure radiation source leave their fingerprint in this spectrum.
The second chapter provides an exhaustive review of the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to determine their relative activities. One of them is multiple linear regression, which is proposed as the approach best suited to the constraints and restrictions of the problem: the capacity to deal with low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms amenable to implementation on dedicated VLSI processors. The analysis problem is formally stated in the third chapter along these lines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on this kind of structure can provide the solution to the desired spectral decomposition problem. In the same context, a pair of complementary adaptive algorithms is proposed for the construction of the operator, with arithmetic characteristics especially appropriate for implementation on VLSI processors. The adaptive nature of the associative memory gives the operator great flexibility with respect to the progressive incorporation of new information.
The fourth chapter deals with an additional problem of a highly complex nature: the treatment of the spectral deformations introduced by instrumental drifts in the detector and in the pre-conditioning electronics. These deformations invalidate the linear regression model used to describe the measured spectrum. A model is therefore derived that includes the deformations as additional contributions to the composite spectrum, which entails a simple extension of the associative memory capable of tolerating drifts in the measured mixture and of carrying out a robust analysis of contributions. The extension method is based on the small-perturbation hypothesis. Laboratory practice shows, however, that instrumental drifts can occasionally cause severe distortions in the spectrum that cannot be handled by this model. The fifth chapter therefore addresses the problem of measurements affected by strong drifts from the viewpoint of nonlinear optimization theory. This reformulation leads to the introduction of a recursive algorithm inspired by the Gauss-Newton method, which allows the concept of a feedback linear memory to be introduced: an operator with a markedly improved capability to decompose mixtures with strong drift, without the excessive computational burden of classical nonlinear optimization algorithms. The work concludes with a discussion of the results obtained at the three main levels of study, presented in the third, fourth, and fifth chapters, together with the main conclusions derived from the study and an outline of possible lines of continuation of the present work.
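The least-squares core of the proposed analysis can be stated in a few lines: stack unit-activity library spectra as the columns of a matrix S, and recover the activity vector of a measured spectrum y as a = S⁺y, where S⁺ is the Moore-Penrose pseudo-inverse. A minimal sketch with synthetic spectra follows (peak positions, widths, and nuclide choices are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def peak(n, center, width, area):
        """Gaussian photopeak on an n-channel axis (a crude stand-in for a
        detector response; real library spectra come from calibration)."""
        ch = np.arange(n)
        g = np.exp(-0.5 * ((ch - center) / width) ** 2)
        return area * g / g.sum()

    n = 256
    # Synthetic signatures for two hypothetical radionuclides (~10 keV/channel).
    s_co60 = peak(n, 117, 6, 1.0) + peak(n, 133, 6, 0.9)
    s_cs137 = peak(n, 66, 5, 1.0)
    S = np.column_stack([s_co60, s_cs137])      # library matrix (channels x nuclides)

    true_a = np.array([3.0, 5.0])               # activities to recover
    y = rng.poisson(S @ true_a * 200) / 200.0   # composite spectrum with counting noise

    a_hat = np.linalg.pinv(S) @ y               # least-squares decomposition
    print(np.round(a_hat, 2))                   # ~ [3.0, 5.0]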
Abstract:
The wealth of kinetic and structural information makes inorganic pyrophosphatases (PPases) a good model system to study the details of enzymatic phosphoryl transfer. The enzyme accelerates metal-complexed phosphoryl transfer 10¹⁰-fold: but how? Our structures of the yeast PPase product complex at 1.15 Å and fluoride-inhibited complex at 1.9 Å visualize the active site in three different states: substrate-bound, immediate product bound, and relaxed product bound. These span the steps around chemical catalysis and provide strong evidence that a water molecule (Onu) directly attacks PPi with a pKa vastly lowered by coordination to two metal ions and D117. They also suggest that a low-barrier hydrogen bond (LBHB) forms between D117 and Onu, in part because of steric crowding by W100 and N116. Direct visualization of the double bonds on the phosphates appears possible. The flexible side chains at the top of the active site absorb the motion involved in the reaction, which may help accelerate catalysis. Relaxation of the product allows a new nucleophile to be generated and creates symmetry in the elementary catalytic steps on the enzyme. We are thus moving closer to understanding phosphoryl transfer in PPases at the quantum mechanical level. Ultra-high resolution structures can thus tease out overlapping complexes and so are as relevant to the discussion of enzyme mechanism as structures produced by time-resolved crystallography.
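For scale, the quoted 10¹⁰-fold acceleration corresponds, via transition-state theory, to a reduction in activation free energy of

    \Delta\Delta G^{\ddagger} = RT \ln 10^{10}
    = (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})(10 \ln 10)
    \approx 57\ \mathrm{kJ\,mol^{-1}} \approx 13.6\ \mathrm{kcal\,mol^{-1}}

at room temperature.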
Abstract:
The pivotal role of G proteins in sensory, hormonal, inflammatory, and proliferative responses has provoked intense interest in understanding how they interact with their receptors and effectors. Nonetheless, the locations of the receptor and effector binding sites remain poorly characterized, although nearly complete structures of the αβγ heterotrimeric complex are available. Here we apply evolutionary trace (ET) analysis [Lichtarge, O., Bourne, H. R. & Cohen, F. E. (1996) J. Mol. Biol. 257, 342-358] to propose plausible locations for these sites. On each subunit, ET identifies evolutionarily selected surfaces composed of residues that do not vary within functional subgroups and that form spatial clusters. Four clusters correctly identify subunit interfaces, and additional clusters on Gα point to likely receptor or effector binding sites. Our results implicate the conformationally variable region of Gα in an effector binding role. Furthermore, the range of predicted interactions between the receptor and Gαβγ is sufficiently limited that we can build a low-resolution and testable model of the receptor-G protein complex.
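A toy rendering of the ET scoring just described (the alignment and partitions are invented for illustration; see Lichtarge et al. 1996 for the actual method): a column's trace rank is the coarsest partition level at which it is invariant within every functional subgroup.

    def trace_rank(alignment, partitions):
        """For each alignment column, return the first partition level at which
        the column is class-specific (invariant within every subgroup); columns
        still varying within subgroups at the finest level get None."""
        n_cols = len(alignment[0])
        ranks = [None] * n_cols
        for level, groups in enumerate(partitions, start=1):
            for col in range(n_cols):
                if ranks[col] is not None:
                    continue
                if all(len({alignment[s][col] for s in g}) == 1 for g in groups):
                    ranks[col] = level
        return ranks

    # Toy alignment of 4 sequences; partitions from coarse (1 group) to fine (2 groups).
    aln = ["MKLA", "MKLA", "MRLA", "MRLG"]
    parts = [
        [[0, 1, 2, 3]],       # level 1: whole family
        [[0, 1], [2, 3]],     # level 2: two functional subgroups
    ]
    print(trace_rank(aln, parts))   # [1, 2, 1, None]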
Abstract:
The x-ray crystal structures of the sulfide oxidase antibody 28B4 and of antibody 28B4 complexed with hapten have been solved at 2.2 Å and 1.9 Å resolution, respectively. To our knowledge, these structures are the highest resolution catalytic antibody structures to date and provide insight into the molecular mechanism of this antibody-catalyzed monooxygenation reaction. Specifically, the data suggest that entropic restriction plays a fundamental role in catalysis through the precise alignment of the thioether substrate and oxidant. The antibody active site also stabilizes developing charge on both sulfur and periodate in the transition state via cation-π and electrostatic interactions, respectively. In addition to demonstrating that the active site of antibody 28B4 does indeed reflect the mechanistic information programmed in the aminophosphonic acid hapten, these high-resolution structures provide a basis for enhancing turnover rates through mutagenesis and improved hapten design.
Abstract:
Aims. We investigated in detail the system WDS 19312+3607, whose primary is an active M4.5Ve star previously inferred to be young (τ ~ 300–500 Ma) based on its high X-ray luminosity. Methods. We collected intermediate- and low-resolution optical spectra taken with 2 m-class telescopes, photometric data from the B to 8 μm bands, and data for eleven astrometric epochs with a time baseline of over 56 years for the two components in the system, G 125–15 and G 125–14. Results. We derived M4.5V spectral types for both stars, confirmed their common proper motion, estimated their heliocentric distance and projected physical separation, determined their Galactocentric space velocities, and deduced a most probable age older than 600 Ma. We discovered that the primary, G 125–15, is an inflated, double-lined spectroscopic binary with a short photometric variability period of 1.6 d, which we associated with orbital synchronisation. The observed X-ray and Hα emissions, photometric variability, and abnormal radius and effective temperature of G 125–15 AB are indicative of strong magnetic activity, possibly because of the rapid rotation. In addition, the estimated projected physical separation between G 125–15 AB and G 125–14 of about 1200 AU makes WDS 19312+3607 one of the widest systems with intermediate M-type primaries. Conclusions. G 125–15 AB is a nearby (d ≈ 26 pc), bright (J ≈ 9.6 mag), active spectroscopic binary with a single proper-motion companion of the same spectral type at a wide separation. The components are thus ideal targets for dedicated follow-up to investigate wide and close multiplicity, or stellar expansion and surface cooling due to lowered convective efficiency.