934 results for Low resolution brain tomography (LORETA)
Abstract:
This paper discusses the target localization problem in wireless visual sensor networks. Specifically, each node, equipped with a low-resolution camera, extracts multiple feature points to represent the target at the sensor node level. A statistical method is presented for merging the position information of different sensor nodes at the base station by selecting the most correlated feature point pair. This method reduces the influence of target extraction accuracy on target localization accuracy in the universal coordinate system. Simulations show that, compared with related approaches, the proposed method achieves higher target localization accuracy and a better trade-off between camera node usage and localization accuracy.
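The abstract does not give the fusion statistic itself, but the geometric core of camera-network localization is triangulating the target from the bearings of two nodes. A minimal sketch, assuming each node reports its known position and a world-frame bearing angle to a matched feature point (all names and data hypothetical):

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (node position, world-frame angle)
    to estimate the target position in the universal coordinate system."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])  # unit direction of ray 1
    d2 = np.array([np.cos(theta2), np.sin(theta2)])  # unit direction of ray 2
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Two nodes at known positions observe the same matched feature point.
print(triangulate((0, 0), np.pi / 4, (10, 0), 3 * np.pi / 4))  # -> [5. 5.]
```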
Abstract:
We introduce an innovative, semi-automatic method to transform low resolution facial meshes into high definition ones, based on tailoring a generic, neutral human head model, designed by an artist, to fit the facial features of a specific person. To determine these facial features we select a set of "control points" (corners of eyes, lips, etc.) in at least two photographs of the subject's face. The neutral head mesh is then automatically reshaped according to the relation between the control points in the original subject's mesh through a set of transformation pyramids. The last step consists of merging both meshes and filling the gaps that appear in the process. This algorithm avoids the use of expensive and complicated technologies for obtaining depth maps, which would also need to be meshed afterwards.
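The transformation pyramids themselves are not described in the abstract; as a rough illustration of the underlying idea of reshaping a generic mesh from control-point correspondences, here is a least-squares affine warp in NumPy (data hypothetical, and only a crude stand-in for the paper's method):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 3D affine transform mapping generic-model control
    points onto the subject's control points (needs >= 4 non-coplanar points)."""
    src = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(src, dst_pts, rcond=None)       # (4, 3) matrix
    return M

def warp(vertices, M):
    """Apply the fitted transform to every vertex of the neutral head mesh."""
    return np.hstack([vertices, np.ones((len(vertices), 1))]) @ M

# Hypothetical control points: corners of eyes and lips on model and subject.
model_cp = np.array([[0., 1., 0.], [1., 1., 0.], [0.5, 0., 0.], [0.5, 0.5, 1.]])
subject_cp = model_cp * 1.1 + 0.05            # subject's face, from photographs
M = fit_affine(model_cp, subject_cp)
reshaped = warp(model_cp, M)                  # in practice: all mesh vertices
```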
Abstract:
Solar radiation estimates with clear-sky models require aerosol data. The low spatial resolution of current aerosol datasets, and their considerable deviation from measured data, pose a problem for solar resource estimation. This paper proposes a new downscaling methodology that combines support vector machines for regression (SVR) and kriging with external drift, using data from the MACC reanalysis datasets and temperature and rainfall measurements from 213 meteorological stations in continental Spain. The SVR technique proved efficient for aerosol variable modeling. The Linke turbidity factor (TL) and the aerosol optical depth at 550 nm (AOD 550) estimated with SVR produced significantly lower errors at AERONET positions than the MACC reanalysis estimates: TL was estimated with a relative mean absolute error (rMAE) of 10.2% (compared with AERONET), against the MACC rMAE of 18.5%, and AOD 550 behaved similarly, with an rMAE of 8.6% against the MACC rMAE of 65.6%. Kriging with MACC data as an external drift proved useful for generating high resolution maps (0.05° × 0.05°) of both aerosol variables, and we created such maps for continental Spain for the year 2008. The proposed methodology is thus a valuable tool for creating high resolution maps of aerosol variables (TL and AOD 550); it shows meaningful improvements over the available estimated databases and therefore leads to more accurate solar resource estimates. It could also be applied to the prediction of other atmospheric variables whose datasets have low resolution.
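A minimal sketch of the SVR stage, using scikit-learn with placeholder station features; the exact predictors, kernel settings, and rMAE definition used in the paper are assumptions here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Placeholder station data: [mean temperature, rainfall]; target: AOD 550.
X = rng.normal(size=(213, 2))
y = 0.15 + 0.03 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 0.01, 213)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
# One common definition of relative MAE: MAE over the mean absolute observation.
rmae = np.mean(np.abs(pred - y_te)) / np.mean(np.abs(y_te))
print(f"rMAE: {rmae:.1%}")
```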
Abstract:
This thesis explores the feasibility of automatic gamma-radiation spectral decomposition using linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms were designed with a view to their possible implementation on low-complexity, specific-purpose processors. The first chapter reviews the techniques for detecting and measuring gamma radiation that underlie the spectra treated in this work. The concepts associated with the nature of hard electromagnetic radiation are re-examined, together with the physical processes and electronic treatment involved in its detection, highlighting the intrinsically statistical nature of the spectrum build-up process, understood as a classification of the number of individual photon detections as a function of the (supposedly continuous) energy associated with each one. To this end, a brief description is given of the main phenomena of radiation-matter interaction that condition the detection and spectrum-formation processes. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process; the main detector types are therefore examined, with special emphasis on semiconductor detectors, as these are the most widely used today. Finally, the fundamental electronic subsystems for conditioning and pre-processing the detector signal, traditionally referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of greatest interest for this work is the multichannel analyzer, which carries out the qualitative treatment of the signal and builds a histogram of radiation intensity over the energy range to which the detector is sensitive. This N-dimensional vector is what is generally known as the radiation spectrum, and each radionuclide present in a non-pure radiation source leaves its fingerprint in it.
The second chapter offers an exhaustive review of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and determining their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints of the problem: the ability to handle low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms implementable on highly integrated dedicated (VLSI) processors. The analysis problem is formally stated in the third chapter along these lines, and it is shown to admit a solution within the theory of linear associative memories: an operator based on this type of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for constructing the operator; their arithmetic characteristics make them especially suitable for implementation on VLSI processors, and their adaptivity gives the associative memory great flexibility for incorporating new information progressively.
The fourth chapter addresses an additional, highly complex problem: the spectral deformations introduced by instrumental drifts in the detector and in the preconditioning electronics. These deformations invalidate the linear regression model used to describe the measured spectrum. A model is therefore derived that includes the drifts as additional contributions to the composite spectrum, which entails a simple extension of the associative memory capable of tolerating drifts in the measured mixture and of performing a robust analysis of contributions; the extension method rests on a small-perturbation hypothesis. Laboratory practice shows, however, that instrumental drifts can sometimes produce severe spectral distortions beyond the reach of that model. The fifth chapter therefore reformulates the problem of measurements affected by strong drifts from the standpoint of non-linear optimization theory. This reformulation leads to a recursive algorithm, inspired by the Gauss-Newton method, that introduces the concept of a feedback linear memory: an operator with a markedly improved capability to decompose mixtures affected by strong drifts, without the excessive computational load of classical non-linear optimization algorithms. The work closes with a discussion of the results obtained at the three main levels of study, presented in chapters three, four, and five, with the consolidation of the main conclusions of the study, and with an outline of possible lines of future work.
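The core operation the thesis builds on, decomposing a measured spectrum into known single-nuclide reference spectra through a pseudo-inverse, can be sketched in a few lines of NumPy (synthetic spectra; the thesis constructs the operator with adaptive, VLSI-oriented algorithms rather than a library call):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 256
chan = np.arange(n_channels)

# Library matrix A: one column per radionuclide reference spectrum
# (synthetic Gaussian photopeaks stand in for measured references).
A = np.column_stack(
    [np.exp(-0.5 * ((chan - c) / 8.0) ** 2) for c in (60, 120, 200)]
)

true_activities = np.array([5.0, 2.0, 1.0])
# Zero-mean noise stands in for counting statistics.
spectrum = A @ true_activities + rng.normal(0.0, 0.05, n_channels)

# Linear associative memory acting as a pseudo-inverse operator: x = pinv(A) y.
estimated = np.linalg.pinv(A) @ spectrum
print(np.round(estimated, 2))  # approximately [5.0, 2.0, 1.0]
```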
Abstract:
The depletion of fossil-fuel reserves, their absence, or simply the uncertainty about their size, added to price volatility and growing instability in the supply chain, create strong incentives for developing alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse-gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier will depend, a priori, on controlling the risks associated with its handling and storage; among these, an undeniable explosion risk appears as the main drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. The introduction gives a general description of explosion processes and concludes that the resolution constraints make it necessary to model both turbulence and combustion. A critical review of the available methodologies for turbulence and for combustion follows, pointing out the strengths, deficiencies, and suitability of each. This review concludes that, given the existing limitations, the only viable strategy for combustion modeling is the use of an expression describing the turbulent burning velocity as a function of various parameters; such models, known as turbulent flame speed models, close a balance equation for the combustion progress variable. It is also concluded that the most adequate treatment of turbulence is to use different methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem.
Based on these findings, a combustion model is developed within the turbulent flame speed framework, able to overcome the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the flame-brush thickness under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation that accounts for the simultaneous influence of equivalence ratio, temperature, pressure, and dilution with steam; the resulting formulation is valid over a wider range of temperature, pressure, and steam dilution than any previously available. The turbulent burning velocity, in turn, can be computed from correlations expressing it as a function of various parameters; to select the most suitable one, the results of several expressions were compared against experiments, and the correlation due to Schmidt proved the most adequate for the conditions studied.
The role of flame instabilities in the propagation of combustion fronts is then analyzed. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents at nuclear power plants. A model is therefore developed to estimate the effect of instabilities, and specifically of the acoustic-parametric instability, on the flame propagation speed. The modeling includes the mathematical derivation of the heuristic formulation of Bauwens et al. for the burning-velocity enhancement due to flame instabilities, together with an analysis of flame stability with respect to a cyclic velocity perturbation; the results are combined to complete the model of the acoustic-parametric instability.
The model is then applied to several problems of importance for industrial safety, and the results are analyzed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. Finally, an in-depth analysis of the Fukushima-Daiichi catastrophe is carried out. Its aim is to determine the amount of hydrogen that exploded in reactor one, in contrast with other studies on the subject, which have focused on the amount of hydrogen generated during the accident. The investigation determined that the most probable amount of hydrogen consumed during the explosion was 130 kg; that the combustion of such a relatively small quantity of hydrogen can cause such significant damage underlines the importance of this kind of research. The industrial branches for which the model will be of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with particular impact on the transport and nuclear sectors, in both fission and fusion technologies.
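For reference, turbulent flame speed models of the kind described close a transport equation for the progress variable with a source term proportional to the turbulent burning velocity; in the usual Zimont-type notation (assumed here, not quoted from the thesis):

```latex
\frac{\partial(\rho\tilde{c})}{\partial t}
  + \nabla\!\cdot\!\left(\rho\,\tilde{\mathbf{u}}\,\tilde{c}\right)
  = \nabla\!\cdot\!\left(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla\tilde{c}\right)
  + \rho_u\, S_T\, \lvert\nabla\tilde{c}\rvert
```

where $\tilde{c}$ is the Favre-averaged progress variable, $\rho_u$ the unburnt-gas density, and $S_T$ the turbulent burning velocity supplied by a correlation of the kind compared in the thesis.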
Abstract:
This paper discusses the target localization problem in wireless visual sensor networks. Additive noise and measurement errors affect the accuracy of target localization when the visual nodes are equipped with low-resolution cameras. With the goal of improving localization accuracy without prior knowledge of the target, each node extracts multiple feature points from images to represent the target at the sensor node level. A statistical method is presented to match the most correlated feature point pair when merging the position information of different sensor nodes at the base station. In addition, for the case where more than one target exists in the field of interest, a scheme for locating multiple targets is provided. Simulation results show that the proposed method performs well in improving the accuracy of locating single or multiple targets, and that it achieves a better trade-off between camera node usage and localization accuracy.
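The merging statistic is not spelled out in the abstract; a common baseline for combining noisy per-node position estimates at a base station is precision-weighted (inverse-covariance) fusion, sketched below with hypothetical data:

```python
import numpy as np

def fuse(estimates, covariances):
    """Precision-weighted fusion of independent 2D position estimates."""
    precisions = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(precisions))            # fused covariance
    x_fused = P_fused @ sum(Pi @ x for Pi, x in zip(precisions, estimates))
    return x_fused, P_fused

# Two nodes report the same target with different measurement noise.
x1, P1 = np.array([4.8, 5.1]), np.diag([0.20, 0.05])   # node 1: certain in y
x2, P2 = np.array([5.2, 4.9]), np.diag([0.05, 0.20])   # node 2: certain in x
x, P = fuse([x1, x2], [P1, P2])
print(np.round(x, 2))  # pulled toward each node's more certain axis
```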
Abstract:
Recombination of genes is essential to the evolution of genetic diversity, the segregation of chromosomes during cell division, and certain DNA repair processes. The Holliday junction, a four-arm, four-strand branched DNA crossover structure, is formed as a transient intermediate during genetic recombination and repair processes in the cell. The recognition and subsequent resolution of Holliday junctions into parental or recombined products appear to be critically dependent on their three-dimensional structure. Complementary NMR and time-resolved fluorescence resonance energy transfer experiments on immobilized four-arm DNA junctions reported here indicate that the Holliday junction cannot be viewed as a static structure but rather as an equilibrium mixture of two conformational isomers. Furthermore, the distribution between the two possible crossover isomers was found to depend on the sequence in a manner that was not anticipated on the basis of previous low-resolution experiments.
Abstract:
The cell cycle-dependent, ordered assembly of protein prereplicative complexes suggests that eukaryotic replication origins determine when genomic replication initiates. By comparison, the factors that determine where replication initiates relative to the sites of prereplicative complex formation are not known. In the human globin gene locus, previous work showed that replication initiates at a single site 5′ to the β-globin gene when protein synthesis is inhibited by emetine. The present study has examined the pattern of initiation around the genetically defined β-globin replicator in logarithmically growing HeLa cells, using two PCR-based nascent strand assays. In contrast to the pattern of initiation detected in emetine-treated cells, analysis of the short nascent strands at five positions spanning a 40 kb globin gene region shows that replication initiates at more than one site in non-drug-treated cells. Quantitation of nascent DNA chains confirmed that replication begins at several locations in this domain, including one near the initiation region (IR) identified in emetine-treated cells. However, the abundance of short nascent strands at another initiation site ∼20 kb upstream is ∼4-fold greater than that at the IR. The latter site abuts an early S phase replicating fragment previously defined at low resolution in logarithmically dividing cells.
Abstract:
The pivotal role of G proteins in sensory, hormonal, inflammatory, and proliferative responses has provoked intense interest in understanding how they interact with their receptors and effectors. Nonetheless, the locations of the receptor and effector binding sites remain poorly characterized, although nearly complete structures of the alphabetagamma heterotrimeric complex are available. Here we apply evolutionary trace (ET) analysis [Lichtarge, O., Bourne, H. R. & Cohen, F. E. (1996) J. Mol. Biol. 257, 342-358] to propose plausible locations for these sites. On each subunit, ET identifies evolutionarily selected surfaces composed of residues that do not vary within functional subgroups and that form spatial clusters. Four clusters correctly identify subunit interfaces, and additional clusters on Galpha point to likely receptor or effector binding sites. Our results implicate the conformationally variable region of Galpha in an effector binding role. Furthermore, the range of predicted interactions between the receptor and Galphabetagamma is sufficiently limited that we can build a low resolution and testable model of the receptor-G protein complex.
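The core of the ET procedure as summarized here is finding alignment positions that are invariant within each functional subgroup; a toy sketch with hypothetical sequences (omitting ET's ranking of positions by the tree level at which they become invariant):

```python
# Toy evolutionary trace: flag alignment columns that are invariant within
# every functional subgroup; these class-specific residues are the ones ET
# maps onto the structure to look for spatial clusters.
subgroups = {
    "Gi-like": ["MKTAY", "MKTAY", "MKSAY"],
    "Gs-like": ["MRTAF", "MRTAF", "MRTAF"],
}

def trace_positions(subgroups):
    n = len(next(iter(subgroups.values()))[0])
    traced = []
    for i in range(n):
        # Column i is traced if each subgroup shows a single residue there.
        if all(len({seq[i] for seq in seqs}) == 1 for seqs in subgroups.values()):
            traced.append(i)
    return traced

print(trace_positions(subgroups))  # -> [0, 1, 3, 4]
```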
Abstract:
Transmission of human immunodeficiency virus 1 (HIV-1) from an infected woman to her offspring during gestation and delivery was found to be influenced by the infant's major histocompatibility complex class II DRB1 alleles. Forty-six HIV-infected infants and 63 seroreverting infants, born with passively acquired anti-HIV antibodies but not becoming detectably infected, were typed by an automated nucleotide-sequence-based technique that uses low-resolution PCR to select either the simpler Taq or the more demanding T7 sequencing chemistry. One or more DR13 alleles, including DRB1*1301, 1302, and 1303, were found in 31.7% of seroreverting infants and 15.2% of those becoming HIV-infected [OR (odds ratio) = 2.6 (95% confidence interval 1.0-6.8); P = 0.048]. This association was influenced by ethnicity, being seen more strongly among the 80 Black and Hispanic children [OR = 4.3 (1.2-16.4); P = 0.023], with the most pronounced effect among Black infants, where 7 of 24 seroreverters inherited these alleles against none of 12 HIV-infected infants (Haldane OR = 12.3; P = 0.037). The previously recognized association of DR13 alleles with some situations of long-term nonprogression of HIV suggests that similar mechanisms may regulate both the occurrence of infection and disease progression after infection. Upon examining for residual associations, only the DR2 allele DRB1*1501 was associated with seroreversion in Caucasoid infants (OR = 24; P = 0.004). Among Caucasoids the DRB1*03011 allele was positively associated with the occurrence of HIV infection (P = 0.03).
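When a cell of the 2×2 allele-by-outcome table is zero, as for the HIV-infected Black infants here, the ordinary odds ratio ad/bc is undefined; the Haldane odds ratio cited applies the standard continuity correction of adding one half to each cell:

```latex
\mathrm{OR}_{\text{Haldane}}
  = \frac{\left(a+\tfrac{1}{2}\right)\left(d+\tfrac{1}{2}\right)}
         {\left(b+\tfrac{1}{2}\right)\left(c+\tfrac{1}{2}\right)}
```

with $a, b, c, d$ the four cell counts of the 2×2 table.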
Abstract:
Using the yeast two-hybrid system we have identified a human protein, GAIP (G Alpha Interacting Protein), that specifically interacts with the heterotrimeric GTP-binding protein G alpha i3. Interaction was verified by specific binding of in vitro-translated G alpha i3 with a GAIP-glutathione S-transferase fusion protein. GAIP is a small protein (217 amino acids, 24 kDa) that contains two potential phosphorylation sites for protein kinase C and seven for casein kinase 2. GAIP shows high homology to two previously identified human proteins, GOS8 and 1R20; two Caenorhabditis elegans proteins, CO5B5.7 and C29H12.3; and the FLBA gene product in Aspergillus nidulans, all of unknown function. Significant homology was also found to the SST2 gene product in Saccharomyces cerevisiae that is known to interact with a yeast G alpha subunit (Gpa1). A highly conserved core domain of 125 amino acids characterizes this family of proteins. Analysis of deletion mutants demonstrated that the core domain is the site of GAIP's interaction with G alpha i3. GAIP is likely to be an early inducible phosphoprotein, as its cDNA contains the TTTTGT sequence characteristic of early response genes in its 3'-untranslated region. By Northern analysis GAIP's 1.6-kb mRNA is most abundant in lung, heart, placenta, and liver and is very low in brain, skeletal muscle, pancreas, and kidney. GAIP appears to interact exclusively with G alpha i3, as it did not interact with G alpha i2 and G alpha q. The fact that GAIP and Sst2 interact with G alpha subunits and share a common domain suggests that other members of the GAIP family also interact with G alpha subunits through the 125-amino-acid core domain.
Abstract:
Analysis of vibrations and displacements is a hot topic in structural engineering. Although there is a wide variety of methods for vibration analysis, direct measurement of displacements in the mid and high frequency range is not well solved, and accurate devices tend to be very expensive. Low-cost systems can be achieved by applying adequate image processing algorithms. In this paper, we propose the use of a commercial pocket digital camera, which is able to register more than 420 frames per second (fps) at low resolution, for accurately measuring small vibrations and displacements. The method is based on tracking elliptical targets with sub-pixel accuracy. Our proposal is demonstrated at a 10 m distance with a spatial resolution of 0.15 mm. A practical application over a simple structure is given, and the main parameters of the attenuated movement of a steel column after an impulsive impact are determined with a spatial accuracy of 4 µm.
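The paper's tracking algorithm is only summarized above; a minimal OpenCV sketch of the idea, fitting an ellipse to the largest high-contrast target in each frame and keeping its sub-pixel center as the displacement signal (file name hypothetical):

```python
import cv2
import numpy as np

def track_target(video_path):
    """Return the sub-pixel center of an elliptical target in each frame."""
    cap = cv2.VideoCapture(video_path)
    centers = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        target = max(contours, key=cv2.contourArea)   # largest blob = target
        # fitEllipse needs >= 5 contour points; returns a sub-pixel center.
        (cx, cy), _, _ = cv2.fitEllipse(target)
        centers.append((cx, cy))
    cap.release()
    return np.array(centers)  # displacement signal, one sample per frame

# centers = track_target("column_impact.avi")  # hypothetical recording
```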
Abstract:
In this paper, we demonstrate the use of a video camera for measuring the frequency of small-amplitude vibration movements. The method is based on image acquisition and multilevel thresholding, and it only requires a video camera with a sufficiently high acquisition rate, with no need for targets or auxiliary laser beams. Our proposal is accurate and robust. We demonstrate the technique with a pocket camera recording low-resolution videos with AVI-JPEG compression, measuring different objects that vibrate parallel or perpendicular to the optical sensor. Despite the low resolution and the noise, we are able to measure the main vibration modes of a tuning fork, a loudspeaker and a bridge. Results compare successfully with design parameters and with measurements from alternative devices.
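The frequency-measurement end of such a pipeline reduces to locating the FFT peak of the per-frame signal; a self-contained sketch with a synthetic vibration mode below the Nyquist limit (fps/2):

```python
import numpy as np

def dominant_frequency(signal, fps):
    """Dominant vibration frequency (Hz) of a per-frame scalar signal."""
    signal = signal - np.mean(signal)          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic check: a 97 Hz mode sampled at 420 fps (below Nyquist, fps / 2).
fps, f0 = 420.0, 97.0
t = np.arange(0, 2.0, 1.0 / fps)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(t.size)
print(dominant_frequency(x, fps))  # ~97 Hz
```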
Abstract:
Measurement of joint kinematics can provide knowledge to help improve joint prosthesis design, as well as identify joint motion patterns that may lead to joint degeneration or injury. More investigation is needed into how the hip translates in live human subjects during high-amplitude motions. This work presents the design of a non-invasive method that registers images from conventional Magnetic Resonance Imaging (MRI) and open MRI to calculate three-dimensional hip joint kinematics. The method was tested on a single healthy subject in three different poses. MRI protocols were developed for the conventional gantry (high-resolution MRI) and the open gantry (low-resolution MRI); the scan time for the low-resolution protocol was just under 6 minutes. High-resolution meshes and low-resolution contours were derived from segmentation of the high-resolution and low-resolution images, respectively. The low-resolution contours described the poses as scanned, whereas the meshes described the bones' geometries. The meshes and contours were registered to each other, and joint kinematics were calculated. The segmentation and registration were performed for both cortical and sub-cortical bone surfaces. A repeatability study was performed by comparing the kinematic results derived from three users' segmentations of the sub-cortical bone surfaces from a low-resolution scan. The root mean squared error of all registrations was below 1.92 mm. The maximum range between segmenters in translation magnitude was 0.95 mm, and the maximum deviation from the average of all orientations was 1.27°. This work demonstrates that this method for non-invasive measurement of hip kinematics is promising for measuring high-range-of-motion hip movements in vivo.
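The rigid registration at the heart of such kinematics calculations is classically solved in closed form with the SVD-based Kabsch algorithm; a sketch assuming point correspondences are already established (which in this work would come from the contour-to-mesh registration):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~= P @ R.T + t
    (Kabsch algorithm), e.g. bone surface points in two scanned poses."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

# Synthetic check: recover a known 3D rotation and translation.
rng = np.random.default_rng(2)
P = rng.normal(size=(100, 3))
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 1. -2.  0.5]
```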