925 results for Sample-sample two dimensional correlation spectroscopy (SS 2D)
Abstract:
Recently, three-dimensional (3D) video has decisively burst onto the entertainment industry scene, and has arrived in households even before the standardization process has been completed. 3D television (3DTV) adoption and deployment can be seen as a major leap in television history, similar to previous transitions from black and white (B&W) to color, from analog to digital television (TV), and from standard definition to high definition. In this paper, we analyze current 3D video technology trends in order to define a taxonomy of the availability and possible introduction of 3D-based services. We also propose an audiovisual network services architecture which provides a smooth transition from two-dimensional (2D) to 3DTV in an Internet Protocol (IP)-based scenario. Based on subjective assessment tests, we also analyze those factors which will influence the quality of experience in those 3D video services, focusing on effects of both coding and transmission errors. In addition, examples of the application of the architecture and results of assessment tests are provided.
Abstract:
Apples can be considered a complex system formed by several structures at different organization levels: macroscale (>100 μm) and microscale (<100 μm). This work implements 2D T1/T2 global and localized relaxometry sequences on whole apples in order to perform an intensive non-destructive and non-invasive microstructure study. The 2D T1/T2 cross-correlation spectroscopy allows the extraction of quantitative information about the water compartmentation in different subcellular organelles. A clear difference is found: sound apples show neat peaks for water in different subcellular compartments, such as vacuolar, cytoplasmatic and extracellular water, while in watercore-affected tissues such compartments appear merged. Localized relaxometry allows for the predefinition of slices in order to understand the microstructure of a particular region of the fruit, providing information that cannot be derived from global 2D T1/T2 relaxometry.
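The 2D T1/T2 experiment correlates longitudinal recovery along one time axis with transverse decay along the other. The compartment maps themselves come from a regularized two-dimensional inverse Laplace transform, which is beyond a short sketch, but the underlying measurement kernel can be illustrated with a single-component fit. All timing values and grids below are assumed for illustration, not taken from the paper:

```python
import numpy as np

# Simulate a saturation-recovery / CPMG T1-T2 signal for a single
# water pool and recover (T1, T2) by a grid-search least-squares fit.
# (Real 2D T1/T2 maps invert a multi-component version of this kernel
# with regularization; this one-component fit is only a sketch.)

def signal(t1, t2, T1, T2):
    # Outer product: recovery along t1, transverse decay along t2
    return np.outer(1 - np.exp(-t1 / T1), np.exp(-t2 / T2))

t1 = np.linspace(0.01, 3.0, 20)   # s, recovery delays (assumed)
t2 = np.linspace(0.001, 0.5, 40)  # s, echo times (assumed)
data = signal(t1, t2, T1=1.2, T2=0.15)

# Exhaustive search over a (T1, T2) grid for the best-fitting pair
grid = [(a, b) for a in np.linspace(0.2, 3.0, 57)
               for b in np.linspace(0.02, 0.4, 39)]
best = min(grid, key=lambda p: np.sum((signal(t1, t2, *p) - data) ** 2))
```

With noiseless synthetic data the search recovers the pair (1.2 s, 0.15 s) that generated it; with real multi-compartment data each water pool contributes its own (T1, T2) peak.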
Abstract:
The study of atmospheric behaviour has been of particular importance in both the SESAR and NextGen programs, in which the current air traffic management (ATM) system is undergoing a profound transformation toward new paradigms, in Europe and the USA respectively, to guide and track aircraft more precisely along more efficient routes. Uncertainty is a fundamental characteristic of weather phenomena, and it is transferred to separation assurance, flight-path de-confliction and flight planning. In this respect, wind is a key factor in predicting the future position of an aircraft, so a deeper and more accurate knowledge of the wind field will reduce ATC uncertainties. The purpose of this thesis is to develop a new, operationally useful technique to provide adequate and direct real-time atmospheric wind fields based on on-board aircraft data, in order to improve aircraft trajectory prediction. To achieve this objective, the following work has been accomplished. The different aircraft systems that provide the variables needed to derive the wind velocity have been described and analysed, along with the capabilities that allow this information to be presented for air traffic management applications. The use of aircraft as wind sensors in a terminal area for real-time wind estimation has been explored. Computationally efficient methods have been developed to estimate the horizontal wind components from the aircraft velocities (VGS, VCAS/VTAS) and from pressure and temperature data.
These wind data were then used to estimate the real-time wind field with a data-processing system based on a minimum-variance method. Finally, the accuracy of the procedure was evaluated to establish whether the information is useful for air traffic control. The initial information comes from a sample of Flight Data Recorder (FDR) data from aircraft that landed at Madrid-Barajas Airport. Data from certain aircraft over a period of more than three months were exploited to derive the wind vector at each point of the airspace. A mathematical model based on different interpolation methods was used to obtain wind vectors in areas without available data. Three specific scenarios were used to validate two interpolation methods: a two-dimensional one that treats both horizontal components independently, and another based on a complex variable that links both components. These methods were tested in different scenarios with dissimilar results. The methodology was implemented in a prototype MATLAB tool that automatically analyses FDR data and determines the wind vector field that aircraft encounter when flying in the studied airspace. Finally, the required conditions and the accuracy of the results were derived for this model. The method could be fed by commercial aircraft using their currently available data sources and computational capabilities, with the data provided to ATM systems where the proposed method could run. The computed wind velocities, or the ground speed and true airspeed, could then be broadcast, for example, via the Aircraft Communications Addressing and Reporting System (ACARS), ADS-B Out messages or Mode S. This new source would help update the wind information furnished in aeronautical meteorological products (PAM), airmen's meteorological information (AIRMET) and significant meteorological information (SIGMET).
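The estimation described above rests on the wind triangle (the wind vector is the ground-velocity vector minus the true-airspeed vector) followed by a minimum-variance combination of many aircraft observations. A minimal sketch under those two ideas; the function names, the inverse-variance weighting, and the example numbers are illustrative, not the thesis's actual implementation:

```python
import numpy as np

def wind_from_aircraft(gs, track_deg, tas, heading_deg):
    """Horizontal wind from the wind triangle: W = Vg - Va,
    where Vg is built from ground speed/track and Va from
    true airspeed/heading."""
    trk, hdg = np.radians(track_deg), np.radians(heading_deg)
    vg = np.array([gs * np.sin(trk), gs * np.cos(trk)])    # east, north
    va = np.array([tas * np.sin(hdg), tas * np.cos(hdg)])  # east, north
    return vg - va  # (east, north) wind components

def minimum_variance_estimate(obs, var):
    """Combine several wind observations for one grid cell as an
    inverse-variance weighted mean (a simple minimum-variance rule)."""
    w = 1.0 / np.asarray(var)
    return np.average(np.asarray(obs), axis=0, weights=w)

# Aircraft tracking 090 deg at 230 kt ground speed while holding
# heading 090 deg at 200 kt TAS: a 30 kt wind from the west.
u, v = wind_from_aircraft(230.0, 90.0, 200.0, 90.0)
```

Real FDR-based processing must also convert VCAS to VTAS with pressure and temperature before applying the triangle; that conversion is omitted here.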
Abstract:
The aim of this research was to determine the differences in several variables related to physical performance among athletes of different levels during the 60 m hurdles event. A total of 59 male hurdlers (the 31 participants in the World Indoor Championships and the 28 participants in the Spanish Indoor Championships, both held in Valencia in 2008) formed the study sample. The biomechanical analysis was performed with a two-dimensional photogrammetric system that enabled calculation, applying algorithms based on the DLT method (Abdel-Aziz and Karara, 1971), of the (x, y) coordinates of the successive foot supports over the entire competition surface. The events were filmed with six video cameras located on the stands, with a sampling frequency for data processing of 50 Hz. In the approach-run phase, the top-level athletes showed a shorter step length (p<0.05) and step time (p<0.001), due to a shorter flight time (p<0.05). In the hurdle-unit phase, the higher-level athletes had greater take-off distances (p<0.001), shorter landing distances (p<0.001), and shorter step times (p<0.01-0.001) and support times (p<0.01-0.001) in the four steps that make up each hurdle unit, as well as shorter flight times in the hurdle step (p<0.001) and the recovery step (p<0.001). Additionally, important differences were found in the distribution of the steps between the first and third hurdles and the remaining obstacles. In the run-in phase, a greater step length (p<0.001) and shorter step time (p<0.01) and contact time (p<0.01) were observed in the top-level athletes.
The results obtained in this study support the use of two-dimensional photogrammetry for the biomechanical analysis of the 60 m hurdles event in competition. Its application at the highest international level has made it possible to characterize the hurdlers over the entire race and to identify possible implications for the training process.
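The 2D DLT reconstruction mentioned above maps image coordinates onto competition-surface coordinates through eight parameters that can be solved by linear least squares from known control points. A minimal sketch of that plane-to-plane version of the DLT (function names illustrative, not the authors' code):

```python
import numpy as np

def dlt2d_calibrate(image_pts, plane_pts):
    """Solve the 8 parameters of the 2D DLT (a projective map between
    the image plane and the competition surface) from >= 4 known
    control points, by linear least squares."""
    A, b = [], []
    for (u, v), (x, y) in zip(image_pts, plane_pts):
        # x = (L1 u + L2 v + L3) / (L7 u + L8 v + 1), and likewise y
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v]); b.append(y)
    L, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                            rcond=None)
    return L

def dlt2d_map(L, u, v):
    """Map an image coordinate (u, v) to surface coordinates (x, y)."""
    d = L[6] * u + L[7] * v + 1.0
    return ((L[0] * u + L[1] * v + L[2]) / d,
            (L[3] * u + L[4] * v + L[5]) / d)
```

Each of the six cameras would be calibrated this way against markers of known position on the track, after which every digitized foot support can be transformed to track coordinates.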
Abstract:
The Muskingum-Cunge method, with more than 45 years of history, is still one of the most widely used procedures for calculating stream routing. Once calibrated, it gives precise results and is also much faster than methods that solve the full hydraulic equations. For this reason, this research analysed its accuracy by comparing it with the results of a two-dimensional hydraulic model; in parallel, its limitations were analysed and a practical application methodology was tested. With this motivation, more than 200 routing simulations were carried out in both prismatic and natural channels. The calculations were performed with HEC-HMS, using the Muskingum-Cunge 8-point cross-section method, and with the two-dimensional hydraulic tool InfoWorks ICM. HEC-HMS was chosen because of its widespread use, and InfoWorks ICM for its calculation speed, as it exploits CUDA (Compute Unified Device Architecture) technology. Initially, the two-dimensional hydraulic model was validated against the one-dimensional formulation in uniform and gradually varied flow, as well as against analytical formulae for unsteady flow, with very satisfactory results. A sensitivity analysis of the two-dimensional routing model to mesh size was also conducted, yielding charts with recommended 2D element sizes that quantify the error committed. Using dimensional analysis, a correlation between the results of the two methods was derived, assessing their accuracy and defining validity intervals for the best use of the Muskingum-Cunge method. Simultaneously, a methodology was developed to obtain the characteristic average 8-point cross-section for a routing calculation, based on a series of simplified two-dimensional simulations. This procedure is intended to facilitate the use and correct definition of hydrological models.
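The routing scheme compared above updates the outflow at each time step from three coefficients built from the storage constant K, the weighting factor X and the time step; in the Muskingum-Cunge variant, K and X are derived from channel and grid properties rather than calibrated. A minimal sketch of that update rule, with illustrative names:

```python
def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph with the Muskingum update
    O2 = c0*I2 + c1*I1 + c2*O1.
    K: reach travel time, X: weighting factor (0..0.5),
    dt: time step, in the same units as K."""
    d = K * (1 - X) + 0.5 * dt
    c0 = (0.5 * dt - K * X) / d
    c1 = (0.5 * dt + K * X) / d
    c2 = (K * (1 - X) - 0.5 * dt) / d  # note c0 + c1 + c2 == 1
    out = [inflow[0]]  # assume an initial steady state
    for i1, i2 in zip(inflow[:-1], inflow[1:]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
    return out
```

Because the coefficients sum to one, a steady inflow passes through unchanged, while a flood wave is attenuated and delayed, which is the behaviour the thesis benchmarks against the 2D model.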
Abstract:
This thesis contrasts and develops methodologies to improve the estimation of the design and extreme floods used in assessing the hydrological safety of dams. First, the work addresses the fitting of frequency laws for maximum flows and their extrapolation to high return periods. This has become a major issue, as the adoption of ever stricter hydrological safety standards for dams involves very high design return periods whose estimation entails great uncertainty. It is therefore important to incorporate all available techniques into the estimation of design flows in order to reduce this uncertainty. Selecting the statistical model (distribution function and fitting procedure) is also important, since its ability both to describe the sample and to predict high-return-period quantiles robustly must be guaranteed. Accordingly, studies were carried out on a national scale to determine the regionalization scheme that gives the best results for the hydrological characteristics of Spanish basins with respect to annual maximum flows, taking into account the number of available data. The methodology starts by identifying homogeneous regions, whose boundaries were determined from the physiographic and climatic characteristics of the basins and the variability of their statistics, subsequently verifying their homogeneity. A statistical model for annual maximum flows was then selected with the best behaviour in the different regions of peninsular Spain, both for describing the sample data and for extrapolating to the highest return periods. The selection was based, among other things, on the synthetic generation of data series by Monte Carlo simulation and the statistical analysis of the results obtained from fitting distribution functions to these series under different hypotheses.
Secondly, the work addresses the relationship between peak flow and volume and the definition of design hydrographs based on it, which can be highly important for dams with large reservoir volumes. However, commonly applied hydrological procedures do not take the statistical dependence between the two variables into account. A simple and robust procedure was developed to characterize this dependence, representing the joint distribution function of peak flow and volume in terms of the marginal distribution of the peak flow and the conditional distribution of the volume given the peak flow; the latter is determined by a log-normal distribution fitted with a regional procedure. Its practical application is proposed through a probabilistic calculation procedure based on the stochastic generation of a large number of hydrographs. Applying this procedure to dam hydrological safety requires a correct interpretation of the return period concept for bivariate hydrological variables, and an interpretation is proposed: the return period is understood as the inverse of the probability of exceeding a given reservoir level. When this return period is related to the hydrological variables, the design hydrograph of the dam ceases to be a single hydrograph and becomes a family of hydrographs that produce the same maximum reservoir level, represented by a curve in the peak flow-volume plane. This family of design hydrographs depends on the dam itself, the peak flow-volume curves varying with, for example, the reservoir volume or the spillway length. The proposed procedure is illustrated by its application to two case studies.
Finally, the work addresses the estimation of seasonal floods, which is essential when establishing reservoir operation and can also be fundamental for studying the hydrological safety of existing dams. However, the estimation of these floods is complex and not yet fully resolved, and the commonly used procedures can present certain problems. Estimation based on the statistical method of partial duration series, or peaks over threshold, can be a valid alternative that solves these problems when the floods in the different seasons are generated by the same type of event. A study was carried out to verify whether the hypothesis of statistical homogeneity of flood-flow data from different seasons of the year is adequate in Spain. The seasonal periods for which the study is most appropriate were also analysed, a question of great relevance to guarantee correct results, and a simple procedure was developed to determine the data-selection threshold so that the independence of the data is guaranteed, one of the main difficulties in the practical application of partial series. Moreover, the practical application of seasonal frequency laws requires a correct interpretation of the seasonal return period, and a criterion is proposed to determine seasonal return periods consistently with the annual return period and with an adequate distribution of probability among the seasons. Lastly, a procedure for calculating seasonal flows is presented and illustrated through its application to a case study.
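The peaks-over-threshold selection discussed above hinges on keeping only independent events. One common declustering rule (keep the maximum of each exceedance cluster, and merge events closer than a minimum separation) can be sketched as follows; this is an illustrative choice, not necessarily the exact procedure developed in the thesis:

```python
def peaks_over_threshold(flows, threshold, min_separation):
    """Select independent flood peaks above a threshold: within each
    run of exceedances keep the maximum, and merge peaks closer than
    `min_separation` time steps to enforce independence."""
    peaks = []
    i, n = 0, len(flows)
    while i < n:
        if flows[i] > threshold:
            j = i
            while j < n and flows[j] > threshold:
                j += 1                      # end of this exceedance run
            k = max(range(i, j), key=lambda t: flows[t])
            if peaks and k - peaks[-1] < min_separation:
                # Too close to the previous event: keep the larger peak
                if flows[k] > flows[peaks[-1]]:
                    peaks[-1] = k
            else:
                peaks.append(k)
            i = j
        else:
            i += 1
    return peaks  # indices of the retained independent peaks
```

Raising the threshold or the minimum separation reduces the retained sample, which is the trade-off the threshold-selection procedure in the thesis is designed to manage.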
Abstract:
A technique is described for displaying distinct tissue layers of large blood vessel walls as well as measuring their mechanical strain. The technique is based on deuterium double-quantum-filtered (DQF) spectroscopic imaging. The effectiveness of the double-quantum filtration in suppressing the signal of bulk water is demonstrated on a phantom consisting of rat tail tendon fibers. Only intrafibrillar water is displayed, excluding all other signals of water molecules that reorient isotropically. One- and two-dimensional spectroscopic imaging of bovine aorta and coronary arteries shows the characteristic DQF spectrum of each of the tissue layers. This property is used to obtain separate images of the outer layer, the tunica adventitia, or the intermediate layer, the tunica media, or both. To visualize the effect of elongation, the average residual quadrupole splitting <Δνq> is calculated for each pixel. Two-dimensional deuterium quadrupolar splitting images are obtained for a fully relaxed and a 55% elongated sample of bovine coronary artery. These images indicate that the strong effect of strain is associated with water molecules in the tunica adventitia, whereas the DQF NMR signal of water in the tunica media is apparently strain-insensitive. After appropriate calibration, these average quadrupolar splitting images can be interpreted as strain maps.
Abstract:
Fast transverse relaxation of 1H, 15N, and 13C by dipole-dipole coupling (DD) and chemical shift anisotropy (CSA) modulated by rotational molecular motions has a dominant impact on the size limit for biomacromolecular structures that can be studied by NMR spectroscopy in solution. Transverse relaxation-optimized spectroscopy (TROSY) is an approach for suppression of transverse relaxation in multidimensional NMR experiments, which is based on constructive use of interference between DD coupling and CSA. For example, a TROSY-type two-dimensional 1H,15N-correlation experiment with a uniformly 15N-labeled protein in a DNA complex of molecular mass 17 kDa at a 1H frequency of 750 MHz showed that 15N relaxation during 15N chemical shift evolution and 1HN relaxation during signal acquisition both are significantly reduced by mutual compensation of the DD and CSA interactions. The reduction of the linewidths when compared with a conventional two-dimensional 1H,15N-correlation experiment was 60% and 40%, respectively, and the residual linewidths were 5 Hz for 15N and 15 Hz for 1HN at 4°C. Because the ratio of the DD and CSA relaxation rates is nearly independent of the molecular size, a similar percentagewise reduction of the overall transverse relaxation rates is expected for larger proteins. For a 15N-labeled protein of 150 kDa at 750 MHz and 20°C one predicts residual linewidths of 10 Hz for 15N and 45 Hz for 1HN, and for the corresponding uniformly 15N,2H-labeled protein the residual linewidths are predicted to be smaller than 5 Hz and 15 Hz, respectively. The TROSY principle should benefit a variety of multidimensional solution NMR experiments, especially with future use of yet somewhat higher polarizing magnetic fields than are presently available, and thus largely eliminate one of the key factors that limit work with larger molecules.
Resumo:
Near-infrared Yb3+ vibronic sideband (VSB) spectroscopy was used to characterize specific lanthanide binding sites in bacteriorhodopsin (bR) and retinal-free bacteriorhodopsin (bO). The VSB spectra for deionized bO regenerated at ion-to-bO ratios of 1:1 and 2:1 are identical. Application of a two-dimensional anti-correlation technique suggests that only a single Yb3+ site is observed. The Yb3+ binding site in bO is observed to consist of PO2− groups and carboxylic acid groups, both of which are bound in a bidentate manner. An additional contribution, most likely arising from a phenolic group, is also observed. This implies that the ligands for the observed single binding site are the lipid head groups and amino acid residues. The vibronic sidebands of Yb3+ in deionized bR regenerated at an ion-to-bR ratio of 2:1 are essentially identical to those in bO. The other high-affinity binding site is thus either not evident or its fluorescence is quenched. A discussion is given of the difference in binding of Ca2+ (or Mg2+) and lanthanides in phospholipid membrane proteins.
Resumo:
Confocal fluorescence correlation spectroscopy, a time-averaging fluctuation analysis combining maximum sensitivity with high statistical confidence, has proved to be a very versatile and powerful tool for detection and temporal investigation of biomolecules at ultralow concentrations on surfaces, in solutions, and in living cells. To probe the interaction of different molecular species for a detailed understanding of biologically relevant mechanisms, cross-correlation studies on dual or multiple fluorophore assays with spectrally distinct excitation and emission are particularly promising. Despite the considerable improvement in detection specificity provided by fluorescence cross-correlation analysis, few applications have so far been reported, presumably because of the practical challenges of properly aligning and controlling the stability of the experimental setup. In this work, we demonstrate that two-photon excitation combined with dual-color fluorescence correlation spectroscopy can significantly simplify simultaneous investigation of multiple fluorescent species on a single-molecule scale. Two-photon excitation gives access to common fluorophores of largely distinct emission with the same excitation wavelength, because differences in selection rules and vibronic coupling can induce considerable shifts between the one-photon and two-photon excitation spectra. The concept of dual-color two-photon fluorescence cross-correlation analysis is introduced and experimentally demonstrated with an established assay probing the selective cleavage of dual-labeled DNA substrates by restriction endonuclease EcoRI.
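The key property exploited by cross-correlation analysis is that only dual-labeled species contribute to the cross-correlation amplitude, while single-labeled species do not. A minimal numerical sketch of this idea (not the authors' analysis; the Poisson intensity traces and their means are hypothetical):

```python
import numpy as np

def cross_correlation(f1, f2, max_lag):
    """Normalized fluorescence cross-correlation
    G(tau) = <dF1(t) dF2(t+tau)> / (<F1> <F2>)."""
    d1 = f1 - f1.mean()
    d2 = f2 - f2.mean()
    n = len(f1)
    return np.array(
        [np.mean(d1[: n - k] * d2[k:]) for k in range(max_lag)]
    ) / (f1.mean() * f2.mean())

rng = np.random.default_rng(0)
n = 100_000
common = rng.poisson(5.0, n)   # dual-labeled species, seen in both channels
g_only = rng.poisson(5.0, n)   # green-only labeled species
r_only = rng.poisson(5.0, n)   # red-only labeled species

green = common + g_only        # green-channel intensity trace
red = common + r_only          # red-channel intensity trace

# Only the shared (dual-labeled) component produces a nonzero amplitude:
# G(0) ~ Var(common) / (<green><red>) ~ 5 / (10 * 10) = 0.05 here.
g_cross = cross_correlation(green, red, 10)
```

The zero-lag amplitude of `g_cross` is set by the dual-labeled species alone, which is why cleavage of the dual-labeled DNA substrate (removing the shared component) suppresses the cross-correlation signal.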
Resumo:
We have identified a new family of Tc1-like transposons in the zebrafish, Danio rerio. The sequence of a candidate active transposon, deduced from sampled Tzf elements, shows limited resemblance to the previously described Tdr1 elements of zebrafish. Both the Tzf and the Tdr elements are extremely abundant in zebrafish. We describe here a general strategy for detecting transposition events in a complex genome and demonstrate its utility by selectively monitoring hundreds of potentially active Tzf copies in the zebrafish genome against a background of other related elements. We have followed members of a zebrafish pedigree, using this two-dimensional transposon display strategy, to identify the first examples of active transposition of such elements in vertebrates.
Resumo:
Light confinement and control of an optical field have numerous applications in telecommunications for optical signal processing. When the wavelength of the electromagnetic field is on the order of the period of a photonic microstructure, the field undergoes reflection, refraction, and coherent scattering. This produces photonic bandgaps: forbidden frequency regions, or spectral stop bands, where light cannot propagate. Dielectric perturbations that break the perfect periodicity of these structures produce the analog of an impurity state in the bandgap of a semiconductor. The defect modes that exist at discrete frequencies within the photonic bandgap are spatially localized about the cavity defects in the photonic crystal. In this thesis the properties of two tight-binding approximations (TBAs) are investigated in one-dimensional (1D) and two-dimensional (2D) coupled-cavity photonic crystal structures. We require an efficient and simple approach that ensures the continuity of the electromagnetic field across dielectric interfaces in complex structures. We therefore develop E- and D-TBAs to calculate the modes in finite 1D and 2D two-defect coupled-cavity photonic crystal structures. In the E- and D-TBAs we expand the coupled-cavity E-modes in terms of the individual E- and D-modes, respectively. We investigate the dependence of the defect modes, their frequencies, and their quality factors on the relative placement of the defects in the photonic crystal structures. We then elucidate the differences between the two TBA formulations and describe the conditions under which each formulation is more robust to a dielectric perturbation. Our 1D analysis showed that the 1D modes were sensitive to the structure geometry.
The antisymmetric D-mode amplitudes show that the D-TBA did not capture the correct (tangential E-field) boundary conditions; nevertheless, the D-TBA did not yield significantly poorer results than the E-TBA. Our 2D analysis reveals that the E- and D-TBAs produced nearly identical mode profiles for every structure. Plots of the relative difference between the E- and D-mode amplitudes show that the D-TBA did capture the correct (normal E-field) boundary conditions. We found that the 2D TBA coupled-cavity mode calculations were 125-150 times faster than a finite-difference time-domain (FDTD) calculation for the same two-defect photonic crystal structure. Notwithstanding this efficiency, the appropriateness of either TBA was found to depend on the geometry of the structure and on whether the mode has a large normal or tangential component.
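The essence of a tight-binding approximation for two coupled cavities is that the coupled-cavity mode is expanded in the two single-defect modes, reducing the problem to a 2×2 eigenvalue problem whose off-diagonal coupling splits the degenerate defect frequency into symmetric and antisymmetric combinations. A minimal numerical sketch (normalized, hypothetical values; not the thesis code):

```python
import numpy as np

# Two-defect tight-binding sketch: each isolated defect supports a mode at
# frequency omega0; the overlap of the two defect modes gives a coupling kappa.
omega0 = 1.0   # single-defect mode frequency (normalized units, hypothetical)
kappa = 0.05   # inter-cavity coupling from mode overlap (hypothetical)

H = np.array([[omega0, kappa],
              [kappa, omega0]])

# Eigenvalues omega0 -/+ kappa; with this sign convention the symmetric
# combination [1, 1]/sqrt(2) lies at omega0 + kappa and the antisymmetric
# combination [1, -1]/sqrt(2) at omega0 - kappa.
freqs, modes = np.linalg.eigh(H)
```

The splitting 2κ shrinks as the defects are moved apart (the mode overlap decays), which is why the defect-mode frequencies depend on the relative placement of the defects.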
Resumo:
The geothermal regime of the western margin of the Great Bahama Bank was examined using bottom-hole temperature and thermal conductivity measurements obtained during and after Ocean Drilling Program (ODP) Leg 166. This study focuses on the data from the drilling transect of Sites 1003 through 1007. These data reveal two important observational characteristics. First, temperature vs. cumulative thermal resistance profiles from all the drill sites show significant curvature in the depth range of 40 to 100 mbsf; they tend to be concave-upward. Second, the conductive background heat-flow values for these five drill sites, determined from the deep, linear parts of the geothermal profiles, show a systematic variation along the drilling transect: heat flow is 43-45 mW/m² on the seafloor away from the bank and decreases upslope to ~35 mW/m². We examine three mechanisms as potential causes for the curved geothermal profiles: (1) a recent increase in sedimentation rate, (2) influx of seawater into shallow sediments, and (3) temporal fluctuation of the bottom water temperature (BWT). Our analysis shows that the first mechanism is negligible. The second mechanism may explain the data from Sites 1004 and 1005. The temperature profile of Site 1006 is most easily explained by the third mechanism. We reconstruct the history of BWT at this site by solving the inverse heat conduction problem. The inversion result indicates gradual warming by ~1°C throughout this century and is consistent with other hydrographic and climatic data from the western subtropical Atlantic. However, data from Sites 1003 and 1007 do not show such trends. Therefore, none of the three mechanisms tested here explains the observations from all the drill sites. As for the lateral variation of the background heat flow along the drill transect, we believe that much of it is caused by the thermal effect of the topographic variation.
We model this effect with a two-dimensional analytical solution. The model suggests that the background heat flow of this area is ~43 mW/m², a value similar to the background heat flow determined for the Gulf of Mexico on the opposite side of the Florida carbonate platform.
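The "temperature vs. cumulative thermal resistance" profiles referred to above are Bullard plots: for purely conductive heat flow q, temperature is linear in the cumulative thermal resistance R(z) = Σ Δz_i/k_i, so q is the slope of the deep, linear part of the profile. A minimal sketch of this construction (hypothetical layer values, not Leg 166 data):

```python
import numpy as np

# Bullard-plot sketch: T(z) = T0 + q * R(z) under steady conduction, where
# R(z) is the cumulative thermal resistance of the sediment column.
dz = np.array([20.0, 20.0, 20.0, 20.0])   # layer thicknesses, m (hypothetical)
k = np.array([1.0, 1.1, 1.2, 1.3])        # layer conductivities, W/(m K) (hypothetical)

# Cumulative thermal resistance at the layer boundaries, m^2 K / W.
R = np.concatenate(([0.0], np.cumsum(dz / k)))

q = 0.043    # conductive background heat flow, W/m^2 (~43 mW/m^2)
T0 = 4.0     # bottom-water temperature, degC (hypothetical)
T = T0 + q * R                       # synthetic, purely conductive profile

# Fitting T against R recovers the heat flow (slope) and BWT (intercept);
# curvature of real profiles signals departures from steady conduction.
q_fit, T0_fit = np.polyfit(R, T, 1)
```

Curvature in a Bullard plot, as observed between 40 and 100 mbsf at these sites, is exactly the signature of the non-steady or advective mechanisms the abstract examines.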
Resumo:
Site 996 is located above the Blake Diapir, where numerous indications of vertical fluid migration and the presence of hydrate existed prior to Ocean Drilling Program (ODP) Leg 164. Direct sampling of hydrates and visual observations of hydrate-filled veins that could be traced 30-40 cm along cores suggest a connection between fluid migration and hydrate formation. The composition of pore water squeezed from sediment cores showed large variations due to melting of hydrate during core recovery and the influence of saline water from the evaporitic diapir below. Analysis of water released during hydrate decomposition experiments showed that the recovered hydrates contained significant amounts of pore water. Solutions of the transport equations for deuterium (δ²H) and chloride (Cl⁻) were used to determine maximum (δ²H) and minimum (Cl⁻) in situ concentrations of these species. Minimum in situ hydrate concentrations were estimated by combining these results with Cl⁻ and δ²H values measured on hydrate meltwaters and on pore waters obtained by squeezing of sediments, by means of a method based on distances in the two-dimensional Cl⁻-δ²H space. The computed Cl⁻ and δ²H distributions indicate that the minimum-hydrate-amount solutions are representative of the actual hydrate amount. The highest and mean hydrate concentration estimates from our model are 31% and 10% of the pore space, respectively. These concentrations agree well with visual core observations, supporting the validity of the model assumptions. The minimum in situ Cl⁻ concentrations were used to constrain the rates of upward fluid migration. Simulation of all available data gave a mean flow rate of 0.35 m/k.y. (range: 0.125-0.5 m/k.y.).
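The distance-based estimate in Cl⁻-δ²H space is more involved, but the reason chloride constrains a lower bound on hydrate content can be illustrated with the standard chloride-freshening mass balance: hydrate excludes salt when it forms, so its dissociation during core recovery dilutes the pore-water chlorinity. A minimal sketch (hypothetical numbers; densities of hydrate water and pore water assumed equal for simplicity):

```python
# Simplified chloride mass balance (a sketch, not the authors' distance-based
# method): if a fraction S_h of the pore space held hydrate in situ, its
# dissociation freshens the recovered pore water, so
#   Cl_measured ~ Cl_insitu * (1 - S_h)   (equal densities assumed).
def min_hydrate_fraction(cl_measured, cl_insitu):
    """Minimum hydrate fraction of the pore space from chloride freshening."""
    return 1.0 - cl_measured / cl_insitu

# Hypothetical values: seawater-like in situ chlorinity (mM) and a sample
# freshened by 10%, matching the mean pore-space estimate quoted above.
s_h = min_hydrate_fraction(cl_measured=503.1, cl_insitu=559.0)
```

Because any in situ freshening (e.g. upward-migrating low-chlorinity fluid) also lowers the measured Cl⁻, estimates of this kind are minimum hydrate amounts, which is why the minimum in situ Cl⁻ profile is needed to separate the two effects.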