Abstract:
This Master's Thesis addresses the modelling of active faults for seismic hazard estimation in Haiti. A zoned probabilistic method has been used in both its classical and hybrid variants, the latter incorporating active faults as independent units in the hazard calculation; in that case, the seismic moment rate is shared between the faults and the seismogenic zone of the same region. The faults included in this study are the Septentrional, Matheux and Enriquillo faults. The results of both methods were compared to assess the importance of accounting for the faults in the calculation. First, the seismic catalog was updated, homogenized, declustered and analysed for completeness, in order to obtain a catalog ready for the hazard estimation. With the seismogenic zoning defined in previous studies and the updated catalog, Gutenberg-Richter recurrence relations were obtained for the shallow and deep seismicity of each zone. The attenuation models selected were those used by Benito et al. (2011), since the tectonic setting of the study area is very similar to that of Central America. They were implemented through a logic tree in which each branch is weighted by an index reflecting the relevance of each combination of models. Results are presented as seismic hazard maps for return periods of 475, 975 and 2475 years, for PGA and for spectral acceleration (SA) at structural periods of 0.1, 0.2, 0.5, 1.0 and 2.0 seconds, together with maps of the acceleration differences between the classical and the hybrid method. The maps show the importance of including the faults as separate units in the hazard calculation. The zoned maps show higher values where the shallow and the deep zones overlap, and the minimum values of the zoned approach exceed those of the hybrid method, especially in areas with no faults.
The highest values are those obtained on the fault zones with the hybrid method, which shows that the contribution of the faults in this method is very significant. The maximum PGA obtained is close to 963 gal near the Septentrional fault and about 460 gal near Matheux, while along the Enriquillo fault the PGA reaches 760 gal in the eastern segment and 730 gal in the western segment, compared with 240 gal obtained in the same area with the zoned approach. These values are similar, both in magnitude and in map morphology, to those obtained by Frankel et al. (2011), in contrast with those presented by Benito et al. (2012) and with the seismic code of the Dominican Republic.
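The Gutenberg-Richter recurrence step mentioned above can be sketched numerically. This is only a minimal illustration, not the thesis's actual procedure: it fits log10 N(>=M) = a - b*M to a synthetic catalog by least squares (a maximum-likelihood estimator such as Aki's would be more standard), and the function name and inputs are hypothetical.

```python
import numpy as np

def gutenberg_richter_fit(magnitudes, m_min):
    """Fit the Gutenberg-Richter recurrence law log10 N(>=M) = a - b*M
    to a declustered, completeness-filtered catalog, by least squares
    on the cumulative counts above the completeness magnitude m_min."""
    mags = np.asarray(magnitudes)
    mags = np.sort(mags[mags >= m_min])
    n_cum = np.arange(len(mags), 0, -1)      # number of events with magnitude >= M
    slope, intercept = np.polyfit(mags, np.log10(n_cum), 1)
    return intercept, -slope                 # a-value, b-value (reported positive)
```

In the thesis these relations are derived separately for the shallow and deep seismicity of each seismogenic zone.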
Abstract:
The work presented in this thesis is focused on the development of electronically reconfigurable antennas that are able to provide competitive electrical performance to the increasingly common applications operating at frequencies above 60 GHz. Specifically, this thesis presents the study, design, and implementation of reflectarray antennas, which incorporate liquid crystal (LC) materials to scan or reconfigure the beam electronically.
From a general point of view, a liquid crystal can be defined as a material whose dielectric permittivity is variable and can be controlled with an external excitation, which usually corresponds to a quasi-static (AC) electric field. By changing the dielectric permittivity at each cell that makes up the reflectarray, the phase shift on the aperture is controlled, so that a prescribed radiation pattern can be configured. Liquid crystal-based reflectarrays have been chosen for several reasons. The first has to do with the advantages provided by the reflectarray antenna with respect to other high-gain antennas, such as reflectors or phased arrays. The RF feeding in reflectarrays is achieved by using a common primary source (as in reflectors). This arrangement, together with the large number of degrees of freedom provided by the cells that make up the reflectarray (as in arrays), allows these antennas to provide electrical performance similar to or even better than that of reflectors and phased arrays, but at a lower cost and with a more compact antenna structure. The second reason is the flexibility of the liquid crystal to be confined in enclosures of varied geometry, due to its fluidity (a property of liquids). Therefore, the liquid crystal is able to adapt to a planar configuration, so that it forms one or more of the typical layers of this configuration. This drastically simplifies both the structure and the manufacture of this type of antenna, even when compared with reconfigurable reflectarrays based on other technologies such as diodes, MEMS, etc. Therefore, the development cost of this type of antenna is very small, which means that electrically large reconfigurable reflectarrays could be manufactured at low cost and in large production volumes. A paradigmatic example of a similar structure is the liquid crystal display, which has already been commercialized successfully.
The third reason lies in the fact that, at present, liquid crystal is one of the few technologies capable of providing beam reconfigurability at frequencies above 60 GHz. In fact, the liquid crystal allows its permittivity to be switched in a wide range of frequencies, from DC to the visible spectrum, including microwaves and THz. Other technologies, such as ferroelectric materials, graphene or CMOS "on chip" technology, also allow the beam to be switched at these frequencies. However, CMOS technology is expensive and is currently limited to frequencies below 150 GHz, and although ferroelectric materials or graphene can switch at higher frequencies and in a wider range, they have serious difficulties that make them still immature. Ferroelectric materials require very high switching voltages, making them unattractive, whereas the electromagnetic modelling of graphene is still under discussion, so that experimental results validating its suitability have not been reported yet. These three reasons make LC-based reflectarrays attractive for many applications that involve electronically reconfigurable beams at frequencies beyond 60 GHz. Applications such as high-resolution imaging radars, molecular spectroscopy, radiometers for atmospheric observation, or high-frequency wireless communications (WiGig) are just some of them. This thesis is divided into three parts. In the first part, the most common properties of liquid crystal materials are described, especially those exhibited in the nematic phase. The study is focused on the dielectric anisotropy (Δε) of uniaxial liquid crystals, which is defined as the difference between the parallel (ε∥) and perpendicular (ε⊥) permittivities: Δε = ε∥ − ε⊥.
This parameter allows the permittivity of an LC confined in an arbitrary volume at a certain biasing voltage to be described by solving a variational problem that involves both the electrostatic and elastic energies. The frequency dependence of Δε is also described and characterised. Note that an appropriate LC model is quite important to ensure enough accuracy in the phase shift provided by each cell that makes up the reflectarray, and therefore to achieve good electrical performance at the antenna level. The second part of the thesis is focused on the design of resonant reflectarray cells based on liquid crystal. Resonant cells have been chosen because they are able to provide a sufficient phase range given the dielectric anisotropy of liquid crystals, which is typically small. Thus, the aim of this part is to investigate several reflectarray cell architectures capable of providing good electrical performance at the antenna level, significantly improving on the performance of the cells reported in the literature. Similarly, another objective is to develop a general tool to design these cells. To fulfil these objectives, the electrical performance of different types of resonant reflectarray elements is investigated, beginning with the simplest, which is made up of a single resonator and has limited the state of the art. To overcome the electrical limitations of the single-resonant cell, several elements consisting of multiple resonators are considered, which can be single-layer or multilayer. In a first step, the design procedure of these structures makes use of the conventional electromagnetic model that has been used in the literature for this type of cell, which considers that the liquid crystal behaves as a homogeneous and isotropic material whose permittivity varies between ε∥ and ε⊥.
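The uniaxial anisotropy just defined can be written out explicitly. As a sketch (standard uniaxial-crystal algebra, not a formula quoted from the thesis), when the LC director is tilted by a bias-dependent angle θ in the x-z plane, the relative permittivity tensor seen by the cell is:

```latex
\boldsymbol{\varepsilon}_r(\theta) =
\begin{pmatrix}
\varepsilon_\perp + \Delta\varepsilon\,\sin^2\theta & 0 & \Delta\varepsilon\,\sin\theta\cos\theta \\
0 & \varepsilon_\perp & 0 \\
\Delta\varepsilon\,\sin\theta\cos\theta & 0 & \varepsilon_\perp + \Delta\varepsilon\,\cos^2\theta
\end{pmatrix},
\qquad \Delta\varepsilon = \varepsilon_\parallel - \varepsilon_\perp ,
```

so the two extreme bias states recover diag(ε⊥, ε⊥, ε∥) at θ = 0º and diag(ε∥, ε⊥, ε⊥) at θ = 90º.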
However, in this part of the thesis it is shown that the conventional model is not enough to describe the physical behaviour of the liquid crystal in reflectarray cells accurately. Therefore, a more accurate analysis and design procedure based on a more general model is proposed and developed, which defines the liquid crystal as an anisotropic, three-dimensionally inhomogeneous material. The design procedure is able to optimize multi-resonant cells efficiently to achieve good electrical performance in terms of bandwidth, phase range, losses, or sensitivity to the angle of incidence. The errors made when the conventional model is used, in both amplitude and phase, have also been analysed for various cell geometries, using measured results from several antenna prototypes made up of real liquid crystals at frequencies above 100 GHz. The measurements have been performed in a periodic environment using a quasi-optical bench, which has been designed especially for this purpose. One of these prototypes has been optimized to achieve a relatively large bandwidth (10%) at 100 GHz, low losses, a phase range of more than 360º, a low sensitivity to the angle of incidence, and a low influence of the transversal inhomogeneity of the liquid crystal in the cell. The performance of this prototype at the cell level clearly improves on that achieved by other elements reported in the literature, so this prototype has been used in the last part of the thesis to build several complete antennas for beam-scanning applications. Finally, in this second part of the thesis, a novel strategy to characterise the macroscopic anisotropy using reflectarray cells is presented. The results at both RF and AC frequencies are compared with those obtained by other methods. The third part of the thesis consists of the study, design, manufacture and testing of LC-based reflectarray antennas in complex configurations.
Note that the design procedure of a passive reflectarray antenna consists only of finding the dimensions of the metallisations of each cell (which are used for phase control), using well-known optimization processes. However, in the case of reconfigurable reflectarrays based on liquid crystals, an additional step must be taken into account, which consists of accurately calculating the control voltages to be applied to each cell to configure the required phase-shift distribution on the surface of the antenna. Similarly, the structure to address the voltages to each cell and the control circuitry must also be considered. The voltage synthesis is therefore as important as, or even more important than, the design of the cell geometries (dimensions), since the voltages are directly related to the phase shift. Several voltage synthesis procedures have been proposed in the state of the art, based on the experimental characterization of the phase/voltage curve. However, this characterization can only be carried out at a single angle of incidence and for certain cell dimensions, so the synthesized voltages differ from those needed, giving rise to phase errors of more than 70°. Thus, the performance of the LC-reflectarrays reported in the literature is limited in terms of bandwidth, scanning range or side-lobe level. In this last part of the thesis, a new voltage synthesis procedure has been defined and developed, which allows the required voltage to be calculated at each cell using simulations that take into account the particular dimensions of the cells, their angles of incidence, the frequency, and the AC biasing signal (frequency and waveform). The strategy is based on modelling each of the permittivity states of the liquid crystal as an anisotropic substrate with longitudinal (1D) inhomogeneity or, in certain cases, as an equivalent homogeneous tensor. The accuracy of both electromagnetic models is also discussed.
The phase errors made using the proposed voltage synthesis are below 7º. In order to obtain an efficient tool to analyse and design the reflectarray, an electromagnetic analysis tool based on the Method of Moments in the Spectral Domain (SD-MoM) for anisotropic stratified media has also been written and implemented, and is used at each iteration of the voltage synthesis procedure. The voltage synthesis is also designed to minimize the effect of amplitude ripple on the radiation pattern, which is typical of reflectarrays made up of cells exhibiting high losses, and this represents a further advance towards better antenna performance. To calculate the radiation patterns used in the synthesis procedure, an element-by-element analysis is assumed under the local periodicity approach, and the use of a novel method is proposed that removes the limitation that local periodicity imposes on the excitation. Once the appropriate strategy to calculate the voltages to be applied at each cell was developed, and once both the structure to address the voltages to the antenna and the control circuits had been designed and manufactured, two complete LC-based reflectarray antennas operating at 100 GHz were designed, manufactured and tested using the previously presented cells. The first prototype consists of a single-offset reflectarray with beam-scanning capabilities in one plane (elevation or azimuth). Although several LC-reflectarray antennas providing 2-D scanning capabilities were also designed, and certain strategies to achieve 2-D addressing of the voltages were proposed, the manufactured prototype addresses the voltages in one dimension in order to reduce the number of controls and manufacturing errors, and thereby also to validate the design tool.
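The core idea of the voltage synthesis can be illustrated with a toy lookup step. This is only a hedged sketch of the concept, not the SD-MoM-driven procedure of the thesis: it assumes per-cell phase/voltage tables have already been simulated for each cell's own dimensions and angle of incidence (all names and shapes are hypothetical), and it simply picks, per cell, the bias level closest in wrapped phase.

```python
import numpy as np

def synthesize_voltages(required_phase, phase_tables, voltages):
    """Pick, for every reflectarray cell, the bias voltage whose simulated
    phase response best matches the required phase-shift.

    required_phase : (N,)  required phase per cell, degrees
    phase_tables   : (N, V) simulated phase of each cell at each bias level
    voltages       : (V,)  bias levels corresponding to the table columns
    """
    # wrapped phase distance, so e.g. 350 deg and -10 deg count as close
    diff = np.angle(np.exp(1j * np.deg2rad(required_phase[:, None] - phase_tables)))
    best = np.argmin(np.abs(diff), axis=1)
    return voltages[best]
```

In the thesis the tables are recomputed by the SD-MoM analysis at each synthesis iteration rather than taken from a fixed measured curve, which is what keeps the phase errors small.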
For an average aperture size (with between 30 and 50 rows and columns, which means a reflectarray with more than 900 cells), the single-offset configuration provides an antenna gain of between 20 and 30 dBi and a large scanning range. The prototype tested at 100 GHz exhibits an electronically scanned beam over an angular range of 55º and an 8% bandwidth, in which the side lobe level (SLL) remains better than -13 dB. The maximum gain is 19.4 dBi. The electrical performance of the antenna clearly improves on that achieved by other authors in the state of the art. The second prototype corresponds to a dual-reflector antenna with a liquid crystal-based reflectarray used as a sub-reflector for beam scanning in one plane (azimuth or elevation). The main objective is to obtain a higher gain than that provided by the single-offset configuration, but using a more compact architecture. In this case, a maximum gain of 35 dBi is achieved, although at the expense of reducing the scanning range to 12°, which is inherent to this type of structure. As a general statement, the voltage synthesis and the cell design procedure jointly make up a complete, accurate and efficient design tool for reconfigurable reflectarray antennas based on liquid crystals. The tool has been validated by testing the previously mentioned prototypes at 100 GHz, which achieve something never reached before for this type of antenna: competitive electrical performance and an excellent prediction of the results. The design procedure is general and can therefore be used at any frequency at which the liquid crystal exhibits dielectric anisotropy. The two prototypes designed, manufactured and tested in this thesis are also among the first real beam-scanning antennas operating at frequencies above 100 GHz.
In fact, the dual-reflector antenna is the first electronically scanned dual-reflector antenna operating at frequencies above 60 GHz (the operating frequency is 100 GHz) with a gain greater than 25 dBi, being in turn the first dual-reflector antenna with a real reconfigurable sub-reflectarray. Finally, some improvements that should still be investigated to make these antennas commercially competitive are proposed.
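The phase-shift distribution that each prototype must configure on its aperture follows from simple geometry: the feed-to-element path delay is compensated and a progressive phase is imposed toward the desired beam direction. A minimal sketch of this standard reflectarray relation, with hypothetical names (in the thesis these phases are the input to the voltage synthesis):

```python
import numpy as np

def required_phase_distribution(elem_xy, feed_pos, beam_dir, freq_hz):
    """Required reflection phase (degrees, in [0, 360)) at each element so
    the re-radiated field forms a plane wave toward `beam_dir`.

    elem_xy  : (N, 2) element positions on the planar aperture, metres
    feed_pos : (3,)   phase centre of the primary feed, metres
    beam_dir : (3,)   unit vector of the desired beam direction
    freq_hz  : operating frequency, Hz
    """
    c = 299_792_458.0
    k = 2 * np.pi * freq_hz / c                      # free-space wavenumber
    elems = np.column_stack([elem_xy, np.zeros(len(elem_xy))])
    path = np.linalg.norm(elems - feed_pos, axis=1)  # feed-to-element delay
    progressive = elems @ beam_dir                   # aperture phase gradient
    phase = k * (path - progressive)                 # compensate both terms
    return np.rad2deg(np.mod(phase, 2 * np.pi))
```

Scanning the beam then amounts to recomputing this distribution for a new `beam_dir` and re-synthesizing the cell voltages.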
Abstract:
The effect of type of fiber, site of fermetation, method for quantifying insoluble and soluble dietary fiber, and their correction for intestinal mucin on fiber digestibility were examined in rabbits. Three diets differing in soluble fiber were formulated (8.5% soluble fiber, on DM basis, in the low soluble fiber [LSF] diet; 10.2% in the medium soluble fiber [MSF] diet; and 14.5% in the high soluble fiber [HSF] diet). They were obtained by replacing half of the dehydrated alfalfa in the MSF diet with a mixture of beet and apple pulp (HSF diet) or with a mix of oat hulls and soybean protein (LSF diet). Thirty rabbits with ileal T-cannulas were used to determine ileal and fecal digestibility. Cecal digestibility was determined by difference between fecal and ileal digestibility. Insoluble fiber was measured as NDF, insoluble dietary fiber (IDF), and in vitro insoluble fiber, whereas soluble fiber was calculated as the difference between total dietary fiber (TDF) and NDF (TDF_NDF), IDF (TDF-IDF), and in vitro insoluble fiber (TDF-in vitro insoluble fiber). The intestinal mucin content was used to correct the TDF and soluble fiber digestibility. Ileal and fecal concentration of mucin increased from the LSF to the HSF diet group (P < 0.01). Once corrected for intestinal mucin, ileal and fecal digestibility of TDF and soluble fiber increased whereas cecal digestibility decreased (P < 0.01). Ileal digestibility of TDF increased from the LSF to the HSF diet group (12.0 vs. 28.1%; P < 0.01), with no difference in the cecum (26.4%), resulting in a higher fecal digestibility from the LSF to the HSF diet group (P < 0.01). Ileal digestibility of insoluble fiber increased from the LSF to the HSF diet group (11.3 vs. 
21.0%; P < 0.01), with no difference in the cecum (13.9%) and no effect of fiber method, resulting in a higher fecal digestibility for rabbits fed the HSF diet compared with the MSF and LSF diet groups (P < 0.01). Fecal digestibility of NDF was higher than that of IDF or in vitro insoluble fiber (P < 0.01). Ileal soluble fiber digestibility was higher for the HSF than for the LSF diet group (43.6 vs. 14.5%; P < 0.01), and the fiber method did not affect it. Cecal soluble fiber digestibility decreased from the LSF to the HSF diet group (72.1 vs. 49.2%; P < 0.05). The lowest cecal and fecal soluble fiber digestibility was measured using TDF-NDF (P < 0.01). In conclusion, a correction for intestinal mucin is necessary for ileal TDF and soluble fiber digestibility, whereas the selection of the fiber method is of minor relevance. The inclusion of sugar beet and apple pulp increased the amount of TDF fermented in the small intestine.
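The two difference calculations at the core of the method above — soluble fiber as TDF minus an insoluble-fiber measure, and cecal digestibility as fecal minus ileal digestibility — can be sketched as follows (the numbers are illustrative placeholders, not data from the study):

```python
# Sketch of the two difference calculations; all numbers are illustrative.

def soluble_fiber(tdf, insoluble):
    """Soluble fiber by difference: TDF minus an insoluble-fiber measure
    (NDF, IDF, or in vitro insoluble fiber), % of DM."""
    return tdf - insoluble

def cecal_digestibility(fecal, ileal):
    """Cecal digestibility by difference between fecal and ileal values (%)."""
    return fecal - ileal

# Hypothetical HSF-type diet: TDF 35.0% DM, NDF 20.5% DM
print(soluble_fiber(35.0, 20.5))          # 14.5 (% DM, TDF-NDF)
print(cecal_digestibility(54.5, 28.1))    # cecal TDF digestibility (%)
```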
Resumo:
In this work a p-adaptation (modification of the polynomial order) strategy based on the minimization of the truncation error is developed for high order discontinuous Galerkin methods. The truncation error is approximated by means of a truncation error estimation procedure and enables the identification of mesh regions that require adaptation. Three truncation error estimation approaches are developed, termed a posteriori, quasi-a priori and quasi-a priori corrected. Fine solutions, which are obtained by enriching the polynomial order, are required to solve the numerical problem with adequate accuracy. Of the three truncation error estimation methods, the first needs time-converged solutions, while the last two rely on non-converged solutions, which leads to faster computations. Based on these truncation error estimation methods, algorithms for mesh adaptation were designed and tested. Firstly, an isotropic adaptation approach is presented, which leads to equally distributed polynomial orders in the different coordinate directions. This first implementation is improved by incorporating a method to extrapolate the truncation error, which results in a significant reduction of computational cost. Secondly, the employed high order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. The incorporation of anisotropic features leads to meshes with different polynomial orders in the different coordinate directions, such that flow features related to the geometry are better resolved. These adaptations result in a significant reduction of degrees of freedom and computational cost, although the amount of improvement depends on the test case. Finally, this anisotropic approach is extended by using error extrapolation, which leads to an even greater reduction in computational cost. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations.
The main result is that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of a factor of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
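The adaptation logic described above — raise the polynomial order only in mesh regions whose estimated truncation error exceeds a tolerance — can be sketched as below. The per-element error values are hypothetical stand-ins for the tau-estimation procedure, and `adapt_orders` is an illustrative name, not code from the thesis:

```python
# Minimal sketch of isotropic p-adaptation driven by a truncation error
# estimate. The estimates here are hypothetical placeholders.

def adapt_orders(tau_estimates, orders, tol, p_max):
    """Return new polynomial orders: enrich elements whose estimated
    truncation error exceeds tol, capped at p_max."""
    new_orders = []
    for tau, p in zip(tau_estimates, orders):
        if tau > tol and p < p_max:
            new_orders.append(p + 1)   # enrich this element
        else:
            new_orders.append(p)       # keep the current order
    return new_orders

# Hypothetical per-element error estimates, uniform initial order p = 3
tau = [1e-2, 3e-5, 4e-3, 9e-6]
print(adapt_orders(tau, [3, 3, 3, 3], tol=1e-4, p_max=8))  # [4, 3, 4, 3]
```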
Resumo:
Since wall-bounded turbulence was first recognized more than one century ago, its near-wall region (buffer layer) has been studied extensively and has become relatively well understood, owing to the low local Reynolds number and narrow scale separation. The region just above the buffer layer, i.e., the logarithmic layer, is now receiving increasing attention due to its self-similar properties. Flores et al. (2007b) and Flores & Jiménez (2010) show that the statistics of the logarithmic layer are largely independent of the other layers, implying that it might be possible to study it separately, which would significantly reduce the computational cost of simulating it. Attempts along these lines were made by Mizuno & Jiménez (2013), who simulated the logarithmic layer without the buffer layer and obtained statistics that agree reasonably well with those of full simulations.
Besides, the logarithmic layer might be mimicked by other, simpler shear-driven turbulence. For example, Pumir (1996) found that statistically stationary homogeneous shear turbulence (SS-HST) also bursts, in a manner strikingly similar to the self-sustaining process in wall-bounded turbulence. Based on these considerations, this thesis tries to reveal to what extent the logarithmic layer of channels is similar to the simplest shear-driven turbulence, SS-HST, by comparing both the kinematics and the dynamics of coherent structures in the two flows. Results for the channel are those of Lozano-Durán et al. (2012) and Lozano-Durán & Jiménez (2014b). The roadmap of this task is divided into three stages. First, SS-HST is investigated by means of a new direct numerical simulation code, spectral in the two horizontal directions and using compact finite differences in the direction of the shear. No remeshing is used to impose the shear-periodic boundary condition. The influence of the geometry of the computational box is explored. Since HST has no characteristic outer length scale and tends to fill the computational domain, long-term simulations of HST are ‘minimal’ in the sense of containing on average only a few large-scale structures. It is found that the main limit is the spanwise box width, Lz, which sets the length and velocity scales of the turbulence, and that the two other box dimensions should be sufficiently large (Lx > 2Lz, Ly > Lz) to prevent the other directions from being constrained as well. It is also found that very long boxes, Lx > 2Ly, couple with the passing period of the shear-periodic boundary condition and develop strong unphysical linearized bursts. Within those limits, the flow shows interesting similarities and differences with other shear flows, and in particular with the logarithmic layer of wall-bounded turbulence. They are explored in some detail. They include a self-sustaining process for large-scale streaks and quasi-periodic bursting.
The bursting time scale is approximately universal, ~20S⁻¹ (S is the mean shear rate), and the availability of two different bursting systems allows the growth of the bursts to be related with some confidence to the shearing of initially isotropic turbulence. It is concluded that SS-HST, conducted within the proper computational parameters, is a very promising system for studying shear turbulence in general. Second, the same coherent structures as in the channels studied by Lozano-Durán et al. (2012), namely three-dimensional vortex clusters (strong dissipation) and Qs (strong tangential Reynolds stress, -uv), are studied by direct numerical simulation of SS-HST with acceptable box aspect ratios and Reynolds numbers up to Reλ ~ 250 (based on the Taylor microscale). The influence of intermittency on the time-independent threshold is discussed. These structures have elongations in the streamwise direction similar to those of detached families in channels, until they are of comparable size to the box. Their fractal dimensions and their inner and outer lengths as a function of volume agree well with their counterparts in channels. The study of their spatial organization found that Qs of the same type are aligned roughly in the direction of the velocity vector of the quadrant they belong to, while Qs of different types are restricted by the fact that there should be no velocity clash, which makes Q2s (ejections, u < 0, v > 0) and Q4s (sweeps, u > 0, v < 0) pair in the spanwise direction. This is verified by inspecting velocity structures, other quadrants such as u-w and v-w in SS-HST, and the detached families in the channel. The streamwise alignment of attached Qs of the same type in channels is due to the modulation of the wall.
The average flow field conditioned to Q2-Q4 pairs shows that vortex clusters sit in the middle of the pair, but prefer the two shear layers lodged at the top and bottom of Q2s and Q4s, respectively, so that the spanwise vorticity inside vortex clusters does not cancel. The wall amplifies the difference between the sizes of low- and high-speed streaks associated with attached Q2-Q4 pairs as the pairs reach closer to the wall, which is verified by the correlation of streamwise velocity conditioned to attached Q2s and Q4s with different heights. Vortex clusters in SS-HST associated with Q2s or Q4s are also flanked by counter-rotating streamwise vortices in the spanwise direction, as in the channel. The long conical ‘wake’ originating from tall attached vortex clusters, found by del Álamo et al. (2006) and Flores et al. (2007b), disappears in SS-HST; however, this is only true for tall attached vortices associated with Q2s, not for those associated with Q4s, whose averaged flow field is actually quite similar to that in SS-HST. Third, the temporal evolutions of Qs and vortex clusters are studied by using the method introduced by Lozano-Durán & Jiménez (2014b). Structures are sorted into branches, which are further organized into graphs. Both the spatial and the temporal resolutions are chosen to capture the most probable pointwise Kolmogorov length and time at the most extreme moment. Due to the minimal-box effect, there is only one main graph, consisting of almost all the branches, whose instantaneous volume and number of structures follow the intermittent kinetic energy and enstrophy. The lifetime of branches, which makes more sense for primary branches, loses its meaning in SS-HST because the contributions of primary branches to the total Reynolds stress or enstrophy are almost negligible. This is also true in the outer layer of channels. Instead, the lifetime of graphs in channels is compared with the bursting time in SS-HST.
Vortex clusters are associated with almost the same quadrant, in terms of their mean velocities, during their lifetime, especially those related to ejections and sweeps. As in channels, ejections in SS-HST move upwards with an average vertical velocity uτ (the friction velocity), while the opposite is true for sweeps. Vortex clusters, on the other hand, are almost stationary in the vertical direction. In the streamwise direction, they are advected by the local mean velocity and thus deformed by the mean velocity difference. Sweeps and ejections move faster and slower than the mean velocity, respectively, both by 1.5uτ. Vortex clusters move with the same speed as the mean velocity. It is verified that the incoherent structures near the wall are due to the wall rather than to their small size. The results suggest that coherent structures in channels are not particularly associated with the wall, or even with a given shear profile.
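The shear-periodic boundary condition mentioned above identifies the top and bottom of the box up to a streamwise offset that grows linearly in time with the mean shear. For a field that is periodic in x, such a fractional shift can be applied exactly through FFT phase factors; a minimal sketch (illustrative, not the thesis code):

```python
import numpy as np

# Shifting a periodic 1-D slice in x by an arbitrary (non-grid) offset via
# FFT phase factors, as needed when matching shear-shifted box faces.

def shear_periodic_shift(u_row, Lx, shift):
    """Shift a periodic 1-D slice in x by 'shift' (same units as Lx)."""
    n = u_row.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=Lx / n)   # angular wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(u_row) * np.exp(-1j * k * shift)))

u = np.sin(2 * np.pi * np.arange(64) / 64)
v = shear_periodic_shift(u, Lx=1.0, shift=0.25)   # quarter-box shift
print(np.allclose(v, np.roll(u, 16)))  # Lx/4 = 16 grid points: True
```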
Resumo:
We report a general method for screening, in solution, the impact of deviations from canonical Watson-Crick composition on the thermodynamic stability of nucleic acid duplexes. We demonstrate how fluorescence resonance energy transfer (FRET) can be used to detect directly free energy differences between an initially formed “reference” duplex (usually a Watson-Crick duplex) and a related “test” duplex containing a lesion/alteration of interest (e.g., a mismatch, a modified, a deleted, or a bulged base, etc.). In one application, one titrates into a solution containing a fluorescently labeled, FRET-active, reference duplex, an unlabeled, single-stranded nucleic acid (test strand), which may or may not compete successfully to form a new duplex. When a new duplex forms by strand displacement, it will not exhibit FRET. The resultant titration curve (normalized fluorescence intensity vs. logarithm of test strand concentration) yields a value for the difference in stability (free energy) between the newly formed, test strand-containing duplex and the initial reference duplex. The use of competitive equilibria in this assay allows the measurement of equilibrium association constants that far exceed the magnitudes accessible by conventional titrimetric techniques. Additionally, because of the sensitivity of fluorescence, the method requires several orders of magnitude less material than most other solution methods. We discuss the advantages of this method for detecting and characterizing any modification that alters duplex stability, including, but not limited to, mutagenic lesions. We underscore the wide range of accessible free energy values that can be defined by this method, the applicability of the method in probing for a myriad of nucleic acid variations, such as single nucleotide polymorphisms, and the potential of the method for high throughput screening.
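The free-energy difference extracted by the assay can be related to the two duplexes' association constants by standard thermodynamics, ΔΔG° = −RT ln(K_test/K_ref); a minimal sketch with hypothetical constants (not values from the paper):

```python
import math

# Standard relation between association constants and stability difference;
# the constants below are hypothetical, purely for illustration.
R = 8.314  # gas constant, J/(mol K)

def delta_delta_g(k_test, k_ref, temp_k=298.15):
    """Stability difference in kJ/mol; positive => test duplex less stable."""
    return -R * temp_k * math.log(k_test / k_ref) / 1000.0

# Hypothetical case: a mismatch weakens association 100-fold
print(round(delta_delta_g(1e8, 1e10), 2))  # 11.42 kJ/mol destabilization
```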
Resumo:
Approximately 250,000 measurements of the pCO2 difference between surface water and the marine atmosphere, ΔpCO2, have been assembled for the global oceans. Observations made in the equatorial Pacific during El Niño events have been excluded from the data set. These observations are mapped on a global 4° × 5° grid for a single virtual calendar year (chosen arbitrarily to be 1990) representing a non-El Niño year. Monthly global distributions of ΔpCO2 have been constructed using an interpolation method based on a lateral advection-diffusion transport equation. The net flux of CO2 across the sea surface has been computed using the ΔpCO2 distributions and CO2 gas transfer coefficients across the sea surface. The annual net uptake flux of CO2 by the global oceans thus estimated ranges from 0.60 to 1.34 Gt-C⋅yr−1, depending on the formulation used for the wind-speed dependence of the gas transfer coefficient. These estimates are subject to an error of up to 75% resulting from the numerical interpolation method used to estimate the distribution of ΔpCO2 over the global oceans. Temperate and polar oceans of both hemispheres are the major sinks for atmospheric CO2, whereas the equatorial oceans are the major sources of CO2. The Atlantic Ocean is the most important CO2 sink, providing about 60% of the global ocean uptake, while the Pacific Ocean is neutral, its equatorial source flux being balanced by the sink flux of its temperate oceans. The Indian and Southern Oceans each take up about 20%.
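The flux computation described above multiplies ΔpCO2 by a gas transfer coefficient, whose wind-speed dependence is what drives the 0.60–1.34 Gt-C⋅yr−1 spread. A minimal sketch using a generic quadratic wind-speed formulation (coefficients and solubility value illustrative, not those compared in the paper):

```python
# Sketch of the air-sea flux: F = k * s * dpCO2, with a quadratic
# wind-speed dependence of the gas transfer velocity (k = a * u**2 form).
# The coefficient and solubility below are illustrative placeholders.

def co2_flux(dpco2_uatm, wind_ms, solubility=3.24e-2, a=0.39):
    """Net sea-air CO2 flux (arbitrary consistent units).
    dpco2_uatm: pCO2(sea) - pCO2(air); positive => outgassing.
    wind_ms: wind speed (m/s)."""
    k = a * wind_ms ** 2          # gas transfer velocity, quadratic in wind
    return k * solubility * dpco2_uatm

# Equatorial source (positive dpCO2) vs. temperate/polar sink (negative)
print(co2_flux(+60.0, 6.0) > 0)   # True: ocean releases CO2
print(co2_flux(-40.0, 10.0) < 0)  # True: ocean takes up CO2
```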
Resumo:
Molecular and fragment ion data of intact 8- to 43-kDa proteins from electrospray Fourier-transform tandem mass spectrometry are matched against the corresponding data in sequence data bases. Extending the sequence tag concept of Mann and Wilm for matching peptides, a partial amino acid sequence in the unknown is first identified from the mass differences of a series of fragment ions, and the mass position of this sequence is defined from molecular weight and the fragment ion masses. For three studied proteins, a single sequence tag retrieved only the correct protein from the data base; a fourth protein required the input of two sequence tags. However, three of the data base proteins differed by having an extra methionine or by missing an acetyl or heme substitution. The positions of these modifications in the protein examined were greatly restricted by the mass differences of its molecular and fragment ions versus those of the data base. To characterize the primary structure of an unknown represented in the data base, this method is fast and specific and does not require prior enzymatic or chemical degradation.
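The core of the sequence-tag idea above — reading residues off the mass differences between consecutive fragment ions — can be sketched as follows; the residue-mass table is abbreviated and the ion masses are hypothetical:

```python
# Reading a sequence tag from fragment-ion mass spacings: each adjacent
# difference in an ion series is matched to an amino acid residue mass.
RESIDUE_MASSES = {  # average residue masses (Da), abbreviated table
    "G": 57.05, "A": 71.08, "S": 87.08, "V": 99.13, "L": 113.16,
}

def sequence_tag(fragment_masses, tol=0.1):
    """Return the residues implied by consecutive fragment-ion spacings."""
    tag = []
    for lo, hi in zip(fragment_masses, fragment_masses[1:]):
        diff = hi - lo
        match = [aa for aa, m in RESIDUE_MASSES.items() if abs(m - diff) <= tol]
        tag.append(match[0] if match else "?")  # '?' if no residue fits
    return "".join(tag)

# Hypothetical ion series spaced by Ala, Gly, Leu residue masses
ions = [1000.00, 1071.08, 1128.13, 1241.29]
print(sequence_tag(ions))  # AGL
```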
Resumo:
The helix-coil transition equilibrium of polypeptides in aqueous solution was studied by molecular dynamics simulation. The peptide growth simulation method was introduced to generate dynamic models of polypeptide chains in a statistical (random) coil or an alpha-helical conformation. The key element of this method is to build up a polypeptide chain during the course of a molecular transformation simulation, successively adding whole amino acid residues to the chain in a predefined conformation state (e.g., alpha-helical or statistical coil). Thus, oligopeptides of the same length and composition, but having different conformations, can be incrementally grown from a common precursor, and their relative conformational free energies can be calculated as the difference between the free energies for growing the individual peptides. This affords a straightforward calculation of the Zimm-Bragg sigma and s parameters for helix initiation and helix growth. The calculated sigma and s parameters for the polyalanine alpha-helix are in reasonable agreement with the experimental measurements. The peptide growth simulation method is an effective way to study quantitatively the thermodynamics of local protein folding.
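The Zimm-Bragg σ and s parameters mentioned above determine the helix content through the standard 2×2 transfer matrix; a minimal sketch computing the helix fraction of an N-residue chain (parameter values illustrative, not those calculated in the study):

```python
import math

# Zimm-Bragg helix-coil model: transfer matrix [[s, 1], [sigma*s, 1]].
def partition_function(sigma, s, n):
    """Z for an n-residue chain; (a, b) are weights ending in (helix, coil)."""
    a, b = 0.0, 1.0
    for _ in range(n):
        a, b = s * a + sigma * s * b, a + b  # add one residue
    return a + b

def helix_fraction(sigma, s, n, ds=1e-6):
    """theta = (1/n) d ln Z / d ln s, by numerical differentiation."""
    z1 = partition_function(sigma, s, n)
    z2 = partition_function(sigma, s + ds, n)
    return (s / n) * (math.log(z2) - math.log(z1)) / ds

# Illustrative parameters for a 50-mer near the helix-coil transition
print(round(helix_fraction(sigma=1e-3, s=1.1, n=50), 3))
```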
Resumo:
A temperature jump (T-jump) method capable of initiating thermally induced processes on the picosecond time scale in aqueous solutions is introduced. Protein solutions are heated by energy from a laser pulse that is absorbed by homogeneously dispersed molecules of the dye crystal violet. These act as transducers by releasing the energy as heat to cause a T-jump of up to 10 K with a time resolution of 70 ps. The method was applied to the unfolding of RNase A. At pH 5.7 and 59 degrees C, a T-jump of 3-6 K induced unfolding, which was detected by picosecond transient infrared spectroscopy of the amide I region between 1600 and 1700 cm-1. The difference spectral profile at 3.5 ns closely resembled that found for the equilibrium (native-unfolded) states. The signal at 1633 cm-1, corresponding to the beta-sheet structure, achieved 15 +/- 2% of the decrease found at equilibrium, within 5.5 ns. However, no decrease in absorbance was detected until 1 ns after the T-jump. The disruption of beta-sheet therefore appears to be subject to a delay of approximately 1 ns. Prior to 1 ns after the T-jump, water might be accessing the intact hydrophobic regions.
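The size of such a dye-mediated T-jump follows from simple calorimetry, ΔT = E_absorbed/(ρ·c_p·V); a back-of-the-envelope sketch with illustrative pulse energy and illuminated volume (not parameters reported in the paper):

```python
# Heating of a small water volume by absorbed laser energy:
# dT = E / (rho * c_p * V). Pulse energy and volume are illustrative.

def temperature_jump(energy_j, volume_m3, rho=1000.0, c_p=4184.0):
    """dT (K) for energy_j joules deposited in volume_m3 of water."""
    return energy_j / (rho * c_p * volume_m3)

# e.g. 1 mJ absorbed in a 2.4e-11 m^3 (24 nL) illuminated volume
print(round(temperature_jump(1e-3, 2.4e-11), 1))  # 10.0 K
```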
Resumo:
This mixed-method study aimed to redress the gap in the literature on academic service-learning partnerships, especially in Eastern settings. It utilized Enos and Morton's (2003) theoretical framework to explore these partnerships at the American University in Cairo (AUC). Seventy-nine community partners, administrators, faculty members, and students from a diverse range of age, citizenship, racial, educational, and professional backgrounds participated in the study. Qualitative interviews were conducted with members of these four groups, and a survey with both close-ended and open-ended questions administered to students yielded 61 responses. Qualitative analyses revealed that the primary motivators for partners' engagement in service-learning partnerships included contributing to the community, enhancing students' learning and growth, and achieving the civic mission of the University. These partnerships were characterized by short-term relationships, with partners aspiring to progress toward long-term commitments. The challenges to these partnerships included issues pertaining to the institution, partnering organizations, culture, politics, pedagogy, students, and faculty members. Key strategies for improving these partnerships included institutionalizing service-learning in the University and cultivating an institutional culture supportive of community engagement. Quantitative analyses showed statistically significant relationships between students' scores on the Community Awareness and Interpersonal Effectiveness scales and their overall participation in community service activities inside and outside the classroom, as well as a statistically significant difference in their scores on the Community Awareness scale by the department offering the service-learning course.
The study's outcomes underscore the role of the local culture in shaping service-learning partnerships, as well as the role of both curricular and extracurricular activities in boosting students' awareness of their community and interpersonal effectiveness. Cultivating a culture of community engagement and building support mechanisms for engaged scholarship are among the critical steps required by public policy-makers in Egypt to promote service-learning in Egyptian higher education. Institutionalizing service-learning partnerships at AUC and enhancing the visibility of these partnerships on campus and in the community are essential to the future growth of these collaborations. Future studies should explore factors affecting community partners' satisfaction with these partnerships, top-down and bottom-up support to service-learning, the value of reflection to faculty members, and the influence of students' economic backgrounds on their involvement in service-learning partnerships.
Resumo:
Dual-phase-lagging (DPL) models constitute a family of non-Fourier models of heat conduction that allow for the presence of time lags in the heat flux and the temperature gradient. These lags may need to be considered when modeling microscale heat transfer, and thus DPL models have found application in recent years in a wide range of theoretical and technical heat transfer problems. Consequently, analytical solutions and methods for computing numerical approximations have been proposed for particular DPL models in different settings. In this work, a compact difference scheme for second-order DPL models is developed, providing higher-order accuracy than a previously proposed method. The scheme is shown to be unconditionally stable and convergent, and its accuracy is illustrated with numerical examples.
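For orientation, the second-order DPL equation being discretized is τ_q·T_tt + T_t = α(T_xx + τ_T·T_xxt). The sketch below advances it with plain central differences; it is a simple explicit illustration, not the compact, unconditionally stable scheme developed in the paper:

```python
# Explicit central-difference step for tau_q*T_tt + T_t = alpha*(T_xx + tau_t*T_xxt).
# Dirichlet boundaries are held fixed. Illustrative only (see lead-in).

def laplacian(T, dx):
    """Second central difference in x; zeros at the fixed boundaries."""
    L = [0.0] * len(T)
    for i in range(1, len(T) - 1):
        L[i] = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
    return L

def dpl_step(Tn, Tm, dt, dx, alpha, tau_q, tau_t):
    """Given T^n (Tn) and T^{n-1} (Tm), return T^{n+1}."""
    Ln, Lm = laplacian(Tn, dx), laplacian(Tm, dx)
    a = tau_q / dt ** 2 + 1.0 / (2 * dt)     # coefficient of T^{n+1}
    T_next = list(Tn)
    for i in range(1, len(Tn) - 1):
        rhs = (alpha * (Ln[i] + tau_t * (Ln[i] - Lm[i]) / dt)
               + tau_q * (2 * Tn[i] - Tm[i]) / dt ** 2
               + Tm[i] / (2 * dt))
        T_next[i] = rhs / a
    return T_next

# Hypothetical setup: unit hot spot in the middle, cold fixed ends
T0 = [0.0] * 21
T0[10] = 1.0
T1 = dpl_step(T0, T0, dt=1e-4, dx=0.05, alpha=1.0, tau_q=1e-3, tau_t=5e-4)
print(T1[10] < T0[10])  # the peak decays: True
```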
Abstract:
Bedforms both reflect and influence shallow-water hydrodynamics and sediment dynamics. A correct characterization of their spatial distribution and dimensions is required for the understanding, assessment, and prediction of numerous coastal processes. A method to parameterize geometrical characteristics using two-dimensional (2D) spectral analysis is presented and tested on seabed elevation data from the Knudedyb tidal inlet in the Danish Wadden Sea, where large compound bedforms are found. The bathymetric data were divided into 20 x 20 m areas, on each of which a 2D spectral analysis was applied. The most energetic peak of the 2D spectrum was identified, and its energy, frequency, and direction were calculated. A power law was fitted to the average of slices taken through the 2D spectrum, and its slope and y-intercept were calculated. Using these results, the test area was classified into four distinct morphological regions. The most energetic peak and the slope and intercept of the power law showed high values above the crests of the primary bedforms and scour holes, low values in areas without bedforms, and intermediate values in areas with secondary bedforms. The secondary bedform dimensions and orientations were calculated. An area of 700 x 700 m was used to determine the characteristics of the primary bedforms; however, these were less distinctively characterized than the secondary bedforms because of relatively large variations in their orientations and wavelengths. The method is thus appropriate for morphological classification of the seabed and for bedform characterization, and is most efficient in areas characterized by bedforms with regular dimensions and directions.
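The per-tile workflow described above (2D spectrum, most energetic peak, power-law fit) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, binning choices, and green-field defaults are assumptions.

```python
import numpy as np

def tile_spectrum_stats(z, dx):
    """Illustrative 2D spectral statistics for one bathymetry tile.
    z  : 2D seabed elevation array (m), regular grid
    dx : grid spacing (m)
    Returns (peak wavelength in m, peak direction in degrees,
    power-law slope, power-law log-intercept)."""
    z = z - z.mean()                              # remove mean before the FFT
    P = np.abs(np.fft.fftshift(np.fft.fft2(z))) ** 2
    ny, nx = z.shape
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    KX, KY = np.meshgrid(kx, ky)
    K = np.hypot(KX, KY)
    # most energetic peak, ignoring the DC component
    P_no_dc = P.copy()
    P_no_dc[K == 0] = 0.0
    iy, ix = np.unravel_index(np.argmax(P_no_dc), P.shape)
    wavelength = 1.0 / K[iy, ix]
    direction = np.degrees(np.arctan2(KY[iy, ix], KX[iy, ix]))
    # radially averaged spectrum and a log-log power-law fit
    kbins = np.linspace(K[K > 0].min(), K.max(), 16)
    kc, pc = [], []
    for lo, hi in zip(kbins[:-1], kbins[1:]):
        m = (K >= lo) & (K < hi)
        if m.any():
            kc.append(0.5 * (lo + hi))
            pc.append(P[m].mean())
    slope, intercept = np.polyfit(np.log(kc), np.log(pc), 1)
    return wavelength, direction, slope, intercept
```

On a synthetic tile containing a sinusoidal bedform of 5 m wavelength, the returned peak wavelength recovers that value, which is the kind of check one would run before applying the statistics to real bathymetry.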
Abstract:
Two main alternating facies were observed at Ocean Drilling Program (ODP) Site 1165, drilled in 3357 m water depth into the Wild Drift (Cooperation Sea, Antarctica): a dark gray, laminated, terrigenous facies (interpreted as muddy contourites) and a greenish, homogeneous, biogenic and coarse-fraction-bearing facies (interpreted as hemipelagic deposits with ice-rafted debris [IRD]). These two cyclically alternating facies reflect orbitally driven changes (Milankovitch periodicities) recorded in spectral reflectance, bulk density, and magnetic susceptibility data, as well as in opal content. Superimposed on these short-term variations, significant uphole changes in average sedimentation rates, total clay content, IRD amount, and mineral composition were interpreted to represent the long-term lower to upper Miocene transition from a temperate climate to a cold-climate glaciation. The analysis of the short-term variations (interpreted to reflect ice sheet expansions controlled by 41-k.y. insolation changes) requires a closely spaced sampling record such as that provided by the archive multisensor track. Among these datasets, cycles are best described by the spectral reflectance data and, in particular, by a parameter calculated as the ratio of the reflectivity in the green color band to the average reflectivity (gray). In this data report, a numerical evaluation of spectral reflectance data was performed and substantiated by correlation with core photos to provide an objective description of the color variations within Site 1165 sediments. The resulting color description provides a reference for categorizing the available samples in terms of facies and, hence, a framework for further analyses. Moreover, a link between visually described features and numerical series suitable for spectral analyses is provided.
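The green/gray ratio parameter described above can be computed from multiband reflectance measurements roughly as sketched below. The array layout, the 495-570 nm green range, and the function name are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def green_gray_ratio(reflectance, bands):
    """Illustrative green/gray colour parameter.
    reflectance : (n_depths, n_bands) array of spectral reflectance (%)
    bands       : band-centre wavelengths (nm)
    Returns, for each depth, the mean reflectivity in the assumed green
    range (495-570 nm) divided by the mean over all bands ("gray")."""
    bands = np.asarray(bands, float)
    green = (bands >= 495.0) & (bands <= 570.0)   # assumed green band range
    gray = reflectance.mean(axis=1)               # average reflectivity
    return reflectance[:, green].mean(axis=1) / gray
```

Values above 1 flag depths where the sediment is greener than its average brightness, which is the behaviour the parameter exploits to separate the greenish hemipelagic facies from the gray contourites.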
Abstract:
PURPOSE To evaluate the effect of image contrast and color setting on the assessment of retinal structures and morphology in spectral-domain optical coherence tomography. METHODS Two hundred and forty-eight Spectralis spectral-domain optical coherence tomography B-scans of 62 patients were analyzed by 4 readers. B-scans were extracted in 4 settings: W + N = white background with black image at normal contrast 9; W + H = white background with black image at maximum contrast 16; B + N = black background with white image at normal contrast 12; B + H = black background with white image at maximum contrast 16. Readers analyzed the images to identify morphologic features. Interreader correlation was calculated. Differences between Fleiss kappa coefficients were examined using a bootstrap method. Any setting with a significantly higher correlation coefficient was deemed superior for evaluating specific features. RESULTS Correlation coefficients differed among settings. No single setting was superior for all spectral-domain optical coherence tomography parameters (P = 0.3773). Some variables showed no differences among settings. Hard exudates and subretinal fluid were best seen with B + H (κ = 0.46, P = 0.0237 and κ = 0.78, P = 0.002). Microaneurysms were best seen with W + N (κ = 0.56, P = 0.025). Vitreomacular interface, enhanced transmission signal, and epiretinal membrane were best identified using all color/contrast settings together (κ = 0.44, P = 0.042; κ = 0.57, P = 0.01; and κ = 0.62, P ≤ 0.0001). CONCLUSION Contrast and background affect the evaluation of retinal structures on spectral-domain optical coherence tomography images. No single setting was superior for all features, though certain changes were best seen with specific settings.
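The agreement statistic used above, Fleiss' kappa, can be computed directly from an item-by-category count matrix. The sketch below covers the coefficient itself, not the bootstrap comparison of settings, and assumes every item is rated by the same number of readers.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among multiple readers.
    counts : (n_items, n_categories) array; counts[i, j] is the number of
    readers assigning item i to category j. Assumes each row sums to the
    same number of readers n."""
    counts = np.asarray(counts, float)
    n = counts.sum(axis=1)[0]                     # readers per item
    p_j = counts.sum(axis=0) / counts.sum()       # overall category proportions
    # per-item observed agreement: pairs of readers who agree
    P_i = (counts * (counts - 1.0)).sum(axis=1) / (n * (n - 1.0))
    P_bar = P_i.mean()                            # mean observed agreement
    P_e = (p_j ** 2).sum()                        # chance agreement
    return (P_bar - P_e) / (1.0 - P_e)
```

Perfect agreement across readers yields kappa = 1, and agreement no better than chance drives it toward 0 or below; `statsmodels.stats.inter_rater.fleiss_kappa` provides an equivalent off-the-shelf computation.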