271 results for MILLIMETER
Abstract:
The terahertz region of the electromagnetic spectrum (100 GHz-10 THz) presents a wide range of applications in fields as diverse as radio astronomy, molecular spectroscopy, medicine, security and radar, among others.
The main obstacles to the development of these applications are the high production cost of systems working at these frequencies, their high maintenance, large volume and low reliability. Among the different THz technologies, Schottky technology plays an important role due to its maturity and the inherent simplicity of these devices. Besides, Schottky diodes can operate at both room and cryogenic temperatures, with high efficiency in multipliers and moderate noise temperature in mixers. This PhD thesis is mainly concerned with the analysis of the physical processes responsible for the electrical response and noise characteristics of Schottky diodes, as well as the analysis and design of frequency multipliers and mixers at millimeter and submillimeter wavelengths. The first part of the thesis deals with the analysis of the physical phenomena limiting the electrical performance of GaAs and GaN Schottky diodes and their noise performance. To carry out this analysis, a Monte Carlo model of the diode has been used as a reference due to its high accuracy and reliability at millimeter and submillimeter wavelengths. Besides, the Monte Carlo model provides a direct description of the noise spectra of the devices without the need for any additional analytical or empirical model. Physical phenomena such as velocity saturation, carrier inertia, the dependence of the electron mobility on the epilayer length, plasma resonance, and nonlocal effects in time and space have been analysed. A complete analysis of the current noise spectra of GaAs and GaN Schottky diodes operating under static and time-varying conditions is also presented in this part of the thesis. The obtained results provide a better understanding of the electrical and noise responses of Schottky diodes under high-frequency and/or high-electric-field conditions.
These results have also helped to determine the limitations of the numerical and analytical models used in the analysis of the electrical and noise responses of these devices. The second part of the thesis is devoted to the analysis of frequency multipliers and mixers by means of an in-house circuit simulation tool based on the harmonic balance technique. Different lumped-equivalent-circuit, drift-diffusion and Monte Carlo models have been considered in this analysis. The Monte Carlo model coupled to the harmonic balance technique has been used as a reference to evaluate the limitations and range of validity of lumped-equivalent-circuit and drift-diffusion models for the design of frequency multipliers and mixers. A remarkable feature of this reference simulation tool is that it enables the design of Schottky circuits from both electrical and noise considerations. The simulation results presented in this part of the thesis for both multipliers and mixers have been compared with measured results available in the literature. In addition, the Monte Carlo simulation tool allows the analysis and design of circuits above 1 THz.
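The equivalent-circuit models evaluated in the second part of the thesis all start from the exponential thermionic-emission I-V law of the Schottky junction, whose strong nonlinearity is what frequency multipliers and mixers exploit. A minimal sketch follows; the parameter values are illustrative assumptions, not those of the devices studied in the thesis.

```python
import math

def schottky_current(v, i_s=1e-12, n=1.2, t=300.0):
    """Ideal thermionic-emission I-V of a Schottky diode.

    v   : junction voltage in volts
    i_s : saturation current in amperes (illustrative value)
    n   : ideality factor (near 1 for a good Schottky contact)
    t   : temperature in kelvin
    """
    k_b = 1.380649e-23   # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    v_t = k_b * t / q    # thermal voltage, ~25.9 mV at 300 K
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

# The exponential nonlinearity distorts a sinusoidal drive, creating the
# harmonics and mixing products used in multiplier and mixer circuits.
print(schottky_current(0.8))   # strong forward conduction
print(schottky_current(-1.0))  # reverse leakage, approximately -i_s
```

At cryogenic temperatures the thermal voltage shrinks, sharpening the exponential knee, which is one reason cooled Schottky mixers reach lower noise temperatures.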
Abstract:
Silicon micromachined waveguide components operating in the WM-250 (WR-1) waveguide band (0.75 to 1.1 THz) are measured. Through lines are used to characterize the waveguide loss with and without an oxide etch to reduce the surface roughness. A sidewall roughness of 100 nm is achieved, enabling a waveguide loss of 0.2 dB/mm. A 1 THz band-pass filter is also measured to characterize the precision of the fabrication process. A 1.8% shift in frequency is observed, which can be accounted for by the 0.5° etch angle and the 2 µm expansion of the features caused by the oxide etch. The measured filter has a 13% 3-dB bandwidth and 2.5 dB of insertion loss through the passband.
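The WM-250 (WR-1) band quoted above sits well above the dominant-mode cutoff of the guide, and the quoted frequency shift shows how sensitive such structures are to micrometer-scale dimension errors. As a quick sanity check, assuming the standard WR-1 broad-wall width of 254 µm (a value from the waveguide standard, not stated in this abstract):

```python
# TE10 cutoff of a rectangular waveguide: f_c = c / (2a).
C = 299_792_458.0  # speed of light, m/s

def te10_cutoff_hz(a_m):
    """Cutoff frequency of the dominant TE10 mode for broad-wall width a."""
    return C / (2.0 * a_m)

fc = te10_cutoff_hz(254e-6)
print(fc / 1e9)  # ~590 GHz; the 0.75-1.1 THz band spans ~1.27-1.86 f_c
```

Since f_c scales as 1/a, a 2 µm change in a 254 µm broad wall alone shifts the cutoff by roughly 0.8%, the same order as the 1.8% filter shift reported above once the etch angle is included.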
Abstract:
The spread of mobile devices, with the associated growth in data traffic, is generating an ever-increasing demand for spectrum for the deployment of wireless communication systems, along with growing congestion in the lower bands of the spectrum (up to 3 GHz). Among the possible solutions to this problem, it has been proposed that the next generation of cellular systems, 5G, make use of the millimeter band, between 30 GHz and 300 GHz, where contiguous bandwidths are available in sizes hardly found in the bands used by the current generation. This degree project studies the feasibility of cellular deployments in that band, drawing on published empirical and theoretical studies and on the ITU recommendations in which the propagation characteristics of these bands are examined. Next, the available documents from the different projects and groups devoted to defining future communication standards and the evolution of the current ones, such as METIS 2020, promoted by the European Commission, and IMT-2020, promoted by the ITU, have been reviewed. Besides the documentation work, a series of simulations has been carried out. First, MATLAB was used to study the behavior and attenuation of electromagnetic waves at the frequencies of interest in different locations and climates, both ordinary and extreme, considering the effects of atmospheric gases and hydrometeors.
Industry-standard radio planning software has been used to study coverage in different environments: urban locations such as Madrid and Barcelona; suburban locations such as Tres Cantos (Madrid) and O Barco de Valdeorras (Orense); and rural locations such as Valdefuentes (Cáceres) and Quiruelas de Vidriales (Zamora), all in Spain. Finally, all the results, both from the literature and from our own simulations, have been collected and briefly discussed, comparing them and analyzing their impact on possible future deployments of 5G networks.
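A first-order sense of why millimeter-band coverage is harder than in the sub-3 GHz bands comes from the Friis free-space path-loss formula, which the coverage studies above build upon. A minimal sketch, with frequencies and distance chosen purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / C)

# Compare a legacy cellular band with two candidate millimeter-wave bands
# over a 100 m link: each tenfold increase in frequency costs 20 dB.
for f in (2e9, 28e9, 60e9):
    print(f / 1e9, "GHz:", round(fspl_db(100.0, f), 1), "dB")
```

On top of this, atmospheric absorption matters at these frequencies; near the 60 GHz oxygen resonance it is on the order of 15 dB/km at sea level, which is why the MATLAB attenuation studies described above are essential for link budgeting.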
Abstract:
A paradigm shift in the conception of digital terrain models is taking place in geodesy, moving from designing models with the fewest possible points to models of hundreds of thousands or millions of points. This change has come about through the introduction of new technologies such as laser scanning, radar interferometry and image processing. The rapid acceptance of these technologies is due mainly to the great speed of data acquisition, to their accessibility as reflectorless techniques, and to the high level of detail of the resulting models. Classic survey methods are based on discrete measurements of points that, taken as a whole, form a model; the precision of the model is then derived from the precision with which the individual points are measured.
Terrestrial laser scanner (TLS) technology takes a different approach to generating the model of the observed object. The point cloud, the product of a TLS scan, is treated as a whole by means of area-based analysis, so the final model is not an aggregation of points but the best-fitting surface adapted to the point cloud. When the precision of capturing singular points with tachymetric methods is compared with that of TLS equipment, the inferiority of the latter is clear; it is in the treatment of point clouds with area-based analysis methods, however, that acceptable precisions have been obtained, making it possible to fully consider this technology for monitoring deformations and movements of structures. Notable TLS applications include recording of cultural heritage, recording of construction stages of industrial plants and structures, accident reporting, and monitoring of ground movements and structural deformations. Compared with classical dam monitoring, based on tracking a set of discrete points, having a continuous model of the downstream face opens the possibility of introducing surface deformation analysis methods and behavior models that improve the understanding and forecasting of dam movements. However, TLS technology must be regarded as a method complementary to the existing ones. Pendulums, and more recently the differential global positioning system (DGPS), give continuous information on the movements of certain points of the dam, whereas TLS makes it possible to follow the seasonal evolution of the whole face and to detect potentially problematic zones. The characteristics of TLS technology and the factors affecting the final precision of the scans are reviewed.
The need to use equipment based on direct time-of-flight measurement, also called pulsed, for scanning distances between 100 m and 300 m is established. The application of TLS to the modelling of structures and vertical walls is studied. Factors that influence the final precision, such as point cloud registration, target type, and the combined effect of scanning distance and angle of incidence, are analyzed. Finally, the movements given by the direct pendulums of a dam are compared with those obtained from the analysis of point clouds from several scanning campaigns of the same dam. A new approach is presented for obtaining a complete map-type plot of the precision of pulsed TLS equipment at midrange distances. Tests were carried out under field-like conditions, similar to dam monitoring and other civil engineering work, covering the full range of scanning distances and angles of the instrument. Taking advantage of graphic semiology techniques, a map-type plot of distance versus angle of incidence was designed and evaluated, combining isolines with points sized and gray-scaled in proportion to the precision values they represent. Precisions under different field conditions were compared with specifications. For this purpose, point clouds were evaluated under two approaches: the standard "plane of best fit" and the proposed "simulated deformation" method, which showed improved performance. These results lead to a discussion of, and recommendations for, optimal TLS operation in civil engineering work. Finally, the seasonal movements of an arch-gravity dam registered by the direct pendulums are compared with those obtained from the TLS scans. The results show differences of millimeters, the best being of the order of one millimeter. The methodology used is explained, and considerations are given regarding point cloud density and the size of the triangular meshes.
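The "plane of best fit" evaluation mentioned above reduces, in its simplest form, to fitting a least-squares plane through the point cloud and using the RMS residual as the precision figure. A pure-Python sketch follows, under the assumption that the scanned wall has been rotated so the face-normal direction is the z coordinate; the thesis's actual processing chain is not described here.

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for k in range(col, 4):
                m[r][k] -= f * m[col][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][k] * x[k] for k in range(r + 1, 3))) / m[r][r]
    return x

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through a point cloud."""
    sx = sy = sz = sxx = syy = sxy = sxz = syz = 0.0
    for x, y, z in points:
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        sxz += x * z; syz += y * z
    # Normal equations A * [a, b, c]^T = rhs
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(len(points))]]
    a, b, c = solve3(A, [sxz, syz, sz])
    return a, b, c

def rms_residual(points, a, b, c):
    """RMS offset of the points from the fitted plane along z."""
    return math.sqrt(sum((z - (a * x + b * y + c)) ** 2
                         for x, y, z in points) / len(points))

# Example: a noiseless planar cloud recovers its coefficients exactly.
pts = [(x, y, 0.01 * x - 0.02 * y + 5.0) for x in range(5) for y in range(5)]
a, b, c = fit_plane(pts)
```

On a real scan the residual mixes instrument noise with surface texture, which is why the abstract's "simulated deformation" check, comparing clouds between epochs, can outperform a single plane fit.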
Abstract:
Our group recently demonstrated that autoimmune T cells directed against central nervous system-associated myelin antigens protect neurons from secondary degeneration. We further showed that the synthetic peptide copolymer 1 (Cop-1), known to suppress experimental autoimmune encephalomyelitis, can be safely substituted for the natural myelin antigen in both passive and active immunization for neuroprotection of the injured optic nerve. Here we attempted to determine whether similar immunizations protect against retinal ganglion cell loss resulting from a direct biochemical insult caused, for example, by glutamate (a major mediator of degeneration in acute and chronic optic nerve insults) and in a rat model of ocular hypertension. Passive immunization with T cells reactive to myelin basic protein or active immunization with myelin oligodendrocyte glycoprotein-derived peptide, although neuroprotective after optic nerve injury, was ineffective against glutamate toxicity in mice and rats. In contrast, the number of surviving retinal ganglion cells per square millimeter in glutamate-injected retinas was significantly larger in mice immunized 10 days previously with Cop-1 emulsified in complete Freund's adjuvant than in mice injected with PBS in the same adjuvant (2,133 ± 270 and 1,329 ± 121, respectively, mean ± SEM; P < 0.02). A similar pattern was observed when mice were immunized on the day of glutamate injection (1,777 ± 101 compared with 1,414 ± 36; P < 0.05), but not when they were immunized 48 h later. These findings suggest that protection from glutamate toxicity requires reinforcement of the immune system by antigens that are different from those associated with myelin. The use of Cop-1 apparently circumvents this antigen specificity barrier. In the rat ocular hypertension model, which simulates glaucoma, immunization with Cop-1 significantly reduced the retinal ganglion cell loss from 27.8% ± 6.8% to 4.3% ± 1.6%, without affecting the intraocular pressure.
This study may point the way to a therapy for glaucoma, a neurodegenerative disease of the optic nerve often associated with increased intraocular pressure, as well as for acute and chronic degenerative disorders in which glutamate is a prominent participant.
Abstract:
I review models for the "inner jet" in blazars, the section that connects the central engine with the radio jet. I discuss how the structure and physics of the inner jet can be explored using millimeter-wave VLBI (very-long-baseline radio interferometry) as well as multiwaveband observations of blazars. Flares at radio to gamma-ray frequencies should exhibit time delays at different wavebands that can test models for both the high-energy emission mechanisms and the nature of the inner jet in blazars.
Abstract:
We report on the long-term X-ray monitoring of the outburst decay of the low magnetic field magnetar SGR 0418+5729 using all the available X-ray data obtained with RXTE, Swift, Chandra, and XMM-Newton observations from the discovery of the source in 2009 June up to 2012 August. The timing analysis allowed us to obtain the first measurement of the period derivative of SGR 0418+5729: Ṗ = 4(1) × 10⁻¹⁵ s s⁻¹, significant at a ∼3.5σ confidence level. This leads to a surface dipolar magnetic field of B_dip ≈ 6 × 10¹² G. This measurement confirms SGR 0418+5729 as the lowest magnetic field magnetar. Following the flux and spectral evolution from the beginning of the outburst up to ∼1200 days, we observe a gradual cooling of the tiny hot spot responsible for the X-ray emission, from a temperature of ∼0.9 to ∼0.3 keV. Simultaneously, the X-ray flux decreased by about three orders of magnitude: from about 1.4 × 10⁻¹¹ to 1.2 × 10⁻¹⁴ erg s⁻¹ cm⁻². Deep radio, millimeter, optical, and gamma-ray observations did not detect the source counterpart, implying stringent limits on its multi-band emission, as well as constraints on the presence of a fossil disk. By modeling the magneto-thermal secular evolution of SGR 0418+5729, we infer a realistic age of ∼550 kyr, and a dipolar magnetic field at birth of ∼10¹⁴ G. The outburst characteristics suggest the presence of a thin twisted bundle with a small heated spot at its base. The bundle untwisted in the first few months following the outburst, while the hot spot decreased in temperature and size. We estimate the outburst rate of low magnetic field magnetars to be about one per year per galaxy, and we briefly discuss the consequences of such a result in several other astrophysical contexts.
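The quoted dipolar field follows from the standard vacuum-dipole spin-down estimate, B_dip ≈ 3.2 × 10¹⁹ √(P Ṗ) G. A worked check is below; the Ṗ value comes from the abstract, while the spin period of ≈9.1 s for SGR 0418+5729 is taken from the literature, since the abstract does not quote it.

```python
import math

def dipole_field_gauss(p_s, pdot):
    """Surface dipolar field from spin-down: B ~ 3.2e19 * sqrt(P * Pdot) G.

    Standard vacuum-dipole estimate used for pulsars and magnetars;
    p_s in seconds, pdot dimensionless (s/s).
    """
    return 3.2e19 * math.sqrt(p_s * pdot)

# P ~ 9.1 s (literature value, an assumption here), Pdot from the abstract.
print(dipole_field_gauss(9.1, 4e-15))  # ~6e12 G, matching the quoted B_dip
```

The same formula with the inferred birth field of ∼10¹⁴ G shows why the source must have spun down substantially over its ∼550 kyr lifetime.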
Abstract:
Continuous sediment color records with a resolution of one measurement per millimeter were generated for Site 1098 (Palmer Deep, Antarctic Peninsula) from digital images of the core surfaces to test if the laminated intervals at this site will allow for analysis of high-frequency climate variability in the Circum-Antarctic. Long-term variation in color values correlates with gamma-ray attenuation bulk density. Darker colors are found in laminated intervals with lower bulk density, high biogenic silica, and high total organic carbon content. Darker color values result from the addition of dark laminae to background sediments that show little variation in color. The thicknesses of dark and light laminae were measured in the top 25 meters composite depth to determine the temporal resolution of the laminae. The alternation between dark, biogenic-rich laminae and background sediment essentially represents an annual cycle, but the sediment is not consistently varved. The modal thickness of light laminae is close to the long-term average annual accumulation rate, and results indicate that approximately half of the dark/light couplets in distinctly laminated intervals represent a single year. Missing biogenic laminae are interpreted to represent reduced primary productivity during cold years with delayed breakup of the sea-ice cover.
Abstract:
We analyzed size-specific dry mass, sinking velocity, and apparent diffusivity in field-sampled marine snow, laboratory-made aggregates formed by diatoms or coccolithophorids, and small and large zooplankton fecal pellets with naturally varying content of ballast materials. Apparent diffusivity was measured directly inside aggregates and large (millimeter-long) fecal pellets using microsensors. Large fecal pellets, collected in the coastal upwelling off Cape Blanc, Mauritania, showed the highest volume-specific dry mass and sinking velocities because of a high content of opal, carbonate, and lithogenic material (mostly Saharan dust), which together comprised ~80% of the dry mass. The average solid matter density within these large fecal pellets was 1.7 g cm⁻³, whereas their excess density was 0.25 ± 0.07 g cm⁻³. Volume-specific dry mass of all sources of aggregates and fecal pellets ranged from 3.8 to 960 µg mm⁻³, and average sinking velocities varied between 51 and 732 m d⁻¹. Porosity was >0.43 and >0.96 within fecal pellets and phytoplankton-derived aggregates, respectively. Averaged values of apparent diffusivity of gases within large fecal pellets and aggregates were 0.74 and 0.95 times that of the free diffusion coefficient in sea water, respectively. Ballast increases sinking velocity and, thus, also potential O2 fluxes to sedimenting aggregates and fecal pellets. Hence, ballast minerals limit the residence time of aggregates in the water column by increasing sinking velocity, but apparent diffusivity and potential oxygen supply within aggregates are high, whereby a large fraction of labile organic carbon can be respired during sedimentation.
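The claim that oxygen supply to sinking particles stays high can be illustrated with the classic diffusion-limited uptake rate for a sphere, Q = 4πrD(C_bulk − C_surf). The sketch below uses illustrative values: the pellet radius, O2 diffusivity, and seawater O2 concentration are assumptions, not numbers from the study.

```python
import math

def diffusive_supply(radius_m, d_eff, c_bulk, c_surf=0.0):
    """Diffusion-limited solute supply to a sphere: Q = 4*pi*r*D*(C_bulk - C_surf).

    Classic steady-state result for a perfectly absorbing sphere in still
    water; sinking adds an advective enhancement not modeled here.
    """
    return 4.0 * math.pi * radius_m * d_eff * (c_bulk - c_surf)

# Illustrative inputs: 0.5 mm pellet radius, O2 diffusivity ~2e-9 m^2/s,
# air-saturated seawater ~0.25 mol O2 per m^3 (all assumed values).
q = diffusive_supply(0.5e-3, 2e-9, 0.25)
print(q)  # mol O2 per second reaching the pellet surface
```

Since the measured apparent diffusivities were 0.74-0.95 of the free value, internal diffusion reduces such a supply only modestly, consistent with the conclusion that labile carbon can be respired during sedimentation.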
Abstract:
New dredge-disposal techniques may serve the dual role of aiding sand by-passing across coastal inlets, and beach nourishment, provided the dredged sediments placed seaward of the surf zone move shoreward into that zone. During the summer of 1976, 26,750 cubic meters of relatively coarse sediment was dredged from New River Inlet, North Carolina, moved down coast by a split-hull barge, and placed in a 215-meter coastal reach between the 2- and 4-meter depth contours. Bathymetric changes on the disposal piles and in the adjacent beach and nearshore area were studied for a 13-week period (August to November 1976) to determine the modification of the surrounding beach and nearshore profile, and the net transport direction of the disposal sediment. The sediment piles initially created a local shoal zone with minimum depths of 0.6 meter. Disposal sediment was coarser (Mn = 0.49 millimeter) than the native sand at the disposal site (Mn = 0.14 millimeter) and coarser than the composite mean grain size of the entire profile (Mn = 0.21 millimeter). Shoaling and breaking waves caused rapid erosion of the pile tops and a gradual coalescing of the piles to form a disposal bar located seaward (≈ 90 meters) of a naturally occurring surf zone bar. As the disposal bar relief was reduced, the disposal bar-associated breaker zone was restricted to low tide times or periods of high wave conditions.
Abstract:
The development of sand ripples in an oscillatory-flow water tunnel was observed in 104 laboratory experiments approximating conditions at the seabed under steady progressive surface waves. The period, T, and amplitude, a, of the water motion were varied over wide ranges. Three quartz sands were used, with mean grain diameters D = 0.55, 0.21, and 0.18 millimeter. In 24 experiments, with the bed initially leveled, T was reduced until ripples appeared, and their development to final equilibrium form was observed without further change in T. The remaining 80 experiments investigated the response of previously established bed forms to changes in T or a or both. The ripple length, λ, and height, η, were measured from photos, except when bed forms were three dimensional.
Abstract:
The world's largest fossil oyster reef, formed by the giant oyster Crassostrea gryphoides and located in Stetten (north of Vienna, Austria), has been studied by Harzhauser et al. (2015, 2016) and Djuricic et al. (2016). Digital documentation of this unique geological site is provided by terrestrial laser scanning (TLS) at the millimeter scale. Obtaining meaningful results is not merely a matter of data acquisition with a suitable device; it requires proper planning, data management, and postprocessing. Terrestrial laser scanning technology has a high potential for providing precise 3D mapping that serves as the basis for automatic object detection in different scenarios; however, it faces challenges in the presence of large amounts of data and the irregular geometry of an oyster reef. We provide a detailed description of the techniques and strategy used for data collection and processing in Djuricic et al. (2016). Laser scanning made it possible to measure surface points on an estimated 46,840 shells, oyster specimens up to 60 cm long, whose surfaces are modeled with a high accuracy of 1 mm. In addition to the laser scanning measurements, more than 300 photographs were captured, and an orthophoto mosaic was generated with a ground sampling distance (GSD) of 0.5 mm. This high-resolution 3D information and the photographic texture serve as the basis for ongoing and future geological and paleontological analyses. Moreover, they provide unprecedented documentation for conservation issues at a unique natural heritage site.
Abstract:
In this study, we investigated the size, submicrometer-scale structure, and aggregation state of ZnS formed by sulfate-reducing bacteria (SRB) in a SRB-dominated biofilm growing on degraded wood in cold (T ≈ 8 °C), circumneutral-pH (7.2-8.5) waters draining from an abandoned, carbonate-hosted Pb-Zn mine. High-resolution transmission electron microscope (HRTEM) data reveal that the earliest biologically induced precipitates are crystalline ZnS nanoparticles 1-5 nm in diameter. Although most nanocrystals have the sphalerite structure, nanocrystals of wurtzite are also present, consistent with a predicted size dependence for ZnS phase stability. Nearly all the nanocrystals are concentrated into 1-5 μm diameter spheroidal aggregates that display concentric banding patterns indicative of episodic precipitation and flocculation. Abundant disordered stacking sequences and faceted, porous crystal-aggregate morphologies are consistent with aggregation-driven growth of ZnS nanocrystals prior to and/or during spheroid formation. Spheroids are typically coated by organic polymers or associated with microbial cellular surfaces, and are concentrated roughly into layers within the biofilm. Size, shape, structure, degree of crystallinity, and polymer associations will all impact ZnS solubility, aggregation and coarsening behavior, transport in groundwater, and potential for deposition by sedimentation. Results presented here reveal nanometer- to micrometer-scale attributes of biologically induced ZnS formation likely to be relevant to sequestration via bacterial sulfate reduction (BSR) of other potential contaminant metal(loid)s, such as Pb²⁺, Cd²⁺, As³⁺, and Hg²⁺, into metal sulfides. The results highlight the importance of basic mineralogical information for accurate prediction and monitoring of long-term contaminant metal mobility and bioavailability in natural and constructed bioremediation systems.
Our observations also provoke interesting questions regarding the role of size-dependent phase stability in biomineralization and provide new insights into the origin of submicrometer- to millimeter-scale petrographic features observed in low-temperature sedimentary sulfide ore deposits.
Abstract:
The type 1 polyaxonal (PA1) cell is a distinct type of axon-bearing amacrine cell whose soma commonly occupies an interstitial position in the inner plexiform layer; the proximal branches of the sparse dendritic tree produce 1-4 axon-like processes, which form an extensive axonal arbor that is concentric with the smaller dendritic tree (Dacey, 1989; Famiglietti, 1992a,b). In this study, intracellular injections of Neurobiotin have revealed the complete dendritic and axonal morphology of the PA1 cells in the rabbit retina, as well as labeling the local array of PA1 cells through homologous tracer coupling. The dendritic-field area of the PA1 cells increased from a minimum of 0.15 mm² (0.44-mm equivalent diameter) on the visual streak to a maximum of 0.67 mm² (0.92-mm diameter) in the far periphery; the axonal-field area also showed a 3-fold variation across the retina, ranging from 3.1 mm² (2.0-mm diameter) to 10.2 mm² (3.6-mm diameter). The increase in dendritic- and axonal-field size was accompanied by a reduction in cell density, from 60 cells/mm² in the visual streak to 20 cells/mm² in the far periphery, so that the PA1 cells showed a 12 times overlap of their dendritic fields across the retina and a 200-300 times overlap of their axonal fields. Consequently, the axonal plexus was much denser than the dendritic plexus, with each square millimeter of retina containing ~100 mm of dendrites and ~1000 mm of axonal processes. The strong homologous tracer coupling revealed that ~45% of the PA1 somata were located in the inner nuclear layer, ~50% in the inner plexiform layer, and ~5% in the ganglion cell layer. In addition, the Neurobiotin-injected PA1 cells sometimes showed clear heterologous tracer coupling to a regular array of small ganglion cells, which were present at half the density of the PA1 cells.
The PA1 cells were also shown to contain elevated levels of gamma-aminobutyric acid (GABA), like other axon-bearing amacrine cells.
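The "equivalent diameter" quoted alongside each field area above follows the usual convention of the diameter of a circle having the same area as the measured field. A minimal sketch checking the abstract's quoted pairs against that convention (the function name is ours, for illustration only):

```python
import math

def equivalent_diameter(area_mm2):
    """Diameter (mm) of a circle whose area equals area_mm2 (mm^2)."""
    return 2.0 * math.sqrt(area_mm2 / math.pi)

# Area/diameter pairs quoted in the abstract:
print(round(equivalent_diameter(0.15), 2))  # dendritic field, visual streak -> 0.44
print(round(equivalent_diameter(0.67), 2))  # dendritic field, far periphery -> 0.92
print(round(equivalent_diameter(3.1), 1))   # axonal field, visual streak -> 2.0
print(round(equivalent_diameter(10.2), 1))  # axonal field, far periphery -> 3.6
```

All four quoted diameters are reproduced, confirming the equal-area-circle reading of the measurements.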
Abstract:
Optical coherence tomography (OCT) is an emerging coherence-domain technique capable of in vivo imaging of sub-surface structures at millimeter-scale depth. Its steady progress over the last decade has been galvanized by a breakthrough detection concept, termed spectral-domain OCT, which has yielded a dramatic, 150-fold improvement in the OCT signal-to-noise ratio, demonstrated for weakly scattering objects at video frame rates. As we have realized, however, an important OCT sub-system remains sub-optimal: the sample arm traditionally operates serially, i.e. in flying-spot mode. To realize full-field image acquisition, a Fourier holography system illuminated with a swept source is employed instead of the Michelson interferometer commonly used in OCT. The proposed technique, termed Fourier-domain OCT, offers a new leap in signal-to-noise ratio improvement compared to flying-spot OCT systems, and represents the main thrust of this paper. Fourier-domain OCT is described, and its basic theoretical aspects, including the reconstruction algorithm, are discussed. (C) 2004 Elsevier B.V. All rights reserved.
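The depth reconstruction underlying spectral/Fourier-domain detection can be sketched in a few lines: a reflector at depth z modulates the interferogram recorded over wavenumber k as cos(2kz), so a Fourier transform over k recovers the depth profile (A-scan). The sweep range, sample count, and reflector depth below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative parameters (not from the paper): a swept-source sweep over
# wavenumber k in the near-infrared, and a single reflector at 0.5 mm depth.
N = 2048                           # spectral samples acquired during the sweep
k = np.linspace(7.5e6, 8.5e6, N)   # wavenumber axis (rad/m)
z_true = 0.5e-3                    # reflector depth (m)

# Spectral interferogram: DC term plus the cos(2kz) interference fringe.
interferogram = 1.0 + np.cos(2.0 * k * z_true)

# Depth reconstruction: FFT over k after removing the DC component.
a_scan = np.abs(np.fft.fft(interferogram - interferogram.mean()))
dk = k[1] - k[0]
# cos(2kz) has frequency z/pi cycles per unit k, so depth z = pi * f_k.
depths = np.fft.fftfreq(N, d=dk) * np.pi

peak_depth = depths[np.argmax(a_scan[: N // 2])]
print(f"recovered depth: {peak_depth * 1e3:.3f} mm")  # close to 0.5 mm
```

The peak of the transformed spectrum lands at the reflector depth to within the axial sampling resolution, which is the core of the signal-to-noise advantage of spectral- and Fourier-domain detection over serial depth scanning.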