40 results for pulse compression


Relevance: 20.00%

Abstract:

The advanced optical modulation format polarization-division multiplexed quadrature phase-shift keying (PDM-QPSK) has become a key ingredient in the design of 100- and 200-Gb/s dense wavelength-division multiplexed (DWDM) networks. The performance of this format varies with the shape of the pulses carried by the optical carrier: non-return-to-zero (NRZ), return-to-zero (RZ), or carrier-suppressed return-to-zero (CSRZ). In this paper we analyze the tolerance of PDM-QPSK to linear and nonlinear optical impairments: amplified spontaneous emission (ASE) noise, crosstalk, distortion by optical filtering, chromatic dispersion (CD), polarization mode dispersion (PMD), and fiber Kerr nonlinearities. RZ formats with a low duty cycle reduce pulse-to-pulse interaction, yielding a higher tolerance to CD, PMD, and intrachannel nonlinearities.
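
The duty-cycle effect described above can be illustrated with a toy intensity-envelope generator. The raised-cosine pulse shape, sample counts, and duty-cycle value below are illustrative assumptions for a sketch, not the simulation setup used in the paper:

```python
import numpy as np

def rz_pulse_train(bits, samples_per_bit=32, duty_cycle=0.5):
    """Build an RZ intensity envelope: each '1' bit carries a pulse filling
    `duty_cycle` of its bit slot; the rest of the slot returns to zero.
    Lower duty cycles give shorter pulses and more dark time between them,
    which is what reduces pulse-to-pulse interaction."""
    pulse_len = int(samples_per_bit * duty_cycle)
    t = np.arange(pulse_len)
    slot = np.zeros(samples_per_bit)
    # illustrative raised-cosine pulse shape at the start of the slot
    slot[:pulse_len] = 0.5 * (1.0 - np.cos(2.0 * np.pi * (t + 0.5) / pulse_len))
    return np.concatenate([slot * b for b in bits])

# 33% duty cycle, as used by low-duty-cycle RZ formats (hypothetical value)
train = rz_pulse_train([1, 0, 1, 1], samples_per_bit=32, duty_cycle=0.33)
```

Setting `duty_cycle=1.0` would approximate an NRZ-like slot-filling pulse, making the contrast with low-duty-cycle RZ easy to visualize.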

Relevance: 20.00%

Abstract:

The first feasibility study of using dual-probe heated fiber optics with distributed temperature sensing to measure soil volumetric heat capacity and soil water content is presented. Although results using different combinations of cables demonstrate feasibility, further work is needed to gain accuracy, including a model to account for the finite dimension and the thermal influence of the probes. Implementation of the dual-probe heat-pulse (DPHP) approach for measurement of volumetric heat capacity (C) and water content (θ) with distributed temperature sensing heated fiber optic (FO) systems presents an unprecedented opportunity for environmental monitoring (e.g., simultaneous measurement at thousands of points). We applied uniform heat pulses along a FO cable and monitored the thermal response at adjacent cables. We tested the DPHP method in the laboratory using multiple FO cables at a range of spacings. The amplitude and phase shift in the heat signal with distance were found to be a function of the soil volumetric heat capacity. Estimations of C at a range of moisture contents (θ = 0.09–0.34 m³ m⁻³) suggest the feasibility of measurement via responsiveness to the changes in θ, although we observed error with decreasing soil water contents (up to 26% at θ = 0.09 m³ m⁻³). Optimization will require further models to account for the finite radius and thermal influence of the FO cables. Although the results indicate that the method shows great promise, further study is needed to quantify the effects of soil type, cable spacing, and jacket configurations on accuracy.
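
The DPHP inversion this study builds on can be sketched with the classical instantaneous line-source approximation (the simple model the abstract says must eventually be extended for the probes' finite dimensions). The heat input, probe spacing, and solid-phase heat capacity below are hypothetical values, not the paper's data:

```python
import math

def volumetric_heat_capacity(q, r, dT_max):
    """Instantaneous line-source DPHP estimate: C = q / (e * pi * r^2 * dT_max),
    with q the heat input per unit cable length (J/m), r the cable spacing (m),
    and dT_max the peak temperature rise (K). Returns C in J m^-3 K^-1."""
    return q / (math.e * math.pi * r ** 2 * dT_max)

def water_content(C, C_solids, C_water=4.18e6):
    """Invert the mixing rule C = C_solids + theta * C_water for the
    volumetric water content theta (C_water ~ 4.18 MJ m^-3 K^-1)."""
    return (C - C_solids) / C_water

# hypothetical measurement: 600 J/m pulse, 6 mm spacing, 1.0 K peak rise
C = volumetric_heat_capacity(q=600.0, r=0.006, dT_max=1.0)
theta = water_content(C, C_solids=1.3e6)
```

Because θ enters C linearly, the sensitivity of the estimate degrades as θ shrinks, consistent with the larger errors the study reports at low water contents.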

Relevance: 20.00%

Abstract:

In many applications (like social or sensor networks) the information generated can be represented as a continuous stream of RDF items, where each item describes an application event (social network post, sensor measurement, etc.). In this paper we focus on compressing RDF streams. In particular, we propose an approach for lossless RDF stream compression, named RDSZ (RDF Differential Stream compressor based on Zlib). This approach takes advantage of the structural similarities among items in a stream by combining a differential item encoding mechanism with the general-purpose stream compressor Zlib. Empirical evaluation using several RDF stream datasets shows that this combination produces gains in compression ratios with respect to using Zlib alone.
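
The RDSZ idea, differential item encoding feeding a persistent Zlib stream, can be sketched as follows. The line-based diff and the "="/"+" markers are simplifications assumed for illustration, not the actual RDSZ encoding:

```python
import zlib

class DifferentialZlibCompressor:
    """Sketch (assumed, simplified): each stream item is diffed line-by-line
    against the previous item, so repeated triples collapse to a marker, and
    the residual feeds a single zlib compressobj. Z_SYNC_FLUSH emits each
    item's bytes while keeping the compression dictionary warm across items."""

    def __init__(self):
        self.prev_lines = []
        self.z = zlib.compressobj()

    def compress_item(self, item: str) -> bytes:
        lines = item.splitlines()
        encoded = []
        for i, line in enumerate(lines):
            if i < len(self.prev_lines) and self.prev_lines[i] == line:
                encoded.append("=")          # unchanged w.r.t. previous item
            else:
                encoded.append("+" + line)   # new payload line
        self.prev_lines = lines
        payload = ("\n".join(encoded) + "\n\x00").encode()
        return self.z.compress(payload) + self.z.flush(zlib.Z_SYNC_FLUSH)

c = DifferentialZlibCompressor()
a = c.compress_item("<s> <p> <o1> .\n<s> <p2> <o2> .")
b = c.compress_item("<s> <p> <o1> .\n<s> <p2> <o3> .")  # structurally similar item
```

The second, similar item compresses smaller than the first: the diff removes the repeated triple and zlib's shared dictionary absorbs most of what remains, which is the effect the evaluation measures against plain Zlib.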

Relevance: 20.00%

Abstract:

We propose the use of a polarization-based interferometer with a variable transfer function for the generation of temporally flat-top pulses from gain-switched single-mode semiconductor lasers. The main advantage of the presented technique is its flexibility in terms of input pulse characteristics, such as pulse duration, spectral bandwidth, and operating wavelength. Theoretical predictions and experimental demonstrations are presented, and the proposed technique is applied to two different semiconductor laser sources emitting in the 1550 nm region. Flat-top pulses are successfully obtained from input seed pulses with durations ranging from 40 ps to 100 ps.
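
A minimal model of how a polarization delay-line interferometer flattens a pulse: the input is split between two orthogonal polarization axes, one replica is delayed, and (since orthogonal polarizations do not interfere) the output intensity is the weighted sum of the two shifted envelopes. This is an assumed simplification of the paper's variable-transfer-function scheme, with hypothetical numbers:

```python
import numpy as np

def flat_top_from_delay(t, fwhm_ps, delay_ps, split=0.5):
    """Sum of two orthogonally polarized, time-delayed replicas of a
    Gaussian input pulse. When the delay is comparable to the input FWHM,
    the two humps merge into an approximately flat top."""
    sigma = fwhm_ps / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = lambda tc: np.exp(-((t - tc) ** 2) / (2.0 * sigma ** 2))
    return split * gauss(0.0) + (1.0 - split) * gauss(delay_ps)

t = np.linspace(-100.0, 200.0, 3001)          # time axis in ps
y = flat_top_from_delay(t, fwhm_ps=40.0, delay_ps=40.0)
```

With delay equal to the 40 ps input FWHM the plateau ripple stays within a few percent; tuning the delay and split ratio is what gives the technique its flexibility with respect to input pulse duration.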

Relevance: 20.00%

Abstract:

LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it is capable of low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
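
The core step the abstract describes, logarithmic quantisation of prediction errors, can be sketched as below. The hop table, the left-neighbour predictor, and the clamping are illustrative assumptions consistent with the Weber–Fechner idea (coarser steps for larger errors), not LHE's actual tables:

```python
def log_quantise(error, hops=(0, 4, 8, 16, 32, 64, 128, 255)):
    """Map a signed prediction error to the nearest logarithmically spaced
    'hop' magnitude, preserving sign. Small errors get fine steps, large
    errors coarse ones, mirroring perceptual sensitivity."""
    mag = min(hops, key=lambda h: abs(h - abs(error)))
    return mag if error >= 0 else -mag

def lhe_encode_row(pixels):
    """Predict each pixel from its left neighbour and log-quantise the
    error, tracking the decoder-side reconstruction so errors don't drift."""
    codes = [pixels[0]]                      # first pixel sent verbatim
    pred = pixels[0]
    for p in pixels[1:]:
        q = log_quantise(p - pred)
        codes.append(q)
        pred = max(0, min(255, pred + q))    # mirror the decoder's state
    return codes

codes = lhe_encode_row([100, 105, 90])
```

Each pixel costs a constant amount of work and state, which is where the O(n) time and O(1) memory claims come from; the enhanced version reuses these same codes to rank block perceptual relevance.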

Relevance: 20.00%

Abstract:

In a general situation a non-uniform velocity field gives rise to a shift of the otherwise straight acoustic pulse trajectory between the transmitter and receiver transducers of a sonic anemometer. The aim of this paper is to determine the effect of such trajectory shifts on the velocity measured by the sonic anemometer. This determination has been accomplished by developing a mathematical model of the measuring process carried out by sonic anemometers, a model which includes the non-straight trajectory effect. The problem is solved by small-perturbation techniques based on the relevant small parameter of the problem, the Mach number of the reference flow, M. As part of the solution, a general analytical expression for the deviation of the computed measured speed from the nominal speed has been obtained. The correction terms of both the transit time and the measured speed are of order M² in a rotational velocity field. The method has been applied to three simple, paradigmatic flows: one-directional horizontal and vertical shear flows, each mixed with a uniform horizontal flow.
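
The nominal (straight-ray) measurement that the paper perturbs can be written down directly. The inverse-time formula below is the standard sonic-anemometer estimate; it is exact for a uniform flow, and the paper's result is that trajectory bending in a rotational (shear) field biases it only at order M²:

```python
def transit_times(L, c, v):
    """Straight-ray transit times along and against a uniform wind
    component v on a path of length L, with speed of sound c."""
    return L / (c + v), L / (c - v)

def sonic_velocity(L, t_ab, t_ba):
    """Inverse-time estimate used by sonic anemometers:
    v = (L/2) * (1/t_ab - 1/t_ba). Note it needs no knowledge of c."""
    return 0.5 * L * (1.0 / t_ab - 1.0 / t_ba)

# 15 cm path, c = 340 m/s, 5 m/s wind along the path (M ~ 0.015)
t1, t2 = transit_times(L=0.15, c=340.0, v=5.0)
v_est = sonic_velocity(0.15, t1, t2)
```

For this uniform flow the estimate recovers v exactly; the paper's contribution is quantifying how far `v_est` departs from the nominal speed when the ray bends in non-uniform flows.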

Relevance: 20.00%

Abstract:

Optical fiber sensors are a technology that has matured in recent years; however, further development is needed for applications to natural materials such as rocks. As complex aggregates, rocks can contain mineral particles and fractures much larger than the electrical strain gauges traditionally used to measure deformation in laboratory tests, so the results obtained may be unrepresentative. In this work, large-area, curved strain sensors based on fiber Bragg gratings (FBG) were designed, fabricated, and tested, with the aim of obtaining representative measurements on rocks containing minerals and structures of diverse compositions, sizes, and orientations. The fabrication of the transducer, its mechanical characterization, its calibration, and its evaluation in uniaxial compression tests on rock samples are presented. To verify how efficiently the rock deformation is transmitted to the bonded sensor, a strain-transfer analysis including the effects of the adhesive, the sample, and the transducer was also performed. The experimental results indicate that the developed sensor provides reliable strain measurement and transfer, an advance needed for use on rocks and other heterogeneous materials. This points to an interesting perspective for applications on irregular surfaces, since the size and shape of the sensing area can be enlarged at will; it also yields more reliable results on small samples, and suggests the sensor's suitability for field-scale works, where traditional electrical systems have shown limitations.
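
The readout behind such an FBG strain sensor can be sketched with the standard Bragg wavelength-shift relation. The photo-elastic coefficient and the numbers below are typical textbook values for silica fiber, not the calibration reported in this work, and temperature cross-sensitivity is ignored:

```python
def fbg_strain(lambda_bragg_nm, delta_lambda_nm, p_e=0.22):
    """Convert a Bragg-wavelength shift to axial strain via the standard
    relation delta_lambda / lambda_B = (1 - p_e) * strain, where p_e is the
    effective photo-elastic coefficient (~0.22 for silica). Assumes
    constant temperature (no thermo-optic correction)."""
    return delta_lambda_nm / (lambda_bragg_nm * (1.0 - p_e))

# hypothetical reading: 1550 nm grating shifted by 12.1 pm under compression
strain = fbg_strain(1550.0, 0.0121)
microstrain = strain * 1e6
```

In the large-area transducer described above, the strain seen by the grating is further scaled by the strain-transfer efficiency through the adhesive and carrier, which is why the work includes that transfer analysis in the calibration.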

Relevance: 20.00%

Abstract:

In this letter, we propose and experimentally demonstrate a compact, flexible, and scalable ultrawideband (UWB) generator based on merging phase-to-intensity conversion and pulse shaping by means of a fiber Bragg grating-based superstructure. Our approach offers the capacity for generating high-order UWB pulses through the combination of various low-order derivatives. Moreover, the scheme permits the implementation of binary and multilevel modulation formats. Experimental measurements of the generated UWB pulses, in both the time and frequency domains, are presented, revealing good efficiency and a proper fit to the standards settled by the Federal Communications Commission.
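
The low-order derivatives such schemes combine are conventionally modelled as derivatives of a Gaussian: order 1 is the monocycle, order 2 the doublet, and higher-order combinations push spectral power upward to fit the FCC mask. The sketch below, with an assumed pulse width and time grid, generates the order-2 doublet numerically, not the experimental waveforms:

```python
import numpy as np

def gaussian_derivative_pulse(t, order, sigma):
    """Return the n-th numerical derivative of a Gaussian on grid t,
    normalised to unit peak amplitude. These are the standard basis
    pulses for UWB waveform shaping."""
    d = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    for _ in range(order):
        d = np.gradient(d, t)          # successive numerical derivatives
    return d / np.max(np.abs(d))

t = np.linspace(-1e-9, 1e-9, 2001)     # 2 ns window, 1 ps steps
doublet = gaussian_derivative_pulse(t, order=2, sigma=0.1e-9)
```

A weighted sum of several such orders, which is what the FBG superstructure implements optically, gives the extra degrees of freedom used to approximate the FCC spectral limits.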

Relevance: 20.00%

Abstract:

Owing to the growing size of the data held by many current information systems, many traversal algorithms for these structures lose performance when searching them. Because these data are in many cases represented by node-vertex structures (graphs), the Graph500 challenge was created in 2009. Earlier challenges such as Top500 measured performance in terms of raw computing capacity, using LINPACK tests; Graph500 instead measures the execution of a breadth-first search (BFS) over graphs. The BFS algorithm is one of the pillars of many other graph algorithms, such as SSSP (shortest path) or betweenness centrality, so an improvement to BFS would help improve all of those that use it.

Problem analysis: the BFS algorithm used on high-performance computing (HPC) systems is usually a distributed version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each processor then computes its part and distributes its results to the other nodes. Because the gap between the processing speed of each node and the data-transfer speed of the interconnection network is very large (with the network at a disadvantage), many approaches have been taken to reduce the performance lost to transfers. Regarding the initial partitioning of the graph, the traditional approach (called 1D-partitioned graph) assigns each node a fixed set of vertices to process. To reduce data traffic, another partitioning (2D) was proposed, in which the distribution is based on the edges of the graph instead of the vertices. This partitioning reduced network traffic from a proportion of O(N×M) to O(log(N)). Although there have been other approaches to reducing transfers, such as initial reordering of the vertices to add locality to the nodes, or dynamic partitioning, the approach proposed in this work consists of applying recent compression techniques from large data systems, such as high-volume databases or internet search engines, to compress the data transferred between nodes.---ABSTRACT---The Breadth First Search (BFS) algorithm is the foundation and building block of many higher graph-based operations such as spanning trees, shortest paths, and betweenness centrality. The importance of this algorithm increases each day, since it is a key requirement for many data structures which are becoming popular nowadays; these data structures turn out to be graph structures internally. When the BFS algorithm is parallelized and the data are distributed over several processors, research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. In this work an alternative compression mechanism is presented. It differs from existing methods in that it is aware of characteristics of the data which may benefit the compression. Apart from this, another test is performed to see how this algorithm (in a distributed scenario) benefits from traditional instruction-based optimizations. Last, the current supercomputing techniques and the related work being done in the area are reviewed.
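
The BFS kernel being benchmarked and the proposed compression of inter-node transfers can be sketched together. The delta-plus-zlib packing below is an assumed illustration of "data-aware" compression (sorted frontier vertex ids have small gaps that compress well), not the thesis's actual mechanism:

```python
import zlib
from collections import deque

def bfs_levels(adj, source):
    """Sequential BFS returning each vertex's hop distance from `source`;
    this is the kernel the Graph500 benchmark times (in distributed form,
    each processor runs it on its partition and exchanges frontiers)."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adj.get(v, ()):
            if w not in level:
                level[w] = level[v] + 1
                frontier.append(w)
    return level

def pack_frontier(vertices):
    """Sketch of a data-aware transfer compressor: delta-encode the sorted
    vertex ids of a frontier, then zlib the byte stream before sending it
    over the interconnect (assumed scheme for illustration)."""
    vs = sorted(vertices)
    if not vs:
        return zlib.compress(b"")
    deltas = [vs[0]] + [b - a for a, b in zip(vs, vs[1:])]
    raw = b"".join(d.to_bytes(4, "little") for d in deltas)
    return zlib.compress(raw)

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
levels = bfs_levels(adj, 0)
blob = pack_frontier([10, 11, 12, 50])
```

Delta encoding exploits exactly the structural property of the transferred data (clustered vertex ids) that a generic byte-level compressor alone would miss, which is the sense in which the proposed mechanism is "aware of characteristics of the data".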

Relevance: 20.00%

Abstract:

The optimal design of a vertical cantilever beam is presented in this paper. The beam is assumed immersed in an elastic Winkler soil and subjected to several loads: a point force at the tip section, its self-weight, and a uniform distributed load along its length. The optimal design problem is to find the beam of a given length and minimum volume such that the resultant compressive stresses are admissible. This problem is analyzed according to linear elasticity theory and within different alternative structural models: column, Navier-Bernoulli beam-column, and Timoshenko beam-column (i.e., with shear strain), under conservative loads, typically constant-direction loads. Results obtained in each case are compared in order to evaluate the sensitivity of the numerical results to the choice of model. The beam optimal design is described by the section distribution layout (area, second moment, shear area, etc.) along the beam span and the corresponding beam total volume. Other situations, some of them very interesting from a theoretical point of view, with follower loads (the Beck and Leipholz problems) are also discussed, leaving numerical details and results for future work.
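
For the simplest of the structural models compared (the pure column), the fully stressed minimum-volume design has a classical closed form. The sketch below uses that textbook result with hypothetical numbers; it ignores the Winkler soil, the distributed lateral load, and bending, so it only illustrates the kind of optimum the paper computes for the richer models:

```python
import math

def fully_stressed_column(P, L, sigma_adm, gamma):
    """Fully stressed axial design of a vertical cantilever carrying a tip
    force P (N) plus self-weight (weight density gamma, N/m^3). Requiring
    the compressive stress to equal sigma_adm (Pa) everywhere gives
        A(x) = (P / sigma_adm) * exp(gamma * x / sigma_adm),
    with x measured from the tip, and the minimum volume
        V = (P / gamma) * (exp(gamma * L / sigma_adm) - 1)."""
    area = lambda x: (P / sigma_adm) * math.exp(gamma * x / sigma_adm)
    volume = (P / gamma) * math.expm1(gamma * L / sigma_adm)
    return area, volume

# hypothetical case: 100 kN tip load, 30 m column, 10 MPa admissible
# stress, 25 kN/m^3 weight density (concrete-like)
area, V = fully_stressed_column(P=1e5, L=30.0, sigma_adm=10e6, gamma=25e3)
```

The exponential taper comes from requiring σ·A(x) to carry both P and the weight of all material above x; the beam-column models in the paper generalize this by adding bending and shear terms to the admissibility constraint.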