983 results for Nonlinear Optical Processes


Relevance:

30.00%

Publisher:

Abstract:

In traditional electrical sensing applications, multiplexing and interconnecting the different sensing elements is a major challenge. Recently, many optical alternatives have been investigated, including optical fiber sensors whose sensing elements consist of fiber Bragg gratings. Different sensing points can be integrated in one optical fiber, solving the interconnection problem and avoiding any electromagnetic interference (EMI). Many new sensing applications also require flexible or stretchable sensing foils which can be attached to or wrapped around irregularly shaped objects such as robot fingers and car bumpers, or which can even be applied in biomedical applications where a sensor is fixed on a human body. The use of these optical sensors, however, always implies a light source, detectors and electronic circuitry to be coupled and integrated with the sensors. The coupling of the fibers with these light sources and detectors is a critical packaging problem, and it is well known that packaging costs, especially with optoelectronic components and fiber alignment issues, are huge. The end goal of this embedded sensor is a flexible optical sensor integrated with (opto)electronic modules and control circuitry. To obtain this flexibility, one can embed the optical sensors and the driving optoelectronics in a stretchable polymer host material. In this article, different embedding techniques for optical fiber sensors are described and characterized. Initial tests based on standard manufacturing processes such as molding and laser structuring are reported, as well as a more advanced embedding technique based on soft lithography processing.
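The sensing principle behind these grating elements is the Bragg condition, which ties the reflected wavelength to the grating pitch; strain and temperature shift that wavelength, which is what each sensing point measures. A minimal sketch with typical textbook silica-fiber values (not numbers from the article):

```python
# Fiber Bragg grating: reflected (Bragg) wavelength lambda_B = 2 * n_eff * period.
# Strain and temperature shift lambda_B, which is how each grating acts as a
# sensing point. All coefficient values below are typical textbook numbers for
# silica fiber, not values taken from the article.

def bragg_wavelength(n_eff, period_m):
    """Bragg wavelength of a grating with effective index n_eff and pitch period_m."""
    return 2.0 * n_eff * period_m

def bragg_shift(lam_b, strain=0.0, delta_t=0.0, p_e=0.22, alpha=0.55e-6, xi=6.7e-6):
    """Approximate wavelength shift from axial strain and temperature change (K).

    p_e: effective photo-elastic coefficient; alpha: thermal expansion
    coefficient; xi: thermo-optic coefficient (typical silica values).
    """
    return lam_b * ((1.0 - p_e) * strain + (alpha + xi) * delta_t)

lam_b = bragg_wavelength(n_eff=1.447, period_m=535.8e-9)   # ~1550 nm
shift = bragg_shift(lam_b, strain=100e-6)                  # 100 microstrain
```

Because each grating can be written with a different pitch, several sensing points share one fiber and are read out by their distinct Bragg wavelengths.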

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear distortion in delay-compensated spans with intermediate coupling is studied for the first time. Coupling strengths below -30 dB/100 m allow distortion reduction using shorter compensation lengths and higher delays. For higher coupling strengths, no significant penalty results from shorter compensation lengths.

Relevance:

30.00%

Publisher:

Abstract:

In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensation of nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we elucidate the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of the nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose the design of communication systems based on the periodic nonlinear Fourier transform (PNFT), following the introduction of the method in Part I. We show that the famous "eigenvalue communication" idea [A. Hasegawa and T. Nyu, J. Lightwave Technol. 11, 395 (1993)] can also be generalized to the PNFT: in this case, the main spectrum attributed to the PNFT signal decomposition remains constant during propagation down the optical fiber link. Therefore, the main PNFT spectrum can be encoded with data in the same way as soliton eigenvalues in the original proposal. The results are presented in terms of bit-error rate (BER) values for different modulation techniques and different constellation sizes versus the propagation distance, showing the good potential of the technique.
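The "eigenvalue communication" principle can be pictured with a toy model: data symbols are mapped onto points of the conserved main spectrum, and the receiver makes a nearest-neighbor decision on each received point. The QPSK-like constellation and additive-noise channel below are purely illustrative stand-ins, not the paper's transmission setup:

```python
# Toy illustration of the "eigenvalue communication" idea: symbols are mapped
# onto points of the (conserved) main spectrum, and the receiver makes a
# nearest-neighbor decision on each received point. The constellation and the
# Gaussian-noise channel are illustrative only, not the paper's system.
import random

CONSTELLATION = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]   # 4 spectral states, 2 bits each

def transmit(points, sigma, rng):
    """Perturb each encoded spectral point with circular Gaussian noise."""
    return [p + rng.gauss(0, sigma) + 1j * rng.gauss(0, sigma) for p in points]

def detect(points):
    """Nearest-neighbor decision back to constellation indices."""
    return [min(range(4), key=lambda k: abs(p - CONSTELLATION[k])) for p in points]

rng = random.Random(7)
data = [rng.randrange(4) for _ in range(1000)]
rx = detect(transmit([CONSTELLATION[d] for d in data], sigma=0.3, rng=rng))
ser = sum(a != b for a, b in zip(data, rx)) / len(data)   # symbol error rate
```

In the actual system the constellation points are main-spectrum values that survive propagation unchanged, which is what makes the decision step meaningful after transmission.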

Relevance:

30.00%

Publisher:

Abstract:

A novel artificial neural network (ANN)-based nonlinear equalizer (NLE) of low complexity is demonstrated for 40-Gb/s CO-OFDM over 2000 km, revealing a ∼1.5 dB enhancement in Q-factor compared to an inverse Volterra-series transfer function based NLE.
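The Q-factor figure of merit used in such equalizer comparisons is commonly derived from the measured BER via the Gaussian-noise relation BER = Φ(−Q). A sketch of that conversion (the abstract's specific CO-OFDM numbers are not reproduced here):

```python
# Q-factor from BER under the Gaussian-noise assumption: BER = Phi(-Q), so
# Q = -Phi^{-1}(BER), commonly reported in dB as 20*log10(Q). This is the
# standard conversion, not a reconstruction of the paper's measurement chain.
from math import log10
from statistics import NormalDist

def q_factor_db(ber):
    """Q-factor in dB corresponding to a given bit-error rate."""
    q_linear = -NormalDist().inv_cdf(ber)
    return 20.0 * log10(q_linear)

# Example: BER = 1.35e-3 corresponds to Q ~ 3 in linear units, i.e. ~9.5 dB.
q_db = q_factor_db(1.35e-3)
```

A ∼1.5 dB Q-factor gain therefore translates into a substantially lower BER at the same distance.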

Relevance:

30.00%

Publisher:

Abstract:

Anthropogenic elevation of atmospheric pCO2 is predicted to cause the pH of surface seawater to decline by 0.3-0.4 units by 2100 AD, causing a 50% reduction in seawater [CO3^2-] and undersaturation with respect to aragonite in high-latitude surface waters. We investigated the impact of CO2-induced ocean acidification on the temperate scleractinian coral Oculina arbuscula by rearing colonies for 60 days in experimental seawaters bubbled with air-CO2 gas mixtures of 409, 606, 903, and 2,856 ppm pCO2, yielding average aragonite saturation states (Omega aragonite) of 2.6, 2.3, 1.6, and 0.8. Measurement of calcification (via buoyant weighing) and linear extension (relative to a 137Ba/138Ba spike) revealed that skeletal accretion was only minimally impaired by reductions in Omega aragonite from 2.6 to 1.6, although major reductions were observed at 0.8 (undersaturation). Notably, the corals continued accreting new skeletal material even under undersaturated conditions, although at reduced rates. Correlation between rates of linear extension and calcification suggests that reduced calcification under Omega aragonite = 0.8 resulted from reduced aragonite accretion, rather than from localized dissolution. Accretion of pure aragonite under each Omega aragonite discounts the possibility that these corals will begin producing calcite, a less soluble form of CaCO3, as the oceans acidify. The corals' nonlinear response to reduced Omega aragonite and their ability to accrete new skeletal material under undersaturated conditions suggest that they strongly control the biomineralization process. However, our data suggest that a threshold seawater [CO3^2-] exists below which calcification within this species (and possibly others) becomes impaired. Indeed, the strong negative response of O. arbuscula to Omega aragonite = 0.8 indicates that their response to future pCO2-induced ocean acidification could be both abrupt and severe once the critical Omega aragonite is reached.
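The saturation states quoted above follow from the definition Omega = [Ca2+][CO3^2-]/K'sp, which makes the link between a 50% carbonate-ion reduction and undersaturation direct. A minimal sketch with rough surface-seawater constants (approximate values for 25 °C, salinity 35; not numbers from the study):

```python
# Aragonite saturation state: Omega = [Ca2+][CO3^2-] / K'sp. Halving the
# seawater carbonate ion concentration halves Omega, which is why the high-CO2
# treatment in the abstract falls below 1 (undersaturation). Constants below
# are rough surface-seawater values (25 C, salinity 35), for illustration only.

KSP_ARAGONITE = 6.5e-7   # mol^2/kg^2, approximate stoichiometric K'sp
CA = 0.0103              # mol/kg, roughly conservative with salinity

def omega_aragonite(co3):
    """Saturation state for a given carbonate ion concentration (mol/kg)."""
    return CA * co3 / KSP_ARAGONITE

omega_now = omega_aragonite(2.0e-4)    # present-day-like [CO3^2-]
omega_half = omega_aragonite(1.0e-4)   # the projected ~50% reduction
```

Below Omega = 1 the mineral is thermodynamically favored to dissolve, which is why continued accretion at Omega = 0.8 implies strong biological control of calcification.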

Relevance:

30.00%

Publisher:

Abstract:

Increased atmospheric CO2 concentrations are causing greater dissolution of CO2 into seawater and are ultimately responsible for today's ongoing ocean acidification. We manipulated seawater acidity by addition of HCl and by increasing CO2 concentration, and observed that two coastal harpacticoid copepods, Amphiascoides atopus and Schizopera knabeni, were both more sensitive to increased acidity when it was generated by CO2. The present study indicates that copepods living in environments more prone to hypercapnia, such as the mudflats where S. knabeni lives, may be less sensitive to future acidification. Ocean acidification is also expected to alter the toxicity of waterborne metals by influencing their speciation in seawater. CO2 enrichment did not affect the free-ion concentration of Cd but did increase the free-ion concentration of Cu. Antagonistic toxicities were observed between CO2 and Cd, Cu and the Cu free ion in A. atopus. This interaction could be due to competition between H+ and metals for binding sites.

Relevance:

30.00%

Publisher:

Abstract:

Acidification of ocean surface waters by anthropogenic carbon dioxide (CO2) emissions is a currently developing scenario that warrants a broadening of research foci in the study of acid-base physiology. Recent studies working with environmentally relevant CO2 levels indicate that some echinoderms and molluscs reduce metabolic rates, soft tissue growth and calcification during hypercapnic exposure. In contrast to all invertebrate species studied so far, growth trials with the cuttlefish Sepia officinalis found no indication of reduced growth or calcification performance during long-term exposure to 0.6 kPa CO2. It is hypothesized that the differing sensitivities to elevated seawater pCO2 could be explained by taxon-specific differences in acid-base regulatory capacity. In this study, we examined the acid-base regulatory ability of S. officinalis in vivo, using a specially modified cannulation technique as well as 31P NMR spectroscopy. During acute exposure to 0.6 kPa CO2, S. officinalis rapidly increased its blood [HCO3-] to 10.4 mM through active ion-transport processes and partially compensated the hypercapnia-induced respiratory acidosis. A minor decrease in intracellular pH (pHi) and stable intracellular phosphagen levels indicated efficient pHi regulation. We conclude that S. officinalis is not only an efficient acid-base regulator, but is also able to regulate without disturbing metabolic equilibria in characteristic tissues or compromising aerobic capacities. The cuttlefish did not exhibit the acute intolerance to hypercapnia that has been hypothesized for more active cephalopod species (squid). Even though blood pH (pHe) remained 0.18 pH units below control values, arterial O2 saturation was not compromised in S. officinalis because of the comparatively lower pH sensitivity of oxygen binding to its blood pigment.
This raises questions concerning the potentially broad range of sensitivity to changes in acid-base status amongst invertebrates, as well as to the underlying mechanistic origins. Further studies are needed to better characterize the connection between acid-base status and animal fitness in various marine species.

Relevance:

30.00%

Publisher:

Abstract:

Pipelines extend thousands of kilometers across wide geographic areas as a network providing essential services for modern life. It is inevitable that pipelines must pass through unfavorable ground conditions that are susceptible to natural disasters. This thesis investigates the behaviour of buried pressure pipelines experiencing ground distortions induced by normal faulting. A recent large database of physical modelling observations on buried pipes of different stiffness relative to the surrounding soil, subjected to normal faults, provided a unique opportunity to calibrate numerical tools. Three-dimensional finite element models were developed to enable the complex soil-structure interaction phenomena to be further understood, especially on the subjects of gap formation beneath the pipe and the trench effect associated with the interaction between backfill and native soils. The benchmarked numerical tools were then used to perform parametric analyses covering project geometry, backfill material, relative pipe-soil stiffness and pipe diameter. Seismic loading produces a soil displacement profile that can be expressed by i_soil, the distance between the peak curvature and the point of contraflexure. A simplified design framework based on this length scale (i.e., the Kappa method) was developed, which estimates the longitudinal bending moments of buried pipes using a characteristic length, i_pipe, the distance from peak to zero curvature. Recent studies indicated that empirical soil springs that were calibrated against rigid pipes are not suitable for analyzing flexible pipes, since they lead to excessive conservatism (for design). A large-scale split-box normal fault simulator was therefore assembled to produce experimental data on flexible PVC pipe responses to a normal fault. Digital image correlation (DIC) was employed to analyze the soil displacement field, and both optical fibres and conventional strain gauges were used to measure pipe strains.
A refinement to the Kappa method was introduced to enable the calculation of axial strains as a function of pipe elongation induced by flexure and an approximation of the longitudinal ground deformations. A closed-form Winkler solution of flexural response was also derived to account for the distributed normal fault pattern. Finally, these two analytical solutions were evaluated against the pipe responses observed in the large-scale laboratory tests.
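Winkler-type idealizations such as the closed-form solution mentioned above carry a characteristic length, (4EI/k)^0.25, over which a buried pipe's bending response decays. A minimal sketch with hypothetical PVC pipe and soil numbers (not the thesis's test parameters):

```python
# Beam-on-elastic-foundation (Winkler) characteristic length for a buried
# pipe: lambda_c = (4*E*I / k)**0.25, where k is the subgrade reaction modulus
# per unit length of pipe. The PVC pipe and soil values are hypothetical
# illustrations, not the parameters of the laboratory tests.
from math import pi

def pipe_moment_of_inertia(d_outer, t):
    """Second moment of area of a circular pipe cross-section (m^4)."""
    d_inner = d_outer - 2.0 * t
    return pi * (d_outer ** 4 - d_inner ** 4) / 64.0

def winkler_characteristic_length(e_pipe, i_pipe, k_soil):
    """Length scale over which the pipe's flexural response to a fault decays."""
    return (4.0 * e_pipe * i_pipe / k_soil) ** 0.25

i_p = pipe_moment_of_inertia(d_outer=0.257, t=0.0079)      # PVC-like pipe, m^4
lam_c = winkler_characteristic_length(3.0e9, i_p, 5.0e6)   # E in Pa, k in N/m^2
```

The contrast between this length scale for flexible and rigid pipes is one way to see why springs calibrated against rigid pipes over-predict demands on flexible ones.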

Relevance:

30.00%

Publisher:

Abstract:

Motivated by environmental protection concerns, monitoring the flue gas of thermal power plants is now often mandatory to ensure that emission levels stay within safe limits. Optical gas sensing systems are increasingly employed for this purpose, with regression techniques used to relate gas optical absorption spectra to the concentrations of specific gas components of interest (NOx, SO2, etc.). Accurately predicting gas concentrations from absorption spectra remains a challenging problem due to the presence of nonlinearities in the relationships and the high-dimensional and correlated nature of the spectral data. This article proposes a generalized fuzzy linguistic model (GFLM) to address this challenge. The GFLM is made up of a series of “If-Then” fuzzy rules. The absorption spectra are the input variables in the rule antecedent. The rule consequent is a general nonlinear polynomial function of the absorption spectra. Model parameters are estimated using least squares and gradient descent optimization algorithms. The performance of the GFLM is compared with other traditional prediction models, such as partial least squares, support vector machines, multilayer perceptron neural networks and radial basis function networks, on two real flue gas spectral datasets: one from a coal-fired power plant and one from a gas-fired power plant. The experimental results show that the generalized fuzzy linguistic model has good predictive ability and is competitive with alternative approaches, while having the added advantage of providing an interpretable model.


Relevance:

30.00%

Publisher:

Abstract:

To maintain the pace of development set by Moore's law, production processes in semiconductor manufacturing are becoming more and more complex. The development of efficient and interpretable anomaly detection systems is fundamental to keeping production costs low. As the dimension of process monitoring data can become extremely high, anomaly detection systems are impacted by the curse of dimensionality; hence dimensionality reduction plays an important role. Classical dimensionality reduction approaches, such as Principal Component Analysis, generally involve transformations that seek to maximize the explained variance. In datasets with several clusters of correlated variables, the contributions of isolated variables to the explained variance may be insignificant, with the result that they may not be included in the reduced data representation. It is then not possible to detect an anomaly if it is only reflected in such isolated variables. In this paper we present a new dimensionality reduction technique that takes account of such isolated variables, and demonstrate how it can be used to build an interpretable and robust anomaly detection system for Optical Emission Spectroscopy data.
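The failure mode described above is easy to reproduce on synthetic data: with a cluster of correlated variables plus one isolated variable, the leading principal component (found here by power iteration on the covariance matrix) loads almost entirely on the cluster, so anomalies confined to the isolated variable vanish from a rank-1 representation. A self-contained toy demonstration:

```python
# Toy demonstration of PCA discarding an isolated variable: five correlated
# variables share a common factor, a sixth is independent with small variance.
# The leading eigenvector of the covariance matrix (power iteration) loads
# almost entirely on the cluster, so the isolated variable - and any anomaly
# confined to it - is invisible in a rank-1 reduction. Data are synthetic.
import random

rng = random.Random(0)
n, rows = 6, 400
data = []
for _ in range(rows):
    common = rng.gauss(0, 1)
    row = [common + rng.gauss(0, 0.1) for _ in range(5)]   # correlated cluster
    row.append(rng.gauss(0, 0.3))                          # isolated variable
    data.append(row)

means = [sum(r[j] for r in data) / rows for j in range(n)]
cov = [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in data) / (rows - 1)
        for j in range(n)] for i in range(n)]

v = [1.0] * n                    # power iteration for the leading eigenvector
for _ in range(200):
    w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

cluster_loading = max(abs(x) for x in v[:5])   # ~1/sqrt(5) each
isolated_loading = abs(v[5])                   # ~0: variable is discarded
```

Methods like the one proposed in the paper address exactly this gap by ensuring such isolated variables survive the reduction.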

Relevance:

30.00%

Publisher:

Abstract:

In situ methods used for water quality assessment have both physical and time constraints. As a result, only a limited number of sampling points can be covered, making it difficult to capture the range and variability of coastal processes and constituents. In addition, the mixing of fresh and oceanic water creates complex physical, chemical and biological environments that are difficult to understand, so the existing measurement methodologies face significant logistical, technical and economic challenges and constraints. Remote sensing of ocean colour makes it possible to acquire information on the distribution of chlorophyll and other constituents over large areas of the oceans in short periods. There are many potential applications of ocean colour data. Satellite-derived products are a key data source for studying the distribution patterns of organisms and nutrients (Guillaud et al. 2008) and for fishery research (Pillai and Nair 2010; Solanki et al. 2001), as well as for the study of the spatial and temporal variability of phytoplankton blooms, red tide identification or harmful algal bloom monitoring (Sarangi et al. 2001; Sarangi et al. 2004; Sarangi et al. 2005; Bhagirathan et al. 2014), river plume or upwelling assessments (Doxaran et al. 2002; Sravanthi et al. 2013), global productivity analyses (Platt et al. 1988; Sathyendranath et al. 1995; IOCCG 2006) and oil spill detection (Maianti et al. 2014). For remote sensing to be accurate in complex coastal waters, it has to be validated against in situ measured values. In this thesis, an attempt has been made to study, measure and validate these complex waters with the help of satellite data. Monitoring the coastal ecosystem health of the Arabian Sea in a synoptic way requires intense, extensive and continuous monitoring of water quality indicators. Phytoplankton, determined from chl-a concentration, is considered an indicator of the state of coastal ecosystems.
Currently, satellite sensors provide the most effective means for frequent, synoptic, water-quality observations over large areas and represent a potential tool to effectively assess chl-a concentration over coastal and oceanic waters; however, algorithms designed to estimate chl-a at global scales have been shown to be less accurate in Case 2 waters, due to the presence of water constituents other than phytoplankton which do not co-vary with the phytoplankton. The constituents of Arabian Sea coastal waters are region-specific because of the inherent variability of these optically-active substances affected by factors such as riverine input (e.g. suspended matter type and grain size, CDOM) and phytoplankton composition associated with seasonal changes.
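The global chl-a algorithms referred to above are typically empirical band-ratio polynomials: chlorophyll is estimated from the ratio of blue to green reflectance in log10 space. A sketch of that functional form follows; the coefficients are placeholders chosen for illustration, not an operational parameterization:

```python
# Empirical ocean-colour chlorophyll retrieval of the band-ratio family:
# log10(chl) = a0 + sum_i a_i * R^i, where R = log10(blue Rrs / green Rrs).
# Clear "blue" water gives a high ratio and low chl; productive "green" water
# gives a ratio near 1 and higher chl. Coefficients are placeholders for
# illustration, not an operational NASA parameterization.
from math import log10

A = (0.25, -2.7, 1.8, 0.0, -1.2)   # placeholder polynomial coefficients

def chl_band_ratio(rrs_blue, rrs_green):
    """Chlorophyll-a estimate (mg/m^3) from a blue/green reflectance ratio."""
    r = log10(rrs_blue / rrs_green)
    log_chl = sum(a * r ** i for i, a in enumerate(A))
    return 10.0 ** log_chl

chl_clear = chl_band_ratio(0.010, 0.002)   # strongly blue water: low chl
chl_green = chl_band_ratio(0.004, 0.004)   # ratio ~1: higher chl
```

In Case 2 coastal waters, suspended sediment and CDOM perturb the blue/green ratio independently of phytoplankton, which is exactly why such global polynomials lose accuracy there and require regional validation.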

Relevance:

30.00%

Publisher:

Abstract:

In this work, optical filter arrays for high-quality spectroscopic applications in the visible (VIS) wavelength range are investigated. The optical filters, consisting of Fabry-Pérot (FP) filters for high-resolution miniaturized optical nanospectrometers, are based on two highly reflective dielectric mirrors and an intermediate polymer resonance cavity. Each filter transmits a narrow spectral band (referred to in this work as a filter line) depending on the height of its resonance cavity. The efficiency of such an optical filter depends on the precise fabrication of highly selective multispectral filter arrays of FP filters by low-cost, high-throughput methods. The fabrication of the multiple spectral filters across the entire visible range is achieved in a single imprint step on one substrate using 3D nanoimprint technology with very high vertical resolution. The key to this process integration is the fabrication of 3D nanoimprint stamps containing the desired arrays of filter cavities. The spectral sensitivity of these efficient optical filters depends on the accuracy of the vertically varying cavities, which are produced by a large-area "soft" nanoimprint technology, UV surface-conformal imprint lithography (UV-SCIL). The main problems of UV-based SCIL processes, such as a non-uniform residual layer thickness and shrinkage of the polymer, limit the potential applications of this technology. It is very important that the residual layer be thin and uniform so that the critical dimensions of the functional 3D pattern can be controlled during the plasma etching used to remove the residual layer.
In the case of the nanospectrometer, the cavities vary vertically between neighboring FP filters, so the volume of each individual filter changes, which leads to a variation of the residual layer thickness under each filter. The volumetric shrinkage caused by the polymerization process affects the size and dimensions of the imprinted polymer cavities. The behavior of the large-area UV-SCIL process is improved by using a volume-balanced stamp design and by optimizing the process conditions. The volume-balanced stamp design distributes 64 vertically varying filter cavities into units of 4 cavities that share a common average volume. Using the balanced volumes, uniform residual layer thicknesses (110 nm) are obtained across all filter heights. A quantitative analysis of the polymer shrinkage is carried out in the lateral and vertical directions of the FP filters. Shrinkage in the vertical direction has the largest influence on the spectral response of the filters and is reduced from 12% to 4% by changing the exposure time. FP filters fabricated with the volume-balanced stamp and the optimized imprint process show a high-quality spectral response with a linear dependence between the cavity heights and the spectral positions of the corresponding filter lines.
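The linear dependence between cavity height and filter-line position reported above follows directly from the Fabry-Pérot resonance condition, where transmission peaks at wavelengths proportional to the cavity height. A minimal sketch; the polymer index and interference order are assumed values for illustration:

```python
# Fabry-Perot resonance: transmission peaks at lambda_m = 2 * n * d / m for
# cavity height d, refractive index n and interference order m, so the filter
# line shifts linearly with cavity height - the dependence reported for the
# imprinted filter arrays. The index and order below are assumed values.

N_POLYMER = 1.5   # assumed refractive index of the polymer cavity
ORDER = 2         # assumed interference order

def filter_line_nm(cavity_height_nm):
    """Resonance wavelength (nm) of an FP cavity of the given height (nm)."""
    return 2.0 * N_POLYMER * cavity_height_nm / ORDER

# A linear ramp of cavity heights maps to a linear ramp of filter lines
# spanning the visible range:
lines = [filter_line_nm(d) for d in range(270, 470, 25)]
```

This is why the vertical accuracy of the imprinted cavities translates one-to-one into the spectral accuracy of the nanospectrometer.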

Relevance:

30.00%

Publisher:

Abstract:

Wavelength converters are essential for the realization of wavelength-routed optical communication networks. In the literature, wavelength converters based on four-wave mixing in semiconductor optical amplifiers are an extremely attractive solution because of their many characteristics required for the implementation of such communication networks. With the emergence of commercial coherent detection systems, as well as recent advances in digital signal processing, it is imperative to evaluate the performance of wavelength converters in the context of advanced modulation formats. The objectives of this thesis are: 1) to study the feasibility of wavelength converters based on four-wave mixing in semiconductor optical amplifiers for advanced modulation formats, and 2) to propose a technique based on digital signal processing to improve their performance. First, an experimental study of the wavelength conversion of quadrature amplitude modulation (QAM) formats is carried out. In particular, wavelength conversion of 16-Gbaud 16-QAM and 5-Gbaud 64-QAM signals in a commercial semiconductor optical amplifier is performed across the entire C band. The results show that, because of the nonlinear distortions induced on the converted signal, the optimal operating point of the wavelength converter differs from that obtained for the wavelength conversion of intensity-modulated formats. Indeed, in the context of advanced modulation formats, it is the trade-off between the power of the converted signal and the induced nonlinearities that determines the optimal operating point of the wavelength converter.
Coherent receivers enable the use of digital signal processing techniques to compensate for the degradation of the transmitted signal after detection. To take advantage of the new possibilities offered by digital signal processing, a digital technique for post-compensation of the distortions induced on the converted signal, based on a small-signal analysis of the equations governing the gain dynamics inside semiconductor optical amplifiers, is developed. The effectiveness of this technique is demonstrated through numerical simulations and experimental measurements of wavelength conversion of 10-Gbaud 16-QAM and 5-Gbaud 64-QAM signals. This method significantly improves the performance of the wavelength converter, mainly for higher-order advanced modulation formats such as 64-QAM. Finally, an exhaustive experimental study of the post-compensation technique is carried out for 64-QAM signals. The results show that, even with a noisy signal at the input of the wavelength converter, the proposed technique still improves the quality of the received signal. Moreover, a study of the optimal operating point of the wavelength converter shows that it varies with the optical losses following wavelength conversion. In a wavelength-routed optical communication network, a signal is likely to pass through several wavelength-conversion stages. For this reason, the effectiveness of the post-compensation technique is demonstrated, for the first time in the literature, for two successive stages of wavelength conversion of 5-Gbaud 64-QAM signals.
The results of this thesis show that wavelength converters based on four-wave mixing in semiconductor optical amplifiers, used in conjunction with digital signal processing techniques, are an extremely promising technology for modern wavelength-routed optical communication networks.
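In the degenerate four-wave mixing process these converters rely on, the converted (idler) frequency is mirrored about the pump: f_idler = 2 f_pump − f_signal. A minimal sketch with illustrative C-band wavelengths (not the thesis's experimental settings):

```python
# Degenerate four-wave mixing in an SOA generates an idler at
# f_idler = 2*f_pump - f_signal, i.e. the converted channel appears mirrored
# about the pump frequency. Wavelengths below are illustrative C-band values,
# not the experimental settings of the thesis.

C = 299_792_458.0   # speed of light, m/s

def idler_wavelength(lam_pump, lam_signal):
    """Converted (idler) wavelength from pump and signal wavelengths (m)."""
    f_idler = 2.0 * C / lam_pump - C / lam_signal
    return C / f_idler

lam_i = idler_wavelength(1550e-9, 1545e-9)   # idler lands near 1555 nm
```

Tuning the pump therefore tunes the output channel across the C band, which is what makes the scheme attractive for wavelength routing.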