918 results for Non-linear systems


Relevance:

100.00%

Publisher:

Abstract:

El propósito de esta tesis fue estudiar el rendimiento ofensivo de los equipos de balonmano de élite cuando se considera el balonmano como un sistema dinámico complejo no lineal. La perspectiva de análisis dinámica dependiente del tiempo fue adoptada para evaluar el rendimiento de los equipos durante el partido. La muestra general comprendió los 240 partidos jugados en la temporada 2011-2012 de la liga profesional masculina de balonmano de España (Liga ASOBAL). En el análisis posterior solo se consideraron los partidos ajustados (diferencia final de goles ≤ 5; n = 142). El estado del marcador, la localización del partido, el nivel de los oponentes y el periodo de juego fueron incorporados al análisis como variables situacionales. Tres estudios compusieron el núcleo de la tesis. En el primer estudio, analizamos la coordinación entre las series temporales que representan el proceso goleador a lo largo del partido de cada uno de los dos equipos que se enfrentan. Autocorrelaciones, correlaciones cruzadas, doble media móvil y transformada de Hilbert fueron usadas para el análisis. El proceso goleador de los equipos presentó una alta consistencia a lo largo de todos los partidos, así como fuertes modos de coordinación en fase en todos los contextos de juego. Las únicas diferencias se encontraron en relación al periodo de juego. La coordinación en los procesos goleadores de los equipos fue significativamente menor en el 1er y 2º periodo (0–10 min y 10–20 min), mostrando una clara coordinación creciente a medida que el partido avanzaba. Esto sugiere que son los 20 primeros minutos aquellos que rompen los partidos. En el segundo estudio, analizamos los efectos temporales (efecto inmediato, a corto y a medio plazo) de los tiempos muertos en el rendimiento goleador de los equipos. Modelos de regresión lineal múltiple fueron empleados para el análisis. Los resultados mostraron incrementos de 0.59, 1.40 y 1.85 goles para los periodos que comprenden la primera, tercera y quinta posesión de los equipos que pidieron el tiempo muerto. Inversamente, se encontraron efectos significativamente negativos para los equipos rivales, con decrementos de 0.50, 1.43 y 2.05 goles en los mismos periodos respectivamente. La influencia de las variables situacionales solo se registró en ciertos periodos de juego. Finalmente, en el tercer estudio, analizamos los efectos temporales de las exclusiones de los jugadores sobre el rendimiento goleador de los equipos, tanto para los equipos que sufren la exclusión (inferioridad numérica) como para los rivales (superioridad numérica). Se emplearon modelos de regresión lineal múltiple para el análisis. Los resultados mostraron efectos negativos significativos en el número de goles marcados por los equipos con un jugador menos, con decrementos de 0.25, 0.40, 0.61, 0.62 y 0.57 goles para los periodos que comprenden el primer, segundo, tercer, cuarto y quinto minutos previos y posteriores a la exclusión. Para los rivales, los resultados mostraron efectos positivos significativos, con incrementos de la misma magnitud en los mismos periodos. Esta tendencia no se vio afectada por el estado del marcador, localización del partido, nivel de los oponentes o periodo de juego. Los incrementos goleadores fueron menores de lo que se podría esperar de una superioridad numérica de 2 minutos. Diferentes teorías psicológicas como la paralización ante situaciones de presión donde se espera un gran rendimiento pueden ayudar a explicar este hecho. 
Los últimos capítulos de la tesis enumeran las conclusiones principales y presentan diferentes aplicaciones prácticas que surgen de los tres estudios. Por último, se presentan las limitaciones y futuras líneas de investigación.

ABSTRACT

The purpose of this thesis was to investigate the offensive performance of elite handball teams when considering handball as a complex non-linear dynamical system. A time-dependent dynamic approach was adopted to assess teams' performance during the game. The overall sample comprised the 240 games played in the 2011-2012 season of the men's Spanish Professional Handball League (ASOBAL League). In the subsequent analyses, only close games (final goal difference ≤ 5; n = 142) were considered. Match status, game location, quality of opposition and game period were incorporated into the analysis as situational variables. Three studies composed the core of the thesis. In the first study, we analyzed the game-scoring coordination between the time series representing the scoring processes of the two opposing teams throughout the game. Autocorrelation, cross-correlation, double moving average and the Hilbert transform were used for the analysis. The scoring processes of the teams presented a high consistency across all the games, as well as strong in-phase modes of coordination in all the game contexts. The only differences were found when controlling for the game period. The coordination in the scoring processes of the teams was significantly lower for the 1st and 2nd periods (0–10 min and 10–20 min), showing a clear increase in coordination as the game progressed. This suggests that it is the first 20 minutes that break the games open. In the second study, we analyzed the temporal effects (immediate, short-term and medium-term) of team timeouts on teams' scoring performance. Multiple linear regression models were used for the analysis. The results showed increments of 0.59, 1.40 and 1.85 goals for the periods comprising the first, third and fifth ball possessions after the timeout for the teams that requested it. Conversely, significant negative effects on goals scored were found for the opponent teams, with decrements of 0.59, 1.43 and 2.04 goals for the same periods, respectively. The influence of situational variables on scoring performance was only registered in certain game periods. Finally, in the third study, we analyzed the temporal effects of player exclusions on teams' scoring performance, both for the teams that suffer the exclusion (numerical inferiority) and for their opponents (numerical superiority). Multiple linear regression models were used for the analysis. The results showed significant negative effects on the number of goals scored by the teams with one player fewer, with decrements of 0.25, 0.40, 0.61, 0.62 and 0.57 goals for the periods comprising the one, two, three, four and five minutes before and after the exclusion. For the opponent teams, the results showed positive effects, with increments of the same magnitude in the same game periods. This trend was not affected by match status, game location, quality of opposition or game period. The scoring increments were smaller than might be expected from a 2-minute numerical playing superiority. Psychological theories such as choking under pressure in situations where good performance is expected could help to explain this finding. The final chapters of the thesis enumerate the main conclusions and underline the main practical applications that arise from the three studies.
Lastly, limitations and future research directions are described.
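As context for the coordination analysis described above, the sketch below shows how cross-correlation and the Hilbert transform can be applied to two scoring time series; the synthetic minute-by-minute goal counts, rates and variable names are illustrative assumptions, not data or code from the thesis.

```python
# Illustrative sketch (not the thesis code): quantify in-phase coordination
# between two teams' scoring processes with cross-correlation and the
# Hilbert transform, for hypothetical minute-by-minute goal counts.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
goals_a = rng.poisson(0.45, size=60)          # goals per minute, team A (synthetic)
goals_b = rng.poisson(0.45, size=60)          # goals per minute, team B (synthetic)
score_a = np.cumsum(goals_a).astype(float)    # cumulative scoring process
score_b = np.cumsum(goals_b).astype(float)

# Zero-lag cross-correlation of the standardised scoring processes
za = (score_a - score_a.mean()) / score_a.std()
zb = (score_b - score_b.mean()) / score_b.std()
r0 = np.mean(za * zb)

# Relative phase from the analytic signal; values near 0 deg indicate in-phase coordination
phase_a = np.angle(hilbert(za))
phase_b = np.angle(hilbert(zb))
rel_phase = np.rad2deg(np.unwrap(phase_a - phase_b))

print(f"zero-lag cross-correlation: {r0:.2f}")
print(f"mean relative phase: {rel_phase.mean():.1f} deg")
```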

Relevance:

100.00%

Publisher:

Abstract:

El uso de aritmética de punto fijo es una opción de diseño muy extendida en sistemas con fuertes restricciones de área, consumo o rendimiento. Para producir implementaciones donde los costes se minimicen sin impactar negativamente en la precisión de los resultados debemos llevar a cabo una asignación cuidadosa de anchuras de palabra. Encontrar la combinación óptima de anchuras de palabra en coma fija para un sistema dado es un problema combinatorio NP-hard al que los diseñadores dedican entre el 25 y el 50 % del ciclo de diseño. Las plataformas hardware reconfigurables, como son las FPGAs, también se benefician de las ventajas que ofrece la aritmética de coma fija, ya que éstas compensan las frecuencias de reloj más bajas y el uso más ineficiente del hardware que hacen estas plataformas respecto a los ASICs. A medida que las FPGAs se popularizan para su uso en computación científica los diseños aumentan de tamaño y complejidad hasta llegar al punto en que no pueden ser manejados eficientemente por las técnicas actuales de modelado de señal y ruido de cuantificación y de optimización de anchura de palabra. En esta Tesis Doctoral exploramos distintos aspectos del problema de la cuantificación y presentamos nuevas metodologías para cada uno de ellos: Las técnicas basadas en extensiones de intervalos han permitido obtener modelos de propagación de señal y ruido de cuantificación muy precisos en sistemas con operaciones no lineales. Nosotros llevamos esta aproximación un paso más allá introduciendo elementos de Multi-Element Generalized Polynomial Chaos (ME-gPC) y combinándolos con una técnica moderna basada en Modified Affine Arithmetic (MAA) estadístico para así modelar sistemas que contienen estructuras de control de flujo. Nuestra metodología genera los distintos caminos de ejecución automáticamente, determina las regiones del dominio de entrada que ejercitarán cada uno de ellos y extrae los momentos estadísticos del sistema a partir de dichas soluciones parciales. Utilizamos esta técnica para estimar tanto el rango dinámico como el ruido de redondeo en sistemas con las ya mencionadas estructuras de control de flujo y mostramos la precisión de nuestra aproximación, que en determinados casos de uso con operadores no lineales llega a tener tan solo una desviación del 0.04% con respecto a los valores de referencia obtenidos mediante simulación. Un inconveniente conocido de las técnicas basadas en extensiones de intervalos es la explosión combinacional de términos a medida que el tamaño de los sistemas a estudiar crece, lo cual conlleva problemas de escalabilidad. Para afrontar este problema presentamos una técnica de inyección de ruidos agrupados que hace grupos con las señales del sistema, introduce las fuentes de ruido para cada uno de los grupos por separado y finalmente combina los resultados de cada uno de ellos. De esta forma, el número de fuentes de ruido queda controlado en cada momento y, debido a ello, la explosión combinatoria se minimiza. También presentamos un algoritmo de particionado multi-vía destinado a minimizar la desviación de los resultados a causa de la pérdida de correlación entre términos de ruido con el objetivo de mantener los resultados tan precisos como sea posible. La presente Tesis Doctoral también aborda el desarrollo de metodologías de optimización de anchura de palabra basadas en simulaciones de Monte-Carlo que se ejecuten en tiempos razonables.
Para ello presentamos dos nuevas técnicas que exploran la reducción del tiempo de ejecución desde distintos ángulos: En primer lugar, el método interpolativo aplica un interpolador sencillo pero preciso para estimar la sensibilidad de cada señal, y que es usado después durante la etapa de optimización. En segundo lugar, el método incremental gira en torno al hecho de que, aunque es estrictamente necesario mantener un intervalo de confianza dado para los resultados finales de nuestra búsqueda, podemos emplear niveles de confianza más relajados, lo cual deriva en un menor número de pruebas por simulación, en las etapas iniciales de la búsqueda, cuando todavía estamos lejos de las soluciones optimizadas. Mediante estas dos aproximaciones demostramos que podemos acelerar el tiempo de ejecución de los algoritmos clásicos de búsqueda voraz en factores de hasta x240 para problemas de tamaño pequeño/mediano. Finalmente, este libro presenta HOPLITE, una infraestructura de cuantificación automatizada, flexible y modular que incluye la implementación de las técnicas anteriores y se proporciona de forma pública. Su objetivo es ofrecer a desarrolladores e investigadores un entorno común para prototipar y verificar nuevas metodologías de cuantificación de forma sencilla. Describimos el flujo de trabajo, justificamos las decisiones de diseño tomadas, explicamos su API pública y hacemos una demostración paso a paso de su funcionamiento. Además mostramos, a través de un ejemplo sencillo, la forma en que conectar nuevas extensiones a la herramienta con las interfaces ya existentes para poder así expandir y mejorar las capacidades de HOPLITE.

ABSTRACT

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them: The techniques based on extensions of intervals have made it possible to obtain accurate models of the signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows only a 0.04% deviation with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that explore the reduction of the execution times by approaching the problem in two different ways: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer a common ground for developers and researchers to prototype and verify new techniques for system modelling and word-length optimization easily. We describe its workflow, justify the design decisions taken, explain its public API and provide a step-by-step demonstration of its execution. We also show, through an example, the way new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
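As an informal illustration of the word-length problem discussed above, the sketch below estimates the round-off noise of one candidate fixed-point assignment by Monte-Carlo simulation; the toy data path, word-lengths and function names are hypothetical and are not part of HOPLITE or the thesis methodology.

```python
# Illustrative sketch (independent of HOPLITE): Monte-Carlo estimation of the
# round-off noise introduced by a candidate fractional word-length assignment
# for a small non-linear data path y = x1*x2 + x1**2 (hypothetical example).
import numpy as np

def quantize(x, frac_bits):
    """Round to the nearest multiple of 2**-frac_bits (fixed-point rounding)."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def datapath(x1, x2, wl=None):
    """Evaluate the toy data path, optionally quantizing every intermediate signal."""
    if wl is None:                       # double-precision reference
        return x1 * x2 + x1 ** 2
    a = quantize(x1 * x2, wl["mult"])    # quantize each operator output
    b = quantize(x1 ** 2, wl["square"])
    return quantize(a + b, wl["add"])

rng = np.random.default_rng(1)
x1 = rng.uniform(-1.0, 1.0, 100_000)
x2 = rng.uniform(-1.0, 1.0, 100_000)

word_lengths = {"mult": 8, "square": 8, "add": 10}   # candidate assignment
noise = datapath(x1, x2, word_lengths) - datapath(x1, x2)
print(f"noise power ~ {np.mean(noise**2):.3e}, SQNR ~ "
      f"{10*np.log10(np.mean(datapath(x1, x2)**2)/np.mean(noise**2)):.1f} dB")
```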

Relevance:

100.00%

Publisher:

Abstract:

The original motivation for this paper was to provide an efficient quantitative analysis of convex infinite (or semi-infinite) inequality systems whose decision variables run over general infinite-dimensional (resp. finite-dimensional) Banach spaces and that are indexed by an arbitrary fixed set J. Parameter perturbations on the right-hand side of the inequalities are required to be merely bounded, and thus the natural parameter space is l∞(J). Our basic strategy consists of linearizing the parameterized convex system via splitting convex inequalities into linear ones by using the Fenchel–Legendre conjugate. This approach yields that arbitrary bounded right-hand side perturbations of the convex system turn into constant-by-blocks perturbations in the linearized system. Based on advanced variational analysis, we derive a precise formula for computing the exact Lipschitzian bound of the feasible solution map of block-perturbed linear systems, which involves only the system's data, and then show that this exact bound agrees with the coderivative norm of the aforementioned mapping. In this way we extend to the convex setting the results of Cánovas et al. (SIAM J. Optim. 20, 1504–1526, 2009) developed for arbitrary perturbations with no block structure in the linear framework under the boundedness assumption on the system's coefficients. The latter boundedness assumption is removed in this paper when the decision space is reflexive. The last section provides the intended application to the convex case.
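For readers unfamiliar with the linearization step, the following display recalls the standard Fenchel–Legendre conjugate identity on which such a splitting can be based; the notation is generic and is an assumption rather than a quotation from the paper.

```latex
% Sketch of the linearization step via the Fenchel--Legendre conjugate
% (standard convex duality; generic notation, not taken from the paper).
% For a proper, lower semicontinuous convex function $f_t$ one has
% $f_t = f_t^{**}$, so each convex inequality splits into a family of
% linear inequalities indexed by the dual variables $u$:
\[
  f_t(x) \le b_t
  \quad\Longleftrightarrow\quad
  \langle u, x\rangle - f_t^{*}(u) \le b_t
  \quad \text{for all } u \in \operatorname{dom} f_t^{*},
\]
\[
  \text{where } f_t^{*}(u) := \sup_{x}\,\bigl\{\langle u, x\rangle - f_t(x)\bigr\}.
\]
% A right-hand-side perturbation of $b_t$ shifts every linear inequality in the
% block indexed by $t$ by the same amount, i.e. it acts as a constant-by-blocks
% perturbation of the linearized system.
```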

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new method for producing a functional-structural plant model that simulates response to different growth conditions, yet does not require detailed knowledge of underlying physiology. The example used to present this method is the modelling of the mountain birch tree. This new functional-structural modelling approach is based on linking an L-system representation of the dynamic structure of the plant with a canonical mathematical model of plant function. Growth indicated by the canonical model is allocated to the structural model according to probabilistic growth rules, such as rules for the placement and length of new shoots, which were derived from an analysis of architectural data. The main advantage of the approach is that it is relatively simple compared to the prevalent process-based functional-structural plant models and does not require a detailed understanding of underlying physiological processes, yet it is able to capture important aspects of plant function and adaptability, unlike simple empirical models. This approach, combining canonical modelling, architectural analysis and L-systems, thus fills the important role of providing an intermediate level of abstraction between the two extremes of deeply mechanistic process-based modelling and purely empirical modelling. We also investigated the relative importance of various aspects of this integrated modelling approach by analysing the sensitivity of the standard birch model to a number of variations in its parameters, functions and algorithms. The results show that using light as the sole factor determining the structural location of new growth gives satisfactory results. Including the influence of additional regulating factors made little difference to global characteristics of the emergent architecture. Changing the form of the probability functions and using alternative methods for choosing the sites of new growth also had little effect.
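To make the L-system ingredient concrete, the sketch below implements a generic stochastic rewriting step with made-up symbols, rules and probabilities; it is not the birch model itself, whose rules were derived from architectural data.

```python
# Minimal sketch of a stochastic L-system rewriting step (illustrative only;
# the birch model's actual symbols, rules and probabilities are not reproduced here).
import random

# Hypothetical rules: each symbol maps to a list of (probability, successor) pairs.
RULES = {
    "A": [(0.7, "F[+A][-A]"),   # apical bud branches
          (0.3, "FA")],         # or simply extends
    "F": [(1.0, "FF")],         # internodes elongate deterministically
}

def rewrite(axiom: str, steps: int, rng: random.Random) -> str:
    s = axiom
    for _ in range(steps):
        out = []
        for ch in s:
            options = RULES.get(ch)
            if options is None:
                out.append(ch)            # symbols without rules are copied unchanged
                continue
            r, acc = rng.random(), 0.0
            for p, succ in options:
                acc += p
                if r <= acc:
                    out.append(succ)
                    break
        s = "".join(out)
    return s

print(rewrite("A", steps=3, rng=random.Random(42)))
```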

Relevance:

100.00%

Publisher:

Abstract:

This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
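As a minimal illustration of one of the benchmark systems, the sketch below simulates a stochastic double-well SDE with the Euler-Maruyama scheme and evaluates a naive Euler-discretised log-likelihood; the drift form, parameter values and helper names are assumptions for illustration and do not reproduce the variational algorithms developed in the thesis.

```python
# Illustrative sketch: Euler-Maruyama simulation of a stochastic double-well
# system plus a naive Gaussian (Euler) log-likelihood of the drift/diffusion
# parameters. The drift form f(x) = 4*x*(theta - x**2) and all values are
# assumptions for illustration only.
import numpy as np

def simulate(theta, sigma, x0=0.0, dt=0.01, n=20_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n); x[0] = x0
    for k in range(n - 1):
        drift = 4.0 * x[k] * (theta - x[k] ** 2)
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def euler_loglik(x, theta, sigma, dt=0.01):
    """Log-likelihood of the Euler discretisation (a crude but common baseline)."""
    drift = 4.0 * x[:-1] * (theta - x[:-1] ** 2)
    resid = x[1:] - x[:-1] - drift * dt
    var = sigma ** 2 * dt
    return -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

path = simulate(theta=1.0, sigma=0.7)
print(euler_loglik(path, theta=1.0, sigma=0.7))
print(euler_loglik(path, theta=0.5, sigma=0.7))   # mismatched drift: typically lower
```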

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines experimentally options for optical fibre transmission over oceanic distances. Its format follows the chronological evolution of ultra-long haul optical systems, commencing with opto-electronic regenerators as repeaters, progressing to optically amplified NRZ systems and finally solitonic propagation. In each case recirculating loop techniques are deployed to simplify the transmission experiments. Advances in high speed electronics have allowed regenerators operating at 10 Gbit/s to become a practical reality. By augmenting such devices with optical amplifiers it is possible to greatly enhance the repeater spacing. Work detailed in this thesis has culminated in the propagation of 10 Gbit/s data over 400,000 km with a repeater spacing of 160 km. System reliability and robustness are enhanced by the use of a directly modulated DFB laser transmitter and total insensitivity of the system to the signal state of polarisation. Optically amplified ultra-long haul NRZ systems have taken on particular importance with the impending deployment of TAT 12/13 and TPC 5. The performance of these systems is demonstrated to be primarily limited by analogue impairments such as the accumulation of amplifier noise, polarisation effects and optical non-linearities. These degradations may be reduced by the use of appropriate dispersion maps and by scrambling the transmitted state of signal polarisation. A novel high speed optically passive polarisation scrambler is detailed for the first time. At bit rates in excess of 10 Gbit/s it is shown that these systems are severely limited and do not offer the advantages that might be expected over regenerated links. Propagation using solitons as the data bits appears particularly attractive since the dispersive and non-linear effects of the fibre allow distortion-free transmission. However, the generation of pure solitons is difficult but must be achieved if the uncontrolled transmission distance is to be maximised. This thesis presents a new technique for the stabilisation of an erbium fibre ring laser that has allowed propagation of 2.5 Gbit/s solitons to the theoretical limit of ~18,000 km. At higher bit rates temporal jitter becomes a significant impairment and, to allow an increase in the aggregate line rate, multiplexing in both time and polarisation domains has been proposed. These techniques are shown to be of only limited benefit in practical systems and ultimately some form of soliton transmission control is required. The thesis demonstrates synchronous retiming by amplitude modulation that has allowed 20 Gbit/s data to propagate 125,000 km error-free with an amplifier spacing approaching the soliton period. Ultimately the speed of operation of such systems is limited by the electronics used and, thus, a new form of soliton control is demonstrated using all-optical techniques to achieve synchronous phase modulation.
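For background, the standard textbook relations for a fundamental fibre soliton and the soliton period referred to above are recalled below; they are generic formulas, not results from the thesis.

```latex
% Background sketch (standard fibre-soliton relations, not results from the thesis).
% A fundamental soliton of peak power $P_0$ and width $T_0$ in a fibre with
% group-velocity dispersion $\beta_2<0$ and Kerr coefficient $\gamma$ satisfies
\[
  N^2 = \frac{\gamma P_0 T_0^2}{|\beta_2|} = 1,
  \qquad
  z_0 = \frac{\pi}{2}\,L_D = \frac{\pi}{2}\,\frac{T_0^2}{|\beta_2|},
\]
% where $z_0$ is the soliton period mentioned above in connection with the
% amplitude-modulation retiming experiment (amplifier spacing approaching $z_0$).
```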

Relevance:

100.00%

Publisher:

Abstract:

This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (3D model). As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz '96 system. In our investigation we compare this new approach with a variety of other well known methods, such as the hybrid Monte Carlo, dual unscented Kalman filter and full weak-constraint 4D-Var algorithms, and empirically analyse their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.
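Complementing the sketch given earlier for the double-well system, the snippet below uses the closed-form transition density of the Ornstein–Uhlenbeck process, the one benchmark above with an analytically computable likelihood; the assumed SDE form, parameter values and function names are illustrative and unrelated to the paper's variational machinery.

```python
# Illustrative sketch: exact likelihood of the Ornstein-Uhlenbeck process,
# dx = -theta*(x - mu)*dt + sigma*dW (assumed form), which makes it a useful
# ground-truth benchmark for parameter estimation.
import numpy as np

def ou_exact_loglik(x, dt, theta, mu, sigma):
    """Exact Gaussian transition likelihood of an OU path sampled every dt."""
    a = np.exp(-theta * dt)
    mean = mu + (x[:-1] - mu) * a
    var = sigma ** 2 * (1.0 - a ** 2) / (2.0 * theta)
    resid = x[1:] - mean
    return -0.5 * np.sum(resid ** 2 / var + np.log(2.0 * np.pi * var))

# Simulate a path with the exact discretisation and check that the likelihood
# peaks near the true parameters (a sanity check, not a full optimisation).
rng = np.random.default_rng(3)
theta_true, mu_true, sigma_true, dt, n = 2.0, 0.0, 1.0, 0.05, 50_000
a = np.exp(-theta_true * dt)
sd = sigma_true * np.sqrt((1 - a ** 2) / (2 * theta_true))
x = np.empty(n); x[0] = mu_true
for k in range(n - 1):
    x[k + 1] = mu_true + (x[k] - mu_true) * a + sd * rng.standard_normal()

for theta in (1.0, 2.0, 4.0):
    print(theta, ou_exact_loglik(x, dt, theta, mu_true, sigma_true))
```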

Relevance:

100.00%

Publisher:

Abstract:

Reversed-phase high-performance liquid chromatographic (HPLC) methods were developed for the assay of indomethacin, its decomposition products, ibuprofen and its (tetrahydro-2-furanyl)methyl-, (tetrahydro-2-(2H)pyranyl)methyl- and cyclohexylmethyl esters. The development and application of these HPLC systems were studied. A number of physico-chemical parameters that affect percutaneous absorption were investigated. The pKa values of indomethacin and ibuprofen were determined using the solubility method. Potentiometric titration and the Taft equation were also used for ibuprofen. The incorporation of ethanol or propylene glycol in the solvent resulted in an improvement in the aqueous solubility of these compounds. The partition coefficients were evaluated in order to establish the affinity of these drugs towards the stratum corneum. The stability of indomethacin and of the ibuprofen esters was investigated, and the effect of temperature and pH on the decomposition rates was studied. The effect of cetyltrimethylammonium bromide on the alkaline degradation of indomethacin was also followed. In the presence of alcohol, indomethacin alcoholysis was observed; the kinetics of decomposition were subjected to non-linear regression analysis and the rate constants for the various pathways were quantified. The non-isothermal, surfactant non-isoconcentration and non-isopH degradation of indomethacin were investigated. The analysis of the data was undertaken using NONISO, a BASIC computer program. The degradation profiles obtained from both non-iso and iso-kinetic studies show that there is close concordance in the results. The metabolic biotransformation of ibuprofen esters was followed using esterases from hog liver and rat skin homogenates. The results showed that the esters were very labile under these conditions. The presence of propylene glycol affected the rates of enzymic hydrolysis of the esters. The hydrolysis is modelled using an equation involving the dielectric constant of the medium. The percutaneous absorption of indomethacin and of ibuprofen and its esters was followed from solutions using an in vitro excised human skin model. The absorption profiles followed first order kinetics. The diffusion process was related to their solubility and to the human skin/solvent partition coefficient. The percutaneous absorption of two ibuprofen esters from suspensions in 20% propylene glycol-water was also followed through rat skin, with only ibuprofen being detected in the receiver phase. The sensitivity of ibuprofen esters to enzymic hydrolysis compared to chemical hydrolysis may prove valuable in the formulation of topical delivery systems.
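As a small illustration of the kind of non-linear regression used for the degradation kinetics, the sketch below fits a (pseudo-)first-order decay to synthetic concentration data; the data, rate constant and sampling times are invented for the example.

```python
# Illustrative sketch (synthetic data, not the thesis measurements): non-linear
# regression of a (pseudo-)first-order degradation profile C(t) = C0*exp(-k*t),
# the kind of fit used to quantify decomposition / ester hydrolysis rate constants.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

t = np.linspace(0, 24, 13)                                         # hours (hypothetical sampling)
rng = np.random.default_rng(7)
conc = first_order(t, 100.0, 0.15) + rng.normal(0, 1.5, t.size)    # % drug remaining, with noise

popt, pcov = curve_fit(first_order, t, conc, p0=(90.0, 0.1))
c0_hat, k_hat = popt
half_life = np.log(2) / k_hat
print(f"k = {k_hat:.3f} 1/h, t1/2 = {half_life:.1f} h")
```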

Relevance:

100.00%

Publisher:

Abstract:

We investigate electronic mitigation of linear and non-linear fibre impairments and compare various digital signal processing techniques, including electronic dispersion compensation (EDC), single-channel back-propagation (SC-BP) and back-propagation with multiple channel processing (MC-BP) in a nine-channel 112 Gb/s PM-mQAM (m=4,16) WDM system, for reaches up to 6,320 km. We show that, for a sufficiently high local dispersion, SC-BP provides a significant performance enhancement when compared to EDC, and is adequate to achieve a BER below the FEC threshold. Under these conditions, we report that a sampling rate of two samples per symbol is sufficient for practical SC-BP, without significant penalties.
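To clarify what single-channel back-propagation involves, the sketch below passes a received field through a split-step solver of the scalar, lossless NLSE with negated fibre parameters; the simplifications and parameter values are assumptions and do not reflect the polarisation-multiplexed, multi-channel configuration studied above.

```python
# Minimal sketch of single-channel digital back-propagation (SC-BP): the received
# field is propagated through a "virtual" fibre with the sign of dispersion and
# non-linearity reversed. Scalar, lossless simplification for illustration only.
import numpy as np

def split_step(field, fs, length, n_steps, beta2, gamma):
    """Symmetric split-step Fourier solver for the lossless scalar NLSE."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=1 / fs)
    dz = length / n_steps
    half_disp = np.exp(0.5j * beta2 * w ** 2 * (dz / 2))     # half-step dispersion operator
    a = field.astype(complex)
    for _ in range(n_steps):
        a = np.fft.ifft(np.fft.fft(a) * half_disp)
        a *= np.exp(1j * gamma * np.abs(a) ** 2 * dz)        # full-step Kerr non-linearity
        a = np.fft.ifft(np.fft.fft(a) * half_disp)
    return a

def back_propagate(rx, fs, length, n_steps, beta2, gamma):
    """SC-BP: undo dispersion and Kerr effect by negating the fibre parameters."""
    return split_step(rx, fs, length, n_steps, -beta2, -gamma)

# Hypothetical usage: 80 km span, beta2 = -21 ps^2/km, gamma = 1.3 /(W km).
rx = np.ones(1024, dtype=complex)
out = back_propagate(rx, fs=50e9, length=80e3, n_steps=20, beta2=-21e-27, gamma=1.3e-3)
```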

Relevance:

100.00%

Publisher:

Abstract:

Timing jitter is a major factor limiting the performance of any high-speed, long-haul data transmission system. It arises for a number of reasons, such as interaction with accumulated spontaneous emission, inter-symbol interference (ISI), electrostriction, etc. Some effects causing timing jitter can be reduced by means of non-linear filtering, using, for example, a nonlinear optical loop mirror (NOLM) [1]. The NOLM has been shown to reduce timing jitter by suppressing the ASE and by stabilising the pulse duration [2, 3]. In this paper, we investigate the dynamics of timing jitter in a 2R regenerated system, nonlinearly guided by NOLMs, at bit rates of 10, 20, 40 and 80 Gbit/s. Transmission performance of an equivalent non-regenerated (generic) system is taken as a reference.
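For context, the idealized power-transfer characteristic of a NOLM (after Doran and Wood) is recalled below; it is a standard textbook expression, included only to illustrate why low-power noise is suppressed, and is not taken from the paper.

```latex
% Standard idealized power-transfer characteristic of a nonlinear optical loop
% mirror (after Doran & Wood); included for context, not taken from the paper.
% For a coupler with power-splitting ratio $\rho$, loop length $L$ and Kerr
% coefficient $\gamma$, the transmitted power for input power $P_{\mathrm{in}}$ is
\[
  P_{\mathrm{out}} = P_{\mathrm{in}}
  \Bigl[\, 1 - 2\rho(1-\rho)\bigl(1 + \cos\bigl[(1-2\rho)\,\gamma P_{\mathrm{in}} L\bigr]\bigr) \Bigr],
\]
% so low-power noise in the "zero" slots is largely reflected while pulses near
% the switching power are transmitted, which is the 2R mechanism exploited above.
```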

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computer-intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, recently graphics processing unit (GPU) based data processing methods have been developed to minimise this data processing and rendering time. These techniques include standard-processing methods, which comprise a set of algorithms to process the raw (interference) data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time. Processing throughput of this system is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
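As a rough illustration of the standard FD-OCT processing chain mentioned above, the sketch below turns raw interference spectra into A-scans with background subtraction, windowing and an FFT; the array shapes, the synthetic fringe and the suggestion of CuPy as a GPU drop-in are assumptions, and k-linearisation and dispersion compensation are deliberately omitted.

```python
# Illustrative sketch of "standard processing" for FD-OCT raw spectra: background
# subtraction, windowing and one FFT per A-scan. Simplified; not the custom
# pipeline built in the thesis.
import numpy as np
# import cupy as np   # near drop-in replacement to run the same code on a GPU

def to_ascans(spectra):
    """spectra: (n_ascans, n_pixels) raw interference spectra -> depth profiles."""
    dc = spectra.mean(axis=1, keepdims=True)          # crude DC/background estimate per A-scan
    ac = (spectra - dc) * np.hanning(spectra.shape[1])
    depth = np.abs(np.fft.fft(ac, axis=1))            # FFT per A-scan gives the depth profile
    return depth[:, : spectra.shape[1] // 2]          # keep the unambiguous half

# Synthetic example: one reflector produces a fringe of fixed spatial frequency.
pix = np.arange(2048)
spectra = 1.0 + 0.2 * np.cos(2 * np.pi * 90 * pix / 2048)
print(to_ascans(np.tile(spectra, (4, 1))).argmax(axis=1))   # peaks near depth bin 90
```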

Relevance:

100.00%

Publisher:

Abstract:

This paper examines some classes of linear and non-linear analytical systems of partial differential equations. Compatibility conditions are found and, if they are satisfied, the solutions are given as functional series in a neighborhood of a given point (x = 0).

Relevance:

100.00%

Publisher:

Abstract:

Rolling Isolation Systems provide a simple and effective means for protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. This mathematical model is then used to find the bowl profile that minimizes the response acceleration subject to a displacement constraint.
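For reference, Gauss's Principle in the constrained-quadratic-programming form alluded to above can be written as follows; the notation is generic and is not copied from the thesis.

```latex
% Sketch of Gauss's Principle of Least Constraint in the form used for
% constrained multibody models (generic notation, not copied from the thesis).
% With mass matrix $M(q)$, applied forces $Q(q,\dot q)$ and acceleration-level
% constraints $A(q,\dot q)\,\ddot q = b(q,\dot q)$ (here, the nonholonomic
% rolling constraints), the true accelerations solve the constrained quadratic program
\[
  \ddot q \;=\; \arg\min_{a}\ \tfrac12\,\bigl(a - M^{-1}Q\bigr)^{\!\top} M \,\bigl(a - M^{-1}Q\bigr)
  \quad \text{subject to}\quad A\,a = b ,
\]
% i.e. at every instant the accelerations deviate as little as possible (in the
% mass-weighted norm) from the unconstrained accelerations $M^{-1}Q$.
```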

Relevance:

100.00%

Publisher:

Abstract:

This work presents a computational code, called MOMENTS, developed for use in process control to determine a characteristic transfer function of industrial units when radiotracer techniques are applied to study the unit's performance. The methodology is based on measuring the residence time distribution (RTD) function and calculating the first and second temporal moments of the tracer data obtained by two NaI scintillation detectors positioned to register the complete movement of the tracer inside the unit. A non-linear regression technique was used to fit various mathematical models, and a statistical test was used to select the best result for the transfer function. Using the MOMENTS code, twelve different models can be used to fit a curve and calculate technical parameters for the unit.
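As a simple illustration of the moment analysis described above, the sketch below computes the first and second temporal moments of a synthetic RTD curve; it is not the MOMENTS code, and the detector data, sampling and function names are invented for the example.

```python
# Illustrative sketch (not the MOMENTS code itself): first and second temporal
# moments of a residence-time-distribution curve recorded by a detector,
# using simple trapezoidal integration on synthetic tracer data.
import numpy as np

def rtd_moments(t, c):
    """Mean residence time and variance from a tracer response c(t)."""
    area = np.trapz(c, t)
    e = c / area                            # normalised RTD, E(t)
    tau = np.trapz(t * e, t)                # first moment: mean residence time
    var = np.trapz((t - tau) ** 2 * e, t)   # second central moment
    return tau, var

t = np.linspace(0, 300, 601)                # s (hypothetical sampling grid)
c = (t / 40.0) * np.exp(-t / 40.0)          # synthetic gamma-like detector response
tau, var = rtd_moments(t, c)
print(f"mean residence time = {tau:.1f} s, variance = {var:.1f} s^2")
```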