981 results for Numerical optimization
Abstract:
Master's in Oceanography
Abstract:
This Ph.D. thesis presents a general, robust methodology that may cover any type of 2D acoustic optimization problem. A procedure coupling Boundary Elements (BE) and Evolutionary Algorithms is proposed for systematic geometric modification of road barriers, leading to designs with ever-increasing screening performance. Numerical simulations involving single- and multi-objective optimizations of noise barriers of varied nature are included in this document. The results disclosed justify the implementation of this methodology: the optimal solutions obtained for previously defined topologies generally far exceed the acoustic efficiency of the classical, widely used barrier designs normally erected near roads.
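A minimal sketch of the coupling described above, under stated assumptions: the barrier profile is encoded as a vector of edge-node heights, and insertion_loss is a hypothetical stand-in for the 2D BE solver (the real fitness would come from a boundary-element simulation).

```python
import numpy as np

rng = np.random.default_rng(0)

def insertion_loss(profile):
    """Hypothetical stand-in for the 2D Boundary Element solver: returns a
    scalar screening-performance score for a barrier profile given as a
    vector of edge-node heights (m)."""
    return profile.sum() - 0.5 * np.abs(np.diff(profile)).sum()

def evolve(n_genes=8, pop_size=20, generations=50, h_max=4.0):
    # Random initial population of candidate barrier profiles.
    pop = rng.uniform(0.0, h_max, size=(pop_size, n_genes))
    for _ in range(generations):
        fitness = np.array([insertion_loss(p) for p in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]      # truncation selection
        children = parents + rng.normal(0.0, 0.1, parents.shape)  # Gaussian mutation
        children = np.clip(children, 0.0, h_max)                  # geometric feasibility
        pop = np.vstack([parents, children])
    return pop[np.argmax([insertion_loss(p) for p in pop])]

print(evolve())
```

In the actual methodology each fitness evaluation is an expensive BE run, which is why the systematic, population-based search pays off over manual redesign.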
Abstract:
To continuously improve the performance of metal-oxide-semiconductor field-effect transistors (MOSFETs), innovative device architectures, gate stack engineering and mobility enhancement techniques are under investigation. In this framework, new physics-based models for Technology Computer-Aided Design (TCAD) simulation tools are needed to accurately predict the performance of upcoming nanoscale devices and to provide guidelines for their optimization. In this thesis, advanced physically-based mobility models for ultrathin body (UTB) devices with either planar or vertical architectures, such as single-gate silicon-on-insulator (SOI) field-effect transistors (FETs), double-gate FETs, FinFETs and silicon nanowire FETs, integrating strain technology and high-κ gate stacks, are presented. The effective mobility of the two-dimensional electron/hole gas in a UTB FET channel is calculated taking into account its tensorial nature and the quantization effects. All the scattering events relevant for thin silicon films and for high-κ dielectrics and metal gates have been addressed and modeled for UTB FETs on differently oriented substrates. The effects of mechanical stress on (100) and (110) silicon band structures have been modeled for a generic stress configuration. Performance will also derive from heterogeneity, i.e., the increasing diversity of functions integrated on complementary metal-oxide-semiconductor (CMOS) platforms. For example, new architectural concepts are of interest not only to extend the FET scaling process, but also to develop innovative sensor applications. Benefiting from properties such as a large surface-to-volume ratio and extreme sensitivity to surface modifications, silicon-nanowire-based sensors are gaining special attention in research. In this thesis, a comprehensive analysis of the physical effects playing a role in the detection of gas molecules is carried out by TCAD simulations combined with interface characterization techniques. The complex interaction of charge transport in silicon nanowires of different dimensions with interface trap states and remote charges is addressed to correctly reproduce experimental results of recently fabricated gas nanosensors.
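Independent scattering mechanisms of the kind enumerated above are commonly combined via Matthiessen's rule, 1/mu_eff = sum_i 1/mu_i. A one-liner illustration (the component mobilities below are made-up numbers, not values from the thesis):

```python
# Matthiessen's rule: 1/mu_eff = sum_i 1/mu_i, combining independent
# scattering mechanisms (phonon, surface roughness, remote Coulomb, ...).
mobilities_cm2_Vs = {"phonon": 400.0, "surface_roughness": 250.0, "remote_coulomb": 600.0}
mu_eff = 1.0 / sum(1.0 / mu for mu in mobilities_cm2_Vs.values())
print(f"effective mobility ~ {mu_eff:.1f} cm^2/(V s)")
```

Note that the rule is only an approximation when mechanisms are correlated, which is one reason physically detailed models such as those in the thesis are needed.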
Abstract:
Traditionally, the study of internal combustion engine operation has focused on steady-state performance. However, the daily driving schedule of automotive engines is inherently related to unsteady conditions, and various operating conditions experienced by (diesel) engines can be classified as transient. Besides the variation of the engine operating point, in terms of engine speed and torque, the warm-up phase can also be considered a transient condition. Chapter 2 deals with this thermal transient condition; more precisely, its main focus is the performance of a Selective Catalytic Reduction (SCR) system during the cold-start and warm-up phases of the engine. The aim of the underlying work is to investigate and identify optimal exhaust-line heating strategies that provide fast activation of the catalytic reactions in the SCR system. Chapters 3 and 4 focus on the dynamic behavior of the engine under typical driving conditions. The common approach to dynamic optimization involves the solution of a single optimal-control problem. However, this approach requires models that are valid throughout the whole engine operating range and actuator ranges, and the result of the optimization is meaningful only if the model is very accurate. Chapter 3 proposes a methodology to circumvent those demanding requirements: an iteration between transient measurements, used to refine a purpose-built model, and a dynamic optimization constrained to the model's validity region. Moreover, all numerical methods required to implement this procedure are presented. Chapter 4 proposes an approach to derive a transient feedforward control system in an automated way. It relies on optimal control theory to solve a dynamic optimization problem for fast transients. From the optimal solutions, the relevant information is extracted and stored in maps spanned by the engine speed and the torque gradient.
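A toy sketch of the Chapter 3 iteration, under assumptions of my own: measure_plant stands in for a transient measurement, the purpose-built model is a simple polynomial fit, and the optimization is confined to a trust region around the data already visited (the model validity region).

```python
import numpy as np
from scipy.optimize import minimize

def measure_plant(u):
    """Hypothetical transient measurement: true (unknown) plant response."""
    return (u - 1.2) ** 2 + 0.05 * np.sin(5 * u)

def fit_model(u_data, y_data):
    """Refine a purpose-built quadratic model from measured transients."""
    coeffs = np.polyfit(u_data, y_data, 2)
    return lambda u: np.polyval(coeffs, u)

u_data = [0.0, 0.5, 2.0]
y_data = [measure_plant(u) for u in u_data]
u_star = 0.5
for _ in range(5):
    model = fit_model(u_data, y_data)
    # Optimize, constrained to a neighborhood of the measured data
    # (the model validity region).
    lo, hi = min(u_data) - 0.2, max(u_data) + 0.2
    res = minimize(lambda u: model(u[0]), x0=[u_star], bounds=[(lo, hi)])
    u_star = float(res.x[0])
    u_data.append(u_star)
    y_data.append(measure_plant(u_star))   # new measurement refines the model
print("optimized actuation:", round(u_star, 3))
```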
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third-largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show large errors in comparison with experimental data. Thus, even today, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time-consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder dates back more than twenty years, and that work had only limited success because of the computers and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments and material characterization experiments. The Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. The Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the depth of the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
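A minimal sketch of the two-parameter geometry optimization described above: maximize throughput over screw lead and metering depth subject to an exit-temperature ceiling. Both output_rate and exit_temperature are hypothetical surrogates standing in for the 3-D two-phase flow simulation; the bounds and constants are made up.

```python
from scipy.optimize import minimize

def output_rate(lead, depth):
    """Hypothetical surrogate for extruder throughput (kg/h)."""
    return 40.0 * lead * depth / (1.0 + depth ** 2)

def exit_temperature(lead, depth):
    """Hypothetical surrogate for melt temperature at the die (C)."""
    return 180.0 + 25.0 * lead / depth

T_MAX = 230.0
res = minimize(
    lambda x: -output_rate(*x),                       # maximize throughput
    x0=[1.0, 0.5],
    bounds=[(0.5, 2.0), (0.2, 1.0)],                  # lead (pitch), channel depth
    constraints=[{"type": "ineq",
                  "fun": lambda x: T_MAX - exit_temperature(*x)}],
    method="SLSQP",
)
print("optimal lead, depth:", res.x, " rate:", -res.fun)
```

In the thesis each objective/constraint evaluation is a full finite element run, so the structure is the same but each function call is far more expensive.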
Abstract:
We investigate a class of optimal control problems that exhibit constant, exogenously given delays in the control within the equation of motion of the differential states. To this end, we formulate an exemplary optimal control problem with one stock and one control variable and review some analytic properties of an optimal solution. However, analytical considerations are quite limited in the case of delayed optimal control problems. In order to overcome these limits, we reformulate the problem and apply direct numerical methods to calculate approximate solutions that give a better understanding of this class of optimization problems. In particular, we present two possibilities for reformulating the delayed optimal control problem into an instantaneous optimal control problem and show how these can be solved numerically with a state-of-the-art direct method, namely Bock's direct multiple shooting algorithm. We further demonstrate the strength of our approach with two economic examples.
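One way to see why a constant control delay becomes harmless under direct transcription: with piecewise-constant controls on a uniform grid, the delay is just an integer index shift, turning the delayed problem into an instantaneous one. A toy single-shooting sketch (the dynamics, cost, and delay are my own illustrative choices; the paper uses Bock's multiple shooting, which is more elaborate):

```python
import numpy as np
from scipy.optimize import minimize

# x'(t) = -x(t) + u(t - tau), piecewise-constant u on a uniform grid.
N, dt, tau = 40, 0.1, 0.3
d = round(tau / dt)                           # delay in grid intervals

def simulate(u):
    x, cost = 1.0, 0.0
    for k in range(N):
        u_del = u[k - d] if k >= d else 0.0   # given control history before t=0
        x = x + dt * (-x + u_del)             # explicit Euler step
        cost += dt * (x ** 2 + 0.1 * u[k] ** 2)
    return cost

res = minimize(simulate, x0=np.zeros(N), method="L-BFGS-B")
print("optimal cost:", round(res.fun, 4))
```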
Abstract:
This paper deals with “The Enchanted Journey,” which is a daily event tour booked by Bollywood-film fans. During the tour, the participants visit original sites of famous Bollywood films at various locations in Switzerland; moreover, the tour includes stops for lunch and shopping. Each day, up to five buses operate the tour. For operational reasons, however, two or more buses cannot stay at the same location simultaneously. Further operative constraints include time windows for all activities and precedence constraints between some activities. The planning problem is how to compute a feasible schedule for each bus. We implement a two-step hierarchical approach. In the first step, we minimize the total waiting time; in the second step, we minimize the total travel time of all buses. We present a basic formulation of this problem as a mixed-integer linear program. We enhance this basic formulation by symmetry-breaking constraints, which reduces the search space without loss of generality. We report on computational results obtained with the Gurobi Solver. Our numerical results show that all relevant problem instances can be solved using the basic formulation within reasonable CPU time, and that the symmetry-breaking constraints reduce that CPU time considerably.
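A minimal illustration of the symmetry-breaking idea, using the Gurobi Python API the paper's experiments rely on: since the buses are interchangeable, ordering their departure times removes permuted duplicates of every schedule from the search space. The variables and objective below are a deliberately stripped-down stand-in, not the paper's formulation.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("tour")
n_buses = 5
start = m.addVars(n_buses, lb=0.0, ub=10.0, name="start")  # departure times (h)

# Symmetry breaking: identical buses, so impose a canonical ordering.
for b in range(n_buses - 1):
    m.addConstr(start[b] <= start[b + 1], name=f"symbreak_{b}")

m.setObjective(gp.quicksum(start[b] for b in range(n_buses)), GRB.MINIMIZE)
m.optimize()
```

Such constraints are valid without loss of generality precisely because every feasible schedule has an equivalent representative satisfying the ordering.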
Abstract:
SOMS is a general surrogate-based multistart algorithm, used in combination with any local optimizer to find global optima for computationally expensive functions with multiple local minima. SOMS differs from previous multistart methods in that a surrogate approximation is used by the multistart algorithm to help reduce the number of function evaluations necessary to identify the most promising points from which to start each nonlinear programming local search. SOMS's numerical results are compared with four well-known methods, namely Multi-Level Single Linkage (MLSL), MATLAB's MultiStart, MATLAB's GlobalSearch, and GLOBAL. In addition, we propose a class of wavy test functions that mimic the wavy nature of objective functions arising in many black-box simulations. Extensive comparisons of the algorithms on the wavy test functions and on earlier standard global-optimization test functions are carried out for a total of 19 different test problems. The numerical results indicate that SOMS performs favorably in comparison to the alternative methods and does especially well on wavy functions when the number of function evaluations allowed is limited.
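A toy sketch of the surrogate-assisted multistart idea (mirroring SOMS only in spirit, not the published algorithm): fit a cheap RBF surrogate to the points evaluated so far, rank a large candidate set by surrogate value, and spend the expensive local searches only on the most promising starts.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def f(x):   # expensive black-box objective (toy multimodal stand-in)
    return np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x ** 2).sum()

# Evaluate an initial space-filling sample and fit an RBF surrogate.
X = rng.uniform(-2, 2, size=(40, 2))
y = np.array([f(x) for x in X])
surrogate = RBFInterpolator(X, y)

# Rank many cheap candidates by surrogate value; start local searches
# only from the three most promising ones.
cand = rng.uniform(-2, 2, size=(2000, 2))
starts = cand[np.argsort(surrogate(cand))[:3]]
best = min((minimize(f, x0=s, bounds=[(-2, 2)] * 2) for s in starts),
           key=lambda r: r.fun)
print("best point:", best.x, " value:", best.fun)
```

The saving comes from the 2000 surrogate evaluations costing essentially nothing compared with even one local search on the true function.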
Abstract:
The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, a short scan time is desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion is contained within a volume-of-interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction: in a study designed to simulate target motion, it produced 43% lower least-squares error inside the VOI and 84% lower error throughout the image. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
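A linearized toy sketch of the VOI idea (my own simplification, not the dissertation's algorithm): if motion is confined to a known VOI, the static voxels can be fixed to a prior image and the undersampled least-squares problem solved only over the VOI voxels, which is far better conditioned.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear reconstruction b = A @ x, with motion confined to a VOI.
n_vox, n_meas = 60, 45                        # undersampled: fewer rays than voxels
A = rng.normal(size=(n_meas, n_vox))          # stand-in projection matrix
x_static = rng.uniform(0.5, 1.0, n_vox)       # prior (static) image
voi = np.zeros(n_vox, bool)
voi[20:30] = True
x_true = x_static.copy()
x_true[voi] += 0.5                            # motion only inside the VOI
b = A @ x_true

# Subtract the known static contribution, then solve least squares
# for the VOI voxels only (10 unknowns instead of 60).
resid = b - A[:, ~voi] @ x_static[~voi]
x_voi, *_ = np.linalg.lstsq(A[:, voi], resid, rcond=None)
print("VOI reconstruction error:", np.linalg.norm(x_voi - x_true[voi]))
```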
Abstract:
The algorithms and graphical-user-interface software package "OPT-PROx" were developed to meet food engineering needs related to canned food thermal processing simulation and optimization. The adaptive random search algorithm and its modification coupled with a penalty-function approach, together with finite difference methods with cubic spline approximation, are utilized by the "OPT-PROx" package (http://tomakechoice.com/optprox/index.html). The diversity of thermal food processing optimization problems with different objectives and required constraints is solvable by the developed software. The geometries supported by "OPT-PROx" are the following: (1) cylinder, (2) rectangle, (3) sphere. The mean-square-error minimization principle is utilized to estimate the heat transfer coefficient of the food to be heated under optimal conditions. The user-friendly dialogue and the numerical procedures employed make the "OPT-PROx" software useful to food scientists in research and education, as well as to engineers involved in the optimization of thermal food processing.
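A minimal sketch of an adaptive random search coupled with a penalty function, in the spirit of the method named above (the objective and constraint are toy stand-ins, not OPT-PROx internals): the step size expands on success and contracts on failure, while constraint violations are charged to the objective.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):                 # e.g. a process cost to be minimized
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def penalty(x):                   # quadratic penalty for a constraint g(x) <= 0
    g = x[0] + x[1] - 0.5
    return 1e3 * max(g, 0.0) ** 2

x, step = np.zeros(2), 1.0
fx = objective(x) + penalty(x)
for _ in range(2000):
    cand = x + rng.normal(0.0, step, 2)
    fc = objective(cand) + penalty(cand)
    if fc < fx:
        x, fx, step = cand, fc, step * 1.1    # success: expand step
    else:
        step = max(step * 0.95, 1e-4)         # failure: contract step
print("solution:", x.round(3))
```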
Abstract:
Finding the optimum mesh design in the finite element method under the restriction of a given number of degrees of freedom is an interesting problem, particularly in applications of the method. At present, the usual procedures introduce new degrees of freedom (remeshing) into a given mesh in order to obtain a more adequate one from the point of view of the calculation results (error uniformity). However, from the solution of the optimum mesh problem with a specific number of degrees of freedom, some useful recommendations and criteria for mesh construction may be drawn. For 1-D problems, namely for simple truss and beam elements, analytical solutions have been found and are given in this paper. For the more complex 2-D problems (plane stress and plane strain), numerical methods based on optimization procedures have to be used to obtain the optimum mesh. The objective function used in the minimization process is the total potential energy. Some examples are presented. Finally, some conclusions and hints about possible new developments of these techniques are given.
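A 1-D sketch of the idea, under assumptions of my own (a bar fixed at x = 0 with axial load q(x) = x and linear elements): for a fixed number of DOFs, place the interior nodes so that the FE total potential energy is minimized, since the FE solution's potential energy decreases as the discretization improves.

```python
import numpy as np
from scipy.optimize import minimize

EA, L, n_el = 1.0, 1.0, 4      # stiffness, bar length, number of elements

def potential_energy(interior):
    nodes = np.concatenate(([0.0], np.sort(interior), [L]))
    n = len(nodes)
    K = np.zeros((n, n))
    f = np.zeros(n)
    for e in range(n - 1):
        h = max(nodes[e + 1] - nodes[e], 1e-9)          # guard degenerate elements
        K[e:e + 2, e:e + 2] += EA / h * np.array([[1, -1], [-1, 1]])
        q_mid = 0.5 * (nodes[e] + nodes[e + 1])          # q(x) = x at midpoint
        f[e:e + 2] += q_mid * h / 2                      # lumped element load
    u = np.zeros(n)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])            # clamp u(0) = 0
    return -0.5 * f @ u           # Pi = 1/2 u^T K u - f^T u = -1/2 f^T u at solution

x0 = np.linspace(0, L, n_el + 1)[1:-1]                   # uniform starting mesh
res = minimize(potential_energy, x0, bounds=[(0.01, L - 0.01)] * (n_el - 1))
print("optimal interior nodes:", np.sort(res.x).round(3))
```

The optimized nodes crowd toward the heavily loaded end, which is exactly the kind of grading criterion the paper aims to extract.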
Abstract:
The study of magnesium and its alloys is a hot research topic in structural materials. In particular, special attention is being paid to understanding the relationship between microstructure and mechanical behavior in order to optimize current alloy microstructures and guide the design of new alloys. However, the particular effect of several microstructural factors (precipitate shape, size and orientation, grain morphology distribution, etc.) on the mechanical performance of a Mg alloy is still under study. The combination of advanced characterization techniques and modeling at several length scales is necessary to improve the understanding of the relation between microstructure and mechanical behavior. With respect to simulation techniques, polycrystalline homogenization is a very useful tool to predict the macroscopic response from the polycrystalline microstructure (grain size, shape and orientation distributions) and the crystal behavior. The microstructure description is fully covered by modern characterization techniques (X-ray diffraction, EBSD, optical and electron microscopy). However, the mechanical behavior of single crystals is not well known, especially in Mg alloys, where the correct parameterization of the mechanical behavior of the different slip/twin modes is a very difficult task. A computational homogenization framework for predicting the behavior of magnesium alloys has been developed in this thesis. The polycrystalline behavior was obtained by means of the finite element simulation of a representative volume element (RVE) of the microstructure, including the actual grain shape and orientation distributions. The crystal behavior of the grains was accounted for by a crystal plasticity model that took into account the physical deformation mechanisms, i.e. slip and twinning. Finally, the parameterization of the crystal behavior (critical resolved shear stresses (CRSS) and strain hardening rates of all the slip and twinning modes) was solved by the development of an inverse optimization methodology, one of the main original contributions of this thesis. The inverse methodology aims at finding, by means of the Levenberg-Marquardt optimization algorithm, the set of parameters defining the crystal behavior that best fits a set of independent macroscopic tests. The objectivity of the method and the uniqueness of the solution as a function of the input information have been numerically studied. The inverse optimization strategy was first used to obtain the crystal behavior of a rolled polycrystalline AZ31 Mg alloy that shows a marked basal texture and a strong plastic anisotropy. Four different deformation mechanisms (basal, prismatic and pyramidal ⟨c+a⟩ slip, together with tensile twinning) were included to characterize the single-crystal behavior. The validity of the resulting parameters was proved by the ability of the polycrystalline model to predict independent macroscopic tests in different directions. Secondly, the influence of neodymium (Nd) content on an extruded polycrystalline Mg-Mn-Nd alloy was studied using the same homogenization and optimization framework. The effect of Nd addition was a progressive isotropization of the macroscopic behavior. The model showed that this increase in macroscopic isotropy was due to a randomization of the initial texture and also to an increase in the isotropy of the crystal behavior (similar values of the CRSSs of the different modes). Finally, the model was used to analyze the effect of temperature on the crystal behavior of the Mg-Mn-Nd alloy. The introduction into the model of non-Schmid effects on the pyramidal ⟨c+a⟩ slip made it possible to capture the inverse strength differential that appears between tension and compression above 150 °C. This is the first time, to the author's knowledge, that non-Schmid effects have been reported for Mg alloys.
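A minimal sketch of the inverse Levenberg-Marquardt identification described above, under stated assumptions: macro_stress is a hypothetical two-parameter hardening law standing in for the full RVE crystal-plasticity simulation, and the "measured" curve is synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

strain = np.linspace(0.0, 0.1, 20)

def macro_stress(params, eps):
    """Hypothetical stand-in for the RVE crystal-plasticity simulation:
    a saturating hardening law with parameters (tau0, h) playing the role
    of CRSSs and hardening rates."""
    tau0, h = params
    return tau0 * (1.0 - np.exp(-h * eps / tau0))

true = np.array([100.0, 2000.0])                       # "unknown" material
measured = macro_stress(true, strain) \
    + np.random.default_rng(4).normal(0.0, 1.0, strain.size)

# Levenberg-Marquardt fit of the crystal parameters to the macroscopic test.
res = least_squares(lambda p: macro_stress(p, strain) - measured,
                    x0=[50.0, 1000.0], method="lm")
print("identified parameters:", res.x.round(1))
```

Fitting several independent macroscopic tests simultaneously, as the thesis does, is what constrains the solution and makes the uniqueness study meaningful.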
Abstract:
The design of shell and spatial structures represents an important challenge even with the use of modern computer technology. If we concentrate on concrete shell structures, many problems must be faced, such as the conceptual and structural disposition, optimal shape design, analysis, construction methods, details, etc., and all these problems are interconnected. As an example, shape optimization requires the use of several disciplines, such as structural analysis, sensitivity analysis, optimization strategies and geometrical design concepts. Similar comments apply to other space structures such as steel trusses with single or double curvature and tension structures. In relation to the analysis, the Finite Element Method appears to be the most widespread and versatile technique used in practice. In the application of this method several issues arise. First, the pertinent shell theory or, alternatively, the degenerated 3-D solid approach should be chosen. According to this choice, the suitable FE model has to be adopted, i.e. the displacement, stress or mixed formulated element. The good behavior of shell structures under dead loads, which are carried to the supports mainly by compressive stresses, is impaired by the high imperfection sensitivity usually exhibited by these structures. This last effect is particularly important if large deformations and material nonlinearities of the shell interact unfavorably, as can be the case for thin reinforced shells. In this respect, the study of the stability of the shell represents a compulsory step in the analysis. There are therefore currently very active fields of research, such as the different descriptions of consistent nonlinear shell models given by Simo, Fox and Rifai, Matzenmiller, and Büchter and Ramm, among others; the consistent formulation of efficient tangent stiffnesses, as presented by Ortiz and by Schweizerhof and Wriggers, with application to concrete shells exhibiting creep behavior given by Scordelis and coworkers; and finally the development of the numerical techniques needed to trace the nonlinear response of the structure. The objective of this paper is concentrated on this last research aspect, i.e. the presentation of a state of the art of the existing solution techniques for nonlinear analysis of structures. In this presentation the following excellent reviews on this subject will be mainly used.
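A minimal incremental-iterative scheme of the kind these solution techniques build on (my own toy scalar "structure", not any method from the paper): Newton-Raphson within load steps to trace the response r(u) = lambda * p. Arc-length control, not shown here, would be needed to continue past limit points.

```python
import numpy as np

def r(u):   # internal force of a toy softening structure
    return 10.0 * u - 4.0 * u ** 2

def kt(u):  # tangent stiffness dr/du
    return 10.0 - 8.0 * u

p, u, path = 1.0, 0.0, []
for lam in np.linspace(0.1, 5.0, 50):         # load-factor increments
    for _ in range(25):                       # Newton iterations per step
        res = lam * p - r(u)
        if abs(res) < 1e-10:
            break
        u += res / kt(u)                      # tangent-stiffness correction
    path.append((lam, u))
print("final equilibrium state (lambda, u):", path[-1])
```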
Abstract:
The paper presents a high-accuracy, fully analytical formulation to compute the miss distance and collision probability of two approaching objects following an impulsive collision avoidance maneuver. The formulation hinges on a linear relation between the applied impulse and the objects' relative motion in the b-plane, which allows the maneuver optimization problem to be formulated as an eigenvalue problem. The optimization criterion consists of minimizing the maneuver cost in terms of delta-V magnitude in order either to maximize the collision miss distance or to minimize the Gaussian collision probability. The algorithm, whose accuracy is verified in representative mission scenarios, can be employed for collision avoidance maneuver planning at reduced computational cost compared to fully numerical algorithms.
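A sketch of the eigenvalue formulation for the miss-distance case, under stated assumptions: M is a made-up 2x3 linear map from the applied impulse to the b-plane displacement (in practice it is derived from the encounter dynamics). For a fixed delta-V magnitude, the displacement-maximizing direction is the dominant eigenvector of M^T M.

```python
import numpy as np

M = np.array([[800.0, -120.0,  40.0],
              [ 60.0,  950.0, -15.0]])       # km per km/s (illustrative values)
dv_max = 0.05e-3                             # 0.05 m/s budget, in km/s

w, V = np.linalg.eigh(M.T @ M)               # eigenvalues in ascending order
dv = dv_max * V[:, -1]                       # dominant eigenvector direction
print("optimal delta-V [km/s]:", dv)
print("b-plane displacement [km]:", np.linalg.norm(M @ dv))
```

Minimizing Gaussian collision probability instead changes the metric (it weights the displacement by the combined covariance) but leads to an eigenproblem of the same form.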
Abstract:
The objective of this thesis is the design and optimization of optical fiber-based phase-shift-keying (PSK) demodulators for high-bit-rate optical networks. PSK modulation formats have attracted significant attention in recent years because of their better performance with respect to conventional modulation formats: PSK signals improve spectral efficiency and tolerate more signal degradation caused by chromatic dispersion, polarization mode dispersion and nonlinearities in the fiber. In this work, PSK formats are analyzed in detail, including the variants of differential phase modulation (Differential Phase Shift Keying, DPSK), in quadrature (Differential Quadrature Phase Shift Keying, DQPSK) and with polarization multiplexing (Polarization Multiplexing Differential Quadrature Phase Shift Keying, PM-DQPSK), in order to design and optimize receivers enabling their demodulation. To this end, novel structures that offer good receiver performance and a reduction in cost compared to current structures have been analyzed and developed. Two novel receivers based on an all-fiber in-line Mach-Zehnder interferometer (MZI) are proposed for DPSK signal demodulation in this thesis. The operating principle of the all-fiber MZI is based on the modal interference that occurs in a multimode fiber (MMF) when it is located between two single-mode fibers (SMFs). This type of configuration (single-mode-multimode-single-mode, SMS) can provide a good extinction ratio if the incoming power from the SMF is coupled equally into the two dominant modes excited in the MMF. In order to improve the interference extinction ratio, two novel SMS structures have been studied and demonstrated, theoretically and experimentally. One of the two proposed MZIs is based on a graded-index MMF with a central dip in the index profile, located between two SMFs. The other is based on a conventional graded-index MMF spliced with a misalignment between two SMFs. Theoretical analysis has shown that, in these two schemes, 80-90% of the incoming power can be coupled into the two dominant modes excited in the MMF, with a power difference between them of only ~10%. Experimental results show that an interference extinction ratio of at least 12 dB can be obtained. In order to demonstrate the capacity of these two structures for use as DPSK signal demodulators, numerical simulations of a complete optical transmission system have been carried out, and the receiver quality has been analyzed from different perspectives, such as sensitivity, tolerance to severe optical filtering, and tolerance to chromatic and polarization mode dispersion. In all cases, the simulation results show that the two proposed receivers provide performances comparable to conventional ones. In this thesis, an alternative design for the implementation of a DQPSK receiver, based on a polarization-maintaining fiber (PMF), is also presented. To complement the work on the PMF-based DQPSK receiver, the study of its demodulation principle has been extended to the demodulation of PM-DQPSK signals, resulting in the proposal of a novel demodulation structure. The proposed PM-DQPSK receiver is based on a single delay line together with a polarization rotator. The quality of the proposed DQPSK and PM-DQPSK receivers has been analyzed from different perspectives, such as sensitivity, tolerance to severe optical filtering, tolerance to chromatic and polarization mode dispersion, and behavior under non-ideal conditions. Compared with conventional receivers, our proposals exhibit similar performance while allowing a simpler design that can potentially reduce the cost. The wavelength-division-multiplexing (WDM) technology used in current optical communications networks requires optical filters with passbands as narrow as possible, as well as a series of devices that incorporate filters in their architecture, such as multiplexers, demultiplexers, switches, reconfigurable add-drop multiplexers (ROADMs) and optical cross-connects (OXCs). All these devices connected together are equivalent to a chain of filters whose bandwidth becomes increasingly narrow, distorting the waveform of the signals. Therefore, in addition to analyzing the impact of optical filtering on 40 Gbps DQPSK and 100 Gbps PM-DQPSK signals, this thesis studies which kind of optical filter minimizes the signal degradation and analyzes the maximum number of concatenated filters that maintains the required system quality. Four types of optical filters, namely Butterworth, Bessel, FBG and F-P, have been studied and simulated.
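A baseband sketch of the delay-and-interfere principle the MZI implements (an idealized noiseless model of my own, not the thesis simulations): interfering the optical field with its one-symbol-delayed replica maps the differential phase onto the intensities of the two interferometer ports.

```python
import numpy as np

rng = np.random.default_rng(5)

bits = rng.integers(0, 2, 16)
dphase = np.pi * bits                          # DPSK: bit -> differential phase
phase = np.cumsum(dphase)                      # absolute optical phase
field = np.exp(1j * phase)                     # unit-amplitude symbol fields

delayed = np.roll(field, 1)
delayed[0] = 1.0                               # phase reference before 1st symbol
constructive = np.abs(field + delayed) ** 2    # MZI constructive port
destructive  = np.abs(field - delayed) ** 2    # MZI destructive port
decided = (destructive > constructive).astype(int)   # balanced detection

print("sent:   ", bits)
print("decoded:", decided)
```

Because the decision depends only on the phase difference between consecutive symbols, no optical local oscillator is needed, which is what makes the all-fiber MZI attractive as a DPSK demodulator.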