951 results for Non-Linear Elliptic Systems
Abstract:
Reversed-phase high-performance liquid chromatographic (HPLC) methods were developed for the assay of indomethacin, its decomposition products, ibuprofen and its (tetrahydro-2-furanyl)methyl-, (tetrahydro-2-(2H)pyranyl)methyl- and cyclohexylmethyl esters. The development and application of these HPLC systems were studied. A number of physico-chemical parameters that affect percutaneous absorption were investigated. The pKa values of indomethacin and ibuprofen were determined using the solubility method; potentiometric titration and the Taft equation were also used for ibuprofen. The incorporation of ethanol or propylene glycol in the solvent improved the aqueous solubility of these compounds. The partition coefficients were evaluated in order to establish the affinity of these drugs towards the stratum corneum. The stability of indomethacin and of the ibuprofen esters was investigated, and the effects of temperature and pH on the decomposition rates were studied. The effect of cetyltrimethylammonium bromide on the alkaline degradation of indomethacin was also followed. In the presence of alcohol, indomethacin alcoholysis was observed; the kinetics of decomposition were subjected to non-linear regression analysis and the rate constants for the various pathways were quantified. The non-isothermal, surfactant non-isoconcentration and non-isopH degradation of indomethacin were investigated. The data were analysed using NONISO, a BASIC computer program. The degradation profiles obtained from the non-iso and iso-kinetic studies show close concordance in the results. The metabolic biotransformation of the ibuprofen esters was followed using esterases from hog liver and rat skin homogenates; the esters were very labile under these conditions. The presence of propylene glycol affected the rates of enzymic hydrolysis of the esters.
The hydrolysis is modelled using an equation involving the dielectric constant of the medium. The percutaneous absorption of indomethacin and of ibuprofen and its esters was followed from solutions using an in vitro excised human skin model. The absorption profiles followed first-order kinetics. The diffusion process was related to the solubility of the compounds and to their human skin/solvent partition coefficients. The percutaneous absorption of two ibuprofen esters from suspensions in 20% propylene glycol-water was also followed through rat skin, with only ibuprofen being detected in the receiver phase. The sensitivity of the ibuprofen esters to enzymic hydrolysis, compared with chemical hydrolysis, may prove valuable in the formulation of topical delivery systems.
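As an aside, first-order absorption kinetics of the kind described above can be recovered by log-linear regression of the fraction of drug remaining against time. A minimal sketch with synthetic data (the rate constant, time points and function name are illustrative, not values or code from the thesis):

```python
import math

def fit_first_order(times, fraction_remaining):
    """Least-squares fit of ln(f) = -k*t; returns the rate constant k."""
    ys = [math.log(f) for f in fraction_remaining]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(x * x for x in times)
    sxy = sum(x * y for x, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope

# synthetic data generated with k = 0.25 per hour
k_true = 0.25
t = [0.0, 1.0, 2.0, 4.0, 8.0]
f = [math.exp(-k_true * ti) for ti in t]
k_est = fit_first_order(t, f)
print(round(k_est, 3))  # 0.25
```

With real absorption data the points scatter about the line, and the fitted slope estimates the first-order rate constant.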
Abstract:
We investigate electronic mitigation of linear and non-linear fibre impairments, comparing several digital signal processing techniques, including electronic dispersion compensation (EDC), single-channel back-propagation (SC-BP) and back-propagation with multiple-channel processing (MC-BP), in a nine-channel 112 Gb/s PM-mQAM (m = 4, 16) WDM system for reaches up to 6,320 km. We show that, for sufficiently high local dispersion, SC-BP provides a significant performance enhancement over EDC and is adequate to achieve a BER below the FEC threshold. Under these conditions, a sampling rate of two samples per symbol is sufficient for practical SC-BP without significant penalties.
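Digital back-propagation of this kind inverts the fibre channel by re-running a split-step Fourier solution of the non-linear Schrödinger equation with the sign of the propagation step flipped. A simplified single-channel, lossless, noiseless sketch in normalised units (all parameter values are illustrative; the paper's WDM, loss and noise effects are omitted):

```python
import numpy as np

def split_step(field, nsteps, dz, beta2, gamma, dt):
    """Symmetric split-step Fourier integration of the scalar NLSE
    (normalised units): dispersion half-step, Kerr full step, half-step."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, dt)
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)  # exp(i*beta2*w^2*dz/4)
    for _ in range(nsteps):
        field = np.fft.ifft(half_disp * np.fft.fft(field))
        field = field * np.exp(1j * gamma * np.abs(field)**2 * dz)  # Kerr phase
        field = np.fft.ifft(half_disp * np.fft.fft(field))
    return field

def back_propagate(field, nsteps, dz, beta2, gamma, dt):
    """Digital back-propagation: same scheme with a negated step."""
    return split_step(field, nsteps, -dz, beta2, gamma, dt)

# Gaussian pulse through dispersive, nonlinear 'fibre' and back again
dt = 0.1
t = (np.arange(256) - 128) * dt
pulse = np.exp(-t**2 / 2).astype(complex)
received = split_step(pulse, 50, 0.02, -1.0, 1.3, dt)
recovered = back_propagate(received, 50, 0.02, -1.0, 1.3, dt)
print(np.max(np.abs(recovered - pulse)) < 1e-10)  # True: DBP undoes the channel
```

Because every sub-step here is unitary and applied in reverse, the inversion is exact up to rounding; in a real receiver, noise and unmodelled inter-channel effects limit the achievable gain, which is what the SC-BP/MC-BP comparison quantifies.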
Abstract:
Kozlov & Maz'ya (1989, Algebra Anal., 1, 144–170) proposed an alternating iterative method for solving Cauchy problems for general strongly elliptic and formally self-adjoint systems. However, in many applied problems, operators appear that do not satisfy these requirements, e.g. Helmholtz-type operators. Therefore, in this study, an alternating procedure for solving Cauchy problems for self-adjoint non-coercive elliptic operators of second order is presented. A convergence proof of this procedure is given.
Abstract:
Timing jitter is a major factor limiting the performance of any high-speed, long-haul data transmission system. It arises from a number of sources, such as interaction with amplified spontaneous emission (ASE), inter-symbol interference (ISI) and electrostriction. Some of the effects causing timing jitter can be reduced by means of non-linear filtering, using, for example, a nonlinear optical loop mirror (NOLM) [1]. The NOLM has been shown to reduce timing jitter by suppressing ASE and by stabilising the pulse duration [2, 3]. In this paper, we investigate the dynamics of timing jitter in a 2R-regenerated system, nonlinearly guided by NOLMs, at bit rates of 10, 20, 40 and 80 Gbit/s. The transmission performance of an equivalent non-regenerated (generic) system is taken as a reference.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material; the modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and their use to characterise several properties of an OCT system (resolution, distortion, sensitivity decay, scan linearity) was demonstrated. Quantitative methods were developed both to support the characterisation of an OCT system collecting images from phantoms and to improve the quality of the OCT images. The characterisation methods include measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing may take several minutes to a few hours, making data processing a significant bottleneck. One alternative is expensive hardware-based processing, such as field-programmable gate arrays (FPGAs); more recently, however, graphics processing unit (GPU) based methods have been developed to minimise data processing and rendering time.
These processing techniques include standard-processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier-domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. The OCT-phantoms have been used extensively for the qualitative characterisation and fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using these phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and the quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are relevant not only to OCT but more broadly: extensive understanding of the properties of fs-inscribed structures is useful in other photonic applications, such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is useful in other fields.
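In its simplest form, the standard-processing chain referred to above (raw spectral interferogram to A-scan) is background subtraction, windowing and a Fourier transform. A minimal CPU sketch with NumPy (the function name and synthetic fringe are illustrative; a GPU implementation would run the same steps with, for example, CuPy):

```python
import numpy as np

def to_ascan(spectra, background):
    """Raw FD-OCT spectra (one row per A-line) -> depth profiles:
    background subtraction, Hanning window, inverse FFT magnitude."""
    fringes = np.asarray(spectra, float) - background
    fringes = fringes * np.hanning(fringes.shape[-1])  # suppress side lobes
    depth = np.fft.ifft(fringes, axis=-1)
    return np.abs(depth[..., :fringes.shape[-1] // 2])  # keep positive depths

# synthetic spectrum: DC background plus one reflector -> fringe at bin 20
k = np.arange(1024)
raw = 100.0 + 10.0 * np.cos(2 * np.pi * 20 * k / 1024)
ascan = to_ascan(raw[None, :], background=100.0)
print(int(np.argmax(ascan[0])))  # 20: reflector depth recovered
```

Real systems add spectral resampling (linearisation in wavenumber) and dispersion compensation before the transform; those steps, like the FFT itself, parallelise well across A-lines, which is why GPUs remove the bottleneck.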
Abstract:
This work reports on new software for solving linear systems involving affine-linear dependencies between complex-valued interval parameters. We discuss the implementation of a parametric residual iteration for linear interval systems through advanced communication between the Mathematica system and the C-XSC library, which supports rigorous complex interval arithmetic. An example of an AC electrical circuit illustrates the use of the software.
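In floating point, the residual iteration at the core of such methods is ordinary defect correction, x ← x + R(b − Ax) with R an approximate inverse; the software evaluates the same recurrence in rigorous parametric complex interval arithmetic to obtain verified enclosures. A plain floating-point sketch only (no interval arithmetic or directed rounding; the matrix values are illustrative):

```python
import numpy as np

def residual_iteration(A, b, R, x, nsteps=20):
    """Defect correction x <- x + R (b - A x); converges when the
    spectral radius of (I - R A) is below one."""
    for _ in range(nsteps):
        x = x + R @ (b - A @ x)
    return x

A = np.array([[4.05 + 0.97j, 1.02 + 0j],
              [0.99 + 0j, 3.10 + 0.51j]])   # complex-valued system
b = np.array([1.0 + 0j, 2.0 + 0j])
R = np.linalg.inv(np.round(A))              # deliberately approximate inverse
x = residual_iteration(A, b, R, np.zeros(2, complex))
print(np.max(np.abs(A @ x - b)) < 1e-10)    # True: residual driven to noise level
```

In the verified version, the interval evaluation of the same residual expression is what turns an approximate solution into a guaranteed enclosure of the exact one.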
Abstract:
MSC 2010: 26A33, 34D05, 37C25
Abstract:
Rolling Isolation Systems provide a simple and effective means of protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. The coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. This mathematical model is then used to find the bowl profile that minimizes the response acceleration subject to a displacement constraint.
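The linearly constrained quadratic minimization mentioned above has a closed-form KKT solution: Gauss's Principle selects the accelerations a minimizing (a − M⁻¹f)ᵀM(a − M⁻¹f) subject to the linear constraint A a = b. A toy sketch (the two-mass example and all names are illustrative, not the thesis model):

```python
import numpy as np

def gauss_accelerations(M, f, A, b):
    """Accelerations from Gauss's Principle of Least Constraint:
    minimize (a - M^{-1} f)^T M (a - M^{-1} f)  subject to  A a = b,
    by solving the KKT system  [[M, A^T], [A, 0]] [a; lam] = [f; b]."""
    n, m = M.shape[0], A.shape[0]
    K = np.block([[M, A.T], [A, np.zeros((m, m))]])
    return np.linalg.solve(K, np.concatenate([f, b]))[:n]

# two unit masses, a force on the first, accelerations constrained to sum to zero
M = np.eye(2)
f = np.array([1.0, 0.0])
A = np.array([[1.0, 1.0]])
b = np.array([0.0])
a = gauss_accelerations(M, f, A, b)
print(a)  # the unconstrained a = (1, 0) is projected to (0.5, -0.5)
```

The same KKT structure applies with the redundant constrained coordinates of the platform model; only M, A and b change at each time step.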
Abstract:
This work presents a computational code, called MOMENTS, developed for use in process control to determine a characteristic transfer function of industrial units when radiotracer techniques are applied to study the unit's performance. The methodology is based on measuring the residence time distribution (RTD) function and calculating the first and second temporal moments of the tracer data obtained by two NaI scintillation detectors positioned to register the complete tracer movement inside the unit. A non-linear regression technique is used to fit various mathematical models, and a statistical test selects the best result for the transfer function. Using MOMENTS, twelve different models can be fitted to a curve to calculate technical parameters of the unit.
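The first and second temporal moments referred to above are the mean residence time and the variance of the tracer curve. A minimal sketch with a synthetic tracer pulse (the detector data, units and Gaussian shape are illustrative, not outputs of MOMENTS):

```python
import numpy as np

def trapezoid(y, t):
    """Trapezoidal integration of samples y over times t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def rtd_moments(t, c):
    """First temporal moment (mean residence time) and second central
    moment (variance) of a tracer concentration curve c(t)."""
    area = trapezoid(c, t)
    m1 = trapezoid(t * c, t) / area
    var = trapezoid((t - m1)**2 * c, t) / area
    return m1, var

# synthetic tracer pulse: Gaussian centred at 12 s with sigma = 2 s
t = np.linspace(0.0, 40.0, 2001)
c = np.exp(-(t - 12.0)**2 / 8.0)
m1, var = rtd_moments(t, c)
print(round(m1, 2), round(var, 2))  # 12.0 4.0
```

For a unit with inlet and outlet detectors, the mean residence time of the unit itself follows from the difference between the outlet and inlet first moments, and the model fit is then judged against these measured moments.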
Abstract:
We study the existence of positive solutions of Hamiltonian-type systems of second-order elliptic PDE in the whole space. The systems depend on a small parameter and involve a potential having a global well structure. We use dual variational methods, a mountain-pass type approach and Fourier analysis to prove positive solutions exist for sufficiently small values of the parameter.
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, our proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
This paper proposes an optimal sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on a non-linear perturbation theorem. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e. without the need for an iterative process. This new operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVRs) of the generators are determined by the optimal sensitivity technique, considering the effects of active power loss minimisation and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, termed the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
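The direct, non-iterative update after a perturbation can be illustrated on an equality-constrained quadratic programme, where the sensitivity step is exact (for a genuinely non-linear optimal power flow it is a first-order approximation). All values below are toy numbers, not a power-system model:

```python
import numpy as np

# Equality-constrained quadratic toy problem:
#   minimise 0.5 x^T Q x - c^T x   subject to   A x = d,
# where d plays the role of the load level. The KKT matrix is built once;
# a load perturbation dd maps directly to the new optimum -- no iteration.
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])                    # power-balance-style constraint
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])

def solve(d):
    """Optimal x for load level d (full re-solve, for comparison only)."""
    return np.linalg.solve(K, np.concatenate([c, d]))[:2]

x0 = solve(np.array([1.0]))                   # base optimal operating point
dd = np.array([0.05])                         # small load perturbation
dx = np.linalg.solve(K, np.concatenate([np.zeros(2), dd]))[:2]  # sensitivity step
x1 = x0 + dx                                  # new optimum, obtained directly
print(np.allclose(x1, solve(np.array([1.05]))))  # True (exact for a QP)
```

In the paper's setting, the quantities propagated this way are the participation factors and AVR voltage set points rather than a generic x.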
Abstract:
This paper addresses the development of several alternative novel hybrid/multi-field variational formulations of the geometrically exact three-dimensional elastostatic beam boundary-value problem. In the framework of the complementary energy-based formulations, a Legendre transformation is used to introduce the complementary energy density in the variational statements as a function of stresses only. The corresponding variational principles are shown to feature stationarity within the framework of the boundary-value problem. Both weak and linearized weak forms of the principles are presented. The main features of the principles are highlighted, giving special emphasis to their relationships from both theoretical and computational standpoints. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
There is an increasing need to treat phenol-contaminated effluents with advanced oxidation processes (AOPs) to minimise their impact on the environment as well as on the bacteriological populations of other wastewater treatment systems. One of the most promising AOPs is the Fenton process, which relies on the Fenton reaction. Nevertheless, there are no systematic studies on Fenton reactor networks. The objective of this paper is to develop a strategy for the optimal synthesis of Fenton reactor networks. The strategy is based on a superstructure optimisation approach represented as a mixed-integer non-linear programming (MINLP) model. Network superstructures with multiple Fenton reactors are optimised with the objective of minimising the sum of the capital, operation and depreciation costs of the effluent treatment system. The optimal solutions obtained provide the reactor volumes and network configuration, as well as the quantities of the reactants used in the Fenton process. Examples based on a case study show that multi-reactor networks yield decreases of up to 45% in the overall costs of the treatment plant. (C) 2010 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
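In the dual-rail encoding mentioned above (a qubit carried by one photon across two optical modes), beam splitters and phase shifters act as 2×2 unitaries on the single-photon amplitudes, and a Mach-Zehnder interferometer already exhibits the interference the scheme exploits. A single-photon sketch (conventions are illustrative; the scheme's measurement-induced nonlinearity is not shown):

```python
import numpy as np

def beam_splitter(theta):
    """Lossless beam splitter acting on single-photon mode amplitudes."""
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the first optical mode."""
    return np.diag([np.exp(1j * phi), 1.0])

ket0 = np.array([1.0 + 0j, 0.0])  # dual-rail qubit: photon in mode 0

def mzi(phi):
    """Mach-Zehnder: 50/50 splitter, internal phase, 50/50 splitter."""
    return beam_splitter(np.pi / 4) @ phase_shifter(phi) @ beam_splitter(np.pi / 4) @ ket0

def p_mode1(state):
    return abs(state[1])**2  # detection probability in mode 1

print(round(p_mode1(mzi(0.0)), 3), round(p_mode1(mzi(np.pi)), 3))  # 1.0 0.0
```

Sweeping the internal phase moves the photon deterministically between output detectors, which is the linear-optical interference the proposal builds on; the entangling operations are then induced probabilistically by photo-detection and feedback.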