946 results for Polynomial Invariants
Abstract:
In this article, a new technique for grooming low-speed traffic demands into high-speed optical routes is proposed. This enhancement allows a transparent wavelength-routing switch (WRS) to aggregate traffic en route over existing optical routes without incurring expensive optical-electrical-optical (OEO) conversions. This implies that: a) an optical route may be considered as having more than one ingress node (all inline), and b) traffic demands can partially use optical routes to reach their destination. The proposed optical routes are named "lighttours", since traffic originating from different sources can be forwarded together in a single optical route, i.e., as taking a "tour" over different sources towards the same destination. The possibility of creating lighttours is the consequence of a novel WRS architecture proposed in this article, named "enhanced grooming" (G+). The ability to groom more traffic in the middle of a lighttour is achieved with the support of a simple optical device named the lambda-monitor (previously introduced in the RingO project). In this article, we present the new WRS architecture and its advantages. To compare the advantages of lighttours with respect to classical lightpaths, an integer linear programming (ILP) model is proposed for the well-known multilayer problem: traffic grooming, routing and wavelength assignment. The ILP model may be used for several objectives. However, this article focuses on two objectives: maximizing the network throughput, and minimizing the number of optical-electro-optical conversions used. Experiments show that G+ can route all the traffic using only half of the total OEO conversions needed by classical grooming. A heuristic is also proposed, aiming at achieving near-optimal results in polynomial time.
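As a rough illustration of the kind of ILP formulation involved (not the article's actual model), the following PuLP sketch maximizes carried traffic for a toy instance in which each demand may be groomed onto at most one candidate route of fixed capacity; the demand set, candidate routes, and OC-48 capacity are invented for the example.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

# Hypothetical toy instance: demand rates (in OC-1 units) and the candidate
# optical routes able to carry each demand; every route has OC-48 capacity.
demands = {"d1": 16, "d2": 32, "d3": 24}
routes = {"r1": ["d1", "d2"], "r2": ["d2", "d3"]}
CAPACITY = 48

prob = LpProblem("grooming_throughput", LpMaximize)
x = {(d, r): LpVariable(f"x_{d}_{r}", cat=LpBinary)
     for r, ds in routes.items() for d in ds}

# Objective: maximize the total traffic that is carried.
prob += lpSum(demands[d] * x[d, r] for (d, r) in x)
# Each demand is groomed onto at most one route.
for d in demands:
    prob += lpSum(x[d, r] for r in routes if d in routes[r]) <= 1
# Route (wavelength) capacity constraints.
for r, ds in routes.items():
    prob += lpSum(demands[d] * x[d, r] for d in ds) <= CAPACITY

prob.solve()
print({k: v.value() for k, v in x.items()})
```

The real model additionally carries routing and wavelength-assignment variables; the sketch only shows how a throughput objective and capacity constraints fit together in an ILP.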
Abstract:
Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis. This has been one of the reasons for the ever-growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, thus making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should be different from the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. In this thesis, differences between data analysis methods are examined using data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries. The results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry. The response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS and nonlinear modeling using nonlinear score vectors.
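For orientation only (the thesis's data and models are not reproduced here), a minimal PCA/PLS sketch on simulated process data could look as follows; the array shapes, component counts, and simulated response are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                                   # hypothetical process measurements
y = X[:, :3] @ rng.normal(size=(3, 1)) + 0.1 * rng.normal(size=(50, 1))  # hypothetical response

# PCA: unsupervised overview of the measurement space.
pca = PCA(n_components=3).fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# PLS: latent-variable regression of the response on the measurements.
pls = PLSRegression(n_components=3).fit(X, y)
print("R^2 on training data:", pls.score(X, y))
```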
Abstract:
The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas: image quality itself and image fidelity, both dealing with similar questions, image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model have proven to be effective in the respective experiments.
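The exact form of the kernel similarity measures developed in the thesis is not reproduced here; the standard polynomial, Gaussian RBF, and sigmoid kernels on which they build can be written, for two spectra stored as vectors, roughly as follows (all parameter values are placeholders).

```python
import numpy as np

def polynomial_kernel(x, y, degree=2, c=1.0):
    """Standard polynomial kernel (x . y + c)^degree."""
    return (np.dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian radial basis function kernel exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid_kernel(x, y, alpha=1.0, c=0.0):
    """Sigmoid kernel tanh(alpha * x . y + c)."""
    return np.tanh(alpha * np.dot(x, y) + c)

# Hypothetical pair of spectra sampled at the same wavelengths.
s1 = np.linspace(0.1, 1.0, 31)
s2 = s1 + 0.05 * np.sin(np.linspace(0.0, 6.0, 31))
print(polynomial_kernel(s1, s2), rbf_kernel(s1, s2), sigmoid_kernel(s1, s2))
```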
Abstract:
It is argued that the invariants associated with the First Law of Thermodynamics and with the concept of identical processes lead to a clear definition of heat and work. The conditions for heat and work to be invariant under a system-surroundings interchange are also investigated. Finally, examples are presented to illustrate the above conditions.
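For reference only (a textbook statement, not a result of the article), the First Law for a closed system and the path independence of the internal energy can be written, taking W as the work done on the system:

```latex
% First Law for a closed system, with W the work done on the system.
\Delta U = Q + W, \qquad \oint \mathrm{d}U = 0
% U is a state function (an invariant of the process path); Q and W separately are not.
```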
Abstract:
The most widely used literature for the evaluation of uncertainty, the GUM and Eurachem guides, does not describe explicitly how to deal with the uncertainty of a concentration obtained from non-linear calibration curves. This work had the objective of describing and validating a methodology, following the approach of the recent GUM Supplement, to evaluate the uncertainty associated with second-order polynomial models. In the determination of the uncertainty of the benzatone concentration (C) by chromatography, the measurement uncertainties obtained with the proposed methodology and with Monte Carlo simulation differ by no more than 0.0005 units, thus validating the proposed model to one significant digit.
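As a hedged illustration of the Monte Carlo comparison (the chromatographic data are not reproduced; the calibration points, observed response, and uncertainties below are invented), one way to propagate uncertainty through a second-order calibration curve in the spirit of GUM Supplement 1 is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: instrument response y at known concentrations x.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.06, 1.45, 2.95, 4.41, 5.99, 7.54])

# Second-order calibration curve y = c2*x^2 + c1*x + c0 and its covariance.
coeffs, cov = np.polyfit(x, y, 2, cov=True)

def concentration(y_obs, c):
    """Invert the quadratic calibration, keeping the root closest to the working range."""
    c2, c1, c0 = c
    roots = np.roots([c2, c1, c0 - y_obs])
    real = roots[np.isreal(roots)].real
    return real[np.argmin(np.abs(real - x.mean()))]

# Monte Carlo propagation: resample the calibration coefficients and the observed
# response, then inspect the spread of the computed concentrations.
y_obs, u_y = 3.0, 0.05   # hypothetical response and its standard uncertainty
samples = np.array([
    concentration(y_obs + rng.normal(0.0, u_y), rng.multivariate_normal(coeffs, cov))
    for _ in range(20000)
])
print(samples.mean(), samples.std(ddof=1))
```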
Abstract:
The Extended Hildebrand Solubility Approach (EHSA) developed by Martin et al. was applied to evaluate the solubility of ketoprofen (KTP) in ethanol + water cosolvent mixtures at 298.15 K. Calculated values of the molar volume and solubility parameter of KTP were used. A good predictive capacity of EHSA was found by using a regular fifth-order polynomial model to correlate the W interaction parameter with the solubility parameters of the cosolvent mixtures (δmix). Nevertheless, the deviations of the estimated solubilities with respect to the experimental solubilities were of the same order as those obtained directly by means of an empirical regression of the logarithmic experimental solubilities as a function of the δmix values.
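To show the mechanics of the order-five correlation only (the numbers below are invented, not the article's data), the W vs. δmix regression could be set up as:

```python
import numpy as np

# Hypothetical values of the mixture solubility parameter (delta_mix) and the
# corresponding EHSA W interaction parameter.
delta_mix = np.array([26.5, 28.0, 30.0, 32.5, 35.0, 38.0, 41.0, 44.0, 47.9])
W = np.array([330.0, 360.0, 405.0, 465.0, 530.0, 615.0, 705.0, 800.0, 930.0])

# Fifth-order polynomial W = f(delta_mix), as in the EHSA correlation.
coeffs = np.polyfit(delta_mix, W, 5)
W_model = np.polyval(coeffs, delta_mix)

# Percentage deviation of the back-calculated W values.
print(np.abs(W_model - W) / W * 100)
```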
Abstract:
In wireless communications, the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article "On Maximum-Likelihood Detection and the Search for the Closest Lattice Point", published by M. O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications to the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
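A minimal sphere-decoder sketch is shown below: a plain depth-first search with sphere pruning over a finite symbol alphabet, without the enumeration orderings and radius schedules used in the algorithm studied in the thesis. The channel matrix, alphabet, and noise level are invented.

```python
import numpy as np

def sphere_decode(H, y, symbols):
    """Return the vector x over the given alphabet minimizing ||y - H x||,
    pruning branches that already leave the current best sphere."""
    Q, R = np.linalg.qr(H)           # H = Q R, so ||y - Hx|| = ||Q^T y - R x||
    z = Q.T @ y
    n = H.shape[1]
    best, best_dist = None, np.inf

    def search(level, partial, dist):
        nonlocal best, best_dist
        if dist >= best_dist:
            return                    # prune: outside the current sphere
        if level < 0:
            best, best_dist = partial.copy(), dist
            return
        for s in symbols:
            partial[level] = s
            resid = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, dist + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best

# Hypothetical 4-PAM example.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4))
x_true = rng.choice([-3, -1, 1, 3], size=4)
y = H @ x_true + 0.1 * rng.normal(size=4)
print(sphere_decode(H, y, [-3, -1, 1, 3]), x_true)
```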
Abstract:
The Extended Hildebrand Solubility Approach (EHSA) was successfully applied to evaluate the solubility of indomethacin in 1,4-dioxane + water mixtures at 298.15 K. An acceptable correlation performance of EHSA was found by using a regular fourth-order polynomial model of the W interaction parameter vs. the solubility parameter of the mixtures (the overall deviation was 8.9%). Although the mean deviation obtained was similar to that obtained directly by means of an empirical regression of the experimental solubility vs. the solubility parameters of the mixtures, the advantages of EHSA are evident because it requires physicochemical properties that are easily available for drugs.
Abstract:
Classification of biodiesel by oilseed type using pattern recognition techniques is described. The spectra of the samples were recorded in the visible region, requiring noise removal by means of a first derivative computed with the Savitzky-Golay method, employing a second-order polynomial and a 21-point window. The characterization of the biodiesel was performed using HCA, PCA and SIMCA. With the HCA and PCA methods, the separation of each group of biodiesel can be observed in the spectral region of 405-500 nm. The SIMCA model was applied to a test group composed of 28 spectral measurements, and no classification errors were obtained.
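The preprocessing step can be reproduced in outline with SciPy's Savitzky-Golay filter using the stated settings (second-order polynomial, 21-point window, first derivative); the synthetic spectrum below is only a placeholder for the biodiesel spectra.

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical visible-region spectrum sampled on a uniform 405-500 nm grid.
wavelengths = np.linspace(405, 500, 200)
spectrum = (np.exp(-((wavelengths - 450.0) / 15.0) ** 2)
            + 0.01 * np.random.default_rng(0).normal(size=200))

# First derivative, second-order polynomial, 21-point window.
d1 = savgol_filter(spectrum, window_length=21, polyorder=2, deriv=1)
print(d1[:5])
```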
Abstract:
The quantum harmonic oscillator is described by the Hermite equation.¹ The asymptotic solution is predominantly used to obtain its analytical solutions. The wave functions (solutions) are quadratically integrable if taken as the product of the convergent asymptotic solution (a Gaussian function) and a Hermite polynomial,¹ whose degree provides the associated quantum number. When the equation is solved numerically, quantization is observed when a real control variable is "tuned" to integer values. This can be interpreted by graphical reading of Y(x) and |Y(x)|², without further mathematical analysis, and proves useful for teaching the fundamentals of quantum chemistry to undergraduates.
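One way to see the quantization numerically (a sketch using the usual dimensionless form of the oscillator equation, not the article's program) is to integrate y'' = (x² - 2ε - 1)·y outward and watch the tail of the solution as the control variable ε is tuned.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tail(eps, x_max=6.0):
    """Integrate y'' = (x^2 - 2*eps - 1) * y from x = 0 with even-parity initial
    conditions and return |y(x_max)|.  Near eps = 0, 2, 4, ... the tail stays
    orders of magnitude smaller than for non-integer eps (quantization)."""
    rhs = lambda x, y: [y[1], (x ** 2 - 2.0 * eps - 1.0) * y[0]]
    sol = solve_ivp(rhs, (0.0, x_max), [1.0, 0.0], rtol=1e-9, atol=1e-12)
    return abs(sol.y[0, -1])

for eps in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"eps = {eps:3.1f}   |y(x_max)| = {tail(eps):.3e}")
```

Odd-parity states (ε = 1, 3, ...) would be found the same way with initial conditions y(0) = 0, y'(0) = 1.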
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
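As a loose, informal illustration of the invariant idea only (ordinary runtime assertions standing in for the machine-checked invariants and diagrams of Socos, and a hypothetical sorting routine rather than the one verified in the thesis):

```python
def insertion_sort(a):
    """Insertion sort with runtime checks of its loop invariant: before each
    outer iteration, the prefix a[:i] is sorted."""
    a = list(a)
    for i in range(1, len(a)):
        assert all(a[k] <= a[k + 1] for k in range(i - 1)), "invariant: prefix sorted"
        x, j = a[i], i
        while j > 0 and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x
    assert all(a[k] <= a[k + 1] for k in range(len(a) - 1)), "postcondition: sorted"
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))
```

In invariant-based programming these conditions are not checked at run time but discharged as proof obligations before the program is ever executed.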
Abstract:
This PhD thesis in Mathematics belongs to the field of Geometric Function Theory. The thesis consists of four original papers. The topic studied deals with quasiconformal mappings and their distortion theory in Euclidean n-dimensional spaces. This theory has its roots in the pioneering papers of F. W. Gehring and J. Väisälä published in the early 1960s, and it has been studied by many mathematicians since. In the first paper we refine the known bounds for the so-called Mori constant and also estimate the distortion in the hyperbolic metric. The second paper deals with radial functions, which are simple examples of quasiconformal mappings. These radial functions lead us to the study of the so-called p-angular distance, which has recently been studied, e.g., by L. Maligranda and S. Dragomir. In the third paper we study a class of functions of a real variable studied by P. Lindqvist in an influential paper. This leads one to study parametrized analogues of the classical trigonometric and hyperbolic functions which, for the parameter value p = 2, coincide with the classical functions. Gaussian hypergeometric functions play an important role in the study of these special functions. Several new inequalities and identities involving p-analogues of these functions are also given. In the fourth paper we study the generalized complete elliptic integrals, modular functions and some related functions. We find upper and lower bounds for these functions, and those bounds are given in a simple form. This theory has a long history going back two centuries and includes names such as A. M. Legendre, C. Jacobi and C. F. Gauss. Modular functions also occur in the study of quasiconformal mappings. Conformal invariants, such as the modulus of a curve family, are often applied in quasiconformal mapping theory. The invariants can sometimes be expressed in terms of special conformal mappings. This fact explains why special functions often occur in this theory.
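One of the classical connections mentioned above can be checked numerically: the complete elliptic integral of the first kind is a Gaussian hypergeometric value, K(k) = (π/2)·₂F₁(1/2, 1/2; 1; k²); note that SciPy's ellipk takes the parameter m = k². This is a textbook identity, not a result of the thesis.

```python
import numpy as np
from scipy.special import ellipk, hyp2f1

# Compare K(m) with (pi/2) * 2F1(1/2, 1/2; 1; m) for a few parameter values.
for m in (0.1, 0.5, 0.9):
    print(ellipk(m), (np.pi / 2) * hyp2f1(0.5, 0.5, 1.0, m))
```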
Abstract:
Bauhinia x blakeana (B. purpurea x B. variegata) is a natural hybrid that has been cultivated in gardens, streets and parks. Due to its sterility, it must be vegetatively propagated. The objective of this work was to evaluate the viability of cuttings and grafting for its propagation. Semi-woody cuttings were collected during four seasons and treated with 0; 1,000; 2,000; and 3,000 mg L-1 of IBA. The experimental design was completely randomized and the treatments were arranged in a 4x4 factorial scheme (four collecting times x four IBA concentrations), with five replications of 10 cuttings each, per collecting time and per IBA concentration. Characteristics of roots and shoots were evaluated after 90 days. The treatment means were compared by the Tukey test and subjected to polynomial regression analysis. For the grafting experiment, six- and twelve-month-old B. variegata and B. variegata var. candida plants were used as rootstocks, and the splice graft and T-budding methods were tested. The experimental design was completely randomized and the treatments were arranged in a 2x2x2 factorial scheme (two rootstock species x two grafting methods x two rootstock ages), with four replications of five plants each, per rootstock species, per grafting method and per rootstock age. Characteristics of shoots were evaluated after 90 days and the treatment means were compared by the Tukey test. B. x blakeana can be propagated by semi-woody cuttings collected in spring, without IBA application, or in summer, with the application of 3,000 mg L-1 of IBA. The grafting methods tested were not effective.
Abstract:
Irrigation management based on monitoring of the soil water content allows the amount of water applied to be minimized, making its use more efficient. Taking these aspects into account, in this work a sensor for measuring the soil water content was developed to allow real-time automation of irrigation systems. In this way, problems that affect crop yield, such as irregularities in the time to turn the pump on or off and excess or deficit of water, can be avoided. The sensors were built using stainless steel rods, resin, and insulating varnish. The sensors' measuring circuit was based on a microcontroller, which provides its output signal in digital format. The sensors were calibrated using soil of the Quartzarenic Neosol type. A third-order polynomial model was fitted to the experimental data between the water content values corresponding to field capacity and the wilting point, in order to correlate the soil water content obtained by the standard oven method with that measured by the electronic circuit, with a coefficient of determination of 93.17% and a measurement accuracy of ±0.010 kg kg-1. Based on these results, it was concluded that the sensor and its measuring circuit can be used in the automation of irrigation systems.
Abstract:
Most studies on the measurement of plant transpiration, especially in woody fruit crops, rely on methods that supply heat to the trunk. This study aimed to calibrate the Thermal Dissipation Probe (TDP) method to estimate transpiration, to study the effects of natural thermal gradients, and to determine the relation between the outside diameter of the stem and the xylem area in young 'Valencia' orange plants. TDP probes were installed in 40 fifteen-month-old orange plants, grown in 500 L boxes in a greenhouse. The correction of the natural thermal differences (DTN), estimated from two unheated probes, was tested. The area of the conductive section was related to the outside diameter of the stem by means of polynomial regression. The equation for estimating sap flow was calibrated against lysimeter measurements of a representative plant. The angular coefficient of the equation was adjusted by minimizing the absolute deviation between the estimated sap flow and the daily transpiration measured by the lysimeter. Based on these results, it was concluded that the TDP method, with adjustment of the original calibration and correction of the DTN, was effective in assessing transpiration.
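The slope adjustment can be written compactly (with invented daily values, not the experiment's data) as a one-dimensional minimization of the summed absolute deviation between the scaled sap-flow estimate and the lysimeter transpiration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical daily series: uncalibrated sap-flow estimate from the TDP probes
# and transpiration measured by the lysimeter (both in L per day).
sap_raw = np.array([1.10, 1.45, 0.95, 1.60, 1.25, 1.80, 1.05])
lysimeter = np.array([0.92, 1.20, 0.80, 1.33, 1.05, 1.52, 0.88])

# Angular (slope) coefficient minimizing the sum of absolute deviations.
res = minimize_scalar(lambda a: np.sum(np.abs(a * sap_raw - lysimeter)))
print("adjusted coefficient:", res.x)
```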