993 results for linear approximation
Abstract:
Linear prediction is a well-established numerical method in signal processing. In optical spectroscopy it is used mainly to extrapolate known parts of an optical signal, either to obtain a longer signal or to deduce missing samples. The former is needed particularly when narrowing spectral lines for the purpose of spectral information extraction. In the present paper, coherent anti-Stokes Raman scattering (CARS) spectra were investigated. The spectra were significantly distorted by the presence of a nonlinear nonresonant background, and the line shapes were far from Gaussian/Lorentzian profiles. To overcome these drawbacks, the maximum entropy method (MEM) was used for phase spectrum retrieval. The resulting broad MEM spectra were then subjected to linear prediction analysis in order to narrow them.
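For illustration, a minimal sketch of forward linear prediction by ordinary least squares is given below; it is not the exact procedure used in the paper, and the signal, model order and extrapolation length are illustrative placeholders.

    # Minimal sketch of forward linear prediction (not the exact procedure from the paper):
    # fit autoregressive coefficients to the known part of a signal by least squares,
    # then extrapolate additional samples recursively.
    import numpy as np

    def lp_extrapolate(signal, order=20, n_extra=100):
        x = np.asarray(signal, dtype=float)
        # Build the linear system: x[n] ~ sum_k a[k] * x[n-1-k]
        rows = [x[n - order:n][::-1] for n in range(order, len(x))]
        A = np.vstack(rows)
        b = x[order:]
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Recursively predict new samples from the fitted coefficients
        out = list(x)
        for _ in range(n_extra):
            out.append(np.dot(a, out[-1:-order - 1:-1]))
        return np.asarray(out)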
Abstract:
Asymmetric synthesis using modified heterogeneous catalysts has gained considerable interest for the production of optically pure chemicals, such as pharmaceuticals, nutraceuticals, fragrances and agrochemicals. Heterogeneous modified catalysts capable of inducing high enantioselectivities are preferred on an industrial scale because of their superior separation and handling properties, and the topic has been intensively investigated both in industry and in academia. The enantioselective hydrogenation of ethyl benzoylformate (EBF) to (R)-ethyl mandelate over a (-)-cinchonidine (CD)-modified Pt/Al2O3 catalyst in a laboratory-scale semi-batch reactor was studied as a function of modifier concentration, reaction temperature, stirring rate and catalyst particle size. The main product was always (R)-ethyl mandelate, while small amounts of (S)-ethyl mandelate were obtained as a by-product. The kinetic results showed higher enantioselectivity and lower initial rates, both approaching a constant value asymptotically as the amount of modifier was increased. In addition, catalyst deactivation due to the presence of impurities in the feed was prominent in some cases; therefore, activated carbon was used to clean the raw material and remove impurities prior to catalyst addition. Detailed characterization of the catalysts (SEM, EDX, TPR, BET, chemisorption, particle size distribution) was carried out. Solvent effects were also studied in the semi-batch reactor, using solvents with dielectric constants (ε) between 2 and 25. The enantiomeric excess (ee) increased with increasing dielectric constant up to a maximum, followed by a nonlinear decrease. A kinetic model for the dependence of enantioselectivity on the dielectric constant was proposed based on the Kirkwood treatment; the non-linear dependence of ee on ε successfully described the variation of ee in different solvents. Systematic kinetic experiments were carried out in the semi-batch reactor with toluene as the solvent. Based on these results, a kinetic model assuming different numbers of active sites was developed. Density functional theory calculations were applied to study the energetics of EBF adsorption on pure Pt(1 1 1). The hydrogenation rate constants were determined together with the adsorption parameters by non-linear regression analysis (a generic regression sketch is given after this abstract), and a comparison between the model and the experimental data revealed very good correspondence. Transient experiments in a fixed-bed reactor were also carried out in this work. The results demonstrated that continuous enantioselective hydrogenation of EBF in hexane/2-propanol 90/10 (v/v) is possible and that continuous feeding of (-)-cinchonidine is needed to maintain a high steady-state enantioselectivity. The catalyst showed good stability, and high enantioselectivity was achieved in the fixed-bed reactor. Chromatographic separation of (R)- and (S)-ethyl mandelate originating from the continuous reactor was investigated. A commercial column filled with a chiral resin was chosen as a prospective preparative-scale adsorbent. Since the adsorption equilibrium isotherms were linear within the entire investigated range of concentrations, they were determined by pulse experiments for the isomers present in a post-reaction mixture. Breakthrough curves were measured and were described successfully by the dispersive plug flow model with a linear driving force approximation.
The focus of this research project was the development of a new integrated production concept for optically active chemicals by combining heterogeneous catalysis and chromatographic separation technology. The proposed work is fundamental research in advanced process technology aiming to improve efficiency and to enable clean and environmentally benign production of enantiomerically pure chemicals.
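For illustration, a hedged sketch of non-linear regression of kinetic parameters is given below; the rate expression is a generic Langmuir-Hinshelwood-type placeholder rather than the kinetic model developed in the thesis, and the data arrays are invented for the example.

    # Hedged sketch: estimating rate constants by non-linear regression.
    # The rate law and data below are illustrative placeholders only.
    import numpy as np
    from scipy.optimize import curve_fit

    def rate(c, k, K):
        # hypothetical rate law: r = k*K*c / (1 + K*c)
        return k * K * c / (1.0 + K * c)

    c_obs = np.array([0.05, 0.1, 0.2, 0.4, 0.8])   # reactant concentration (illustrative)
    r_obs = np.array([0.8, 1.4, 2.1, 2.7, 3.1])     # observed initial rate (illustrative)

    (k_fit, K_fit), cov = curve_fit(rate, c_obs, r_obs, p0=[5.0, 2.0])
    print(k_fit, K_fit, np.sqrt(np.diag(cov)))      # estimates and standard errors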
Abstract:
Data on corn ear production (kg/ha) of 196 half-sib progenies (HSP) of the maize population CMS-39, obtained from experiments carried out in four environments, were used to adapt and assess the BLP method (best linear predictor) in comparison with selection among and within half-sib progenies (SAWHSP). The 196 HSP of the CMS-39 population, developed by the National Center for Maize and Sorghum Research (CNPMS-EMBRAPA), were related through their pedigree to the recombined progenies of the previous selection cycle. The two methodologies used for the selection of the twenty best half-sib progenies, BLP and SAWHSP, led to similar expected genetic gains. The BLP methodology tended to select a greater number of progenies related through the previous generation (pedigree) than the other method, which implies that greater care must be taken with the effective population size when this method is used. The SAWHSP methodology was efficient in isolating the additive genetic variance component from the phenotypic component. The pedigree system, although unnecessary for the routine use of the SAWHSP methodology, allowed the prediction of an increase in population inbreeding under long-term SAWHSP selection when recombination is simultaneous with the creation of new progenies.
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, are also necessary to build appropriate mathematical models. The topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. This is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support, which form the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of representing the meaning of linguistic terms, computing with these representations and retranslating the results back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence between the linguistic and mathematical levels of the models and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background, necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. The uncertainty of outputs, expert knowledge concerning disaster response practice and the necessity of obtaining outputs that are easy to interpret (and available in very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes, particularly the integration of peer review into the evaluation of R&D outputs.
In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities - psychological diagnostics is considered and a linguistic fuzzy model for the interpretation of outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide closer insight into the practical applications considered in the second part of the thesis.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form; hence, approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods that approximate the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of the importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian; for this kind of proposal, the covariance matrix must be well tuned, which can be done with adaptive MCMC methods. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
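For illustration, a minimal bootstrap particle filter for a scalar non-linear state space model is sketched below; it uses the transition density as the importance distribution, which is only one of the general choices analysed in the thesis, and the model functions and noise levels are placeholders.

    # Minimal bootstrap particle filter sketch for a scalar non-linear state space model.
    # The transition density serves as the importance distribution; model and noise
    # parameters are illustrative placeholders.
    import numpy as np

    def bootstrap_pf(ys, n_particles=500, q=0.1, r=0.5, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.normal(0.0, 1.0, n_particles)            # initial particle cloud
        means = []
        for y in ys:
            # propagate through a placeholder non-linear transition model
            x = 0.9 * x + 0.5 * np.sin(x) + rng.normal(0.0, q, n_particles)
            w = np.exp(-0.5 * ((y - x) ** 2) / r**2)      # Gaussian measurement likelihood
            w /= w.sum()
            means.append(np.dot(w, x))                    # filtering mean estimate
            idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
            x = x[idx]
        return np.asarray(means)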
Abstract:
Hydrolysis of D-valyl-L-leucyl-L-arginine p-nitroanilide (7.5-90.0 µM) by human tissue kallikrein (hK1) (4.58-5.27 nM) at pH 9.0 and 37 °C was studied in the absence and in the presence of increasing concentrations of 4-aminobenzamidine (96-576 µM), benzamidine (1.27-7.62 mM), 4-nitroaniline (16.5-66 µM) and aniline (20-50 mM). The kinetic parameters determined in the absence of inhibitors were Km = 12.0 ± 0.8 µM and kcat = 48.4 ± 1.0 min⁻¹. The data indicate that the inhibition of hK1 by 4-aminobenzamidine and benzamidine is linear competitive, while the inhibition by 4-nitroaniline and aniline is linear mixed, with the inhibitor being able to bind both to the free enzyme, with a dissociation constant Ki, yielding an EI complex, and to the ES complex, with a dissociation constant Ki', yielding an ESI complex. The calculated Ki values for 4-aminobenzamidine, benzamidine, 4-nitroaniline and aniline were 146 ± 10, 1,098 ± 91, 38.6 ± 5.2 and 37,340 ± 5,400 µM, respectively. The calculated Ki' values for 4-nitroaniline and aniline were 289.3 ± 92.8 and 310,500 ± 38,600 µM, respectively. The fact that Ki' > Ki indicates that 4-nitroaniline and aniline bind to a second binding site on the enzyme with lower affinity than that with which they bind to the active site. The data on the inhibition of hK1 by 4-aminobenzamidine and benzamidine help to explain previous observations that esters, anilides or chloromethyl ketone derivatives of Nα-substituted arginine are more sensitive substrates or inhibitors of hK1 than the corresponding lysine compounds.
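For reference, the textbook steady-state rate laws consistent with the inhibition types and the Ki/Ki' definitions above are given below; these are the standard forms, not equations quoted from the paper.

    % linear competitive inhibition (4-aminobenzamidine, benzamidine)
    v = \frac{V_{\max}[S]}{K_m\left(1 + [I]/K_i\right) + [S]}
    % linear mixed inhibition (4-nitroaniline, aniline)
    v = \frac{V_{\max}[S]}{K_m\left(1 + [I]/K_i\right) + [S]\left(1 + [I]/K_i'\right)}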
Abstract:
Concentrated solar power (CSP) is a renewable energy technology that could contribute to overcoming global problems related to pollution emissions and increasing energy demand. CSP utilizes solar irradiation, which is a variable source of energy. In order to utilize CSP technology in energy production and to reliably operate a solar field including a thermal energy storage system, dynamic simulation tools are needed to study the dynamics of the solar field, to optimize production and to develop control systems. The objective of this Master's Thesis is to compare different concentrated solar power technologies and to configure a dynamic model of one selected CSP field design in the dynamic simulation program Apros, owned by VTT and Fortum. The configured model is based on German Novatec Solar's linear Fresnel reflector design. Solar collector components, including dimensions and performance calculation, were developed, as well as a simple solar field control system. The preliminary simulation results of two simulation cases under clear-sky conditions were good; the desired, stable superheated steam conditions were maintained in both cases, while, as expected, the amount of steam produced was reduced in the case with lower irradiation. As a result of the model development process, it can be concluded that the configured model works successfully and that Apros is a very capable and flexible tool for configuring new solar field models and control systems and for simulating solar field dynamic behaviour.
Abstract:
This research work addresses the problem of building a mathematical model for a given system of heat exchangers and determining the temperatures, pressures and velocities at intermediate positions. Such a model could be used in finding an optimal design for such a superstructure. To limit the size and computing time, a reduced network model was used; the method can be generalized to larger network structures. A mathematical model consisting of a system of non-linear equations has been built and solved with the Newton-Raphson algorithm. The results obtained with the proposed mathematical model were compared with the results obtained with the Paterson approximation and Chen's approximation. The results of this research work, in collaboration with ongoing research at the department, will be used to optimize the valve positions and hence minimize the pumping cost and maximize the heat transfer of the system of heat exchangers.
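For illustration, a generic multivariate Newton-Raphson iteration with a finite-difference Jacobian is sketched below; the residual functions of the heat exchanger network are not reproduced, and the example system is an invented placeholder.

    # Generic Newton-Raphson solver for a system of non-linear equations F(x) = 0,
    # using a finite-difference Jacobian. The example system is a placeholder, not
    # the heat exchanger network equations of the thesis.
    import numpy as np

    def newton_raphson(F, x0, tol=1e-10, max_iter=50, h=1e-7):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            f = F(x)
            if np.linalg.norm(f) < tol:
                break
            # Finite-difference Jacobian, column by column
            J = np.empty((len(f), len(x)))
            for j in range(len(x)):
                xp = x.copy()
                xp[j] += h
                J[:, j] = (F(xp) - f) / h
            x = x - np.linalg.solve(J, f)    # Newton update
        return x

    # Illustrative 2x2 system
    sol = newton_raphson(lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] - v[1]**2 + 1.0]),
                         x0=[1.0, 1.0])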
Abstract:
The objectives of this study were to evaluate and compare the use of linear and nonlinear methods for the analysis of heart rate variability (HRV) in healthy subjects and in patients after acute myocardial infarction (AMI). Heart rate (HR) was recorded for 15 min in the supine position in 10 patients with AMI taking β-blockers (aged 57 ± 9 years) and in 11 healthy subjects (aged 53 ± 4 years). HRV was analyzed in the time domain (RMSSD and RMSM), in the frequency domain using the low- and high-frequency bands in normalized units (LFnu and HFnu) and the LF/HF ratio, and by approximate entropy (ApEn). There was a correlation (P < 0.05) of the RMSSD, RMSM, LFnu, HFnu, and LF/HF ratio indexes with the ApEn of the AMI group on the 2nd (r = 0.87, 0.65, 0.72, 0.72, and 0.64) and 7th day (r = 0.88, 0.70, 0.69, 0.69, and 0.87) and of the healthy group (r = 0.63, 0.71, 0.63, 0.63, and 0.74), respectively. The median HRV indexes of the AMI group on the 2nd and 7th day differed from those of the healthy group (P < 0.05): RMSSD = 10.37, 19.95, 24.81; RMSM = 23.47, 31.96, 43.79; LFnu = 0.79, 0.79, 0.62; HFnu = 0.20, 0.20, 0.37; LF/HF ratio = 3.87, 3.94, 1.65; ApEn = 1.01, 1.24, 1.31, respectively. There was agreement between the methods, suggesting that they have the same power to evaluate autonomic modulation of HR in both AMI patients and healthy subjects. AMI contributed to a reduction in cardiac signal irregularity, higher sympathetic modulation and lower vagal modulation.
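For illustration, the time-domain index RMSSD and approximate entropy ApEn(m, r) are sketched below in their commonly used forms; the parameter conventions (m = 2, r = 0.2 times the standard deviation) are the usual defaults and not necessarily those of the study, and the RR series is assumed to be given in milliseconds.

    # Sketch of two HRV indexes mentioned above, in their commonly used textbook forms.
    import numpy as np

    def rmssd(rr):
        # root mean square of successive differences of the RR intervals
        d = np.diff(np.asarray(rr, dtype=float))
        return np.sqrt(np.mean(d ** 2))

    def apen(series, m=2, r_factor=0.2):
        # approximate entropy ApEn(m, r) with tolerance r = r_factor * SD of the series
        x = np.asarray(series, dtype=float)
        r = r_factor * np.std(x)
        def phi(mm):
            emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            # Chebyshev distance between all pairs of embedded vectors
            dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = np.mean(dist <= r, axis=1)
            return np.mean(np.log(c))
        return phi(m) - phi(m + 1)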
Differential effects of aging on spatial contrast sensitivity to linear and polar sine-wave gratings
Abstract:
Changes in visual function beyond high-contrast acuity are known to take place during normal aging. We determined whether sensitivity to linear sine-wave gratings and to an elementary stimulus preferentially processed in extrastriate areas could be differentially affected by aging. We measured spatial contrast sensitivity twice for concentric polar (Bessel) and vertical linear gratings of 0.6, 2.5, 5, and 20 cycles per degree (cpd) in two age groups (20-30 and 60-70 years). All participants were free of identifiable ocular disease and had normal or corrected-to-normal visual acuity. Participants were more sensitive to Cartesian than to polar gratings at all frequencies tested, and the younger adult group was more sensitive to all stimuli tested. Significant differences between the sensitivities of the two groups were found for linear (only 20 cpd; P<0.01) and polar gratings (all frequencies tested; P<0.01). The young adult group was significantly more sensitive to linear than to circular gratings at the 20 cpd frequency. The older adult group was significantly more sensitive to linear than to circular gratings at all spatial frequencies except 20 cpd. The results suggest that sensitivity to the two kinds of stimuli is affected differently by aging. We suggest that neural changes in the aging brain are important determinants of this difference and discuss the results according to current models of human aging.
Abstract:
Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation, characterized by low or high linear energy transfer (LET), and the dose rate. This study was designed to obtain dose calibration curves by scoring dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low-LET radiation curves. Both software programs yielded identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates.
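For reference, the linear-quadratic model conventionally fitted to dicentric yields for low-LET calibration curves is given below; the symbols are the usual ones, and the fitted coefficients of this study are not reproduced.

    % dicentric yield Y at absorbed dose D (standard low-LET calibration form)
    Y(D) = C + \alpha D + \beta D^{2}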
Abstract:
Transient heat transfer during linear intermittent agitation (ALI) of metal cans containing food simulants was studied, aiming at its application in pasteurization or sterilization processes and, consequently, at more efficient and more homogeneous thermal treatments yielding a better-quality product. Four food-simulating fluid media of different viscosities and densities were used: three oils and water. The effects of five treatments were combined: simulant medium (4 levels), headspace (3 levels), agitation frequency (4 levels), agitation amplitude (2 levels) and can position (4 levels). The heating and cooling tests were carried out in a water tank at 98 °C and at 17-20 °C, respectively. From the heat penetration data of each experiment, the heat penetration parameters fh, jh, fc and jc were calculated. The results were modelled using groups of dimensionless numbers and expressed in terms of the Nusselt, Prandtl and Reynolds numbers and trigonometric functions (involving agitation amplitude and frequency, headspace and can dimensions). Two general equations were established for the heating and cooling phases: Heating: Nu = ReA^0.199 · Pr^0.288 · sin(xa/AM)^0.406 · cos(xf/FA)^1.039 · cos((xf/FA)·(EL/H)·p)^4.556; Cooling: Nu = 0.1295 · ReA^0.047 · Pr^0.193 · sin(xa/AM)^0.114 · cos(xf/FA)^0.641 · cos((xf/FA)·(EL/H)·p)^2.476. The ALI process can be applied in pasteurizers or in horizontal and vertical static retorts with simple modifications. It was concluded that ALI significantly increases the heat transfer rate, both during heating and during cooling.
Abstract:
This research is a continuation of, and joint work with, a master's thesis recently completed in this department by Hemamali Chathurangani Yashika Jayathunga. The mathematical system of equations of the designed heat exchanger network synthesis has been extended by adding further equipment, such as heat exchangers, mixers and dividers. The solution of the system is obtained and the optimal setting of the valves (each divider contains a valve) is calculated by introducing grid-based optimization. Finding the best position of the valves leads to maximization of the transferred heat in the hot stream and minimization of the pressure drop in the cold stream. The aim of the thesis is achieved by applying cost optimization to model an optimized network.
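For illustration, a minimal grid-search sketch in the spirit of the grid-based optimization described above is given below; the objective combining transferred heat and a pressure-drop penalty and the two-valve parameterization are invented placeholders, not the network model of the thesis.

    # Minimal sketch of grid-based optimization over valve settings.
    # The objective and the two-valve parameterization are illustrative placeholders.
    import itertools
    import numpy as np

    def objective(v1, v2):
        heat = 100.0 * v1 * (1.0 - 0.3 * v2)         # hypothetical transferred heat
        pressure_drop = 50.0 * (v1 ** 2 + v2 ** 2)   # hypothetical pressure-drop penalty
        return heat - pressure_drop

    grid = np.linspace(0.0, 1.0, 21)                  # valve openings from 0 to 1
    best = max(itertools.product(grid, grid), key=lambda v: objective(*v))
    print("best valve setting:", best, "objective:", objective(*best))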
Abstract:
In this work we study two different one-dimensional quantum systems. The potentials for these systems are a linear potential in an infinite well and an inverted harmonic oscillator in an infinite well. We solve the Schrödinger equation for both systems and obtain the energy eigenvalues and eigenfunctions. The solutions are obtained using the boundary conditions and numerical methods. The motivation for our study comes from experimental background. For the linear potential we use two different boundary conditions. The first is the so-called normal boundary condition, in which the wave function goes to zero at the edge of the well. The second is the derivative boundary condition, in which the derivative of the wave function goes to zero at the edge of the well. The actual solutions are Airy functions. In the case of the inverted oscillator the solutions are parabolic cylinder functions, and they are solved using only the normal boundary condition. Both potentials are compared with the particle-in-a-box solutions. We also present figures and tables from which one can see what the solutions look like; the similarities and differences with the particle-in-a-box solution are also shown visually. The figures and calculations are done using mathematical software. We also compare the linear potential to a case where the infinite wall is only on the left side, and for this case we show graphical information on its different properties. With the inverted harmonic oscillator we take a closer look at quantum mechanical tunneling. We present some of the history of quantum tunneling theory and its developers, and finally we present the Feynman path integral theory, which enables us to obtain the instanton solutions. The instanton solutions are a way to look at the tunneling properties of the quantum system. The results are compared with the solutions of the double-well potential, which as a quantum system is very similar to our case. The solutions are obtained using the same methods, which makes the comparison relatively straightforward. All in all, we consider and go through some of the stages of quantum theory and the different ways to interpret it. We also present the special functions needed in our solutions and look at their properties and relations to other special functions. It is essential to notice that it is possible to use different mathematical formalisms to obtain the desired result. Quantum theory has been built for over one hundred years and it has different approaches; different aspects make it possible to look at different things.
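For reference, the time-independent Schrödinger equation for a linear potential V(x) = Fx inside the well and its general Airy-function solution are given below in standard form; the symbols are conventional and the well parameters of the thesis are not reproduced.

    -\frac{\hbar^{2}}{2m}\,\psi''(x) + F x\,\psi(x) = E\,\psi(x)
    \psi(x) = a\,\mathrm{Ai}\!\left(\left(\tfrac{2mF}{\hbar^{2}}\right)^{1/3}\left(x - \tfrac{E}{F}\right)\right)
            + b\,\mathrm{Bi}\!\left(\left(\tfrac{2mF}{\hbar^{2}}\right)^{1/3}\left(x - \tfrac{E}{F}\right)\right)
    % the conditions \psi = 0 (normal) or \psi' = 0 (derivative) at the well edges
    % fix a, b and select the allowed energies E_n.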