16 results for Extremal polynomial ultraspherical polynomials
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Summary: Genetic parameters of growth traits in pigs estimated using a third-degree polynomial function
Abstract:
The main topic of the thesis is optimal stopping. This is treated in two research articles. In the first article we introduce a new approach to optimal stopping of general strong Markov processes. The approach is based on the representation of excessive functions as expected suprema. We present a variety of examples, in particular, the Novikov-Shiryaev problem for Lévy processes. In the second article on optimal stopping we focus on differentiability of excessive functions of diffusions and apply these results to study the validity of the principle of smooth fit. As an example we discuss optimal stopping of sticky Brownian motion. The third research article offers a survey-like discussion on Appell polynomials. The crucial role of Appell polynomials in optimal stopping of Lévy processes was noticed by Novikov and Shiryaev. They described the optimal rule in a large class of problems via these polynomials. We exploit the probabilistic approach to Appell polynomials and show that many classical results are obtained with ease in this framework. In the fourth article we derive a new relationship between the generalized Bernoulli polynomials and the generalized Euler polynomials.
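The probabilistic approach to Appell polynomials mentioned above can be illustrated with a short sketch. For a random variable X with finite moments, the Appell polynomials Q_n associated with X are characterized by E[Q_n(x + X)] = x^n, which yields a triangular linear system for the coefficients. The function below is an illustration, not code from the thesis; for X standard normal it reproduces the Hermite polynomials.

```python
from math import comb

def appell_coeffs(moments, n):
    """Coefficients a[0..n] of the Appell polynomial Q_n(x) = sum_j a[j] x^j
    determined by E[Q_n(x + X)] = x^n, where moments[k] = E[X^k] (moments[0] = 1)."""
    a = [0.0] * (n + 1)
    a[n] = 1.0  # matching the x^n term forces a monic polynomial
    for i in range(n - 1, -1, -1):
        # the coefficient of x^i in E[Q_n(x + X)] must vanish for i < n
        s = sum(a[j] * comb(j, i) * moments[j - i] for j in range(i + 1, n + 1))
        a[i] = -s
    return a
```

For the standard normal moments (1, 0, 1, 0) this gives Q_3(x) = x^3 - 3x, the third Hermite polynomial, as expected.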
Abstract:
The properties and cosmological importance of a class of non-topological solitons, Q-balls, are studied. Aspects of Q-ball solutions and Q-ball cosmology discussed in the literature are reviewed. Q-balls are particularly considered in the Minimal Supersymmetric Standard Model with supersymmetry broken by a hidden sector mechanism mediated by either gravity or gauge interactions. Q-ball profiles, charge-energy relations and evaporation rates for realistic Q-ball profiles are calculated for general polynomial potentials and for the gravity mediated scenario. In all of the cases, the evaporation rates are found to increase with decreasing charge. Q-ball collisions are studied by numerical means in the two supersymmetry breaking scenarios. It is noted that the collision processes can be divided into three types: fusion, charge transfer and elastic scattering. Cross-sections are calculated for the different types of processes in the different scenarios. The formation of Q-balls from the fragmentation of the Affleck-Dine condensate is studied by numerical and analytical means. The charge distribution is found to depend strongly on the initial energy-charge ratio of the condensate. The final state is typically noted to consist of Q- and anti-Q-balls in a state of maximum entropy. By studying the relaxation of excited Q-balls, the rate at which excess energy can be emitted is calculated in the gravity mediated scenario. The Q-ball is also found to withstand excess energy well without significant charge loss. The possible cosmological consequences of these Q-ball properties are discussed.
Abstract:
In this paper, a new two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation is proposed. The nonlinear elastic forces of the beam element are obtained using a continuum mechanics approach without employing a local element coordinate system. In this study, linear polynomials are used to interpolate both the transverse and longitudinal components of the displacement. This is different from other absolute nodal-coordinate-based beam elements where cubic polynomials are used in the longitudinal direction. The accompanying defects of the phenomenon known as shear locking are avoided through the adoption of selective integration within the numerical integration method. The proposed element is verified using several numerical examples, and the results are compared to analytical solutions and the results for an existing shear deformable beam element. It is shown that by using the proposed element, accurate linear and nonlinear static deformations, as well as realistic dynamic behavior, can be achieved with a smaller computational effort than by using existing shear deformable two-dimensional beam elements.
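A minimal sketch of the linear interpolation described above (illustrative only; the actual element works in absolute nodal coordinates with selective integration): both the longitudinal and the transverse displacement components are interpolated with the same linear shape functions N1 = 1 - ξ and N2 = ξ over the element.

```python
def interpolate_displacement(xi, node1, node2):
    # linear shape functions on the reference coordinate xi in [0, 1],
    # applied to both displacement components (given as (u, v) tuples)
    n1, n2 = 1.0 - xi, xi
    return tuple(n1 * a + n2 * b for a, b in zip(node1, node2))
```

At xi = 0 or 1 the nodal values are recovered exactly, as shape functions require.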
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of the systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions in the form of structural parameters to accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions for the problem are obtained by adopting closed-form classical or modern algebraic solution methods, or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints of structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations and at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method in solving the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved. The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on the combination of the two-precision-point formulation and optimisation (with mathematical programming techniques or by adopting optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multidegree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
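Polynomial continuation, mentioned above as one of the numerical solution methods, can be sketched in its simplest univariate form: the known roots of a start system g are tracked to the roots of the target system f along the homotopy H(x, t) = (1 - t) g(x) + t f(x). This is only an illustrative sketch, far simpler than the multivariate parametric continuation used in the thesis.

```python
def homotopy_roots(f, df, g, dg, g_roots, steps=200):
    # track each root of g along H(x,t) = (1-t)*g(x) + t*f(x) from t=0 to t=1,
    # correcting with a few Newton iterations at every step
    roots = []
    for x0 in g_roots:
        x = complex(x0)
        for k in range(1, steps + 1):
            t = k / steps
            for _ in range(5):  # Newton corrector at the current t
                h = (1 - t) * g(x) + t * f(x)
                dh = (1 - t) * dg(x) + t * df(x)
                x -= h / dh
        roots.append(x)
    return roots
```

Tracking the roots of the start system g(x) = x^2 - 1 to the target f(x) = x^2 - 4 returns approximately +2 and -2.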
Abstract:
In this study, a model for the unsteady dynamic behaviour of a once-through counter flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions since they are all parts of a single tube. The present research is a part of a study on the unsteady dynamics of an organic Rankine cycle power plant and it will be a part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The solution method chosen was to find such a pressure of the process fluid that the mass of the process fluid in the boiler equals the mass calculated using the mass flows into and out of the boiler during a time step, using the finite difference method. A special method of fast calculation of the thermal properties has been used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements. The values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that takes into account a flexible change from one part to the other. The model consists of the calculation of the steady state initial distribution of the variables in the nodes, and the calculation of these nodal values in a dynamic state.
The initial state of the boiler was received from a steady process model that is not a part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperature and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was done. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the achieved results was that there was no possibility to compare them with measurements. This is why the only way was to determine whether the obtained results were intuitively reasonable and the results changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in input values. The differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat source side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of changes is also as expected. The results of the tests on the process fluid side showed that the model gives reasonable results both on the temperature changes that cause small alterations in the process state and on mass flow rate changes causing very great alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly goes below that of the tube wall or the process fluid.
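The Ledinegg criterion described above can be sketched numerically: with the boiler pressure drop modelled as a third-degree polynomial of the mass flow rate, the operating range is statically stable when the slope of the pressure-drop curve stays positive. The coefficients and the simple grid check below are illustrative, not from the thesis.

```python
def ledinegg_stable(c3, c2, c1, c0, m_min, m_max, steps=1000):
    # dp(m) = c3*m^3 + c2*m^2 + c1*m + c0 (pressure drop vs. mass flow rate)
    # static (Ledinegg) stability requires d(dp)/dm > 0 over the operating range
    for k in range(steps + 1):
        m = m_min + (m_max - m_min) * k / steps
        if 3 * c3 * m * m + 2 * c2 * m + c1 <= 0:
            return False
    return True
```

A monotonically rising cubic is stable; a cubic with a falling branch inside the operating range is flagged as unstable.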
Abstract:
The purpose of this research was to conduct a repeated cross-sectional study of class teachers who were studying in their 4th year or had graduated at the Faculty of Education, University of Turku, between the years 2000 and 2004. Specifically, seven research questions were addressed to target the main purpose of the study: How do class teacher education master's degree senior students and graduates rate the importance, effectiveness and quality of the training they have received at the Faculty of Education? Are there significant differences between overall ratings of importance, effectiveness and quality of training by year of graduation, sex, and age (for graduates) and sex and age (for senior students)? Is there a significant relationship between respondents' overall ratings of importance and effectiveness and their overall ratings of the quality of training and preparation they have received? Are there significant differences between graduates and senior students regarding the importance, effectiveness, and quality of teacher education programs? And what do teachers (graduates) believe about how increasing work experience has changed their opinions of their preservice training? Moreover, the following concepts related to the instructional activities were studied: critical thinking skills, communication skills, attention to ethics, curriculum and instruction (planning), role of teacher and teaching knowledge, assessment skills, attention to continuous professional development, subject matter knowledge, knowledge of the learning environment, and use of educational technology. The researcher also tried to find the influence of some moderator variables, e.g. year of graduation, sex, and age, on the dependent and independent variables. This study consisted of two questionnaires (a structured Likert-scale questionnaire and an open-ended questionnaire).
The population in study 1 was all senior students and all 2000-2004 class teacher education master's degree graduates from the Department of Teacher Education, Faculty of Education, at the University of Turku. Of the 1020 students and graduates, the researcher was able to find current addresses for 675 of the subjects, and of the 675 graduates contacted, 439 or 66.2 percent responded to the survey. The population in study 2 was all class teachers who graduated from the University of Turku and now work in a few basic schools (59 schools) in South-West Finland. 257 teachers answered the open-ended web-based questions. SPSS was used to produce standard deviations, analysis of variance, Pearson product-moment correlation (r), t-tests, ANOVA, Bonferroni post-hoc tests, and polynomial contrast tests to analyze linear trends. An alpha level of .05 was used to determine statistical significance. The results of the study showed that a majority of the respondents (graduates and students) rated the overall importance, effectiveness and quality of the teacher education programs as important, effective and good. Generally speaking, there were only a few significant differences between the cohorts and groups related to the background variables (gender, age). The different cohorts rated the quality of the programs very similarly, but some differences between the cohorts were found in the importance and effectiveness ratings. Graduates of 2001 and 2002 rated the importance of the program significantly higher than 2000 graduates. The effectiveness of the programs was rated significantly higher by 2001 and 2003 graduates than by other groups. In spite of these individual differences between cohorts, there were no linear trends among the year cohorts in any measure. In respondents' ratings of the effectiveness of teacher education programs there was a significant difference between males and females; females rated it higher than males.
There were no significant differences between males' and females' ratings of the importance and quality of the programs. In the ratings there was only one difference between age groups: older graduates (35 years or older) rated the importance of the teacher training significantly higher than 25-35-year-old graduates. In graduates' ratings there were positive but relatively low correlations between all variables related to the importance, effectiveness and quality of the teacher education programs. Generally speaking, students' ratings of the importance, effectiveness and quality of the teacher education program were very positive. There was only one significant difference related to the background variables: females rated the effectiveness of the program higher. The comparison of students' and graduates' perceptions of the importance, effectiveness, and quality of teacher education programs showed that there were no significant differences between graduates and students in the overall ratings. However, there were differences in some individual variables. Students gave higher ratings to the importance of "Continuous Professional Development", the effectiveness of "Critical Thinking Skills" and "Using Educational Technology", and the quality of "Advice received from the advisor". Graduates gave higher ratings to the importance of "Knowledge of Learning Environment" and the effectiveness of "Continuous Professional Development". According to the qualitative data of study 2, some graduates expressed that their perceptions had not changed about the importance, effectiveness, and quality of the training they received during their study time. They pointed out that the teacher education programs had provided them with basic theoretical/formal knowledge and some training in practical routines. However, a majority of the teachers seem to have somewhat critical opinions about the teacher education.
These teachers were not satisfied with the teacher education programs because, they argued, the programs failed to meet their practical demands in the different everyday situations of the classroom, e.g. in coping with students' learning difficulties, multiprofessional communication with parents and other professional groups (psychologists and social workers), and classroom management problems. Participants also emphasized more practice-oriented knowledge of subject matter, evaluation methods, and teachers' rights and responsibilities. Therefore, they (54.1% of participants) suggested that teacher education departments should provide more practice-based courses and programs, as well as closer collaboration between regular schools and teacher education departments, in order to fill the gap between theory and practice.
Abstract:
Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without a proper analysis. This has been one of the reasons for the ever-growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This makes certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, thus making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should be different than in the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. Differences in data analysis methods are compared with data from different fields of industry in this thesis. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries. The results are compared to those from PLS and priority PLS.
The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry. The response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS, and nonlinear modeling using nonlinear score vectors.
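The idea of polynomial PLS compared in the fourth paper can be sketched for a single component: the usual PLS score t = Xw is computed, but the inner relation between the score and the response is fitted with a quadratic instead of a straight line. The sketch below (pure Python, data assumed centred and scaled; illustrative only, not the thesis implementation) fits the inner polynomial by ordinary least squares via the normal equations.

```python
def solve3(A, b):
    # tiny Gaussian elimination with partial pivoting for a 3x3 system (illustration only)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def poly_pls1(X, y):
    # one-component PLS with a quadratic inner relation y ~ a2*t^2 + a1*t + a0,
    # where t = X w is the usual PLS score
    n, m = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(X[i][j] * w[j] for j in range(m)) for i in range(n)]
    # least-squares fit of y on (t^2, t, 1) via the normal equations
    A = [[sum(ti ** (p + q) for ti in t) for q in (2, 1, 0)] for p in (2, 1, 0)]
    b = [sum((ti ** p) * yi for ti, yi in zip(t, y)) for p in (2, 1, 0)]
    a2, a1, a0 = solve3(A, b)
    return w, (a2, a1, a0)
```

With a single descriptor and a purely quadratic response the inner polynomial fits exactly, whereas a linear inner relation cannot.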
Abstract:
The ongoing development of the digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from the spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinctive areas – image quality itself and image fidelity, both dealing with similar questions, image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, which include kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of a traditional gray-scale SSIM measure developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence the overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable difference). Both image fidelity measures and the image quality model have proven to be effective in the respective experiments.
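The kernel similarity measures mentioned above combine pixel spectra through standard kernels. A minimal sketch of the polynomial and Gaussian RBF kernels applied to two spectra given as equal-length sequences (the parameter values are illustrative, not those of the thesis):

```python
import math

def polynomial_kernel(a, b, degree=2, c=1.0):
    # (a.b + c)^degree for two spectra
    dot = sum(x * y for x, y in zip(a, b))
    return (dot + c) ** degree

def rbf_kernel(a, b, gamma=0.5):
    # exp(-gamma * ||a - b||^2); equals 1 for identical spectra
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * d2)
```

The RBF kernel gives a bounded similarity in (0, 1], which makes it convenient as a per-pixel fidelity score.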
Abstract:
In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article "On Maximum-Likelihood Detection and the Search for the Closest Lattice Point", published by M.O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space-time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications of the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
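A minimal sketch of the sphere decoding idea (not the Damen-El Gamal-Caire algorithm itself): with the channel reduced to an upper-triangular matrix R (e.g. by a QR decomposition), candidate symbol vectors are enumerated depth-first from the last component, and any partial candidate whose accumulated squared distance already exceeds the best point found so far (or the initial radius) is pruned.

```python
def sphere_decode(R, y, radius, symbols):
    # R: upper-triangular channel matrix (n x n), y: received vector,
    # symbols: the finite constellation searched at each level
    n = len(R)
    best = [radius ** 2, None]  # squared radius shrinks as better points appear

    def search(level, s, dist2):
        if level < 0:
            if dist2 < best[0]:
                best[0], best[1] = dist2, s[:]
            return
        # residual at this level given the already fixed symbols s[level+1:]
        r = y[level] - sum(R[level][j] * s[j] for j in range(level + 1, n))
        for c in symbols:
            d2 = dist2 + (r - R[level][level] * c) ** 2
            if d2 <= best[0]:  # prune branches outside the current sphere
                s[level] = c
                search(level - 1, s, d2)

    search(n - 1, [None] * n, 0.0)
    return best[1]
```

For a 2x2 triangular channel with BPSK symbols {-1, +1} and a mildly noisy observation, the decoder recovers the transmitted vector.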
Abstract:
This master's thesis investigates the moduli of families of curves and the capacities of the Grötzsch and Teichmüller rings, which are applied in the main parts of the thesis. The extremal properties of these rings are discussed in connection with spherical symmetrization. Applications are given to the study of the distortion of quasiconformal maps in Euclidean n-dimensional space.
Abstract:
Adaptive control systems are one of the most significant research directions of modern control theory. It is well known that every mechanical appliance's behavior depends noticeably on environmental changes, changes in functioning-mode parameters, and changes in the technical characteristics of internal functional devices. An adaptive controller involved in the control process makes it possible to reduce the influence of such changes. In spite of this, such control methods are seldom applied, due to the specifics of controller design. The work presented in this paper shows the design process of an adaptive controller built by Lyapunov's function method for a hydraulic drive. The necessary calculations and the modeling were conducted with MATLAB® software, including Simulink® and the Symbolic Math Toolbox™. The work applied Jacobian linearization of the object's mathematical model and derivation of suitable reference models based on Newton's characteristic polynomial. An intelligent algorithm, adaptive to nonlinearities, for solving Lyapunov's equation was developed. The developed algorithm works properly, but the considered plant does not meet the requirements for functioning with it. The results confirmed that the application of adaptive systems significantly increases the possibilities of using devices and might be used for correcting a system's dynamic behavior.
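Lyapunov's equation mentioned above, A^T P + P A = -Q, reduces for a 2x2 system and symmetric P to three linear equations in p11, p12, p22. The sketch below solves them by Cramer's rule (illustrative only; the thesis works symbolically in MATLAB):

```python
def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def lyapunov_2x2(A, Q):
    # solve A^T P + P A = -Q for symmetric P = [[p11, p12], [p12, p22]];
    # expanding the matrix equation entrywise gives three linear equations
    a, b = A[0]
    c, d = A[1]
    M = [[2 * a, 2 * c, 0],        # (1,1) entry: 2a*p11 + 2c*p12 = -q11
         [b, a + d, c],            # (1,2) entry: b*p11 + (a+d)*p12 + c*p22 = -q12
         [0, 2 * b, 2 * d]]        # (2,2) entry: 2b*p12 + 2d*p22 = -q22
    rhs = [-Q[0][0], -Q[0][1], -Q[1][1]]
    D = det3(M)
    sol = []
    for i in range(3):  # Cramer's rule: replace column i with the right-hand side
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = rhs[r]
        sol.append(det3(Mi) / D)
    p11, p12, p22 = sol
    return [[p11, p12], [p12, p22]]
```

For a stable diagonal A = diag(-1, -2) and Q = I, this returns the expected P = diag(1/2, 1/4).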
Abstract:
Distributed storage systems are studied. Interest in such systems has become relatively wide due to the increasing amount of information that needs to be stored in data centers or different kinds of cloud systems. There are many kinds of solutions for storing the information on distributed devices, depending on the needs of the system designer. This thesis studies questions of designing such storage systems and also the fundamental limits of such systems. Namely, the subjects of interest of this thesis include heterogeneous distributed storage systems, distributed storage systems with the exact repair property, and locally repairable codes. For distributed storage systems with either functional or exact repair, capacity results are proved. In the case of locally repairable codes, the minimum distance is studied. Constructions of exact-repairing codes between the minimum bandwidth regeneration (MBR) and minimum storage regeneration (MSR) points are given. These codes exceed the time-sharing line of the extremal points in many cases. Other properties of exact-regenerating codes are also studied. For the heterogeneous setup, the main result is that the capacity of such systems is always smaller than or equal to the capacity of a homogeneous system with symmetric repair with average node size and average repair bandwidth. A randomized construction of a locally repairable code with good minimum distance is given. It is shown that a random linear code of a certain natural type has a good minimum distance with high probability. Other properties of locally repairable codes are also studied.
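The functional-repair capacity underlying the homogeneous comparison above has a well-known cut-set form: a data collector contacting k nodes can recover C = sum over i = 0..k-1 of min(alpha, (d - i) * beta), where alpha is the node size, d the number of repair helpers and beta the per-helper repair bandwidth. A one-line sketch (the parameter names are illustrative):

```python
def regenerating_capacity(k, d, alpha, beta):
    # cut-set capacity of a homogeneous regenerating-code system:
    # a data collector contacts k nodes; each repair downloads beta from d helpers
    return sum(min(alpha, (d - i) * beta) for i in range(k))
```

The MSR and MBR points are the two extremes of this trade-off: at MSR the minimum is always attained by alpha, and at MBR by the bandwidth terms.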
Abstract:
This thesis presents theories and results of the Russian mathematician A.I. Shirshov on the combinatorics of words, and shows how they apply to the world of PI-algebras. In examining Shirshov's results, words are treated as distinct combinatorial objects, and Shirshov's Lemma, the foundation of this work, is proved. According to the lemma, sufficiently long words exhibit a certain regularity; the lemma is proved three times. The first proof yields the existence of a sufficiently long word. The second proof follows Shirshov's original proof. The third proof gives, for a sufficiently long word, a bound that is better suited to practice. After this, words are treated as algebraic objects. As the main result of the thesis, Shirshov's Height Theorem is proved, according to which every element of a finitely generated PI-algebra is a linear combination of words ω_1^{k_1} ··· ω_d^{k_d}, where the lengths of the words ω_i as well as the index i are bounded. Shirshov's Height Theorem directly yields a positive solution to the Kurosh problem for PI-algebras, as well as a bound on the number of elements with which the algebra is generated as a module. As a second application, the applicability of Shirshov's results to the nilpotency of the Jacobson radical is presented without proofs. The principal source is the book Computational Aspects of Polynomial Identities by A. Kanel-Belov and L.H. Rowen.