45 results for "prediction equations"

Relevance: 20.00%

Publisher:

Abstract:

The offset printing process is complex and involves the meeting of two essentially complex materials, printing ink and paper, upon which the final product is formed. A multitude of chemical and physical interactions and mechanisms can therefore be expected to take place at the ink-paper interface. Interactions between ink and paper are of interest to both papermakers and ink producers, as they wish to achieve better quality in the final product. The objective of this work is to clarify the combined influence of the paper coating structure, the printing ink and the fountain solution on ink setting and on the problems related to ink setting. A further aim is to identify the mechanisms that influence ink setting problems, and to counteract them by changing the properties of the coating layer or of the ink. The work carried out for this thesis used many techniques, ranging from standard paper and printability tests to advanced optical techniques for detecting ink filaments during ink levelling. Modern imaging methods were applied to assess the sizes of remaining ink filaments and the distribution of ink components inside pigment coating layers. A gravimetric filtration method and assessment of print rub using the Ink-Surface-Interaction Tester (ISIT) were used to study the influence of ink properties on ink setting. The chemical interactions were observed with the help of modified thin-layer chromatography and contact angle measurements using both conventional and high-speed imaging. The results of the papers in this thesis link the press operational parameters to filament sizes and show the influence of these parameters on the filament size distribution. The relative importance of the press operation parameters was shown to vary. The size distribution of the filaments is important in predicting the ink setting behaviour, as highlighted by the dynamic gloss and ink setting studies.
Prediction of the ink setting behaviour was further improved by using separate permeability factors for different ink types in the filtration equations. The roles of the ink components were studied in connection with ink absorption and the mechanism of print rub. The total solids content and the ratio of linseed oil to mineral oil were found to determine the degree of print rub on coated papers. Wax addition improved print rub resistance, but did not decrease print rub as much as lowering the total solids content of the ink. Linseed oil was shown to absorb into the pigment coating pores by a mechanism of adsorption onto the pore walls, which highlights the need for sufficient pore surface area for improved chromatographic separation of the ink components. These results should help press operators, suppliers of printing presses, papermakers and suppliers to papermakers to better understand the materials and operating conditions of the press as they relate to various print quality issues. Even though paper is in competition with electronic media, high-quality printed products are still in demand, and the results should provide useful information for this segment of the industry.
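
The filtration-style setting models mentioned above can be illustrated with a minimal sketch. The functional form is a generic Darcy/Lucas-Washburn-type square-root law, and all parameter values here are illustrative assumptions, not results from the thesis:

```python
import math

def setting_thickness(t, K):
    """Filtration-type law: the immobilized ink layer grows as sqrt(time).

    K is an effective permeability factor lumping together coating
    permeability, driving capillary pressure and ink-phase viscosity;
    using a separate K per ink type mirrors the idea that different
    inks require different permeability factors.
    """
    return K * math.sqrt(t)

# illustrative permeability factors (arbitrary units)
K_ink = {"quickset": 0.8, "conventional": 0.5}

for ink, K in K_ink.items():
    print(ink, [round(setting_thickness(t, K), 3) for t in (0.5, 1.0, 2.0)])
```

A larger K means faster immobilization of the ink vehicle, i.e., faster ink setting on that coating.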

Abstract:

The three main topics of this work are independent systems and chains of word equations, parametric solutions of word equations in three unknowns, and unique decipherability in the monoid of regular languages. The most important result about independent systems is a new method giving an upper bound on their sizes in the case of three unknowns. The bound depends on the length of the shortest equation. This result generalizes to decreasing chains and to more than three unknowns, and the method also leads to shorter proofs and generalizations of some older results. Hmelevskii's theorem states that every word equation in three unknowns has a parametric solution. We give a significantly simplified proof of this theorem. As a new result, we estimate the lengths of parametric solutions and obtain a bound on the length of the minimal nontrivial solution and on the complexity of deciding whether such a solution exists. The unique decipherability problem asks whether given elements of some monoid form a code, that is, whether they satisfy no nontrivial equation. We give characterizations of when a collection of unary regular languages is a code, and we prove that it is undecidable whether a collection of binary regular languages is a code.
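
For finite words (as opposed to the regular languages treated in the thesis), the unique decipherability question is decidable by the classical Sardinas-Patterson algorithm. The following sketch illustrates that finite case only; it says nothing about the undecidability result for binary regular languages:

```python
def is_code(words):
    """Sardinas-Patterson test: a set of words is a code iff the
    iterated dangling-suffix computation never yields the empty word."""
    W = set(words)

    def quot(A, B):
        # left quotients A^{-1}B: suffixes s with a + s = b
        return {b[len(a):] for a in A for b in B if b.startswith(a)}

    U = quot(W, W) - {""}     # initial dangling suffixes (drop trivial eps)
    seen = set()
    while U:
        if "" in U:
            return False      # some message has two factorizations
        seen |= U
        U = (quot(W, U) | quot(U, W)) - seen
    return True

print(is_code({"0", "10", "110"}))  # prefix code -> True
print(is_code({"0", "01", "10"}))   # "010" factors two ways -> False
```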

Abstract:

In this thesis, a classification problem in predicting the creditworthiness of a customer is tackled by proposing a reliable classification procedure for a given data set. The aim of this thesis is to design a model that gives the best classification accuracy for effectively predicting bankruptcy. The FRPCA techniques proposed by Yang and Wang are preferred since they are tolerant to certain types of noise in the data. These include FRPCA1, FRPCA2 and FRPCA3, from which the best method is chosen. Two different approaches are used at the classification stage: a similarity classifier and a fuzzy k-nearest-neighbor (FKNN) classifier. The algorithms are tested on the Australian credit card screening data set. The results indicate a mean classification accuracy of 83.22% using FRPCA1 with the similarity classifier. The FKNN approach yields a mean classification accuracy of 85.93% when used with FRPCA2, making it the better method for suitable choices of the number of nearest neighbors and the fuzziness parameter. Details on the calibration of the fuzziness parameter and the other parameters associated with the similarity classifier are discussed.
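
As an illustration of the FKNN idea, here is a generic fuzzy k-nearest-neighbor sketch in the style of Keller et al.; the toy data and parameter values are assumptions, not the Australian credit data or the thesis' tuned parameters:

```python
import numpy as np

def fknn_predict(x, X, y, k=3, m=2.0):
    """Fuzzy k-NN: class memberships are inverse-distance weighted over
    the k nearest training points; m > 1 is the fuzziness parameter."""
    d = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    scores = {}
    for idx, wi in zip(nn, w):
        scores[y[idx]] = scores.get(y[idx], 0.0) + wi
    memberships = {c: s / w.sum() for c, s in scores.items()}
    return max(memberships, key=memberships.get), memberships

# toy 2-D feature vectors standing in for credit-screening features
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = ["accept", "accept", "reject", "reject"]
label, mem = fknn_predict(np.array([0.2, 0.1]), X, y, k=3)
print(label, mem)
```

The graded memberships, rather than a hard vote, are what distinguish FKNN from plain k-NN and what the fuzziness parameter m calibrates.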

Abstract:

Cyanobacteria are unicellular, non-nitrogen-fixing prokaryotes which perform photosynthesis similarly to higher plants. The cyanobacterium Synechocystis sp. strain PCC 6803 is used as a model organism in photosynthesis research. The research described herein aims at understanding the function of the photosynthetic machinery and how it responds to changes in the environment. Detailed knowledge of the regulation of photosynthesis in cyanobacteria can be utilized for biotechnological purposes, for example in harnessing solar energy for biofuel production. In photosynthesis, iron participates in electron transfer. Here, we focused on iron transport in Synechocystis sp. strain PCC 6803, and particularly on the environmental regulation of the genes encoding the FutA2BC ferric iron transporter, which belongs to the ABC transporter family. A homology model built for the ATP-binding subunit FutC indicates that it has a functional ATP-binding site as well as conserved interactions with the channel-forming subunit FutB in the transporter complex. Polyamines are important for cell proliferation, differentiation and apoptosis in prokaryotic and eukaryotic cells. In plants, polyamines have special roles in the stress response and in plant survival. The polyamine metabolism of cyanobacteria in response to environmental stress is of interest in research on the stress tolerance of higher plants. In this thesis, the potD gene, encoding a polyamine transporter subunit, from Synechocystis sp. strain PCC 6803 was characterized for the first time. A homology model built for the PotD protein indicated that it is capable of binding polyamines, with a preference for spermidine. Furthermore, in order to investigate the structural basis of the substrate specificity, polyamines were docked into the binding site. Spermidine was positioned very similarly in Synechocystis PotD as in the template structure and had the most favorable interactions of the docked polyamines.
Based on the homology model, experimental work was conducted which confirmed the binding preference. Flavodiiron proteins (Flv) are enzymes that protect the cell against the toxicity of oxygen and/or nitric oxide by reduction. In this thesis, we present a novel type of photoprotection mechanism in cyanobacteria, based on the Flv2/Flv4 heterodimer. The constructed homology model of Flv2/Flv4 suggests a functional heterodimer capable of rapid electron transfer. The protein of unknown function sll0218, encoded by the flv2-flv4 operon, is assumed to facilitate the interaction of the Flv2/Flv4 heterodimer and the energy transfer between the phycobilisome and PSII. Flv2/Flv4 provides an alternative electron transfer pathway and functions as an electron sink in PSII electron transfer.

Abstract:

State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm for retrieving atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main methodological contribution of this dissertation is the computation of likelihoods via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, for example, to the model error covariance matrix.
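
The Lorenz 95 benchmark mentioned above is easy to reproduce; the following sketch integrates the standard (unparameterized) Lorenz 95 model with a fourth-order Runge-Kutta scheme. The 40 variables and forcing F = 8 are the conventional benchmark settings; the step size is an illustrative choice:

```python
import numpy as np

def lorenz95(x, F=8.0):
    """Lorenz 95 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    # classical fourth-order Runge-Kutta step
    k1 = lorenz95(x, F)
    k2 = lorenz95(x + 0.5 * dt * k1, F)
    k3 = lorenz95(x + 0.5 * dt * k2, F)
    k4 = lorenz95(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# perturb the unstable equilibrium x_i = F and let chaos develop
x = np.full(40, 8.0)
x[0] += 0.01
for _ in range(500):
    x = rk4_step(x, 0.05)
print(x.mean(), x.std())
```

In a data assimilation experiment, such trajectories serve as the "truth" from which noisy observations are generated for the filters under study.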

Abstract:

A stochastic differential equation (SDE) is a differential equation in which some of the terms, and hence its solution, are stochastic processes. SDEs play a central role in modeling physical systems in fields such as finance, biology and engineering. In the modeling process, computing the trajectories (sample paths) of solutions to SDEs is very important. However, the exact solution of an SDE is generally difficult to obtain due to the non-differentiability of realizations of Brownian motion. Approximation methods for solutions of SDEs exist; the solutions are continuous stochastic processes that represent diffusive dynamics, a common modeling assumption for financial, biological, physical and environmental systems. This Master's thesis is an introduction to and survey of numerical solution methods for stochastic differential equations. Standard numerical methods, local linearization methods and filtering methods are described in detail. We compute the root mean square error for each method, on the basis of which we propose a preferred numerical scheme. Stochastic differential equations can also be formulated from a given ordinary differential equation. In this thesis, we describe two kinds of formulation: parametric and non-parametric techniques. The formulation is based on the epidemiological SEIR model. These methods tend to increase the number of parameters in the constructed SDEs and hence require more data. We compare the two techniques numerically.
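
The simplest of the standard schemes surveyed, the Euler-Maruyama method, can be sketched as follows. Geometric Brownian motion is chosen here because its exact solution is known, so the pathwise error can be measured; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# geometric Brownian motion: dX = mu*X dt + sigma*X dW, X(0) = X0
mu, sigma, X0, T, n = 0.1, 0.2, 1.0, 1.0, 1000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments

# Euler-Maruyama: X_{k+1} = X_k + mu*X_k*dt + sigma*X_k*dW_k
X = X0
for dw in dW:
    X = X + mu * X * dt + sigma * X * dw

# exact solution driven by the same Brownian path
X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())

print(abs(X - X_exact))   # pathwise error; shrinks as dt -> 0
```

Repeating this over many sample paths and step sizes gives the root mean square errors used to rank the schemes.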

Abstract:

In this thesis we examine four well-known and traditional concepts of combinatorics on words; however, the contexts in which these topics are treated are not the traditional ones. More precisely, the question of avoidability is asked, for example, in terms of k-abelian squares. Two words are said to be k-abelian equivalent if they have the same number of occurrences of each factor up to length k. Consequently, k-abelian equivalence can be seen as a sharpening of abelian equivalence. This fairly new concept is discussed more broadly than the other topics of this thesis. The second main subject concerns the defect property. The defect theorem is a well-known result for words. We analyze the property, for example, among sets of two-dimensional words, i.e., polyominoes composed of labelled unit squares. From the defect effect we move to equations. We use a special way of defining a product operation for words and then solve a few basic equations over the constructed partial semigroup. We also consider the satisfiability question and the compactness property with respect to this kind of equation. The final topic of the thesis deals with palindromes. Some finite words, including all binary words, are uniquely determined up to word isomorphism by the positions and lengths of some of their palindromic factors. The famous Thue-Morse word has the property that for each positive integer n, there exists a factor which cannot be generated by fewer than n palindromes. We prove that, in general, every word that is not ultimately periodic contains a factor which cannot be generated by fewer than 3 palindromes, and we obtain a classification of those binary words each of whose factors is generated by at most 3 palindromes. Surprisingly, these words are related to another much-studied set of words, the Sturmian words.
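
The notion of k-abelian equivalence is straightforward to check directly from its definition, as the following sketch shows (the example pair is an illustrative choice):

```python
from collections import Counter

def k_abelian_equivalent(u, v, k):
    """True iff u and v contain the same number of occurrences of every
    factor of length at most k (for k = 1 this is abelian equivalence)."""
    if len(u) != len(v):
        return False
    for m in range(1, k + 1):
        cu = Counter(u[i:i + m] for i in range(len(u) - m + 1))
        cv = Counter(v[i:i + m] for i in range(len(v) - m + 1))
        if cu != cv:
            return False
    return True

# "abaab" and "aabab" share all factor counts up to length 2,
# but differ on length-3 factors ("baa" vs "bab")
print(k_abelian_equivalent("abaab", "aabab", 2))  # True
print(k_abelian_equivalent("abaab", "aabab", 3))  # False
```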

Abstract:

Linear prediction is one of the established numerical methods of signal processing. In the field of optical spectroscopy it is used mainly for extrapolating known parts of an optical signal in order to obtain a longer one, or for deducing missing signal samples. The former is needed particularly when narrowing spectral lines for the purpose of spectral information extraction. In the present paper, coherent anti-Stokes Raman scattering (CARS) spectra were investigated. The spectra were significantly distorted by the presence of a nonlinear nonresonant background. In addition, the line shapes were far from Gaussian/Lorentzian profiles. To overcome these disadvantages, the maximum entropy method (MEM) for phase spectrum retrieval was used. The broad MEM spectra obtained were then subjected to linear prediction analysis in order to narrow them.
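
Forward linear prediction of the kind described can be sketched as follows. This is a generic least-squares autoregressive extrapolation; the model order and the noiseless test signal are illustrative assumptions, not the CARS data:

```python
import numpy as np

def lp_extrapolate(x, order, n_new):
    """Fit AR coefficients a so that x[n] ~ sum_k a[k] * x[n-1-k]
    by least squares, then extrapolate n_new further samples."""
    rows = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    out = list(x)
    for _ in range(n_new):
        out.append(np.dot(a, out[-1:-order - 1:-1]))
    return np.array(out)

# a noiseless damped cosine obeys an order-2 linear recurrence,
# so linear prediction recovers the truncated tail exactly
n = np.arange(64)
sig = np.exp(-0.02 * n) * np.cos(0.3 * n)
ext = lp_extrapolate(sig[:48], order=2, n_new=16)
print(np.max(np.abs(ext[48:] - sig[48:])))   # extrapolation error
```

In the spectroscopic setting the same idea extends a measured signal segment, which after Fourier transformation yields narrower lines.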

Abstract:

This thesis studies the predictability of market switching and delisting events on the OMX First North Nordic multilateral stock exchange using financial statement information and market information from 2007 to 2012. The study was conducted using a three-stage process. In the first stage, the relevant theoretical framework and an initial variable pool were constructed. In the second stage, exploratory analysis of the initial variable pool was performed in order to narrow down and identify the relevant variables; this analysis was conducted using the self-organizing map methodology. In the third stage, predictive modeling was carried out with random forest and support vector machine methodologies. It was found that the exploratory analysis was able to identify the relevant variables. The results indicate that market switching and delisting events can be predicted to some extent. The empirical results also support the usefulness of financial statement and market information in predicting market switching and delisting events.
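
The random forest stage can be illustrated with a minimal stand-in: bagging (bootstrap aggregation) of decision stumps on a one-dimensional toy feature. This is a deliberately simplified sketch of the ensemble idea, not the thesis' actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: one financial ratio per firm; label 1 = delisted
X = np.concatenate([rng.normal(-1.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

def fit_stump(x, labels):
    # choose the threshold minimizing misclassification of "x > t -> 1"
    best_t, best_err = 0.0, 1.0
    for t in np.unique(x):
        err = np.mean((x > t) != labels)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(x_new, n_trees=51):
    votes = 0
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        votes += int(x_new > fit_stump(X[idx], y[idx]))
    return int(votes > n_trees / 2)             # majority vote

print(bagged_predict(2.0), bagged_predict(-2.0))
```

A real random forest also subsamples features at each split and grows deep trees; a support vector machine would instead replace the vote with a maximum-margin decision boundary.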

Abstract:

The main objective of this master's thesis is to examine whether Weibull analysis is a suitable method for warranty forecasting in the Case Company. The Case Company has used Reliasoft's Weibull++ software, which is based on the Weibull method, but the Company has noticed that the analysis has not given correct results. The study was conducted by making Weibull simulations in different profit centers of the Case Company and then comparing actual costs to forecasted costs. Simulations were made using different time frames and two methods for determining future deliveries. The first sub-objective is to examine which simulation parameters give the best result for each profit center. The second sub-objective is to create a simple control model for following forecasted costs and actual realized costs. The third sub-objective is to document all Qlikview parameters of the profit centers. This study is constructive research, and solutions to the company's problems are worked out in this master's thesis. The theory part introduces quality topics, for example what quality is, quality costing, and the cost of poor quality. Quality is one of the major aspects in the Case Company, so understanding the link between quality and warranty forecasting is important. Warranty management and other tools for warranty forecasting are also introduced, together with the Weibull method, its mathematical properties, and reliability engineering. The main result of this master's thesis is that the Weibull analysis forecasted too high costs when calculating the provision. Although some forecasted values for profit centers were lower than the actual values, the method works better for planning purposes. One of the reasons is that quality improvement, or alternatively quality deterioration, does not show in the results of the analysis in the short run.
The other reason for the excessively high values is that the products of the Case Company are complex and the analyses were made at the profit-center level. The Weibull method was developed for standard products, but the products of the Case Company consist of many complex components. According to the theory, the method was developed for homogeneous data. The most important finding is therefore that the analysis should be made at the product level, not the profit-center level, where the data are more homogeneous.
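
The core Weibull forecasting calculation can be sketched as follows. This is a generic two-parameter Weibull claim forecast; the shape, scale and fleet numbers are illustrative assumptions, not the Case Company's values or the Weibull++ implementation:

```python
import math

def weibull_cdf(t, beta, eta):
    """Two-parameter Weibull probability of failure by time t
    (beta = shape, eta = scale / characteristic life)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def expected_claims(n_units, age, warranty, beta, eta):
    """Expected further failures before the warranty ends for n_units
    that have already survived to `age` (conditional on survival)."""
    p_surv = 1.0 - weibull_cdf(age, beta, eta)
    p_fail = weibull_cdf(warranty, beta, eta) - weibull_cdf(age, beta, eta)
    return n_units * p_fail / p_surv

# illustrative fleet: 500 units, 6 months in service, 24-month warranty;
# beta > 1 indicates wear-out failures, beta < 1 infant mortality
print(round(expected_claims(500, 6.0, 24.0, beta=1.3, eta=120.0), 1))
```

Summing such conditional forecasts over delivery cohorts and comparing them month by month against realized claims would give a simple control model of the kind proposed in the thesis.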

Abstract:

Wind energy has attracted great expectations due to the risks of global warming and nuclear power plant accidents. Nowadays, wind farms are often constructed in areas of complex terrain. A potential wind farm location must have its site thoroughly surveyed and its wind climatology analyzed before any hardware is installed. Modeling of Atmospheric Boundary Layer (ABL) flows over complex terrains containing, e.g., hills, forests and lakes is therefore of great interest in wind energy applications, as it can help in locating and optimizing wind farms. Numerical modeling of wind flows using Computational Fluid Dynamics (CFD) has become a popular technique during the last few decades. Due to the inherent flow variability and large-scale unsteadiness typical of ABL flows in general, and especially over complex terrains, the flow can be difficult to predict accurately enough using the Reynolds-Averaged Navier-Stokes (RANS) equations. Large-Eddy Simulation (LES) resolves the largest, and thus most important, turbulent eddies and models only the small-scale motions, which are more universal than the large eddies and thus easier to model. LES is therefore expected to be more suitable for this kind of simulation, although it is computationally more expensive than the RANS approach. With the fast development of computers and open-source CFD software in recent years, the application of LES to atmospheric flows is becoming increasingly common. The aim of this work is to simulate atmospheric flows over realistic and complex terrains by means of LES, with the evaluation of potential inland wind park locations as the main application. The thesis reports the development of an LES methodology for simulating atmospheric flows over realistic terrains, and also aims at validating the methodology at real scale.
In the thesis, LES is carried out for flow problems ranging from basic channel flows to real atmospheric flows over one of the most recent real-life complex terrain cases, the Bolund hill. All the simulations reported in the thesis are carried out using a new OpenFOAM®-based LES solver. The solver uses a 4th-order time-accurate Runge-Kutta scheme and a fractional step method. Moreover, the development of the LES methodology pays special attention to two boundary conditions: the upstream (inflow) and wall boundary conditions. The upstream boundary condition is generated using the so-called recycling technique, in which the instantaneous flow properties are sampled on a plane downstream of the inlet and mapped back to the inlet at each time step. This technique develops the upstream boundary-layer flow together with the inflow turbulence without any precursor simulation, and thus within a single computational domain. The roughness of the terrain surface is modeled by implementing a new wall function into OpenFOAM® during the thesis work. Both the recycling method and the newly implemented wall function are validated for channel flows at relatively high Reynolds number before being applied to the atmospheric flow applications. After validating the LES model on simple flows, simulations are carried out for atmospheric boundary-layer flows over two types of hills: first, two-dimensional wind-tunnel hill profiles, and second, the Bolund hill located in Roskilde Fjord, Denmark. For the two-dimensional wind-tunnel hills, the study focuses on the overall flow behavior as a function of the hill slope. Moreover, the simulations are repeated using another wall function suitable for smooth surfaces, which already existed in OpenFOAM®, in order to study the sensitivity of the flow to the surface roughness in ABL flows. The simulated results obtained using the two wall functions are compared against the wind-tunnel measurements.
It is shown that LES using the implemented wall function produces overall satisfactory results for the turbulent flow over the two-dimensional hills. The prediction of the flow separation and the reattachment length for the steeper hill is closer to the measurements than in the other numerical studies reported in the past for the same hill geometry. The field measurement campaign performed over the Bolund hill provides the most recent field-experiment dataset on the mean flow and turbulence properties, and a number of research groups have simulated the wind flow over the hill. Due to its challenging features, such as the almost vertical hill slope, it is considered an ideal experimental test case for validating micro-scale CFD models for wind energy applications. In this work, the simulated results obtained for two wind directions are compared against the field measurements. It is shown that the present LES can reproduce the complex turbulent wind flow structures over a complicated terrain such as the Bolund hill. In particular, the present LES results show the best prediction of the turbulent kinetic energy, with an average error of 24.1%, which is 43% smaller than any other model results reported in the past for the Bolund case. Finally, the validated LES methodology is demonstrated by simulating the wind flow over the existing Muukko wind farm in South-Eastern Finland. The simulation is carried out for only one wind direction, and the results for the instantaneous and time-averaged wind speeds are briefly reported. The demonstration case is followed by a discussion of the practical aspects of LES for wind resource assessment over a realistic inland wind farm.
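
The rough-wall treatment can be illustrated with the neutral atmospheric log law on which such wall functions are typically built. This is a generic sketch; the actual wall function implemented in the solver is not reproduced here, and the roughness length value is an illustrative assumption:

```python
import math

KAPPA = 0.41  # von Karman constant

def u_log(z, u_star, z0):
    """Neutral-ABL log law: mean wind speed at height z above a surface
    with aerodynamic roughness length z0."""
    return (u_star / KAPPA) * math.log(z / z0)

def u_star_from_sample(u, z, z0):
    """Invert the log law: friction velocity from one sampled speed,
    as a wall function does from the first off-wall cell value."""
    return KAPPA * u / math.log(z / z0)

# 8 m/s sampled at 10 m height over z0 = 0.03 m (open grassland)
u_star = u_star_from_sample(8.0, 10.0, 0.03)
print(round(u_star, 3), round(u_log(40.0, u_star, 0.03), 2))
```

The friction velocity recovered this way sets the wall shear stress that the simulation applies as its rough-surface boundary condition.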

Abstract:

Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate materials by using heat, with or without a stream of cutting oxygen. Common processes are oxygen, plasma and laser cutting; which method is used depends on the application and the material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components. One design aim is to minimize the number of work steps in order to increase competitiveness. As a result, the holes and openings in plate parts manufactured today are made using thermal cutting methods. This is a problem from the fatigue life perspective, because the resulting local detail in the as-welded state causes a rise in stress in a local area of the plate. In cases where the static utilization of the net section is fully used, the calculated linear local stresses and stress ranges are often over 2 times the material yield strength, so the shakedown criteria are exceeded. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g., the hot-spot method, but these methods are not universally applied to flame-cut edges. Laboratory fatigue tests of flame-cut edges indicated that fatigue life estimates based on the standard nominal stress method can be quite conservative in cases where a high notch factor is present. This is an undesirable phenomenon, and it limits the potential for minimizing structure size and total costs. A new calculation method is introduced to improve the accuracy of theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived using the laboratory fatigue test results, which are published in this work. The proposed method, called the modified FAT method (FATmod), takes into account the residual stress state, surface quality, material strength class and true stress ratio at the critical location.
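
The nominal stress method referred to above reduces, in the common IIW-style convention, to a one-line S-N calculation. The sketch below uses that generic convention with an illustrative FAT value, not the modified FATmod equations derived in the thesis:

```python
def cycles_to_failure(delta_sigma, fat_class, m=3.0):
    """Nominal-stress S-N estimate: the FAT class is the stress range
    (MPa) endured for 2e6 cycles; m is the S-N slope exponent."""
    return 2.0e6 * (fat_class / delta_sigma) ** m

# illustrative flame-cut edge detail at a 200 MPa nominal stress range
print(f"{cycles_to_failure(200.0, 140.0):.3g} cycles")
```

A FATmod-style correction would then adjust this baseline for residual stress state, surface quality, material strength class and the true stress ratio.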

Abstract:

Very preterm birth is a risk factor for brain injury and abnormal neurodevelopment. While the incidence of cerebral palsy has decreased due to advances in perinatal and neonatal care, the rate of less severe neuromotor problems remains high in very prematurely born children. Neonatal brain imaging can aid in identifying children for closer follow-up and in providing parents with information on developmental risks. This thesis aimed to study the predictive value of structural brain magnetic resonance imaging (MRI) at term age, serial neonatal cranial ultrasound (cUS), and structured neurological examinations during longitudinal follow-up for the neurodevelopment of very preterm born children up to 11 years of age, as part of the PIPARI Study (The Development and Functioning of Very Low Birth Weight Infants from Infancy to School Age). A further aim was to describe the associations between regional brain volumes and the long-term neuromotor profile. The prospective follow-up comprised assessment of neurosensory development at 2 years of corrected age, cognitive development at 5 years of chronological age, and neuromotor development at 11 years of age. Neonatal brain imaging and structured neurological examinations predicted neurodevelopment at all age points. The combination of a neurological examination and brain MRI or cUS improved the predictive value of neonatal brain imaging alone. Decreased brain volumes were associated with neuromotor performance. At the age of 11 years, the majority of the very preterm born children had age-appropriate neuromotor development and took part in after-school sporting activities. Long-term clinical follow-up is recommended at least for all very preterm infants with major brain pathologies.