Abstract:
The jet-to-wire speed ratio expresses the relative speed of the headbox slice jet and the wire. It strongly affects the final properties of paper and board, such as formation and fibre orientation, and thereby the strength properties of the paper. It is therefore particularly important to know the true jet-to-wire speed ratio in paper and board manufacturing. The traditional method of determining the jet-to-wire speed ratio is based on the headbox total pressure. With this method, however, the true slice jet speed often remains unknown owing to possible miscalibration of the pressure gauge and inaccuracies in the calculation equation. For this reason, several real-time slice jet measurement methods have been developed. On-line determination of the slice jet speed makes it possible to find and maintain the optimal settings of the headbox parameters, which include the slice jet, the slice opening height profile, edge flows and the uniformity of the feed flow. On-line measurement of the slice jet speed also reveals other headbox problem areas, such as mechanical faults, which have traditionally been investigated with time-consuming end-product analyses of paper and board.
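As a rough illustration of the pressure-based calculation criticized above (a sketch added here, not part of the thesis), the slice jet speed is conventionally obtained from the total pressure with the Bernoulli relation v_jet = sqrt(2Δp/ρ); any error in the measured Δp, or in the discharge-coefficient correction omitted below, propagates directly into the jet-to-wire ratio:

```python
import math

def jet_speed_from_total_pressure(delta_p_pa: float, rho: float = 1000.0) -> float:
    """Slice jet speed (m/s) from headbox total pressure via Bernoulli:
    v_jet = sqrt(2 * delta_p / rho). Discharge-coefficient corrections
    used on real machines are omitted in this sketch."""
    return math.sqrt(2.0 * delta_p_pa / rho)

def jet_to_wire_ratio(delta_p_pa: float, wire_speed: float, rho: float = 1000.0) -> float:
    """Jet-to-wire speed ratio; values near 1.0 mean little rush or drag."""
    return jet_speed_from_total_pressure(delta_p_pa, rho) / wire_speed

# Example: 45 kPa total pressure, wire running at 9.5 m/s
print(jet_to_wire_ratio(45e3, 9.5))   # ~= 0.999
```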
Abstract:
The aim of this work was to implement a simulation model for studying the effects of the torque ripple caused by a permanent magnet synchronous machine on the mechanics connected to the electric motor. A further aim was to determine how such a simulation model can be implemented using modern simulation software. The correctness of the simulation results was verified with a verification test setup built for this work. The structure under study consisted of a shaft to which an eccentric rod was attached. A mass whose position could be varied was attached to the eccentric rod; by changing the position of the mass, different natural frequencies were obtained for the structure. The eccentric rod was modelled as flexible using the finite element method. The mechanics were modelled in the ADAMS dynamics simulation software, into which the flexible model of the eccentric rod was imported from the ANSYS finite element program. The mechanical model was then transferred to SIMULINK, where the electric drive representing the permanent magnet synchronous machine was also modelled. The equations of the permanent magnet synchronous machine are based on linear differential equations, to which the effect of the cogging torque has been added as a disturbance signal. The electric drive model produces the torque that is fed to the mechanics modelled in ADAMS, and the angular acceleration of the rotor is taken from the mechanical model as feedback to the electric motor model. This yields a combined simulation consisting of the electric drive and the mechanics. Based on the results, it can be concluded that combined simulation of electric drives and mechanics can be implemented with the chosen methods. The simulated results correspond well with the measured results.
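A minimal stand-in for the combined simulation described above (all parameters below are hypothetical, and a single lumped inertia replaces the ADAMS/ANSYS flexible mechanics; this is a sketch of the coupling idea, not the thesis implementation): the electrical side is a linear differential equation with the cogging torque added as a disturbance, and the mechanical state feeds back into it each step:

```python
import math

# Hypothetical drive and mechanics parameters (not from the thesis)
R, L, k_e, k_t = 0.5, 2e-3, 0.1, 0.1   # resistance, inductance, EMF/torque consts
J, b = 0.01, 0.002                     # lumped inertia (kg m^2), viscous friction
A_cog, N_cog = 0.15, 12                # cogging torque amplitude (Nm) and order
u, dt, t_end = 24.0, 1e-5, 0.5         # supply voltage, step size, simulated time

i = omega = theta = 0.0
for k in range(int(t_end / dt)):
    # Electrical side: linear current dynamics with cogging torque
    # superimposed as a disturbance signal
    i += dt * (u - R * i - k_e * omega) / L
    T_e = k_t * i + A_cog * math.sin(N_cog * theta)
    # Mechanical side: single-inertia stand-in for the ADAMS model;
    # omega and theta close the feedback loop to the equations above
    alpha = (T_e - b * omega) / J      # angular acceleration
    omega += dt * alpha
    theta += dt * omega
print(omega)  # rotor speed at the end of the run, with superimposed ripple
```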
Abstract:
Phlorotannins are the least studied group of tannins and are found only in brown algae. Hitherto the roles of phlorotannins, e.g. in plant-herbivore interactions, have been studied by quantifying the total contents of the soluble phlorotannins with a variety of methods. Little attention has been given either to quantitative variation in cell-wall-bound and exuded phlorotannins or to qualitative variation in individual compounds. A quantification procedure was developed to measure the amount of cell-wall-bound phlorotannins. The quantification of soluble phlorotannins was adjusted for both large- and small-scale samples and used to estimate the amounts of exuded phlorotannins, with bladder wrack (Fucus vesiculosus) as a model species. In addition, separation of individual soluble phlorotannins from the crude phenolic extract to produce a phlorotannin profile was achieved by high-performance liquid chromatography (HPLC). Along with these methodological studies, attention was focused on the factors in the procedure that generated variation in the yield of phlorotannins, the objective being to enhance the efficiency of the sample preparation procedure. To resolve the problem of rapid oxidation of phlorotannins in HPLC analyses, ascorbic acid was added to the extractant. The widely used colourimetric method was found to produce a variation in the yield that depended on the pH and concentration of the sample. Using these developed, adjusted and modified methods, the phenotypic plasticity of phlorotannins was studied with respect to nutrient availability and herbivory. An increase in nutrients decreased the total amount of soluble phlorotannins but did not affect the cell-wall-bound phlorotannins, the exudation of phlorotannins or the HPLC phlorotannin profile. The presence of the snail Theodoxus fluviatilis on the thallus induced production of soluble phlorotannins, and grazing by the herbivorous isopod Idotea baltica increased the exudation of phlorotannins. To study whether among-population variation in phlorotannin contents arises from genetic divergence, from the plastic response of the algae, or both, algae from separate populations were reared in a common garden. Genetic variation among local populations was found in both the phlorotannin profile and the content of total phlorotannins. Phlorotannins were also genetically variable within populations. This suggests that local algal populations have diverged in their contents of phlorotannins, and that they may respond to natural selection and evolve both quantitatively and qualitatively.
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When MCMC methods are used, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open a way for essentially easier use of the methodology; the lack of user-friendly computer programs has been a major obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued while working on several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
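To make the adaptation idea concrete, here is a minimal adaptive Metropolis sketch in the spirit of DRAM (Haario-style covariance adaptation only; the delayed-rejection stage and all tuning details of the actual DRAM method are omitted):

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=20000, adapt_start=1000, eps=1e-8):
    """Adaptive Metropolis sketch: the Gaussian proposal covariance is
    learned from the chain history as the simulation proceeds. This is
    the adaptation idea behind DRAM, without delayed rejection."""
    d = len(x0)
    sd = 2.38**2 / d                          # standard scaling factor
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, float), log_post(x0)
    cov = np.eye(d)
    for i in range(n_iter):
        if i >= adapt_start:                  # adapt proposal to past samples
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = np.random.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Example target: a strongly correlated 2-D Gaussian
P = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
chain = adaptive_metropolis(lambda x: -0.5 * x @ P @ x, np.zeros(2))
```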
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions; another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms, and propose an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
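The fast cross-validation idea mentioned above can be illustrated with the classical hat-matrix shortcut for regularized least squares (a generic sketch, not the thesis algorithm, which works in the kernel/dual setting): exact leave-one-out residuals are obtained from a single fit, with no refitting per held-out point.

```python
import numpy as np

def rls_loocv_residuals(X, y, lam):
    """Exact leave-one-out residuals for regularized least squares:
    r_i = (y_i - yhat_i) / (1 - H_ii), where
    H = X (X^T X + lam I)^{-1} X^T is the smoother ("hat") matrix."""
    n, d = X.shape
    G = X.T @ X + lam * np.eye(d)
    H = X @ np.linalg.solve(G, X.T)        # one factorization, all folds
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))

# Toy usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=50)
print(np.mean(rls_loocv_residuals(X, y, lam=1.0) ** 2))  # LOO error estimate
```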
Abstract:
Drying is a major step in the manufacturing process in the pharmaceutical industry, and the selection of a dryer and its operating conditions is sometimes a bottleneck. Despite the difficulties, these bottlenecks must be handled with utmost care because of good manufacturing practice (GMP) requirements and the industry's image in the global market. The purpose of this work is to study the use of existing knowledge for selecting a dryer and its operating conditions for drying pharmaceutical materials, with the help of methods such as case-based reasoning and decision trees, in order to reduce the time and expenditure of research. The work consisted of two major parts: a literature survey on the theories of spray drying, case-based reasoning and decision trees; and a working part comprising data acquisition and testing of the models on existing and upgraded data. Testing resulted in a combination of the two models, case-based reasoning and decision trees, which gave more specific results than conventional methods.
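For illustration, the retrieval step of case-based reasoning can be sketched as weighted nearest-neighbour matching over past drying cases (the attributes, weights and case base below are hypothetical, invented for this sketch, and are not taken from the thesis):

```python
from dataclasses import dataclass

@dataclass
class DryingCase:
    # Hypothetical attributes; the thesis's actual case features are not given
    moisture: float          # initial moisture content (%)
    heat_sensitivity: float  # 0 (robust) .. 1 (very sensitive)
    particle_size: float     # mean particle size (um)
    dryer: str               # dryer type used in the stored case

def similarity(query, case, weights=(0.4, 0.4, 0.2)):
    """Weighted similarity over normalized attribute differences."""
    diffs = (abs(query.moisture - case.moisture) / 100.0,
             abs(query.heat_sensitivity - case.heat_sensitivity),
             abs(query.particle_size - case.particle_size) / 1000.0)
    return 1.0 - sum(w * d for w, d in zip(weights, diffs))

def retrieve(query, case_base):
    """CBR retrieval step: return the most similar stored case."""
    return max(case_base, key=lambda c: similarity(query, c))

cases = [DryingCase(60, 0.8, 50, "spray dryer"),
         DryingCase(30, 0.2, 500, "fluidized bed dryer")]
query = DryingCase(55, 0.7, 80, dryer="?")
print(retrieve(query, cases).dryer)   # -> "spray dryer"
```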
Abstract:
Agile software development methods are currently in vogue, and many software development organizations have already implemented agile methods or are planning to do so. The objective of this thesis is to define how agile software development methods can be implemented in a small organization. The agile methods covered in this thesis are Scrum and XP. The key practices of both methods are analysed and compared to the waterfall method. The thesis also defines an implementation strategy and the actions by which agile methods are implemented in a small organization. In practice, the organization must prepare well, and all the necessary metrics must be defined before the implementation starts. Three sample projects in which agile methods were implemented are introduced in this work. Experiences from these projects were encouraging, although the sample set of projects was too small to yield trustworthy results.
Abstract:
In the highly volatile high-technology industry, it is of utmost importance to forecast customer demand accurately. However, statistical forecasting of sales, especially in the heavily competitive electronics product business, has always been a challenging task due to very high variation in demand and very short product life cycles. The purpose of this thesis is to validate whether statistical methods can be applied to forecasting sales of short life cycle electronics products, and to provide a feasible framework for implementing statistical forecasting in the environment of the case company. Two different approaches have been developed, one for short- and medium-term and one for long-term forecasting horizons. Both are based on decomposition models but differ in the interpretation of the model residuals: for long-term horizons the residuals are assumed to represent white noise, whereas for short- and medium-term horizons the residuals are modeled using statistical forecasting methods. Both approaches are implemented in Matlab. The modeling results have shown that different markets exhibit different demand patterns, and therefore different analytical approaches are appropriate for modeling demand in these markets. Moreover, the outcomes of the modeling imply that statistical forecasting cannot be handled separately from judgmental forecasting but should be perceived only as a basis for judgmental forecasting activities. Based on the modeling results, recommendations for further deployment of statistical methods in the sales forecasting of the case company are developed.
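The two-variant decomposition idea can be sketched as follows (in Python rather than the Matlab of the thesis, with an AR(1) residual model standing in for whatever residual models the thesis actually uses):

```python
import numpy as np

def decompose_and_forecast(y, period, horizon, model_residuals=True):
    """Decomposition forecast sketch: linear trend + seasonal means.
    Residuals are either carried forward by an AR(1) (short/medium-term
    variant) or treated as white noise and ignored (long-term variant)."""
    t = np.arange(len(y))
    b1, b0 = np.polyfit(t, y, 1)                 # linear trend
    detrended = y - (b0 + b1 * t)
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    resid = detrended - seasonal[t % period]
    t_f = np.arange(len(y), len(y) + horizon)
    forecast = b0 + b1 * t_f + seasonal[t_f % period]
    if model_residuals:                          # short/medium-term variant
        phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
        forecast += resid[-1] * phi ** np.arange(1, horizon + 1)
    return forecast

# Toy monthly series: trend + annual seasonality + noise
rng = np.random.default_rng(1)
y = 0.5 * np.arange(60) + 10 * np.sin(2 * np.pi * np.arange(60) / 12) \
    + rng.normal(0, 1, 60)
print(decompose_and_forecast(y, period=12, horizon=6))
```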
Abstract:
Seaports play an important part in the wellbeing of a nation. Many nations are highly dependent on foreign trade, and most trade is carried by sea vessels. This study is part of a larger research project in which a simulation model is required for further analyses of Finnish macro-logistical networks. The objective of this study is to create a system dynamics simulation model that gives an accurate forecast of the development of demand for Finnish seaports up to 2030. The emphasis of this study is on showing how a detailed harbor demand system dynamics model can be created with the help of statistical methods. The forecasting methods used were ARIMA (autoregressive integrated moving average) and regression models. The created simulation model gives a forecast with confidence intervals and allows different scenarios to be studied. The building process was found to be a useful one, and the built model can be expanded to be more detailed. The required capacity for other parts of the Finnish logistical system could easily be included in the model.
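As a small illustration of the ARIMA part (the series below is synthetic; the actual Finnish seaport data and model orders of the thesis are not reproduced), a point forecast with confidence intervals can be produced with statsmodels:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical yearly port-throughput series (million tonnes)
rng = np.random.default_rng(2)
throughput = 50 + 0.8 * np.arange(30) + rng.normal(0, 2, 30)

# ARIMA(1,1,1): one regular difference captures the trend in levels
res = ARIMA(throughput, order=(1, 1, 1)).fit()
fc = res.get_forecast(steps=10)           # forecast 10 years ahead
mean = fc.predicted_mean                  # point forecast
lower, upper = fc.conf_int(alpha=0.05).T  # 95 % confidence interval bounds
print(mean[-1], lower[-1], upper[-1])
```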
Abstract:
Construction of multiple sequence alignments is a fundamental task in bioinformatics. Multiple sequence alignments are used as a prerequisite in many bioinformatics methods, and consequently the quality of such methods can be critically dependent on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always provide biologically relevant alignments. Therefore, there is a need for an objective approach for evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In the profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position. This can increase the dimension of the parameter space and cause an overfitting problem. These two research problems are both related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments, and the residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the correct alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. Comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model was dramatically decreased, while the same level of accuracy was maintained. To conclude, we have shown that conservation can be successfully used in the statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
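One common way to compute a positional conservation score, shown here purely as an illustration (the thesis develops its own statistical measures, which are not reproduced here), is one minus the normalized Shannon entropy of an alignment column:

```python
import math
from collections import Counter

def positional_conservation(column, alphabet_size=20):
    """Positional conservation score for one alignment column:
    1 - normalized Shannon entropy, so 1.0 = fully conserved and
    0.0 = uniform over the amino acid alphabet. Gaps are ignored."""
    counts = Counter(c for c in column if c != '-')
    n = sum(counts.values())
    if n == 0:
        return 0.0          # all-gap column carries no information
    entropy = -sum((k / n) * math.log2(k / n) for k in counts.values())
    return 1.0 - entropy / math.log2(alphabet_size)

# Columns of a toy alignment: fully conserved vs. mixed
print(positional_conservation("LLLLL"))   # 1.0
print(positional_conservation("LIVMA"))   # ~0.46, weakly conserved
```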
Abstract:
Throughout history, indigo was derived from various plants, for example Dyer's Woad (Isatis tinctoria L.) in Europe. Synthetic dyes were developed in the 19th century, and nowadays indigo is mainly synthesized from by-products of fossil fuels. Indigo is a so-called vat dye, which means that it needs to be reduced to its water-soluble leuco form before dyeing. Most industrial reduction is currently performed chemically with sodium dithionite. However, this is considered environmentally unfavourable because its degradation products contaminate the waste waters, and there has therefore been interest in finding new ways to reduce indigo. Possible alternatives to dithionite as the reducing agent are biologically induced reduction and electrochemical reduction. Glucose and other reducing sugars have recently been suggested as environmentally friendly reducing agents for sulphur dyes, and there has also been interest in using glucose to reduce indigo. In spite of the development of several types of processes, very little is known about the mechanism and kinetics associated with the reduction of indigo. This study aims to investigate the reduction and electrochemical analysis methods of indigo and to give insight into its reduction mechanism. Anthraquinone and its derivative 1,8-dihydroxyanthraquinone were discovered to act as catalysts for the glucose-induced reduction of indigo. Anthraquinone introduces a strong catalytic effect, which is explained by invoking a molecular “wedge effect” during co-intercalation of Na+ and anthraquinone into the layered indigo crystal. The study also includes research on the extraction of plant-derived indigo from woad and an examination of the effect of this method on the yield and purity of indigo. Purity has conventionally been studied spectrophotometrically, and a new hydrodynamic electrode system is introduced in this study: a vibrating probe is used to follow electrochemically the formation of leuco-indigo with glucose as the reducing agent.
Abstract:
One of the primary goals of food packaging is to protect food against the harmful environment, especially oxygen and moisture. The gas transmission rate is the total gas transport through the package, both by permeation through the package material and by leakage through pinholes and cracks. The shelf life of a product can be extended if the food is stored in a gas-tight package; thus there is a need to test the gas tightness of packages. There are several tightness testing methods, and they can be broadly divided into destructive and nondestructive methods. One of the most sensitive ways to detect leaks is to use a nondestructive tracer gas technique. Carbon dioxide, helium and hydrogen are the most commonly used tracer gases. Hydrogen is the lightest and smallest of all gases, which allows it to escape rapidly from leak areas, and the low background concentration of H2 in air (0.5 ppm) enables sensitive leak detection. With a hydrogen leak detector it is also possible to locate leaks, which is not possible with many other tightness testing methods. The experimental work focused on investigating the factors that affect the measurement results obtained with the H2 leak detector. Reasons for false results were also sought, so that they could be avoided in subsequent measurements. From the results of these experiments, an appropriate measurement practice was established in order to obtain correct and repeatable results. The most important requirement for good measurement results is to keep the probe of the detector tightly against the leak. Because of its high diffusion rate, the H2 concentration decreases quickly if the probe is held further away from the leak area; the measured H2 leaks would then be too small and small leaks could remain undetected. In the experimental part, hydrogen, oxygen and water vapour transmissions through laser-drilled reference holes (diameters 1–100 μm) were also measured and compared. With the H2 leak detector it was possible to detect a leak even through a 1 μm (diameter) hole within a few seconds. Water vapour did not penetrate even the largest reference hole (100 μm), even under tropical conditions (38 °C, 90 % RH), whereas some O2 transmission occurred through reference holes larger than 5 μm. Thus water vapour transmission does not have a significant effect on food deterioration if the diameter of the leak is less than 100 μm, but small leaks (5–100 μm) are more harmful for food products that are sensitive to oxidation.