967 results for Point interpolation method


Relevance:

30.00%

Publisher:

Abstract:

A new method is used to estimate the sediment volumes of glacial valleys. The method is based on the concept of a sloping local base level and requires only a digital terrain model and the limits of the alluvial valleys as input data. The bedrock surface of the glacial valley is estimated by progressively excavating the digital elevation model (DEM) of the filled valley area. This is performed with an iterative routine that replaces the altitude of each DEM point by the mean value of its neighbors minus a fixed value; the result is a curved surface, quadratic in 2D. The bedrock surface of the Rhone Valley in Switzerland was estimated by this method using the free Shuttle Radar Topography Mission (SRTM) digital terrain model (~92 m resolution). The results are in good agreement with previous estimates based on seismic profiles and gravimetric modeling, with the exception of a few particular locations. The results of the present method and those of the seismic interpretation differ slightly from the results of the gravimetric data; this discrepancy may result from the presence of large buried landslides at the bottom of the Rhone Valley.
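
The iterative excavation step lends itself to a compact implementation. The sketch below is a minimal illustration of the idea, assuming a regular-grid DEM stored as a NumPy array and a boolean mask of the valley fill; the decrement value and iteration count are illustrative placeholders, not the values used in the study.

```python
import numpy as np

def excavate_bedrock(dem, valley_mask, decrement=0.5, n_iter=2000):
    """Estimate a bedrock surface by iteratively lowering valley-fill cells:
    each masked cell is replaced by the mean of its four neighbours minus a
    fixed decrement, while cells outside the mask stay pinned to the DEM."""
    z = dem.astype(float).copy()
    for _ in range(n_iter):
        # mean of the four direct neighbours (assumes the valley mask does
        # not touch the array border, so np.roll wrap-around is harmless)
        neigh_mean = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
        z[valley_mask] = neigh_mean[valley_mask] - decrement
    return z

# e.g. bedrock = excavate_bedrock(srtm_dem, alluvial_mask)
```

Because each valley cell is repeatedly set to the neighbor mean minus a constant, the iteration relaxes toward a surface of constant curvature, consistent with the quadratic 2D profile described above.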

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new nonparametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted from the positions of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on state-of-the-art targeting methods while reducing the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.

Relevance:

30.00%

Publisher:

Abstract:

The use of chemicals is a critical part of a proactive winter maintenance program. However, ensuring that the correct chemicals are used is a challenge. On the one hand, budgets are limited, so the price of chemicals is a major concern. On the other, the performance of chemicals, especially at lower pavement temperatures, is not always assured. Two chemicals used extensively by the Iowa Department of Transportation (Iowa DOT) are sodium chloride (salt) and calcium chloride. While calcium chloride can be effective at much lower temperatures than salt, it is also considerably more expensive. A gallon of salt brine typically costs in the range of $0.05 to $0.10, whereas calcium chloride brine may cost $1.00 or more per gallon. These costs are of course subject to market forces and will change from year to year.

The idea of mixing different winter maintenance chemicals is by no means new, and in general discussions it appears that many winter maintenance personnel have from time to time mixed up a jar of chemicals and done some work around the yard to see whether or not their new mix "works." There are many stories about the mixture turning to "mayonnaise" (or, more colorfully, to "snot"), suggesting that mixing chemicals may give rise to problems, most likely due to precipitation. Further, what constitutes a mixture "working" in this context is a topic of considerable discussion.

In this study, mixtures of salt brine and calcium chloride brine were examined to determine their ice melting capability and their freezing point. Using the results from these tests, a linear interpolation model of the ice melting capability of mixtures of the two brines was developed. Using a criterion based on the ability of the mixture to melt a certain thickness of ice or snow (expressed as a thickness of melt-water equivalent), the model was extended to develop a material cost per lane mile for the full range of possible mixtures as a function of temperature, allowing the performance of the various mixtures to be compared. From the point of view of melting capacity, mixing calcium chloride brine with salt brine appears to be effective only at very low temperatures (around 0 °F and below).

However, the approach described herein considers only material costs, not application costs or aspects of mixture performance other than melting capacity. While a unit quantity of calcium chloride is considerably more expensive than a unit quantity of sodium chloride, it also melts considerably more ice; to achieve the same result, much less calcium chloride brine is required than sodium chloride brine. This matters for application costs, because a single application vehicle (for example, a brine dispensing trailer towed behind a snowplow) can cover many more lane miles with calcium chloride brine than with salt brine before needing to refill. Calculating exactly how much could be saved in application costs requires an optimization of the routes used in applying liquids for anti-icing, which is beyond the scope of the current study, but this may be an area that agencies wish to pursue in future investigations. In discussions with winter maintenance personnel who use mixtures of sodium chloride and calcium chloride, one evident reason for doing so is that the mixture is much more persistent (i.e., it stays longer on the road surface) than straight salt brine. Operationally this persistence is very valuable, but at present there are no established methods for measuring the persistence of a chemical on a pavement.

In conclusion, the study presents a method that allows an agency to determine the material costs of using various mixtures of salt brine and calcium chloride brine. The method is based on the requirement to melt a certain quantity of snow or ice at the ice-pavement interface, and on how much of a chemical, or mixture of chemicals, is required to do so.
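
The cost comparison described above can be sketched in a few lines. The example below is a hedged illustration, not the report's actual model: the melting capacities and the required melt-water depth are made-up placeholder values (the prices echo the ranges quoted above), and the blend's capacity is interpolated linearly between the two pure brines, as the study describes.

```python
import numpy as np

def mix_melt_capacity(frac_cacl2, melt_salt, melt_cacl2):
    """Melting capacity of a blend, interpolated linearly between the pure
    brines (gallons of melt-water equivalent per gallon of brine)."""
    return (1.0 - frac_cacl2) * melt_salt + frac_cacl2 * melt_cacl2

def cost_per_lane_mile(frac_cacl2, melt_salt, melt_cacl2,
                       required_melt=35.0,   # gal melt-water per lane mile (assumed)
                       price_salt=0.08,      # $/gal salt brine (range quoted above)
                       price_cacl2=1.00):    # $/gal calcium chloride brine
    """Material cost of melting the required melt-water equivalent with a
    given blend fraction of calcium chloride brine at one temperature."""
    capacity = mix_melt_capacity(frac_cacl2, melt_salt, melt_cacl2)
    gallons_needed = required_melt / capacity
    blend_price = (1.0 - frac_cacl2) * price_salt + frac_cacl2 * price_cacl2
    return gallons_needed * blend_price

# Sweep blend fractions at one pavement temperature (capacities are made up):
fracs = np.linspace(0.0, 1.0, 11)
costs = cost_per_lane_mile(fracs, melt_salt=1.8, melt_cacl2=6.5)
```

Evaluating cost_per_lane_mile over the full range of blend fractions at each temperature of interest reproduces the kind of mixture-versus-cost comparison the report develops.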

Relevance:

30.00%

Publisher:

Abstract:

Résumé: This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can be considered in a broad sense as a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and detects the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data; the general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which it significantly outperformed all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used in many courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to create a user-friendly and easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to reveal the presence of spatial patterns, at least as described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, soil type and hydro-geological unit classification, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
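
The GRNN highlighted above has a particularly compact form: it is kernel (Nadaraya-Watson) regression with a single smoothing parameter. The sketch below is a minimal NumPy illustration of that idea, not the Machine Learning Office implementation; the function name and the isotropic Gaussian kernel are assumptions.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: a Gaussian-kernel weighted average of the training
    targets (Nadaraya-Watson regression); sigma is the single smoothing
    parameter, normally tuned by cross-validation."""
    # squared Euclidean distances between every query and training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # kernel weights, one row per query
    return (w @ y_train) / w.sum(axis=1)
```

For spatial mapping, X_train would hold the geographical coordinates of the measurements (plus any geo-features), so the same few lines cover both the low- and high-dimensional settings discussed above.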

Relevance:

30.00%

Publisher:

Abstract:

Global positioning systems (GPS) offer a cost-effective and efficient method for inputting and updating transportation data. The spatial locations of objects provided by GPS are easily integrated into geographic information systems (GIS), and the storage, manipulation, and analysis of spatial data are relatively simple in a GIS. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. For spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. To evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created: point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM, or between LRMs, are discussed and recommendations provided. The accuracy of GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three-dimensional or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM; recommendations include storing point features as spatial objects where necessary or preserving information such as coordinates and elevation. The lack of spatial accuracy characteristic of most cartography, on which LRMs are often based, is another topic discussed, with associated issues including linear and horizontal offset error. The final topic discussed is the issues involved in transferring point feature data between LRMs.
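
The core 2-D-to-1-D conversion discussed above, and the offset information it discards, can be illustrated with a small routine that snaps a GPS point to a route polyline and returns its linear measure. This is a generic geometric sketch under assumed names, not the Iowa DOT system.

```python
import numpy as np

def point_to_measure(point, polyline):
    """Snap a projected 2-D GPS point onto a route polyline; return the linear
    measure (distance along the route) and the perpendicular offset that the
    1-D representation discards.
    polyline: (n, 2) array of route vertices in the same projected CRS."""
    polyline = np.asarray(polyline, dtype=float)
    p = np.asarray(point, dtype=float)
    best_offset, best_measure, measure0 = np.inf, 0.0, 0.0
    for a, b in zip(polyline[:-1], polyline[1:]):
        seg = b - a
        seg_len = np.hypot(seg[0], seg[1])
        # parameter of the perpendicular foot, clamped to the segment
        t = np.clip(np.dot(p - a, seg) / seg_len**2, 0.0, 1.0)
        foot = a + t * seg
        offset = np.hypot(*(p - foot))
        if offset < best_offset:
            best_offset, best_measure = offset, measure0 + t * seg_len
        measure0 += seg_len
    return best_measure, best_offset
```

Note that a noisy point near an intersection can snap to the wrong segment, which is exactly the accuracy issue the report describes; keeping the returned offset (and the original coordinates and elevation) preserves the information lost in the conversion.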

Relevance:

30.00%

Publisher:

Abstract:

Companies are forced into various forms of cooperation in order to survive in increasingly tight competition. These cooperative relationships go by different names depending on the industry and on where in the supply chain they occur, but in principle they are all based on the same idea as Vendor Managed Inventory (VMI): information on inventory and demand is shared among the parties of the supply chain so that production, distribution and inventory management can be optimized. Vendor Managed Inventory is simple as an idea but demands a great deal to succeed. The basic assumption is that the supplier must be able to manage the customer's inventory better than the customer itself. This is not possible, however, without sufficient cooperation, the right kind of information, and suitable product characteristics. The purpose of this work is to present the critical success factors from the manufacturer's point of view when visibility into actual demand is poor and the products in question are, by their characteristics, poorly suited to the model. The suitability of the VMI model to the business of a mobile phone manufacturer, and its effect on customer cooperation, profitability and operational efficiency, is also examined.

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies speeding up the computation of disparity maps by interpolation. Using triangulation, a sparse disparity map is first formed from a stereo image pair, after which a disparity map covering the whole image is produced by interpolation. Triangulation requires knowing the image points in both cameras that correspond to the same real-world point. Even though the search region for corresponding points can be reduced from two dimensions to one by using, for example, epipolar geometry, it is computationally more efficient to determine part of the disparity map by interpolation than to search for corresponding image points in the stereo images. Also, owing to the distance between the cameras of a stereo vision system, not all points of one image can be found in the other, so it is impossible to determine a full-image disparity map from corresponding points alone. In this work, dynamic programming and a correlation method are used to find the corresponding points. Real-world surfaces are generally continuous, so in a geometric sense it is justified to approximate the surfaces depicted in the images by interpolation. There is also scientific evidence that human stereo vision interpolates the surfaces of objects.
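
As a rough illustration of the interpolation stage, the sketch below fills a dense disparity map from sparse matched points. It uses SciPy's generic griddata interpolator as a stand-in for whatever scheme the thesis actually used, so the function name and the linear/nearest combination are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_disparity(sparse_uv, sparse_disp, shape):
    """Interpolate a dense disparity map from sparse correspondences.
    sparse_uv: (n, 2) pixel coordinates (u, v) of matched points
    sparse_disp: (n,) disparities found by the correspondence search
    shape: (rows, cols) of the output map"""
    rows, cols = shape
    grid_u, grid_v = np.meshgrid(np.arange(cols), np.arange(rows))
    # piecewise-linear interpolation inside the convex hull of the matches
    dense = griddata(sparse_uv, sparse_disp, (grid_u, grid_v), method='linear')
    # nearest-neighbour fill where linear interpolation is undefined
    holes = np.isnan(dense)
    dense[holes] = griddata(sparse_uv, sparse_disp,
                            (grid_u[holes], grid_v[holes]), method='nearest')
    return dense
```

The linear patches between matches reflect the continuity assumption stated above: neighboring surface points are expected to have similar disparities.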

Relevance:

30.00%

Publisher:

Abstract:

Over the past decades, communication technologies have developed enormously. New networks, access technologies, protocols and terminal devices have been created at an ever-accelerating pace, with no signs of slowing down. Mobile applications in particular have grown their market share recently. Unlicensed Mobile Access (UMA) is a new access technology for mobile terminals that enables access to the GSM core network over WLAN or Bluetooth. This thesis focuses on the technologies related to UMA, which are examined more closely in the first chapters. The aim is to present what UMA means and how the different technologies can be applied in its implementations. Before new technologies can be applied commercially, they must be comprehensively tested. Different testing methods are applied to hardware and software testing, but the goal is the same: to reduce the unreliability of the product under test and to increase its quality. Although UMA mostly comprises existing technologies, it still introduces a new network element and two new communication protocols. Before any UMA-capable solutions can be brought to market, many different kinds of testing must be performed to ensure the correct functionality of the new product. Because this thesis deals with a new technology, a chapter is also devoted to general testing theory and methods. The chapter presents different perspectives on testing, and based on them a test software package is built. The goal is to create software that can be used to verify the operation of the UMA-RR protocol in the target environment.

Relevance:

30.00%

Publisher:

Abstract:

In this article we propose a novel method for calculating cardiac 3-D strain. The method requires the acquisition of myocardial short-axis (SA) slices only and produces the 3-D strain tensor at every point within every pair of slices. Three-dimensional displacement is calculated from SA slices using zHARP, and is then used to calculate the local displacement gradient and thus the local strain tensor. The method has three main advantages. First, the 3-D strain tensor is calculated for every pixel without interpolation, which is unprecedented in cardiac MR imaging. Second, the method is fast, in part because there is no need to acquire long-axis (LA) slices. Third, the method is accurate because the 3-D displacement components are acquired simultaneously, which reduces motion artifacts without the need for registration. This article presents the theory of computing 3-D strain from two slices using zHARP, the imaging protocol, and both phantom and in vivo validation.
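
The final step, going from a displacement field to a strain tensor, is standard continuum mechanics and can be sketched independently of zHARP. The code below computes a voxel-wise Green-Lagrange strain tensor from gridded displacement components; the function name, grid assumptions, and the choice of the Green-Lagrange measure are illustrative, since the abstract does not spell out these details.

```python
import numpy as np

def strain_tensor(ux, uy, uz, spacing=(1.0, 1.0, 1.0)):
    """Green-Lagrange strain from a 3-D displacement field sampled on a grid.
    ux, uy, uz: displacement components as 3-D arrays (one value per voxel)."""
    # displacement gradient G[..., i, j] = d u_i / d x_j at every voxel
    grads = [np.gradient(u, *spacing) for u in (ux, uy, uz)]
    G = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    F = G + np.eye(3)                  # deformation gradient F = I + G
    # E = 0.5 (F^T F - I), evaluated voxel-wise
    E = 0.5 * (np.einsum('...ki,...kj->...ij', F, F) - np.eye(3))
    return E
```

Because the displacement gradient is taken locally at every voxel, the strain tensor is obtained per pixel without interpolation, mirroring the first advantage claimed above.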

Relevance:

30.00%

Publisher:

Abstract:

The main goal of this paper is to propose a convergent finite volume method for a reaction–diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit formation of nonuniform spatial patterns. From the simulations it is also found that experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
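
To make the two-point flux idea concrete, here is a minimal explicit sketch for a generic 1-D reaction-diffusion pair with one cross-diffusion term. It is an illustration under assumed model functions, not the paper's scheme: the paper's system, its nonlinear positivity-preserving coefficient approximation, and its convergence framework are considerably more involved.

```python
import numpy as np

def two_point_fv_step(u, v, d_u, d_v, a_uv, dx, dt, f, g):
    """One explicit step of a two-point-flux finite volume scheme for
        u_t = (d_u u_x + a_uv(u, v) v_x)_x + f(u, v)
        v_t = (d_v v_x)_x + g(u, v)
    with zero-flux boundaries. Clipping a_uv at zero on the faces is a crude
    stand-in for the positivity-preserving approximation used in the paper."""
    def face_flux(w, d):
        return d * np.diff(w) / dx             # two-point flux on interior faces
    # cross-diffusion coefficient at face averages, kept nonnegative
    a_face = np.maximum(a_uv(0.5 * (u[:-1] + u[1:]),
                             0.5 * (v[:-1] + v[1:])), 0.0)
    flux_u = face_flux(u, d_u) + a_face * np.diff(v) / dx
    flux_v = face_flux(v, d_v)
    # divergence of the face fluxes, with zero flux padded at both boundaries
    div = lambda F: np.diff(np.concatenate(([0.0], F, [0.0]))) / dx
    u_new = u + dt * (div(flux_u) + f(u, v))
    v_new = v + dt * (div(flux_v) + g(u, v))
    return u_new, v_new
```

Iterating this step from a randomly perturbed steady state is the usual way to watch the nonuniform spatial patterns mentioned above emerge in simulations.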

Relevance:

30.00%

Publisher:

Abstract:

The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e., the optimisation of mechanism proportions at the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are (ia) limitations on the number of design specifications and (iia) failure to handle design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are (ib) the difficulty of choosing a proper initial linkage and (iib) the difficulty of finding more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed; the solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis, the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis, the problem of positive-dimensional solution sets is solved by adopting the main principles of the mathematical research area of algebraic geometry for solving parametric algebraic systems (parametric in the mathematical sense that all parameter values are considered, including the degenerate cases for which the system is solvable) of n equations in at least n+1 variables. Adopting the developed solution method to the dyadic equations, in direct polynomial form, for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved.
The positive-dimensional solution sets associated with the poles may contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process, which results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on the optimisation (with mathematical programming techniques, or optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, drawbacks (ia)-(iib) are eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with mechanical system simulation techniques.
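
For readers unfamiliar with dyadic equations, the exact-synthesis equations mentioned above can be illustrated with a textbook three-precision-point dyad solve in the complex plane. This sketch is a generic illustration, not the thesis's method: the angles and displacements are free inputs, and the singular cases of the 2x2 system are precisely where the degenerate, positive-dimensional solution sets discussed above arise.

```python
import numpy as np

def dyad_three_positions(delta2, delta3, alpha2, alpha3, beta2, beta3):
    """Standard-form dyad synthesis for three precision positions:
        W (e^{i beta_j} - 1) + Z (e^{i alpha_j} - 1) = delta_j,  j = 2, 3
    delta_j: complex displacements of the precision point from position 1
    alpha_j: prescribed coupler rotations; beta_j: free-choice crank rotations.
    Returns the complex dyad vectors W (crank) and Z (coupler side)."""
    A = np.array([[np.exp(1j * beta2) - 1.0, np.exp(1j * alpha2) - 1.0],
                  [np.exp(1j * beta3) - 1.0, np.exp(1j * alpha3) - 1.0]])
    b = np.array([delta2, delta3])
    # a singular A corresponds to a degenerate case where solutions form a
    # positive-dimensional set rather than an isolated point
    W, Z = np.linalg.solve(A, b)
    return W, Z
```

Sweeping the free choices beta2 and beta3 and plotting the resulting ground pivots is what produces the ground pivot map referred to above.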

Relevance:

30.00%

Publisher:

Abstract:

The objective of the thesis was to create a framework that can be used to define a manufacturing strategy, taking advantage of the product life cycle method, which enables PQP enhancements. The starting point was to study the synchronous implementation of cost leadership and differentiation strategies at different stages of the life cycle. It was soon observed that Porter's strategies were too generic for a complex and dynamic environment where customer needs deviate by market and product. Therefore, the strategy formulation process is based on Terry Hill's order-winner and qualifier concepts. The manufacturing strategy formulation is initiated with the definition of order-winning and qualifying criteria. From these criteria, product-specific proposals for action and production-site-specific key manufacturing tasks can be shaped to meet customer and market needs. As future research, it is suggested that the process of capturing order-winners and qualifiers be developed so that it is simple and streamlined at Wallac Oy. In addition, the defined strategy process should be integrated into PerkinElmer's SGS (Strategic Goal Setting) process, one of PerkinElmer's core management processes.

Relevance:

30.00%

Publisher:

Abstract:

Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement to other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations that enables uncertainty assessment.
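
The pattern-matching core of the direct sampling step described above can be sketched compactly. The code below is a simplified illustration under assumed names, not the study's algorithm: in particular, it ignores the spatially varying tomogram resolution and the traveltime-based accept/reject post-processing that the methodology adds on top.

```python
import numpy as np

def direct_sampling_value(ti, event_offsets, event_values,
                          threshold=0.1, max_scan=1000, rng=np.random):
    """Core of one direct-sampling MPS step: scan the training image (ti) at
    random locations until the local facies pattern matches the conditioning
    data event within a distance threshold, then return the value there.
    event_offsets: (k, 2) integer offsets of the k conditioning nodes
    event_values: (k,) facies codes observed/simulated at those offsets"""
    rows, cols = ti.shape
    best_val, best_dist = None, np.inf
    for _ in range(max_scan):
        r, c = rng.randint(rows), rng.randint(cols)
        rr, cc = r + event_offsets[:, 0], c + event_offsets[:, 1]
        if (rr < 0).any() or (rr >= rows).any() or (cc < 0).any() or (cc >= cols).any():
            continue                      # pattern falls outside the image
        dist = np.mean(ti[rr, cc] != event_values)  # fraction of mismatches
        if dist < best_dist:
            best_val, best_dist = ti[r, c], dist
        if dist <= threshold:
            break                         # good enough match found
    return best_val
```

Repeating this step along a random path over the simulation grid, with previously simulated nodes feeding the next data events, yields one conditional facies realization.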

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work was to describe and deploy a method for calculating the profitability of individual sawing batches at a sawmill, and to build a calculation model to support the method. After the basic concepts of sawing, the sawmill's production process is presented, described on the basis of literature and expert interviews. Next, the benefits and effects expected of the calculation method were surveyed. The theory of cost accounting was examined from literature sources specifically with this calculation method in mind. In addition, the calculation and information systems used at the Uimaharju sawmill and relevant to the calculation are presented. At present the sawmill has no method for calculating the result of an individual sawing batch. With small changes to the sawmill's information system and process machinery, a sawing batch can be carried through the process so that production data is assigned to it at every stage. Using the data from the different stages, the products the batch yielded and the production resources consumed in producing them can be determined accurately. Production and cost data are fed into the calculation model, which returns the economic result of the sawing batch. As a proposed course of action, further study of the automatic collection of production data is suggested, to eliminate manual work and errors. With relatively small investments, production data can be collected fully automatically for every sawing batch. In addition, the calculation model developed here should be replaced with an application that makes better use of the existing information systems and removes the manual step in the calculation.

Relevance:

30.00%

Publisher:

Abstract:

Background: TILLING (Targeting Induced Local Lesions IN Genomes) is a reverse genetic method that combines chemical mutagenesis with high-throughput genome-wide screening for point mutation detection in genes of interest. However, this mutation discovery approach faces a particular problem: how to obtain a mutant population with a sufficiently high mutation density. Furthermore, plant mutagenesis protocols require two successive generations (M1, M2) for mutation fixation before genotype analysis can begin. Results: Here, we describe a new TILLING approach for rice based on ethyl methanesulfonate (EMS) mutagenesis of mature seed-derived calli and direct screening of in vitro regenerated plants. A high mutagenesis rate was obtained (one mutation in every 451 kb) when plants were screened for two senescence-related genes. Screening was carried out on 2,400 individuals from a mutant population of 6,912, and seven sense-change mutations out of 15 point mutations were identified. Conclusions: This new strategy represents a significant advantage in terms of time savings (more than eight months), greenhouse space, and work during the generation of mutant plant populations. Furthermore, this effective chemical mutagenesis protocol ensures high mutagenesis rates while saving on waste removal costs and on the total amount of mutagen needed, thanks to the reduced mutagenesis volume.