112 results for Combines
Abstract:
The problem of understanding how humans perceive the quality of a reproduced image is of interest to researchers in many fields related to vision science and engineering: optics and material physics, image processing (compression and transfer), printing and media technology, and psychology. A measure of visual quality cannot be defined without ambiguity because it is ultimately the subjective opinion of an “end-user” observing the product. The purpose of this thesis is to devise computational methods to estimate the overall visual quality of prints, i.e. a numerical value that combines all the relevant attributes of the perceived image quality. The problem is limited to the perceived quality of printed photographs from the viewpoint of a consumer, and the study focuses only on digital printing methods, such as inkjet and electrophotography. The main contributions of this thesis are two novel methods for estimating the overall visual quality of prints. In the first method, the quality is computed as a visible difference between the reproduced image and the original digital (reference) image, which is assumed to have ideal quality. The second method utilises instrumental print quality measures, such as colour densities, measured from printed technical test fields, and connects the instrumental measures to the overall quality via subjective attributes, i.e. attributes that directly contribute to the perceived quality, using a Bayesian network. Both approaches were evaluated and verified with real data and shown to predict the subjective evaluation results well.
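The visible-difference idea described above can be illustrated with a small sketch: pool a per-pixel perceptual colour difference (here CIE ΔE*ab) between a scanned print and its digital reference into one number. The choice of ΔE*ab and the simple mean pooling are illustrative assumptions, not the thesis's actual metric or weighting.

```python
# Minimal sketch: pooling a per-pixel CIE Delta-E*ab difference between a
# scanned print and its digital reference. The Delta-E 1976 formula and
# mean pooling are illustrative assumptions, not the thesis's actual metric.
import numpy as np
from skimage import color, io

def visible_difference(reference_rgb: np.ndarray, print_rgb: np.ndarray) -> float:
    """Return a single quality score: mean Delta-E*ab over all pixels."""
    ref_lab = color.rgb2lab(reference_rgb)   # convert both images to CIELAB
    prn_lab = color.rgb2lab(print_rgb)
    delta_e = np.sqrt(((ref_lab - prn_lab) ** 2).sum(axis=-1))  # Delta-E 1976
    return float(delta_e.mean())             # lower = closer to the reference

# Usage (assumes registered, same-size float RGB images in [0, 1]):
# score = visible_difference(io.imread("reference.png") / 255.0,
#                            io.imread("scanned_print.png") / 255.0)
```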
Abstract:
This master's thesis was carried out as part of the Sinfonet research project, and its goal is to create a new operating model for making customer-driven product development more effective. The operating model was required to collect customer-need data throughout the in-use phase of the product life cycle, with a parallel process for analysing and exploiting the customer needs. The thesis combines the methods used for identifying, analysing and exploiting customer needs into a single systematic whole that Larox personnel can use effectively in their work. The model was drawn up on the basis of the frameworks identified in the theoretical part and of interviews conducted with Larox key personnel. It is absolutely essential that the use of the operating model, and the benefits to be gained from it, can be justified to the company's personnel. In the operating model, the AVAIN group interview and the semi-structured interview were chosen as the methods for identifying customer needs, and the interpretation table and QFD (Quality Function Deployment) as the analysis methods. The wiki-based Live system already in use at Larox was chosen as the database for documenting the needs. The new operating model and the processes it contains are described in detail in the thesis as a data flow diagram.
Abstract:
The aim of this thesis was to study network structures and modularity among biofuel heating system manufacturers in the Finnish bioenergy sector, drawing on the perspectives of numerous Finnish bioenergy specialists. The study is qualitative, as the research material was gathered through semi-structured theme interviews during May and June 2010. The research methodology combines the conceptual and the action-oriented approach. Networks, value nets, and modularity were studied from different perspectives. Three network and platform strategies were discovered and a general network structure was formed. Moreover, the benefits and disadvantages of networks and modularity among biofuel heating system manufacturers were illustrated. The analysis provides a comprehensive picture of the industry. The results of the research were constructed by putting existing theories into practice, and recommendations for the future were given to the biofuel heating system manufacturers. The results can be considered beneficial because the number of previous studies on the subject is relatively small. The reliability of the study is supported by the comprehensive coverage of the interviews.
Abstract:
The purpose of this master's thesis is to examine the role and position of procurement within the Päijät-Häme social and health care federation of municipalities as a whole, and the goals and challenges this creates for procurement expertise. The thesis seeks to determine what elements the organisation's procurement expertise consists of and what factors affect the organisation's procurement expertise needs. It also takes a stand on the expertise development needs arising from the roles assigned to procurement. The thesis is a case study combining quantitative and qualitative methods. The aim is to draw up, for the federation of municipalities, a proposal for developing procurement expertise by means of interviews and a survey and by reviewing the literature. On the basis of the theory, competence profiles were constructed for the procurement manager, the procurement secretaries and the OTO buyers (who handle purchasing alongside their main duties), against which the current level of competence was mirrored. The results showed that procurement expertise in the case organisation is currently at a good level. Regardless of the procurement profile, development was called for in business strategy, Finnish procurement legislation, contract law and the procurement process.
Abstract:
The human genome comprises roughly 20 000 protein-coding genes. Proteins are the building material of cells and tissues, and they are functional compounds with an important role in many cellular responses, such as cell signalling. In multicellular organisms such as humans, cells need to communicate with each other in order to maintain the normal function of the tissues within the body. This complex signalling between and within cells is transferred by proteins and their post-translational modifications, one of the most important being phosphorylation. The work presented here concerns the development and use of tools for phosphorylation analysis. Mass spectrometers have become essential tools for studying proteins and proteomes. In mass spectrometry oriented proteomics, proteins can be identified and their post-translational modifications can be studied. The objectives of this Ph.D. thesis were to improve the robustness of sample handling methods for peptides and their phosphorylation status prior to mass spectrometry analysis. The focus was on developing strategies that enable more MS measurements per sample, higher-quality MS spectra, and simplified and rapid enrichment procedures for phosphopeptides, and on applying these methods to characterize the phosphorylation sites of phosphopeptides. In these studies a new MALDI matrix was developed which gave more homogeneous, intense and durable signals than the traditional CHCA matrix. This new matrix, along with other matrices, was subsequently used to develop a new method that combines multiple spectra of identical peptides acquired from different matrices. With this approach it was possible to identify more phosphopeptides than with conventional LC/ESI-MS/MS methods, while using five times less sample. Also, a phosphopeptide-affinity MALDI target was prepared to capture and immobilise phosphopeptides from a standard peptide mixture while maintaining their spatial orientation. In addition, a new protocol utilizing commercially available conductive glass slides was developed that enabled fast and sensitive phosphopeptide purification. This protocol was applied to characterize the in vivo phosphorylation of a signalling protein, NFATc1. Evidence for 12 phosphorylation sites was found, and many of them were found in multiply phosphorylated peptides.
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm’s performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for the eye fundus image analysis algorithms needed in the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis, and it follows medical decision-making practice, providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases and the final algorithm are made public on the web to set baseline results for the automatic detection of diabetic retinopathy. Although it deviates from the general context of the thesis, a simple and effective optic disc localisation method is also presented; the optic disc localisation is discussed since normal eye fundus structures are fundamental in the characterisation of DR.
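As a rough illustration of the one-class, GMM-based lesion detection described above, the sketch below fits a Gaussian mixture to the colours of annotated lesion pixels only and thresholds the resulting likelihood on new pixels. The raw RGB features, the component count, and the percentile-based threshold are simplifying assumptions; the thesis additionally covers colour-space selection, illuminance and colour correction, and the fusion of several experts' annotations.

```python
# Minimal sketch of one-class classification via Gaussian mixture density
# estimation: fit a GMM to lesion-pixel colours, then flag new pixels whose
# log-likelihood under the model exceeds a threshold. The RGB features and
# percentile-based threshold are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_lesion_model(lesion_pixels: np.ndarray, n_components: int = 5) -> GaussianMixture:
    """lesion_pixels: (N, 3) array of colours sampled from expert annotations."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    return gmm.fit(lesion_pixels)

def detect(gmm: GaussianMixture, image: np.ndarray, threshold: float) -> np.ndarray:
    """Return a boolean lesion map for an (H, W, 3) image."""
    scores = gmm.score_samples(image.reshape(-1, 3))  # per-pixel log-likelihood
    return (scores > threshold).reshape(image.shape[:2])

# One way to pick the threshold: keep e.g. 95 % of the training lesion pixels.
# threshold = np.percentile(gmm.score_samples(lesion_pixels), 5)
```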
Abstract:
In the knowledge-intensive economy, effective knowledge transfer is part of a firm’s strategy for achieving competitive advantage in the market. Knowledge transfer relies on a variety of mechanisms, depending on the nature of the knowledge and the context. The topic has, however, received very little empirical study, and there is a research gap in the scientific literature. This study examined and analyzed external knowledge transfer mechanisms in service business, especially in the context of acquisitions. The aim was to find out what kinds of mechanisms were used when the buyer began to transfer knowledge, e.g. its own agendas and practices, to the purchased units. Another major research goal was to identify the critical factors that contributed to knowledge transfer through the different mechanisms. The study was conducted as a multiple-case study in a consultative service business company, in four of its business units acquired through acquisition in various parts of the country. The empirical part of the study was carried out as focus group interviews in each unit, and the data were analyzed using qualitative methods. The main findings were, first, nine different knowledge transfer mechanisms in a service business acquisition: the acquisition management team as an initiator, the unit manager as a translator, formal training, self-directed learning, rooming-in, IT systems implementation, customer relationship management, a codified database, and e-communication. The mechanisms used brought up several aspects, such as giving a face to the change, assurance that the right knowledge was received and correctly interpreted, an atmosphere of we-ness, and an orientation towards a more consultative touch with customers. The study pointed out seven critical factors that contributed to the different mechanisms: absorption, motivation, organizational learning, social interaction, trust, interpretation and time resources. The last two were new findings compared to previous studies. Each of the mechanisms and the related critical factors contributed in different ways to the activity in the different units after the acquisition. The role of knowledge management strategy was the most significant managerial contribution of the study: the phenomenon is not sufficiently recognized, although it is strongly present in knowledge-based companies. Recognizing it would help to develop a better understanding of business growth through acquisitions, especially in situations where two different knowledge strategies are combined in a new, common company.
Abstract:
In brief, the aim of this thesis is to support the process of selling the Life health store in Lappeenranta by describing the target company's strategy, success factors and attractiveness from the perspective of a potential buyer, so that these are easier for the buyer to grasp and absorb. The contents of the thesis can be used in the business sale process as such or adapted as needed. The thesis is a qualitative case study with features of the conceptual-analytical and action-analytical research approaches and, in part, of the nomothetic approach. The material was collected through semi-structured theme interviews. The target company's strategy is customer-oriented. The company's generic strategy depends on the competitive context, but in most cases its competitive advantage is based on differentiation. As a large specialist health store it can also exploit economies of scale. Its operations are focused on the field. Belonging to the Life health store chain and the excellent location of the premises have been important success factors. Extensive advertising, strong bargaining power over suppliers, product portfolio management, skilled personnel and favourable contract terms have also contributed to the company's success. The potential offered by Russian customers in Lappeenranta is considerable. The company's market position, like its financial position, is excellent, which benefits both the company and a prospective buyer.
Abstract:
The study examines modelling and measurement as part of business process improvement, and the representation of these with a suitable tool. First, a theoretical framework is presented for how processes can be measured and modelled. Then a development project carried out in practice is reported, with defined initial and target states. The success of the work is measured through interviews with managers, and the results obtained are compared with the theory. The study combined analytical model building, scientific problem solving and consulting, with the aim of producing a construction for the stated problem that suits the target organisation. The management interviews were analysed and a qualitative needs analysis was carried out. The interviews were supplemented with other collected material, and the accuracy of the analysis was improved by cross-comparing the different sources. Companies have both core processes vital to the business and support processes that sustain them. Their operation is based on pre-planned and reusable methods. Processes must be adapted to the company's architecture and continuously developed. Development can be carried out through large one-off changes, through continuous quality improvement, or through a combination of the two. Modelling and measurement play an important role in the development of business processes. In particular, they can facilitate process design by producing concrete models of, and metrics for, the processes. The implementation used a prototyping approach, and the success of the work was assessed by the company's managers. The results of the study are process models at different levels, created with different modelling techniques, and sets of metrics for measuring the company's productivity and efficiency.
Abstract:
The currently used forms of cancer therapy are associated with drug resistance and toxicity to healthy tissues. Thus, more efficient methods are needed for cancer-specific induction of growth arrest and programmed cell death, also known as apoptosis. Therapeutic forms of tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) are being investigated in clinical trials due to the capability of TRAIL to trigger apoptosis specifically in cancer cells by activation of cell surface death receptors. Many tumors, however, have acquired resistance to TRAIL-induced apoptosis, and sensitizing drugs for combinatorial treatments are therefore in high demand. This study demonstrates that lignans, natural polyphenols enriched in seeds and cereals, have a remarkable sensitizing effect on TRAIL-induced cell death at non-toxic lignan concentrations. In TRAIL-resistant and androgen-dependent prostate cancer cells we observe that lignans repress receptor tyrosine kinase (RTK) activity and downregulate cell survival signaling via the Akt pathway, which leads to increased TRAIL sensitivity. A structure-activity relationship analysis reveals that the γ-butyrolactone ring of the dibenzylbutyrolactone lignans is essential for the rapidly reversible TRAIL-sensitizing activity of these compounds. Furthermore, the lignan nortrachelogenin (NTG) is identified as the most efficient of the 27 tested lignans and norlignans in the sensitization of androgen-deprived prostate cancer cells to TRAIL-induced apoptosis. While this combinatorial anticancer approach may leave normal cells unharmed, several efficient cancer drugs are too toxic, insoluble or unstable to be used in systemic therapy. To enable the use of such drugs and to protect normal cells from cytotoxic effects, cancer-targeted drug delivery vehicles of nanometer scale have recently been generated. The newly developed nanoparticle system that we tested in vitro for cancer cell targeting combines the efficient drug-loading capacity of mesoporous silica with the versatile particle surface functionalization of hyperbranched poly(ethylene imine), PEI. The mesoporous hybrid silica nanoparticles (MSNs) were functionalized with folic acid to promote targeted internalization by folate-receptor-overexpressing cancer cells. The presented results demonstrate that the developed carrier system can be employed in vitro for cancer-selective delivery of adsorbed or covalently conjugated molecules and, furthermore, for selective induction of apoptotic cell death in folate receptor expressing cancer cells. The tested carrier system displays potential for simultaneous delivery of several anticancer agents specifically to cancer cells also in vivo.
Abstract:
Systems biology is a new, emerging and rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: comprehending the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potentials and limitations point to the difficulties and challenges encountered in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
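To make the kind of mathematical modelling mentioned above concrete, here is a deliberately generic sketch of a mass-action ODE model, the basic building block used in models of processes such as the heat shock response or filament self-assembly. The reaction, rate constants and initial conditions are invented for illustration and do not come from the thesis.

```python
# Generic mass-action kinetics sketch: a reversible binding reaction
# A + B <-> C, integrated with scipy. The species, rates, and initial
# state are illustrative placeholders, not the thesis's actual models.
import numpy as np
from scipy.integrate import solve_ivp

K_ON, K_OFF = 1.0, 0.1   # hypothetical forward/backward rate constants

def rhs(t, y):
    a, b, c = y
    v = K_ON * a * b - K_OFF * c          # net forward reaction rate
    return [-v, -v, v]                    # dA/dt, dB/dt, dC/dt

sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[1.0, 0.8, 0.0],
                t_eval=np.linspace(0.0, 50.0, 200))
print(sol.y[:, -1])  # approximate steady-state concentrations of A, B, C
```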
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
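For readers unfamiliar with MCMC, the sketch below shows a textbook random-walk Metropolis sampler for exploring a parameter posterior. It is a generic illustration with a Gaussian proposal and fixed step size, not the adaptive or computation-saving variants developed in the thesis.

```python
# Textbook random-walk Metropolis sampler for exploring a posterior
# p(theta | data) known up to a constant via its log-density. The Gaussian
# proposal and fixed step size are generic choices, not the thesis's methods.
import numpy as np

def metropolis(log_post, theta0, n_samples=10_000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:   # accept with prob min(1, ratio)
            theta, lp = proposal, lp_new
        chain[i] = theta                          # rejected moves repeat theta
    return chain

# Usage: sample a 2-D standard normal "posterior".
# chain = metropolis(lambda th: -0.5 * th @ th, theta0=[0.0, 0.0])
```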
Abstract:
The objective of the thesis is to enhance understanding of the evolution of convergence. Previous research has shown that the technological interfaces between distinct industries are one of the major sources of new radical cross-industry innovations. Although convergence in industry evolution has attracted substantial managerial interest, there is conceptual confusion within the field. Firstly, this study clarifies the convergence phenomenon and its impact on industry evolution. Secondly, it creates novel patent analysis methods for analyzing technological convergence and provides tools for anticipating the early stages of convergence. Overall, the study combines the industry evolution perspective and the convergence view of industrial evolution. The theoretical background consists of industry life cycle theories, technology evolution, and technological trajectories. The study links several important concepts in analyzing industry evolution: technological discontinuities, path dependency, technological interfaces as a source of industry transformation, and the evolutionary stages of convergence. Based on a review of the literature, a generic understanding of industry transformation and industrial dynamics was generated. In the convergence studies, the theoretical basis lies in the discussion of different convergence types and their impacts on industry evolution, and in anticipating and monitoring the stages of convergence. The study is divided into two parts: the first part gives a general overview, and the second part comprises eight research publications. The case study uses two historically very distinct industries, paper and electronics, as a test environment to evaluate the importance of emerging business sectors and technological convergence as sources of industry transformation. Both qualitative and quantitative research methodologies are utilized. The results reveal that technological convergence and complementary innovations from different fields have a significant effect on the formation of emerging new business sectors. The patent-based indicators used in the analysis of technological convergence can be applied to analyzing technology competition, capability and competence development, knowledge accumulation, knowledge spillovers, and technology-based industry transformation, and they can provide insights into the future competitive environment. The results and conclusions from the empirical part do not appear to conflict with real observations in the industry.
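The abstract does not detail the patent indicators themselves, but a common way to operationalize technological convergence in patent data is co-classification analysis: tracking the share of patents whose classification codes span both industries. The sketch below is a generic, hypothetical illustration of that idea; the IPC class sets and the record format are invented and should not be read as the thesis's actual indicators.

```python
# Generic co-classification convergence indicator sketch: the yearly share
# of patents whose IPC codes span both of two technology fields. The field
# definitions and patent records are hypothetical placeholders.
from collections import defaultdict

PAPER_IPC = {"D21H", "D21F"}          # hypothetical "paper" IPC subclasses
ELECTRONICS_IPC = {"H01L", "G06F"}    # hypothetical "electronics" subclasses

def convergence_share(patents):
    """patents: iterable of (year, set_of_ipc_subclasses) tuples."""
    totals, joint = defaultdict(int), defaultdict(int)
    for year, ipcs in patents:
        totals[year] += 1
        if ipcs & PAPER_IPC and ipcs & ELECTRONICS_IPC:
            joint[year] += 1          # this patent bridges both fields
    return {y: joint[y] / totals[y] for y in sorted(totals)}

# A rising share of field-bridging patents over time is one early signal
# of technological convergence between the two industries.
```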
Abstract:
Machine learning provides tools for the automated construction of predictive models in data intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how the techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, and Part II consists of the five original research articles that are the main contribution of this thesis.
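The pairwise regularized least-squares idea behind RankRLS can be illustrated with a naive linear sketch: regress score differences onto feature differences over all training pairs with a ridge penalty. The explicit pair enumeration shown here is O(n²) and is exactly what the matrix-algebra shortcuts mentioned above avoid, so this closed-form solve is a generic illustration of the loss, not the RankRLS implementation.

```python
# Naive linear pairwise least-squares ranking sketch: minimize
# sum over pairs ((w.xi - w.xj) - (yi - yj))^2 + lam * ||w||^2.
# Explicit pair enumeration is O(n^2); RankRLS avoids this via
# matrix-algebra shortcuts. Generic illustration only.
import numpy as np
from itertools import combinations

def fit_pairwise_ranker(X: np.ndarray, y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    pairs = list(combinations(range(len(y)), 2))
    dX = np.array([X[i] - X[j] for i, j in pairs])   # feature differences
    dy = np.array([y[i] - y[j] for i, j in pairs])   # relevance differences
    d = X.shape[1]
    # Ridge-regularized closed-form solution on the pair-difference data.
    w = np.linalg.solve(dX.T @ dX + lam * np.eye(d), dX.T @ dy)
    return w

# Ranking new objects: a higher score w @ x means ranked higher.
# order = np.argsort(-(X_new @ w))
```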
Abstract:
This bachelor’s thesis is part of a research project carried out in the summer of 2011 at Lappeenranta University of Technology. The goal of the project was to develop an automation concept for controlling an electrically excited synchronous motor. The thesis concentrates on the implementation of the automation concept in the ABB AC500 programmable logic environment. The automation program was developed as a state machine with ABB's PS501 Control Builder software. For controlling the automation program, a fieldbus control was developed, together with a local control panel created with the CodeSys Visualization Tool. The fieldbus control corresponds to the ABB drives communication profile, and the local control is implemented with a function block that feeds the correct control words into the state machine. The field current control of the synchronous motor is realized with a method presented in the doctoral thesis of Olli Pyrhönen (Pyrhönen 1998). The method combines stator-flux- and torque-based open-loop control with power-factor-based feedback control.
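A minimal sketch of the combined field-current control idea described above: an open-loop (feed-forward) term computed from the stator-flux and torque references, plus a PI correction driving the measured power factor toward its reference. The open-loop mapping and the PI gains below are hypothetical placeholders; the actual control law is the one derived in Pyrhönen (1998) and is not reproduced here.

```python
# Sketch of combined open-loop + power-factor-feedback field current control.
# The open-loop mapping f(flux, torque) and the PI gains are hypothetical
# placeholders; the real control law follows Pyrhönen (1998).
class FieldCurrentController:
    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def open_loop_term(self, flux_ref: float, torque_ref: float) -> float:
        # Placeholder mapping from stator-flux and torque references to a
        # feed-forward field current; the thesis uses a model-based law.
        return 1.0 * flux_ref + 0.5 * abs(torque_ref)

    def update(self, flux_ref, torque_ref, pf_ref, pf_meas) -> float:
        error = pf_ref - pf_meas                  # power factor error
        self.integral += error * self.dt          # integrate for the PI term
        feedback = self.kp * error + self.ki * self.integral
        return self.open_loop_term(flux_ref, torque_ref) + feedback

# ctrl = FieldCurrentController(kp=0.2, ki=0.05, dt=0.001)
# i_f = ctrl.update(flux_ref=1.0, torque_ref=0.8, pf_ref=1.0, pf_meas=0.93)
```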