73 results for Genetics Statistical methods


Relevance: 30.00%

Abstract:

The optimal design of a heat exchanger system is based on given model parameters together with given standard ranges for machine design variables. The goals were to minimize the Life Cycle Cost (LCC) function, which represents the price of the saved energy, and to maximize the momentary heat recovery output subject to the given constraints, while taking the uncertainty in the models into account. The Nondominated Sorting Genetic Algorithm II (NSGA-II) for the design optimization of such a system is presented and implemented in the Matlab environment. Markov chain Monte Carlo (MCMC) methods are also used to take the uncertainty in the models into account. The results show that the price of saved energy can be optimized. A wet heat exchanger is found to be more efficient and beneficial than a dry heat exchanger, even though its construction is more expensive (160 EUR/m2 versus 50 EUR/m2 for a dry heat exchanger). A longer lifetime favours higher CAPEX and lower OPEX, and vice versa; the effect of model uncertainty is identified in a simplified case of minimizing the area of a dry heat exchanger.
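
The core of NSGA-II referenced above is nondominated sorting, which repeatedly extracts Pareto-optimal fronts; a minimal sketch of the dominance test for a two-objective minimization problem (the points below are hypothetical, not from the thesis):

```python
def dominates(a, b):
    # a dominates b (minimization) if a is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    # First Pareto front: points not dominated by any other point.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (LCC, -heat recovery) objective pairs
pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0)]
print(nondominated_front(pts))  # -> [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

NSGA-II iterates this extraction to rank a whole population, then uses crowding distance within each front; the sketch covers only the dominance step.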

Relevance: 30.00%

Abstract:

Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL, OMIM #125310) is an inherited vascular disease. The main symptoms include migrainous headache, recurrent strokes and progressive cognitive impairment. CADASIL is caused by mutations in the NOTCH3 gene, which result in degeneration of vascular smooth muscle cells, arteriolar stenosis and impaired cerebral blood flow. The aims of this study were to assess the genetic background of Finnish and Swedish CADASIL patients, to analyse genetic and environmental factors that may influence the phenotype, and to identify the optimal diagnostic strategy. The majority of Finnish CADASIL patients carry the p.Arg133Cys mutation. Haplotype analysis of 18 families revealed a region of linkage disequilibrium around the NOTCH3 locus, which is evidence for a founder effect and a common ancestral mutation. Despite the same mutational background, the clinical course of CADASIL is highly variable between and even within families. The association of several genetic factors with the phenotypic variation was investigated in 120 CADASIL patients. Apolipoprotein E allele 4 was associated with earlier occurrence of strokes, especially in younger patients. Study of a pair of monozygotic twins with CADASIL revealed environmental factors that may influence the phenotype, namely smoking, statin medication and physical activity. Knowledge of these factors is useful, since lifestyle choices may influence disease progression. The clinical CADASIL diagnosis can be confirmed by detection of either the NOTCH3 mutation or granular osmiophilic material by electron microscopy in a skin biopsy, although the sensitivity estimates have been contradictory. Comparison of these two methods in a group of 131 diagnostic cases from Finland, Sweden and France demonstrated that both methods are highly sensitive and reliable.

Relevance: 30.00%

Abstract:

The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master’s thesis using synthetic data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and on a heat exchanger model. The two-step MCMC method worked well and reduced the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied; the accuracy used did not appear to have a notable effect on identifiability. The use of the posterior distribution of the parameters across different heat exchanger geometries was studied. It would be computationally most efficient to use the same posterior distribution across different geometries in the optimisation of heat exchanger networks. According to the results, this is possible when the frontal surface areas are the same across geometries. In the other cases the same posterior distribution can still be used for optimisation, but it yields a wider predictive distribution. For condensing surface heat exchangers the numerical stability of the simulation model was studied, and a stable algorithm was developed.

Relevance: 30.00%

Abstract:

Etiological diagnostics of community-acquired pneumonia in adult patients using rapid microbiological methods. Background. Pneumonia is a serious illness that affects approximately 60,000 adults in Finland every year. Although its treatment has improved, the disease still carries a significant mortality of 6-15%. Identifying the causative microbes of lower respiratory tract infections also remains challenging. Aims. The aim of this work was to study the etiology of pneumonia in adult patients treated at Turku University Central Hospital and to evaluate the usefulness of new rapid microbiological methods in identifying the causative agent. Materials. The material of Studies I and III consisted of 384 pneumonia patients treated on the infectious diseases ward of Turku University Central Hospital. In Study I, the causative microbes of pneumonia were investigated using, in addition to conventional methods, rapid methods based on antigen detection and PCR. Study II comprised a subgroup of 231 patients, whose pharyngeal swab samples were examined for rhinoviruses and enteroviruses. In Study III, the plasma C-reactive protein (CRP) concentration was measured during the first five days of hospitalization, and extensive statistical analyses were used to assess the value of CRP in evaluating disease severity and in predicting the development of complications. In Study IV, the expression of neutrophil surface receptors was determined from samples taken on admission from 68 pneumonia patients. Study V analysed the laboratory results of bronchoalveolar lavage (BAL) samples taken from pneumonia patients on internal medicine wards in 1996-2000. Results. A causative agent was identified in 209 patients, and a total of 230 causative microbes were found. Of these, 135 (58.7%) were detected by antigen detection or PCR methods, and the majority of those, 95 (70.4%), were detected by these rapid methods alone.
A respiratory virus was detected by antigen detection in 11.1% of the pneumonia patients; respiratory viruses were most frequent in patients with severe pneumonia (20.3%). In the subgroup of 231 pneumonia patients, a picornavirus was detected by PCR in 19 patients (8.2%). In this subgroup, a respiratory virus was found in a total of 47 patients (20%), of whom 17 (36%) had a concurrent bacterial infection. On admission, CRP levels were significantly higher in patients with severe pneumonia (PSI classes III-V) than in patients with mild pneumonia (PSI classes I-II) (p < 0.001). A CRP level above 100 mg/l four days after admission predicted a complication of pneumonia or a poor treatment response. Neutrophil complement receptor expression was significantly higher in patients with pneumococcal pneumonia than in patients with influenza pneumonia. Diagnostic bacterial growth in quantitative culture was found in only one of 71 BAL samples (1.3%), and even with the new methods a causative agent was identified in only 9.8% of the BAL samples. Conclusions. The new antigen detection and PCR methods allow the etiology of pneumonia to be established rapidly. Moreover, with these methods the causative microbe was identified in a considerably larger proportion of patients than with conventional methods alone. The usefulness of the rapid methods varied with disease severity. Respiratory viruses were found remarkably often in pneumonia patients, and the clinical picture of these patients was often severe. A high CRP level on admission can be used as an additional tool in assessing the severity of pneumonia; CRP is particularly useful in evaluating the treatment response and the risk of complications.
Measuring neutrophil complement receptor expression appears to be a promising rapid method for distinguishing bacterial from viral infections. In patients receiving antimicrobial treatment, the findings of BAL investigations were scarce and only rarely influenced treatment.

Relevance: 30.00%

Abstract:

Genetic diversity is one of the levels of biodiversity that the World Conservation Union (IUCN) has recognized as being important to preserve. This is because genetic diversity is fundamental to the future evolution and to the adaptive flexibility of a species to respond to the inherently dynamic nature of the natural world. Therefore, the key to maintaining biodiversity and healthy ecosystems is to identify, monitor and maintain locally-adapted populations, along with their unique gene pools, upon which future adaptation depends. Thus, conservation genetics deals with the genetic factors that affect extinction risk and the genetic management regimes required to minimize the risk. The conservation of exploited species, such as salmonid fishes, is particularly challenging due to the conflicts between different interest groups. In this thesis, I conduct a series of conservation genetic studies on primarily Finnish populations of two salmonid fish species (European grayling, Thymallus thymallus, and lake-run brown trout, Salmo trutta) which are popular recreational game fishes in Finland. The general aim of these studies was to apply and develop population genetic approaches to assist conservation and sustainable harvest of these populations. The approaches applied included: i) the characterization of population genetic structure at national and local scales; ii) the identification of management units and the prioritization of populations for conservation based on evolutionary forces shaping indigenous gene pools; iii) the detection of population declines and the testing of the assumptions underlying these tests; and iv) the evaluation of the contribution of natural populations to a mixed stock fishery. Based on microsatellite analyses, clear genetic structuring of exploited Finnish grayling and brown trout populations was detected at both national and local scales. 
Finnish grayling were clustered into three genetically distinct groups, corresponding to the northern, Baltic and south-eastern geographic areas of Finland. The genetic differentiation among and within population groups of grayling ranged from moderate to high levels. Such strong genetic structuring, combined with low genetic diversity, strongly indicates that genetic drift plays a major role in the evolution of grayling populations. Further analyses of European grayling covering the majority of the species’ distribution range indicated a strong global footprint of population decline. Using a coalescent approach, the beginning of the population reduction was dated back to 1 000-10 000 years ago (ca. 200-2 000 generations). Forward simulations demonstrated that bottleneck footprints measured using the M ratio can persist within small populations much longer than previously anticipated in the face of low levels of gene flow. In contrast to the M ratio, two alternative methods for genetic bottleneck detection identified recent bottlenecks in six grayling populations that warrant future monitoring. Consistent with the predominant role of random genetic drift, the effective population size (Ne) estimates of all grayling populations were very low, with the majority of Ne estimates below 50. Taken together, the highly structured local populations, limited gene flow and small Ne of grayling populations indicate that grayling populations are vulnerable to overexploitation; hence, monitoring and careful management using precautionary principles are required not only in Finland but throughout Europe. Population genetic analyses of lake-run brown trout populations in the Inari basin (northernmost Finland) revealed a hierarchical population structure in which individual populations were clustered into three population groups largely corresponding to different geographic regions of the basin.
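
The Garza-Williamson M ratio referred to above is simple to compute for a single microsatellite locus; a minimal sketch with hypothetical allele sizes (in repeat units), where low values signal a past bottleneck because drift removes alleles faster than it shrinks the allele-size range:

```python
def m_ratio(allele_sizes):
    # Garza-Williamson M ratio: number of distinct alleles divided by
    # the allele-size range (in repeat units) plus one.
    k = len(set(allele_sizes))
    r = max(allele_sizes) - min(allele_sizes)
    return k / (r + 1)

# hypothetical locus: 4 alleles over a size range of 6 -> 4/7, about 0.571
print(m_ratio([2, 3, 5, 8]))
```

In practice the statistic is averaged over many loci and compared against a critical value estimated from simulated equilibrium populations.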
As in my earlier work with European grayling, the genetic differentiation among and within population groups of lake-run brown trout was relatively high. Such strong differentiation indicated that the power to determine the relative contribution of populations in mixed fisheries should be relatively high. Consistent with these expectations, high accuracy and precision were observed in mixed stock analysis (MSA) simulations. Application of MSA to indigenous fish caught in the Inari basin identified altogether twelve populations that contributed significantly to the mixed stock fisheries, with the Ivalojoki river system being the major contributor (70%) to the total catch. When the contribution of wild trout populations to the fisheries was evaluated regionally, geographically nearby populations were the main contributors to the local catches. MSA also revealed a clear separation between the lower and upper reaches of the Ivalojoki river system: while the lower reaches of the Ivalojoki river contributed considerably to the catch, populations from the upper reaches of the river system (>140 km from the river mouth) did not contribute significantly to the fishery. This could be related to the available habitat size, but may also be associated with a resident-type life history and the increased cost of migration. The studies in my thesis highlight the importance of dense sampling and wide population coverage at the scale being studied, and also demonstrate the importance of critically evaluating the underlying assumptions of the population genetic models and methods used. These results have important implications for the conservation and sustainable fisheries management of Finnish populations of European grayling and of brown trout in the Inari basin.

Relevance: 30.00%

Abstract:

Systems biology is a new, emerging and rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike “traditional” biology, focuses on high-level concepts such as: network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology, marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the performed research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, as well as model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, choice of the proper modelling framework and level of abstraction, or the choice of the proper scope of the model run through this thesis.

Relevance: 30.00%

Abstract:

Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
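
As a minimal illustration of the MCMC machinery described above (a toy sketch, not the thesis code), a random-walk Metropolis sampler can be written in a few lines; the target here is a standard normal posterior, so the chain mean should approach zero:

```python
import math
import random

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=1):
    # Random-walk Metropolis: propose x' = x + N(0, step) and accept with
    # probability min(1, post(x') / post(x)), computed on the log scale.
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# toy log-posterior of a standard normal, deliberately started off-target
chain = metropolis(lambda t: -0.5 * t * t, x0=3.0)
print(sum(chain) / len(chain))
```

Real applications replace the toy log-posterior with a log-likelihood plus log-prior evaluated through the simulation model, which is where the computational cost discussed in the abstract arises.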

Relevance: 30.00%

Abstract:

Statistical analyses of measurements that can be described by statistical models are of the essence in astronomy and in scientific inquiry in general. The sensitivity of such analyses, modelling approaches and the consequent predictions is sometimes highly dependent on the exact techniques applied, and improvements therein can result in significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential in estimating the properties of the population of such planets, and in the race to detect Earth analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We also discuss these techniques and demonstrate their usefulness by using various examples and detailed descriptions of the respective mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to obtain as much information from the noisy data as possible. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
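
One elementary building block of the Bayesian hypothesis testing discussed above is converting (log) marginal likelihoods into posterior model probabilities; a minimal sketch under the assumption of equal prior model probabilities, with hypothetical log-evidence values:

```python
import math

def model_probabilities(log_evidences):
    # Posterior model probabilities from log marginal likelihoods,
    # assuming equal priors; subtract the maximum for numerical stability.
    m = max(log_evidences)
    w = [math.exp(l - m) for l in log_evidences]
    s = sum(w)
    return [x / s for x in w]

# toy comparison: a "planet" model with 3 log-units more evidence than
# a "no planet" model ends up with posterior probability of about 0.95
print(model_probabilities([-100.0, -103.0]))
```

In planet searches the marginal likelihoods themselves are the hard part and are typically estimated numerically, e.g. from MCMC samples.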

Relevance: 30.00%

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias caused by them in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Both measurement errors in spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last wave weights displayed the largest bias. Using all the available data, including the spells by attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazard model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
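
The IPCW method mentioned above reweights each subject by the inverse of its estimated probability of remaining uncensored, and the weighted estimator is then computed as usual. A minimal sketch of the weighted Kaplan-Meier mechanics (the times, event indicators and weights below are placeholders; with unit weights the ordinary estimator is recovered):

```python
def km_weighted(times, events, weights):
    # Weighted Kaplan-Meier: at each event time t, survival is multiplied
    # by 1 - (weighted events at t) / (weighted number at risk at t).
    s = 1.0
    curve = {}
    for t in sorted(set(ti for ti, e in zip(times, events) if e)):
        at_risk = sum(w for ti, w in zip(times, weights) if ti >= t)
        d = sum(w for ti, e, w in zip(times, events, weights) if e and ti == t)
        s *= 1.0 - d / at_risk
        curve[t] = s
    return curve

# unweighted case (all w = 1): survival falls to roughly 0.8, 0.6, 0.3
times = [2, 3, 3, 5, 8]
events = [1, 1, 0, 1, 0]
print(km_weighted(times, events, [1.0] * 5))
```

In an actual IPCW analysis the weights would come from a model of the censoring (attrition) process, e.g. inverse fitted probabilities from a logistic regression on covariates.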

Relevance: 30.00%

Abstract:

In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which uses cost drivers to allocate the costs of activities to cost objects. The selection of appropriate cost drivers is essential for allocating costs accurately and reliably, and thus for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as part of the case company's applied ABC project, using statistical research as the main research method, supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while at the same time increasing measurement costs, complexity and the effort of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capability towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
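
A simple regression check of a candidate cost driver, of the kind the study describes, can be sketched as follows; the weekly transport cost and delivery-drop figures below are hypothetical, not the company's data:

```python
def r_squared(x, y):
    # Coefficient of determination of a simple linear regression of
    # cost (y) on a candidate cost driver (x).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# hypothetical weekly data: a high R^2 (about 0.997 here) would support
# delivery drops as a cost driver for transport cost
drops = [12, 20, 28, 35, 41]
cost = [310, 480, 690, 820, 990]
print(r_squared(drops, cost))
```

In the study's setting the same check is repeated for each candidate driver (and combinations via multiple regression), alongside a practicality assessment of measurement cost.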

Relevance: 30.00%

Abstract:

Virtual environments and real-time simulators (VERS) are becoming more and more important tools in the research and development (R&D) process of non-road mobile machinery (NRMM). Virtual prototyping techniques enable faster and more cost-efficient development of machines compared to the use of real-life prototypes. High energy efficiency has become an important topic in the world of NRMM because of environmental and economic demands. The objective of this thesis is to develop VERS-based methods for the research and development of NRMM. A process using VERS for assessing the effects of human operators on the life-cycle efficiency of NRMM was developed. Human-in-the-loop simulations were run with an underground mining loader to study the developed process. The simulations were run in the virtual environment of the Laboratory of Intelligent Machines of Lappeenranta University of Technology. A physically adequate real-time simulation model of NRMM was shown to be reliable and cost-effective for testing hardware components by means of hardware-in-the-loop (HIL) simulations. A control interface connecting an integrated electro-hydraulic energy converter (IEHEC) with a virtual simulation model of a log crane was developed. The IEHEC consists of a hydraulic pump-motor and an integrated electrical permanent magnet synchronous motor-generator. The results show that state-of-the-art real-time NRMM simulators are capable of resolving factors related to the energy consumption and productivity of NRMM. A significant variation between the test drivers was found. The results show that VERS can be used for assessing human effects on the life-cycle efficiency of NRMM. HIL simulation responses, compared to those achieved with a conventional simulation method, demonstrate the advantages and drawbacks of various possible interfaces between the simulator and the hardware part of the system under study. Novel ideas for arranging the interface were successfully tested and compared with the more traditional one.
The proposed process for assessing the effects of operators on life-cycle efficiency will be applied to a wider group of operators in the future. The driving styles of the operators can then be analysed statistically from a sufficiently large result data set. Such statistical analysis can identify the most life-cycle-efficient driving style for a specific environment and machinery. The proposed control interface for HIL simulation needs further study: the robustness and adaptation of the interface in different situations must be verified. Future work will also include studying the suitability of the IEHEC for different working machines using the proposed HIL simulation method.

Relevance: 30.00%

Abstract:

The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering to study and process biological data. The need is also increasing for tools that can be used by the biological researchers themselves, who may not have a strong statistical or computational background, which requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second one examines cell lineage specification in mouse embryonic stem cells.

Relevance: 30.00%

Abstract:

Pairs trading is an algorithmic trading strategy based on the historical co-movement of two separate assets, with trades executed on the basis of the degree of relative mispricing. The purpose of this study is to explore a new, alternative copula-based method for pairs trading. The objective is to find out whether the copula method generates more trading opportunities and higher profits than the more traditional distance and cointegration methods applied extensively in previous empirical studies. The methods are compared by selecting the top five pairs from the stocks of large and medium-sized companies in the Finnish stock market. The research period covers the years 2006-2015. All the methods prove profitable and the Finnish stock market suitable for pairs trading. However, the copula method does not generate more trading opportunities or higher profits than the other methods. It seems that the limitations of the more traditional methods are not too restrictive for this particular sample data.
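
The traditional distance method mentioned above can be sketched in a few lines: normalise both price series, track their spread, and open a position when the spread deviates from its mean by a chosen number of standard deviations (the price series and threshold below are hypothetical, not the study's data):

```python
def distance_signal(p1, p2, threshold=2.0):
    # Distance method: normalise both price series to start at 1, track
    # the spread, and signal a trade when the spread strays more than
    # `threshold` standard deviations from its mean.
    n1 = [p / p1[0] for p in p1]
    n2 = [p / p2[0] for p in p2]
    spread = [a - b for a, b in zip(n1, n2)]
    mean = sum(spread) / len(spread)
    sd = (sum((s - mean) ** 2 for s in spread) / len(spread)) ** 0.5
    # +1: short asset 1 / long asset 2, -1: the opposite, 0: no position
    return [1 if s > mean + threshold * sd
            else -1 if s < mean - threshold * sd
            else 0
            for s in spread]

# a divergence at the fourth observation triggers a trade
sig = distance_signal([10, 10.2, 10.4, 11.5, 10.5],
                      [20, 20.4, 20.8, 21.0, 21.0], threshold=1.5)
print(sig)  # -> [0, 0, 0, 1, 0]
```

In practice the mean and standard deviation are estimated on a separate formation period rather than on the trading period itself; the sketch collapses the two for brevity.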