938 results for PM3 semi-empirical method


Relevance: 30.00%

Publisher:

Abstract:

There are many known examples of multiple semi-independent associations at individual loci; such associations might arise either because of true allelic heterogeneity or because of imperfect tagging of an unobserved causal variant. This phenomenon is of great importance in monogenic traits but has not yet been systematically investigated and quantified in complex-trait genome-wide association studies (GWASs). Here, we describe a multi-SNP association method that estimates the effect of loci harboring multiple association signals by using GWAS summary statistics. Applying the method to a large anthropometric GWAS meta-analysis (from the Genetic Investigation of Anthropometric Traits consortium study), we show that for height, body mass index (BMI), and waist-to-hip ratio (WHR), 3%, 2%, and 1%, respectively, of additional phenotypic variance can be explained on top of the previously reported 10% (height), 1.5% (BMI), and 1% (WHR). The method also permitted a substantial increase (by up to 50%) in the number of loci that replicate in a discovery-validation design. Specifically, we identified 74 loci at which the multi-SNP, a linear combination of SNPs, explains significantly more variance than does the best individual SNP. A detailed analysis of multi-SNPs shows that most of the additional variability explained is derived from SNPs that are not in linkage disequilibrium with the lead SNP, suggesting a major contribution of allelic heterogeneity to the missing heritability.
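As a rough illustration of the quantity being compared, the sketch below contrasts the variance explained by the best single SNP with that explained by a linear combination of SNPs at one locus. It assumes hypothetical marginal effect estimates and an LD correlation matrix, and uses a standard joint-fit approximation from summary statistics rather than the authors' exact estimator.

```python
import numpy as np

# Hypothetical summary statistics for one locus (illustration only).
beta_marginal = np.array([0.08, 0.05, 0.03])   # marginal SNP effects (standardized scale)
R = np.array([[1.0, 0.2, 0.1],                 # LD (correlation) matrix between the SNPs
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])

# Joint effects of the linear SNP combination, approximated from summary statistics.
beta_joint = np.linalg.solve(R, beta_marginal)

# Phenotypic variance explained by the best single SNP vs. the multi-SNP.
var_best_snp = float(np.max(beta_marginal) ** 2)
var_multi_snp = float(beta_joint @ R @ beta_joint)

print(f"best SNP: {var_best_snp:.4f}, multi-SNP: {var_multi_snp:.4f}")
```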

Relevance: 30.00%

Publisher:

Abstract:

Artifacts are present in most electroencephalography (EEG) recordings, making it difficult to interpret or analyze the data. In this paper, a cleaning procedure based on a multivariate extension of empirical mode decomposition is used to improve the quality of the data. This is achieved by applying the cleaning method to raw EEG data. Then, a synchrony measure is applied to the raw and the cleaned data in order to compare the improvement in the classification rate. Two classifiers are used: linear discriminant analysis and neural networks. In both cases, the classification rate improves by about 20%.
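Purely as an illustration of the evaluation step (not the authors' pipeline), the sketch below compares cross-validated linear discriminant analysis accuracy on two hypothetical feature sets standing in for the synchrony measures computed from raw and cleaned EEG.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)             # hypothetical trial labels
feats_raw = rng.normal(size=(200, 10))            # synchrony features from raw EEG (placeholder)
feats_clean = feats_raw + 0.5 * labels[:, None]   # features after artifact removal (placeholder)

lda = LinearDiscriminantAnalysis()
acc_raw = cross_val_score(lda, feats_raw, labels, cv=5).mean()
acc_clean = cross_val_score(lda, feats_clean, labels, cv=5).mean()
print(f"raw: {acc_raw:.2f}  cleaned: {acc_clean:.2f}")
```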

Relevance: 30.00%

Publisher:

Abstract:

Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations, methods of geostatistics such as the family of kriging estimators (Deutsch and Journel, 1997), machine learning algorithms such as artificial neural networks (ANN) of different architectures, hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996), etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if there is not a sufficient number of measurements, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, that can deal with small empirical datasets, and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modelling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
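As a minimal sketch of the SVR modelling step (using scikit-learn rather than the authors' implementation, and hypothetical coordinates and measurements instead of the soil-pollution case study), an RBF-kernel Support Vector Regression can be fitted on 2-D locations and then evaluated on a prediction grid:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(300, 2))               # hypothetical sampling locations (x, y)
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=300)   # hypothetical noisy measurements

# RBF-kernel SVR; C and epsilon control the trade-off between fit and robustness to noise.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(coords, values)

# Predict on a regular grid to produce the map.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
prediction_map = model.predict(grid).reshape(gx.shape)
```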

Relevance: 30.00%

Publisher:

Abstract:

Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take the within-subject correlation into account. Methods: For repeated-events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the performance of the methods on a real dataset from a cohort study on bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable under censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, even though they are well developed theoretically.
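As an illustration of the counting-process (Andersen-Gill) data layout, and not of the paper's simulation study, the sketch below fits an AG-type model with the lifelines package on hypothetical start-stop data; the WLW and PWP variants differ in their risk-set definitions and stratification by event number, which are not shown.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical recurrent-event data in counting-process (start, stop] format.
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 3],
    "start": [0, 30, 55, 0, 40, 0],
    "stop":  [30, 55, 90, 40, 80, 60],
    "event": [1, 1, 0, 1, 0, 0],     # 1 = recurrence observed, 0 = censored interval
    "trt":   [0, 0, 0, 1, 1, 1],     # hypothetical covariate
})

ag = CoxTimeVaryingFitter()
ag.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ag.print_summary()
```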

Relevance: 30.00%

Publisher:

Abstract:

The objective of this work was to propose a way of using Tocher's clustering method to obtain a matrix similar to the cophenetic matrix obtained for hierarchical methods, which would allow the calculation of a cophenetic correlation. To illustrate how the proposed cophenetic matrix is obtained, we used two dissimilarity matrices, one based on the generalized squared Mahalanobis distance and the other on the Euclidean distance, between 17 garlic cultivars, computed from six morphological characters. In essence, the proposal for obtaining the cophenetic matrix is to use the average distances within and between clusters after the clustering has been performed. A function in the R language was proposed to compute the cophenetic matrix for Tocher's method. The empirical distribution of this correlation coefficient was briefly studied. For both dissimilarity measures, the values of the cophenetic correlation obtained for Tocher's method were higher than those obtained with the hierarchical methods (Ward's algorithm and average linkage, UPGMA). Comparisons between the clusterings produced by the agglomerative hierarchical methods and by Tocher's method can thus be performed using a common criterion: the correlation between the matrices of original and cophenetic distances.
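The paper's implementation is an R function; purely as an illustration of the idea described above, the rough Python sketch below assumes cluster labels from Tocher's method are already available. The cophenetic-like entry for a pair of items is the average distance within their shared cluster or between their two clusters, and the cophenetic correlation is the Pearson correlation between the upper triangles of the two matrices.

```python
import numpy as np

def tocher_cophenetic(dist, labels):
    """Cophenetic-like matrix built from average within- and between-cluster distances."""
    labels = np.asarray(labels)
    n = len(labels)
    coph = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            a, b = labels == labels[i], labels == labels[j]
            block = dist[np.ix_(a, b)]
            if labels[i] == labels[j]:
                # average within-cluster distance (off-diagonal entries only)
                m = block[~np.eye(block.shape[0], dtype=bool)].mean()
            else:
                # average between-cluster distance
                m = block.mean()
            coph[i, j] = coph[j, i] = m
    return coph

def cophenetic_correlation(dist, coph):
    iu = np.triu_indices(len(dist), k=1)
    return float(np.corrcoef(dist[iu], coph[iu])[0, 1])
```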

Relevance: 30.00%

Publisher:

Abstract:

Peer-reviewed

Relevance: 30.00%

Publisher:

Abstract:

Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed in order to reach the right decisions already during a test run. The barrier to determining the boiler balance during test runs is the long turnaround of the chemical analyses of the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimising the test rig to get the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy located in Messukylä in Tampere. The calculation was implemented in the data management computer of the pilot plant's automation system. The calculation is made in the Microsoft Excel environment, which provides a good basis and functions for handling large databases and calculations without any delicate programming. The automation system of the pilot plant was reconstructed and updated by Metso Automation Oy during the year 2001, and the new MetsoDNA system has good data management properties, which are necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found. Either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance of the boiler is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system. A sensitivity analysis showed that the most essential values for an accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
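Purely as a toy illustration of the second balance option (unburned carbon loss estimated, fuel flow inferred from the heat balance), with hypothetical numbers that are not from the pilot plant:

```python
# Toy heat-balance bookkeeping: estimate the fuel flow from the measured heat output,
# an estimated unburned carbon loss and the fuel's lower heating value (numbers are hypothetical).
heat_output_kw = 4000.0          # measured boiler heat output [kW]
lhv_mj_per_kg = 18.5             # lower heating value of the fuel [MJ/kg]
unburned_carbon_loss = 0.02      # estimated fraction of fuel energy lost as unburned carbon
other_losses = 0.10              # assumed flue gas, radiation and other losses

efficiency = 1.0 - unburned_carbon_loss - other_losses
fuel_power_kw = heat_output_kw / efficiency
fuel_flow_kg_s = fuel_power_kw / (lhv_mj_per_kg * 1000.0)   # convert MJ/kg to kJ/kg

print(f"fuel power {fuel_power_kw:.0f} kW, fuel flow {fuel_flow_kg_s * 3600:.0f} kg/h")
```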

Relevance: 30.00%

Publisher:

Abstract:

The aim of this study is to anticipate the development of the digitalization of business processes by using the scenario method, one of the most widely used methods of futures research. The focus is in particular on future e-business solutions in the forest industry. The study examines the characteristics of the scenario method, the principles of scenario planning, and the suitability of the method for analysing technological and industry change. The theoretical part of the study examines the impact of technological change on the development of industries. It was found that technological change has a strong influence on industry change and that every industry follows a certain development trajectory. Companies need to be aware of the speed and direction of technological change and follow the rules of development of their industry. In the forest industry, the radical nature of the changes and the rapid development of ICT pose challenges in the field of digitalizing business processes. In the empirical part, three different scenarios were created for the future of e-business in the forest industry. The scenarios are mainly based on the current views of experts in the field, collected in a scenario workshop. Qualitative and quantitative elements were combined in constructing the scenarios. The three scenarios show that the future effects of e-business are seen mainly as positive, and that companies must develop actively and flexibly in order to exploit electronic solutions effectively in their business.

Relevance: 30.00%

Publisher:

Abstract:

The aim of this study is to examine whether family ownership, i.e. private ownership, is a more profitable form of ownership than institutional ownership, and whether firm age and size affect the performance of family firms. Drawing on earlier research, the study first reviews the characteristics commonly associated with family ownership and the performance of family firms compared with non-family firms. The empirical analysis of the effect of family ownership on firm profitability, and of the effect of firm age and size on the performance of family firms, is carried out with two samples consisting of non-listed Norwegian small and medium-sized enterprises (SMEs). Accordingly, a random sample and a main-industry sample, in which non-listed SMEs were randomly selected from the most important Norwegian industries, are analysed separately. The analysis is carried out using linear regression. Although the random sample does not indicate that family firms are more profitable than non-family firms, the main-industry sample shows that among non-listed SMEs family (i.e. private) ownership is a significantly more profitable form of ownership than institutional ownership. Young and small firms in particular account for the better profitability of family firms.
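As a hedged sketch of the kind of linear regression analysis described, with purely illustrative variable names and data (not the thesis' dataset):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data; variable names are illustrative, not the thesis' own.
firms = pd.DataFrame({
    "roa":       [0.08, 0.03, 0.11, 0.05, 0.09, 0.02],   # profitability measure
    "family":    [1, 0, 1, 0, 1, 0],                     # 1 = family (private) ownership
    "firm_age":  [5, 30, 8, 25, 12, 40],
    "firm_size": [2.1, 4.5, 1.8, 3.9, 2.7, 5.2],          # e.g. log of total assets
})

# Profitability regressed on the ownership dummy with age and size as controls.
model = smf.ols("roa ~ family + firm_age + firm_size", data=firms).fit()
print(model.summary())
```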

Relevance: 30.00%

Publisher:

Abstract:

The aim of this thesis is to identify which risk factors affect stock returns. The securities used are six portfolios sorted by market capitalization, and the sample period runs from the beginning of 1987 to the end of 2004. The models used are the capital asset pricing model, the arbitrage pricing theory, and the consumption-based capital asset pricing model. As risk factors for the first two models, market risk and macroeconomic risk factors are used. In the consumption-based model the focus is on estimating consumers' risk attitudes and the discount factor with which consumers value future consumption. This thesis presents the method of moments framework, with which both linear and nonlinear equations can be estimated, and applies it to the models tested. In summary, the market beta is still the most important risk factor, but we also find support for macroeconomic risk factors. The consumption-based model works fairly well, giving theoretically acceptable values.
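As a hedged sketch of this kind of moment-based estimation, the example below performs a just-identified method-of-moments fit of the consumption-based model's Euler equations on hypothetical data; the thesis' actual data, instruments and weighting are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
T = 216                                                    # e.g. monthly observations (illustrative)
cons_growth = 1.0 + 0.002 + 0.01 * rng.normal(size=T)      # hypothetical gross consumption growth
means = np.array([0.004, 0.007])                           # hypothetical mean excess of two portfolios
returns = 1.0 + means + 0.04 * rng.normal(size=(T, 2))     # hypothetical gross portfolio returns

def euler_moments(params):
    beta, gamma = params                                   # discount factor, relative risk aversion
    sdf = beta * cons_growth ** (-gamma)                   # stochastic discount factor
    # Euler equation moment conditions: E[sdf * R - 1] = 0 for each portfolio.
    return (sdf[:, None] * returns - 1.0).mean(axis=0)

fit = least_squares(euler_moments, x0=[0.99, 2.0])
print("discount factor, risk aversion:", fit.x)
```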

Relevance: 30.00%

Publisher:

Abstract:

The main objective of this thesis is to examine what added value a company gains from adopting the Balanced Scorecard and how the implementation process should be carried out so that the identified added value is realized. The theoretical starting point is Toivanen's implementation project model. The thesis focuses on the scorecard implementation process and the effects of the implementation. The main research method is an action-analytical case study, and the empirical material is collected through semi-structured theme interviews and participant observation. According to the results, the most significant added value the Balanced Scorecard brings to a company consists of more comprehensive and consistent management of the company, improved performance measurement and monitoring, and more effective deployment of strategy. The most critical factors for a successful implementation are management commitment, clarity of the vision and strategy, a clear decision to start the project, and the selection of measures and the setting of target levels for them.

Relevance: 30.00%

Publisher:

Abstract:

The objective of this study was to examine the boundaries of the firm from the perspective of extended transaction cost theory. The study was an empirical one covering five industries, with the aim of comparing the paper industry to the steel, chemical, ICT and energy industries. The material for the empirical part was collected through semi-structured theme interviews. The study showed that extended transaction cost theory is well suited to defining the boundaries of the firm. The explanatory power of static transaction cost theory is not sufficient, so a dynamic extension is necessary. The study also revealed that, compared with the other industries, the paper industry faces the greatest challenges in defining its efficient boundaries.

Relevance: 30.00%

Publisher:

Abstract:

This thesis investigates factors that affect software testing practice. The thesis consists of empirical studies in which the affecting factors were analyzed and interpreted using quantitative and qualitative methods. First, the Delphi method was used to specify the scope of the thesis. Secondly, for the quantitative analysis, 40 industry experts from 30 organizational units (OUs) were interviewed. The survey method was used to explore factors that affect software testing practice, and conclusions were derived using correlation and regression analysis. Thirdly, from these 30 OUs, five were further selected for an in-depth case study. The data was collected through 41 semi-structured interviews. The affecting factors and their relationships were interpreted with qualitative analysis using grounded theory as the research method. The practice of software testing was analyzed from the process improvement and knowledge management viewpoints. The qualitative and quantitative results were triangulated to increase the validity of the thesis. The results suggest that testing ought to be adjusted according to the business orientation of the OU; the business orientation affects the testing organization and the knowledge management strategy, and the business orientation and the knowledge management strategy affect outsourcing. As a special case, the complex relationship between testing schedules and knowledge transfer is discussed. The results of this thesis can be used for improving testing processes and knowledge management in software testing.

Relevance: 30.00%

Publisher:

Abstract:

Requirements-related issues have been found to be the third most important risk factor in software projects and the biggest reason for software project failures. This is not a surprise, since requirements engineering (RE) practices have been reported deficient in more than 75% of all enterprises. A problem analysis of small and low-maturity software organizations revealed two central reasons for not starting process improvement efforts: lack of resources and uncertainty about the payback of process improvement efforts. In the constructive part of the study, a basic RE method, BaRE, was developed to provide an easy-to-adopt way to introduce basic systematic RE practices in small and low-maturity organizations. Based on the diffusion of innovations literature, thirteen desirable characteristics were identified for the solution, and the method was implemented in five key components: a requirements document template, requirements development practices, requirements management practices, tool support for requirements management, and training. The empirical evaluation of the BaRE method was conducted in three industrial case studies. In this evaluation, two companies established a completely new RE infrastructure following the suggested practices, while the third company continued developing its requirements document template based on the provided template and used it extensively in practice. The real benefits of adopting the method became visible in the companies within four to six months from the start of the evaluation project, and the two small companies in the project completed their improvement efforts with an input of about one person-month. The data collected in the case studies indicates that the companies implemented the new practices with little adaptation and little effort. Thus, it can be concluded that the constructed BaRE method is indeed easy to adopt and can help introduce basic systematic RE practices in small organizations.

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.
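As a highly simplified sketch of one ingredient of such a simulation (a Metropolis acceptance rule for titration moves of surface groups in the ideal, non-interacting limit; explicit ions, electrostatics and the electroneutrality procedures discussed above are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameters for a surface of weak acid groups (ideal, non-interacting limit).
n_sites = 1000
pK = 4.9                                  # intrinsic dissociation constant of the groups
pH = 6.0
sites = np.zeros(n_sites, dtype=bool)     # False = protonated (neutral), True = dissociated (charged)

# Semi-grand canonical Metropolis sweep: attempt to switch the protonation state of a site.
# In the ideal limit, the reduced energy change of a titration move is +/- ln(10) * (pH - pK).
for _ in range(50 * n_sites):
    i = rng.integers(n_sites)
    d_beta_u = np.log(10.0) * (pK - pH) if not sites[i] else np.log(10.0) * (pH - pK)
    if rng.random() < np.exp(-d_beta_u):
        sites[i] = ~sites[i]

alpha = sites.mean()                      # degree of dissociation at this pH
print(f"simulated alpha = {alpha:.3f}, ideal value = {1 / (1 + 10 ** (pK - pH)):.3f}")
```

In this ideal limit the sampled degree of dissociation reproduces the Henderson-Hasselbalch curve; the interest of the full simulation lies in the deviations from it caused by electrostatic interactions, ion size and ionic strength.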