897 results for F-2 generation
Abstract:
The chapter discusses both the complementary factors and the contradictions of adopting ERP-based systems alongside enterprise 2.0. ERP is characterized as achieving efficient business performance by enabling a standardized business process design, but at a cost to flexibility in operations. It is claimed that enterprise 2.0 can support flexible business process management and so incorporate informal and less structured interactions. A traditional view, however, is that efficiency and flexibility are incompatible, since they are distinct business objectives pursued separately in different organizational environments. Thus an ERP system whose primary objective is to improve efficiency and an enterprise 2.0 system whose primary aim is to improve flexibility may represent a contradiction and carry a high risk of failure if adopted simultaneously. This chapter uses case study analysis to investigate the combined use of ERP and enterprise 2.0 in a single enterprise with the aim of improving both efficiency and flexibility in operations. The chapter provides an in-depth analysis of the combination of ERP with enterprise 2.0 based on socio-technical information systems management theory. It also summarizes the benefits of combining ERP systems and enterprise 2.0 and how they could contribute to the development of a new generation of business management that combines both formal and informal mechanisms. For example, the multiple sites or informal communities of an enterprise could collaborate efficiently on a common platform with a certain level of standardization, while retaining the flexibility to react in an agile way to internal and external events.
Dark soliton generation from semiconductor optical amplifier gain medium in ring fiber configuration
Abstract:
We have investigated mode-locked operation of a semiconductor optical amplifier (SOA) gain chip in a ring fibre configuration. At lower pump currents, the laser generates dark soliton pulses at the fundamental repetition rate of 39 MHz and supports harmonic operation up to the 6th order, corresponding to a 234-MHz repetition rate with an output power of ∼2.1 mW. At higher pump currents, the laser can be switched between the bright, dark, and concurrent bright-and-dark soliton generation regimes.
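For reference, the 234-MHz figure quoted above is simply the 6th harmonic of the fundamental cavity repetition rate:

\[ f_6 = 6\, f_{\mathrm{rep}} = 6 \times 39\ \mathrm{MHz} = 234\ \mathrm{MHz} \]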
Abstract:
An all-fiber cavity for the synchronous generation of conventional and Raman dissipative solitons in the telecom spectral range is designed. Through extensive numerical modelling we demonstrate a two-wavelength complex with 10 nJ energy and
Abstract:
A number of studies have shown that methanogens are active in the presence of sulfate under some conditions. This phenomenon is especially exemplified in carbonate sediments of the southern Australian continental margin. Three sites cored during Ocean Drilling Program (ODP) Leg 182 in the Great Australian Bight have high concentrations of microbially-generated methane and hydrogen sulfide throughout almost 500 m of sediments. In these cores, the sulfate-reducing and methanogenic zones overlap completely; that is, the usual sulfate-methane transition zone is absent. Amino acid racemization data show that the gassy sediments consist of younger carbonates than the low-gas sites. High concentrations of the reduced gases also occur in two ODP sites on the margin of the Bahamas platform, both of which have similar sedimentary conditions to those of the high-gas sites of Leg 182. Co-generation of these reduced gases results from an unusual combination of conditions, including: (1) a thick Quaternary sequence of iron-poor carbonate sediments, (2) a sub-seafloor brine, and (3) moderate amounts of organic carbon. The probable explanation for the co-generation of hydrogen sulfide and methane in all these sites, as well as in other reported environments, is that methanogens are utilizing non-competitive substrates to produce methane within the sulfate-reducing zone. Taken together, these results form the basis of a new model for sulfate reduction and methanogenesis in marine sediments. The biogeochemical end-members of the model are: (1) minimal sulfate reduction, (2) complete sulfate reduction followed by methanogenesis, and (3) overlapping sulfate reduction and methanogenesis with no transition zone.
Abstract:
INTRODUCTION: Acute myeloid leukemia (AML) is a heterogeneous clonal disorder often associated with dismal overall survival. The clinical diversity of AML is reflected in the range of recurrent somatic mutations in several genes, many of which have a prognostic and therapeutic value. Targeted next-generation sequencing (NGS) of these genes has the potential for translation into clinical practice. In order to assess this potential, an inter-laboratory evaluation of a commercially available AML gene panel across three diagnostic centres in the UK and Ireland was performed.
METHODS: DNA from six AML patient samples was distributed to each centre and processed using a standardised workflow, including a common sequencing platform, sequencing chips and bioinformatics pipeline. A duplicate sample in each centre was run to assess inter- and intra-laboratory performance.
RESULTS: An average sample read depth of 2725X (range 629-5600) was achieved using six samples per chip, with some variability observed in the depth of coverage generated for individual samples and between centres. A total of 16 somatic mutations were detected in the six AML samples, with a mean of 2.7 mutations per sample (range 1-4), representing nine genes on the panel. 15/16 mutations were identified by all three centres. Allelic frequencies of the mutations ranged from 5.6 to 53.3 % (median 44.4 %), with a high level of concordance between centres for the mutations detected.
CONCLUSION: In this inter-laboratory comparison, high concordance, reproducibility and robustness were demonstrated using a commercially available NGS AML gene panel and platform.
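As a minimal, hypothetical illustration of the kind of cross-centre comparison summarised above (the read counts and calls below are invented, not the study's data), variant allele frequency and the set of mutations called concordantly by all centres could be computed as:

```python
# Minimal sketch of a cross-centre variant comparison.
# All read counts and variant calls are hypothetical illustrations,
# not data from the study.

def allele_frequency(alt_reads: int, total_reads: int) -> float:
    """Variant allele frequency as a percentage of total read depth."""
    return 100.0 * alt_reads / total_reads

# Hypothetical calls from three centres for one sample: gene -> VAF (%)
centre_calls = {
    "centre_A": {"NPM1": 44.4, "FLT3": 5.6},
    "centre_B": {"NPM1": 44.1, "FLT3": 5.9},
    "centre_C": {"NPM1": 44.7},  # one call missed, as happened for 1/16 mutations
}

# Mutations called concordantly across all centres
shared = set.intersection(*(set(calls) for calls in centre_calls.values()))
print("Concordant mutations:", shared)
print("Example VAF:", round(allele_frequency(alt_reads=130, total_reads=2725), 1))
```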
Abstract:
Background
First generation migrants are reportedly at higher risk of mental ill-health compared to the settled population. This paper systematically reviews and synthesizes all reviews on the mental health of first generation migrants in order to appraise the risk factors for, and explain differences in, the mental health of this population.
Methods
Scientific databases were searched for systematic reviews (inception-November 2015) which provided quantitative data on the mental ill-health of first generation migrants and associated risk factors. Two reviewers screened titles, abstracts and full-text papers for their suitability against pre-specified criteria, and methodological quality was assessed.
Results
One thousand eight hundred and twenty articles were identified, of which eight met the inclusion criteria; all eight were of moderate or low quality. Depression was mostly higher in first generation migrants in general, and in refugees/asylum seekers when analysed separately. However, for both groups there was wide variation in prevalence rates, from 5 to 44 %, compared with prevalence rates of 8–12 % in the general population. Post-Traumatic Stress Disorder prevalence was higher for both first generation migrants in general and for refugees/asylum seekers compared with the settled majority. Post-Traumatic Stress Disorder prevalence in first generation migrants in general and in refugees/asylum seekers ranged from 9 to 36 % compared with reported prevalence rates of 1–2 % in the general population. Few studies presented anxiety prevalence rates in first generation migrants and there was wide variation in those that did; prevalence ranged from 4 to 40 % compared with reported prevalence of 5 % in the general population. Two reviews assessed psychotic disorder risk, reporting that psychotic disorders were two to three times more likely in adult first generation migrants. However, one review on the risk of schizophrenia in refugees reported similar prevalence rates (2 %) to estimates of prevalence among the settled majority (3 %). Risk factors for mental ill-health included low Gross National Product in the host country, downward social mobility, country of origin, and host country.
Conclusion
First generation migrants may be at increased risk of mental illness, and public health policy must account for this and for the factors that influence it. High quality research in the area is urgently needed, as is the use of culturally specific validated measurement tools for assessing migrant mental health.
Abstract:
Following the intrinsically linked balance sheets in his Capital Formation Life Cycle, Lukas M. Stahl explains with his Triple A Model of Accounting, Allocation and Accountability the stages of the Capital Formation process from FIAT to EXIT. Based on the theoretical foundations of legal risk laid by the International Bar Association with the help of Roger McCormick and legal scholars such as Joanna Benjamin, Matthew Whalley and Tobias Mahler, and founded on Wesley Hohfeld’s category theory of jural relations, Stahl develops his mutually exclusive Four Determinants of Legal Risk: Law, Lack of Right, Liability and Limitation. These Four Determinants of Legal Risk allow us to apply, assess, and precisely describe the respective legal risk at all stages of the Capital Formation Life Cycle, as demonstrated in case studies of nine industry verticals of the proposed and currently negotiated Transatlantic Trade and Investment Partnership between the United States of America and the European Union, TTIP, as well as in the case of the often-cited financing relation between the United States and the People’s Republic of China. Having established the Four Determinants of Legal Risk and their application to the Capital Formation Life Cycle, Stahl then explores the theoretical foundations of capital formation, their historical basis in classical and neo-classical economics and its forefathers such as the Austrians around Eugen von Boehm-Bawerk, Ludwig von Mises and Friedrich von Hayek and, most notably and controversially, Karl Marx, and their impact on today’s exponential expansion of capital formation. Starting with the first pillar of his Triple A Model, Accounting, Stahl explains the Three Factors of Capital Formation, Man, Machines and Money, and shows how “value-added” is created with respect to the non-monetary capital factors of human resources and industrial production. In a detailed analysis of the roles of the Three Actors of Monetary Capital Formation, Central Banks, Commercial Banks and Citizens, Stahl readily dismisses a number of myths regarding the creation of money, providing in-depth insight into the workings of monetary policy makers, their institutions and ultimate beneficiaries, the corporate and consumer citizens. In his second pillar, Allocation, Stahl continues his analysis of the balance sheets of the Capital Formation Life Cycle by discussing the role of the Five Key Accounts of Monetary Capital Formation, the Sovereign, Financial, Corporate, Private and International accounts of Monetary Capital Formation, and the associated legal risks in the allocation of capital pursuant to his Four Determinants of Legal Risk. In his third pillar, Accountability, Stahl discusses the ever-recurring Crisis-Reaction-Acceleration-Sequence-History, in short: CRASH, since the beginning of the millennium, starting with the dot-com crash at the turn of the millennium, followed seven years later by the financial crisis of 2008 and the dislocations in the global economy we are facing another seven years later, today in 2015, with several sordid debt restructurings under way and hundreds of thousands of refugees on the way, caused by war and increasing inequality.
Together with the regulatory reactions they have caused in the form of so-called landmark legislation such as the Sarbanes-Oxley Act of 2002, the Dodd-Frank Act of 2010, the JOBS Act of 2012 or the introduction of the Basel Accords, Basel II in 2004 and III in 2010, the European Financial Stability Facility of 2010, the European Stability Mechanism of 2012 and the European Banking Union of 2013, Stahl analyses the acceleration in size and scope of crises that appears to find often seemingly helpless bureaucratic responses, the inherent legal risks and the complete lack of accountability on the part of those responsible. Stahl argues that the order of the day requires addressing the root cause of the problems in the form of two fundamental design defects of our Global Economic Order, namely our monetary and judicial order. Inspired by a 1933 plan by nine University of Chicago economists to abolish the fractional reserve system, he proposes the introduction of Sovereign Money as a prerequisite to void misallocations by way of judicial order in the course of domestic and transnational insolvency proceedings, including the restructuring of sovereign debt, throughout the entire monetary system back to its origin without causing domino effects of banking collapses and failed financial institutions. Recognizing Austrian-American economist Schumpeter’s Concept of Creative Destruction as a process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one and incessantly creating a new one, Stahl responds to Schumpeter’s economic chemotherapy with his Concept of Equitable Default, mimicking an immunotherapy that strengthens the corpus economicus’ own immune system by providing for the judicial authority to terminate precisely those misallocations that have proven malignant and caused default, pursuing the century-old common law concept of equity that allows for the equitable reformation, rescission or restitution of contract by way of judicial order. Following a review of the proposed mechanisms of transnational dispute resolution and current court systems with transnational jurisdiction, Stahl advocates, as a first step towards completing the Capital Formation Life Cycle from FIAT, the creation of money by way of credit, to EXIT, the termination of money by way of judicial order, the institution of a Transatlantic Trade and Investment Court constituted by a panel of judges from the U.S. Court of International Trade and the European Court of Justice, following the model of the EFTA Court of the European Free Trade Association. Since his proposal was first made public in June 2014, after being discussed in academic circles since 2011, it and similar proposals have found numerous public supporters. Most notably, the former Vice President of the European Parliament, David Martin, tabled an amendment in June 2015 in the course of the negotiations on TTIP calling for an independent judicial body, and the Member of the European Commission, Cecilia Malmström, presented her proposal of an International Investment Court on September 16, 2015.
Stahl concludes that, for the first time in the history of our generation, there appears to be a real opportunity for reform of our Global Economic Order by curing the two fundamental design defects of our monetary order and judicial order: the abolition of the fractional reserve system and the introduction of Sovereign Money, and the institution of a democratically elected Transatlantic Trade and Investment Court that, commensurate with its jurisdiction extending to cases concerning the Transatlantic Trade and Investment Partnership, may complete the Capital Formation Life Cycle, resolving cases of default with the transnational judicial authority for terminal resolution of misallocations in a New Global Economic Order, without the ensuing dangers of systemic collapse, from FIAT to EXIT.
Abstract:
Because of its vast extent, the Canadian North presents several logistical challenges for the profitable exploitation of its mineral resources. Remote Predictive Mapping (TéléCartographie Prédictive, TCP) aims to facilitate the localization of ore deposits by producing maps of geological potential. Elevation data are required to generate these maps. However, the data currently available north of the 60th parallel are not optimal, mainly because they are derived from contour lines with variable equidistance and metre-level values. At the same time, it is essential to know the vertical accuracy of elevation data in order to use them adequately, taking into account the constraints related to that accuracy. The project presented here addresses these two issues in order to improve the quality of elevation data and to help refine the predictive mapping carried out by TCP in the Canadian North, for a study area located in the Northwest Territories. The first objective was to produce control points allowing a precise assessment of the vertical accuracy of elevation data. The second objective was to produce an improved elevation model for the study area. The thesis first presents a filtering method for Global Land and Surface Altimetry Data (GLA14) from the ICESat (Ice, Cloud and land Elevation Satellite) mission. The filtering is based on a series of indicators computed from information available in the GLA14 data and from terrain conditions. These indicators make it possible to eliminate potentially contaminated elevation points. Points are thus filtered according to the quality of the computed attitude, signal saturation, instrument noise, atmospheric conditions, slope and number of echoes. The document then describes a method for producing improved Digital Surface Models (DSMs) by stereo-radargrammetry (SRG) with Radarsat-2 (RS-2). The first part of the adopted methodology consists of stereo-restitution of DSMs from pairs of RS-2 images, without ground control points. The accuracy of the preliminary DSMs produced in this way is computed from the control points resulting from the filtering of the GLA14 data and analysed as a function of the combinations of incidence angles used for the stereo-restitution. Selections of preliminary DSMs are then assembled to produce five DSMs, each covering the entire study area. These DSMs are analysed to identify the optimal selection for the area of interest. The indicators selected for the filtering method were validated as effective and complementary, except for the indicator based on the signal-to-noise ratio, which was redundant with the gain-based indicator. Otherwise, each indicator filtered out points exclusively. The filtering method reduced the root-mean-square error on elevation by 19% when compared with the Canadian Digital Elevation Data (DNEC). Despite a 69% rejection rate after filtering, the initial density of the GLA14 data made it possible to maintain a homogeneous spatial distribution. From the 136 preliminary DSMs analysed, no combination of incidence angles of the acquired RS-2 images could be identified as ideal for SRG, owing to the large variability of the vertical accuracies.
However, the analysis indicated that the images should ideally be acquired at temperatures below 0°C to minimize radiometric disparities between scenes. The results also confirmed that slope is the main factor influencing the accuracy of DSMs produced by SRG. The best vertical accuracy, 4 m, was achieved by assembling configurations with the same look direction. On the other hand, opposite-look configurations, besides yielding an accuracy of the same order (5 m), reduced the number of images used by 30% with respect to the number of images initially acquired. Consequently, using opposite-look images could increase the efficiency of SRG projects by shortening the acquisition period. The elevation data produced could in turn help improve TCP results, increase the performance of the Canadian mining industry and, ultimately, improve the quality of life of the citizens of Canada's North.
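A simplified sketch of the indicator-based filtering described in this abstract is given below; the field names and thresholds are illustrative assumptions, not the thesis's actual parameters.

```python
# Sketch of indicator-based filtering of ICESat GLA14 elevation points,
# in the spirit of the method described above. Field names and thresholds
# are illustrative assumptions, not the thesis's actual values.
from dataclasses import dataclass

@dataclass
class Gla14Point:
    elevation_m: float
    attitude_quality_ok: bool   # quality of the computed attitude
    gain: int                   # receiver gain (proxy for saturation/noise)
    cloud_flag: int             # atmospheric conditions
    slope_deg: float            # local terrain slope
    n_echoes: int               # number of returned echoes

def keep(p: Gla14Point,
         max_gain: int = 100,
         max_cloud_flag: int = 1,
         max_slope_deg: float = 10.0) -> bool:
    """Apply each indicator; a point is rejected if any indicator fails."""
    return (p.attitude_quality_ok
            and p.gain <= max_gain
            and p.cloud_flag <= max_cloud_flag
            and p.slope_deg <= max_slope_deg
            and p.n_echoes == 1)

points = [
    Gla14Point(412.3, True, 50, 0, 3.2, 1),
    Gla14Point(398.7, True, 180, 0, 2.1, 1),   # rejected: gain too high
    Gla14Point(405.1, False, 40, 0, 1.0, 2),   # rejected: attitude + multiple echoes
]
control_points = [p for p in points if keep(p)]
print(f"kept {len(control_points)} of {len(points)} points")
```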
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Tsunamis occur quite frequently following large magnitude earthquakes along the Chilean coast. Most of these earthquakes occur along the Peru-Chile Trench, one of the most seismically active subduction zones in the world. This study aims to better understand the characteristics of the tsunamis triggered along the Peru-Chile Trench. We investigate the tsunamis induced by the Mw8.3 Illapel, the Mw8.2 Iquique and the Mw8.8 Maule Chilean earthquakes, which occurred on September 16th, 2015, April 1st, 2014 and February 27th, 2010, respectively. The study covers the relation between the co-seismic deformation and the tsunami generation, the near-field tsunami propagation, and the spectral analysis of the recorded tsunami signals in the near field. We compare the tsunami characteristics to highlight possible similarities between the three events and thereby attempt to distinguish the specific characteristics of tsunamis occurring along the Peru-Chile Trench. We find that these three earthquakes involve faults that extend substantially beneath the continent, which results in the generation of tsunamis with short wavelengths, relative to the fault widths involved, and with reduced initial potential energy. In addition, the presence of the Chilean continental margin, which includes the shelf of shallow bathymetry and the continental slope, constrains the tsunami propagation and the coastal impact. All these factors contribute to a concentrated local impact but can, on the other hand, reduce the far-field effects of tsunamis from earthquakes along the Peru-Chile Trench.
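For context, the initial potential energy mentioned above is commonly estimated from the co-seismic sea-surface displacement field; the expression below is a standard one, not taken from the abstract, with η the initial surface elevation, ρ the seawater density, g gravity and A the source area:

\[ E_p = \tfrac{1}{2}\,\rho\, g \iint_A \eta^2(x,y)\,\mathrm{d}A \]

Deformation located beneath land displaces little or no water and so contributes weakly to η, which is consistent with the reduced initial potential energy reported for these events.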
Abstract:
Decision-making is often dependent on uncertain data, e.g. data associated with confidence scores or probabilities. We present a comparison of different information presentations for uncertain data and, for the first time, measure their effects on human decision-making. We show that the use of Natural Language Generation (NLG) improves decision-making under uncertainty, compared to state-of-the-art graphical-based representation methods. In a task-based study with 442 adults, we found that presentations using NLG lead to 24% better decision-making on average than the graphical presentations, and to 44% better decision-making when NLG is combined with graphics. We also show that women achieve significantly better results when presented with NLG output (an 87% increase on average compared to graphical presentations).
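As a toy sketch of the contrast studied above, the snippet below maps a probability to a hedged verbal phrase; the thresholds and wording are illustrative assumptions, not the stimuli used in the study.

```python
# Toy illustration of presenting an uncertain value as natural language
# rather than as a bare number or chart. Thresholds and wording are
# illustrative assumptions, not the stimuli used in the study.

def verbalise_probability(p: float, event: str) -> str:
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability")
    if p >= 0.9:
        phrase = "is very likely"
    elif p >= 0.6:
        phrase = "is likely"
    elif p >= 0.4:
        phrase = "may occur"
    elif p >= 0.1:
        phrase = "is unlikely"
    else:
        phrase = "is very unlikely"
    return f"{event} {phrase} (estimated probability {p:.0%})."

print(verbalise_probability(0.25, "Heavy rain tomorrow"))
# -> "Heavy rain tomorrow is unlikely (estimated probability 25%)."
```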
Abstract:
Background: Bio-conjugated nanoparticles are important analytical tools with emerging biological and medical applications. In this context, in situ conjugation of nanoparticles with biomolecules via laser ablation in aqueous media is a highly promising one-step method for the production of functional nanoparticles, resulting in highly efficient conjugation. Increased yields are required, particularly considering the conjugation of cost-intensive biomolecules like RNA aptamers.
Results: Using a DNA aptamer directed against streptavidin, in situ conjugation results in nanoparticles with diameters of approximately 9 nm exhibiting a high aptamer surface density (98 aptamers per nanoparticle) and a maximal conjugation efficiency of 40.3%. We have demonstrated the functionality of the aptamer-conjugated nanoparticles using three independent analytical methods, including an agglomeration-based colorimetric assay and solid-phase assays, proving high aptamer activity. To demonstrate the general applicability of the in situ conjugation of gold nanoparticles with aptamers, we have transferred the method to an RNA aptamer directed against prostate-specific membrane antigen (PSMA). Successful detection of PSMA in human prostate cancer tissue was achieved utilizing tissue microarrays.
Conclusions: In comparison to the conventional generation of bio-conjugated gold nanoparticles using chemical synthesis and subsequent bio-functionalization, laser-ablation-based in situ conjugation is a rapid, one-step production method. Due to its high conjugation efficiency and productivity, in situ conjugation can easily be used for high-throughput generation of gold nanoparticles conjugated with valuable biomolecules like aptamers.
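As a back-of-the-envelope check using only the figures quoted above, and assuming spherical particles, the implied area available per aptamer can be computed as follows; this is an illustration, not the authors' own analysis.

```python
# Back-of-the-envelope check of the aptamer surface density quoted above,
# assuming spherical nanoparticles; an illustration, not the authors' own
# calculation.
import math

diameter_nm = 9.0
aptamers_per_particle = 98

surface_area_nm2 = 4.0 * math.pi * (diameter_nm / 2.0) ** 2   # ~254 nm^2
area_per_aptamer = surface_area_nm2 / aptamers_per_particle   # ~2.6 nm^2

print(f"surface area: {surface_area_nm2:.0f} nm^2")
print(f"area per aptamer: {area_per_aptamer:.1f} nm^2")
```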
Epidemiology and genetic architecture of blood pressure: a family based study of Generation Scotland
Abstract:
Hypertension is a major risk factor for cardiovascular disease and mortality, and a growing global public health concern, with up to one-third of the world’s population affected. Despite the vast amount of evidence for the benefits of blood pressure (BP) lowering accumulated to date, elevated BP is still the leading risk factor for disease and disability worldwide. It is well established that hypertension and BP are common complex traits, where multiple genetic and environmental factors contribute to BP variation. Furthermore, family and twin studies confirm the genetic component of BP, with heritability estimates in the range of 30-50%. Contemporary genomic tools enabling the genotyping of millions of genetic variants across the human genome in an efficient, reliable, and cost-effective manner have transformed hypertension genetics research. This has been accompanied by international consortia that have offered unprecedentedly large sample sizes for genome-wide association studies (GWASs). While GWASs for hypertension and BP have identified more than 60 loci, variants in these loci are associated with modest effects on BP and in aggregate explain less than 3% of the variance in BP. The aim of this thesis is to study the genetic and environmental factors that influence BP and hypertension traits in the Scottish population by performing several genetic epidemiological analyses. The first part of this thesis studies the burden of hypertension in the Scottish population and assesses the familial aggregation and heritability of BP and hypertension traits. The second part validates the association of common SNPs reported in large GWASs and estimates the variance explained by these variants. In this thesis, comprehensive genetic epidemiology analyses were performed on Generation Scotland: Scottish Family Health Study (GS:SFHS), one of the largest population-based family design studies. The availability of clinical and biological samples, self-reported information, and medical records for study participants has allowed several assessments to be performed to evaluate factors that influence BP variation in the Scottish population. Of the 20,753 subjects genotyped in the study, a total of 18,470 individuals (grouped into 7,025 extended families) passed the stringent quality control (QC) criteria and were available for all subsequent analyses. Based on the sources of BP-lowering treatment exposure, subjects were further classified into two groups: first, subjects with both a self-reported medications (SRMs) history and electronic-prescription records (EPRs; n = 12,347); second, all subjects with at least one medication history source (n = 18,470). In the first group, the analysis showed good concordance between SRMs and EPRs (kappa = 71%), indicating that SRMs can be used as a surrogate to assess exposure to BP-lowering medication in GS:SFHS participants. Although both sources suffer from some limitations, SRMs can be considered the best available source for estimating drug exposure history in those without EPRs. The prevalence of hypertension was 40.8%, with a higher prevalence in men (46.3%) compared to women (35.8%). The prevalence of awareness, treatment and controlled hypertension, as defined by the study definition, were 25.3%, 31.2%, and 54.3%, respectively.
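The SRM/EPR agreement quoted above (kappa = 71%) can be illustrated with a minimal Cohen's kappa computation; the 2x2 counts below are invented for illustration and are not GS:SFHS data.

```python
# Minimal Cohen's kappa for agreement between two exposure sources
# (self-reported medications vs. electronic prescriptions). The 2x2
# counts below are invented for illustration, not GS:SFHS data.

def cohens_kappa(both_yes: int, both_no: int, only_srm: int, only_epr: int) -> float:
    n = both_yes + both_no + only_srm + only_epr
    observed = (both_yes + both_no) / n
    p_yes_srm = (both_yes + only_srm) / n
    p_yes_epr = (both_yes + only_epr) / n
    expected = p_yes_srm * p_yes_epr + (1 - p_yes_srm) * (1 - p_yes_epr)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(both_yes=3000, both_no=8000, only_srm=700, only_epr=647):.2f}")
```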
These findings are lower than those of similar studies reported in other populations, with the exception of the prevalence of controlled hypertension, which can be considered better than in other populations. Odds of hypertension were higher in men, obese or overweight individuals, people with a parental history of hypertension, and those living in the most deprived areas of Scotland. On the other hand, deprivation was associated with higher odds of treatment, awareness and controlled hypertension, suggesting that people living in the most deprived areas may have been receiving better quality of care, or have higher comorbidity levels requiring greater engagement with doctors. These findings highlight the need for further work to improve hypertension management in Scotland. The family design of GS:SFHS has allowed family-based analyses to be performed to assess the familial aggregation and heritability of BP and hypertension traits. The familial correlation of BP traits ranged from 0.07 to 0.20 for parent-offspring pairs and from 0.18 to 0.34 for sibling pairs. A higher correlation of BP traits was observed among first-degree relatives than among other types of relative pairs. A variance-component model adjusted for sex, body mass index (BMI), age, and age-squared was used to estimate the heritability of BP traits, which ranged from 24% to 32%, with pulse pressure (PP) having the lowest estimates. The genetic correlations between BP traits showed a high correlation between systolic (SBP), diastolic (DBP) and mean arterial pressure (MAP) (G: 81% to 94%), but lower correlations with PP (G: 22% to 78%). The sibling recurrence risk ratios (λS) for hypertension and treatment were calculated as 1.60 and 2.04, respectively. These findings confirm the genetic component of BP traits in GS:SFHS and justify further work to investigate the genetic determinants of BP. Genetic variants reported in recent large GWASs of BP traits were selected for genotyping in GS:SFHS using a custom-designed TaqMan® OpenArray®. The genotyping plate included 44 single nucleotide polymorphisms (SNPs) previously reported to be associated with BP or hypertension at genome-wide significance level. A linear mixed model adjusted for age, age-squared, sex, and BMI was used to test for association between the genetic variants and BP traits. Of the 43 variants that passed QC, 11 showed statistically significant association with at least one BP trait. The phenotypic variance explained by these variants was 1.4%, 1.5%, 1.6%, and 0.8% for SBP, DBP, MAP, and PP, respectively. A genetic risk score (GRS) constructed from the selected variants showed a positive association with BP level and hypertension prevalence, with an average increase of one mmHg for each 0.80-unit increase in the GRS across the different BP traits. The impact of BP-lowering medication on genetic association studies of BP traits is well established, with the typical practice being to add a fixed value (i.e. 15/10 mmHg) to the measured BP values to adjust for BP treatment. Using the subset of participants with both treatment exposure sources (i.e. SRMs and EPRs), the influence of using either source to justify the addition of fixed values on the SNP association signals was analysed. BP phenotypes derived from EPRs were considered the true phenotypes, and those derived from SRMs were considered less accurate, with some phenotypic noise.
Comparing SNP association signals between the four BP traits in the two models derived from the different adjustments showed that MAP was the least affected by the phenotypic noise. This was indicated by the identification of the same overlapping significant SNPs in both models in the case of MAP, while the other BP traits showed some discrepancies between the two sources.
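A short sketch of two of the pre-analysis steps described above, namely the fixed 15/10 mmHg treatment adjustment and a simple allele-count genetic risk score, is given below; the values are hypothetical and the thesis's own GRS construction may differ.

```python
# Sketch of two pre-analysis steps described above: (1) adjusting measured
# BP for treatment by adding a fixed 15/10 mmHg to treated individuals,
# and (2) building a simple allele-count genetic risk score. Values are
# hypothetical; the thesis's own GRS construction may differ.

def adjust_for_treatment(sbp: float, dbp: float, treated: bool) -> tuple[float, float]:
    """Conventional fixed-value adjustment for BP-lowering treatment."""
    return (sbp + 15.0, dbp + 10.0) if treated else (sbp, dbp)

def allele_count_grs(genotypes: list[int]) -> int:
    """Unweighted GRS: sum of BP-raising allele counts (0, 1 or 2 per SNP)."""
    return sum(genotypes)

sbp_adj, dbp_adj = adjust_for_treatment(sbp=138.0, dbp=84.0, treated=True)
print(sbp_adj, dbp_adj)                    # 153.0 94.0
print(allele_count_grs([0, 1, 2, 1, 0]))   # 4
```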