979 results for Length-frequency analysis
Abstract:
INTRODUCTION: The antiretroviral drug efavirenz (EFV) is extensively metabolized into three primary metabolites: 8-hydroxy-EFV, 7-hydroxy-EFV and N-glucuronide-EFV. There is wide interindividual variability in EFV plasma exposure, explained to a great extent by cytochrome P450 2B6 (CYP2B6), the main isoenzyme responsible for EFV metabolism, which drives the major metabolic pathway (8-hydroxylation) and contributes to a lesser extent to 7-hydroxylation. When CYP2B6 function is impaired, the relevance of CYP2A6, the main isoenzyme responsible for 7-hydroxylation, may increase. We hypothesize that genetic variability in CYP2A6 may contribute to the particularly high, unexplained variability in EFV exposure in individuals with limited CYP2B6 function. METHODS: This study characterized CYP2A6 variation (14 alleles) in individuals (N=169) previously characterized for functional variants in CYP2B6 (18 alleles). Plasma concentrations of EFV and its primary metabolites (8-hydroxy-EFV, 7-hydroxy-EFV and N-glucuronide-EFV) were measured in vivo across these genetic backgrounds. RESULTS: The accessory CYP2A6 pathway plays a critical role in limiting drug accumulation in individuals characterized as CYP2B6 slow metabolizers. CONCLUSION: Dual CYP2B6 and CYP2A6 slow metabolism occurs at a significant frequency in various human populations, leading to extremely high EFV exposure.
Abstract:
General Introduction: This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal (like the Europe Agreements (EAs) or NAFTA) or not (such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent.

Part I: In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Celine Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: a Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs.
The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier: only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with their current sets of RoOs, could be a model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "Single List" RoOs (used since 1997). Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU's RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.

Part II: The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years after their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis.
First, using Poisson and negative binomial regressions, the count of AD-measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
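To make the survival-analysis step concrete, here is a minimal sketch in Python of a difference-in-differences Cox regression of the kind the chapter describes, assuming a hypothetical table of AD measures with columns `duration` (months in force), `revoked` (event indicator), `post_ada` (imposed after 1995) and `wto_target` (targeted country is a WTO member); the chapter's own specification additionally controls for imposing country, investigated country and sector.

```python
import pandas as pd
from lifelines import CoxPHFitter

ad = pd.read_csv("ad_measures.csv")  # hypothetical file and column names

# The difference-in-differences term: measures imposed post-agreement
# AND targeting WTO members, where the sunset-review discipline binds.
ad["post_x_wto"] = ad["post_ada"] * ad["wto_target"]

cph = CoxPHFitter()
cph.fit(ad[["duration", "revoked", "post_ada", "wto_target", "post_x_wto"]],
        duration_col="duration", event_col="revoked")
cph.print_summary()  # a positive coefficient on post_x_wto means a higher
                     # revocation hazard, i.e. shorter-lived measures
```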
Abstract:
Meta-analysis of genome-wide association studies (GWASs) has led to the discovery of many common variants associated with complex human diseases. There is a growing recognition that identifying "causal" rare variants also requires large-scale meta-analysis. The fact that association tests with rare variants are performed at the gene level rather than at the variant level poses unprecedented challenges for meta-analysis. First, different studies may adopt different gene-level tests, so the results are not compatible. Second, gene-level tests require multivariate statistics (i.e., components of the test statistic and their covariance matrix), which are difficult to obtain. To overcome these challenges, we propose to perform gene-level tests for rare variants by combining the results of single-variant analysis (i.e., p-values of association tests and effect estimates) from participating studies. This simple strategy is possible because of an insight that multivariate statistics can be recovered from single-variant statistics, together with the correlation matrix of the single-variant test statistics, which can be estimated from one of the participating studies or from a publicly available database. We show both theoretically and numerically that the proposed meta-analysis approach provides accurate control of the type I error and is as powerful as joint analysis of individual participant data. This approach accommodates any disease phenotype and any study design and produces all commonly used gene-level tests. An application to the GWAS summary results of the Genetic Investigation of ANthropometric Traits (GIANT) consortium reveals rare and low-frequency variants associated with human height. The relevant software is freely available.
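The core insight, that a gene-level statistic can be rebuilt from single-variant summary results plus a correlation matrix, can be sketched in a few lines of Python. The sketch below covers only the simplest case, a weighted burden test; the function and variable names are ours, not the authors' software.

```python
import numpy as np
from scipy.stats import norm

def gene_level_burden(p, beta, R, w=None):
    """Gene-level burden p-value from single-variant p-values `p`, effect
    estimates `beta`, and correlation matrix `R` of the variant statistics."""
    p, beta = np.asarray(p), np.asarray(beta)
    z = norm.isf(p / 2.0) * np.sign(beta)   # signed z-score per variant
    w = np.ones_like(z) if w is None else np.asarray(w)
    t = (w @ z) / np.sqrt(w @ R @ w)        # ~ N(0,1) under the null
    return 2.0 * norm.sf(abs(t))            # gene-level p-value

# Three rare variants with mildly correlated test statistics (toy numbers):
R = np.array([[1.0, 0.1, 0.0], [0.1, 1.0, 0.2], [0.0, 0.2, 1.0]])
print(gene_level_burden([0.04, 0.2, 0.01], [0.5, 0.3, 0.8], R))
```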
Abstract:
The molecular diagnosis of retinal dystrophies (RD) is difficult because of their genetic and clinical heterogeneity. Previously, genes were screened one by one, sometimes in a scheme based on the frequency of sequence variants and the number of exons/length of the candidate genes. Payment for these procedures was complicated, and the sequential billing of several genes created endless paperwork. We therefore evaluated the costs of generating and sequencing a hybridization-based DNA library enriched for the 64 genes most frequently mutated in RD, called IROme, and compared them to the costs of amplifying and sequencing these genes by the Sanger method. The production cost generated by high-throughput (HT) sequencing of IROme was established at CHF 2,875.75 per case. Sanger sequencing of the same exons cost CHF 69,399.02. Turnaround time of the analysis was 3 days for IROme. For Sanger sequencing, it could only be estimated, as we never sequenced all 64 genes in one single patient. The sale price for IROme, calculated on the basis of the sale price of one exon by Sanger sequencing, is CHF 8,445.88, which corresponds to the sale price of 40 exons. In conclusion, IROme is cheaper and faster than Sanger sequencing and therefore represents a sound approach to the diagnosis of RD, both scientifically and economically. As a drop in the costs of HT sequencing is anticipated, targeted resequencing might become the new gold standard in the molecular diagnosis of RD.
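As a quick back-of-the-envelope check of these figures (the cost numbers are from the abstract; the ratio and per-exon price are derived here, not stated there):

```python
# Production costs per case for the 64-gene panel, in CHF (from the abstract).
iro_cost, sanger_cost = 2875.75, 69399.02
print(f"Sanger / IROme cost ratio: {sanger_cost / iro_cost:.1f}x")  # ~24.1x

# The IROme sale price equals the Sanger sale price of 40 exons, implying:
print(f"Implied Sanger sale price per exon: CHF {8445.88 / 40:.2f}")  # 211.15
```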
Abstract:
Correspondence analysis, when used to visualize relationships in a table of counts (for example, abundance data in ecology), has been frequently criticized as being too sensitive to objects (for example, species) that occur with very low frequency or in very few samples. In this statistical report we show that this criticism is generally unfounded. We demonstrate this in several data sets by calculating the actual contributions of rare objects to the results of correspondence analysis and canonical correspondence analysis, both to the determination of the principal axes and to the chi-square distance. It is a fact that rare objects are often positioned as outliers in correspondence analysis maps, which gives the impression that they are highly influential, but their low weight offsets their distant positions and reduces their effect on the results. An alternative scaling of the correspondence analysis solution, the contribution biplot, is proposed as a way of mapping the results in order to avoid the problem of outlying and low-contributing rare objects.
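The contribution calculation at the heart of the report can be sketched as follows, assuming a small abundance table; this is a generic correspondence-analysis computation written by us, not the authors' code.

```python
import numpy as np

def ca_column_contributions(N):
    """Contribution of each column (e.g. species) to each principal axis."""
    P = N / N.sum()                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)   # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, d, Vt = np.linalg.svd(S, full_matrices=False)
    keep = d > 1e-12                      # drop null axes
    d, Vt = d[keep], Vt[keep]
    G = Vt.T * d / np.sqrt(c)[:, None]    # principal column coordinates
    # contribution of column j to axis k: c_j * g_jk^2 / lambda_k
    return (c[:, None] * G**2) / d**2

N = np.array([[10., 4., 1., 0.], [8., 6., 0., 1.], [2., 9., 0., 0.]])
print(ca_column_contributions(N)[:, 0])  # rare species can be outlying on the
                                         # map yet contribute little to axis 1
```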
Abstract:
BACKGROUND: Three non-synonymous single nucleotide polymorphisms (Q223R, K109R and K656N) of the leptin receptor gene (LEPR) have been tested for association with obesity-related outcomes in multiple studies, showing inconclusive results. We performed a systematic review and meta-analysis of the association of the three LEPR variants with BMI. In addition, we analysed 15 SNPs within the LEPR gene in the CoLaus study, assessing the interaction of the variants with sex. METHODOLOGY/PRINCIPAL FINDINGS: We searched electronic databases, including population-based studies that investigated the association between the LEPR variants Q223R, K109R and K656N and obesity-related phenotypes in healthy, unrelated subjects. We furthermore performed meta-analyses of the genotype and allele frequencies in case-control studies. Results were stratified by SNP and by potential effect modifiers. CoLaus data were analysed by logistic and linear regressions and tested for interaction with sex. The meta-analysis of published data did not show an overall association between any of the tested LEPR variants and overweight. However, the choice of a BMI cut-off value to distinguish cases from controls was crucial for explaining heterogeneity in Q223R. Differences in allele frequencies across ethnic groups are compatible with natural selection of the derived alleles in Q223R and K109R and of the ancestral allele in K656N in Asians. In CoLaus, the rs10128072, rs3790438 and rs3790437 variants showed interaction with sex in their association with overweight, waist circumference and fat mass in linear regressions. CONCLUSIONS: Our systematic review and analysis of primary data from the CoLaus study did not show an overall association between LEPR SNPs and overweight. Most studies were underpowered to detect small effect sizes. A potential effect modification by sex, population stratification, as well as the role of natural selection should be addressed in future genetic association studies.
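The pooling step of such a meta-analysis can be sketched with a standard inverse-variance fixed-effect model; the per-study estimates below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

def fixed_effect_meta(log_or, se):
    """Pool per-study log odds ratios with inverse-variance weights."""
    log_or, se = np.asarray(log_or), np.asarray(se)
    w = 1.0 / se**2                           # inverse-variance weights
    est = np.sum(w * log_or) / np.sum(w)      # pooled log OR
    se_pool = np.sqrt(1.0 / np.sum(w))
    z = est / se_pool
    return np.exp(est), 2 * norm.sf(abs(z))   # pooled OR and p-value

# Three hypothetical studies of one LEPR variant vs. overweight:
print(fixed_effect_meta([0.10, -0.05, 0.20], [0.08, 0.12, 0.10]))
```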
Abstract:
Background: We have recently shown that the median diagnostic delay for Crohn's disease (CD) (i.e. the period from first symptom onset to diagnosis) in the Swiss IBD Cohort Study (SIBDCS) was 9 months; seventy-five percent of all CD patients were diagnosed within 24 months. The clinical impact of a long diagnostic delay on the natural history of CD is unknown. Aim: To compare the frequency and type of CD-related complications between patients with a long diagnostic delay (>24 months) and those diagnosed within 24 months. Methods: Retrospective analysis of data from the SIBDCS, comprising a large sample of CD patients followed in hospitals and private practices across Switzerland. The proportions of the following outcomes were compared between groups of patients diagnosed 1, 2-5, 6-10, 11-15, and ≥ 16 years ago, stratified according to the length of diagnostic delay: bowel stenoses, internal fistulas, perianal fistulas, CD-related surgical interventions, and extraintestinal manifestations. Results: Two hundred CD patients (121 female, mean age 44.9 ± 15.0 years, 38% smokers, 71% ever treated with immunomodulators and 35% with anti-TNF) with long diagnostic delay were compared to 697 CD patients (358 female, mean age 39.1 ± 14.9 years, 33% smokers, 74% ever treated with immunomodulators and 33% with anti-TNF) diagnosed within 24 months. No differences in outcomes were observed between the two patient groups within the first year after CD diagnosis. Among those diagnosed 2-5 years ago, CD patients with long diagnostic delay (n = 45) presented more frequently with internal fistulas (11.1% vs. 3.1%, p = 0.03) and bowel stenoses (28.9% vs. 15.7%, p = 0.05), and they more frequently underwent CD-related operations (15.6% vs. 5.0%, p = 0.02) compared to the patients diagnosed within 24 months (n = 159). Among those diagnosed 6-10 years ago, CD patients with long diagnostic delay (n = 48) presented more frequently with extraintestinal manifestations (60.4% vs. 34.6%, p = 0.001) than those diagnosed within 24 months (n = 182). For the patients diagnosed 11-15 years ago, no differences in outcomes were found between the long diagnostic delay group (n = 106) and the group diagnosed within 24 months (n = 32). Among those diagnosed ≥ 16 years ago, the group with long diagnostic delay (n = 71) more frequently underwent CD-related operations (63.4% vs. 46.5%, p = 0.01) compared to the group diagnosed with CD within 24 months (n = 241). Conclusions: A long diagnostic delay in CD patients is associated with a more complicated disease course and a higher number of CD-related operations in the years following the diagnosis. Our results indicate that efforts should be undertaken to shorten the diagnostic delay in CD patients in order to reduce the risk of progression towards a complicated disease phenotype.
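For example, the internal-fistula comparison in the 2-5 year group can be reproduced from the counts implied by the reported percentages (11.1% of 45 ≈ 5 events vs. 3.1% of 159 ≈ 5 events), assuming a chi-square test without continuity correction was used:

```python
from scipy.stats import chi2_contingency

# Counts implied by the reported percentages: [fistulas, no fistulas]
table = [[5, 40],    # long diagnostic delay group (n = 45)
         [5, 154]]   # diagnosed within 24 months (n = 159)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p ~ 0.03, matching the abstract
```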
Abstract:
This study investigated fatigue-induced changes in spring-mass model characteristics during repeated running sprints. Sixteen active subjects performed 12 × 40 m sprints interspersed with 30 s of passive recovery. Vertical and anterior-posterior ground reaction forces were measured at 5-10 m and 30-35 m and used to determine spring-mass model characteristics. Contact (P < 0.001), flight (P < 0.05) and swing times (P < 0.001), together with braking, push-off and total stride durations (P < 0.001), lengthened across repetitions. Stride frequency (P < 0.001) and push-off forces (P < 0.05) decreased with fatigue, whereas changes in stride length (P = 0.06), braking forces (P = 0.08) and peak vertical forces (P = 0.17) approached significance. Center-of-mass vertical displacement (P < 0.001), but not leg compression (P > 0.05), increased with time. As a result, vertical stiffness decreased (P < 0.001) from the first to the last repetition, whereas leg stiffness changes across sprint trials were not significant (P > 0.05). Changes in vertical stiffness were correlated (r > 0.7; P < 0.001) with changes in stride frequency. Compared to 5-10 m, most ground reaction force-related parameters were higher (P < 0.05) at 30-35 m, whereas contact time, stride frequency, and vertical and leg stiffness were lower (P < 0.05). Vertical stiffness deteriorates when 40 m sprints are repeated, which alters impact parameters. Maintaining faster stride frequencies by retaining higher vertical stiffness is a prerequisite for improving performance during repeated sprinting.
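The study derives the spring-mass quantities from measured ground reaction forces. For readers without force-platform data, the widely used sine-wave approximation of Morin et al. (2005) estimates the same quantities from contact and flight times alone, as sketched below with illustrative input values:

```python
import numpy as np

def spring_mass(m, v, L, t_c, t_f, g=9.81):
    """Vertical and leg stiffness (N/m) from body mass m (kg), running
    velocity v (m/s), leg length L (m), contact time t_c and flight time
    t_f (s), modelling vertical force as a half sine wave over contact."""
    f_max = m * g * (np.pi / 2.0) * (t_f / t_c + 1.0)      # peak vertical force
    dz = f_max * t_c**2 / (m * np.pi**2) - g * t_c**2 / 8  # CoM vertical drop
    k_vert = f_max / dz                                     # vertical stiffness
    dl = L - np.sqrt(L**2 - (v * t_c / 2.0)**2) + dz        # leg compression
    return k_vert, f_max / dl                               # and leg stiffness

print(spring_mass(m=70, v=7.5, L=0.95, t_c=0.12, t_f=0.12))  # ~79 and ~15 kN/m
```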
Abstract:
Around 11.5 × 10⁶ m³ of rock detached from the eastern slope of the Santa Cruz valley (San Juan province, Argentina) in the first fortnight of January 2005. The rockslide-debris avalanche blocked the river course, resulting in the development of a lake with a maximum length of around 3.5 km. The increase in the inflow rate, from 47,000-74,000 m³/d between April and October to 304,000 m³/d between late October and the first fortnight of November, accelerated the growth of the lake. On 12 November 2005 the dam failed, releasing 24.6 × 10⁶ m³ of water. The resulting outburst flood caused damage mainly to infrastructure and affected the facilities of a hydropower dam under construction 250 km downstream from the source area. In this work we describe the causes and consequences of the natural dam formation and failure, and we dynamically model the 2005 rockslide-debris avalanche with DAN3D. Additionally, as a volume of ~24 × 10⁶ m³ of rock still remains unstable on the slope, we use the results of the back analysis to forecast the formation of a future natural dam. We analyzed two potential scenarios: a partial slope failure of 6.5 × 10⁶ m³ and a worst case in which all the unstable volume remaining on the slope fails. The spreading of these potential events shows that a new blockage of the Santa Cruz River is likely to occur. According to their modeled morphometry and the contributing watershed upstream of the blockage area, the resulting dams, like the 2005 one, would also be unstable. This study shows the importance of back and forward analyses that can be carried out to obtain critical information for land-use planning, hazard mitigation, and emergency management.
Abstract:
Records matching the search string biogeograph* were collected from the Science Citation Index (SCI). A total of 3,456 records were downloaded for the 1945-2006 period from titles of articles and reviews, and 10,543 records were downloaded for 1991-2006, also taking abstracts and keywords into consideration. Temporal trends of publications, the geographical and institutional distribution of the research output, authorship, and core journals were evaluated. As many as 122 countries carry out biogeographic research; in the most recent period, the USA is the top-producing country, followed by the United Kingdom, Australia, France, Germany, Spain, and Canada. There were 17,493 authors contributing to the field. During 1991-2006 there were 4,098 organizations with authors involved in biogeographic research; the institutions with the highest numbers of papers are the Natural History Museum (United Kingdom), the University of California, Berkeley (USA), the Muséum National d'Histoire Naturelle (France), the Universidad Nacional Autónoma de México (Mexico), the American Museum of Natural History (USA) and the Russian Academy of Sciences (Russia). Research articles are spread over a variety of journals, with the Journal of Biogeography, Molecular Phylogenetics and Evolution, Molecular Ecology, and the Biological Journal of the Linnean Society being the core journals. Of the 28,759 keywords retrieved, those with the highest frequency were evolution, phylogeny, diversity, mitochondrial DNA, pattern(s), systematics, and population(s). We conclude that publications on biogeography have increased substantially in recent years, especially since 1998. The preferred journal for biogeographic papers is the Journal of Biogeography. The most frequent keywords indicate that biogeography fits well within both evolutionary biology and ecology, with molecular biology and phylogenetics being important factors driving its current development.
Abstract:
This preliminary exploration was limited by a number of factors. The format of the study necessarily induced some selection bias among the panelists, because of the complexity of some questions and the time required to complete the questionnaires. Several issues were not addressed; one example is the response to HIV infection occurring in a vaccinee. The study also did not address the difficulties related to the licensing of the vaccine: the proposed scenario assumed, as a starting point for the analysis, that the vaccine had already been registered. Finally, it was not possible to conduct a sensitivity analysis to evaluate how the responses would have changed if some important characteristics of the vaccine had been modified. Very diverse evaluations were given in response to questions related to attitudes toward, and perception of, AIDS and an AIDS vaccine. The possibility that vaccine availability or usage could be associated with an increased frequency of risky behaviors was spontaneously mentioned by half of the panelists. The estimation of the proportion of persons at highest risk who would choose to use this vaccine also indicated a high degree of uncertainty. This study offers important lessons. According to a broad and diverse panel of individuals, an incompletely effective AIDS vaccine would add a level of complexity to the AIDS prevention strategy rather than simplify it. The use of such a vaccine would have to be coupled with counselling. This implies a sustained emphasis on the recommendations that have been central to the STOP AIDS campaigns until now. In addition, consensual issues, as well as issues more likely to be controversial, have been identified. This should greatly help focus the work of any committee designated to develop and implement a vaccination policy if an AIDS vaccine became available. Finally, our experience with the Policy Delphi indicates that this mode of structured communication could be usefully applied to other public health issues that combine high visibility with a complex relationship to public perception.
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
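A minimal sketch of the weighted log-ratio computation, assuming a strictly positive data matrix X; the function and variable names are ours, and the masses-as-weights choice follows the abstract's description.

```python
import numpy as np

def weighted_log_ratio_analysis(X):
    """Weighted log-ratio analysis (spectral mapping) of a positive matrix."""
    P = X / X.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)     # row and column weights (masses)
    L = np.log(P)
    # Weighted double-centering: only ratios of data values now matter,
    # giving subcompositional coherence and distributional equivalence.
    Lc = L - (L @ c)[:, None] - (r @ L)[None, :] + r @ L @ c
    S = np.sqrt(r)[:, None] * Lc * np.sqrt(c)[None, :]
    U, d, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * d) / np.sqrt(r)[:, None]       # principal row coordinates
    return F, d**2                          # coordinates and inertias per axis

X = np.array([[10., 20., 70.], [15., 25., 60.], [40., 20., 40.]])
coords, inertias = weighted_log_ratio_analysis(X)
print(coords[:, :2])                        # first two axes of the map
```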
Abstract:
Background: Medical treatment of inflammatory bowel disease (IBD) is becoming more and more complex, as several classes of immunomodulating drugs (IMD) are often used simultaneously, greatly increasing the probability of adverse effects. Most studies reporting on adverse effects focus on single therapies, and studies providing a global survey of side effects across multiple treatments are lacking. Aim: To assess the type and frequency of adverse events in IBD patients treated with single and multiple IMD therapy. Methods: Analysis of data from the Swiss IBD Cohort Study (SIBDCS), which collects data on a large sample of IBD patients from hospitals and private practices across Switzerland. The following IMD categories were analyzed: 5-ASA, azathioprine (Aza), 6-mercaptopurine (6-MP), methotrexate (MTX), anti-TNF agents (infliximab, adalimumab, certolizumab pegol), cyclosporine, tacrolimus, and steroids. The following side effects were assessed: hepatitis, pancreatitis, leucopenia, thrombopenia, nephritis, allergic reaction, pneumonitis, infections (including tuberculosis), osteoporosis, abdominal pain/diarrhea (unrelated to IBD activity), cataract, diabetes, exanthema, hirsutism, lupus-like syndrome, myalgias, depression/psychosis, and tumor development. Results: A total of 1,961 patients were analyzed (977 [50%] female, mean age 42.1 ± 14.4 years): 1,119 with Crohn's disease (CD), 800 with ulcerative colitis (UC), and 42 with indeterminate colitis (IC). Three hundred eighteen (16.2%) patients were not treated with any of the above-mentioned medications, while 650 (33.2%), 569 (29%) and 424 (21.6%) patients received one, two, and three or more IMDs, respectively. Of the 1,643 patients treated with IMD, 535 (32.6%) reported at least one side effect. We found a significant correlation between the number of drugs used by a patient and the frequency of side effects (17.4% side effects for one drug, 29% for two drugs, and 60.6% for three or more drugs, p < 0.001). The frequencies of side effects for the different IMD classes were as follows: 5-ASA (n = 980 treated patients) 10.8%; Aza/6-MP (n = 636) 51.9% (pancreatitis in 57 patients [9%], hepatitis in 17 [2.7%]); MTX (n = 146) 42.5% (hepatitis in 4 patients [2.7%]); anti-TNF (n = 255) 23.1%; cyclosporine (n = 49) 10.2%; tacrolimus (n = 5) 20%; steroids (systemic or topical, n = 1,150) 9.6%. Conclusion: IBD treatment is associated with a significant number of side effects. A direct correlation between the number of IMDs used simultaneously and the frequency of side effects was observed. These results indicate that treating physicians should be vigilant for the occurrence of side effects in IBD patients under single and/or multiple drug therapy.
Abstract:
We consider the joint visualization of two matrices which have common rows and columns, for example multivariate data observed at two time points or split according to a dichotomous variable. Methods of interest include principal components analysis for interval-scaled data, or correspondence analysis for frequency data or ratio-scaled variables on commensurate scales. A simple result in matrix algebra shows that by setting up the matrices in a particular block format, matrix sum and difference components can be visualized. The case where we have more than two matrices is also discussed, and the methodology is applied to data from the International Social Survey Program.
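The matrix-algebra result can be verified numerically in a few lines; the block construction below is generic, with illustrative random data, and shows why the analysis separates into sum and difference components:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((4, 3)), rng.random((4, 3))   # two matched 4x3 matrices
M = np.block([[A, B], [B, A]])                   # the special block format

# An orthogonal change of basis block-diagonalizes M into A+B and A-B:
I4, I3 = np.eye(4), np.eye(3)
Ql = np.block([[I4, I4], [I4, -I4]]) / np.sqrt(2)
Qr = np.block([[I3, I3], [I3, -I3]]) / np.sqrt(2)
D = Ql @ M @ Qr
print(np.allclose(D[:4, :3], A + B))   # True: the 'sum' block
print(np.allclose(D[4:, 3:], A - B))   # True: the 'difference' block
# Hence the singular vectors of M split into those of A+B and of A-B, which
# is what lets sum and difference components be read off one joint map.
```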
Abstract:
Large, rare copy number variants (CNVs) have been implicated in a variety of psychiatric disorders, but the role of CNVs in recurrent depression is unclear. We performed a genome-wide analysis of large, rare CNVs in 3,106 cases of recurrent depression, 459 controls screened for lifetime absence of psychiatric disorder, and 5,619 unscreened controls from phase 2 of the Wellcome Trust Case Control Consortium (WTCCC2). We compared the frequency of cases with CNVs against the frequency observed in each control group, analysing CNVs over the whole genome and over genic, intergenic, intronic and exonic regions. We found that deletion CNVs were associated with recurrent depression, whereas duplications were not. The effect was significant when comparing cases with WTCCC2 controls (P=7.7 × 10⁻⁶, odds ratio (OR) = 1.25 (95% confidence interval (CI) 1.13-1.37)) and with screened controls (P=5.6 × 10⁻⁴, OR = 1.52 (95% CI 1.20-1.93)). Further analysis showed that CNVs deleting protein-coding regions were largely responsible for the association. Within an analysis of regions previously implicated in schizophrenia, we found an overall enrichment of CNVs in our cases when compared with screened controls (P=0.019). We observe an ordered increase in the proportion of samples with deletion CNVs, with the lowest proportion in screened controls, the next highest in unscreened controls and the highest in cases. This may suggest that the absence of deletion CNVs, especially in genes, is associated with resilience to recurrent depression.
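For orientation, the odds ratio and Wald confidence interval of such a burden comparison can be computed from a 2×2 table as below; the counts are illustrative, since the abstract does not report the underlying table.

```python
import numpy as np
from scipy.stats import norm

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """a/b: cases with/without deletion CNVs; c/d: the same for controls."""
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)        # Wald standard error
    z = norm.isf(alpha / 2)
    ci = (np.exp(log_or - z * se), np.exp(log_or + z * se))
    p = 2 * norm.sf(abs(log_or) / se)
    return np.exp(log_or), ci, p

print(odds_ratio_ci(900, 2206, 1300, 4319))    # OR ~ 1.36 (illustrative counts)
```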