70 results for Source Code Analysis
Abstract:
There are suggestions of an inverse association between folate intake and serum folate levels and the risk of oral cavity and pharyngeal cancers (OPCs), but most studies are limited in sample size, with only a few reporting information on the source of dietary folate. Our study aims to investigate the association between folate intake and the risk of OPC within the International Head and Neck Cancer Epidemiology (INHANCE) Consortium. We analyzed pooled individual-level data from ten case-control studies participating in the INHANCE consortium, including 5,127 cases and 13,249 controls. Odds ratios (ORs) and the corresponding 95% confidence intervals (CIs) were estimated for the associations between total folate intake (natural, fortification and supplementation) and natural folate only, and OPC risk. We found an inverse association between total folate intake and overall OPC risk (the adjusted OR for the highest vs. the lowest quintile was 0.65, 95% CI: 0.43-0.99), with a stronger association for the oral cavity (OR = 0.57, 95% CI: 0.43-0.75). A similar, though somewhat weaker, inverse association was observed for folate intake from natural sources only in oral cavity cancer (OR = 0.64, 95% CI: 0.45-0.91). The highest OPC risk was observed in heavy alcohol drinkers with low folate intake as compared to never/light drinkers with high folate intake (OR = 4.05, 95% CI: 3.43-4.79); the attributable proportion (AP) owing to interaction was 11.1% (95% CI: 1.4-20.8%). Lastly, we report an OR of 2.73 (95% CI: 2.34-3.19) for ever tobacco users with low folate intake, compared with never tobacco users with high folate intake (AP of interaction = 10.6%, 95% CI: 0.41-20.8%). Our analysis of a large pool of case-control studies supports a protective effect of total folate intake on OPC risk.
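The two interaction results above (alcohol with low folate, tobacco with low folate) rest on the attributable proportion due to interaction. For reference, this quantity is commonly computed with Rothman's formula; whether INHANCE applied exactly this estimator is an assumption, and the notation below is ours:

```latex
% Attributable proportion (AP) due to additive interaction (Rothman).
% OR_{11}: odds ratio for the doubly exposed group (e.g. heavy drinking AND
% low folate); OR_{10}, OR_{01}: odds ratios for each exposure alone,
% all relative to the doubly unexposed reference group.
\[
  \mathrm{AP} = \frac{\mathrm{OR}_{11} - \mathrm{OR}_{10} - \mathrm{OR}_{01} + 1}{\mathrm{OR}_{11}}
\]
```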
Abstract:
Deciding whether two fingerprint marks originate from the same source requires examination and comparison of their features. Many cognitive factors play a major role in such information processing. In this paper we examined the consistency (both between- and within-experts) in the analysis of latent marks, and whether the presence of a 'target' comparison print affects this analysis. Our findings showed that the context of a comparison print affected analysis of the latent mark, possibly influencing allocation of attention, visual search, and threshold for determining a 'signal'. We also found that even without the context of the comparison print there was still a lack of consistency in analysing latent marks. Not only was this reflected by inconsistency between different experts, but the same experts at different times were inconsistent with their own analysis. However, the characterization of these inconsistencies depends on the standard and definition of what constitutes inconsistent. Furthermore, these effects were not uniform; the lack of consistency varied across fingerprints and experts. We propose solutions to mediate variability in the analysis of friction ridge skin.
Abstract:
BACKGROUND AND STUDY AIMS: Appropriate use of colonoscopy is a key component of quality management in gastrointestinal endoscopy. In an update of a 1998 publication, the 2008 European Panel on the Appropriateness of Gastrointestinal Endoscopy (EPAGE II) defined appropriateness criteria for various colonoscopy indications. This introductory paper deals with methodology, general appropriateness, and a review of colonoscopy complications. METHODS: The RAND/UCLA Appropriateness Method was used to evaluate the appropriateness of various diagnostic colonoscopy indications, with 14 multidisciplinary experts using a scale from 1 (extremely inappropriate) to 9 (extremely appropriate). Evidence reported in a comprehensive updated literature review was used for these decisions. Consolidation of the ratings into three appropriateness categories (appropriate, uncertain, inappropriate) was based on the median and the heterogeneity of the votes. The experts then met to discuss areas of disagreement in the light of existing evidence, followed by a second rating round and a subsequent third voting round on necessity criteria, using much more stringent criteria (i.e., colonoscopy is deemed mandatory). RESULTS: Overall, 463 indications were rated, with 55%, 16%, and 29% of them judged appropriate, uncertain, and inappropriate, respectively. Perforation and hemorrhage rates, as reported in 39 studies, were in general <0.1% and <0.3%, respectively. CONCLUSIONS: The updated EPAGE II criteria constitute an aid to clinical decision-making but should in no way replace individual judgment. Detailed panel results are freely available on the internet (www.epage.ch) and will thus constitute a reference source of information for clinicians.
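The consolidation rule described in METHODS can be made concrete. The sketch below follows the common RAND/UCLA convention (median tertile plus a disagreement rule); the exact disagreement threshold used by EPAGE II is an assumption:

```python
from statistics import median

def classify_indication(ratings, disagreement_threshold=5):
    """Consolidate panel ratings (1-9) into an appropriateness category.

    Sketch of the usual RAND/UCLA convention: median 7-9 -> appropriate,
    median 1-3 -> inappropriate, median 4-6 -> uncertain, and any indication
    with 'disagreement' (here: at least `disagreement_threshold` panelists in
    each extreme tertile, a common rule for 14-16 member panels) is rated
    uncertain regardless of the median.
    """
    m = median(ratings)
    low = sum(1 for r in ratings if r <= 3)
    high = sum(1 for r in ratings if r >= 7)
    if low >= disagreement_threshold and high >= disagreement_threshold:
        return "uncertain"  # heterogeneous votes: panel disagreement
    if m >= 7:
        return "appropriate"
    if m <= 3:
        return "inappropriate"
    return "uncertain"

# Example: 14 experts rating one colonoscopy indication
print(classify_indication([7, 8, 8, 9, 7, 6, 7, 8, 9, 7, 8, 7, 6, 8]))  # appropriate
```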
Abstract:
C4-dicarboxylates are among the preferred carbon and energy sources for the growth of P. aeruginosa, a ubiquitous and metabolically versatile bacterium. However, despite their importance, C4-dicarboxylate sensing and uptake systems were poorly understood in P. aeruginosa and little information was available in the literature. In our work, the C4-dicarboxylate transport (Dct) system in P. aeruginosa was found to be composed of a novel two-component system, called DctB/DctD, regulating together with the sigma factor RpoN the expression of two newly identified C4-dicarboxylate transporters: DctA and DctPQM. Inactivation of the dctA, dctB or dctD gene caused a growth defect of the strain in minimal media supplemented with succinate, fumarate or malate, indicating their major role in Dct. However, residual growth of the dctA mutant in these media suggested the presence of redundant C4-dicarboxylate transporter(s). Tn5 insertion mutagenesis of the ΔdctA mutant, combined with a screening for growth on succinate, led to the identification of a second Dct system, the DctPQM transporter, belonging to the tripartite ATP-independent periplasmic (TRAP) family of carriers. A ΔdctAΔdctPQM double mutant showed no growth on malate and fumarate, although residual growth on succinate suggested that additional transporters for succinate are present. Competition experiments demonstrated that the DctPQM carrier was more efficient than the DctA carrier for the utilization of succinate at μM concentrations, whereas DctA was the major transporter at mM concentrations. For the first time, high- and low-affinity uptake systems for succinate (DctPQM and DctA, respectively) are reported to function co-ordinately to transport C4-dicarboxylates. Most probably, the presence of redundant uptake systems contributes to the versatility of this bacterium. Next, the regulation of the Dct system was investigated. While performing a parallel study of the carbon catabolite repression (CCR) phenomenon in P. aeruginosa, a link between the CCR cascade (CbrAB/CrcZ/Crc) and the Dct system was observed. Crc is a translational repressor acting when preferred carbon sources (like C4-dicarboxylates) are present. CrcZ is a small RNA acting as a functional antagonist of Crc, induced by the CbrA/CbrB two-component system when non-preferred carbon sources (like mannitol) are utilized. Novel targets of the CbrAB/CrcZ/Crc system in P. aeruginosa were identified using transcriptome analysis; among them, dctA and dctPQM were detected. CCR regulates dct transporter gene expression depending on the succinate concentration in the growth medium; this regulation is possible because, at the same time, succinate concentrations tune CCR itself. In a medium containing high succinate concentrations, CrcZ levels were low and Crc therefore inhibited the translation of its mRNA targets, whereas in a medium containing low succinate concentrations, the resulting increase in CrcZ levels sequestered Crc, inhibiting its activity. This model shows for the first time that CCR possesses a feedback-based circuitry, an important type of regulatory loop that confers the best adaptive response under changing environmental conditions. The expression of the dct transporter genes was also found to be regulated by the RNA chaperone protein Hfq. Hfq has the same post-transcriptional effect as Crc at high succinate concentrations, i.e. inhibiting dctP and dctR and indirectly favouring dctA expression.
Moreover, an additional indirect positive regulation of dctP expression by Hfq was found. Finally, a metabolome approach was performed to investigate the internal signals modulating CCR via induction of CbrA activity in P. aeruginosa PAO1 and P. putida KT2442. The results of this analysis are currently under study in the laboratory.
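The reported division of labour between the high-affinity DctPQM and low-affinity DctA carriers can be illustrated with simple Michaelis-Menten kinetics. The kinetic parameters below are hypothetical, chosen only to reproduce the qualitative μM-versus-mM behaviour described above:

```python
def uptake_rate(s, vmax, km):
    """Michaelis-Menten uptake rate at substrate concentration s (in mM)."""
    return vmax * s / (km + s)

# Hypothetical parameters, for illustration only: a high-affinity/low-capacity
# carrier (DctPQM-like, Km in the uM range) outcompetes a low-affinity/
# high-capacity carrier (DctA-like, Km in the mM range) at micromolar
# succinate, and vice versa at millimolar concentrations.
carriers = {"DctPQM-like": (10.0, 0.005), "DctA-like": (100.0, 1.0)}  # (Vmax, Km in mM)

for s in (0.001, 10.0):  # 1 uM and 10 mM succinate
    rates = {name: round(uptake_rate(s, vmax, km), 3)
             for name, (vmax, km) in carriers.items()}
    print(f"succinate = {s} mM -> {rates}")
# 1 uM: DctPQM-like ~1.67 >> DctA-like ~0.10; 10 mM: DctA-like ~90.9 >> ~10.0
```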
Abstract:
General Introduction. This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent. Part I: In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Céline Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs.
The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines (a sketch of these three steps follows below). This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the potential model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs used before 1997 and the "single list" RoOs used since 1997. Second, using a Constant Elasticity of Transformation function, in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.
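As referenced above, here is a minimal sketch of the three-step simulation, assuming a logistic relationship between utilization rates, preference margins and the MFC; the thesis's actual estimator and coefficients may differ, and all numbers below are illustrative:

```python
import numpy as np

# Step 1 (assumed already estimated): u = sigmoid(a + b*pref + c*mfc),
# where u is the utilization rate, pref the tariff-preference margin and
# mfc the Maximum Foreign Content expressed as a share of the good's value.
a, b, c = -1.0, 8.0, 3.0  # hypothetical coefficient estimates

def simulated_mfc(u_obs, pref):
    """Step 2: invert the fitted relationship line by line to find the MFC
    that reproduces the observed utilization rate at the observed margin."""
    logit = np.log(u_obs / (1.0 - u_obs))
    return (logit - a - b * pref) / c

# Step 3: trade-weighted average of the simulated MFC across tariff lines.
u_obs = np.array([0.60, 0.85, 0.40])    # observed utilization rates
pref = np.array([0.05, 0.10, 0.02])     # preference margins
trade = np.array([120.0, 300.0, 80.0])  # trade weights (imports per line)

mfc_lines = simulated_mfc(u_obs, pref)
mfc_overall = np.average(mfc_lines, weights=trade)
print(mfc_lines, round(mfc_overall, 3))
```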
Part II: The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years (see the sketch below). The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
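A minimal sketch of the count-data specification, run on synthetic data: revocations regressed on initiations lagged five years, with a post-agreement interaction. All variable names and numbers are illustrative, not the chapter's actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(1985, 2006)
initiations = rng.poisson(20, size=years.size)      # synthetic AD initiations
post = (years >= 2000).astype(int)                  # first post-ADA sunset reviews
lag5 = pd.Series(initiations, index=years).shift(5) # initiations lagged 5 years

df = pd.DataFrame({"revocations": rng.poisson(5, size=years.size),  # synthetic
                   "init_lag5": lag5.values,
                   "post": post}).dropna()
X = sm.add_constant(df[["init_lag5", "post"]])
X["init_lag5_post"] = df["init_lag5"] * df["post"]  # coefficient of interest:
                                                    # did the 5-year cycle
                                                    # strengthen post-agreement?
model = sm.GLM(df["revocations"], X, family=sm.families.Poisson()).fit()
print(model.summary())
# A one-for-one cycle would imply revocations tracking init_lag5 with unit
# elasticity; the chapter finds the estimated link well below that.
```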
Abstract:
Around 11.5 × 10⁶ m³ of rock detached from the eastern slope of the Santa Cruz valley (San Juan province, Argentina) in the first fortnight of January 2005. The rockslide-debris avalanche blocked the river course, resulting in the development of a lake with a maximum length of around 3.5 km. The increase in the inflow rate, from 47,000-74,000 m³/day between April and October to 304,000 m³/day between late October and the first fortnight of November, accelerated the growth of the lake. On 12 November 2005 the dam failed, releasing 24.6 × 10⁶ m³ of water. The resulting outburst flood caused damage mainly to infrastructure, and affected the facilities of a hydropower dam under construction 250 km downstream from the source area. In this work we describe the causes and consequences of the natural dam formation and failure, and we dynamically model the 2005 rockslide-debris avalanche with DAN3D. Additionally, as a volume of ~24 × 10⁶ m³ of rock still remains unstable on the slope, we use the results of the back analysis to forecast the formation of a future natural dam. We analyzed two potential scenarios: a partial slope failure of 6.5 × 10⁶ m³ and a worst case in which all the unstable volume remaining on the slope fails. The spreading of these potential events shows that a new blockage of the Santa Cruz River is likely to occur. According to their modeled morphometry and the contributing watershed upstream of the blockage area, the resulting dams, like that of 2005, would also be unstable. This study shows the importance of back and forward analyses, which can be carried out to obtain critical information for land-use planning, hazard mitigation, and emergency management.
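A back-of-envelope consistency check of the reported figures; the phase durations are assumptions read off the dates given above:

```python
# Cumulative inflow at the reported rates vs. the 24.6e6 m^3 released at failure.
slow_phase_days = 195          # ~April to late October (assumed)
fast_phase_days = 20           # ~late October to 12 November (assumed)
low, high = 47_000, 74_000     # m^3/day, slow phase
fast = 304_000                 # m^3/day, fast phase

v_min = low * slow_phase_days + fast * fast_phase_days
v_max = high * slow_phase_days + fast * fast_phase_days
print(f"{v_min/1e6:.1f}-{v_max/1e6:.1f} million m^3")  # ~15.2-20.5
# The remainder of the 24.6e6 m^3 would have accumulated between the January
# blockage and April, before the reported inflow measurements began.
```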
Abstract:
This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) this latter individual is found as a result of a database search and (ii) remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper here intends to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic that has repeatedly demonstrated that the latter two requirements (i) and (ii) should not be a cause of concern.
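The no-correction argument can be illustrated with a toy closed-population calculation; the paper itself uses Bayesian networks, and the uniform prior, population size and match probability below are illustrative assumptions:

```python
def posterior_suspect(N, n, gamma):
    """Posterior probability that the matching suspect is the source.

    Toy model: a closed population of N possible sources with a uniform
    prior, random-match probability gamma for non-sources, and a searched
    database of n profiles in which the suspect matches and the other n-1
    members are excluded. The exclusion factor (1-gamma)**(n-1) appears in
    every hypothesis and cancels; only the (N - n) unexcluded outsiders
    remain as alternative sources.
    """
    return 1.0 / (1.0 + (N - n) * gamma)

print(posterior_suspect(N=10_000, n=1, gamma=1e-3))      # probable-cause case: ~0.091
print(posterior_suspect(N=10_000, n=1_000, gamma=1e-3))  # after database search: ~0.100
# Excluding database members removes candidate sources, so the posterior on
# the suspect rises slightly: dividing the likelihood ratio by the database
# size n is not warranted, in line with the paper's conclusion.
```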
Abstract:
Accurate determination of subpopulation sizes in bimodal populations remains problematic, yet it represents a powerful way to compare cellular heterogeneity under different environmental conditions. So far, most studies have relied on qualitative descriptions of population distribution patterns, on population-independent descriptors, or on arbitrary placement of thresholds distinguishing biological ON from OFF states. We found that all these methods fall short of accurately describing small subpopulation sizes in bimodal populations. Here we propose a simple, statistics-based method for the analysis of small subpopulation sizes for use in the free software environment R and test this method on real as well as simulated data. Four so-called population-splitting methods were designed with different algorithms that can estimate subpopulation sizes from bimodal populations. All four methods proved more precise than previously used methods when analyzing the sizes of the transfer-competent subpopulation arising in populations of the bacterium Pseudomonas knackmussii B13. The methods' resolving powers were further explored by bootstrapping and simulations. Two of the methods were not severely limited by the proportions of subpopulations they could estimate correctly, but the two others only allowed accurate subpopulation quantification when this amounted to less than 25% of the total population. In contrast, only one method was still sufficiently accurate with subpopulations smaller than 1% of the total population. This study proposes a number of rational approximations to quantifying small subpopulations and offers an easy-to-use protocol for their implementation in the open-source statistical software environment R.
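To illustrate the general idea of splitting a bimodal population, here is a sketch using a two-component Gaussian mixture on simulated data. The paper's four population-splitting methods are implemented in R and are not reproduced here; this is only an analogous approach:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulate a bimodal single-cell signal: 97% OFF cells, 3% ON cells.
rng = np.random.default_rng(1)
off = rng.normal(100.0, 10.0, size=9_700)
on = rng.normal(200.0, 15.0, size=300)
signal = np.concatenate([off, on]).reshape(-1, 1)

# Fit a two-component mixture and read the minority ON-subpopulation size
# off the weight of the component with the higher mean.
gm = GaussianMixture(n_components=2, random_state=0).fit(signal)
on_weight = gm.weights_[np.argmax(gm.means_.ravel())]
print(f"estimated ON subpopulation: {on_weight:.1%}")  # close to the true 3%
```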
Abstract:
OBJECTIVE: HIV-1 post-exposure prophylaxis (PEP) is frequently prescribed after exposure to source persons with an undetermined HIV serostatus. To reduce unnecessary use of PEP, we implemented a policy including active contacting of source persons and the availability of free, anonymous HIV testing ('PEP policy'). METHODS: All consultations for potential non-occupational HIV exposures (i.e., outside the medical environment) were prospectively recorded. The impact of the PEP policy on PEP prescription and costs was analysed and modelled. RESULTS: Among 146 putative exposures, 47 involved a source person already known to be HIV-positive and 23 had no indication for PEP. The remaining 76 exposures involved a source person of unknown HIV serostatus. Of the 33 (43.4%) exposures for which the source person could be contacted and tested, PEP was avoided in 24 (72.7%), initiated and discontinued in seven (21.2%), and prescribed and completed in two (6.1%). In contrast, of the 43 (56.6%) exposures for which the source person could not be tested, PEP was prescribed in 35 (81.4%), P < 0.001. Upon modelling, the PEP policy allowed a 31% reduction in the cost of managing exposures to source persons of unknown HIV serostatus. The policy was cost-saving for an HIV prevalence of up to 70% in the source population. The availability of all source persons for testing would have reduced costs by 64%. CONCLUSION: In the management of non-occupational HIV exposures, active contacting and free, anonymous testing of source persons proved feasible. This policy resulted in a decrease in PEP prescription, proved to be cost-saving, and presumably helped to avoid unnecessary toxicity and psychological stress.
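A sketch of the cost logic per exposure, with hypothetical unit costs; the study's actual cost inputs and model (including discontinued courses and unreachable sources) are richer than this:

```python
# Hypothetical unit costs, for illustration only.
COST_TEST = 50.0      # rapid HIV test of the source person (assumed)
COST_PEP = 1_000.0    # full 4-week PEP course (assumed)
p_source_positive = 0.03  # assumed HIV prevalence among source persons

# Policy arm: test the reachable source, prescribe PEP only if positive.
cost_policy = COST_TEST + p_source_positive * COST_PEP
# Counterfactual: prescribe PEP empirically to everyone.
cost_empirical = COST_PEP
print(f"policy: {cost_policy:.0f} vs empirical: {cost_empirical:.0f} per exposure")

# In this simple form the policy saves money whenever
# p_source_positive < 1 - COST_TEST/COST_PEP; the study's fuller model put
# the break-even prevalence at around 70% in the source population.
```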
Abstract:
This article analyses and discusses issues that pertain to the choice of relevant databases for assigning values to the components of evaluative likelihood ratio procedures at source level. Although several formal likelihood ratio developments currently exist, both case practitioners and recipients of expert information (such as the judiciary) may be reluctant to consider them as a framework for evaluating scientific evidence in context. The recent ruling R v T and the ensuing discussions in many forums provide illustrative examples of this. In particular, it is often felt that likelihood ratio-based reasoning amounts to an application that requires extensive quantitative information along with means for dealing with technicalities related to the algebraic formulation of these approaches. With regard to this objection, this article proposes two distinct discussions. In the first part, it is argued that, from a methodological point of view, there are additional levels of qualitative evaluation that are worth considering prior to focusing on particular numerical probability assignments. Analyses are proposed that intend to show that, under certain assumptions, relative numerical values, as opposed to absolute values, may be sufficient to characterize a likelihood ratio for practical and pragmatic purposes. The feasibility of such qualitative considerations points out that the availability of hard numerical data is not a necessary requirement for implementing a likelihood ratio approach in practice. It is further argued that, even if numerical evaluations can be made, qualitative considerations may be valuable because they can further the understanding of the logical underpinnings of an assessment. In the second part, the article draws a parallel to R v T by concentrating on a practical footwear-mark case received at the authors' institute. This case serves the purpose of exemplifying the possible usage of data from various sources in casework and helps to discuss the difficulty associated with reconciling the depth of theoretical likelihood ratio developments and the limitations in the degree to which these developments can actually be applied in practice.
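The claim that relative values may suffice can be seen directly from the structure of the likelihood ratio; the notation below is ours, not the article's:

```latex
% The likelihood ratio compares the probability of the evidence E under the
% two competing propositions H_1 and H_2. Only the ratio matters: if both
% probabilities are known only up to a common scaling factor k (relative
% rather than absolute values), that factor cancels and the LR is unchanged.
\[
  \mathrm{LR} \;=\; \frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}
            \;=\; \frac{k\,p_1}{k\,p_2}
            \;=\; \frac{p_1}{p_2}
\]
```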
Abstract:
Alzheimer's disease (AD) disrupts functional connectivity in distributed cortical networks. We analyzed changes in the S-estimator, a measure of multivariate intraregional synchronization, in electroencephalogram (EEG) source space in 15 mild AD patients versus 15 age-matched controls to evaluate its potential as a marker of AD progression. All participants underwent two clinical evaluations and two EEG recording sessions, at diagnosis and one year later. The main effect of AD was hyposynchronization in the medial temporal and frontal regions and relative hypersynchronization in the posterior cingulate, precuneus, cuneus, and parietotemporal cortices. However, the S-estimator did not change over time in either group. This result motivated an analysis of rapidly progressing versus slowly progressing AD patients. Rapidly progressing AD patients showed a significant reduction in synchronization with time, manifested in the left frontotemporal cortex. Thus, the evolution of source EEG synchronization over time is correlated with the rate of disease progression and should be considered as a cost-effective AD biomarker.
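For reference, a sketch of the S-estimator computation on simulated multichannel data, following the published eigenvalue-entropy definition (Carmeli et al., 2005); this is our reading of the measure, not the study's analysis code:

```python
import numpy as np

def s_estimator(x):
    """S-estimator of multivariate synchronization.

    x: array of shape (n_channels, n_samples). Returns a value in [0, 1]:
    ~0 for independent signals, 1 for fully synchronized ones. Computed as
    1 minus the normalized entropy of the eigenvalue spectrum of the
    zero-lag correlation matrix.
    """
    n = x.shape[0]
    corr = np.corrcoef(x)
    lam = np.linalg.eigvalsh(corr) / n   # normalized eigenvalues, sum to 1
    lam = lam[lam > 1e-12]               # drop numerical zeros before the log
    return 1.0 + np.sum(lam * np.log(lam)) / np.log(n)

rng = np.random.default_rng(0)
independent = rng.standard_normal((10, 2_000))
common = rng.standard_normal(2_000)
synchronized = np.tile(common, (10, 1)) + 0.1 * rng.standard_normal((10, 2_000))
print(s_estimator(independent), s_estimator(synchronized))  # ~0 vs ~1
```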
Abstract:
During my PhD, my aim was to provide new tools to increase our capacity to analyse gene expression patterns, and to study the evolution of gene expression in animals on a large-scale basis. Gene expression patterns (when and where a gene is expressed) are a key feature in understanding gene function, notably in development. It now appears clear that the evolution of developmental processes and of phenotypes is shaped both by evolution at the coding-sequence level and at the gene-expression level. Studying gene expression evolution in animals, with complex expression patterns over tissues and developmental time, is still challenging: no tools are available to routinely compare expression patterns between different species, with precision, and on a large-scale basis. Studies on gene expression evolution are therefore performed only on small gene datasets, or using imprecise descriptions of expression patterns. The aim of my PhD was thus to develop and use novel bioinformatics resources to study the evolution of gene expression. To this end, I developed the database Bgee (Base for Gene Expression Evolution). The approach of Bgee is to transform heterogeneous expression data (ESTs, microarrays, and in-situ hybridizations) into present/absent calls, and to annotate them to standard representations of the anatomy and development of different species (anatomical ontologies). An extensive mapping between the anatomies of species is then developed based on hypotheses of homology. These precise annotations to anatomies, and this extensive mapping between species, are the major assets of Bgee, and have required the involvement of many co-workers over the years. My main personal contribution is the development and management of both the Bgee database and the web application. Bgee is now in its ninth release, and includes an important gene expression dataset for five species (human, mouse, drosophila, zebrafish, Xenopus), with the most data for mouse, human and zebrafish. Using these three species, I conducted an analysis of gene expression evolution after duplication in vertebrates. Gene duplication is thought to be a major source of novelty in evolution, and to participate in speciation. It has been suggested that the evolution of gene expression patterns might contribute to the retention of duplicate genes. I performed a large-scale comparison of the expression patterns of hundreds of duplicated genes to those of their singleton ortholog in an outgroup, including both small- and large-scale duplicates, in three vertebrate species (human, mouse and zebrafish), using highly accurate descriptions of expression patterns. My results showed unexpectedly high rates of de novo acquisition of expression domains after duplication (neofunctionalization), at least as high as or higher than rates of partitioning of expression domains (subfunctionalization). I found differences in the evolution of expression of small- and large-scale duplicates, with small-scale duplicates more prone to neofunctionalization. Duplicates with neofunctionalization seemed to evolve under more relaxed selective pressure on the coding sequence. Finally, even with abundant and precise expression data, the majority fate I recovered was neither neo- nor subfunctionalization of expression domains, suggesting a major role for other mechanisms in duplicate gene retention.
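The neo- versus subfunctionalization comparison can be made concrete with a small sketch in which expression patterns are represented as sets of homologous anatomical terms; the classification rules below are a simplification for illustration, not Bgee's actual pipeline:

```python
def classify_fate(dup1, dup2, singleton):
    """Expression-pattern fates of a duplicate pair vs. the singleton ortholog.

    Simplified rules (both fates can co-occur):
    - neofunctionalization: a duplicate expresses a domain absent from the
      singleton ortholog (de novo acquisition);
    - subfunctionalization: the ancestral domains are complementarily
      partitioned between the duplicates, with none lost overall.
    """
    fates = set()
    if (dup1 | dup2) - singleton:
        fates.add("neofunctionalization")
    if (singleton - dup1) and (singleton - dup2) and singleton <= (dup1 | dup2):
        fates.add("subfunctionalization")
    return fates or {"conservation/other"}

# Toy example with anatomical terms as strings:
singleton = {"brain", "liver", "heart"}
print(classify_fate({"brain", "eye"}, {"liver", "heart"}, singleton))
# {'neofunctionalization', 'subfunctionalization'}
```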
Abstract:
This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on the quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference-independent and thus yield statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarity, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL is therefore a helpful tool for researchers as well as clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
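Two of the global measures mentioned, field strength and field similarity, have simple closed forms. The sketch below follows the standard definitions of Global Field Power and global map dissimilarity; it is our illustration, not CARTOOL code:

```python
import numpy as np

def gfp(v):
    """Global Field Power: spatial standard deviation of the
    average-referenced potential map v (one value per electrode)."""
    v = v - v.mean()
    return np.sqrt(np.mean(v ** 2))

def dissimilarity(v1, v2):
    """Global map dissimilarity: GFP of the difference between the two
    GFP-normalized, average-referenced maps. 0 means identical topographies
    (regardless of field strength); 2 means polarity-inverted topographies.
    Being strength-normalized, it is independent of the recording reference.
    """
    a = (v1 - v1.mean()) / gfp(v1)
    b = (v2 - v2.mean()) / gfp(v2)
    return gfp(a - b)

rng = np.random.default_rng(0)
map1 = rng.standard_normal(64)            # a 64-electrode potential map
print(dissimilarity(map1, 3.0 * map1))    # 0.0: same topography, stronger field
print(dissimilarity(map1, -map1))         # 2.0: inverted topography
```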