Abstract:
Functional RNA structures play an important role both in noncoding RNA transcripts and as regulatory elements in mRNAs. Here we present a computational study to detect functional RNA structures within the ENCODE regions of the human genome. Since structural RNAs in general lack characteristic signals in primary sequence, comparative approaches that evaluate evolutionary conservation of structure are most promising. We have used three recently introduced programs based either on phylogenetic stochastic context-free grammars (EvoFold) or on energy-directed folding (RNAz and AlifoldZ), yielding several thousand candidate structures (corresponding to ∼2.7% of the ENCODE regions). EvoFold has its highest sensitivity in highly conserved and relatively AU-rich regions, while RNAz favors slightly GC-rich regions, resulting in a relatively small overlap between the methods. Comparison with the GENCODE annotation points to functional RNAs in all genomic contexts, with a slightly increased density in 3′-UTRs. While we estimate a significant false discovery rate of ∼50%–70%, many of the predictions can be further substantiated by additional criteria: 248 loci are predicted by both RNAz and EvoFold, and an additional 239 RNAz or EvoFold predictions are supported by the (more stringent) AlifoldZ algorithm. Five hundred seventy RNAz structure predictions fall into regions that also show signs of selection pressure at the sequence level (i.e., conserved elements). More than 700 predictions overlap with noncoding transcripts detected by oligonucleotide tiling arrays. One hundred seventy-five selected candidates were tested by RT-PCR in six tissues, and expression could be verified in 43 cases (24.6%).
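The energy-directed screens mentioned above (RNAz, AlifoldZ) exploit a thermodynamic signal: a functional structure tends to be more stable than shuffled sequences of the same base composition. Below is a minimal sketch of that z-score idea, assuming the ViennaRNA Python bindings are installed; the toy sequence and shuffle count are illustrative, and this is not the RNAz implementation itself.

```python
# Minimal sketch of the thermodynamic z-score behind energy-directed screens:
# compare the native minimum free energy (MFE) with MFEs of shuffled controls.
import random
import statistics

import RNA  # ViennaRNA package Python bindings (assumed installed)


def mfe(seq: str) -> float:
    """Minimum free energy (kcal/mol) of the predicted secondary structure."""
    _structure, energy = RNA.fold(seq)
    return energy


def z_score(seq: str, n_shuffles: int = 100, seed: int = 0) -> float:
    """z-score of the native MFE against mononucleotide-shuffled controls.

    Strongly negative values indicate more thermodynamic stability than
    expected from base composition alone, the signal these screens exploit.
    """
    rng = random.Random(seed)
    shuffled_energies = []
    for _ in range(n_shuffles):
        letters = list(seq)
        rng.shuffle(letters)  # preserves mononucleotide composition
        shuffled_energies.append(mfe("".join(letters)))
    mu = statistics.mean(shuffled_energies)
    sigma = max(statistics.stdev(shuffled_energies), 1e-9)  # guard against 0
    return (mfe(seq) - mu) / sigma


print(z_score("GGGAAACUUCGGUUUCCC"))  # a hairpin-forming toy sequence
```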
Abstract:
The completion of the sequencing of the mouse genome promises to help predict human genes with greater accuracy. While current ab initio gene prediction programs are remarkably sensitive (i.e., they predict at least a fragment of most genes), their specificity is often low, and they predict a large number of false-positive genes in the human genome. Sequence conservation at the protein level with the mouse genome can help eliminate some of these false positives. Here we describe SGP2, a gene prediction program that combines ab initio gene prediction with TBLASTX searches between two genome sequences to provide both sensitive and specific gene predictions. The accuracy of SGP2, when used to predict genes by comparing the human and mouse genomes, is assessed on a number of data sets, including single-gene data sets, the highly curated human chromosome 22 predictions, and entire-genome predictions from ENSEMBL. The results indicate that SGP2 outperforms purely ab initio gene prediction methods, and that it works about as well with 3x shotgun data as with fully assembled genomes. SGP2 provides a specificity high enough that its predictions can be experimentally verified at a reasonable cost. SGP2 was used to generate a complete set of gene predictions on both the human and mouse genomes by comparing the genomes of these two species. Our results suggest that another few thousand human and mouse genes currently not in ENSEMBL are worth verifying experimentally.
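The core idea of SGP2, raising ab initio exon scores where TBLASTX reports protein-level conservation in the informant genome, can be sketched as follows. This is a conceptual illustration under assumed data structures (the Interval class, weight and scores are hypothetical), not SGP2's actual scoring scheme.

```python
# Conceptual sketch, in the spirit of SGP2: boost candidate exon scores with
# the scores of overlapping TBLASTX HSPs from the informant genome.
from dataclasses import dataclass


@dataclass
class Interval:
    start: int
    end: int
    score: float


def overlap(a: Interval, b: Interval) -> int:
    """Length of the overlap between two genomic intervals (0 if disjoint)."""
    return max(0, min(a.end, b.end) - max(a.start, b.start))


def rescore_exons(exons, tblastx_hsps, weight=0.2):
    """Raise each exon's ab initio score by the scaled score of overlapping HSPs."""
    rescored = []
    for exon in exons:
        bonus = sum(h.score for h in tblastx_hsps if overlap(exon, h) > 0)
        rescored.append(Interval(exon.start, exon.end, exon.score + weight * bonus))
    return rescored


# Example: the conserved exon candidate gains support, the other does not.
exons = [Interval(100, 250, 3.1), Interval(900, 1020, 2.8)]
hsps = [Interval(120, 240, 45.0)]
print(rescore_exons(exons, hsps))
```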
Abstract:
The recent availability of the chicken genome sequence poses the question of whether there are human protein-coding genes conserved in chicken that are currently not included in the human gene catalog. Here, we show, using comparative gene finding followed by experimental verification of exon pairs by RT-PCR, that the addition to the multi-exonic subset of this catalog could be as little as 0.2%, suggesting that we may be closing in on the human gene set. Our protocol, however, has two shortcomings: (i) the bioinformatic screening of the predicted genes, applied to filter out false positives, cannot handle intronless genes; and (ii) the experimental verification could fail to detect expression restricted to a specific developmental time. This highlights the importance of developing methods that can provide a reliable estimate of the number of these two types of genes.
Abstract:
This review paper reports the consensus of a technical workshop hosted by the European network NanoImpactNet (NIN). The workshop aimed to review the collective experience of working at the bench with manufactured nanomaterials (MNMs) and to recommend modifications to existing experimental methods and OECD protocols. Current procedures for cleaning glassware are appropriate for most MNMs, although interference with electrodes may occur. Maintaining exposure is more difficult with MNMs than with conventional chemicals. A metal salt control is recommended for experiments with metallic MNMs that may release free metal ions. Dispersing agents should be avoided, but if they must be used, then natural or synthetic dispersing agents are possible, and dispersion controls are essential. Time constraints and technology gaps mean that full characterisation of test media during ecotoxicity tests is currently not practical. Details of electron microscopy, dark-field microscopy, a range of spectroscopic methods (EDX, XRD, XANES, EXAFS), light scattering techniques (DLS, SLS) and chromatography are discussed. The development of user-friendly software to predict particle behaviour in test media according to DLVO theory is in progress, and simple optical methods are available to estimate the settling behaviour of suspensions during experiments. For soil matrices, however, such simple approaches may not be applicable. Alternatively, a Critical Body Residue approach may be taken, in which body concentrations in organisms are related to effects and toxicity thresholds are derived. For microbial assays, the cell wall is a formidable barrier to MNMs, and end points that rely on the test substance penetrating the cell may be insensitive; instead, assays based on the cell envelope should be developed for MNMs. In algal growth tests, the abiotic factors that promote particle aggregation in the media (e.g. ionic strength) are also important in providing nutrients, and manipulation of the media to control the dispersion may also inhibit growth. Controls to quantify shading effects, and precise details of lighting regimes, shaking or mixing, should be reported in algal tests. Photosynthesis may be a more sensitive end point than traditional growth measures for algae and plants. Tests with invertebrates should consider non-chemical toxicity from particle adherence to the organisms. The use of semi-static exposure methods with fish can reduce the logistical issues of wastewater disposal and facilitate aspects of animal husbandry relevant to MNMs. There are concerns that the existing bioaccumulation tests are conceptually flawed for MNMs and that new tests are required. In vitro testing strategies, as exemplified by genotoxicity assays, can be modified for MNMs, but the risk of false negatives in some assays is highlighted. In conclusion, most protocols will require some modifications, and recommendations are made to aid the researcher at the bench.
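As a flavour of the DLVO-based prediction software mentioned above, the sketch below evaluates the classical sphere-sphere interaction energy (van der Waals attraction plus constant-potential double-layer repulsion in the Derjaguin approximation). All parameter values are illustrative, not recommendations for any particular test medium.

```python
# Minimal DLVO sketch for two equal spheres in the Derjaguin approximation.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def dlvo_energy(h, radius, hamaker, psi, kappa, eps_r=78.5):
    """Total sphere-sphere interaction energy (J) at surface separation h (m).

    Van der Waals attraction:  V_a = -A * a / (12 h)
    Double-layer repulsion:    V_r = 2*pi*eps*a*psi^2 * ln(1 + exp(-kappa*h))
    (constant-potential boundary condition, valid for h << a)
    """
    v_attr = -hamaker * radius / (12.0 * h)
    v_rep = (2.0 * math.pi * EPS0 * eps_r * radius * psi ** 2
             * math.log1p(math.exp(-kappa * h)))
    return v_attr + v_rep


# Illustrative case: 50 nm particles in ~10 mM 1:1 electrolyte (kappa ~ 3.3e8 /m).
kT = 1.381e-23 * 298.0
for h in (0.5e-9, 1e-9, 2e-9, 5e-9, 10e-9):
    v = dlvo_energy(h, radius=25e-9, hamaker=1e-20, psi=-0.030, kappa=3.3e8)
    print(f"h = {h * 1e9:4.1f} nm  V/kT = {v / kT:6.1f}")
```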
Abstract:
Cannabis cultivation for drug production is forbidden in Switzerland. Law enforcement authorities therefore regularly ask forensic laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether a plantation is legal. As required by the official EU analysis protocol, the THC content of cannabis is measured in the flowers at maturity. When laboratories are confronted with seedlings, they have to grow the plants to maturity, a time-consuming and costly procedure. This study investigated the discrimination of fibre-type from drug-type Cannabis seedlings by analysing the compounds found in their leaves and using chemometric tools. Eleven legal varieties allowed by the Swiss Federal Office for Agriculture and 13 illegal ones were greenhouse-grown and analysed using a gas chromatograph interfaced with a mass spectrometer. Compounds that show high discrimination capability in the seedlings were identified, and a support vector machine (SVM) analysis was used to classify the cannabis samples. The overall set of samples shows a classification rate above 99% with false positive rates below 2%. This model thus allows discrimination between fibre- and drug-type Cannabis at an early stage of growth, so it is not necessary to wait for the plants to mature in order to quantify their THC content and determine their chemotype. This procedure could be used for the control of legal (fibre-type) and illegal (drug-type) Cannabis production.
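A minimal sketch of such a classification step, an SVM on GC-MS peak areas with cross-validated classification and false positive rates, is shown below. The feature values are synthetic placeholders rather than the discriminating compounds identified in the study, and scikit-learn is an assumed dependency.

```python
# Minimal SVM classification sketch on synthetic GC-MS-style features.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic profiles: rows = seedlings, columns = compound peak areas.
fibre = rng.normal(loc=[1.0, 5.0, 2.0], scale=0.3, size=(40, 3))
drug = rng.normal(loc=[4.0, 1.5, 2.2], scale=0.3, size=(40, 3))
X = np.vstack([fibre, drug])
y = np.array([0] * 40 + [1] * 40)  # 0 = fibre type, 1 = drug type

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
pred = cross_val_predict(model, X, y, cv=5)

accuracy = (pred == y).mean()
false_pos = ((pred == 1) & (y == 0)).sum() / (y == 0).sum()
print(f"classification rate: {accuracy:.1%}, false positive rate: {false_pos:.1%}")
```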
Abstract:
In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from low-baseline stereo pairs, which shall be available in the future from a new kind of Earth observation satellite. This setting makes both views of the scene very similar, thus avoiding the occlusions and illumination changes that are the main disadvantages of the commonly accepted large-baseline configuration. Two crucial technological challenges remain: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first is solved here by a piecewise-affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact, this theory allows us to reduce the number of parameters to be adjusted and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition, we propose an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of the variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
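In computational Gestalt theory, the control of the number of false detections is usually expressed through the number of false alarms (NFA): a candidate grouping is accepted when the expected number of equally good chance occurrences among all tested candidates falls below a threshold, typically 1. A minimal sketch with illustrative numbers:

```python
# Minimal a-contrario NFA sketch: NFA = N_tests * P(k or more agreements by chance).
from math import comb


def binomial_tail(n: int, k: int, p: float) -> float:
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(k, n + 1))


def nfa(n_tests: int, n: int, k: int, p: float) -> float:
    """Expected number of chance occurrences among n_tests candidate regions."""
    return n_tests * binomial_tail(n, k, p)


# 10^6 candidate regions; 80 of 100 pixels fit an affine model that a random
# pixel would fit with probability 0.1: NFA << 1, hence a meaningful detection.
print(nfa(10 ** 6, 100, 80, 0.1))
```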
Abstract:
The rapid adoption of online media like Facebook, Twitter or Wikileaks leaves us with little time to think. Where is information technology taking us, our society and our democratic institutions? Is the Web replicating social divides that already exist offline, or does collaborative technology pave the way for a more equal society? How do we find the right balance between openness and privacy? Can social media improve civic participation, or do they breed superficial exchange and the promotion of false information? These and many other questions arise when one starts to look at the Internet, society and politics. The first part of this paper gives an overview of the social changes that accompany the rise of the Web. The second part reviews how the Web is being used for political participation in Switzerland and abroad.
Abstract:
BACKGROUND: Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their window periods, we now determined those windows. METHODS: We classified the Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e., the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined using the relationship 'Prevalence = Incidence x Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR also derived from Inno-Lia results, but using the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. RESULTS: Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R² = 0.962; P < 0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 obtained for the performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. CONCLUSIONS: IIR estimates from window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
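The window-based estimator can be made concrete in a few lines: by 'Prevalence = Incidence x Duration', the fraction of notifications classified recent, divided by the window expressed in years, estimates the incident infection rate. The counts below are illustrative and are not the study's data.

```python
# Minimal window-based incidence sketch built on Prevalence = Incidence x Duration.
def window_based_iir(n_recent: int, n_total: int, window_days: float) -> float:
    """Incident infection rate (per person-year) among notified cases."""
    prevalence_recent = n_recent / n_total    # P in P = I * D
    duration_years = window_days / 365.25     # D, the window period in years
    return prevalence_recent / duration_years # I = P / D


# e.g. 112 of 748 notifications ruled incident by an algorithm with a
# 120-day window (illustrative counts):
print(f"IIR = {window_based_iir(112, 748, 120.0):.3f} per year")
```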
Abstract:
Interferences with the Olympus immunoturbidimetric assay for ferritin have been reported because the antibodies used in the immunoassay are derived from rabbits. Rabbits are popular pets, and contact with them is a known risk factor for developing heterophilic (interfering) antibodies. This report shows how the current Olympus Ferritin assay has been improved to eliminate the interference from heterophilic antibodies.
Abstract:
Testosterone abuse is conventionally assessed by the urinary testosterone/epitestosterone (T/E) ratio, levels above 4.0 being considered suspicious. A deletion polymorphism in the gene coding for UGT2B17 is strongly associated with reduced urinary testosterone glucuronide (TG) levels. Many individuals devoid of the gene would not reach a T/E ratio of 4.0 after testosterone intake. Future test programs will most likely shift from population-based to individual-based T/E cut-off ratios using Bayesian inference. A longitudinal analysis depends on an individual's true negative baseline T/E ratio. The aim was to investigate whether it is possible to increase the sensitivity and specificity of the T/E test by adding UGT2B17 genotype information in a Bayesian framework. A single intramuscular dose of 500 mg testosterone enanthate was given to 55 healthy male volunteers with two, one or no allele (ins/ins, ins/del or del/del) of the UGT2B17 gene. Urinary excretion of TG and the T/E ratio were measured over 15 days. A Bayesian analysis was conducted to calculate the individual T/E cut-off ratio. When the genotype information was added, the program returned lower individual cut-off ratios in all del/del subjects, increasing the sensitivity of the test considerably. It will be difficult, if not impossible, to discriminate between a true negative baseline T/E value and a false negative one without knowledge of the UGT2B17 genotype. UGT2B17 genotype information is crucial, both for deciding which initial cut-off ratio to use for an individual and for increasing the sensitivity of the Bayesian analysis.
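One simplified way such genotype information could enter a Bayesian individual cut-off is sketched below: a normal prior on log(T/E) for each UGT2B17 genotype is updated with an individual's baseline measurements, and the cut-off is an upper posterior-predictive quantile. The priors, variances and measurements are illustrative assumptions, not the study's actual model.

```python
# Simplified genotype-informed cut-off sketch on the log(T/E) scale.
import math

# Hypothetical population priors for log(T/E) by genotype: (mean, sd).
PRIORS = {"ins/ins": (0.3, 0.45), "ins/del": (0.0, 0.45), "del/del": (-1.6, 0.45)}


def individual_cutoff(genotype, baseline_ratios, within_sd=0.35, z=2.326):
    """~99th-percentile T/E cut-off given genotype and baseline T/E values."""
    mu0, tau = PRIORS[genotype]
    n = len(baseline_ratios)
    if n:
        xbar = sum(math.log(r) for r in baseline_ratios) / n
        # Conjugate normal update of the individual's mean log(T/E).
        post_var = 1.0 / (1.0 / tau ** 2 + n / within_sd ** 2)
        post_mu = post_var * (mu0 / tau ** 2 + n * xbar / within_sd ** 2)
    else:
        post_mu, post_var = mu0, tau ** 2
    # Upper quantile of the posterior predictive, back on the T/E scale.
    return math.exp(post_mu + z * math.sqrt(post_var + within_sd ** 2))


# A del/del subject's low baseline pulls the cut-off far below 4.0:
print(individual_cutoff("del/del", [0.15, 0.20, 0.18]))
print(individual_cutoff("ins/ins", [1.1, 0.9, 1.3]))
```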
Abstract:
Overdiagnosis is the diagnosis of an abnormality that is not associated with a substantial health hazard and that patients derive no benefit from being aware of. It is neither a misdiagnosis (diagnostic error) nor a false positive result (a positive test in the absence of a real abnormality). It mainly results from screening, the use of increasingly sensitive diagnostic tests, incidental findings on routine examinations, and the widening of diagnostic criteria to define a condition requiring an intervention. The blurring boundary between risk and disease, physicians' fear of missing a diagnosis and patients' need for reassurance are further causes of overdiagnosis. Overdiagnosis often entails procedures to confirm or exclude the presence of the condition and is by definition associated with useless treatments and interventions, generating harm and costs without any benefit. Overdiagnosis also diverts healthcare professionals from caring for other health issues. Preventing overdiagnosis requires raising the awareness of healthcare professionals and patients about its occurrence, avoiding unnecessary and untargeted diagnostic tests, and avoiding screening without demonstrated benefits. Furthermore, systematically accounting for the harms and benefits of screening and diagnostic tests, and determining risk-factor thresholds based on the expected absolute risk reduction, would also help prevent overdiagnosis.
Abstract:
This thesis examined plant design projects commissioned by Neste Jacobs, more specifically their general layout and piping design and the key figures most commonly used in it. To determine the key figures, four projects were selected that were typical industrial investment projects and for which the available technical data were recent. The design work of the selected projects was either completed or nearing completion at the time of data collection. The thesis examined the differences between the projects' key figures and the reasons for those differences. The technical data of the projects were collected from the companies involved in their design work. Once the initial data had been approved, the project-specific key figures were calculated from them. The collection of key figure data had to be careful and persistent, because the resulting information is used to estimate the scope and cost of future projects. The most important results of the work were the extraction of the most commonly used key figures from each project and the compilation of the data into an easily readable table. The results can be considered reliable and can be used in the future when estimating the scope of projects.
Abstract:
We study an adaptive statistical approach to analyzing brain networks represented by brain connection matrices of interregional connectivity (connectomes). Our approach operates at an intermediate level between a global analysis and an analysis of single connections, by considering subnetworks of the global brain network. These subnetworks represent either the inter-connectivity between two anatomical brain regions or the intra-connectivity within the same anatomical brain region. An appropriate summary statistic that characterizes a meaningful feature of the subnetwork is evaluated. Based on this summary statistic, a statistical test is performed to derive the corresponding p-value. Reformulating the problem in this way reduces the number of statistical tests in an orderly fashion based on our understanding of the problem. Considering the global testing problem, the p-values are corrected to control the rate of false discoveries. Finally, the procedure is followed by a local investigation within the significant subnetworks. We contrast this strategy with one based on individual measures in terms of power, and show that it has great potential, in particular in cases where the subnetworks are well defined and the summary statistics are properly chosen. As an application example, we compare the structural brain connection matrices of two groups of subjects with 22q11.2 deletion syndrome, distinguished by their IQ scores.
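Controlling the rate of false discoveries across the subnetwork tests can be done with the Benjamini-Hochberg step-up procedure; a minimal sketch follows, with placeholder p-values standing in for one summary-statistic test per subnetwork.

```python
# Minimal Benjamini-Hochberg FDR sketch over subnetwork p-values.
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the indices of the subnetworks declared significant at FDR alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha; reject the k smallest.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])


subnetwork_pvalues = [0.001, 0.009, 0.04, 0.12, 0.35, 0.02, 0.8, 0.003]
print(benjamini_hochberg(subnetwork_pvalues))  # -> [0, 1, 5, 7]
```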
Abstract:
This paper builds on the authors' previous work on the relationship between non-quasi-competitiveness (the increase in price caused by an increase in the number of oligopolists) and the stability of equilibrium in the classical Cournot oligopoly model. Although it has been widely accepted in the literature that the loss of quasi-competitiveness is linked, in the long run as new firms enter the market, to instability of the model, the authors' previous work put forward a model in which a monopoly became a duopoly while losing quasi-competitiveness but maintaining the stability of the equilibrium. That model could not, at the time, be extended to an arbitrary number of oligopolists. The present paper exhibits such an extension: an oligopoly model in which the loss of quasi-competitiveness withstands the presence of as many firms in the market as one wishes, and in which the successive Cournot equilibria are unique and asymptotically stable. In this way, the conjecture that non-quasi-competitiveness and instability are equivalent in the long run is, for the first time, proved false.
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. The elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses, such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot, one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units; these are weighted and usually standardized in some way, unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples, which demonstrate how the standard biplot copes in different situations to give a joint map that needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories located on the periphery of the map, which gives the false impression that they are important.
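The scaling issue that the standard biplot addresses can be made concrete from the singular-value decomposition itself. The sketch below uses one common choice on synthetic data (rows in principal coordinates, columns as right singular vectors) to illustrate the inner-product property of a biplot; it is not the authors' contribution-based scaling.

```python
# Minimal biplot-coordinate sketch from the SVD: X ~ F G' in two dimensions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
Xc = X - X.mean(axis=0)  # centre the data (optionally also standardize)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Row (case) coordinates in principal scaling: F = U_k * s_k.
F = U[:, :2] * s[:2]
# Column (variable) coordinates in standard scaling: G = V_k.
G = Vt[:2].T

# Inner products F G' approximate the centred data, the scalar-product
# interpretation on which every biplot rests.
approx = F @ G.T
print("2-D reconstruction error:", np.linalg.norm(Xc - approx) / np.linalg.norm(Xc))
```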