978 results for collections
Abstract:
[Sale. Prints. 1911-03-16. Paris]
Abstract:
[Sale. Prints. 1909-05-05 - 1909-05-08. Paris]
Abstract:
Recently, modern cross-sectional imaging techniques such as multi-detector computed tomography (MDCT) have pioneered post mortem investigations, especially in forensic medicine. Such approaches can also be used to investigate bones non-invasively for anthropological purposes. Long bones are often examined in forensic cases because they are frequently discovered and transferred to medico-legal departments for investigation. To estimate their age, the trabecular structure must be examined. This study aimed to compare the performance of MDCT with conventional X-rays in investigating the trabecular structure of long bones. Fifty-two dry bones (24 humeri and 28 femora) from anthropological collections were first examined by conventional X-ray, and then by MDCT. Trabecular structure was evaluated by seven observers (two experienced and five inexperienced in anthropology) who analyzed the images obtained by both radiological methods. Analyses comprised the measurement of one quantitative parameter (the caput diameter of the humerus or femur) and staging of the trabecular structure of each bone. The precision of each technique was assessed by describing areas of trabecular destruction and particularities of the bones, such as pathological changes. Concerning the quantitative parameter, the measurements demonstrated comparable results for the MDCT and conventional X-ray techniques. In contrast, the overall inter-observer reliability of the staging was low for both MDCT and conventional X-ray. Reliability increased significantly when only the staging results of the two experienced observers were compared, particularly for the MDCT analysis. Our results also indicate that MDCT appears to be better suited to a detailed examination of the trabecular structure. In our opinion, MDCT is an adequate tool with which to examine the trabecular structure of long bones. However, adequate methods should be developed, or existing methods adapted, for MDCT.
Abstract:
In Pseudomonas aeruginosa, N-acylhomoserine lactone signals regulate the expression of several hundred genes via the transcriptional regulator LasR and, in part, also via the subordinate regulator RhlR. This regulatory network, termed quorum sensing, contributes to the virulence of P. aeruginosa as a pathogen. The fact that two supposed PAO1 wild-type strains from strain collections were found to be defective for LasR function because of independent point mutations in the lasR gene led to the hypothesis that loss of quorum sensing might confer a selective advantage on P. aeruginosa under certain environmental conditions. A convenient plate assay for LasR function was devised, based on the observation that lasR mutants do not grow on adenosine as the sole carbon source because a key degradative enzyme, nucleoside hydrolase (Nuh), is positively controlled by LasR. The wild-type PAO1 and lasR mutants showed similar growth rates when incubated in nutrient yeast broth at pH 6.8 and 37°C with good aeration. However, after termination of growth, during 30 to 54 h of incubation, when the pH rose to ≥9, the lasR mutants were significantly more resistant to cell lysis and death than was the wild type. As a consequence, the lasR mutant-to-wild-type ratio increased about 10-fold in mixed cultures incubated for 54 h. In a PAO1 culture, five consecutive cycles of 48 h of incubation sufficed to enrich for about 10% of spontaneous mutants with a Nuh(-) phenotype, and five of these mutants, which were functionally complemented by lasR(+), had mutations in lasR. The observation that, in buffered nutrient yeast broth, the wild type and lasR mutants exhibited a similarly low tendency to undergo cell lysis and death suggests that alkaline stress may be a critical factor providing a selective survival advantage to lasR mutants.
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and that they be compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model.
It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinions and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains over the traditional subjective approach in both the search for ink specimens in ink databases and the interpretation of their evidential value.
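The abstract describes the probabilistic model only at a high level. As a hypothetical sketch (the paper's actual model, similarity scores, and distributions are not given here; the numbers below are invented for illustration), evidential value is commonly expressed as a likelihood ratio comparing how probable an observed ink-profile similarity score is under the same-source versus different-source propositions:

```python
from statistics import NormalDist

# Hypothetical sketch of a likelihood-ratio (LR) approach to ink evidential
# value. Assume similarity scores between HPTLC ink profiles were computed for
# many known same-source pairs and known different-source pairs, and each
# population was summarised by a fitted normal distribution (invented values).
same_source = NormalDist(mu=0.95, sigma=0.02)  # scores for same-ink pairs
diff_source = NormalDist(mu=0.60, sigma=0.10)  # scores for different-ink pairs

def likelihood_ratio(score: float) -> float:
    """LR = p(score | same source) / p(score | different source)."""
    return same_source.pdf(score) / diff_source.pdf(score)

# LR > 1 supports the same-source proposition; LR < 1 supports different source.
print(likelihood_ratio(0.94))  # high similarity: LR well above 1
print(likelihood_ratio(0.50))  # low similarity: LR well below 1
```

Reporting an LR rather than a categorical match/non-match opinion is what makes the assessment transparent: the strength of the evidence is stated on a continuous, reproducible scale.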
Abstract:
The first part (fol. 1-154) of manuscript Paris, BNF, fr. 818 contains one of the most extensive collections of miracles of Our Lady in the vernacular from the thirteenth century. Composed around 1220 in the Lyon region, this anonymous compilation is among the rare literary texts produced in the Middle Ages in the Francoprovençal area, a domain geographically intermediate between the Oïl and Oc domains (and notably encompassing most of French-speaking Switzerland), but with linguistic characteristics of its own. Despite its great linguistic and thematic interest, the miracle collection of ms. fr. 818 remains partly unpublished to this day and has been the subject of only fragmentary studies. We therefore propose to contribute to Francoprovençal studies by completing the edition of this "Mariale in the vernacular" (in P. Meyer's phrase) and by analysing the highly heterogeneous scripta of this collection, which presents itself as a skilful blend of French and Lyonnais forms. Through a systematic examination of all the noteworthy aspects of the language of the Mariale (phonetics, morphology, lexicon), we hope to bring to the attention of Romance scholars the rich Francoprovençal material offered by this collection, material that remains partly unknown and is sometimes recorded in dictionaries under the false label "French".
Abstract:
DNA-binding proteins mediate a variety of crucial molecular functions, such as transcriptional regulation and chromosome maintenance, replication and repair, which in turn control cell division and differentiation. The roles of these proteins in disease are currently being investigated using microarray-based approaches. However, these assays can be difficult to adapt to routine diagnosis of complex diseases such as cancer. Here, we review promising alternative approaches involving protein-binding microarrays (PBMs) that probe the interaction of proteins from crude cell or tissue extracts with large collections of synthetic or natural DNA sequences. Recent studies have demonstrated the use of these novel PBM approaches to provide rapid and unbiased characterization of DNA-binding proteins as molecular markers of disease, for example cancer progression or infectious diseases.
Abstract:
The 2010-2011 (FY11) edition of Iowa Public Library Statistics includes information on income, expenditures, collections, circulation, and other measures, including staff. Each section is arranged by size code, then alphabetically by city. The totals and percentiles for each size code grouping are given immediately following the alphabetical listings. Totals and medians for all reporting libraries are given at the end of each section. There are 543 libraries included in this publication; 525 submitted a report. The table of size codes (page 5) lists the libraries alphabetically. The following table lists the size code designations, the population range in each size code, the number of libraries reporting in each size code, and the total population of the reporting libraries in each size code. The total population served by the 543 libraries is 2,339,070. Population data is used to determine per capita figures throughout the publication.
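The per-capita figures mentioned above are simple ratios of a reported measure to the population served. A minimal sketch of the calculation (city names, populations, and circulation counts below are invented for illustration):

```python
# Hypothetical sketch of the per-capita figures derived from population data.
# All library names and numbers are invented.

def per_capita(measure: float, population: int) -> float:
    """A per-capita figure is the raw measure divided by the population served."""
    return measure / population

libraries = [
    {"city": "Exampleville", "population": 12000, "circulation": 96000},
    {"city": "Sampletown", "population": 3500, "circulation": 21000},
]

for lib in libraries:
    rate = per_capita(lib["circulation"], lib["population"])
    print(f'{lib["city"]}: {rate:.2f} circulation per capita')
```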
Abstract:
Manuscript decorated with pen drawings and bearing the arms of François de Rochechouart, governor of Genoa under Louis XII. Formerly in the Mc Carthy, Georges Hilbert, Robert Hoe, and Count Paul Durrieu collections.
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive. The DNA, on the other hand, stores the information on how to produce the different proteins in the genome. Regulating gene transcription is the first important step that can thus affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, characterized by uncontrolled cellular proliferation, invasiveness of healthy tissues, and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This Thesis presents new computational methodologies to study gene regulation. In addition, we present applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1.
C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected in the conservation of their binding sites across mammals and in the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERα) and study the influence of hERα in regulating transcription. Upon hormone estrogen signaling, hERα binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERα, we conduct in silico analysis of the sequences underlying the ChIP sites using the collection of position weight matrices (PWMs) of the hERα partners FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-ditag (ChIP-PET) data about hERα binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples and in studies of the response of cells to estrogen. We confirm that hERα binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERα and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes. The second group shows strong binding of hERα and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERα-mediated induction of gene expression.
Our work supports the model of hERα activating gene expression from distal binding sites by interacting with promoter-bound TFs, such as SP1. hERα has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important for better understanding how hERα can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements such as quantifications of mRNA levels or protein concentrations. Our approach uses the well-established penalized linear regression models, where we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must coherently behave as an activator, or as a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method was tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. These bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have thus enabled us to better understand gene regulation in humans.
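The sign-coherence constraint described above can be illustrated for a single target gene. This is a hypothetical sketch, not the thesis' actual formulation (data, penalty weight, and TF roles below are invented): each TF is assigned an assumed role (+1 activator, -1 repressor), and its regression coefficient is reparameterised as a nonnegative magnitude times that sign, which turns the L1 penalty into a simple linear term and the sign constraints into box bounds.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch of sign-constrained penalized (lasso-style) regression
# for network inference. All data and parameters are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # time points x TF activity levels
signs = np.array([+1.0, -1.0, +1.0])  # assumed coherent TF roles (invented)
true_b = signs * np.array([2.0, 1.5, 0.0])
y = X @ true_b + 0.1 * rng.normal(size=50)  # target-gene expression

lam = 1.0  # L1 penalty weight

def objective(u):
    # b = signs * u with u >= 0, so each coefficient keeps its assumed sign;
    # the L1 penalty on b then reduces to the linear term lam * sum(u).
    b = signs * u
    resid = y - X @ b
    return resid @ resid + lam * u.sum()

res = minimize(objective, x0=np.ones(3), bounds=[(0, None)] * 3, method="L-BFGS-B")
b_hat = signs * res.x
print(b_hat)  # each coefficient respects its assumed sign; the inactive TF shrinks toward 0
```

The same reparameterisation extends to the joint problem over all targets: one shared nonnegative magnitude vector per TF (or shared sign variables) enforces that a TF acts coherently across every gene it regulates.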
Abstract:
[Sale. Art. 1909-11-26 - 1909-11-27. Paris]
Abstract:
The author reviews past work with Ibict and the global progress made by the Open Access Movement. He postulates that open access is an example of a complex adaptive system created by Internet-based scholarly publishing, and that it could trigger a cascade of increasing complexity and opportunities that will reshape this system. He has chosen the pervasive, global "connectedness" created by the internet, and the content spaces it provides for open access collections, as a "simple disruptive agent". He discusses how connectedness influences infinite variety, creativity, work, change, knowledge, and the information economy. Case studies from the University of New Mexico Libraries are used where appropriate.