114 results for Archaeological museums and collections
Abstract:
Background: Several studies have identified rare genetic variants responsible for many cases of familial breast cancer, but their contribution to total breast cancer incidence is relatively small. More common genetic variants with low penetrance have been postulated to account for a higher proportion of the population risk of breast cancer. Methods and Results: In an effort to identify genes that influence non-familial breast cancer risk, we tested over 25,000 single nucleotide polymorphisms (SNPs) located within approximately 14,000 genes in a large-scale case-control study of 254 German women with breast cancer and 268 age-matched women without malignant disease. We identified a marker on chromosome 14q24.3-q31.1 that was marginally associated with breast cancer status (OR = 1.5, P = 0.07). Genotypes for this SNP were also significantly associated with indicators of breast cancer severity, including the presence of lymph node metastases (P = 0.006) and earlier age of onset (P = 0.01). The association with breast cancer status was replicated in two independent samples (OR = 1.35, P = 0.05). High-density association fine mapping showed that the association spanned about 80 kb of the zinc-finger gene DPF3 (also known as CERD4). One SNP in intron 1 was more strongly associated with breast cancer status in all three sample collections (OR = 1.6, P = 0.003), as well as with increased lymph node metastases (P = 0.01) and tumor size (P = 0.01). Conclusion: Polymorphisms in the 5' region of DPF3 were associated with increased risk of breast cancer development, lymph node metastases, earlier age of onset, and tumor size in women of European ancestry. This large-scale association study suggests that genetic variation in DPF3 contributes to breast cancer susceptibility and severity.
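For readers unfamiliar with the statistic, the odds ratios (OR) quoted throughout are computed from 2x2 case-control contingency tables. A minimal sketch of the calculation, using invented carrier counts rather than the study's genotype data:

```python
def odds_ratio(case_carriers, case_noncarriers, control_carriers, control_noncarriers):
    """Odds ratio from a 2x2 case-control table of risk-allele carriers."""
    return (case_carriers * control_noncarriers) / (case_noncarriers * control_carriers)

# Invented counts for illustration only (not the study's data):
# suppose 100 of 254 cases carry the risk allele vs. 80 of 268 controls.
print(round(odds_ratio(100, 154, 80, 188), 2))  # 1.53
```

An OR above 1 means the allele is over-represented among cases; in practice it is reported together with a confidence interval and P-value, as in the abstract.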
Abstract:
We conducted a large-scale association study to identify genes that influence nonfamilial breast cancer risk using a collection of German cases and matched controls and >25,000 single nucleotide polymorphisms located within 16,000 genes. One of the candidate loci identified was located on chromosome 19p13.2 [odds ratio (OR) = 1.5, P = 0.001]. The effect was substantially stronger in the subset of cases with reported family history of breast cancer (OR = 3.4, P = 0.001). The finding was subsequently replicated in two independent collections (combined OR = 1.4, P < 0.001) and was also associated with predisposition to prostate cancer in an independent sample set of prostate cancer cases and matched controls (OR = 1.4, P = 0.002). High-density single nucleotide polymorphism mapping showed that the extent of association spans 20 kb and includes the intercellular adhesion molecule genes ICAM1, ICAM4, and ICAM5. Although genetic variants in ICAM5 showed the strongest association with disease status, ICAM1 is expressed at highest levels in normal and tumor breast tissue. A variant in ICAM5 was also associated with disease progression and prognosis. Because ICAMs are suitable targets for antibodies and small molecules, these findings may not only provide diagnostic and prognostic markers but also new therapeutic opportunities in breast and prostate cancer.
Abstract:
Collecting regular personal reflections from first-year teachers in rural and remote schools is challenging, as they are busily absorbed in their practice and separated from each other and from the researchers by thousands of kilometres. In response, an innovative web-based solution was designed both to collect data and to act as a responsive support system for early career teachers as they came to terms with their new professional identities within rural and remote school settings. Using an emailed link to a web-based application named goingok.com, the participants are charting their first-year plotlines on a sliding scale from 'distressed' through 'ok' to 'soaring', and describing their self-assessments in short descriptive posts. These reflections are visible to the participants as a developing online journal, while the collections of de-identified developing plotlines, alongside numerical data, are visible to the research team. This paper explores important aspects of the design process, together with the challenges and opportunities encountered in its implementation. The key considerations behind choosing to develop a web application for data collection are identified first, and the resultant application features and scope are then examined. Examples are then provided of how a responsive software development approach can be part of a supportive feedback loop for participants while remaining an effective data collection process. Opportunities for further development are also suggested, with projected implications for future research.
Abstract:
The continuous growth of XML data poses significant challenges for XML data management. The need to process large amounts of XML data complicates many applications, such as information retrieval and data integration. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing only the structure of XML documents and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures that aim to identify corresponding elements between documents with different or similar underlying structures. The two clustering methods that utilize both structural and content information vary in how the structure and content similarities are combined. One calculates document similarity using a linear weighted combination of structure and content similarities, with the content similarity based on a semantic kernel. The other calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the one based on the path model, because the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections.
To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. Experimental results show that the proposed transformation process is faster than a traditional transformation system that translates and converts the source XML documents sequentially. The schema matching step of the XML transformation also produces a better matching result in a shorter time.
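The linear weighting combination strategy described above can be sketched as follows; the weight alpha and the similarity values are placeholders for illustration, not the thesis's actual measures:

```python
def combined_similarity(struct_sim, content_sim, alpha=0.5):
    """Linearly combine structure and content similarity (both in [0, 1]).

    alpha is a tunable weight between structure and content; the value
    used below is an assumption, not one taken from the thesis.
    """
    return alpha * struct_sim + (1 - alpha) * content_sim

# Two documents with high structural but moderate content similarity:
print(round(combined_similarity(0.9, 0.4, alpha=0.6), 2))  # 0.7
```

A non-linear combination, as in the fourth method, would replace this weighted sum with, for example, a kernel over the two similarity components.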
Abstract:
This work was composed in relation to the author's research into the popularity of themes of ephemerality and affect in recent global art. This focus correlated with Chicks on Speed's ongoing inquiries into issues of collections and collecting in the artworld, articulated as 'the art dump' by the group. This work was subsequently performed as a contribution to a performance with international multidisciplinary group Chicks on Speed as part of their residency during MONA FOMA in Tasmania.
Abstract:
Post-transcriptional silencing of plant genes using anti-sense or co-suppression constructs usually results in only a modest proportion of silenced individuals. Recent work has demonstrated the potential for constructs encoding self-complementary 'hairpin' RNA (hpRNA) to efficiently silence genes. In this study we examine design rules for efficient gene silencing, in terms of both the proportion of independent transgenic plants showing silencing, and the degree of silencing. Using hpRNA constructs containing sense/anti-sense arms ranging from 98 to 853 nt gave efficient silencing in a wide range of plant species, and inclusion of an intron in these constructs had a consistently enhancing effect. Intron-containing constructs (ihpRNA) generally gave 90-100% of independent transgenic plants showing silencing. The degree of silencing with these constructs was much greater than that obtained using either co-suppression or anti-sense constructs. We have made a generic vector, pHANNIBAL, that allows a simple, single PCR product from a gene of interest to be easily converted into a highly effective ihpRNA silencing construct. We have also created a high-throughput vector, pHELLSGATE, that should facilitate the cloning of gene libraries or large numbers of defined genes, such as those in EST collections, using an in vitro recombinase system. This system may facilitate the large-scale determination and discovery of plant gene functions in the same way as RNAi is being used to examine gene function in Caenorhabditis elegans.
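Conceptually, an ihpRNA insert places a sense arm, a spacer intron, and the arm's reverse complement in a single construct, so that the spliced transcript folds back into a double-stranded hairpin. A toy sketch of that layout (the sequences are invented, and this is not the pHANNIBAL cloning procedure):

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def ihprna_insert(sense_arm, intron):
    """Sense arm + intron spacer + anti-sense arm: after splicing, the two
    arms can self-anneal into the hairpin that triggers silencing."""
    return sense_arm + intron + revcomp(sense_arm)

arm = "ATGGCCATTGTAATGGGCCGC"  # invented 21-nt fragment of a target gene
construct = ihprna_insert(arm, "GTAAGTTTAG")  # invented minimal intron
print(construct)
```

The abstract's finding is that arms of roughly 98-853 nt, with a real plant intron as the spacer, give the highest silencing frequencies.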
Abstract:
This article uses the Lavender Library, Archives, and Cultural Exchange of Sacramento, Incorporated, a small queer community archives in Northern California, as a case study for expanding our knowledge of community archives and issues of archival practice. It explores why creating a separate community archives was necessary, the role of community members in founding and maintaining the archives, the development of its collections, and the ongoing challenges community archives face. The article also considers the implications community archives have for professional practice, particularly in the areas of collecting, description, and collaboration.
Abstract:
This case study examines the way in which Knowledge Unlatched is combining collective action and open access licenses to encourage innovation in markets for specialist academic books. Knowledge Unlatched is a not-for-profit organisation established to help a global community of libraries coordinate their book purchasing activities more effectively and, in so doing, to ensure that books librarians select for their own collections become available for free for anyone in the world to read. The Knowledge Unlatched model is an attempt to re-coordinate a market in order to facilitate a transition to digitally appropriate publishing models that include open access. It offers librarians an opportunity to facilitate the open access publication of books that their own readers would value access to. It provides publishers with a stable income stream on titles selected by libraries, as well as the ability to continue selling books to a wider market on their own terms. Knowledge Unlatched provides a rich case study for researchers and practitioners interested in understanding how innovations in procurement practices can be used to stimulate more effective, equitable markets for socially valuable products.
Abstract:
A long query provides more useful hints for searching relevant documents, but it is also likely to introduce noise that harms retrieval performance. To mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution of query terms by utilizing intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our fully automatic approach achieves salient improvements over various strong baselines, and reaches performance comparable to a state-of-the-art method based on the user's interactive query term reduction and expansion.
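The joint reduction-and-expansion idea can be illustrated with a toy term-reweighting scheme: mix a uniform distribution over the original query terms with term frequencies from feedback documents, then keep only the most probable terms. The uniform prior, mixing weight mu, and cutoff below are invented simplifications, not the AHMM's actual estimation procedure:

```python
from collections import Counter

def reweight_query(query_terms, feedback_docs, mu=0.5, keep=5):
    """Toy sketch of joint query reduction and expansion.

    Mixes a uniform distribution over the original query terms with term
    frequencies from relevance-feedback documents, then keeps the `keep`
    most probable terms.
    """
    fb_counts = Counter(t for doc in feedback_docs for t in doc)
    total = sum(fb_counts.values())
    vocab = set(query_terms) | set(fb_counts)
    scores = {}
    for term in vocab:
        p_query = 1 / len(query_terms) if term in query_terms else 0.0
        p_fb = fb_counts[term] / total if total else 0.0
        scores[term] = mu * p_query + (1 - mu) * p_fb
    return sorted(scores, key=scores.get, reverse=True)[:keep]

docs = [["neural", "retrieval", "model"], ["retrieval", "evaluation"]]
print(reweight_query(["long", "verbose", "retrieval", "query"], docs, keep=3))
```

Low-weight original terms drop out (reduction) while frequent feedback terms rise in the ranking (expansion); the AHMM additionally models intra-query term dependencies, which this sketch ignores.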
Abstract:
This thesis presents novel techniques for addressing the problems of continuous change and inconsistencies in large process model collections. The developed techniques treat process models as a collection of fragments and facilitate version control, standardization and automated process model discovery using fragment-based concepts. Experimental results show that the presented techniques are beneficial in consolidating large process model collections, specifically when there is a high degree of redundancy.
Abstract:
In this chapter we continue the exposition of crypto topics that was begun in the previous chapter. This chapter covers secret sharing, threshold cryptography, signature schemes, and finally quantum key distribution and quantum cryptography. As in the previous chapter, we have focused only on the essentials of each topic. We have selected in the bibliography a list of representative items, which can be consulted for further details. First we give a synopsis of the topics that are discussed in this chapter. Secret sharing is concerned with the problem of how to distribute a secret among a group of participating individuals, or entities, so that only predesignated collections of individuals are able to recreate the secret by collectively combining the parts of the secret that were allocated to them. There are numerous applications of secret-sharing schemes in practice. One example of secret sharing occurs in banking. For instance, the combination to a vault may be distributed in such a way that only specified collections of employees can open the vault by pooling their portions of the combination. In this way the authority to initiate an action, e.g., the opening of a bank vault, is divided for the purposes of providing security and for added functionality, such as auditing, if required. Threshold cryptography is a relatively recently studied area of cryptography. It deals with situations where the authority to initiate or perform cryptographic operations is distributed among a group of individuals. Many of the standard operations of single-user cryptography have counterparts in threshold cryptography. Signature schemes deal with the problem of generating and verifying (electronic) signatures for documents. A subclass of signature schemes is concerned with the shared generation and the shared verification of signatures, where a collaborating group of individuals are required to perform these actions.
A new paradigm of security has recently been introduced into cryptography with the emergence of the ideas of quantum key distribution and quantum cryptography. While classical cryptography employs various mathematical techniques to restrict eavesdroppers from learning the contents of encrypted messages, in quantum cryptography the information is protected by the laws of physics.
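The bank-vault example maps directly onto a (k, n) threshold scheme such as Shamir's secret sharing: any k of the n shares recover the combination, while fewer reveal nothing. A minimal sketch over a prime field follows (parameters chosen for illustration; real deployments use vetted cryptographic libraries):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; the shared secret must be below it

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789
```

Any three of the five shares yield the secret exactly, while any two are consistent with every possible secret, which is what makes the scheme information-theoretically secure.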
Abstract:
Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs of high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models – each one representing a variant of the business process – as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
Abstract:
Analysis of behavioural consistency is an important aspect of software engineering. In process and service management, consistency verification of behavioural models has manifold applications. For instance, a business process model used as a system specification and a corresponding workflow model used as its implementation have to be consistent. Another example is the analysis of the degree to which a process log of executed business operations is consistent with the corresponding normative process model. Typically, existing notions of behaviour equivalence, such as bisimulation and trace equivalence, are applied as consistency notions. However, these notions are exponential to compute and yield only a Boolean result. In many cases, a quantification of behavioural deviation is needed, along with concepts to isolate the source of deviation. In this article, we propose causal behavioural profiles as the basis for a consistency notion. These profiles capture essential behavioural information, such as order, exclusiveness, and causality between pairs of activities of a process model. Consistency based on these profiles is weaker than trace equivalence, but can be computed efficiently for a broad class of models. We introduce techniques for the computation of causal behavioural profiles using structural decomposition techniques for sound free-choice workflow systems, provided that unstructured net fragments are acyclic or can be traced back to S- or T-nets. We also elaborate on the findings of applying our technique to three industry model collections.
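The pairwise relations a behavioural profile captures can be illustrated by deriving them from example traces. Note that the article computes profiles from the process model itself via structural decomposition; the trace-based version below is only a sketch of the order and exclusiveness relations:

```python
from itertools import product

def behavioural_profile(traces):
    """Toy behavioural-profile sketch computed from example traces.

    Classifies each ordered activity pair as strict order ('->'),
    reverse order ('<-'), interleaving ('||'), or exclusiveness ('+').
    """
    activities = {a for t in traces for a in t}
    before = set()  # (a, b): a occurs before b in some trace
    for t in traces:
        for i, a in enumerate(t):
            for b in t[i + 1:]:
                before.add((a, b))
    profile = {}
    for a, b in product(activities, repeat=2):
        if a == b:
            continue
        ab, ba = (a, b) in before, (b, a) in before
        if ab and ba:
            profile[(a, b)] = "||"  # observed in both orders
        elif ab:
            profile[(a, b)] = "->"  # strict order
        elif ba:
            profile[(a, b)] = "<-"  # reverse strict order
        else:
            profile[(a, b)] = "+"   # never co-occur: exclusiveness
    return profile

log = [["register", "check", "approve"], ["register", "check", "reject"]]
p = behavioural_profile(log)
print(p[("register", "approve")], p[("approve", "reject")])  # -> +
```

Comparing two such profiles relation by relation yields a graded consistency degree rather than the Boolean verdict of trace equivalence.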
Abstract:
Background: Parents play a significant role in shaping youth physical activity (PA). However, interventions targeting PA parenting have been ineffective, and methodological inconsistencies in the measurement of parental influences may be a contributing factor. The purpose of this article is to review the extant peer-reviewed literature on the measurement of general and specific parental influences on youth PA. Methods: A systematic review of studies measuring constructs of PA parenting was conducted. Computerized searches were completed using PubMed, MEDLINE, Academic Search Premier, SPORTDiscus, and PsycINFO. Reference lists of the identified articles, as well as the authors' personal collections, were manually reviewed. Articles were selected on the basis of strict inclusion criteria, and details of the measurement protocols were extracted. A total of 117 articles met the inclusion criteria. Methodological articles that evaluated the validity and reliability of PA parenting measures (n=10) were reviewed separately from parental influence articles (n=107). Results: A significant percentage of studies used measures of indeterminate validity and reliability. A significant percentage of articles did not provide sample items, describe the response format, or report the possible range of scores. No studies were located that evaluated sensitivity to change. Conclusion: The reporting of measurement properties and the use of valid and reliable measurement scales need to be improved considerably.
Abstract:
For users of germplasm collections, the purpose of measuring characterization and evaluation descriptors, and subsequently using statistical methodology to summarize the data, is not only to interpret the relationships between the descriptors, but also to characterize the differences and similarities between accessions in relation to their phenotypic variability for each of the measured descriptors. The set of descriptors for the accessions of most germplasm collections consists of both numerical and categorical descriptors. This poses problems for a combined analysis of all descriptors because few statistical techniques deal with mixtures of measurement types. In this article, nonlinear principal component analysis was used to analyze the descriptors of the accessions in the Australian groundnut collection. It was demonstrated that the nonlinear variant of ordinary principal component analysis is an appropriate analytical tool because subspecies and botanical varieties could be identified on the basis of the analysis and characterized in terms of all descriptors. Moreover, outlying accessions could be easily spotted and their characteristics established. The statistical results and their interpretations provide users with a more efficient way to identify accessions of potential relevance for their plant improvement programs and encourage and improve the usefulness and utilization of germplasm collections.
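For comparison, one common simple way to handle mixed numeric and categorical descriptors is a Gower-style similarity, sketched below; the descriptor names and values are invented, and this is not the nonlinear principal component analysis used in the article:

```python
def gower_similarity(a, b, numeric_ranges):
    """Gower-style similarity between two accessions with mixed descriptors.

    `a` and `b` are dicts of descriptor values; `numeric_ranges` maps each
    numeric descriptor to its observed range across the collection.
    Categorical descriptors score 1 on an exact match and 0 otherwise.
    """
    scores = []
    for key in a:
        if key in numeric_ranges:
            rng = numeric_ranges[key]
            scores.append(1 - abs(a[key] - b[key]) / rng if rng else 1.0)
        else:
            scores.append(1.0 if a[key] == b[key] else 0.0)
    return sum(scores) / len(scores)

# Invented descriptors for two hypothetical groundnut accessions:
ranges = {"seed_weight_g": 80.0, "days_to_flower": 40.0}
acc1 = {"seed_weight_g": 45.0, "days_to_flower": 30.0, "growth_habit": "erect"}
acc2 = {"seed_weight_g": 55.0, "days_to_flower": 30.0, "growth_habit": "erect"}
print(round(gower_similarity(acc1, acc2, ranges), 3))  # 0.958
```

Such a similarity matrix could feed an ordination or clustering step; nonlinear principal component analysis instead handles the mixed measurement types by optimally quantifying the categorical descriptors.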