8 results for Libraries and schools.
in Aston University Research Archive
Abstract:
The future of public libraries has been threatened by funding cuts and by new digital technologies that have led many people to question their traditional role and purpose. However, freedom of information, ready access to knowledge and information literacy in all its digital and analog guises are more important than ever. Public libraries therefore remain significant spaces and places where people can interact socially and learn. In many countries public libraries are reinventing themselves, and part of this process has been the redesign of library services and the design and construction of new library buildings and facilities that articulate the values, purpose and role of what has been termed 'the next library'. Following a discussion of new library developments in London, Birmingham and Worcester in the UK, Aarhus in Denmark and Helsinki in Finland, the article concludes that public libraries are now both social and media spaces as well as important physical places that can help city dwellers decide what type of urban world they want to see.
Abstract:
The study addresses the introduction of an innovation of new technology into a bureaucratic profession. The organisational setting is that of local authority secondary schools at a time at which microcomputers were being introduced both in the organisational core (for teaching) and at its periphery (school administration). The research studies innovation-adopting organisations within their sectoral context; key actors influencing the innovation are identified at the levels of central government, local government and schools.

A review of the literature on new technology and innovation (including educational innovation), and on schools as organisations in a changing environment, leads to the development of the conceptual framework of the study, using a resource dependency model within a cycle of the acquisition, allocation and utilisation of financial, physical and intangible resources. The research methodology is longitudinal and draws from both positivist and interpretive traditions. It includes an initial census of the two hundred secondary schools in four local education authorities, a final survey of the same population, and four case studies using both interview methods and documentation.

Two modes of innovation are discerned. In respect of administrative use, a rationalising, controlling mode is identified, with local education authorities developing standardised computer-assisted administrative systems for use in schools. In respect of curricular use, in contrast, teachers have been able to maintain an indeterminate occupational knowledge base, derived from an ideology of professionalism, in respect of the classroom use of the technology. The mode of innovation in respect of curricular use has been one of learning and enabling.
The resourcing policies of central and local government agencies affect the extent of use of the technology for teaching purposes, but the way in which it is used is determined within individual schools, where staff with relevant technical expertise significantly affect the course of the innovation.
Abstract:
We have previously described ProxiMAX, a technology that enables the fabrication of precise, combinatorial gene libraries via codon-by-codon saturation mutagenesis. ProxiMAX was originally performed using manual, enzymatic transfer of codons via blunt-end ligation. Here we present Colibra™: an automated, proprietary version of ProxiMAX used specifically for antibody library generation, in which double-codon hexamers are transferred during the saturation cycling process. The reduction in process complexity, the resulting library quality and an unprecedented saturation of up to 24 contiguous codons are described. The utility of the method is demonstrated via the fabrication of complementarity-determining regions (CDRs) in antibody fragment libraries and next-generation sequencing (NGS) analysis of their quality and diversity.
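The sequence space covered by a codon-saturated region can be illustrated with a small combinatorial sketch. This is purely illustrative: the codon sets and positions below are invented, and the real ProxiMAX/Colibra process transfers codons enzymatically rather than enumerating variants in software.

```python
from itertools import product

# Hypothetical saturation scheme: each position may take any codon from its
# allowed set. Enumerating the Cartesian product of the per-position codon
# sets yields every variant in the combinatorial library.
codon_sets = [
    ["GCT", "GCC"],          # position 1: two alanine codons (example)
    ["TGG"],                 # position 2: fixed tryptophan codon (example)
    ["AAA", "AAG", "CGT"],   # position 3: lysine/arginine codons (example)
]

def enumerate_library(codon_sets):
    """Yield every variant sequence in the combinatorial library."""
    for combo in product(*codon_sets):
        yield "".join(combo)

library = list(enumerate_library(codon_sets))
print(len(library))  # 2 * 1 * 3 = 6 variants
```

The library size grows multiplicatively with each saturated position, which is why precise control over the codon set at every position matters for library quality.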
Abstract:
Ontology search and reuse are becoming increasingly important as the quest continues for methods to reduce the cost of constructing such knowledge structures. A number of ontology libraries and search engines are coming into existence to facilitate locating and retrieving potentially relevant ontologies. The number of ontologies available for reuse is steadily growing, and so is the need for methods to evaluate and rank existing ontologies in terms of their relevance to the needs of the knowledge engineer. This paper presents AKTiveRank, a prototype system for ranking ontologies based on a number of structural metrics.
Abstract:
Representing knowledge using domain ontologies has been shown to be a useful mechanism and format for managing and exchanging information. Owing to the difficulty and cost of building ontologies, a number of ontology libraries and search engines are coming into existence to facilitate the reuse of such knowledge structures. The need for ontology ranking techniques is becoming crucial as the number of ontologies available for reuse continues to grow. In this paper we present AKTiveRank, a prototype system for ranking ontologies based on an analysis of their structures. We describe the metrics used in the ranking system and present an experiment on ranking the ontologies returned by a popular search engine for an example query.
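The general idea of structure-based ontology ranking can be sketched in a few lines. The two metrics and the weights below are simplified stand-ins invented for illustration; they are not the actual measures AKTiveRank uses.

```python
# Simplified sketch of structural ontology ranking in the spirit of AKTiveRank.
# Metric definitions, weights, and the toy ontologies are illustrative only.

def class_match(ontology, query_terms):
    """Fraction of query terms that appear as class labels (exact match)."""
    labels = {c.lower() for c in ontology["classes"]}
    return sum(1 for t in query_terms if t.lower() in labels) / len(query_terms)

def density(ontology):
    """Average number of relations per class, a crude structural-richness proxy."""
    n = len(ontology["classes"])
    return len(ontology["relations"]) / n if n else 0.0

def rank(ontologies, query_terms, w_match=0.6, w_density=0.4):
    """Rank ontologies by a weighted sum of the two metrics (weights assumed)."""
    scored = [(w_match * class_match(o, query_terms) + w_density * density(o), name)
              for name, o in ontologies.items()]
    return sorted(scored, reverse=True)

ontologies = {
    "onto_a": {"classes": ["Student", "University"],
               "relations": [("Student", "enrolledIn", "University")]},
    "onto_b": {"classes": ["Student", "Course", "Lecturer"],
               "relations": [("Student", "takes", "Course"),
                             ("Lecturer", "teaches", "Course")]},
}
ranking = rank(ontologies, ["student", "course"])
print(ranking[0][1])  # onto_b: covers both query terms and is denser
```

The point of the sketch is the shape of the computation: each candidate ontology gets a score aggregated from several structural measures, and the candidates are returned in score order.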
Abstract:
A visualization plot of a molecular data set is a useful tool for gaining insight into a set of molecules. In chemoinformatics, most visualization plots are of molecular descriptors, and the statistical model most often used to produce a visualization is principal component analysis (PCA). This paper takes PCA, together with four other statistical models (NeuroScale, GTM, LTM, and LTM-LIN), and evaluates their ability to produce clustering in visualizations not of molecular descriptors but of molecular fingerprints. Two different tasks are addressed: understanding structural information (particularly combinatorial libraries) and relating structure to activity. The quality of the visualizations is compared both subjectively (by visual inspection) and objectively (with global distance comparisons and local k-nearest-neighbor predictors). On the data sets used to evaluate clustering by structure, LTM is found to perform significantly better than the other models. In particular, the clusters in LTM visualization space are consistent with the relationships between the core scaffolds that define the combinatorial sublibraries. On the data sets used to evaluate clustering by activity, LTM again gives the best performance, but by a smaller margin. The results of this paper demonstrate the value of using both a nonlinear projection map and a Bernoulli noise model for modeling binary data.
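The PCA baseline named above can be illustrated on toy binary fingerprints. This is a minimal stdlib-only sketch (power iteration for the leading principal axis, invented four-bit fingerprints); real chemoinformatics work would use a linear-algebra library, full-length fingerprints, and more components.

```python
import random

# Minimal PCA sketch: project binary fingerprints onto the leading principal
# axis, the simplest form of the visualization coordinate the paper evaluates.

def mean_center(X):
    """Subtract the per-column mean from every row."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    return [[row[j] - means[j] for j in range(d)] for row in X]

def first_pc(X, iters=200, seed=0):
    """Leading principal component via power iteration on X^T X."""
    d = len(X[0])
    v = [random.Random(seed).random() for _ in range(d)]
    for _ in range(iters):
        Xv = [sum(row[j] * v[j] for j in range(d)) for row in X]      # X v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(d)]  # X^T (X v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy binary fingerprints forming two obvious structural clusters.
fps = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
Xc = mean_center(fps)
pc = first_pc(Xc)
scores = [sum(row[j] * pc[j] for j in range(len(pc))) for row in Xc]
print(scores)  # 1-D projection: the two clusters land on opposite sides of 0
```

On this toy data the linear projection already separates the clusters; the paper's point is that on realistic fingerprints a nonlinear map with a Bernoulli noise model (LTM) clusters better than this linear baseline.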
Abstract:
The sharing of near-real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver of their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
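The constraint-checking step at the heart of such a framework can be sketched in plain Python. The actual system expresses constraints as SPARQL queries and SPIN rules over linked pedigrees; the field names and the two rules below are invented stand-ins for illustration only.

```python
from datetime import datetime, timezone

# Hypothetical constraint checks on incoming EPCIS-like events, standing in
# for SPARQL/SPIN rule evaluation. Field names and rules are illustrative.
REQUIRED_FIELDS = {"epc", "eventTime", "bizStep", "readPoint"}

def validate_event(event):
    """Return a list of violated-constraint messages (empty means valid)."""
    errors = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    ts = event.get("eventTime")
    if ts is not None:
        try:
            when = datetime.fromisoformat(ts)
            if when > datetime.now(timezone.utc):
                errors.append("eventTime lies in the future")
        except ValueError:
            errors.append("eventTime is not ISO 8601")
    return errors

good = {"epc": "urn:epc:id:sgtin:0614141.107346.2017",
        "eventTime": "2020-01-01T12:00:00+00:00",
        "bizStep": "shipping", "readPoint": "warehouse-1"}
bad = {"epc": "urn:epc:id:sgtin:0614141.107346.2018",
       "eventTime": "not-a-date"}
print(validate_event(good))  # []
print(validate_event(bad))   # two violations: missing fields, bad timestamp
```

In the architecture described above, checks of this kind would run inside Storm processing nodes so that each incoming pedigree is validated as it streams in, rather than in a batch step.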