961 results for Collection development (Libraries)
Abstract:
A collection of versatile best practices for promoting literacy development by utilizing local community connections in school and public libraries. This book provides a fresh approach to learning as well as guidelines for creating dynamic and relevant library programs for children, teens, and families. Organized thematically, each chapter includes relevant topical research and three to eight community-focused approaches. Programs range from small, single-library initiatives in rural communities to multi-site, cross-border initiatives. This resource includes collaborative and locally inspired programs, many of which can be scaled to the budget of any library, school, or community organization.
Abstract:
This research paper sets up a typology of libraries managed by cultural centres abroad. Nearly 2,200 libraries linked to a dozen different cultural organizations not only provide traditional services such as lending and access to printed and audiovisual materials, but also reach out to local citizens, offering help and services in matters of education, literacy, cooperation, social issues or development. These activities may fit under the label of cultural diplomacy. This paper analyses the relevance of those cultural centres and offers a classification, presented in a table covering the institutional networks of the thirty most significant cultural centres worldwide.
Abstract:
Funding: The research presented in this manuscript was wholly funded by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), https://www.nc3rs.org.uk/. ‘The Snail Assay as an Alternative to the Rodent Hershberger Assay for Detecting Androgens and Anti-androgens’, funding reference G0900802/1, to SJ, EJR, CSJ, and LRN. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Abstract:
This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/K006428/1]; and the European Regional Development Fund [grant number LUPS/ERDF/2010/4/1/0164].
Abstract:
Object-oriented design and object-oriented languages support the development of independent software components such as class libraries. When using such components, versioning becomes a key issue. While various ad-hoc techniques and coding idioms have been used to provide versioning, all of these techniques have deficiencies - ambiguity, the necessity of recompilation or re-coding, or the loss of binary compatibility of programs. Components from different software vendors are versioned at different times. Maintaining compatibility between versions must be consciously engineered. New technologies such as distributed objects further complicate libraries by requiring multiple implementations of a type simultaneously in a program. This paper describes a new C++ object model called the Shared Object Model for C++ users and a new implementation model called the Object Binary Interface for C++ implementors. These techniques provide a mechanism for allowing multiple implementations of an object in a program. Early analysis of this approach has shown it to have performance broadly comparable to conventional implementations.
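To make the versioning problem concrete, the following is a minimal C++ sketch (not the paper's Shared Object Model or Object Binary Interface) of the conventional abstract-interface idiom that lets two implementations of a type coexist in one program; all names (Stack, StackV1, StackV2, make_stack_v1, make_stack_v2) are hypothetical.

    // Clients compile only against this abstract interface; it is the
    // contract that must stay stable across library versions.
    #include <iostream>
    #include <memory>
    #include <vector>

    struct Stack {
        virtual ~Stack() = default;
        virtual void push(int v) = 0;
        virtual int  pop() = 0;
    };

    // Version 1: fixed-size array implementation from one library release.
    struct StackV1 : Stack {
        int data[16]; int top = 0;
        void push(int v) override { data[top++] = v; }
        int  pop() override { return data[--top]; }
    };

    // Version 2: a different layout; existing clients need no recompilation
    // because they only ever hold a pointer to the abstract Stack.
    struct StackV2 : Stack {
        std::vector<int> data;
        void push(int v) override { data.push_back(v); }
        int  pop() override { int v = data.back(); data.pop_back(); return v; }
    };

    // Factory functions hide which concrete version is constructed.
    std::unique_ptr<Stack> make_stack_v1() { return std::make_unique<StackV1>(); }
    std::unique_ptr<Stack> make_stack_v2() { return std::make_unique<StackV2>(); }

    int main() {
        auto a = make_stack_v1();   // both versions coexist in one program
        auto b = make_stack_v2();
        a->push(1); b->push(2);
        std::cout << a->pop() + b->pop() << '\n';   // prints 3
    }

As the abstract notes, such ad-hoc idioms still leave binary-compatibility gaps (for example, adding a virtual function to Stack changes its vtable layout), deficiencies of the kind the proposed Object Binary Interface is intended to address.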
Abstract:
The current era of American Christianity marks the transition from a Western, white-dominated U.S. Evangelicalism to an ethnically diverse demographic for evangelicalism. Despite this increasing diversity, U.S. Evangelicalism has demonstrated a stubborn inability to address the entrenched assumption of white supremacy. The 1970s witnessed the rise in prominence of Evangelicalism in the United States. At the same time, the era witnessed a burgeoning movement of African-American evangelicals, who often experienced marginalization from the larger movement. What factors prevented the integration between two seemingly theologically compatible movements? How do these factors impact the challenge of integration and reconciliation in the changing demographic reality of early twenty-first-century Evangelicalism?
The question is examined through the unpacking of the diseased theological imagination rooted in U.S. Evangelicalism. The theological categories of Creation, Anthropology, Christology, Soteriology, and Ecclesiology are discussed to determine specific deficiencies that lead to assumptions of white supremacy. The larger history of U.S. Evangelicalism and the larger story of the African-American church are explored to provide a context for the unique expression of African-American evangelicalism in the last third of the twentieth century. Through the use of primary sources — personal interviews, archival documents, writings by principals, and private collection documents — the specific history of African-American evangelicals in the 1960s and 1970s is described. The stories of the National Black Evangelical Association, Tom Skinner, John Perkins, and Circle Church provide historical snapshots that illuminate the relationship between the larger U.S. Evangelical movement and African-American evangelicals.
Various attempts at integration and shared leadership were made in the 1970s as African-American evangelicals engaged with white Evangelical institutions. However, the failure of these attempts points to the challenges to diversity for U.S. Evangelicalism and the failure of the Evangelical theological imagination. The diseased theological imagination of U.S. Evangelical Christianity prevented engagement with the needed challenge of African-American evangelicalism, resulting in dysfunctional racial dynamics evident in twenty-first-century Evangelical Christianity. The historical problem of situating African-American evangelicals reveals the theological problem of white supremacy in U.S. Evangelicalism.
Abstract:
This dissertation studies a context-aware application and its proposed algorithms on the client side. The required context-aware infrastructure is discussed in depth to show how it collects the mobile user’s context information, registers service providers, derives the user’s current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and mobile devices: context acquisition is centralized at the server to make context information usable across mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better overall solution. A context-aware search application is designed and implemented on the server side, and a new algorithm is proposed that takes user context profiles into consideration. By feeding back the dynamics of the system, any prior user selection is saved for further analysis so that it may help improve the results of subsequent searches. Building on these server-side developments, several solutions are provided on the client side. A software-based proxy component is set up for data collection. This research argues that the client-side proxy should contain the context reasoning component; implementing such a component supports this view, since context-aware applications are then able to derive user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resource use (bandwidth, CPU cycles, power). Java and MySQL are used to implement the proposed architecture and to test scenarios derived from a user’s daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided to evaluate the effectiveness of the design approach realistically. The integration of the Yahoo search engine into the context-aware architecture shows how a context-aware application can meet user demands for tailored services and products in and around the user’s environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user’s experience through a broad scope of potential applications.
Abstract:
Despite its large impact on the individual and society, we currently have only a rudimentary understanding of the biological basis of Major Depressive Disorder, even less so in adolescent populations. This thesis focuses on two research questions. First, how do adolescents with depression differ from adolescents who have never been depressed on (1a) brain morphology and (1b) DNA methylation? We studied differences in the fronto-limbic system (a collection of areas responsible for emotion regulation) and methylation at the serotonin transporter (SLC6A4) and FK506 binding protein gene (FKBP5) genes (two genes strongly linked to stress regulation and depression). Second, how does childhood trauma, which is known to increase risk for depression, affect (2a) brain development and (2b) SLC6A4 and FKBP5 methylation? Further, (2c) how might DNA methylation explain how trauma affects brain development in depression? We studied these questions in 24 adolescent depressed patients and 21 controls. We found that (1a) depressed adolescents had decreased left precuneus volume and greater volume of the left precentral gyrus compared to controls; however, no differences in fronto-limbic morphology were identified. Moreover, (1b) individuals with depression had lower levels of FKBP5 methylation than controls. In line with our second hypothesis (2a) greater levels of trauma were associated with decreased volume of a number of fronto-limbic regions. Further, we found that (2b) greater trauma was associated with decreased SLC6A4, but not FKBP5, methylation. Finally, (2c) greater FKBP5, but not SLC6A4, methylation was associated with decreased volume of a number of fronto-limbic regions. The results of this study suggest an association among trauma, DNA methylation and brain development in youth, but the direction of these relationships appears to be inconsistent. Future studies using a longitudinal design will be necessary to clarify these results and help us understand how the brain and epigenome change over time in depressed youth.
Abstract:
Objectives This paper describes the methods used in the International Cancer Benchmarking Partnership Module 4 Survey (ICBPM4), which examines time intervals and routes to cancer diagnosis in 10 jurisdictions. We present the study design, including the definition and measurement of time intervals, identification of patients with cancer, questionnaire development, data management and analyses.
Design and setting Recruitment of participants to the ICBPM4 survey is based on cancer registries in each jurisdiction. Questionnaires draw on previous instruments and have been through a process of cognitive testing and piloting in three jurisdictions followed by standardised translation and adaptation. Data analysis focuses on comparing differences in time intervals and routes to diagnosis in the jurisdictions.
Participants Our target is 200 patients with symptomatic breast, lung, colorectal and ovarian cancer in each jurisdiction. Patients are approached directly or via their primary care physician (PCP). Patients’ PCPs and cancer treatment specialists (CTSs) are surveyed, and ‘data rules’ are applied to combine and reconcile conflicting information. Where CTS information is unavailable, audit information is sought from treatment records and databases.
Main outcomes Reliability testing of the patient questionnaire showed that agreement was complete (κ=1) in four items and substantial (κ=0.8, 95% CI 0.333 to 1) in one item. The identification of eligible patients is sufficient to meet the targets for breast, lung and colorectal cancer. Initial patient and PCP survey response rates from the UK and Sweden are comparable with similar published surveys. Data collection was completed in early 2016 for all cancer types.
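For reference, the kappa values above are chance-corrected agreement coefficients; assuming the familiar Cohen's kappa (the abstract does not state which variant was used), it is computed from the observed agreement $p_o$ and the agreement expected by chance $p_e$ as

    \kappa = \frac{p_o - p_e}{1 - p_e}

so that $\kappa = 1$ corresponds to complete agreement and $\kappa = 0$ to agreement no better than chance.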
Conclusion An international questionnaire-based survey of patients with cancer, PCPs and CTSs has been developed and launched in 10 jurisdictions. ICBPM4 will help to further understand international differences in cancer survival by comparing time intervals and routes to cancer diagnosis.
Abstract:
The Mobile Information Literacy curriculum is a growing collection of training materials designed to build literacies for the millions of people worldwide coming online every month via a mobile phone. Most information and digital literacy curricula were designed for a PC age, and public and private organizations around the world have used these curricula to help newcomers use computers and the internet effectively and safely. The better curricula address not only skills but also concepts and attitudes. The central question for this project is: what are the relevant skills, concepts, and attitudes for people using mobiles, not PCs, to access the internet? As part of the Information Strategies for Societies in Transition project, we developed a six-module curriculum for mobile-first users. The project is situated in Myanmar, a country undergoing massive political, economic, and social changes, where mobile penetration is expected to reach 80% by the end of 2015, up from just 4% in 2014. Combined with the country’s history of media censorship, this rapid growth presents unique challenges: people need the ability to find and evaluate the quality and credibility of information obtained online, to understand how to create and share online information effectively, and to participate online safely and securely.
Abstract:
Research activities during this period concentrated on continuation of field and laboratory testing for the Dallas County test road. Stationary ditch collection of dust was eliminated because of inconsistent data and vandalism to collectors. Braking tests were developed and initiated to evaluate the influence of treatments on the braking and safety characteristics of the test sections. Dust testing was initiated for out-of-wheelpath conditions as well as in the wheelpath. Contrary to the results obtained during the summer and fall of 1987, the 1.5 percent bentonite treatment appears to be outperforming the other bentonite-treated sections after over a year of service. Overall dust reduction appears to average between 25 and 35 percent. Dallas County applied 300 tons per mile of class A roadstone maintenance surfacing to the test road in August 1988. Test data indicate that the bentonite is capable of interacting with the new surfacing material and reducing its dust generation. Again, the 1.5 percent bentonite treatment appeared the most effective. The fine-particulate bonding and aggregation mechanism of the bentonite appears to recover from the environmental effects of winter and from alternating wet and dry road surface conditions. The magnesium chloride treatment appears capable of long-term (over one year) dust reduction and exhibited an overall average reduction in the range of 15 to 30 percent. The magnesium chloride treatment also appears capable of interacting with newly applied crushed stone to reduce dust generation. Two additional one-mile test roads were to have been constructed early this year. Due to an extremely dry spring and summer, construction scheduling was not possible until August, which would have allowed only minimal data collection. Considering this, and the fact that this was an atypically dry summer, it was our opinion that it would be in the best interest of the research project to extend the project (at no additional cost) for a period of one year. The two additional test roads will be constructed in early spring 1989 in Adair and Marion counties.
Abstract:
Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course, "Principles and Practice of Fire Modelling", run by the University of Greenwich, attempts to bridge the divide between the expert and the general user, providing them with the expertise they need to understand the results of mathematical fire modelling. The course and the associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background; they offer a friendly guide through the unfamiliar terrain of mathematical modelling. These concepts and techniques are introduced and demonstrated in seminars, and those attending also gain experience in using the methods during "hands-on" tutorial and workshop sessions. On completion of this short course, those participating should: be familiar with the concept of zone and field modelling; be familiar with zone and field model assumptions; have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling; be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and be able to interpret model predictions. The result is that the fire safety engineer is empowered to realise the full value of mathematical models to help predict fire development and to determine the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.
Abstract:
The purpose of the study was to explore how a public IT services transferor organization comprising autonomous entities can effectively develop and organize its data center cost recovery mechanisms in a fair manner. The lack of a well-defined model for charges and a cost recovery scheme could cause various problems; for example, one entity may end up subsidizing the costs of another. Transfer pricing is in the best interest of each autonomous entity in a CCA. While transfer pricing plays a pivotal role in setting the prices of services and intangible assets, transaction cost economics (TCE) focuses on the arrangement at the boundary between entities. TCE is concerned with the costs, autonomy, and cooperation issues of an organization; the theory addresses the factors that influence intra-firm transaction costs and attempts to expose the problems involved in determining the charges or prices of such transactions. This study was carried out as a single case study in a public organization. The organization intended to transfer the IT services of its affiliated public entities and was in the process of establishing a joint municipal data center. Nine semi-structured interviews, including two pilot interviews, were conducted with experts and managers of the case company and its affiliated entities. The purpose of these interviews was to explore the charging and pricing issues of intra-firm transactions. In order to process and summarize the findings, this study employed qualitative techniques with multiple methods of data collection. By reviewing TCE theory and a sample of the transfer pricing literature, the study created an IT services pricing framework as a conceptual tool for illustrating the structure of transferred costs. Antecedents and consequences of the transfer price based on TCE were developed, and an explanatory fair charging model was eventually developed and suggested. The findings of the study suggested that a chargeback system was an inappropriate scheme for an organization with affiliated autonomous entities. The main contribution of the study was the application of transfer pricing (TP) methodologies in the public sphere, without consideration of tax issues.