952 results for Digital Reference Service
Abstract:
Metabolic homeostasis is achieved by complex molecular and cellular networks that differ significantly among individuals and are difficult to model with genetically engineered lines of mice optimized to study single gene function. Here, we systematically acquired metabolic phenotypes by using the EUMODIC EMPReSS protocols across a large panel of isogenic but diverse strains of mice (BXD type) to study the genetic control of metabolism. We generated and analyzed 140 classical phenotypes and deposited these in an open-access web service for systems genetics (www.genenetwork.org). Heritability, influence of sex, and genetic modifiers of traits were examined singly and jointly by using quantitative-trait locus (QTL) and expression QTL-mapping methods. Traits and networks were linked to loci encompassing both known variants and novel candidate genes, including alkaline phosphatase (ALPL), here linked to hypophosphatasia. The assembled and curated phenotypes provide key resources and exemplars that can be used to dissect complex metabolic traits and disorders.
Abstract:
The goal of this research is to study how knowledge-intensive business services can be productized by using the service blueprinting tool. As services provide the majority of jobs, GDP and productivity growth in Europe, their continuous development is needed for Europe to retain its global competitiveness. As services are turning more complex, their development becomes more difficult. The theoretical part of this study is based on researching productization in the context of knowledge-intensive business services. The empirical part is carried out as a case study in a KIBS company, and utilizes qualitative interviews and case materials. The final outcome of this study is an updated productization framework, designed for KIBS companies, and recommendations for the case company. As the results of this study indicate, productization expanded with service blueprinting can be a useful tool for KIBS companies to develop their services. The updated productization framework is provided for future reference.
Abstract:
As institutions of higher education struggle to stay relevant, competitive, accessible, and flexible, they are scrambling to attend to a shift in focus for new students. This shift involves experiential learning. The purpose of this major research paper was to examine the existing structures, to identify gaps in the experiential learning programs, and to devise a framework for moving forward. The specific focus was on experiential learning at Brock University in the Faculty of Applied Health Sciences. The methodology was underpinned by cognitive constructivism and appreciative theory. Data collection followed the content analysis steps established by Krippendorff (2004) and Weber (1985). Data analysis involved the four dimensions of reflection designed by LaBoskey: purpose, context, content, and procedures. The results developed an understanding of the state of formal processes and pathways within service learning. A tool kit was generated that defines service learning and offers an overview of the types of service learning typically employed. The tool kit acts as a reference guide for those interested in implementing experiential learning courses. Importantly, the results also provided 10 key points in experiential learning courses by Emily Allan. A flow chart illustrates the connections among the 10 points, which are then described in full to establish a strategy for the way forward in experiential learning.
Abstract:
The purpose of this project is to provide social service practitioners with tools and perspectives to engage young people in a process of developing and connecting with their own personal narratives, and of storytelling with others. This project extensively reviews the literature to explore Why Story, What Is Story, Future Directions of Story, and Challenges of Story. Anchoring this exploration is Freire’s (1970/2000) intentional uncovering and decoding. Taking a phenomenological approach, I draw additionally on Brookfield’s (1995) critical reflection; Delgado (1989) and McLaren (1998) for subversive narrative; and Robin (2008) and Sadik (2008) for digital storytelling. The recommendations provided within this project include a practical model, built upon Baxter Magolda and King’s (2004) process towards self-authorship, for engaging in an exercise of storytelling that is accessible to practitioners and young people alike. A personal narrative that aims to connect lived experience with the theoretical content underscores this project. I call for social service practitioners to engage their own personal narratives in an inclusive and purposeful storytelling method that enhances their ability to help the young people they serve develop and share their stories.
Abstract:
Introduction: Biomedical scientists need to choose among hundreds of publicly available bioinformatics applications, tools, and databases. Librarians face the challenges of raising awareness of valuable resources and of providing support in finding and evaluating specific resources. Our objective is to implement an education program in bioinformatics similar to those offered in other North American academic libraries. Description: Our initial target clientele included four research departments of the Faculty of Medicine at Université de Montréal. In January 2010, I attended two departmental meetings and interviewed a few stakeholders in order to propose a basic bioinformatics service: one-to-one consultations and a workshop on NCBI databases. The response was favourable. The workshop was thus offered once a month during the Winter and Fall semesters, and participants were invited to evaluate the workshop via an online survey. In addition, a bioinformatics subject guide was launched on the library’s website in December 2010. Outcomes: One hundred and two participants attended one of the nine NCBI workshops offered in 2010; most were graduate students (74%). The survey’s response rate was 54%. A majority of respondents thought that the bioinformatics resources featured in the workshop were relevant (95%) and that the difficulty level of the exercises was appropriate (84%). Respondents also thought that their future information searches would be more efficient (93%) and that the workshop should be integrated into a course (78%). Furthermore, five bioinformatics-related reference questions were answered and two one-to-one consultations with students were performed. Discussion: The success of our bioinformatics service is growing.
Future directions include extending the service to other biomedical departments, integrating the workshop in an undergraduate course, promoting the subject guide to other francophone universities, and creating a bioinformatics blog that would feature specific databases, news, and library resources.
Abstract:
As shown by different scholars, the idea of “author” is not absolute or necessary. On the contrary, it came to life as an answer to the very practical needs of an emerging print technology in search of an economic model of its own. In this context, and according to the criticism of the notion of “author” made during the 1960–70s (in particular by Barthes and Foucault), it would only be natural to consider the idea of the author being dead as a global claim accepted by all scholars. Yet this is not the case, because, as Rose suggests, the idea of “author” and the derived notion of copyright are still too important in our culture to be abandoned. But why such an attachment to the idea of “author”? The hypothesis on which this chapter is based is that the theory of the death of the author—developed in texts such as What is an Author? by Michel Foucault and The Death of the Author by Roland Barthes—did not provide the conditions for a shift towards a world without authors because of its inherent lack of concrete editorial practices different from the existing ones. In recent years, the birth and diffusion of the Web have allowed the concrete development of a different way of interpreting the authorial function, thanks to new editorial practices—which will be named “editorialization devices” in this chapter. Thus, what was inconceivable for Rose in 1993 is possible today because of the emergence of digital technology—and in particular, the Web.
Abstract:
The paratext framework is now used in a variety of fields to assess, measure, analyze, and comprehend the elements that provide thresholds, allowing scholars to better understand digital objects. Researchers from many disciplines revisit paratextual theories in order to grasp what surrounds text in the digital age. Examining Paratextual Theory and its Applications in Digital Culture suggests a theoretical and practical tool for building bridges between disciplines interested in conducting joint research and exploration of digital culture. Helping scholars from different fields find an interdisciplinary framework and common language to study digital objects, this book serves as a useful reference for academics, librarians, professionals, researchers, and students, offering a collaborative outlook and perspective.
Abstract:
The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which a service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, with 28 constructs designed under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspect of availability, six months of live outage data from base transceiver stations were collected. Statistical and exploratory techniques were used to model network performance. The failure patterns were modeled using competing-risk models, and the probability distributions of service outages and restorations were parameterized. Since the availability of a network is a function of the reliability and maintainability of its elements, any service provider who wishes to keep up service level agreements on availability should be aware of the variability of these elements and the effects of their interactions. The availability variations were studied by designing a discrete-time event simulation model with probabilistic input parameters. The distribution parameters derived from the live data analysis were used to design experiments that define the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed that incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers.
The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of mobile number portability. It also provides a relative measure of the effectiveness of different service providers.
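The idea of simulating availability from probabilistic failure and restoration data can be sketched as a simple Monte Carlo model. This is an illustrative sketch only, not the study's actual tool: it assumes exponential failure and restoration times (the study parameterized distributions from live outage data), and all function and parameter names are hypothetical.

```python
import random

def simulate_availability(mtbf, mttr, horizon, n_runs=1000, seed=42):
    """Monte Carlo estimate of availability for a single network element,
    assuming exponential time-to-failure (mean mtbf) and time-to-restore
    (mean mttr), both in hours, over a fixed observation horizon."""
    rng = random.Random(seed)
    total_up = 0.0
    total_time = 0.0
    for _ in range(n_runs):
        t = 0.0
        while t < horizon:
            up = rng.expovariate(1.0 / mtbf)    # time until next failure
            down = rng.expovariate(1.0 / mttr)  # time to restore service
            total_up += min(up, horizon - t)    # cap uptime at the horizon
            t += up + down
        total_time += horizon
    return total_up / total_time

# One year of operation; numbers are purely illustrative.
est = simulate_availability(mtbf=500.0, mttr=4.0, horizon=24 * 365)
analytic = 500.0 / (500.0 + 4.0)  # steady-state MTBF / (MTBF + MTTR)
```

Repeating such runs over the parameter ranges observed in live data is one way to map out an "availability domain" against which maintenance plans can be checked.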
Abstract:
In everyday life, different flows of customers arrive to avail themselves of some service facility at a service station. In some of these situations, congestion of items arriving for service is unavoidable because an item cannot be serviced immediately on arrival. A queuing system can be described as customers arriving for service, waiting for service if it is not immediate, and leaving the system after being served. Examples include shoppers waiting in front of checkout stands in a supermarket, programs waiting to be processed by a digital computer, ships waiting to be unloaded in a harbor, and persons waiting at a railway booking office. A queuing system is specified completely by the following characteristics: input or arrival pattern, service pattern, number of service channels, system capacity, queue discipline, and number of service stages. The ultimate objective of solving queuing models is to determine the characteristics that measure the performance of the system.
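For the simplest single-channel case (Poisson arrivals, exponential service times, infinite capacity, first-come-first-served), the performance measures mentioned above have closed-form expressions. A minimal sketch of the standard M/M/1 formulas, with hypothetical function names:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 performance measures: Poisson arrivals at
    arrival_rate, exponential service at service_rate, one channel,
    infinite capacity, FIFO discipline."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate            # server utilization
    return {
        "utilization": rho,
        "L": rho / (1 - rho),                    # mean number in system
        "Lq": rho**2 / (1 - rho),                # mean number in queue
        "W": 1 / (service_rate - arrival_rate),  # mean time in system
        "Wq": rho / (service_rate - arrival_rate),  # mean waiting time
    }

# Example: 8 customers/hour arriving, 10 served/hour.
m = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
```

Note that the results satisfy Little's law (L = λW): with λ = 8 and W = 0.5 hours, the mean number in the system is 4.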
Abstract:
Scientific studies of the materials management systems and practices actually followed in various organizations in India are rather limited. This is particularly true of service industries. In this context, the present study on materials management in state transport undertakings in India, with special reference to the Kerala State Road Transport Corporation (KSRTC), assumes particular significance. The study critically examines the prevailing setup, procedures, and practices of materials management in KSRTC and compares them with the prevailing practices in other similar state transport undertakings. It indicates several areas for improvement with respect to organization, materials planning, purchasing, storekeeping, and other aspects. It also seeks to develop a comprehensive inventory control system for KSRTC.
Abstract:
As of 1999, the state of Kerala had 3,210 offices of scheduled commercial banks (SCBs). In all, 48 commercial banks operate in Kerala, including PSBs, OPBs, NPBs, FBs, and Gramin Banks. The urban areas give a complete picture of the competition in the present-day banking scenario, with all bank groups present. Semi-urban areas of Kerala had 2,196 offices and urban areas 593 as of March 1995. The study focuses on selected segments of urban customers in Kerala, which can reveal the finer aspects of variation in customer behaviour in the purchase of banking products and services. Considering the exhaustive nature of such an exercise, not all districts in the state have been brought under the purview of the study. Instead, the three districts with the largest volume of business in terms of deposits, advances, and number of offices have been shortlisted as representative regions for a focused study. The study focuses on the retail customer segment and their perceptions of the various products and services offered to them. Non-Resident Indians (NRIs) and the Traders and Small-Scale Industries segments have also been included in order to obtain a comparative picture with respect to perceptions of customer satisfaction, service quality dimensions, and bank choice behaviour. The research is hence confined to customer behaviour and its implications for possible segmentation strategies within the retail customer segment.
Abstract:
The purpose of this paper is to describe the design and development of a digital library at Cochin University of Science and Technology (CUSAT), India, using DSpace open source software. The study covers the structure, contents, and usage of the CUSAT digital library. Design/methodology/approach – This paper examines the possibilities of applying open source software in libraries. An evaluative approach is used to explore the features of the CUSAT digital library. The Google Analytics service is employed to measure the use of the digital library by users across the world. Findings – CUSAT has successfully applied the DSpace open source software to build a digital library. The digital library has received visits from 78 countries, with the major share from India. The distribution of documents in the digital library is uneven: past examination question papers account for the major part of the collection, while the numbers of research papers, articles, and rare documents are comparatively small. Originality/value – The study is the first of its kind to examine digital library design and development using DSpace open source software in a university environment, with a focus on analyzing the distribution of items and measuring value through usage statistics from the Google Analytics service. The digital library model can be useful for designing similar systems.
Abstract:
Current developments in international climate policy require Germany to reduce its greenhouse gas emissions. The most important greenhouse gas is carbon dioxide, which is released into the atmosphere by the combustion of fossil fuels. The reduction targets can in principle be met both by cutting emissions and by creating carbon sinks. Sinks here refer to the biological storage of carbon in soils and forests. An important factor influencing these processes is the spatial dynamics of land use in a region. In this work, the model system HILLS is developed and used to simulate these complex interactions in the German state of Hesse. The goal is to use HILLS not only to analyze the current state but also to examine scenarios of future regional land-use development and its effect on the carbon budget up to 2020. To represent the spatial and temporal dynamics of land use in Hesse, the model LUCHesse is developed. Its task is to simulate the relevant processes on a 1 km² grid, with the rates of change prescribed exogenously as area trends at the level of the Hessian districts. LUCHesse consists of submodels for the following processes: (A) expansion of settlement and commercial areas, (B) structural change in the agricultural sector, and (C) establishment of new forest areas (afforestation). Each submodel comprises methods for assessing the site suitability of grid cells for different land-use classes and for allocating the prescribed trends to those cells best suited to each class. The submodels are validated against statistical data for the period 1990 to 2000. A simulation run produces digital maps of the land-use distribution in Hesse at discrete time steps.
To simulate carbon storage, a modified version of the ecosystem model Century is developed (GIS-Century). It allows a controlled simulation run in annual steps and supports the integration of the model as a component of the HILLS model system. Several application schemes for GIS-Century are developed to investigate the effect of arable land set-aside, afforestation, and the management of existing forests on carbon storage. The model and the application schemes are validated against field and literature data. HILLS implements a sequential coupling of LUCHesse with GIS-Century. Spatial coupling takes place on the 1 km² grid; temporal coupling is achieved through a land-use vector describing the land-use change of each grid cell over the simulation period. In addition, HILLS integrates both models into a geographic information system (GIS) via a service- and database-oriented concept, making the GIS functions for spatial data storage and processing available. As an application of the model system, a reference scenario for Hesse with a time horizon of 2020 is computed. The scenario assumes implementation of the AGENDA 2000 policy in the agricultural sector, which leads to large-scale set-aside of arable land, while current trends of area expansion are extrapolated for settlement and commerce as well as for afforestation. With HILLS it is now possible to quantify the effect of these land-use changes on biological carbon storage. While the expansion of settlement areas is identified as a carbon source (37 kt C/a), the most important sink is the management of existing forests (794 kt C/a).
Furthermore, the set-aside of arable land (26 kt C/a) and afforestation (29 kt C/a) lead to additional carbon storage. For carbon storage in soils, the simulation experiments show very clearly that this sink is only of limited duration.
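The allocation step described for the land-use submodels (assign prescribed area trends to the grid cells best suited for a class) can be illustrated with a greedy selection. This is a hypothetical simplification in the spirit of the LUCHesse submodels, not the actual model: names, data, and the selection rule are invented for illustration.

```python
def allocate_trend(suitability, current_use, target_class, n_cells):
    """Convert the n_cells grid cells that are most suitable for
    target_class and do not already belong to it. suitability and
    current_use are parallel lists over a flattened grid."""
    candidates = [
        (suit, idx)
        for idx, suit in enumerate(suitability)
        if current_use[idx] != target_class
    ]
    candidates.sort(key=lambda pair: pair[0], reverse=True)  # best cells first
    new_use = list(current_use)
    for _, idx in candidates[:n_cells]:
        new_use[idx] = target_class
    return new_use

# Toy 5-cell grid: allocate a trend of 2 cells of new settlement area.
use = ["arable", "arable", "forest", "arable", "grass"]
suit = [0.2, 0.9, 0.8, 0.5, 0.7]  # suitability for "settlement"
result = allocate_trend(suit, use, "settlement", 2)
```

Running the allocation per district and per time step, as the abstract describes, would then yield a sequence of land-use maps.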
Abstract:
Summary: Productivity and forage quality of legume-grass swards are important factors for successful arable farming in both organic and conventional farming systems. For these objectives, the botanical composition of the swards is of particular importance, especially the content of legumes, due to their ability to fix airborne nitrogen. As it can vary considerably within a field, a non-destructive detection method usable while doing other tasks would facilitate more targeted sward management and could predict the nitrogen supply of the soil for the subsequent crop. This study was undertaken to explore the potential of digital image analysis (DIA) for a non-destructive prediction of the legume dry matter (DM) contribution of legume-grass mixtures. For this purpose an experiment was conducted in a greenhouse, comprising a sample size of 64 experimental swards: pure swards of red clover (Trifolium pratense L.), white clover (Trifolium repens L.), and lucerne (Medicago sativa L.), as well as binary mixtures of each legume with perennial ryegrass (Lolium perenne L.). Growth stages ranged from tillering to heading, and the proportion of legumes from 0 to 80 %. Based on digital sward images, three steps were taken to estimate the legume contribution (% of DM): i) the development of a digital image analysis (DIA) procedure to estimate legume coverage (% of area); ii) the description of the relationship between legume coverage (% of area) and legume contribution (% of DM), derived from digital analysis of legume coverage relative to the green area in a digital image; iii) the estimation of legume DM contribution from the findings of i) and ii). i) To evaluate the most suitable approach for estimating legume coverage by means of DIA, different tools were tested. Morphological operators such as erode and dilate support the differentiation of objects of different shape by shrinking and dilating objects (Soille, 1999).
When applied to digital images of legume-grass mixtures, thin grass leaves were removed whereas rounder clover leaves were retained. After this process, legume leaves were identified by threshold segmentation. The segmentation of greyscale images turned out not to be applicable, since the segmentation between legumes and bare soil failed. The advanced procedure comprising morphological operators and HSL colour information could determine bare soil areas in young and open swards very accurately. Legume-specific HSL thresholds also allowed for precise estimation of legume coverage across a wide range from 11.8 to 72.4 %. Based on this legume-specific DIA procedure, estimated legume coverage showed good correlation with the measured values across the whole range of sward ages (R2 0.96, SE 4.7 %). A wide range of form parameters (i.e. size, breadth, rectangularity, and circularity of areas) was tested across all sward types, but none improved the prediction accuracy of legume coverage significantly. ii) Using measured reference data of legume coverage and contribution, a first approach found a common relationship based on all three legumes and sward ages of 35, 49 and 63 days, with R2 0.90. This relationship was improved by a legume-specific approach using only 49- and 63-day-old swards (R2 0.94, 0.96 and 0.97 for red clover, white clover, and lucerne, respectively), since differing structural attributes of the legume species influence the relationship between these two parameters. In a second approach, biomass was included in the model to account for the different structures of swards of different ages. Hence, a model was developed providing a close look at the relationship between legume coverage in binary legume-ryegrass communities and the legume contribution: at the same level of legume coverage, legume contribution decreased with increased total biomass.
This phenomenon may be caused by more non-leguminous biomass being covered by legume leaves at high levels of total biomass. Additionally, values of legume contribution and coverage were transformed to the logit scale in order to avoid problems with heteroscedasticity and negative predictions. The resulting relationships between measured and calculated legume contribution indicated high model accuracy for all legume species (R2 0.93, 0.97, 0.98 with SE 4.81, 3.22, 3.07 % of DM for red clover, white clover, and lucerne swards, respectively). Validation of the model using digital images collected over field-grown swards, with biomass ranges within the scope of the model, shows that the model is able to predict legume contribution for most common legume-grass swards (Frame, 1992; Ledgard and Steele, 1992; Loges, 1998). iii) An advanced procedure for the determination of legume DM contribution by DIA is suggested, which includes morphological operators and HSL colour information in the image analysis and applies an advanced function to predict legume DM contribution from legume coverage while considering total sward biomass. Low residuals between measured and calculated values of legume DM contribution were found for the separate legume species (R2 0.90, 0.94, 0.93 with SE 5.89, 4.31, 5.52 % of DM for red clover, white clover, and lucerne swards, respectively). The introduced DIA procedure provides a rapid and precise estimation of legume DM contribution for different legume species across a wide range of sward ages. Further research is needed to adapt the procedure to the field scale, dealing with differing light effects and potentially taller swards. The integration of total biomass into the model does not necessarily reduce its applicability in practice, as a combined estimation of total biomass by field spectroscopy (Biewer et al. 2009) and of legume coverage by DIA may allow for an accurate prediction of the legume contribution in legume-grass mixtures.
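The erode/dilate idea in step i) can be illustrated on a binary mask: erosion followed by dilation (a morphological "opening") removes thin structures such as grass blades while broader blobs such as clover leaves survive. This is a simplified pure-Python sketch, not the study's actual procedure: it uses a 3x3 square structuring element, omits the HSL colour segmentation entirely, and the toy mask and function names are hypothetical.

```python
def erode(img):
    """Binary erosion with a 3x3 square structuring element: a pixel
    stays 1 only if it and all 8 neighbours are 1 (borders become 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def dilate(img):
    """Binary dilation: a pixel becomes 1 if any 3x3 neighbour is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def opening(img):
    """Erosion then dilation: thin objects vanish, compact ones survive."""
    return dilate(erode(img))

# Toy 12x12 mask: a 1-pixel-wide "grass blade" and a 4x4 "clover leaf".
mask = [[0] * 12 for _ in range(12)]
for y in range(1, 9):
    mask[y][1] = 1          # thin vertical blade
for y in range(4, 8):
    for x in range(5, 9):
        mask[y][x] = 1      # compact leaf blob
cleaned = opening(mask)
```

After the opening, the blade column is empty while the 4x4 leaf is fully reconstructed; in the study, such a cleaned mask would then be refined with legume-specific colour thresholds.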
Abstract:
A program that simulates a Digital Equipment Corporation PDP-11 computer and many of its peripherals on the AI Laboratory Time Sharing System (ITS) is described from a user's reference point of view. The simulator has a built-in DDT-like command level which provides the user with the normal range of DDT facilities, as well as several special debugging features built into the simulator. The DDT command language was implemented by Richard M. Stallman, while the simulator was written by the author of this memo.