994 results for Library Administration.
Abstract:
Computer Aided Parallelisation Tools (CAPTools) [Ierotheou C, Johnson SP, Cross M, Leggett PF. Computer aided parallelisation tools (CAPTools) - conceptual overview and performance on the parallelisation of structured mesh codes. Parallel Computing 1996;22:163-195] is a set of interactive tools aimed at the automatic parallelisation of serial FORTRAN Computational Mechanics (CM) programs. CAPTools analyses the user's serial code and then, through stages of array partitioning, mask and communication calculation, generates parallel SPMD (Single Program Multiple Data) message-passing FORTRAN. The parallel code generated by CAPTools contains calls to a collection of routines that form the CAPTools communications library (CAPLib). The library provides a portable layer and a user-friendly abstraction over the underlying parallel environment. CAPLib contains optimised message-passing routines for data exchange between parallel processes, together with utility routines for parallel execution control, initialisation and debugging. By compiling and linking against different implementations of the library, the user can run on many different parallel environments. Even with today's parallel systems, the concept of a single version of a parallel application code is more of an aspiration than a reality. However, for CM codes the data-partitioning SPMD paradigm requires a relatively small set of message-passing communication calls. This set can be implemented as an intermediate 'thin layer' library of message-passing calls that enables the parallel code (especially that generated automatically by a parallelisation tool such as CAPTools) to be as generic as possible. CAPLib is just such a 'thin layer' message-passing library that supports parallel CM codes by mapping generic calls onto machine-specific libraries (such as CRAY SHMEM) and portable general-purpose libraries (such as PVM and MPI).
This paper describes CAPLib together with its three perceived advantages over other routes: as a high-level abstraction, it is easy both to understand (especially when generated automatically by tools) and to implement by hand for the CM community, who are not generally parallel computing specialists; the single parallel version of the application code is truly generic and portable; and the parallel application can readily use whichever message-passing libraries yield optimum performance on a given machine.
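The 'thin layer' idea above can be sketched in miniature: the application calls only generic communication routines, and the layer forwards them to whichever backend library is swapped in. This is an illustrative Python sketch only; the names (`CapLibLayer`, `cap_send`, `cap_receive`) are invented for this example and are not the real CAPLib API, which is a FORTRAN library mapping onto SHMEM, PVM or MPI.

```python
class Backend:
    """Interface every underlying message-passing library must satisfy."""
    def send(self, dest, data): raise NotImplementedError
    def recv(self, rank): raise NotImplementedError

class InProcessBackend(Backend):
    """Stand-in for MPI/PVM/SHMEM: mailboxes shared between 'processes'."""
    def __init__(self):
        self.mailboxes = {}
    def send(self, dest, data):
        self.mailboxes.setdefault(dest, []).append(data)
    def recv(self, rank):
        # A real backend would block until a message arrives;
        # this sketch assumes one is already queued.
        return self.mailboxes[rank].pop(0)

class CapLibLayer:
    """The generic calls the parallel code is written against.
    Swapping the backend changes the parallel environment, not the code."""
    def __init__(self, backend, rank):
        self.backend, self.rank = backend, rank
    def cap_send(self, dest, data):
        self.backend.send(dest, data)
    def cap_receive(self):
        return self.backend.recv(self.rank)
```

The application code depends only on `CapLibLayer`; an MPI-backed `Backend` could replace `InProcessBackend` without touching the generated parallel code, which is the portability argument the abstract makes.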
Abstract:
This paper presents data on occupant pre-evacuation times from a university and a hospital outpatient facility. Although the two structures are entirely different, they employ relatively similar procedures: members of staff sweep areas of the structure to encourage individuals to evacuate. However, the manner in which the dependent population reacts to these procedures is quite different. In the hospital case the patients evacuated only once a member of the nursing staff had instructed them to do so, whereas in the university evacuation the students were less dependent on the actions of the staff, with over 50% of them evacuating with no prior prompting. Although these data may be useful in a variety of areas, they were collected primarily for use within evacuation models.
A policy-definition language and prototype implementation library for policy-based autonomic systems
Abstract:
This paper presents work towards generic policy toolkit support for autonomic computing systems in which the policies themselves can be adapted dynamically and automatically. The work is motivated by three needs: the need for longer-term policy-based adaptation, in which the policy itself is dynamically adapted to maintain or improve its effectiveness despite changing environmental conditions; the need to enable practitioners who are not autonomics experts to embed self-managing behaviours with low cost and risk; and the need for adaptive policy mechanisms that are easy to deploy into legacy code. A policy-definition language is presented, designed to permit powerful expression of self-managing behaviours. The language is very flexible, combining a simple yet expressive syntax and semantics, and facilitates a very diverse policy behaviour space through both hierarchical and recursive uses of language elements. A prototype library implementation of the policy support mechanisms is described. The library reads and writes policies as well-formed XML. The implementation extends the state of the art in policy-based autonomics through innovations that include support for multiple policy versions of a given policy type, multiple configuration templates, and meta-policies that dynamically select between policy instances and templates. Most significantly, the scheme supports hot-swapping between policy instances. To illustrate the feasibility and general applicability of these tools, two dissimilar example deployment scenarios are examined. The first, taken from an exploratory implementation of self-managing parallel processing, demonstrates simple and efficient use of the tools. The second demonstrates more advanced functionality in the context of an envisioned multi-policy stock-trading scheme that is sensitive to environmental volatility.
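The combination of multiple policy versions, a meta-policy selector and hot-swapping can be illustrated with a small sketch. The XML schema, element names and `PolicyStore` class below are invented for this example and are not the paper's actual policy format; they only show the mechanism in principle.

```python
import xml.etree.ElementTree as ET

# Two versions of one policy type, plus a meta-policy that names the
# version currently in force (schema invented for illustration).
POLICY_XML = """
<policies type="load-balancing">
  <policy version="1" threshold="0.9"/>
  <policy version="2" threshold="0.7"/>
  <meta-policy active-version="1"/>
</policies>
"""

class PolicyStore:
    def __init__(self, xml_text):
        self.root = ET.fromstring(xml_text)

    def active(self):
        # The meta-policy decides which policy instance is applied.
        version = self.root.find("meta-policy").get("active-version")
        return self.root.find(f"policy[@version='{version}']")

    def hot_swap(self, version):
        # Switch the active policy instance at run time, without
        # redeploying the managed system.
        self.root.find("meta-policy").set("active-version", str(version))
```

A caller would consult `store.active()` before each managed decision, so a `hot_swap` takes effect on the next decision cycle.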
Abstract:
Two evacuation trials were conducted within Brazilian library facilities by FSEG staff in January 2005. These trials are among the first such trials conducted in Brazil. Their purpose was to collect pre-evacuation time data from a population with a cultural background different from that found in western Europe. In total, 34 pre-evacuation times were collected from the experiments; these ranged from 5 to 98 seconds, with a mean pre-evacuation time of 46.7 seconds.
Abstract:
Remote sensing airborne hyperspectral data are routinely used for applications including algorithm development for satellite sensors, environmental monitoring and atmospheric studies. Single flight lines of airborne hyperspectral data are often in the region of tens of gigabytes in size, which means that a single aircraft can collect terabytes of remotely sensed hyperspectral data in a single year. Before these data can be used for scientific analyses, they need to be radiometrically calibrated, synchronised with the aircraft's position and attitude, and then geocorrected. To enable efficient processing of these large datasets, the UK Airborne Research and Survey Facility has recently developed a software suite, the Airborne Processing Library (APL), for processing airborne hyperspectral data acquired from the Specim AISA Eagle and Hawk instruments. The APL toolbox allows users to radiometrically calibrate, geocorrect, reproject and resample airborne data. Each stage of the toolbox outputs data in the common Band Interleaved by Line (BIL) format, which allows its integration with other standard remote sensing software packages. APL was developed to be user-friendly and suitable for use on a workstation PC as well as for the facility's automated processing; to this end APL can be used under both Windows and Linux environments, on a single desktop machine or through a Grid engine. A graphical user interface also exists. In this paper we describe the Airborne Processing Library software, its algorithms and approach. We present example results from using APL with an AISA Eagle sensor and assess its spatial accuracy using data from multiple flight lines collected during a 2008 campaign, together with in situ surveyed ground control points.
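The Band Interleaved by Line (BIL) ordering mentioned above stores, for each image line, one row of samples from every band before moving on to the next line. A minimal sketch of that interleaving, using plain Python lists rather than the binary ENVI-style rasters (with a header giving lines, samples and bands) that APL actually reads and writes:

```python
def to_bil(bands):
    """Flatten bands[b][l][s] into BIL order: for each line, every
    band's row of samples in turn."""
    n_lines = len(bands[0])
    flat = []
    for l in range(n_lines):        # for each scan line...
        for band in bands:          # ...append that line's row per band
            flat.extend(band[l])
    return flat

def from_bil(flat, n_bands, n_lines, n_samples):
    """Invert to_bil, recovering the bands[b][l][s] structure."""
    bands = [[None] * n_lines for _ in range(n_bands)]
    i = 0
    for l in range(n_lines):
        for b in range(n_bands):
            bands[b][l] = flat[i:i + n_samples]
            i += n_samples
    return bands
```

For example, with 2 bands, 2 lines and 2 samples, `to_bil([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])` yields `[1, 2, 5, 6, 3, 4, 7, 8]`: both bands' first rows, then both bands' second rows.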
Abstract:
Queen's University Library was one of 202 libraries, including 57 members of the Association of Research Libraries (ARL), to survey its users in spring 2004 using the LibQUAL+ survey instrument. LibQUAL+ was designed by ARL to assist libraries in assessing the quality of their services and identifying areas for improvement. # Overall: Queen's scored higher than the average for all ARL participants and first among the 2004 Canadian participants. This relatively high rating is due to very high scores in the dimensions of Library as Place and Affect of Service. However, there is considerable need for improvement in the area of Information Control, where Queen's rated well below the ARL average. # Affect of Service: Queen's strong overall ratings are supported by the many respondent comments praising customer service throughout the system. The ratings and survey comments indicate the greatest appreciation by faculty and more experienced students (e.g. graduate students) for the instruction and on-site services provided by the libraries. The ratings also indicate that undergraduates, having grown up with the web, want and expect to be able to access library resources independently, and do not value these services as highly. The comments also indicated some specific areas for improvement throughout the library system. # Library as Place: All Queen's libraries except Law ranked well above the ARL and Canadian averages. Overall, Library as Place ranked lowest in importance among the service dimensions for all ARL participants, including Queen's. Comparative analysis of LibQUAL+ results since the survey began shows a decline in "desired" ratings for Library as Place. However, undergraduates continue to give strong "desired" ratings to certain aspects of Library as Place and a relatively high rating for "minimum expected" service.
The comments from Queen's survey respondents and ARL's analyses of focus groups indicate that undergraduates value the library much more as a place to study and work with peers than for its on-site resources and services. # Information Control: This is the area in greatest need of attention. While it ranked highest in importance for all user groups by a wide margin, Queen's performed poorly in this category. Overall, Queen's ranked far below both the ARL average and the top three Canadian scores. However, the major dissatisfaction was concentrated in the humanities/social sciences (Stauffer primary users) and the health sciences (Bracken primary users), where the overall rating of perceived service quality ranked below the minimum expected service rating. Primary users of the Education, Engineering/Science and Law libraries rated this service dimension higher than the ARL average. The great success of the Canadian National Site License Program (CNSLP) is reflected in the high overall rating generated by Engineering/Science Library users. The low ratings from the humanities and social sciences are supported by respondents' comments and are generally consistent with other ARL participants.
Abstract:
Analysis of 2007 LibQUAL+ results from the survey conducted by Queen's University in February 2007.
Abstract:
The Data-Information-Knowledge (DIK) Chain, also known as the "Information Hierarchy" or "Knowledge Pyramid", is one of the most important models in Information Management and Knowledge Management. The chain is generally structured as an architecture in which each element builds on the element immediately below it; however, there is no consensus on the definition of the elements, nor on the processes that transform an element at one level into an element at the next. This article reviews the Data-Information-Knowledge Chain, examining the most relevant definitions in the literature of its elements and of their articulation, in order to synthesise the most common meanings. The elements of the DIK Chain are analysed through Peirce's semiotics, an approach that allows us to clarify their meanings and to identify the differences, relationships and roles they play in the chain from the standpoint of pragmatism. Finally, a definition of the DIK Chain is proposed, supported by Peirce's triadic categories of signs and unlimited semiosis, Stamper's levels of sign systems, and Zeleny's metaphors.
Abstract:
Changes in scholarly communication models are obliging university libraries to offer new services. To meet researchers' needs, librarians are developing new skills, collaborating with a growing range of institutional bodies, and supporting open access. Drawing on a list of possible services based on the specialised literature, this study aims to quantify and evaluate research support in Spanish university libraries. The survey shows the emergence of new services and infrastructures. However, this support is rarely systematised, publicised or evaluated. Moreover, the associated investments in staff and ICT have opened a gap between universities.