23 results for Information literacy integration model
in Aston University Research Archive
Abstract:
Aston University has recently made PebblePad, an e-portfolio or personal learning system, available to all students within the University. The customisable Profiles within PebblePad allow students to self-declare their skills in particular areas, attaching evidence of their skills or an action plan for improvement to each statement. Formal Information Literacy (IL) teaching within Aston University is currently limited to Library & Information Services (LIS) Information Specialists delivering a maximum of one session to each student during each level of their degree. However, many of these skills are continually developed by students during the course of their academic studies. For this project, an IL skills profile was created within PebblePad, which was then promoted to groups of staff and students to complete during the academic session 2009-10. Functionality within PebblePad allowed students to share their IL skills profile, evidence, action plans or any other items they felt were appropriate with an LIS Information Specialist, who was able to add comments and offer suggestions for activities to help the student develop further. Activities were closely related to students' coursework where possible: suggesting, for example, that a student keep a short reflective log of their information searching and evaluation process for an upcoming essay. Feedback on the usefulness of the IL Profile will be sought from students through focus groups and the communication tools in PebblePad. In this way, we hope to make students more aware of their IL skills and to offer IL skills support over a longer period of time than a single session can provide. We will present preliminary conclusions about the practicalities and benefits of a self-declaration approach to developing IL skills in students at Aston University.
Abstract:
Over the last six years, the Aston University Library & Information Services Induction Team has worked on the Welcome experience for new and returning students to the Library. The article provides an overview of the Induction programme and how it has evolved to engage students before and after their arrival at the University.
Abstract:
How does the visual system compute the statistics of global image contrast? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns, where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various cluster widths, matched to standard patterns of uniform contrast. When one of the interdigitated patterns had much higher contrast than the other, it determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low-contrast additions of one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and on masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
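The general form of such a wide-field gain control computation can be sketched as follows; the exponents, saturation constant and matching procedure are illustrative assumptions in the spirit of Meese & Summers (2007), not the fitted model itself.

```python
import numpy as np
from scipy.optimize import brentq

def gain_control_response(contrasts, p=2.4, q=2.0, z=1.0):
    """Wide-field contrast gain control: pooled nonlinear excitation
    divided by pooled suppression. Exponents p, q and constant z are
    assumed values for illustration."""
    c = np.asarray(contrasts, dtype=float)
    excitation = np.sum(c ** p)        # wide-field integration
    suppression = z + np.sum(c ** q)   # wide-field suppression
    return excitation / suppression

def match_contrast(test, n_elements):
    """Find the uniform contrast whose pooled response equals that of
    a mixed ('battenberg') test pattern."""
    target = gain_control_response(test)
    f = lambda c: gain_control_response([c] * n_elements) - target
    return brentq(f, 1e-6, 1.0)

# 20 x 20 array: high- and low-contrast micro-patterns interdigitated
mixed = [0.32] * 200 + [0.05] * 200
print(match_contrast(mixed, 400))  # perceived (matched) uniform contrast
```

With p > q, a small contrast added to the second set of micro-patterns inflates the suppressive denominator faster than the excitatory numerator, which is one way a paradoxical reduction in matched global contrast can arise.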
Abstract:
Safety enforcement practitioners within Europe, and marketers, designers or manufacturers of consumer products, need to determine compliance with the legal test of "reasonable safety" for consumer goods, to reduce the "risks" of injury to a minimum. To enable freedom of movement of products, a method for safety appraisal is required for use as an "expert" system of hazard analysis by non-experts in safety testing of consumer goods, for consistent implementation throughout Europe. Safety testing approaches and the concepts of risk assessment and hazard analysis are reviewed in developing a model for appraising consumer product safety which seeks to integrate the human factors contributions of risk assessment, hazard perception, and information processing. The model develops a system of hazard identification, hazard analysis and risk assessment which can be applied to a wide range of consumer products through a series of systematic checklists and matrices, and applies alternative numerical and graphical methods for calculating a final product safety risk assessment score. It is then applied in its pilot form by selected "volunteer" Trading Standards Departments to a sample of consumer products. A series of questionnaires is used to select participating Trading Standards Departments, to explore the contribution of potential subjective influences, and to establish views regarding the usability and reliability of the model and any preferences for the risk assessment scoring system used. The outcome of the two-stage hazard analysis and risk assessment process is considered to determine consistency in the results of hazard analysis and in final decisions regarding the safety of the sample product, and to determine any correlation between the decisions made using the model and those made using alternative scoring methods of risk assessment. The research also identifies a number of opportunities for future work, and indicates a number of areas where further work has already begun.
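The abstract does not reproduce the model's scoring scheme, but the arithmetic of a matrix-based score can be illustrated with a generic severity-by-likelihood product, a common convention in risk assessment; the scales, labels and example hazards below are assumptions, not the thesis's checklists.

```python
# Generic risk-matrix scoring; scales and weights are illustrative.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "severe": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

def risk_score(hazards):
    """Sum severity * likelihood over all identified hazards."""
    return sum(SEVERITY[s] * LIKELIHOOD[l] for s, l in hazards)

# Hypothetical product with two identified hazards: scald and trip
kettle = [("severe", "rare"), ("minor", "likely")]
print(risk_score(kettle))  # 4*1 + 2*3 = 10
```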
Abstract:
The control needed in the management of a project was analysed with particular reference to the unique needs of the construction industry within the context of site management. This was explored further by analysing the various problems facing managers within the overall system and determining to what extent the organisation would benefit from an integrated management information system. Integration and management of information within the organisational units and the cycles of events that make up the main sub-system was suggested as the means of achieving this objective. A conceptual model of the flow of information was constructed within the whole process of project management by examining the types of information and documents which are generated during the production cycle of a project. This model was analysed with respect to the site managers' needs and the minimum requirements for an overall integrated system. The most tedious and time-consuming tasks facing the site manager are the determination of weekly production costs, the calculation and preparation of interim certificates, the valuation of variations occurring during the production stage and, finally, the settlement and preparation of supplier and sub-contractors' accounts. These areas, where microcomputers could be of most help, were identified, and a number of packages were designed and implemented for various contractors. The gradual integration of stand-alone packages across the construction industry is a logical route to achieving an integrated management system. The methods of doing this were analysed, together with the resulting advantages and disadvantages.
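As an illustration of the arithmetic such packages automate, an interim certificate valuation conventionally nets retention and previously certified amounts off the gross value of work done; the figures and retention rate below are hypothetical.

```python
def interim_certificate(work_done, materials_on_site, retention_rate,
                        previously_certified):
    """Conventional interim valuation arithmetic (illustrative;
    actual contract terms vary)."""
    gross = work_done + materials_on_site
    retention = gross * retention_rate
    return gross - retention - previously_certified

# e.g. £120,000 work done, £8,000 materials, 5% retention, £95,000 paid
print(interim_certificate(120_000, 8_000, 0.05, 95_000))  # 26600.0 due
```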
Abstract:
The future of public libraries has been threatened by funding cuts and new digital technologies, which have led many people to question their traditional role and purpose. However, freedom of information, ready access to knowledge and information literacy in all its digital and analogue guises are more important than ever. Thus, public libraries remain significant spaces and places where people can socially interact and learn. In many countries public libraries are reinventing themselves, and part of this process has been the redesign of library services and the design and construction of new library buildings and facilities that articulate the values, purpose and role of what has been termed 'the next library'. Following discussion of new library developments in London, Birmingham and Worcester in the UK, Aarhus in Denmark and Helsinki in Finland, the article concludes that public libraries are now both social and media spaces as well as important physical places that can help city dwellers decide what type of urban world they want to see.
Abstract:
The visual system pools information from local samples to calculate textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical), with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ("noise check") or were blank ("blank check"). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would pool responses only from potential signal elements. Humans did this for medium check sizes, and for large check sizes when a signal was presented in the fovea. For small check sizes, however, the pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise-check and blank-check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage. © 2014 ARVO.
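The two-stage model lends itself to a compact sketch; the region size, noise level and final pooling rule below are assumed for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_pool(local_responses, region_size, noise_sd=0.1):
    """Stage 1: mandatory summation of local filter responses over
    fixed-size regions, with internal noise added at each location.
    Stage 2: task-dependent combination of the regional outputs
    (here, a simple sum). Region size and noise level are assumptions."""
    noisy = local_responses + rng.normal(0, noise_sd, local_responses.shape)
    regions = noisy.reshape(-1, region_size)
    stage1 = regions.sum(axis=1)   # fixed, mandatory integration
    return stage1.sum()            # task-dependent pooling

# 29 x 29 array: 1.0 where an element matches the target orientation
elements = (rng.random(29 * 29) < 0.3).astype(float)
print(two_stage_pool(elements, region_size=29))
```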
Abstract:
While diversity might give an organization a competitive advantage, individuals have a tendency to prefer homogeneous group settings. Prior research suggests that group members who are dissimilar (vs. similar) to their peers in terms of a given diversity attribute (e.g. demographics, attitudes, values or traits) feel less attached to their work group, experience less satisfying and more conflicted relationships with their colleagues, and consequently are less effective. However, prior empirical findings tend to be weak and inconsistent, and it remains unclear when, how and to what extent such differences affect group members' social integration (i.e. attachment to their work group, and satisfaction and conflicted relationships with their peers) and effectiveness. To address these issues the current study conducted a meta-analysis integrating the empirical results of 129 studies. For demographic diversity attributes (such as gender, ethnicity, race, nationality, age, functional background, and tenure) the findings support the idea that demographic dissimilarity undermines individual member performance via lower levels of social integration. These negative effects were more pronounced in pseudo teams, i.e. work groups in which group members pursue individual goals, work on individual tasks, and are rewarded for their individual performance. They were, however, non-existent in real teams, i.e. work groups in which group members pursue group goals, work on interdependent tasks, and are rewarded (at least partially) based on their work group's performance. In contrast, for underlying psychological diversity attributes (such as attitudes, personality, and values), the relationship between dissimilarity and social integration was more negative in real teams than in pseudo teams, which in turn translated into even lower individual performance. At the same time, however, differences in underlying psychological attributes had an even stronger positive effect on dissimilar group members' individual performance when the negative effects of social integration were controlled for. This implies that managers should implement real work groups to overcome the negative effects of group members' demographic dissimilarity. To harness the positive effects of group members' dissimilarity on underlying psychological attributes, they need to make sure that dissimilar group members become socially integrated.
Abstract:
Information systems have developed to the stage that there is plenty of data available in most organisations, but there are still major problems in turning that data into information for management decision making. This thesis argues that the link between decision support information and transaction processing data should be through a common object model which reflects the real world of the organisation and encompasses the artefacts of the information system. The CORD (Collections, Objects, Roles and Domains) model is developed, which is richer in appropriate modelling abstractions than current object models. A flexible Object Prototyping tool based on a Semantic Data Storage Manager has been developed which enables a variety of models to be stored and experimented with. A statistical summary table model, COST (Collections of Objects Statistical Table), has been developed within CORD and is shown to be adequate to meet the modelling needs of Decision Support and Executive Information Systems. The COST model is supported by a statistical table creator and editor, COSTed, which is also built on top of the Object Prototyper and uses the CORD model to manage its metadata.
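The abstract does not define the four CORD abstractions in detail, so the following is a purely hypothetical rendering of how they might relate, intended only to make the terminology concrete.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four CORD abstractions; the thesis's
# actual definitions are not reproduced in the abstract.
@dataclass
class Domain:
    """Set of permissible values for an attribute."""
    name: str
    values: set

@dataclass
class Role:
    """Context-dependent behaviour an object can play."""
    name: str

@dataclass
class BusinessObject:
    """Real-world entity; may play several roles."""
    name: str
    roles: list = field(default_factory=list)

@dataclass
class Collection:
    """Grouping of objects, e.g. the population summarised by a COST table."""
    name: str
    members: list = field(default_factory=list)
```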
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often not available, or even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables, and a system designed to elicit uncertainty about categorical random variables in the setting of land-cover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical land-cover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated land-cover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible land-cover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
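The statistical core of such a tool, fitting a parametric distribution to an expert's elicited quantiles in the spirit of the SHELF method, can be sketched as follows; the quantity, the elicited values and the choice of a normal family are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

# Expert's elicited judgements: 5th, 50th and 95th percentiles of a
# hypothetical model input.
probs = np.array([0.05, 0.50, 0.95])
quantiles = np.array([12.0, 20.0, 31.0])

def loss(params):
    """Squared distance between the fitted and elicited quantiles."""
    mu, log_sigma = params
    fitted = stats.norm.ppf(probs, loc=mu, scale=np.exp(log_sigma))
    return np.sum((fitted - quantiles) ** 2)

res = optimize.minimize(loss, x0=[20.0, 1.0])
mu, sigma = res.x[0], np.exp(res.x[1])
print(f"fitted normal prior: mean={mu:.2f}, sd={sigma:.2f}")
```

A consensus distribution for the group could then be formed by pooling the individual fits, one of the steps the Web-based workflow has to support.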
Abstract:
This research examines the role of the information management process within a process-oriented enterprise, Xerox Ltd. The research approach is based on a post-positive paradigm and has resulted in thirty-five idiographic statements. The three major outcomes are: 1. The process-oriented holistic enterprise is an organisation that requires a long-term management commitment to its development. It depends on the careful management of people, tasks, information and technology. A complex integration of business processes is required, and this can be managed through the use of consistent documentation techniques and clarity in the definition of process responsibilities; management attention to the global metrics and centralisation of the management of the process model are critical to its success. 2. The role of the information management process within the context of a process-oriented enterprise is to provide flexible and cost-effective application, technology and process support to the business. This is best achieved by centralising the management of both information management and the process model. A business-led approach, combined with the consolidation of the application, information, process and data architectures, is central to providing effective business- and process-focused support. 3. In a process-oriented holistic enterprise, process and information management are inextricably linked. The model of process management depends heavily on information management, whilst the model of information management is totally focused around supporting and creating the process model. The two models are mutually creating: one cannot exist without the other. There is a duality concept of process and information management.
Abstract:
The number of remote sensing platforms and sensors rises almost every year, yet much work on the interpretation of land cover is still carried out using either single images or images from the same source taken at different dates. Two questions could be asked of this proliferation of images: can the information contained in different scenes be used to improve classification accuracy, and what is the best way to combine the different imagery? Two of these multiple image sources are MODIS on the Terra platform and ETM+ on board Landsat 7, which are suitably complementary. Daily MODIS images are available with 36 spectral bands at 250-1000 m spatial resolution, while ETM+ provides seven spectral bands at 30 m spatial resolution with a 16-day revisit period. In the UK, cloud cover may mean that only a few ETM+ scenes are available for any particular year, and these may not be at the time of year of most interest. The MODIS data may provide information on land cover over the growing season, such as harvest dates, that is not present in the ETM+ data. Therefore, the primary objective of this work is to develop a methodology for the integration of medium spatial resolution Landsat ETM+ imagery with multi-temporal, multi-spectral, low-resolution MODIS/Terra imagery, with the aim of improving the classification of agricultural land. Additionally, other data may also be incorporated, such as field boundaries from existing maps. When classifying agricultural land cover of the type seen in the UK, where crops are largely sown in homogeneous fields with clear and often mapped boundaries, the classification is greatly improved by using the mapped polygons and utilising the classification of the polygon as a whole as an a priori probability in classifying each individual pixel using a Bayesian approach. When dealing with multiple images from different platforms and dates it is highly unlikely that the pixels will be exactly co-registered, and these pixels will contain a mixture of different real-world land covers. Similarly, the different atmospheric conditions prevailing on the different days will mean that the same emission from the ground will give rise to different sensor responses. Therefore, a method is presented, with a model of the instantaneous field of view and atmospheric effects, to enable different remotely sensed data sources to be integrated.
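The Bayesian use of the polygon-level classification as a prior can be sketched in a few lines; the classes, likelihoods and prior below are hypothetical.

```python
import numpy as np

def classify_pixel(likelihoods, polygon_prior):
    """Combine a per-pixel class likelihood (e.g. from a spectral
    classifier) with the whole-polygon classification used as an
    a priori probability; returns the posterior over classes."""
    posterior = likelihoods * polygon_prior
    return posterior / posterior.sum()

# Hypothetical three-class example: wheat, barley, grass
pixel_likelihood = np.array([0.30, 0.45, 0.25])  # ambiguous pixel
field_prior = np.array([0.70, 0.20, 0.10])       # whole-field result
print(classify_pixel(pixel_likelihood, field_prior))
# -> [0.646, 0.277, 0.077]: the field prior resolves the ambiguity
```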
Abstract:
Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.
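A compact sketch of the three summation stages is given below; the region size and the Minkowski exponent used to approximate probability summation are assumed values, not the paper's fitted parameters.

```python
import numpy as np

def three_stage_response(contrasts, region_size, minkowski_m=4.0):
    """Stages 1-2: square-law transduction followed by linear summation
    of contrast energy within each medium-range integration region.
    Stage 3: long-range pooling across regions, with probability
    summation approximated by a Minkowski sum of exponent m."""
    c = np.asarray(contrasts, dtype=float)
    regions = c.reshape(-1, region_size)
    energy = (regions ** 2).sum(axis=1)  # square law + linear sum
    return (energy ** minkowski_m).sum() ** (1 / minkowski_m)

# Interdigitated ('Swiss cheese') pattern: half the checks carry contrast
pattern = np.tile([0.01, 0.0], 128)  # 256 elements, alternating
print(three_stage_response(pattern, region_size=16))
```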