41 results for "Use of information"
in Aston University Research Archive
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
Analysing investments in information systems (ISs) in order to maximise benefits has become a prime concern, especially for private corporations. No formula of equilibrium exists that could link the amounts injected with the returns accrued; the relationship is simply not straightforward. This thesis is based upon empirical work which involved sketching organisational ethnographies (four organographies and a sectography) of the role and value of information systems in Jordanian financial organisations (JFOs). Besides deciphering the map of impacts, it explains the variations in the impacts of ISs, which were found to be related to internal organisational processes: culturally and politically specific considerations, economically or technically rooted factors, and environmental factors. The research serves as an empirical attempt to test the applicability of the interpretive paradigm to researching organisations in a developing country. The fieldwork comprised an exploratory stage, a detailed investigation of four case studies, and a survey stage encompassing 16 organisations. Primary and secondary data were collected from multiple sources using a range of instruments. The evidence highlights the fact that little long-term strategic planning was pursued; the emphasis was focused on short-term planning. There was no noticeable adoption of any strategic-fit principle linking IS strategy to corporate strategy. In addition, the benefits obtained were mostly intangible. Although ISs were central to the work of the organisations surveyed as the core technology, they were considered tools or work enablers rather than weapons for competitive rivalry. The cultural specificity of IS impacts was evident, and cultural and political considerations were key factors in explaining the variations in the impacts of ISs in JFOs. The thesis confirms that measuring the benefits of ISs is problematic.
However, in order to gain more insight, the phenomenon of "the use of ISs" has to be studied within its context.
Abstract:
This review will discuss the use of manual grading scales, digital photography, and automated image analysis in the quantification of fundus changes caused by age-related macular disease. Digital imaging permits processing of images for enhancement, comparison, and feature quantification, and these techniques have been investigated for automated drusen analysis. The accuracy of automated analysis systems has been enhanced by the incorporation of interactive elements, such that the user is able to adjust the sensitivity of the system, or manually add and remove pixels. These methods capitalize on both computer and human image feature recognition and the advantage of computer-based methodologies for quantification. The histogram-based adaptive local thresholding system is able to extract useful information from the image without being affected by the presence of other structures. More recent developments involve compensation for fundus background reflectance, which has most recently been combined with the Otsu method of global thresholding. This method is reported to provide results comparable with manual stereo viewing. Developments in this area are likely to encourage wider use of automated techniques. This will make the grading of photographs easier and cheaper for clinicians and researchers. © 2007 Elsevier Inc. All rights reserved.
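As a concrete illustration of the global-thresholding step the review describes, here is a minimal sketch of Otsu's method applied to a synthetic greyscale patch; the pixel data and the druse-like bright blob are invented for illustration and are not taken from any actual grading system.

```python
import random

def otsu_threshold(pixels):
    """Global Otsu threshold on 8-bit grey levels: pick the level that
    maximises the between-class variance of the histogram."""
    hist = [0] * 256
    for p in pixels:
        hist[int(p)] += 1
    total = len(pixels)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = sum(hist[:t]) / total          # weight of the "background" class
        w1 = 1.0 - w0                       # weight of the "lesion" class
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / (w0 * total)
        mu1 = sum(i * hist[i] for i in range(t, 256)) / (w1 * total)
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic patch: dark "fundus background" plus a brighter druse-like blob.
rng = random.Random(0)
background = [min(255, max(0, rng.gauss(60, 10))) for _ in range(3900)]
lesion = [min(255, max(0, rng.gauss(190, 10))) for _ in range(100)]
pixels = background + lesion

t = otsu_threshold(pixels)
drusen_pixels = sum(1 for p in pixels if p > t)
```

With well-separated intensity modes, the chosen threshold falls in the gap between background and lesion grey levels, so the thresholded pixel count recovers the bright blob.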
Abstract:
The nature of Discrete-Event Simulation (DES) and the use of DES in organisations are changing. Two important developments are the use of Visual Interactive Modelling systems and the use of DES in Business Process Management (BPM) projects. Survey research is presented showing that, despite these developments, usage of DES remains relatively low owing to a lack of knowledge of the benefits of the technique. This paper considers two factors that could lead to a greater achievement and appreciation of the full benefit of DES and thus to greater usage. Firstly, when DES is used to investigate social systems, a 'soft' approach, applied both in the process of undertaking a simulation project and in the interpretation of the findings, may generate more knowledge from the DES intervention and thus increase its benefit to businesses. Secondly, in order to assess the full range of outcomes of DES, the technique could be considered from the perspective of an information-processing tool within the organisation. This allows outcomes to be considered under the three modes of organisational information use, sense making, knowledge creating and decision making, which relate to the theoretical areas of knowledge management, organisational learning and decision making respectively. The association of DES with these popular techniques could further increase its usage in business.
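For readers unfamiliar with the mechanics of DES itself, a minimal event-queue simulation of a single-server queue is sketched below; the arrival and service rates are arbitrary illustrative values, not drawn from the survey.

```python
import heapq
import random

def simulate_queue(n_customers=1000, arrival_rate=1.0, service_rate=1.5, seed=1):
    """Minimal discrete-event simulation of a FIFO single-server queue:
    arrival events are popped in time order from a priority queue, and the
    server's next-free time is advanced as each customer is served."""
    rng = random.Random(seed)
    events = []  # min-heap of (time, kind) pairs
    t = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)      # Poisson arrival process
        heapq.heappush(events, (t, "arrival"))
    server_free_at = 0.0
    total_wait = 0.0
    while events:
        now, kind = heapq.heappop(events)       # next event in time order
        start = max(now, server_free_at)        # wait if server is busy
        total_wait += start - now
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

mean_wait = simulate_queue()  # average time spent waiting for service
```

The event queue is the defining feature of DES: simulated time jumps from event to event rather than advancing in fixed steps.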
Abstract:
Recent studies of US industrial modernisation programmes argue that difficulties in establishing long-term relationships with users prevent the programmes from facilitating the development of innovation capabilities. This paper supports that argument through a survey of the Japanese research institutes on which the US programmes were modelled. In view of information asymmetries in their use, it tests the hypothesis that small and medium-sized firms start using the research institutes through 'low information gap' services and gradually move on to 'high information gap' services that often require more absorptive capacity. This is demonstrated both under one-to-one relationships and between groups of firms and a research institute.
Abstract:
Background: Self-tests are those where an individual can obtain a result without recourse to a health professional, either by getting a result immediately or by sending a sample to a laboratory that returns the result directly. Self-tests can be diagnostic, for disease monitoring, or both. There are currently tests for more than 20 different conditions available to the UK public, and self-testing is marketed as a way of alerting people to serious health problems so they can seek medical help. Almost nothing is known about the extent to which people self-test for cancer or why they do so. Self-tests for cancer could alter perceptions of risk and health behaviour, cause psychological morbidity and have a significant impact on the demand for healthcare. This study aims to gain an understanding of the frequency of self-testing for cancer and the characteristics of users. Methods: Cross-sectional survey. Adults registered in participating general practices in the West Midlands Region will be asked to complete a questionnaire collecting socio-demographic information and basic data regarding previous and potential future use of self-test kits. The only exclusions will be people to whom the GP feels it would be inappropriate to send a questionnaire, for example because they are unable to give informed consent. Freepost envelopes will be included and non-responders will receive one reminder. Standardised prevalence rates will be estimated. Discussion: Cancer-related self-tests, currently available from pharmacies or over the Internet, include faecal occult blood tests (related to bowel cancer), prostate specific antigen tests (related to prostate cancer), breast cancer kits (a self-examination guide) and haematuria tests (related to urinary tract cancers).
The effect of an increase in self-testing for cancer is unknown but may be considerable: it may affect the delivery of population based screening programmes; empower patients or cause unnecessary anxiety; reduce costs on existing healthcare services or increase demand to investigate patients with positive test results. It is important that more is known about the characteristics of those who are using self-tests if we are to determine the potential impact on health services and the public. © 2006 Wilson et al; licensee BioMed Central Ltd.
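The standardised prevalence rates mentioned in the Methods can be illustrated with a direct-standardisation sketch; all counts and weights below are hypothetical, not study data.

```python
# Hypothetical age-stratified results: (age band, self-test users, respondents).
study = [("18-44", 30, 1200), ("45-64", 45, 900), ("65+", 25, 400)]

# Hypothetical standard-population weights for the same bands (sum to 1).
std_weights = {"18-44": 0.45, "45-64": 0.35, "65+": 0.20}

# Direct standardisation: weight each stratum-specific rate by the
# standard population's share of that stratum, then sum.
standardized = sum(std_weights[band] * users / n for band, users, n in study)

# Crude rate for comparison: total users over total respondents.
crude = sum(u for _, u, _ in study) / sum(n for _, _, n in study)
```

Standardisation removes differences in age structure between the responding sample and a reference population, so rates become comparable across practices.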
Abstract:
Experiments which combine different groups or factors and use ANOVA are a powerful method of investigation in applied microbiology. ANOVA enables not only the effects of individual factors to be estimated but also their interactions, information which cannot readily be obtained when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the number of replications required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA is a more important indicator of the 'power' of the experiment than the number of replicates. A good rule is to ensure, where possible, that sufficient replication is present to achieve 15 DF for each error term of the ANOVA. Finally, it is important to consider the design of the experiment, because this determines the appropriate ANOVA to use. Some of the most common experimental designs used in the biosciences, and their relevant ANOVAs, are discussed. If there is doubt about which ANOVA to use, the researcher should seek advice from a statistician with experience of research in applied microbiology.
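The 15 DF guideline can be checked mechanically. For a fully replicated factorial design, the error DF is the number of treatment combinations multiplied by (replicates - 1); the sketch below computes this and finds the smallest replication meeting the target.

```python
def error_df(levels, n_reps):
    """Error degrees of freedom for a fully replicated factorial design:
    (product of factor levels) * (replicates - 1)."""
    cells = 1
    for a in levels:
        cells *= a
    return cells * (n_reps - 1)

def min_replicates(levels, target_df=15):
    """Smallest number of replicates per cell giving at least target_df
    error degrees of freedom."""
    n = 2
    while error_df(levels, n) < target_df:
        n += 1
    return n

# Example: a 3 x 2 factorial has 6 cells, so n = 2 replicates give only
# 6 error DF; 4 replicates are needed to reach the 15 DF guideline.
```

The same arithmetic explains why adding a factor can be cheaper than adding replicates: extra cells multiply into the error DF directly.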
Abstract:
This paper presents a Decision Support System framework based on Constraint Logic Programming and offers suggestions for using RFID technology to improve several of the critical procedures involved. The paper suggests that a widely distributed and semi-structured network of waste-producing and waste-collecting/processing enterprises can improve their planning both through the proposed Decision Support System and by implementing RFID technology to update and validate information in a continuous manner. © 2010 IEEE.
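As a toy illustration of the constraint-based planning idea (not the paper's actual formulation), the sketch below assigns waste producers to processing sites by backtracking search under capacity constraints; all names and quantities are invented.

```python
# Hypothetical network: producers with tonnes of waste (imagined as
# RFID-validated figures) and sites with capacity limits.
producers = {"P1": 4, "P2": 3, "P3": 5}
capacity = {"SiteA": 7, "SiteB": 6}

def assign(remaining, load):
    """Backtracking search: try each site for the next producer and
    recurse; undo (backtrack) when a branch violates a capacity."""
    if not remaining:
        return {}
    name, amount = remaining[0]
    for site, cap in capacity.items():
        if load.get(site, 0) + amount <= cap:
            rest = assign(remaining[1:], {**load, site: load.get(site, 0) + amount})
            if rest is not None:
                return {name: site, **rest}
    return None  # no feasible assignment on this branch

plan = assign(list(producers.items()), {})
```

Constraint Logic Programming systems generalise this pattern with constraint propagation, but the feasibility check against declared capacities is the same core idea.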
Abstract:
This work presents significant development into chaotic mixing induced through periodic boundaries and twisting flows. Three-dimensional closed and throughput domains are shown to exhibit chaotic motion under both time-periodic and time-independent boundary motions. A property is developed, originating from a signature of chaos, sensitive dependence on initial conditions, which successfully quantifies the degree of disorder within the mixing systems presented and enables comparisons of the disorder across ranges of operating parameters. This work omits physical experimental results but presents a significant computational investigation into chaotic systems using commercial computational fluid dynamics (CFD) techniques. Physical experiments with chaotic mixing systems are, by their very nature, difficult to extract information from beyond the recognition that disorder does, does not, or partially occurs. The initial aim of this work is to establish whether previously published physical experimental results can be accurately simulated using commercial CFD techniques. This is shown to be possible for simple two-dimensional systems with time-periodic wall movements. From this, and from subsequent macroscopic and microscopic observations of flow regimes, a simple explanation is developed for how boundary operating parameters affect system disorder. Consider the classic two-dimensional rectangular cavity with time-periodic velocity of the upper and lower walls, causing two opposing streamline motions: the degree of disorder within the system is related to the magnitude of displacement of individual particles within these opposing streamlines. This rationale is then employed to develop and investigate more complex three-dimensional mixing systems that exhibit throughput and time independence, and are therefore more realistic and a significant advance towards designing chaotic mixers for the process industries.
Domains inducing chaotic motion through twisting flows are also briefly considered. This work concludes by offering possible advancements to the property developed to quantify disorder and suggestions of domains and associated boundary conditions that are expected to produce chaotic mixing.
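The idea of quantifying disorder through sensitive dependence on initial conditions can be illustrated with a simple time-periodic system. The sketch below uses the Chirikov standard map as a stand-in (not the thesis's CFD domains) and estimates a finite-time divergence rate for two nearby tracers.

```python
import math

def standard_map(x, p, K):
    """Chirikov standard map: a classic time-periodic system that becomes
    chaotic as the forcing parameter K grows."""
    p = (p + K * math.sin(x)) % (2 * math.pi)
    x = (x + p) % (2 * math.pi)
    return x, p

def divergence_rate(K, d0=1e-8, steps=200):
    """Crude finite-time measure of sensitive dependence: track two tracers
    started d0 apart and average the per-step log growth of separation,
    renormalising so the pair stays infinitesimally close."""
    x1, p1 = 1.0, 1.0
    x2, p2 = 1.0 + d0, 1.0
    total = 0.0
    for _ in range(steps):
        x1, p1 = standard_map(x1, p1, K)
        x2, p2 = standard_map(x2, p2, K)
        # Wrapped separation on the torus.
        dx = math.atan2(math.sin(x2 - x1), math.cos(x2 - x1))
        dp = math.atan2(math.sin(p2 - p1), math.cos(p2 - p1))
        d = math.hypot(dx, dp)
        total += math.log(d / d0)
        x2 = x1 + dx * d0 / d  # renormalise back to separation d0
        p2 = p1 + dp * d0 / d
    return total / steps

# Disorder grows with forcing: near-integrable K=0.1 vs strongly chaotic K=5.
low = divergence_rate(0.1)
high = divergence_rate(5.0)
```

A positive average log-growth rate signals exponential separation of neighbouring tracers, which is the "signature of chaos" the quantification property builds on.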
Abstract:
The further development of the use of NMR relaxation times in chemical, biological and medical research has perhaps been curtailed by the length of time these measurements often take. The DESPOT (Driven Equilibrium Single Pulse Observation of T1) method has been developed, which reduces the time required to make a T1 measurement by a factor of up to 100. The technique has been studied extensively herein and the thesis contains recommendations for its successful experimental application. Modified DESPOT-type equations are also presented for use when T2 relaxation is incomplete or where off-resonance effects are thought to be significant. A recently reported application of the DESPOT technique to MR imaging gave good initial results but suffered from the fact that the images were derived from spin systems that were not driven to equilibrium. An approach which allows equilibrium to be obtained with only one non-acquisition sequence is presented herein and should prove invaluable in variable-contrast imaging. A DESPOT-type approach has also been successfully applied to the measurement of T1ρ, which can be determined significantly faster in this way than by the classical method. The new method also provides a value for T1 simultaneously, and the technique should therefore prove valuable in intermediate-energy-barrier chemical exchange studies; it also raises the possibility of obtaining simultaneous T1 and T1ρ MR images. The DESPOT technique depends on rapid multipulsing at nutation angles normally less than 90°. Work in this area has highlighted the possible time saving for spectral acquisition over the classical (90°-5T1)n technique. A new method based on these principles has been developed which permits rapid multipulsing of samples to give T1 and M0 ratio information in a time only slightly longer than would be required to determine the M0 ratio alone using the classical technique. In ¹H-decoupled ¹³C spectroscopy, the method also gives nOe ratio information for the individual absorptions in the spectrum.
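The DESPOT T1 estimate rests on linearising the steady-state signal equation S = M0 sin(a)(1 - E1)/(1 - E1 cos(a)) with E1 = exp(-TR/T1), which is linear in the variables (S/tan a, S/sin a) with slope E1. A minimal sketch with noise-free synthetic data (the M0, T1 and TR values are illustrative, not from the thesis) is:

```python
import math

def despot1_t1(signals, flip_angles_deg, tr):
    """DESPOT-style T1 estimate: regress S/sin(a) on S/tan(a); the slope
    is E1 = exp(-TR/T1), so T1 = -TR / ln(slope)."""
    xs = [s / math.tan(math.radians(a)) for s, a in zip(signals, flip_angles_deg)]
    ys = [s / math.sin(math.radians(a)) for s, a in zip(signals, flip_angles_deg)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return -tr / math.log(slope)

# Synthetic steady-state signals: M0 = 1000, T1 = 800 ms, TR = 50 ms.
tr, t1_true, m0 = 50.0, 800.0, 1000.0
e1 = math.exp(-tr / t1_true)
angles = [5.0, 15.0, 30.0, 60.0]
signals = [m0 * math.sin(math.radians(a)) * (1 - e1) /
           (1 - e1 * math.cos(math.radians(a))) for a in angles]

t1_est = despot1_t1(signals, angles, tr)
```

Because each flip-angle acquisition is rapid, the full T1 fit needs far less instrument time than a classical inversion-recovery series, which is the speed advantage the abstract describes.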
Abstract:
Lead in petrol has been identified as a health hazard, and attempts are being made to create a lead-free atmosphere. Through an intensive study, a review is made of the various options available to the automobile and petroleum industries. The economic and atmospheric penalties, coupled with automobile fuel consumption trends, are calculated and presented in both graphical and tabulated form. Experimental measurements of carbon monoxide (CO) and hydrocarbon (HC) emissions are also presented for certain selected fuels. The reduction in CO and HC emissions with the employment of a three-way catalyst is also discussed. All tests were carried out on a Fiat 127A engine at wide-open throttle and the standard timing setting; a Froude dynamometer was used to vary engine speed. With the introduction of lead-free petrol, interest in combustion chamber deposits in spark ignition engines has been renewed. These deposits cause octane requirement increase (ORI), or a rise in engine knock, and decreased volumetric efficiency. The detrimental effect of the deposits has been attributed to the physical volume of the deposit and to changes in heat transfer. This study attempts to assess why leaded deposits, though often greater in mass and volume, yield relatively lower ORI than lead-free deposits under identical operating conditions. This has been carried out by identifying differences in the physical nature of the deposits and then by measuring their thermal conductivity and permeability. The measured thermal conductivity results are later used in a mathematical model to determine heat transfer rates and the temperature variation across the engine wall and deposit. For the model, the walls of the combustion cylinder and its top are assumed to be free of deposit, the major deposit being on the piston head.
Seven different heat transfer equations are formulated describing heat flow at each part of the four-stroke cycle, and the variation of the cylinder wall area exposed to the gas mixture is accounted for. The heat transfer equations are solved using numerical methods and the temperature variations across the wall identified. Though the calculations have been carried out for one particular moment in the cycle, similar calculations are possible for every degree of crank angle, so the location of the maximum temperatures throughout the cycle may also be determined. In conclusion, thermal conductivity values of leaded and lead-free deposits have been found. The fundamental concepts of a mathematical model with great potential have been formulated, and it is hoped that with future work it may be used in simulations for different engine construction materials and motor fuels, leading to better design of future prototype engines.
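The series thermal-resistance idea underlying the deposit heat-transfer model can be sketched for steady one-dimensional conduction; all thicknesses and conductivities below are assumed round numbers for illustration, not measured values from the thesis.

```python
def heat_flux(t_hot, t_cold, layers):
    """Steady 1-D conduction through layers in series:
    q = dT / sum(L/k), with layers given as (thickness m, conductivity W/mK)."""
    resistance = sum(thickness / k for thickness, k in layers)
    return (t_hot - t_cold) / resistance

# Assumed values: a thin, poorly conducting deposit on a steel wall.
deposit = (0.0003, 0.5)   # 0.3 mm deposit layer, low conductivity
wall = (0.006, 50.0)      # 6 mm steel wall

q = heat_flux(900.0, 400.0, [deposit, wall])  # gas-side vs coolant-side temps

# Temperature drop across each layer is q * (L/k); the thin deposit takes
# most of the drop because its thermal resistance dominates.
dt_deposit = q * deposit[0] / deposit[1]
dt_wall = q * wall[0] / wall[1]
```

This is why deposit conductivity matters so much in the model: a low-conductivity layer a fraction of a millimetre thick can dominate the wall's total thermal resistance and raise surface temperatures.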
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language, without syntactic or semantic extensions, into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency which enables a simplified second model for implementing the compiling process. A set of principles is also presented that, if followed, maximises the potential level of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronisation of objects. Further, the model is sufficiently complete that a compiler can be, and has been, built in practice. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented is based upon information hiding, sharing and containment of objects, and the dividing up of methods on the basis of a command/query division. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles arise naturally from good programming practice. Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e.
no parallel primitives are added, and the parallel program is modelled to execute with equivalent semantics to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
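The command/query division can be illustrated outside Eiffel. The Python sketch below (an analogy, not the thesis's compiler output) separates state-changing commands from side-effect-free queries; because queries do not mutate the object, calls to them can safely overlap, which is the source of the extra parallelism the principles aim to expose.

```python
from concurrent.futures import ThreadPoolExecutor

class Account:
    """Methods divided on command/query lines: commands mutate state and
    must be serialised per object; queries are read-only and can run
    concurrently without interfering with each other."""
    def __init__(self, balance):
        self._balance = balance

    # Command: changes state, returns nothing.
    def deposit(self, amount):
        self._balance += amount

    # Queries: side-effect free, freely parallelisable.
    def balance(self):
        return self._balance

    def can_withdraw(self, amount):
        return amount <= self._balance

acct = Account(100)
acct.deposit(50)  # commands applied in sequence
with ThreadPoolExecutor() as pool:  # queries dispatched concurrently
    results = list(pool.map(acct.can_withdraw, [10, 150, 200]))
```

A compiler that can prove a method is a query knows the call cannot conflict with other queries on the same object, so only command calls need locking or message serialisation.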