904 results for Use of Market Information
Abstract:
Purpose – The purpose of this paper is to investigate the joint effects of market orientation (MO) and corporate social responsibility (CSR) on firm performance. Design/methodology/approach – Data were collected via a questionnaire survey of star-rated hotels in China, and a total of 143 valid responses were received. The hypotheses were tested using structural equation modelling with maximum likelihood estimation. Findings – It was found that although both MO and CSR can enhance performance, once the effects of CSR are accounted for, the direct effects of MO on performance diminish to the point of being almost non-existent. Although this result may reflect the fact that the research was conducted in China, a country where CSR might be crucially important to performance given the country's socialist legacy, it nonetheless provides strong evidence that MO's impact on organizational performance is mediated by CSR. Research limitations/implications – The main limitations include the use of cross-sectional data, the subjective measurement of performance and the uniqueness of the research setting (China). The findings provide an additional important insight into the processes by which a market-oriented culture is transformed into superior organizational performance. Originality/value – This paper is one of the first to examine the joint effects of MO and CSR on business performance. The empirical evidence from China adds to the existing literature on the respective importance of MO and CSR.
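The mediation pattern reported in the findings can be illustrated, under stated assumptions, with a small regression sketch. The paper itself uses structural equation modelling; the synthetic data, the variable names mo, csr and perf, and the effect sizes below are illustrative only.

```python
# Illustrative mediation sketch (not the paper's SEM analysis): fit
# performance on MO alone, then on MO plus CSR, and compare coefficients.
# All data, names and effect sizes are synthetic assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 143                                                   # same size as the survey sample
mo = rng.normal(size=n)                                   # market orientation score
csr = 0.7 * mo + rng.normal(scale=0.5, size=n)            # CSR partly driven by MO
perf = 0.6 * csr + 0.05 * mo + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"mo": mo, "csr": csr, "perf": perf})

total = smf.ols("perf ~ mo", data=df).fit()               # MO alone
full = smf.ols("perf ~ mo + csr", data=df).fit()          # MO with CSR controlled for

# The MO coefficient shrinks once CSR enters the model -- the pattern the
# paper interprets as CSR mediating the MO-performance relationship.
print(total.params["mo"].round(3), full.params["mo"].round(3))
```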
Abstract:
Experiments that combine different groups or factors and are analysed using ANOVA are a powerful method of investigation in applied microbiology. ANOVA enables not only the effects of individual factors to be estimated but also their interactions, information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the number of replications required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA are a more important indicator of the ‘power’ of the experiment than the number of replicates. A good rule is to ensure, where possible, that sufficient replication is present to achieve 15 DF for each error term of the ANOVA. Finally, it is important to consider the design of the experiment, because this determines the appropriate ANOVA to use. Some of the most common experimental designs used in the biosciences, and their relevant ANOVAs, are discussed in the literature. If there is doubt about which ANOVA to use, the researcher should seek advice from a statistician with experience of research in applied microbiology.
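As a concrete illustration of the error-DF point: in an a × b factorial with r replicates, the error term has a·b·(r−1) DF, so a 3 × 2 design with 4 replicates gives 18 error DF, comfortably above the suggested 15. A minimal sketch (with hypothetical factor names and a placeholder response) follows.

```python
# Minimal two-way factorial ANOVA sketch with hypothetical factors A and B.
# Error DF for an a x b factorial with r replicates is a*b*(r-1); the text's
# guideline is to aim for at least 15 error DF.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

a, b, r = 3, 2, 4                          # levels of A, levels of B, replicates
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": np.repeat([f"a{i}" for i in range(a)], b * r),
    "B": np.tile(np.repeat([f"b{j}" for j in range(b)], r), a),
    "y": rng.normal(size=a * b * r),       # placeholder response, e.g. log CFU counts
})

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))     # main effects, interaction, residual DF
print("error DF:", a * b * (r - 1))        # 3*2*(4-1) = 18, above the 15 DF guideline
```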
Abstract:
This paper presents a Decision Support System framework based on Constraint Logic Programming and offers suggestions for using RFID technology to improve several of the critical procedures involved. The paper suggests that a widely distributed and semi-structured network of waste-producing and waste-collecting/processing enterprises can improve its planning both through the proposed Decision Support System and by implementing RFID technology to update and validate information in a continuous manner. © 2010 IEEE.
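The abstract gives no implementation detail, but the constraint-based planning idea can be sketched with a tiny assignment problem: waste producers are matched to processing sites subject to capacity constraints, with RFID-style readings imagined as the source of up-to-date load figures. Everything below (names, loads, capacities, and the plain-Python backtracking that stands in for a real Constraint Logic Programming engine) is a hypothetical illustration.

```python
# Hypothetical constraint-style assignment sketch: match waste loads (as they
# might be reported by RFID-tagged containers) to processing sites without
# exceeding site capacity. Pure-Python backtracking stands in for a CLP engine.
loads = {"producer_1": 4, "producer_2": 7, "producer_3": 5}   # tonnes, assumed
capacity = {"site_A": 10, "site_B": 8}                         # tonnes, assumed

def assign(remaining, plan):
    if not remaining:
        return plan
    producer, rest = remaining[0], remaining[1:]
    for site in capacity:
        used = sum(loads[p] for p, s in plan.items() if s == site)
        if used + loads[producer] <= capacity[site]:           # capacity constraint
            result = assign(rest, {**plan, producer: site})
            if result is not None:
                return result
    return None                                                # backtrack

print(assign(list(loads), {}))   # e.g. {'producer_1': 'site_A', 'producer_2': 'site_B', ...}
```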
Abstract:
This work presents significant developments in chaotic mixing induced through periodic boundaries and twisting flows. Three-dimensional closed and throughput domains are shown to exhibit chaotic motion under both time-periodic and time-independent boundary motions. A property is developed, originating from a signature of chaos, sensitive dependence on initial conditions, which successfully quantifies the degree of disorder within the mixing systems presented and enables comparisons of the disorder across ranges of operating parameters. This work omits physical experimental results but presents a significant computational investigation into chaotic systems using commercial computational fluid dynamics (CFD) techniques. Physical experiments with chaotic mixing systems are, by their very nature, limited in the information they can provide beyond the recognition that disorder does, does not, or partially occurs. The initial aim of this work is to observe whether it is possible to accurately simulate previously published physical experimental results using commercial CFD techniques. This is shown to be possible for simple two-dimensional systems with time-periodic wall movements. From this, and from subsequent macroscopic and microscopic observations of flow regimes, a simple explanation is developed for how boundary operating parameters affect the system disorder. Consider the classic two-dimensional rectangular cavity with time-periodic velocity of the upper and lower walls, causing two opposing streamline motions. The degree of disorder within the system is related to the magnitude of displacement of individual particles within these opposing streamlines. This rationale is then employed to develop and investigate more complex three-dimensional mixing systems that exhibit throughput and time independence and are therefore more realistic and a significant advance towards designing chaotic mixers for process industries. Domains inducing chaotic motion through twisting flows are also briefly considered. This work concludes by offering possible refinements to the property developed to quantify disorder, and suggestions of domains and associated boundary conditions that are expected to produce chaotic mixing.
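The disorder measure described above rests on sensitive dependence on initial conditions. A minimal way to see that idea numerically, without any CFD, is to advect two nearby tracer particles in a simple time-periodic model flow and watch their separation grow. The alternating sinusoidal shear used below is a standard chaotic-advection toy model, not the cavity or twisting-duct geometries of the thesis; the amplitudes and time steps are arbitrary assumptions.

```python
# Sketch of sensitive dependence on initial conditions in a time-periodic
# model flow: the flow alternates between a horizontal and a vertical
# sinusoidal shear each half period (a standard toy model, not the thesis's
# geometry). Rapid growth of the separation signals chaotic advection.
import numpy as np

def step(p, t, dt, period=1.0, amp=1.0):
    x, y = p
    if (t % period) < period / 2:
        x += amp * np.sin(2 * np.pi * y) * dt      # horizontal shear phase
    else:
        y += amp * np.sin(2 * np.pi * x) * dt      # vertical shear phase
    return np.array([x, y])

dt, n_steps = 0.01, 2000
a = np.array([0.30, 0.40])
b = a + np.array([1e-8, 0.0])                      # nearby initial condition

for i in range(n_steps):
    t = i * dt
    a, b = step(a, t, dt), step(b, t, dt)

separation = np.linalg.norm(a - b)
# Growth of the separation by many orders of magnitude above 1e-8 is the
# signature of a chaotic, well-mixed region.
print(f"final separation: {separation:.3e}")
```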
Abstract:
The further development of the use of NMR relaxation times in chemical, biological and medical research has perhaps been curtailed by the length of time these measurements often take. The DESPOT (Driven Equilibrium Single Pulse Observation of T1) method has been developed, which reduces the time required to make a T1 measurement by a factor of up to 100. The technique has been studied extensively herein and the thesis contains recommendations for its successful experimental application. Modified DESPOT-type equations for use when T2 relaxation is incomplete, or where off-resonance effects are thought to be significant, are also presented. A recently reported application of the DESPOT technique to MR imaging gave good initial results but suffered from the fact that the images were derived from spin systems that were not driven to equilibrium. An approach which allows equilibrium to be obtained with only one non-acquisition sequence is presented herein and should prove invaluable in variable-contrast imaging. A DESPOT-type approach has also been successfully applied to the measurement of T1; using this approach, T1 can be measured significantly faster than by the classical method. The new method also provides a value for T1 simultaneously, and the technique should therefore prove valuable in intermediate-energy-barrier chemical exchange studies. The method also gives rise to the possibility of obtaining simultaneous T1 and T1 MR images. The DESPOT technique depends on rapid multipulsing at nutation angles normally less than 90°. Work in this area has highlighted the possible time saving for spectral acquisition over the classical technique of n repeated (90°-5T1) cycles. A new method based on these principles has been developed which permits the rapid multipulsing of samples to give T1 and M0 ratio information. The time needed, however, is only slightly longer than would be required to determine the M0 ratio alone using the classical technique. In ¹H-decoupled ¹³C spectroscopy the method also gives nOe ratio information for the individual absorptions in the spectrum.
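The DESPOT idea of extracting T1 from a series of rapid acquisitions at different nutation angles can be sketched with the standard linearisation S/sin(a) = E1·(S/tan(a)) + M0·(1 − E1), where E1 = exp(−TR/T1), so a straight-line fit over the flip-angle series returns T1 and M0. The sketch below uses synthetic, noiseless signals and assumed values of TR, T1 and M0; it illustrates the fitting step only, not the thesis's modified equations.

```python
# DESPOT-style T1 estimation sketch using the standard linearisation
#   S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),  E1 = exp(-TR/T1).
# Signal values are synthetic; in practice they would come from rapid
# multipulse acquisitions at several nutation angles below 90 degrees.
import numpy as np

TR, true_T1, M0 = 0.05, 1.2, 100.0                 # seconds, seconds, a.u. (assumed)
alphas = np.radians([5, 10, 15, 20, 30])           # nutation angles

E1 = np.exp(-TR / true_T1)
S = M0 * (1 - E1) * np.sin(alphas) / (1 - E1 * np.cos(alphas))  # ideal signals

y = S / np.sin(alphas)
x = S / np.tan(alphas)
slope, intercept = np.polyfit(x, y, 1)             # slope estimates E1

T1_est = -TR / np.log(slope)
M0_est = intercept / (1 - slope)
print(f"estimated T1 = {T1_est:.3f} s, M0 = {M0_est:.1f}")      # ~1.2 s, ~100
```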
Abstract:
Lead in petrol has been identified as a health hazard and attempts are being made to create a lead-free atmosphere. Through an intensive study, a review is made of the various options available to the automobile and petroleum industries. The economic and atmospheric penalties, coupled with automobile fuel consumption trends, are calculated and presented in both graphical and tabulated form. Experimental measurements of carbon monoxide (CO) and hydrocarbon (HC) emissions are also presented for certain selected fuels. The reduction in CO and HC emissions with the use of a three-way catalyst is also discussed. All tests were carried out on a Fiat 127A engine at wide open throttle and the standard timing setting; a Froude dynamometer was used to vary engine speed. With the introduction of lead-free petrol, interest in combustion chamber deposits in spark ignition engines has been renewed. These deposits cause an octane requirement increase (ORI), or rise in engine knock, and decreased volumetric efficiency. The detrimental effect of the deposits has been attributed to the physical volume of the deposit and to changes in heat transfer. This study attempts to assess why leaded deposits, though often greater in mass and volume, yield relatively lower ORI than lead-free deposits under identical operating conditions. This has been carried out by identifying the differences in the physical nature of the deposits and then by measuring their thermal conductivity and permeability. The measured thermal conductivity results are later used in a mathematical model to determine heat transfer rates and the temperature variation across the engine wall and deposit. For the model, the walls of the combustion cylinder and the top are assumed to be free of engine deposit, the major deposit being on the piston head. Seven different heat transfer equations are formulated describing heat flow at each part of the four-stroke cycle, and the variation of cylinder wall area exposed to the gas mixture is accounted for. The heat transfer equations are solved using numerical methods and the temperature variations across the wall identified. Although the calculations have been carried out for one particular moment in the cycle, similar calculations are possible for every degree of crank angle, and thus further information regarding the location of maximum temperatures throughout the cycle may also be determined. In conclusion, thermal conductivity values of leaded and lead-free deposits have been found. The fundamental concepts of a mathematical model with great potential have been formulated, and it is hoped that with future work it may be used in simulations for different engine construction materials and motor fuels, leading to better design of future prototype engines.
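The role of deposit thermal conductivity in such a heat transfer model can be illustrated with a steady one-dimensional calculation: heat flowing from the combustion gas, through the deposit layer and the metal wall, to the coolant. All numerical values below are illustrative assumptions, not measurements from the thesis.

```python
# Steady 1-D heat flow through a deposit layer and the engine wall, with gas
# on one side and coolant on the other. All values are illustrative
# assumptions, not the thesis's measured data.
h_gas, h_cool = 500.0, 5000.0        # convective coefficients, W/m^2K (assumed)
T_gas, T_cool = 1200.0, 370.0        # gas and coolant temperatures, K (assumed)
L_dep, k_dep = 0.2e-3, 0.5           # deposit thickness (m) and conductivity (W/mK)
L_wall, k_wall = 6e-3, 50.0          # wall thickness (m) and conductivity (W/mK)

# Series thermal resistances per unit area: gas film, deposit, wall, coolant film.
R_total = 1 / h_gas + L_dep / k_dep + L_wall / k_wall + 1 / h_cool
q = (T_gas - T_cool) / R_total                        # heat flux, W/m^2

T_dep_surface = T_gas - q / h_gas                     # deposit surface temperature
T_metal_surface = T_dep_surface - q * L_dep / k_dep   # metal under the deposit

print(f"heat flux       : {q:,.0f} W/m^2")
print(f"deposit surface : {T_dep_surface:.0f} K")
print(f"metal surface   : {T_metal_surface:.0f} K")
# A lower deposit conductivity k_dep raises the deposit surface temperature,
# one route by which deposits can increase octane requirement.
```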
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency which enables a simplified second model for implementing the compilation process. There is a further presentation of principles that, if followed, maximise the potential levels of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronization of objects. Further, the model is sufficient that a compiler can be, and has been, practically built. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented is based upon information hiding, sharing and containment of objects, and the division of methods on the basis of a command/query split. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles arise naturally from good programming practice. Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e. no parallel primitives are added, and the parallel program is modelled to execute with semantics equivalent to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
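The command/query division mentioned in the programming principles lends itself to a small illustration: queries are side-effect-free and can safely be evaluated concurrently, while commands mutate state and must be kept separate. The thesis works in Eiffel; the Python sketch below is only an analogy, with a made-up Account class.

```python
# Illustration of the command/query division: queries have no side effects
# and can run concurrently; commands mutate state and are kept separate.
# The thesis targets Eiffel; this Python analogue is purely illustrative.
from concurrent.futures import ThreadPoolExecutor

class Account:
    def __init__(self, balance: float) -> None:
        self._balance = balance

    # command: changes state, returns nothing
    def deposit(self, amount: float) -> None:
        self._balance += amount

    # queries: read-only, safe to evaluate in parallel
    def balance(self) -> float:
        return self._balance

    def can_withdraw(self, amount: float) -> bool:
        return amount <= self._balance

acct = Account(100.0)
acct.deposit(25.0)                                   # commands run sequentially

with ThreadPoolExecutor() as pool:                   # queries may run concurrently
    futures = [pool.submit(acct.balance), pool.submit(acct.can_withdraw, 50.0)]
    print([f.result() for f in futures])             # [125.0, True]
```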
Abstract:
This thesis is concerned with the use of the synoptic approach within decision making concerning nuclear waste management. The synoptic approach to decision making refers to an approach to rational decision making that assumes, as an ideal, comprehensiveness of information and analysis. Two case studies are examined in which a high degree of synoptic analysis has been used within the decision making process. The case studies examined are the Windscale Inquiry into the decision to build the THORP reprocessing plant and the Nirex safety assessment of nuclear waste disposal. The case studies are used to test Lindblom's hypothesis that a synoptic approach to decision making is not achievable. In the first case study, Lindblom's hypothesis is tested through the evaluation of the decision to build the THORP plant, taken following the Windscale Inquiry. It is concluded that the incongruity of this decision supports Lindblom's hypothesis. However, it has been argued that the Inquiry should be seen as a legitimisation exercise for a decision that was effectively predetermined, rather than a rigorous synoptic analysis. Therefore, the Windscale Inquiry does not provide a robust test of the synoptic method. It was concluded that a methodology was required that allowed robust conclusions to be drawn despite the ambiguity of the role of the synoptic method in decision making. Thus, the methodology adopted for the second case study was modified. In this case study the synoptic method was evaluated directly. This was achieved through analysis of the cogency of the Nirex safety assessment. It was concluded that the failure of Nirex to provide a cogent synoptic analysis supported Lindblom's criticism of the synoptic method. Moreover, it was found that the synoptic method failed in the way that Lindblom predicted it would.
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
The aim of this project was to carry out a fundamental study to assess the potential of colour image analysis for use in investigations of fire-damaged concrete. This involved: (a) quantification (rather than purely visual assessment) of colour change as an indicator of the thermal history of concrete; (b) quantification of the nature and intensity of crack development as an indication of the thermal history of concrete, supporting and in addition to colour change observations; (c) further understanding of changes in the physical and chemical properties of aggregate and mortar matrix after heating; and (d) an indication of the relationship between cracking and non-destructive methods of testing, e.g. UPV or Schmidt hammer. Results showed that colour image analysis could be used to quantify the colour changes found when concrete is heated. Development of a red colour coincided with a significant reduction in compressive strength. Such measurements may be used to determine the thermal history of concrete by providing information on the temperature distribution that existed at the height of a fire. The actual colours observed depended on the types of cement and aggregate used to make the concrete; with some aggregates it may be more appropriate to analyse only the mortar matrix. Petrographic techniques may also be used to determine the nature and density of cracks developing at elevated temperatures, and values of crack density correlate well with measurements of residual compressive strength. Small differences in crack density were observed with different cements and aggregates, although good correlations were always found with the residual compressive strength. Taken together, these two techniques can provide further useful information for the evaluation of fire-damaged concrete, especially since petrographic analysis can also provide information on the quality of the original concrete, such as cement content and water/cement ratio. Concretes made with blended cements tended to show small differences in physical and chemical properties compared with those made with unblended cements. There is some evidence to suggest that a coarsening of the pore structure in blended cements may lead to the onset of cracking at lower temperatures. DTA/TGA was of little use in assessing the thermal history of concrete made with blended cements. Corner spalling and sloughing off, as observed in columns, were effectively reproduced in tests on small-scale specimens and the crack distributions measured. Relationships between compressive strength/cracking and non-destructive methods of testing are discussed, and an outline procedure for site investigations of fire-damaged concrete is described.
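The colour-change quantification can be sketched as a simple hue analysis: convert an image of the concrete surface to HSV and measure the proportion of pixels in a red/pink hue band, the discolouration commonly associated with heated concrete. The file name and the hue/saturation thresholds below are assumptions for illustration, not values from the project.

```python
# Sketch of colour image analysis for fire-damaged concrete: measure the
# fraction of pixels in a red/pink hue band. The file name and thresholds
# are assumptions, not values from the project.
import numpy as np
from PIL import Image
from matplotlib.colors import rgb_to_hsv

rgb = np.asarray(Image.open("core_section.jpg").convert("RGB")) / 255.0
hsv = rgb_to_hsv(rgb)                      # hue, saturation, value in [0, 1]
hue, sat = hsv[..., 0], hsv[..., 1]

# Treat strongly saturated pixels with hue near 0 (wrapping past 1) as red/pink.
red_mask = ((hue < 0.05) | (hue > 0.95)) & (sat > 0.25)
red_fraction = red_mask.mean()

# A larger red/pink fraction suggests the surface reached the temperature
# range commonly associated with red discolouration of heated concrete.
print(f"red/pink pixel fraction: {red_fraction:.1%}")
```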
Abstract:
In previous sea-surface variability studies, researchers have failed to utilise the full ERS-1 mission because of the varying orbital characteristics in each mission phase, and most have simply ignored the Ice and Geodetic phases. This project aims to introduce a technique which allows the straightforward use of all orbital phases, regardless of orbit type. The technique is based upon single-satellite crossovers. Unfortunately the ERS-1 orbital height is still poorly resolved (due to higher air drag and stronger gravitational effects) compared with that of TOPEX/Poseidon (T/P), so to make best use of the ERS-1 crossover data, corrections to the ERS-1 orbital heights are calculated by fitting a cubic spline to dual-crossover residuals with T/P. This correction is validated by comparison of dual-satellite crossovers with tide gauge data. The crossover processing technique is validated by comparing the extracted sea-surface variability information with that from T/P repeat-pass data. The two data sets are then combined into a single consistent data set for analysis of sea-surface variability patterns. These patterns are simplified by the use of an empirical orthogonal function decomposition, which breaks the signals into spatial modes that are then discussed separately. Further studies carried out on these data include an analysis of the characteristics of the annual signal, a discussion of evidence for Rossby wave propagation on a global basis, and finally an analysis of the evidence for global mean sea level rise.
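The empirical orthogonal function decomposition mentioned above is, in practice, a singular value decomposition of the space-time sea-surface-height anomaly matrix: the right singular vectors give the spatial modes and the left singular vectors (scaled by the singular values) their time evolution. A minimal sketch with a synthetic anomaly matrix follows; in the thesis the matrix would be built from the combined ERS-1 crossover and T/P repeat-pass data.

```python
# EOF decomposition sketch: SVD of a (time x space) sea-surface-height
# anomaly matrix. The anomaly field here is synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(1)
n_time, n_space = 120, 500
ssh = rng.normal(scale=0.05, size=(n_time, n_space))      # placeholder heights (m)

anom = ssh - ssh.mean(axis=0)                             # remove the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

variance_fraction = s**2 / np.sum(s**2)
spatial_modes = Vt            # each row: one spatial EOF pattern
amplitudes = U * s            # columns: time series of each mode

print("variance explained by first 3 modes:", variance_fraction[:3].round(3))
```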
Abstract:
SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to the combination of the image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid-cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting the maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
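One of the simplest texture measures that can be appended to the spectral bands before classification is the local standard deviation computed over a moving window. The sketch below shows the idea; the window size, the synthetic input band, and the stacking of features are arbitrary assumptions, not the segmentation methods evaluated in the thesis.

```python
# Sketch of a simple texture band: local standard deviation over a moving
# window, stacked with the spectral band before per-pixel classification.
# Window size and input band are arbitrary assumptions for illustration.
import numpy as np
from scipy.ndimage import uniform_filter

band = np.random.default_rng(2).random((512, 512)).astype(np.float32)  # placeholder band

win = 7                                        # texture extraction window (pixels)
mean = uniform_filter(band, size=win)
mean_sq = uniform_filter(band * band, size=win)
local_std = np.sqrt(np.clip(mean_sq - mean**2, 0, None))

# Stack the texture band with the original spectral band(s) so that a
# per-pixel classifier can use both tone and texture.
features = np.stack([band, local_std], axis=-1)
print(features.shape)                          # (512, 512, 2)
```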
Abstract:
Satellite information, in combination with conventional point-source measurements, can be a valuable source of data. This thesis is devoted to the spatial estimation of areal rainfall over a region using both measurements from dense and sparse networks of rain-gauges and images from meteorological satellites. A primary concern is to study the effects of such satellite-assisted rainfall estimates on the performance of rainfall-runoff models. Low-cost image processing systems and peripherals are used to process and manipulate the data. Both secondary and primary satellite images were used for analysis: the secondary data were obtained from the in-house satellite receiver and the primary data from an outside source. Ground truth data were obtained from the local Water Authority. A number of algorithms are presented that combine the satellite and conventional data sources to produce areal rainfall estimates, and the results are compared with some of the more traditional methodologies. The results indicate that satellite cloud information is valuable in the assessment of the spatial distribution of areal rainfall, for both half-hourly and daily estimates. It is also demonstrated how the performance of a simple multiple-regression rainfall-runoff model is improved when satellite cloud information is used as a separate input in addition to rainfall estimates from conventional means. The use of low-cost equipment, from image processing systems to satellite imagery, makes it possible for developing countries to introduce such systems in areas where the benefits are greatest.
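The improvement from adding satellite cloud information as a separate input to a simple multiple-regression rainfall-runoff model can be sketched with ordinary least squares: fit runoff on gauge rainfall alone, then on gauge rainfall plus a cloud-cover index, and compare the residual error. All arrays below are synthetic placeholders, not the thesis's catchment data.

```python
# Sketch of a multiple-regression rainfall-runoff model with and without a
# satellite cloud-cover index as an extra input. All data are placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 200
gauge_rain = rng.gamma(2.0, 2.0, n)                   # conventional rainfall estimate (mm)
cloud_index = 0.5 * gauge_rain + rng.normal(0, 1, n)  # satellite-derived cloud measure
runoff = 0.6 * gauge_rain + 0.3 * cloud_index + rng.normal(0, 1, n)

def fit_rmse(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.sqrt(np.mean((y - X1 @ coef) ** 2))

rmse_gauge = fit_rmse(gauge_rain[:, None], runoff)
rmse_both = fit_rmse(np.column_stack([gauge_rain, cloud_index]), runoff)
print(f"RMSE gauge only    : {rmse_gauge:.3f}")
print(f"RMSE gauge + cloud : {rmse_both:.3f}")        # typically lower
```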
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data-flow fan-in/out, and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By re-defining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
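The idea of re-defining 'classic' size and structure metrics for Prolog can be illustrated with a crude counter: non-comment lines of code, clause count, and a cyclomatic-complexity analogue based on decision points such as ';' (disjunction) and '->' (if-then). This is a deliberately simplified sketch in Python, not the counting rules actually used in the thesis.

```python
# Crude Prolog metrics sketch: non-comment lines of code, clause count and a
# cyclomatic-complexity analogue counting ';' and '->' decision points.
# Simplified counting rules for illustration only, not the thesis's definitions.

def prolog_metrics(source: str) -> dict:
    lines = [ln for ln in source.splitlines() if ln.strip()]
    code_lines = [ln for ln in lines if not ln.lstrip().startswith("%")]
    clauses = sum(ln.rstrip().endswith(".") for ln in code_lines)
    decisions = sum(ln.count(";") + ln.count("->") for ln in code_lines)
    return {
        "loc": len(code_lines),                 # non-comment, non-blank lines
        "clauses": clauses,                     # terms ending with '.'
        "cyclomatic": decisions + 1,            # decision points + 1, McCabe-style
    }

sample = """
% toy predicate
max(X, Y, Z) :- ( X >= Y -> Z = X ; Z = Y ).
len([], 0).
len([_|T], N) :- len(T, M), N is M + 1.
"""
print(prolog_metrics(sample))   # {'loc': 3, 'clauses': 3, 'cyclomatic': 3}
```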