Abstract:
This thesis is concerned with the role of diagenesis in forming ore deposits. Two sedimentary 'ore-types' have been examined: the Proterozoic copper-cobalt orebodies of the Konkola Basin on the Zambian Copperbelt, and the Permian Marl Slate of North East England. Facies analysis of the Konkola Basin shows the Ore-Shale to have formed in a subtidal to intertidal environment. A sequence of diagenetic events is outlined from which it is concluded that the sulphide ores are an integral part of the diagenetic process. Sulphur isotope data establish that the sulphides formed as a consequence of the bacterial reduction of sulphate, while the isotopic and geochemical composition of carbonates is shown to reflect changes in the compositions of diagenetic pore fluids. Geochemical studies indicate that the copper and cobalt bearing mineralising fluids probably had different sources. Veins which crosscut the orebodies contain hydrocarbon inclusions, and are shown to be of late diagenetic lateral secretion origin. Rb-Sr dating indicates that the Ore-Shale was subject to metamorphism at 529 ± 20 Myr. The sedimentology and petrology of the Marl Slate are described. Textural and geochemical studies suggest that much of the pyrite (framboidal) in the Marl Slate formed in an anoxic water column, while euhedral pyrite and base metal sulphides formed within the sediment during early diagenesis. Sulphur isotope data confirm that conditions were almost "ideal" for sulphide formation during Marl Slate deposition, the limiting factor in ore formation being the restricted supply of chalcophile elements. Carbon and oxygen isotope data, along with petrographic observations, indicate that much of the calcite and dolomite occurring in the Marl Slate is primary, and probably formed in isotopic equilibrium. A depositional model is proposed which explains all of the data presented and links the lithological variations with fluctuations in the anoxic/oxic boundary layer of the water column.
Abstract:
The objective of this research was to investigate the effects of normal aging and the additional effects of chronic exposure to two experimental diets, one enriched in aluminium, the other enriched in lecithin, on aspects of the behaviour and brain histology of the female mouse. The aluminium diet was administered in an attempt to develop a rodent model of Dementia of the Alzheimer Type (DAT). With normal aging, almost all assessed aspects of behaviour were found to be impaired. As regards cognition, selective impairments of single-trial passive avoidance and Morris place learning were observed. While all aspects of open-field behaviour were impaired, the degree of impairment was directly related to the degree of motoric complexity. Deficits were also observed on non-visual sensorimotor coordination tasks and in olfactory discrimination. Histologically, neuron loss, gliosis, vacuolation and congophilic angiopathy were observed in several of the brain regions/fibre tracts believed to contribute to the control of some of the assessed behaviours. The aluminium treatment had very selective effects on both behaviour and brain histology, inducing several features observed in DAT. Behaviourally, the treatment induced impaired spatial reference memory, reduced ambulation, disturbed olfactory function and induced the premature development of the senile pattern of swimming. Histologically, significant neuron loss and gliosis were observed in the hippocampus, entorhinal cortex, amygdala, medial septum, pyriform and prefrontal cortex. In addition, the brain distribution of congophilic angiopathy was significantly increased by the treatment. The lecithin treatment had effects on both non-cognitive and cognitive aspects of behaviour. The effects of aging on open-field ambulation and rearing were partially ameliorated by the treatment. A similar effect was observed for single-trial passive avoidance performance.
Age-dependent improvements in acquisition/retention were observed in 17-23 month mice and Morris place task performance was improved in 11 and 17 month mice. Histologically, a partial sparing of neurons in the cerebellum, hippocampus, entorhinal cortex and subiculum was observed.
Abstract:
The Report of the Robens Committee (1972), the Health and Safety at Work Act (1974) and the Safety Representatives and Safety Committees Regulations (1977) provide the framework within which this study of certain aspects of health and safety is carried out. The philosophy of self-regulation is considered and its development is set within an historical and an industrial relations perspective. The research uses a case study approach to examine the effectiveness of self-regulation in health and safety in a public sector organisation. Within this approach, methodological triangulation employs the techniques of interviews, questionnaires, observation and documentary analysis. The work is based in four departments of a Scottish Local Authority and particular attention is given to three of the main 'agents' of self-regulation - safety representatives, supervisors and safety committees and their interactions, strategies and effectiveness. A behavioural approach is taken in considering the attitudes, values, motives and interactions of safety representatives and management. Major internal and external factors, which interact and which influence the effectiveness of joint self-regulation of health and safety, are identified. It is emphasised that an organisation cannot be studied without consideration of the context within which it operates both locally and in the wider environment. One of these factors, organisational structure, is described as bureaucratic and the model of a Representative Bureaucracy described by Gouldner (1954) is compared with findings from the present study. An attempt is made to ascertain how closely the Local Authority fits Gouldner's model. This research contributes both to knowledge and to theory in the subject area by providing an in-depth study of self-regulation in a public sector organisation, which when compared with such studies as those of Beaumont (1980, 1981, 1982) highlights some of the differences between the public and private sectors. 
Both empirical data and hypothetical models are used to provide description and explanation of the operation of the health and safety system in the Local Authority. As data were collected during a dynamic period in economic, political and social terms, the research discusses some of the effects of the current economic recession upon safety organisation.
Abstract:
Differential perception of innovation is a research area which has been advocated as a suitable topic for study in recent years. It developed from the problems encountered within earlier perception of innovation studies which sought to establish what characteristics of an innovation affected the ease of its adoption. While some success was achieved in relating perception of innovation to adoption behaviour, variability encountered within groups expected to perceive innovation similarly suggested that the needs and experiences of the potential adopter were significantly affecting the research findings. This analysis was supported by both sociological and psychological perceptual research. The present study sought to identify the presence of differential perception of innovation and explore the nature of the process. It was decided to base the research in an organisational context and to concentrate upon manufacturing innovation. It has been recognised that such adoption of technological innovation is commonly the product of a collective decision-making process, involving individuals from a variety of occupational backgrounds, both in terms of occupational speciality and level within the hierarchy. Such roles appeared likely to influence perception of technological innovation significantly, as gathered through an appropriate measure, and were readily identifiable. Data were collected by means of a face-to-face card presentation technique, a questionnaire and case study material. Differential perception of innovation effects were apparent in the results, many similarities and differences of perception being related to the needs and experiences of the individuals studied. Phenomenological analysis, which recognises the total nature of experience in influencing behaviour, offered the best means of explaining the findings.
It was also clear that the bureaucratic model of role definition was not applicable to the area studied; it seems likely that such definitions are weaker under conditions of uncertainty, such as those encountered in innovative decision-making.
Abstract:
The purpose of this thesis is twofold: to examine the validity of the rotating-field and cross-field theories of the single-phase induction motor when applied to a cage rotor machine; and to examine the extent to which skin effect is likely to modify the characteristics of a cage rotor machine. A mathematical analysis is presented for a single-phase induction motor in which the rotor parameters are modified by skin effect. Although this is based on the usual type of ideal machine, a new form of model rotor allows approximations for skin effect phenomena to be included as an integral part of the analysis. Performance equations appropriate to the rotating-field and cross-field theories are deduced, and the corresponding explanations for the steady-state mode of operation are critically examined. The evaluation of the winding currents and developed torque is simplified by the introduction of new dimensionless factors which are functions of the resistance/reactance ratios of the rotor and the speed. Tables of the factors are included for selected numerical values of the parameter ratios, and these are used to deduce typical operating characteristics for both cage and wound rotor machines. It is shown that a qualitative explanation of the mode of operation of a cage rotor machine is obtained from either theory; but the operating characteristics must be deduced from the performance equations of the rotating-field theory, because of the restrictions on the values of the rotor parameters imposed by skin effect.
Abstract:
Purpose: The purpose of this paper is to investigate enterprise resource planning (ERP) systems development and emerging practices in the management of enterprises (i.e. parts of companies working with parts of other companies to deliver a complex product and/or service) and to identify any apparent correlations. Suitable a priori contingency frameworks are then used and extended to explain the apparent correlations, and guidance is provided for researchers and practitioners to deliver better strategic, structural and operational competitive advantage through this approach, coined here as the "enterprization of operations". Design/methodology/approach: Theoretical induction uses a new empirical longitudinal case study from Zoomlion (a Chinese manufacturing company) built using an adapted form of template analysis to produce a new contingency framework. Findings: Three main types of enterprise and three main types of ERP system are defined, and the correlations between them are explained. Two relevant a priori frameworks are used to induct a new contingency model to support the enterprization of operations, known as the dynamic enterprise reference grid for ERP (DERG-ERP). Research limitations/implications: The findings are based on one longitudinal case study. Further case studies are currently being conducted in the UK and China. Practical implications: The new contingency model, the DERG-ERP, serves as a guide for ERP vendors, information systems managers and operations managers hoping to grow and sustain their competitive advantage with respect to effective enterprise strategy, enterprise structure and ERP systems. Originality/value: This research explains how ERP systems and the effective management of enterprises should develop in order to sustain competitive advantage with respect to enterprise strategy, enterprise structure and ERP systems use. © Emerald Group Publishing Limited.
Abstract:
The target of no-reference (NR) image quality assessment (IQA) is to establish a computational model to predict the visual quality of an image. The existing prominent method is based on natural scene statistics (NSS). It uses the joint and marginal distributions of wavelet coefficients for IQA. However, this method is only applicable to JPEG2000 compressed images. Since the wavelet transform fails to capture the directional information of images, an improved NSS model is established using contourlets. In this paper, the contourlet transform is applied to the NSS of images, and the relationship between contourlet coefficients is represented by their joint distribution. The statistics of contourlet coefficients are suitable indicators of variation in image quality. In addition, an image-dependent threshold is adopted to reduce the effect of image content on the statistical model. Finally, image quality can be evaluated by combining the extracted features in each subband nonlinearly. Our algorithm is trained and tested on the LIVE database II. Experimental results demonstrate that the proposed algorithm is superior to the conventional NSS model and can be applied to different distortions. © 2009 Elsevier B.V. All rights reserved.
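The NSS feature pipeline this abstract describes (directional subband decomposition, an image-dependent threshold, per-subband statistics combined into a quality score) can be sketched in Python. This is a minimal illustration only: it substitutes a one-level Haar split for the contourlet transform and a median-based threshold for the paper's image-dependent one; both substitutions, and the feature chosen, are assumptions, not the paper's method.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2D Haar split, used here as a stand-in for the
    directional contourlet transform (a simplifying assumption)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation subband
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, [lh, hl, hh]

def nss_features(img):
    """One statistic per detail subband, keeping only coefficients above
    an image-dependent (median) threshold to suppress content effects."""
    _, details = haar_subbands(np.asarray(img, dtype=float))
    feats = []
    for sb in details:
        coeffs = np.abs(sb).ravel()
        t = np.median(coeffs)            # image-dependent threshold
        sig = coeffs[coeffs > t]         # "significant" coefficients
        feats.append(sig.mean() if sig.size else 0.0)
    return np.array(feats)
```

A regressor (the paper combines the features nonlinearly) would then map such feature vectors to subjective scores from a database like LIVE II.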
Abstract:
* The research has been partially supported by INFRAWEBS - IST FP62003/IST/2.3.2.3 Research Project No. 511723 and “Technologies of the Information Society for Knowledge Processing and Management” - IIT-BAS Research Project No. 010061.
Abstract:
This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; then the vertebra's 3D pose was estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement. © 2010 P. Bifulco et al.
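One common similarity measure for comparing a Digitally Reconstructed Radiograph against a fluoroscopic frame is normalised cross-correlation; the abstract does not name the measure actually used, so the sketch below is illustrative only, showing the quantity a pose optimiser would maximise over candidate rigid poses.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two images, flattened.
    Returns a value in [-1, 1]; 1 means identical up to a positive
    affine intensity change."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom else 0.0
```

In a 2D-3D registration loop, `ncc(drr(pose), fluoro_frame)` would be evaluated for each candidate pose, with the DRR regenerated from the CT volume at every step.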
Abstract:
In this paper a full analytic model for pause intensity (PI), a no-reference metric for video quality assessment, is presented. The model is built upon the video playout buffer behavior at the client side and also encompasses the characteristics of a TCP network. Video streaming via TCP produces impairments in play continuity, which are not typically reflected in current objective metrics such as PSNR and SSIM. Recently the buffer underrun frequency/probability has been used to characterize the buffer behavior and as a measurement for performance optimization. But we show, using subjective testing, that underrun frequency cannot reflect the viewers' quality of experience for TCP-based streaming. We also demonstrate that PI is a comprehensive metric made up of a combination of phenomena observed in the playout buffer. The analytical model in this work is verified with simulations carried out on ns-2, showing that the two results are closely matched. The effectiveness of the PI metric has also been proved by subjective testing on a range of video clips, where PI values exhibit a good correlation with the viewers' opinion scores. © 2012 IEEE.
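The playout-buffer behaviour underlying the PI metric can be illustrated with a toy simulation. This is not the paper's analytic model: the slot-based buffer, the rebuffering threshold, and the PI proxy (paused time over total time) are all simplifying assumptions made here to show why pause count alone misses duration effects.

```python
def pause_intensity(arrivals, play_rate, start_threshold):
    """Toy playout-buffer simulation (illustrative only).
    arrivals[i] = bits received in slot i; playback consumes play_rate
    bits per slot once the buffer first fills to start_threshold, and
    stalls (rebuffers) whenever it underruns.
    Returns (paused_slots, pause_count, pi) where pi is total paused
    time divided by session length -- a hypothetical PI proxy."""
    buf = 0.0
    playing = False
    pauses = 0
    paused_slots = 0
    for a in arrivals:
        buf += a
        if not playing:
            paused_slots += 1          # startup delay or rebuffering
            if buf >= start_threshold:
                playing = True
            continue
        if buf < play_rate:            # underrun: playback stalls
            playing = False
            pauses += 1
            paused_slots += 1
        else:
            buf -= play_rate
    total = len(arrivals)
    return paused_slots, pauses, paused_slots / total
```

Note that a trace with one long stall and a trace with one brief stall have the same underrun count but very different paused-time fractions, which is the distinction the PI metric is designed to capture.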
Abstract:
Starting from the Hungarian Light Trap Network database of Operophtera brumata L. records collected between 1973 and 2000, we introduce a simple theta-logistic population dynamical model based on endogenous and exogenous factors only. We create an indicator set from which we can choose the elements that improve the fitting results most effectively. Then we extend the basic model with additive climatic factors. The parameter optimization is based on the minimized root mean square error, and the best model is chosen according to the Akaike Information Criterion. Finally, we run the calibrated extended model with daily outputs of the regional climate model RegCM3.1, regarding 1961-1990 as the reference period and 2021-2050 together with 2071-2100 as future predictions. The results of the three time intervals are fitted with Beta distributions and compared statistically. The expected changes are discussed.
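The modelling ingredients named in the abstract (a theta-logistic update, RMSE-based fitting, AIC-based model selection) can be sketched as follows. The discrete-time form, function names, and the least-squares AIC formula are generic textbook choices assumed here; the thesis's additive climatic terms are omitted.

```python
import numpy as np

def theta_logistic(n0, r, k_cap, theta, steps):
    """Discrete theta-logistic growth:
    N_{t+1} = N_t * exp(r * (1 - (N_t / K)**theta)).
    A generic endogenous-only form; exogenous climatic terms would be
    added to the exponent in the extended model."""
    n = [float(n0)]
    for _ in range(steps):
        nt = n[-1]
        n.append(nt * np.exp(r * (1.0 - (nt / k_cap) ** theta)))
    return np.array(n)

def rmse(obs, pred):
    """Root mean square error, the fitting criterion named above."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def aic_ls(n_obs, rss, n_params):
    """AIC for least-squares fits: n*ln(RSS/n) + 2k (standard form)."""
    return n_obs * np.log(rss / n_obs) + 2 * n_params
```

Candidate models (with and without each climatic indicator) would be fitted by minimising `rmse` and then ranked by `aic_ls`.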
Abstract:
The objective of this study was to develop a model to predict transport and fate of gasoline components of environmental concern in the Miami River by mathematically simulating the movement of dissolved benzene, toluene, xylene (BTX), and methyl-tertiary-butyl ether (MTBE) occurring from minor gasoline spills in the inter-tidal zone of the river. Computer codes were based on mathematical algorithms that acknowledge the role of advective and dispersive physical phenomena along the river and prevailing phase transformations of BTX and MTBE. Phase transformations included volatilization and settling. The model used a finite-difference scheme of steady-state conditions, with a set of numerical equations that was solved by two numerical methods: Gauss-Seidel and Jacobi iterations. A numerical validation process was conducted by comparing the results from both methods with analytical and numerical reference solutions. Since similar trends were achieved after the numerical validation process, it was concluded that the computer codes were algorithmically correct. The Gauss-Seidel iteration yielded a faster convergence rate than the Jacobi iteration; hence, this code was selected to further develop the computer program and software. The model was then analyzed for its sensitivity. It was found that the model was very sensitive to wind speed but not to sediment settling velocity. Computer software was developed with the model code embedded. The software was provided with two major user-friendly visualized forms, one to interface with the database files and the other to execute and present the graphical and tabulated results. For all predicted concentrations of BTX and MTBE, the maximum concentrations were over an order of magnitude lower than current drinking water standards.
It should be pointed out, however, that concentrations smaller than these reported standards, although not harmful to humans, may be very harmful to organisms at the trophic levels of the Miami River ecosystem and associated waters. This computer model can be used for the rapid assessment and management of the effects of minor gasoline spills on inter-tidal riverine water quality.
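The two iterative solvers compared in this study can be contrasted on a generic diagonally dominant tridiagonal system, an assumed stand-in for the river discretisation rather than the thesis's actual equations. The sketch below typically reproduces the finding that Gauss-Seidel converges in fewer sweeps than Jacobi, because it uses each updated value immediately.

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=10000):
    """Jacobi iteration: every update uses only values from the
    previous sweep."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                 # diagonal entries
    R = A - np.diag(D)             # off-diagonal remainder
    for it in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-8, max_iter=10000):
    """Gauss-Seidel iteration: updated components are used as soon
    as they are computed within the same sweep."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it
    return x, max_iter
```

On diffusion-type tridiagonal systems, the Gauss-Seidel spectral radius is roughly the square of Jacobi's, so it needs about half as many sweeps, which is consistent with the convergence behaviour reported above.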
Abstract:
The main objective of this work is to develop a quasi three-dimensional numerical model to simulate stony debris flows, considering a continuum fluid phase, composed by water and fine sediments, and a non-continuum phase including large particles, such as pebbles and boulders. Large particles are treated in a Lagrangian frame of reference using the Discrete Element Method, the fluid phase is based on the Eulerian approach, using the Finite Element Method to solve the depth-averaged Navier-Stokes equations in two horizontal dimensions. The particle’s equations of motion are in three dimensions. The model simulates particle-particle collisions and wall-particle collisions, taking into account that particles are immersed in a fluid. Bingham and Cross rheological models are used for the continuum phase. Both formulations provide very stable results, even in the range of very low shear rates. Bingham formulation is better able to simulate the stopping stage of the fluid when applied shear stresses are low. Results of numerical simulations have been compared with data from laboratory experiments on a flume-fan prototype. Results show that the model is capable of simulating the motion of big particles moving in the fluid flow, handling dense particulate flows and avoiding overlap among particles. An application to simulate debris flow events that occurred in Northern Venezuela in 1999 shows that the model could replicate the main boulder accumulation areas that were surveyed by the USGS. Uniqueness of this research is the integration of mud flow and stony debris movement in a single modeling tool that can be used for planning and management of debris flow prone areas.
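The Bingham and Cross rheological laws mentioned for the continuum phase can be written down directly as effective-viscosity functions. In the Python sketch below, a Papanastasiou-style exponential regularisation keeps the Bingham viscosity finite at very low shear rates; the regularisation choice and all parameter names are assumptions made here, not taken from the thesis.

```python
import numpy as np

def bingham_viscosity(gamma_dot, tau_y, mu_p, m=1000.0):
    """Regularised Bingham effective viscosity:
    eta = mu_p + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot.
    tau_y is the yield stress, mu_p the plastic viscosity, m the
    regularisation parameter (larger m -> closer to ideal Bingham)."""
    g = np.maximum(np.asarray(gamma_dot, dtype=float), 1e-12)
    return mu_p + tau_y * (1.0 - np.exp(-m * g)) / g

def cross_viscosity(gamma_dot, mu0, mu_inf, k_c, n_c):
    """Cross model: viscosity interpolates between mu0 at low shear
    rates and mu_inf at high shear rates."""
    g = np.asarray(gamma_dot, dtype=float)
    return mu_inf + (mu0 - mu_inf) / (1.0 + (k_c * g) ** n_c)
```

Both laws give a high, bounded viscosity as the shear rate approaches zero, which is why they remain stable at very low shear rates and why the Bingham form can represent the stopping stage under low applied stresses.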
Abstract:
Continuous high-resolution mass accumulation rates (MAR) and X-ray fluorescence (XRF) measurements from marine sediment records in the Bay of Biscay (NE Atlantic) have allowed the determination of the timing and the amplitude of the 'Fleuve Manche' (Channel River) discharges during glacial stages MIS 10, MIS 8, MIS 6 and MIS 4-2. These results have yielded detailed insight into the Middle and Late Pleistocene glaciations in Europe and the drainage network of the western and central European rivers over the last 350 kyr. This study provides clear evidence that the 'Fleuve Manche' connected the southern North Sea basin with the Bay of Biscay during each glacial period and reveals that 'Fleuve Manche' activity during the glaciations MIS 10 and MIS 8 was significantly less than during MIS 6 and MIS 2. We correlate the significant 'Fleuve Manche' activity, detected during MIS 6 and MIS 2, with the extensive Saalian (Drenthe Substage) and the Weichselian glaciations, respectively, confirming that the major Elsterian glaciation precedes the glacial MIS 10. In detail, massive 'Fleuve Manche' discharges occurred at ca 155 ka (mid-MIS 6) and during Termination I, while no significant discharges are found during Termination II. It is assumed that a substantial retreat of the European ice sheet at ca 155 kyr, followed by the formation of ice-free conditions between the British Isles and Scandinavia until Termination II, allowed meltwater to flow northwards through the North Sea basin during the second part of the MIS 6. We assume that this glacial pattern corresponds to the Warthe Substage glacial maximum, therefore indicating that the data presented here equates to the Drenthe and the Warthe glacial advances at ca 175-160 ka and ca 150-140 ka, respectively. 
Finally, the correlation of our records with ODP site 980 reveals that massive 'Fleuve Manche' discharges, related to partial or complete melting of the European ice masses, were synchronous with strong decreases in both the rate of deep-water formation and the strength of the Atlantic thermohaline circulation. 'Fleuve Manche' discharges over the last 350 kyr probably participated, with other meltwater sources, in the collapse of the thermohaline circulation by freshening the northern Atlantic surface water.