915 results for fits


Abstract:

Current methods for retrieving near-surface winds from scatterometer observations over the ocean surface require a forward sensor model which maps the wind vector to the measured backscatter. This paper develops a hybrid neural network forward model, which retains the physical understanding embodied in CMOD4, but incorporates greater flexibility, allowing a better fit to the observations. By introducing a separate model for the midbeam and using a common model for the fore and aft beams, we show a significant improvement in local wind vector retrieval. The hybrid model also fits the scatterometer observations more closely. The model is trained in a Bayesian framework, accounting for the noise on the wind vector inputs. We show that adding more high wind speed observations in the training set improves wind vector retrieval at high wind speeds without compromising performance at medium or low wind speeds. Copyright 2001 by the American Geophysical Union.
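The abstract above describes, rather than specifies, the hybrid forward model, so the following is only a rough sketch of the general idea: a flexible regressor mapping wind vector and viewing geometry to backscatter. The feature choice, network size and the synthetic "truth" function are assumptions made for illustration; none of it is CMOD4 or the paper's actual architecture or Bayesian training scheme.

```python
# Illustrative sketch only: a flexible regressor standing in for a forward
# model that maps (wind speed, relative direction, incidence angle) to
# normalised radar cross-section. The synthetic "truth" below is NOT CMOD4;
# it merely mimics the qualitative shape of such a model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic inputs: wind speed (m/s), wind direction relative to the look
# angle (deg) and beam incidence angle (deg).
n = 2000
speed = rng.uniform(2.0, 25.0, n)
rel_dir = rng.uniform(0.0, 360.0, n)
incidence = rng.uniform(18.0, 58.0, n)
X = np.column_stack([speed, np.cos(np.radians(rel_dir)),
                     np.cos(2 * np.radians(rel_dir)), incidence])

# Hypothetical backscatter (dB) with observation noise, loosely imitating
# the upwind/downwind/crosswind modulation.
sigma0 = (-30.0 + 12.0 * np.log10(speed)
          + 1.5 * np.cos(np.radians(rel_dir))
          + 3.0 * np.cos(2 * np.radians(rel_dir))
          - 0.15 * (incidence - 40.0))
sigma0 += rng.normal(0.0, 0.3, n)

# A small neural network plays the role of the flexible forward model.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                     random_state=0).fit(X, sigma0)
print("training RMSE (dB):",
      np.sqrt(np.mean((model.predict(X) - sigma0) ** 2)))
```

A hybrid version along the lines the abstract describes would add such a learned term as a correction to a physical baseline rather than learning the whole mapping from scratch.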

Abstract:

In this work, the angular distributions for elastic and inelastic scattering of fast neutrons in fusion reactor materials have been studied. Lithium and lead are likely to be common components of fusion reactor wall designs. The measurements were performed using an associated particle time-of-flight technique. The 14 and 14.44 MeV neutrons were produced by the T(d,n)4He reaction, with deuterons being accelerated in a 150 keV SAMES type J accelerator at Aston and in the 3 MeV Dynamitron at the Joint Radiation Centre, Birmingham, respectively. The associated alpha-particles and fast neutrons were detected by means of a plastic scintillator mounted on a fast focused photomultiplier tube. The samples used were extended flat plates of thicknesses up to 0.9 mean free paths for lithium and 1.562 mean free paths for lead. The differential elastic scattering cross-sections were measured for 14 MeV neutrons for various thicknesses of lithium and lead in the angular range from zero to 90°. In addition, the angular distributions of elastically scattered 14.44 MeV neutrons from lithium samples were studied in the same angular range. Inelastic scattering to the 4.63 MeV state in 7Li and to the 2.6 MeV and 4.1 MeV states in 208Pb has been measured. The results are compared to ENDF/B-IV data files and to previous measurements. For the lead samples, the differential neutron scattering cross-sections for discrete 3 MeV energy ranges and the angular distributions were measured. The increase in effective cross-section due to multiple scattering effects as the sample thickness increased was found to be predicted by the empirical relation ... A good fit to the experimental data was obtained using the universal constant ... The differential elastic scattering cross-section data for thin samples of lithium and lead were analyzed in terms of optical model calculations using the computer code RAROMP. Parameter search procedures produced good fits to the cross-sections. For the case of thick samples of lithium and lead, the measured angular distributions of the scattered neutrons were compared to the predictions of the continuous slowing down model.

Abstract:

The work described in this thesis is the development of an ultrasonic tomogram to provide outlines of cross-sections of the ulna in vivo. This instrument, used in conjunction with X-ray densitometry previously developed in this department, would provide actual bone mineral density to a high resolution. It was hoped that the accuracy of the plot obtained from the tomogram would exceed that of existing ultrasonic techniques by about five times. Repeat measurements with these instruments to follow bone mineral changes would involve very low X-ray doses. A theoretical study has been made of acoustic diffraction, using a geometrical transform applicable to the integration of three different Green's functions, for axisymmetric systems. This has involved the derivation of one of these in a form amenable to computation. It is considered that this function fits the boundary conditions occurring in medical ultrasonography more closely than those used previously. A three dimensional plot of the pressure field using this function has been made for a ring transducer, in addition to that for disc transducers using all three functions. It has been shown how the theory may be extended to investigate the nature and magnitude of the particle velocity, at any point in the field, for the three functions mentioned. From this study, a concept of diffraction fronts has been developed, which has made it possible to determine energy flow also in a diffracting system. Intensity has been displayed in a manner similar to that used for pressure. Plots have been made of diffraction fronts and energy flow direction lines.

Abstract:

In the present work, the elastic scattering of fast neutrons from iron and concrete samples was studied at incident neutron energies of 14.0 and 14.4 MeV, using a neutron spectrometer based on the associated particle time-of-flight technique. These samples were chosen because of their importance in the design of fusion reactor shielding and construction. Using the S.A.M.E.S. accelerator and the 3 MV Dynamitron accelerator at the Radiation Centre, 14.0 and 14.4 MeV neutrons were produced by the T(d,n)4He reaction at incident deuteron energies of 140 keV and 900 keV (mass III ions) respectively. The time of origin of the neutron was determined by detecting the associated alpha particles. The samples used were extended flat plates of thicknesses up to 1.73 mean free paths for iron and 2.3 mean free paths for concrete. The associated alpha particles and fast neutrons were detected by means of a plastic scintillator mounted on a fast focused photomultiplier tube. The differential neutron elastic scattering cross-sections were measured for 14 MeV neutrons in various thicknesses of iron and concrete in the angular range from zero to 90°. In addition, the angular distributions of 14.4 MeV neutrons after passing through extended samples of iron were measured at several scattering angles in the same angular range. The measurements obtained for the thin sample of iron were compared with the results of Coon et al. The differential cross-sections for the thin iron sample were also analyzed with the optical model using the computer code RAROMP. For the concrete sample, the angular distribution of the thin sample was compared with the cross-sections calculated from the major constituent elements of concrete, and with the predicted values of the optical model for those elements. No published data could be found to compare with the results of the concrete differential cross-sections. In the case of thick samples of iron and concrete, the number of scattered neutrons was compared with a phenomenological calculation based on the continuous slowing down model. The variation of measured cross-sections with sample thickness was found to follow the empirical relation σ = σ0·e^(αx). By using the universal constant "K", good fits to the experimental data were obtained. In parallel with the work at 14.0 and 14.4 MeV, an associated particle time-of-flight spectrometer was investigated which used the 2H(d,n)3He reaction to produce 3.02 MeV neutrons at an incident deuteron energy of 1 MeV.
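The empirical relation quoted above, σ = σ0·e^(αx), is a two-parameter exponential in the sample thickness x, so it can be checked against measurements with an ordinary least-squares fit. The sketch below uses invented thickness and cross-section values purely to show the fitting step; they are not the thesis data, and the fitted α is not the universal constant "K".

```python
# Fit the empirical relation sigma = sigma0 * exp(alpha * x), where x is the
# sample thickness in mean free paths. The data points are synthetic
# placeholders, not the measured iron/concrete cross-sections.
import numpy as np
from scipy.optimize import curve_fit

def effective_cross_section(x, sigma0, alpha):
    return sigma0 * np.exp(alpha * x)

thickness = np.array([0.3, 0.6, 0.9, 1.3, 1.73])       # mean free paths
sigma_meas = np.array([2.45, 2.58, 2.74, 2.95, 3.18])   # barns (illustrative)

popt, pcov = curve_fit(effective_cross_section, thickness, sigma_meas,
                       p0=(2.4, 0.1))
sigma0_fit, alpha_fit = popt
print(f"sigma0 = {sigma0_fit:.3f} b, alpha = {alpha_fit:.3f} per mfp")
```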

Abstract:

Attention defines our mental ability to select and respond to stimuli, internal or external, on the basis of behavioural goals in the presence of competing, behaviourally irrelevant, stimuli. The frontal and parietal cortices are generally agreed to be involved with attentional processing, in what is termed the 'fronto-parietal' network. The left parietal cortex has been seen as the site for temporal attentional processing, whereas the right parietal cortex has been seen as the site for spatial attentional processing. There is much debate about when the modulation of the primary visual cortex occurs, whether it is modulated in the feedforward sweep of processing or modulated by feedback projections from extrastriate and higher cortical areas. MEG and psychophysical measurements were used to investigate spatially selective covert attention. Dual-task and cue-based paradigms were used. It was found that the posterior parietal cortex (PPC), in particular the SPL and IPL, was the main site of activation during these experiments, and that the left parietal lobe was activated more strongly than the right parietal lobe throughout. The levels of activation in both parietal and occipital areas were modulated in accordance with attentional demands. It is likely that spatially selective covert attention is dominated by the left parietal lobe, and that this takes the form of the proposed sensory-perceptual lateralization within the parietal lobes. Another form of lateralization is proposed, termed the motor-processing lateralization, the side of dominance being determined by handedness, being reversed in left- relative to right-handers. In terms of the modulation of the primary visual cortex, it was found that it is unlikely that V1 is modulated initially; rather the modulation takes the form of feedback from higher extrastriate and parietal areas. This fits with the widely accepted idea of preattentive visual processing, which is itself incompatible with initial modulation of V1.

Abstract:

This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author decides that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model, the first part of which tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
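The skeleton-plus-segments organisation described above lends itself to a short illustration: a fixed driver into which interchangeable components are plugged and swapped. The class names and the trivial scheduler below are invented for the sketch and are not taken from the thesis package.

```python
# Minimal sketch of a skeleton structure with swappable segments.
# Segment names and behaviour are invented for illustration only.
from abc import ABC, abstractmethod

class Segment(ABC):
    """A pluggable model component, e.g. a scheduler or a memory module."""
    @abstractmethod
    def step(self, clock: int) -> None: ...

class RoundRobinScheduler(Segment):
    def __init__(self, n_cpus: int) -> None:
        self.n_cpus = n_cpus
    def step(self, clock: int) -> None:
        print(f"t={clock}: dispatch job to CPU {clock % self.n_cpus}")

class Skeleton:
    """Fixed driver: advances the clock and calls every installed segment."""
    def __init__(self) -> None:
        self.segments: dict[str, Segment] = {}
    def install(self, name: str, segment: Segment) -> None:
        self.segments[name] = segment          # add or swap a segment
    def run(self, ticks: int) -> None:
        for clock in range(ticks):
            for segment in self.segments.values():
                segment.step(clock)

model = Skeleton()
model.install("scheduler", RoundRobinScheduler(n_cpus=2))
model.run(ticks=4)
```

Swapping RoundRobinScheduler for another Segment subclass changes the modelled policy without touching the skeleton, which is the kind of adaptability the abstract emphasises.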

Abstract:

This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal to noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
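One standard dynamical-systems tool that applies directly to a single unaveraged channel is time-delay embedding, which reconstructs a state-space trajectory from a single observable. The sketch below is a generic illustration of that step, with a synthetic signal and arbitrary lag and dimension; it does not reproduce the thesis's actual processing pipeline.

```python
# Sketch: time-delay embedding of a single-channel signal, one of the
# standard dynamical-systems tools for unaveraged recordings. The signal
# here is synthetic; lag and dimension are illustrative choices.
import numpy as np

def delay_embed(x, dim, lag):
    """Return the delay-coordinate matrix of shape (N - (dim-1)*lag, dim)."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

t = np.arange(0, 10, 1e-3)                      # 10 s at 1 kHz
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

embedded = delay_embed(signal, dim=5, lag=25)
print(embedded.shape)        # points in the reconstructed state space
```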

Abstract:

In recent years there has been a great effort to combine the technologies and techniques of GIS and process models. This project examines the issues of linking a standard current generation 2½d GIS with several existing model codes. The focus for the project has been the Shropshire Groundwater Scheme, which is being developed to augment flow in the River Severn during drought periods by pumping water from the Shropshire Aquifer. Previous authors have demonstrated that under certain circumstances pumping could reduce the soil moisture available for crops. This project follows earlier work at Aston in which the effects of drawdown were delineated and quantified through the development of a software package that implemented a technique which brought together the significant spatially varying parameters. This technique is repeated here, but using a standard GIS called GRASS. The GIS proved adequate for the task, and the added functionality provided by the general purpose GIS - the data capture, manipulation and visualisation facilities - was of great benefit. The bulk of the project is concerned with examining the issues of the linkage of GIS and environmental process models. To this end a groundwater model (Modflow) and a soil moisture model (SWMS2D) were linked to the GIS and a crop model was implemented within the GIS. A loose-linked approach was adopted, and secondary and surrogate data were used wherever possible. The implications of this relate to: justification of a loose-linked versus a closely integrated approach; how, technically, to achieve the linkage; how to reconcile the different data models used by the GIS and the process models; control of the movement of data between models of environmental subsystems, in order to model the total system; the advantages and disadvantages of using a current generation GIS as a medium for linking environmental process models; generation of input data, including the use of geostatistics, stochastic simulation, remote sensing, regression equations and mapped data; issues of accuracy, uncertainty and simply providing adequate data for the complex models; and how such a modelling system fits into an organisational framework.
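A loose-linked arrangement of the kind discussed above typically moves data between the GIS and the process models as files rather than through shared code. The sketch below shows that pattern in the abstract; the file names, grid contents and the commented-out model command are hypothetical and are not the actual GRASS, Modflow or SWMS2D interfaces.

```python
# Sketch of a loosely linked GIS-model exchange: data move between the GIS
# and the process model as plain files rather than through shared code.
# File names, grid layout and the model command are hypothetical.
import csv

def export_grid(path: str, grid: list[list[float]]) -> None:
    """Write a raster exported from the GIS as a simple CSV grid."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(grid)

def import_grid(path: str) -> list[list[float]]:
    """Read a grid of model results back for display in the GIS."""
    with open(path, newline="") as f:
        return [[float(v) for v in row] for row in csv.reader(f)]

heads = [[10.2, 10.1], [9.8, 9.7]]        # illustrative groundwater heads
export_grid("model_input.csv", heads)
# A real linkage would now launch the process model as a separate program,
# e.g. subprocess.run(["groundwater_model", "model_input.csv",
# "model_output.csv"]), and then read its output back:
# results = import_grid("model_output.csv")
```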

Abstract:

The Report of the Robens Committee (1972), the Health and Safety at Work Act (1974) and the Safety Representatives and Safety Committees Regulations (1977) provide the framework within which this study of certain aspects of health and safety is carried out. The philosophy of self-regulation is considered and its development is set within an historical and an industrial relations perspective. The research uses a case study approach to examine the effectiveness of self-regulation in health and safety in a public sector organisation. Within this approach, methodological triangulation employs the techniques of interviews, questionnaires, observation and documentary analysis. The work is based in four departments of a Scottish Local Authority and particular attention is given to three of the main 'agents' of self-regulation - safety representatives, supervisors and safety committees and their interactions, strategies and effectiveness. A behavioural approach is taken in considering the attitudes, values, motives and interactions of safety representatives and management. Major internal and external factors, which interact and which influence the effectiveness of joint self-regulation of health and safety, are identified. It is emphasised that an organisation cannot be studied without consideration of the context within which it operates both locally and in the wider environment. One of these factors, organisational structure, is described as bureaucratic and the model of a Representative Bureaucracy described by Gouldner (1954) is compared with findings from the present study. An attempt is made to ascertain how closely the Local Authority fits Gouldner's model. This research contributes both to knowledge and to theory in the subject area by providing an in-depth study of self-regulation in a public sector organisation, which when compared with such studies as those of Beaumont (1980, 1981, 1982) highlights some of the differences between the public and private sectors. Both empirical data and hypothetical models are used to provide description and explanation of the operation of the health and safety system in the Local Authority. As data were collected during a dynamic period in economic, political and social terms, the research discusses some of the effects of the current economic recession upon safety organisation.

Abstract:

The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well known and robust frequency domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
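The parallel idea, farming independent curve fits out to separate processors, can be illustrated with a modern multiprocessing analogue. The sketch below fits a simple single-degree-of-freedom FRF model to several synthetic channels in parallel; it is not the Rational Fraction Polynomial or Ibrahim Time Domain algorithm, nor the Transputer implementation described in the thesis.

```python
# Sketch: farming per-channel curve fits out to parallel workers. This is a
# modern multiprocessing analogue of the idea only; the FRF data and the
# single-degree-of-freedom model are synthetic.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import curve_fit

FREQ = np.linspace(1.0, 100.0, 400)             # Hz

def sdof_magnitude(f, fn, zeta, a):
    """Magnitude of a single-degree-of-freedom FRF (illustrative model)."""
    r = f / fn
    return a / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

def fit_channel(response):
    # Start the search near the peak of the measured response.
    p0 = (FREQ[np.argmax(response)], 0.05, 1.0)
    popt, _ = curve_fit(sdof_magnitude, FREQ, response, p0=p0)
    return popt                                  # natural freq, damping, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    channels = [sdof_magnitude(FREQ, fn, 0.03, 1.0)
                + rng.normal(0, 0.02, FREQ.size)
                for fn in (35.0, 42.0, 55.0, 61.0)]
    with Pool(processes=4) as pool:
        results = pool.map(fit_channel, channels)
    for fn, zeta, a in results:
        print(f"fn = {fn:.1f} Hz, zeta = {zeta:.3f}")
```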

Abstract:

A nonlinear dynamic model of microbial growth is established based on the theories of the diffusion response of thermodynamics and the chemotactic response of biology. In addition to the two traditional variables, i.e. the density of bacteria and the concentration of attractant, the pH value, a crucial influencing factor for microbial growth, is also considered in this model. The pH effect on microbial growth is taken as a Gaussian function G0·exp(-(f - fc)²/G1), where G0, G1 and fc are constants, f represents the pH value and fc represents the critical pH value best suited to microbial growth. To study the effects of the reproduction rate of the bacteria and the pH value on the stability of the system, three parameters a, G0 and G1 are studied in detail, where a denotes the reproduction rate of the bacteria, G0 denotes the intensity of the pH effect on microbial growth and G1 denotes the bacterial adaptability to the pH value. When the effect of the pH value of the solution in which the microorganisms live is ignored in the governing equations of the model, the microbial system is more stable with larger a. When the effect of bacterial chemotaxis is ignored, the microbial system is more stable with larger G1 and more unstable with larger G0 for f0 > fc. However, the stability of the microbial system is almost unaffected by variations in G0 and G1, and it is always stable, for f0 < fc under the conditions assumed in this paper. In the whole system model, the system is more unstable with larger G1 and more stable with larger G0 for f0 < fc, and more stable with larger G1 and more unstable with larger G0 for f0 > fc. However, the system is more unstable with larger a for f0 < fc, and its stability is almost unaffected by a for f0 > fc. The results obtained in this study provide a biophysical insight into the growth and stability behavior of microorganisms.
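The pH term described above is a simple Gaussian weighting, G0·exp(-(f - fc)²/G1), and its qualitative effect can be illustrated by attaching it to a toy growth law. The sketch below does exactly that; the logistic equation and all parameter values are stand-ins chosen for illustration, not the paper's full density/attractant/pH model.

```python
# Sketch of the Gaussian pH modulation term, G0 * exp(-(f - fc)**2 / G1),
# applied to a simple logistic growth law. Parameter values are illustrative,
# not those of the paper.
import numpy as np

def ph_factor(f, g0=1.0, g1=0.5, fc=7.0):
    """Gaussian weighting of growth by pH: maximal at the critical pH fc."""
    return g0 * np.exp(-(f - fc) ** 2 / g1)

def simulate(f, a=0.8, carrying_capacity=1.0, b0=0.01, dt=0.01, steps=2000):
    """Logistic growth with the reproduction rate scaled by the pH factor."""
    b = b0
    for _ in range(steps):
        b += dt * a * ph_factor(f) * b * (1 - b / carrying_capacity)
    return b

for ph in (6.0, 7.0, 8.5):
    print(f"pH {ph}: final density {simulate(ph):.3f}")
```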

Abstract:

Research shows that consumers are readily embracing the Internet to buy products. This paper proposes that, in the case of grocery shopping, this may lead to sub-optimal decisions at the household level. Online decisions on what, where and from whom to buy are normally taken by one individual. In the case of grocery shopping, however, decisions need to be ‘vetted’ by ‘other’ individuals within the household. Such household-wide decisions influence how information technologies and systems for commerce should be designed and managed for optimum decision making. This paper argues, unlike previous research, that e-grocery retailing is failing to grow to its full potential not solely because of the ‘classical’ hazards and perceived risks associated with doing grocery shopping online, but because e-grocery retailing strategy has failed to acknowledge the micro-household level specificities that affect decision making. Our exploratory research is based on empirical evidence collected through telephone interviews. We offer an insight into how e-grocery ‘fits’ and is ‘disrupted’ by the reality of day-to-day consumption decision making at the household level. Our main finding is to advocate a more role-neutral, multi-user and multi-technology approach to e-grocery shopping which re-defines the concept of the main shopper/decision maker, thereby reconceptualising the ‘shopping logic’ for grocery products.

Abstract:

It is conventional wisdom that collusion is more likely the fewer firms there are in a market and the more symmetric they are. This is often theoretically justified in terms of a repeated non-cooperative game. Although that model fits more easily with tacit than overt collusion, the impression sometimes given is that ‘one model fits all’. Moreover, the empirical literature offers few stylized facts on the most simple of questions—how few are few and how symmetric is symmetric? This paper attempts to fill this gap while also exploring the interface of tacit and overt collusion, albeit in an indirect way. First, it identifies the empirical model of tacit collusion that the European Commission appears to have employed in coordinated effects merger cases—apparently only fairly symmetric duopolies fit the bill. Second, it shows that, intriguingly, the same story emerges from the quite different experimental literature on tacit collusion. This offers a stark contrast with the findings for a sample of prosecuted cartels; on average, these involve six members (often more) and size asymmetries among members are often considerable. The indirect nature of this ‘evidence’ cautions against definitive conclusions; nevertheless, the contrast offers little comfort for those who believe that the same model does, more or less, fit all.

Abstract:

Purpose: To develop a new scheme for efficiently recording the key parameters of gas permeable contact lens (GP) fits based on current consensus. Methods: Over 100 established GP fitters and educators met to discuss the parameters proposed in educational material for evaluating GP fit and agreed on the key parameters that should be recorded. The accuracy and variability of evaluating the fluorescein pattern of a GP fit were determined by having 35 experienced contact lens practitioners from across the world grade 5 images of a range of fits, and the topographer simulations of the same fits, in random order using the proposed scheme. The accuracy of the grading was compared to objective image analysis of the fluorescein intensity of the same images. Results: The key information to record to adequately describe the fit of a GP lens was agreed as: the manufacturer, brand and lens parameters; settling time; comfort on a 5-point scale; centration; movement on blink on a ±2 scale; and the Primary Fluorescein Pattern in the central, mid-peripheral and edge regions of the lens, averaged along the horizontal and vertical lens axes, on a ±2 scale. On average, 50-60% of practitioners selected the median grade when subjectively rating fluorescein intensity, and this was correlated with the objective quantification (r = 0.602, p < 0.001). Objective grading suggested that median fluorescein intensity was generally symmetrical along the horizontal meridian, as it was along the vertical meridian, but this was not the case for subjective grading. Simulated fluorescein patterns were subjectively and objectively graded as being less intense than real photographs (p < 0.01). Conclusion: GP fit recording can be standardised and simplified to enhance GP practice. © 2013 British Contact Lens Association.
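The agreed recording scheme is essentially a small, fixed set of fields, which can be captured directly as a record structure. The sketch below is one plausible reading of those fields; the field names, types and example values are assumptions, not an official data format from the paper.

```python
# Sketch of a record structure for the agreed GP fit parameters. Field names
# and allowed values are a plausible reading of the scheme described above,
# not an official data format.
from dataclasses import dataclass

@dataclass
class GPFitRecord:
    manufacturer: str
    brand: str
    lens_parameters: str            # e.g. BOZR / diameter / power as supplied
    settling_time_min: float
    comfort: int                    # 5-point scale, 1-5
    centration: str                 # free-text or coded description
    movement_on_blink: int          # -2 .. +2
    fluorescein_central: int        # -2 .. +2, averaged over horizontal/vertical
    fluorescein_mid_peripheral: int
    fluorescein_edge: int

record = GPFitRecord("ExampleLab", "ExampleGP", "7.80/9.60/-3.00", 20.0,
                     comfort=4, centration="central",
                     movement_on_blink=0, fluorescein_central=0,
                     fluorescein_mid_peripheral=1, fluorescein_edge=-1)
print(record)
```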

Abstract:

In recent years the topic of risk management has moved up the agenda of both government and industry, and private sector initiatives to improve risk and internal control systems have been mirrored by similar promptings for change in the public sector. Both regulators and practitioners now view risk management as an integral part of the process of corporate governance, and an aid to the achievement of strategic objectives. The paper uses case study material on the risk management control system at Birmingham City Council to extend existing theory by developing a contingency theory for the public sector. The case demonstrates that whilst the structure of the control system fits a generic model, the operational details indicate that controls are contingent upon three core variables—central government policies, information and communication technology and organisational size. All three contingent variables are suitable for testing the theory across the broader public sector arena.