878 results for Observational techniques and algorithms
Abstract:
An introduction to the theory and practice of optometry in one succinct volume. From the fundamental science of vision to clinical techniques and the management of common ocular conditions, this book encompasses the essence of contemporary optometric practice. Now in full colour and featuring over 400 new illustrations, this popular text has been significantly revised and will appeal to both students and practitioners wishing to keep up to date. The new edition incorporates recent advances in technology and a complete overview of clinical procedures to improve and update everyday patient care. Contributions from well-known international experts deliver a broad perspective and understanding of current optometric practice. It serves as a useful aid for students and the newly qualified practitioner, while providing a rapid reference guide for the more experienced clinician.
Abstract:
This thesis describes the development of an inexpensive and efficient clustering technique for multivariate data analysis. The technique starts from a multivariate data matrix and ends with a graphical representation of the data and a pattern recognition discriminant function. It also produces a distances frequency distribution that may be useful in detecting clustering in the data or in estimating parameters for discriminating between the different populations in the data. The technique can also be used for feature selection, and is essentially a means of discovering data structure by revealing the component parts of the data. The thesis offers three distinct contributions to cluster analysis and pattern recognition techniques. The first contribution is the introduction of a transformation function into the technique of nonlinear mapping. The second contribution is the use of a distances frequency distribution instead of a distances time-sequence in nonlinear mapping. The third contribution is the formulation of a new generalised and normalised error function, together with its optimal step size formula for gradient method minimisation. The thesis consists of five chapters. The first chapter is the introduction. The second chapter describes multidimensional scaling as the origin of the nonlinear mapping technique. The third chapter describes the first development step in the technique of nonlinear mapping, namely the introduction of the "transformation function". The fourth chapter describes the second development step of the nonlinear mapping technique: the use of a distances frequency distribution instead of a distances time-sequence. This chapter also includes the formulation of the new generalised and normalised error function. Finally, the fifth chapter, the conclusion, evaluates all the developments and proposes a new program for cluster analysis and pattern recognition that integrates all the new features.
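The nonlinear mapping procedure this abstract builds on can be sketched in a few lines. The following is a minimal illustration assuming a classical Sammon-style normalised stress error minimised by plain gradient descent; the thesis's own generalised error function and optimal step size formula are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def sammon_map(X, n_iter=300, lr=0.3, seed=0):
    """Project rows of X to 2-D by minimising a normalised
    distance-preservation (Sammon stress) error via gradient descent."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    eps = 1e-12
    # Pairwise distances in the original high-dimensional space.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    c = D.sum() / 2 + eps                       # normalisation constant
    Y = rng.normal(scale=1e-2, size=(n, 2))     # random 2-D start
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.linalg.norm(diff, axis=-1) + np.eye(n)  # avoid /0 on diagonal
        # Gradient of E = (1/c) * sum_{i<j} (D_ij - d_ij)^2 / D_ij
        w = (d - D) / (np.maximum(D, eps) * d)
        np.fill_diagonal(w, 0.0)
        grad = (2.0 / c) * (w[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    stress = ((D - d) ** 2 / np.maximum(D, eps))[np.triu_indices(n, 1)].sum() / c
    return Y, stress
```

In the thesis's second contribution, the pairwise distances fed into the error would be summarised as a frequency distribution rather than consumed as a time-sequence; this sketch keeps the simpler classical form.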
Abstract:
The satellite ERS-1 was launched in July 1991, in a period of high solar activity. Sparse laser tracking and the failure of the experimental microwave system (PRARE) compounded the orbital errors that resulted from mismodelling of atmospheric density and hence of surface forces. Three attempts to refine the coarse laser orbits of ERS-1, made prior to the availability of the full altimetric dataset, are presented here. The results of the first attempt indicate that by geometrically modelling the satellite shape, some improvement in orbital precision may be made for any satellite, especially one for which no area tables already exist. The second and third refinement attempts are based on the introduction of data from a second satellite; in these examples SPOT-2 and TOPEX/Poseidon are employed. With SPOT-2, the method exploits the orbital similarities of the two satellites to produce along-track corrections for the more fully tracked SPOT-2; transferring these corrections to ERS-1 improves the precise orbits thus determined. With TOPEX/Poseidon, the greater altitude results in a more precise orbit (gravity field and atmospheric errors are of less importance). Thus, by computing height differences at crossover points of the TOPEX/Poseidon and ERS-1 ground tracks, the poorer orbit of ERS-1 may be improved by the addition of derived radial corrections. In light of these three positive results, several potential modifications are suggested and some further avenues of investigation indicated.
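The crossover idea in the last step can be illustrated compactly: where the two ground tracks cross, the sea-surface height measured by the more precise satellite minus that measured by ERS-1 is attributed to ERS-1's radial orbit error, and the correction is interpolated along-track between crossovers. This is a schematic sketch with invented variable names and synthetic data, not the thesis's actual processing chain.

```python
import numpy as np

def radial_corrections(t_ers, h_ers, t_xover, dh_xover):
    """Apply radial orbit corrections to ERS-1 altimetric heights.

    dh_xover holds (reference-satellite height minus ERS-1 height) at the
    crossover times t_xover; between crossovers the correction is linearly
    interpolated in time and added to the ERS-1 heights."""
    corr = np.interp(t_ers, t_xover, dh_xover)
    return h_ers + corr
```

In practice the crossover differences would themselves be noisy and the interpolation smoother (e.g. a low-order along-track error model), but the principle is the same: the reference orbit pins down the radial error at discrete times.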
Abstract:
Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
In recent years the Web has become a mainstream medium for communication and information dissemination. This paper presents approaches and methods for implementing adaptive learning, as used in some contemporary web-interfaced Learning Management Systems (LMSs). The problem is not how to create electronic learning materials, but how to locate and utilize the available information in a personalized way. Different attitudes to personalization are briefly described in section 1. Real personalization requires a user profile, containing information about preferences, aims, and educational history, to be stored and used by the system; these issues are considered in section 2. A method for the development and design of adaptive learning content, in terms of system support for learning strategies, is presented in section 3. Section 4 covers a set of innovative personalization services suggested by several important recent research projects (e.g. the SeLeNe and ELENA projects). This section also describes a model for role- and competency-based learning customization that uses a Web Services approach. The last part presents how personalization techniques are implemented in Learning Grid-driven applications.
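The user-profile idea in section 2 can be made concrete with a minimal data structure. This is a hypothetical sketch, not any LMS's actual schema: the field names simply follow the abstract's list (preferences, aims, educational history), and the selection rule is one plausible competency-based customization.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """Minimal user profile for personalization (illustrative)."""
    preferences: dict = field(default_factory=dict)  # e.g. {"media": "video"}
    aims: set = field(default_factory=set)           # target competencies
    history: set = field(default_factory=set)        # competencies already mastered

def select_units(profile, units):
    """Return learning units whose competency is still needed by the
    learner and whose prerequisites are already in the learner's history."""
    return [u for u in units
            if u["teaches"] in profile.aims - profile.history
            and set(u.get("requires", ())) <= profile.history]
```

A role-based variant would additionally filter units by the competencies attached to the learner's role; the profile itself could then be exposed through a Web Service for reuse across systems.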
Abstract:
The development of a new set of frost property measurement techniques, to be used in the control of frost growth and defrosting processes in refrigeration systems, was investigated. Holographic interferometry and infrared thermometry were used to measure the temperature of the frost-air interface, while a beam element load sensor was used to obtain the weight of a deposited frost layer. The proposed measurement techniques were tested for the cases of natural and forced convection, and characteristic charts were obtained for a set of operational conditions. An improvement of existing frost growth mathematical models was also investigated. The early stage of frost nucleation was commonly not considered in these models; instead, an initial value of layer thickness and porosity was regularly assumed. A nucleation model was developed to obtain the droplet diameter and surface porosity at the end of the early frosting period. The drop-wise early condensation on a cold flat plate under natural convection, exposed to warm (room-temperature) humid air, was modeled. A nucleation rate was found, and the relation of heat to mass transfer (the Lewis number) was obtained. The Lewis number was found to be much smaller than unity, whereas a value of one is usually assumed in most frosting numerical models. The nucleation model was validated against available experimental data for the early nucleation and full growth stages of the frosting process. The combination of frost top temperature and weight variation signals can now be used to control defrosting timing, and the developed early nucleation model can now be used to simulate the entire process of frost growth on any surface material.
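The Lewis number assumption the abstract questions can be illustrated numerically. The sketch below uses representative textbook property values for humid air near room temperature (all figures are assumptions, not the thesis's measurements) and the standard Chilton-Colburn analogy that frosting models typically rely on; the thesis's finding is that the effective value during early nucleation is much smaller than this bulk estimate.

```python
# Representative bulk properties of humid air near 300 K (assumed values).
alpha = 2.2e-5      # thermal diffusivity of air, m^2/s
D_ab = 2.6e-5       # diffusivity of water vapour in air, m^2/s
Le = alpha / D_ab   # Lewis number: heat vs. mass diffusion (~0.85, i.e. near 1)

# Chilton-Colburn analogy: mass transfer coefficient from a heat
# transfer coefficient, the step where Le (often set to 1) enters.
rho, cp = 1.18, 1005.0   # air density (kg/m^3) and specific heat (J/kg.K), assumed
h = 8.0                  # convective heat transfer coefficient, W/m^2.K, assumed
h_m = h / (rho * cp * Le ** (2.0 / 3.0))   # mass transfer coefficient, m/s
```

If the effective Lewis number during early nucleation is much below unity, the analogy yields a substantially larger mass transfer coefficient, and hence faster early frost deposition, than the Le = 1 assumption predicts.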
Abstract:
This review discusses menu analysis models in depth to identify each model's strengths and weaknesses, in an attempt to discover opportunities to enhance existing models and evolve menu analysis toward a comprehensive analytical model.
Abstract:
The purpose of the current study was to model various cognitive and social processes that are believed to lead to false confessions. More specifically, this study manipulated the variables of experimenter expectancy, guilt or innocence of the suspect, and interrogation techniques, using the Russano et al. (2005) paradigm. The primary measure of interest was the likelihood of the participant signing the confession statement. By manipulating experimenter expectancy, the current study sought to further explore the social interactions that may occur in the interrogation room. In addition, whereas in past experiments the interrogator has typically been restricted to the use of one or two interrogation techniques, in the present study interrogators were permitted to select from 15 different interrogation techniques when attempting to solicit a confession from participants. Consistent with Russano et al. (2005), guilty participants (94%) were more likely to confess to the act of cheating than innocent participants (31%). The variable of experimenter expectancy did not affect confession rates, length of interrogation, or the type of interrogation techniques used. Path analysis revealed that feelings of pressure and the weighing of consequences on the part of the participant were associated with the signing of the confession statement. The findings suggest that the guilt or innocence of the participant, the participant's perceptions of the interrogation situation, and the length of interrogation play a pivotal role in the signing of the confession statement. Further examination of these variables may provide researchers with a better understanding of the relationship between interrogations and confessions.
Abstract:
Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl
Abstract:
The authors wish to acknowledge the generous financial support provided in association with this volume to the Geological Society and the Petroleum Group by Badley Geoscience Ltd, BP, CGG Robertson, Dana Petroleum Ltd, Getech Group plc, Maersk Oil North Sea UK Ltd, Midland Valley Exploration Ltd, Rock Deformation Research (Schlumberger) and Borehole Image & Core Specialists (Wildcat Geoscience, Walker Geoscience and Prolog Geoscience). We would like to thank the fine team at the Geological Society’s Publishing House for the excellent support and encouragement that they have provided to the editors and authors of this Special Publication.
Abstract:
The control of radioactive backgrounds will be key in the search for neutrinoless double beta decay at the SNO+ experiment. Several aspects of the SNO+ backgrounds have been studied. The SNO+ tellurium purification process may require ultra-low-background ethanol as a reagent. A low background assay technique for ethanol was developed and used to identify a source of ethanol with measured 238U and 232Th concentrations below 2.8 × 10^-13 g/g and 10^-14 g/g respectively. It was also determined that at least 99.997% of the ethanol can be removed from the purified tellurium using forced air flow, in order to reduce 14C contamination. In addition, a quality-control technique using an oxygen sensor was studied to monitor 222Rn contamination due to air leaking into the SNO+ scintillator during transport. The expected sensitivity of the technique is 0.1 mBq/L or better, depending on the oxygen sensor used. Finally, the dependence of SNO+ neutrinoless double beta decay sensitivity on internal background levels was studied using Monte Carlo simulation. The half-life limit for neutrinoless double beta decay of 130Te after 3 years of operation was found to be 4.8 × 10^25 years under default conditions.
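The standard counting relation behind a half-life limit of this kind can be sketched as follows. Every number below is an illustrative assumption, not the SNO+ analysis: the sketch only shows that the limit scales as T(1/2) > ln2 · N_atoms · efficiency · livetime / S, where S is the upper limit on signal counts allowed by the observed background.

```python
import math

# Illustrative sensitivity estimate (all inputs are assumed placeholders).
N_A = 6.022e23
mass_te = 1300e3      # grams of natural tellurium loaded (assumed)
molar_te = 129.9      # g/mol, natural Te (approx.)
abund = 0.341         # 130Te isotopic abundance
eff = 0.5             # signal efficiency in the region of interest (assumed)
t_live = 3.0          # years of livetime
S_up = 10.0           # 90% CL upper limit on signal counts (assumed)

N_130 = N_A * abund * mass_te / molar_te            # number of 130Te atoms
T_half_limit = math.log(2) * N_130 * eff * t_live / S_up   # years
```

The Monte Carlo study the abstract describes effectively determines S_up as a function of the internal background levels; lower backgrounds mean a smaller allowed signal count and hence a longer half-life limit.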
Abstract:
Economic losses resulting from disease development can be reduced by accurate and early detection of plant pathogens. Early detection can provide the grower with useful information on optimal crop rotation patterns, varietal selections, appropriate control measures, harvest date, and post-harvest handling. Classical methods for the isolation of pathogens are commonly used only after disease symptoms appear. This frequently results in a delay in the application of control measures at potentially important periods in crop production. This paper describes the application of both antibody- and DNA-based systems to monitor the infection risk of air- and soil-borne fungal pathogens, and the use of this information with mathematical models describing the risk of disease associated with environmental parameters.
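A mathematical model of the kind mentioned in the last sentence typically maps environmental parameters to a disease-risk probability. The sketch below is a generic logistic risk model with made-up coefficients for demonstration only; it is not any published pathogen model, and a real model would be fitted to field data for a specific pathogen.

```python
import math

def infection_risk(temp_c, rel_humidity, wetness_hours):
    """Illustrative logistic disease-risk model driven by environmental
    parameters. All coefficients are invented for demonstration.

    Risk rises with leaf-wetness duration and relative humidity, and
    peaks at a moderate temperature (18 C in this toy parameterisation)."""
    z = (-6.0
         + 0.25 * wetness_hours
         + 0.04 * rel_humidity
         - 0.02 * (temp_c - 18.0) ** 2)
    return 1.0 / (1.0 + math.exp(-z))   # probability in (0, 1)
```

Coupled with early antibody- or DNA-based detection of inoculum, a model like this lets a grower time control measures to periods when both the pathogen and favourable conditions are present.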
Abstract:
F-123-R; issued June 1, 1997; two different reports were issued from the Center for Aquatic Ecology with report number 1997 (9)