975 results for subjective evaluation
Abstract:
Safety enforcement practitioners within Europe, together with marketers, designers and manufacturers of consumer products, need to determine compliance with the legal test of "reasonable safety" for consumer goods and to reduce the risks of injury to a minimum. To enable free movement of products, a method of safety appraisal is required that can be applied consistently throughout Europe as an "expert" system of hazard analysis by non-experts in the safety testing of consumer goods. Safety testing approaches and the concepts of risk assessment and hazard analysis are reviewed in developing a model for appraising consumer product safety which seeks to integrate the human factors contributions of risk assessment, hazard perception and information processing. The model develops a system of hazard identification, hazard analysis and risk assessment which can be applied to a wide range of consumer products through a series of systematic checklists and matrices, and applies alternative numerical and graphical methods for calculating a final product safety risk assessment score. It is then applied in its pilot form by selected "volunteer" Trading Standards Departments to a sample of consumer products. A series of questionnaires is used to select participating Trading Standards Departments, to explore the contribution of potential subjective influences, and to establish views regarding the usability and reliability of the model and any preferences among the risk assessment scoring systems used. The outcome of the two-stage hazard analysis and risk assessment process is examined to determine the consistency of the hazard analysis results and of the final decisions regarding the safety of the sample products, and to identify any correlation between decisions made using the model and those made using alternative scoring methods of risk assessment. The research also identifies a number of opportunities for future work and indicates several areas where further work has already begun.
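The abstract does not reproduce the model's actual scoring scheme. As a purely illustrative sketch of the kind of numerical severity-by-likelihood risk matrix such appraisal systems commonly use, with hypothetical category names and scales:

```python
# Minimal sketch of a numerical risk-matrix score for a consumer product
# hazard. Scales, labels and the worst-case aggregation are hypothetical
# illustrations, not the thesis's actual scheme.

SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "severe": 4}
LIKELIHOOD = {"improbable": 1, "possible": 2, "probable": 3, "frequent": 4}

def risk_score(severity: str, likelihood: str) -> int:
    """Score one identified hazard as severity x likelihood (1..16)."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def product_score(hazards: list[tuple[str, str]]) -> int:
    """Aggregate a product's hazards; here, simply the worst-case score."""
    return max(risk_score(s, l) for s, l in hazards)

# Example: a product with two identified hazards.
hazards = [("serious", "possible"), ("minor", "probable")]
print(product_score(hazards))  # -> 6
```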
Abstract:
This work is concerned with the development of techniques for the evaluation of large-scale highway schemes, with particular reference to the assessment of their costs and benefits in the context of the current transport planning (T.P.P.) process. It has been carried out in close cooperation with West Midlands County Council, although its approach and results are applicable elsewhere. The background to highway evaluation and its development in recent years is described, and the emergence of a number of deficiencies in current planning practice is noted. One deficiency in particular stood out: that stemming from inadequate methods of scheme generation. The research has therefore concentrated upon improving this stage of appraisal, to ensure that the subsequent stages of design, assessment and implementation are based upon a consistent and responsive foundation. Deficiencies of scheme evaluation were found to stem from underdeveloped appraisal methodologies suffering from difficulties of valuation, measurement and aggregation of the disparate variables that characterise highway evaluation; a failure to respond to local policy priorities was also noted. A 'problem'-based rather than 'goals'-based approach to scheme generation was taken, as it represented the current and foreseeable resource allocation context more realistically. Techniques with potential for problem-based highway scheme generation that would work within a series of practical and theoretical constraints were reviewed, and multivariate analysis, classical factor analysis in particular, was selected because it offered considerable purchase on the existing difficulties of valuation, measurement and aggregation. Computer programs were written to adapt classical factor analysis to the requirements of T.P.P. highway evaluation, using it to derive a limited number of factors that described the extensive quantity of highway problem data. From this, a series of composite problem scores for 1979 was derived for a case study area of south Birmingham, based upon the factorial solutions, and used to assess highway sites in terms of local policy issues. The methodology was assessed in the light of its ability to describe highway problems in both aggregate and disaggregate terms, to guide scheme design, to coordinate with current scheme evaluation methods, and in general to improve upon current appraisal. Analysis of the results was carried out both in subjective, 'common-sense' terms and with statistical methods, to assess the changes in problem definition, distribution and priorities that emerged. Overall, the technique was found to improve upon current scheme generation methods in all respects, and in particular in overcoming the problems of valuation, measurement and aggregation without recourse to unsubstantiated and questionable assumptions. A number of remaining deficiencies are outlined, and a series of research priorities described which need to be reviewed in the light of current and future evaluation needs.
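The thesis's bespoke factor-analysis programs are not reproduced in the abstract; as a rough modern analogue, the sketch below reduces a matrix of hypothetical highway-problem indicators to a few factors and forms a composite problem score per site. The data, the choice of three factors and the variance weighting are illustrative assumptions.

```python
# Sketch: reduce highway-problem indicators to a few factors and form
# composite problem scores per site. Data and factor count are illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# 50 hypothetical sites x 8 problem indicators (delay, accidents, noise, ...)
X = rng.normal(size=(50, 8))

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(X)            # per-site factor scores (50 x 3)
loadings = fa.components_               # factor loadings (3 x 8)

# One simple composite: weight each factor by its share of score variance.
var = scores.var(axis=0)
composite = scores @ (var / var.sum())  # one problem score per site
print(composite[:5].round(2))
```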
Abstract:
The binding theme of this thesis is the examination of both phakic and pseudophakic accommodation by means of theoretical modelling and the application of a new biometric measuring technique. Anterior Segment Optical Coherence Tomography (AS-OCT) was used to assess phakic accommodative changes in 30 young subjects (19.4 ± 2.0 years; range, 18 to 25 years). A new method of assessing curvature change with this technique was employed with limited success. Changes in axial accommodative spacing, however, proved to be very similar to those of the Scheimpflug-based data. A unique biphasic trend in the position of the posterior crystalline lens surface during accommodation was discovered, which has not previously been alluded to in the literature. All axial changes with accommodation were statistically significant (p < 0.01) with the exception of corneal thickness (p = 0.81). A two-year follow-up study was undertaken for a cohort of subjects previously implanted with a new accommodating intraocular lens (AIOL) (Lenstec Tetraflex KH3500). All measures of best corrected distance visual acuity (BCDVA; +0.04 ± 0.24 logMAR), distance corrected near visual acuity (DCNVA; +0.61 ± 0.17 logMAR) and contrast sensitivity (+1.35 ± 0.21 log units) were good. The subjective accommodation response quantified with the push-up technique (1.53 ± 0.64 D) and defocus curves (0.77 ± 0.29 D) was greater than the objective stimulus response (0.21 ± 0.19 D). AS-OCT measures with accommodation stimulus revealed a small mean posterior movement of the AIOLs (0.02 ± 0.03 mm for a 4.0 D stimulus); this is contrary to the proposed mechanism of the anterior focus-shift principle.
Abstract:
AIM: To determine the validity and reliability of corneal curvature and non-invasive tear break-up time (NITBUT) measures using the Oculus Keratograph. METHOD: One hundred eyes of 100 patients had their corneal curvature assessed with the Keratograph and the Nidek ARKT TonorefII. NITBUT was then measured objectively with the Keratograph's Tear Film Scan software and subjectively with the Keeler Tearscope. The Keratograph measurements of corneal curvature and NITBUT were repeated to test reliability. The Ocular Surface Disease Index questionnaire was completed to quantify ocular comfort. RESULTS: The Keratograph consistently measured significantly flatter corneal curvatures than the ARKT (MSE difference: +1.83±0.44 D), but was repeatable (p>0.05). Keratograph NITBUT measurements were significantly lower than observation with the Tearscope (by 12.35±7.45 s; p < 0.001) and decreased on repeated measurement (by 1.64±6.03 s; p < 0.01). The Keratograph measures the first time the tears break up anywhere on the cornea, with 63% of subjects having NITBUTs <5 s and a further 22% having readings between 5 and 10 s. The Tearscope results correlated better with patients' symptoms (r = -0.32) than the Keratograph's (r = -0.19). CONCLUSIONS: The Keratograph requires a calibration offset to be comparable to other keratometry devices. Its current software detects very early tear film changes, recording significantly lower NITBUT values than conventional subjective assessment. Adjustments to the instrument's software have the potential to enhance the value of Keratograph objective measures in clinical practice.
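The agreement statistics quoted, mean difference ± SD between paired instruments, are the usual Bland-Altman quantities; a minimal sketch with invented paired readings:

```python
# Sketch: Bland-Altman agreement between two instruments' paired readings.
# The ten value pairs are invented for illustration.
import numpy as np

keratograph = np.array([42.1, 43.0, 41.8, 44.2, 42.5, 43.7, 41.9, 42.8, 43.3, 44.0])
reference   = np.array([43.9, 44.7, 43.6, 46.1, 44.3, 45.5, 43.8, 44.6, 45.2, 45.8])

diff = keratograph - reference
bias = diff.mean()                           # mean difference (calibration offset)
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

print(f"bias {bias:+.2f} D, limits of agreement {loa[0]:+.2f} to {loa[1]:+.2f} D")
```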
Abstract:
Accommodating intraocular lenses (IOLs), multifocal IOLs (MIOLs) and toric IOLs are designed to provide a greater level of spectacle independence after cataract surgery. All of these IOLs rely on accurate calculation of intraocular lens power, determined through reliable ocular biometry. A standardised defocus area metric and a reading performance index metric were devised for evaluating the range of focus and the reading ability of subjects implanted with presbyopia-correcting IOLs. The range of clear vision after implantation of an MIOL is extended by a second focal point; however, this increases the prevalence of dysphotopsia. A bespoke halometer was designed and validated to assess this photopic phenomenon. There is a lack of standardisation in the methods used for determining IOL orientation and thus rotation. A repeatable, objective method was developed to allow accurate assessment of IOL rotation, and was used to determine the rotational and positional stability of a closed-loop haptic IOL. A new commercially available biometry device was validated for use with subjects prior to cataract surgery. The optical low-coherence reflectometry instrument proved to be a valid method for assessing ocular biometry, covering a wider range of ocular parameters than previous instruments. The advantages of MIOLs were shown to include an extended range of clear vision, translating into greater reading ability. However, an increased prevalence of dysphotopsia was shown with the bespoke halometer, and was dependent on the MIOL optic design. Implantation of a single-optic accommodating IOL did not improve reading ability but achieved high subjective ratings of near vision. The closed-loop haptic IOL displayed excellent rotational stability in the late period but relatively poor rotational stability in the early period post implantation. The orientation error was compounded by a high frequency of positional misalignment, leading to extensive overall misalignment of the IOL. This thesis demonstrates the functionality of new IOL designs and the importance of standardised testing methods, providing a greater understanding of the consequences of implanting these IOLs. Consequently, the findings will inform future IOL designs and testing methods.
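The standardised defocus area metric devised in the thesis is not specified in the abstract; one plausible reading, sketched below, integrates logMAR acuity over the defocus range by the trapezoidal rule. The defocus levels, acuities and integration convention are assumptions for illustration.

```python
# Sketch: area-under-the-curve summary of a defocus curve. The defocus
# levels and logMAR acuities are invented; the real metric's convention
# (sign, normalisation, acuity criterion) may differ.

defocus = [-3.0, -2.5, -2.0, -1.5, -1.0, -0.5, 0.0]   # dioptres
logmar  = [0.60, 0.48, 0.38, 0.26, 0.18, 0.10, 0.02]  # acuity at each level

def defocus_area(x, y):
    """Trapezoidal area under the acuity-vs-defocus curve."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

# With logMAR, smaller values mean better vision, so a lower area means
# acuity is better sustained across the defocus range.
print(round(defocus_area(defocus, logmar), 3))
```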
Abstract:
Background: The aim was to evaluate the validity and repeatability of the auto-refraction function of the Nidek OPD-Scan III (Nidek Technologies, Gamagori, Japan) compared with non-cycloplegic subjective refraction. The Nidek OPD-Scan III is a new aberrometer/corneal topographer workstation based on the skiascopy principle, combining a wavefront aberrometer, topographer, autorefractor, auto-keratometer and pupillometer/pupillographer. Methods: Objective refraction results obtained using the Nidek OPD-Scan III were compared with non-cycloplegic subjective refraction for 108 eyes of 54 participants (29 female) with a mean age of 23.7±9.5 years. Intra-session and inter-session variability were assessed on 14 subjects (28 eyes). Results: The Nidek OPD-Scan III gave slightly more negative readings than subjective refraction (mean difference -0.19±0.36 DS, p<0.01 for sphere; -0.19±0.35 DS, p<0.01 for mean spherical equivalent; -0.002±0.23 DC, p=0.91 for cylinder; -0.06±0.38 DC, p=0.30 for J0; and -0.36±0.31 DC, p=0.29 for J45). Auto-refractor results for 74 per cent of spherical readings and 60 per cent of cylindrical powers were within ±0.25 D of subjective refraction. There was high intra-session and inter-session repeatability for all parameters; 90 per cent of inter-session repeatability results were within 0.25 D. Conclusion: The Nidek OPD-Scan III gives valid and repeatable measures of objective refraction when compared with non-cycloplegic subjective refraction. © 2013 The Authors. Clinical and Experimental Optometry © 2013 Optometrists Association Australia.
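The J0 and J45 comparisons above rest on the standard power-vector decomposition of a sphero-cylindrical refraction (Thibos notation); a short sketch:

```python
# Sketch: convert sphere/cylinder/axis to power vectors (M, J0, J45),
# the representation behind the J0 and J45 comparisons above.
from math import cos, sin, radians

def power_vectors(sphere: float, cyl: float, axis_deg: float):
    """Thibos power-vector components of a sphero-cylindrical refraction."""
    m = sphere + cyl / 2.0                            # mean spherical equivalent
    j0 = -(cyl / 2.0) * cos(radians(2 * axis_deg))    # 0/90 degree astigmatism
    j45 = -(cyl / 2.0) * sin(radians(2 * axis_deg))   # oblique astigmatism
    return m, j0, j45

# Example: -2.00 / -0.75 x 180
print(tuple(round(v, 3) for v in power_vectors(-2.00, -0.75, 180)))
```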
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and Standard Automated Perimetry (SAP) remains the gold standard for it. Although SAP is a subjective assessment with many pitfalls, it is used routinely in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a more recently introduced technique for assessing the visual field objectively. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard SAP assessment, while others were less informative and required further refinement and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: To examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when used for objective assessment of the visual field in glaucoma patients, compared with the gold standard technique. METHODS: Three groups were tested: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspects (38 eyes). All subjects underwent two standard Humphrey Field Analyser (HFA) 24-2 visual field tests and a single mfVEP test in one session. The mfVEP results were analysed using the new protocol, the Hemifield Sector Analysis (HSA) protocol; the HFA results were analysed using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference in mean signal-to-noise ratio (SNR) between the three groups (ANOVA, p<0.001, 95% CI). Differences between the superior and inferior hemifields were statistically significant for all 11/11 sectors in the glaucoma group (t-test, p<0.001), for 5/11 sectors in the glaucoma suspect group (t-test, p<0.01), and for only 1/11 sectors in the normal group (t-test, p<0.9). The sensitivity and specificity of the HSA protocol were 97% and 86% respectively for detecting glaucoma, and 89% and 79% for glaucoma suspects. DISCUSSION: The new analysis protocol confirmed existing field defects detected by standard HFA and differentiated between the three study groups, with a clear distinction between normals and patients with suspected glaucoma; the distinction between normals and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol for mfVEP testing can detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. It provides information about focal visual field differences across the horizontal midline that can be used to differentiate glaucoma patients from normal subjects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes accompanying glaucomatous field loss.
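The sensitivity and specificity figures follow from the usual two-by-two classification counts; a minimal sketch of an SNR-threshold classifier, with the values and threshold invented for illustration:

```python
# Sketch: sensitivity/specificity of an SNR-threshold classifier.
# The SNR values and the threshold are invented for illustration.

glaucoma_snr = [0.9, 1.1, 0.8, 1.0, 0.7, 1.3]   # patients (should test positive)
normal_snr   = [2.1, 1.9, 2.4, 1.7, 2.2, 1.4]   # controls (should test negative)
THRESHOLD = 1.5                                  # positive = SNR below threshold

tp = sum(s < THRESHOLD for s in glaucoma_snr)    # true positives
fn = len(glaucoma_snr) - tp                      # false negatives
tn = sum(s >= THRESHOLD for s in normal_snr)     # true negatives
fp = len(normal_snr) - tn                        # false positives

print(f"sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")
```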
Abstract:
A clinical evaluation of the Shin-Nippon SRW-5000 (Japan), a newly released commercial autorefractor, was undertaken to assess its repeatability and its validity compared with subjective refraction. Measurements of refractive error were performed on 200 eyes of 100 subjects (aged 24.4±8.0 years) subjectively (non-cycloplegic) by one optometrist and objectively with the SRW-5000 autorefractor by a second. Repeatability was assessed by examining the differences between the seven autorefractor readings taken from each eye and by re-measuring the objective prescription of 50 eyes at a subsequent session. Although the SRW-5000 read slightly more plus than subjective refraction (mean spherical equivalent +0.16±0.44 D), it was found to be highly valid (accurate) compared with subjective refraction and repeatable over the prescription range examined (+6.50 to -15.00 D). The Shin-Nippon SRW-5000 autorefractor is therefore a valuable complement to subjective refraction and, as it offers the advantage of a binocular open field of view, has great potential benefit for accommodation research. Copyright © 2001 The College of Optometrists.
Abstract:
Background/aim: The technique of photoretinoscopy is unique in being able to measure the dynamics of the oculomotor system (ocular accommodation, vergence, and pupil size) remotely (working distance typically 1 metre) and objectively in both eyes simultaneously. The aim of this study was to evaluate clinically the measurement of refractive error by a recent commercial photoretinoscopic device, the PowerRefractor (PlusOptiX, Germany). Method: The validity and repeatability of the PowerRefractor were compared with subjective (non-cycloplegic) refraction on 100 adult subjects (mean age 23.8 (SD 5.7) years) and with objective autorefraction (Shin-Nippon SRW-5000, Japan) on 150 subjects (20.1 (4.2) years). Repeatability was assessed by examining the differences between autorefractor readings taken from each eye and by re-measuring the objective prescription of 100 eyes at a subsequent session. Results: On average the PowerRefractor prescription was not significantly different from the subjective refraction, although quite variable (difference -0.05 (0.63) D, p = 0.41), and was more negative than the SRW-5000 prescription (by -0.20 (0.72) D, p<0.001). There was no significant bias in the accuracy of the instrument with regard to the type or magnitude of refractive error. The PowerRefractor was found to be repeatable over the prescription range examined (-8.75 to +4.00 D mean spherical equivalent). Conclusion: The PowerRefractor is a useful objective screening instrument and, because of its remote and rapid measurement of both eyes simultaneously, is able to assess the oculomotor response in a variety of unrestricted viewing conditions and patient types.
Abstract:
Purpose: To assess the inter- and intra-observer variability of subjective grading of the retinal arterio-venous ratio (AVR) using visual grading, and to compare the subjectively derived grades with an objective method using a semi-automated computer program. Methods: Following intraocular pressure and blood pressure measurements, all subjects underwent dilated fundus photography. 86 monochromatic retinal images centred on the optic nerve head (52 healthy volunteers) were obtained using a Zeiss FF450+ fundus camera. Arterio-venous ratio (AVR), central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) were calculated on three separate occasions by a single observer semi-automatically using the VesselMap software (ImedosSystems, Jena, Germany). Following the automated grading, three examiners graded the AVR visually on three separate occasions in order to assess their agreement. Results: Reproducibility of the semi-automatic parameters was excellent (ICCs: 0.97 (CRAE), 0.985 (CRVE) and 0.952 (AVR)). However, visual grading of AVR showed inter-grader differences as well as discrepancies between subjectively derived and objectively calculated AVR (all p < 0.000001). Conclusion: Differences in grader education and experience lead to inter-grader differences; more importantly, subjective grading cannot pick up subtle differences across healthy individuals and does not represent the true AVR when compared with an objective assessment method. Advances in technology mean we no longer need to rely on ophthalmoscopic evaluation: fundus images can be captured and stored with retinal cameras, enabling vessel calibre to be measured more accurately than by visual estimation. Objective measurement should therefore be integrated into optometric practice for improved accuracy and reliability of clinical assessments of retinal vessel calibres. © 2014 Spanish General Council of Optometry.
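The abstract does not state which ICC form was computed; the sketch below implements the common two-way consistency form, ICC(3,1), on a subjects-by-sessions matrix of invented AVR gradings.

```python
# Sketch: ICC(3,1) (two-way mixed, consistency) for repeated AVR gradings.
# Rows = subjects, columns = grading sessions; the values are invented.
import numpy as np

x = np.array([
    [0.84, 0.85, 0.83],
    [0.72, 0.71, 0.73],
    [0.91, 0.90, 0.92],
    [0.66, 0.68, 0.67],
    [0.78, 0.78, 0.77],
])
n, k = x.shape
grand = x.mean()
row_means = x.mean(axis=1, keepdims=True)
col_means = x.mean(axis=0, keepdims=True)

# ANOVA mean squares: between-subjects and residual.
msr = k * ((row_means - grand) ** 2).sum() / (n - 1)
mse = ((x - row_means - col_means + grand) ** 2).sum() / ((n - 1) * (k - 1))

icc31 = (msr - mse) / (msr + (k - 1) * mse)
print(round(float(icc31), 3))
```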
Abstract:
PURPOSE: To assess the surface tear breakup time and clinical performance of three daily disposable silicone hydrogel contact lenses over 16 hours of wear. METHODS: Thirty-nine patients (mean [±SD] age, 22.1 [±3.5] years) bilaterally wore narafilcon A, filcon II-3, and delefilcon A contact lenses in a prospective, randomized, masked, 1-week crossover clinical trial. The tear film was assessed by tear meniscus height (TMH), ocular/contact lens surface temperature dynamics, and lens surface noninvasive breakup time at 8, 12, and 16 hours of wear. Clinical performance and ocular physiology were assessed by subjective questionnaire, by high-/low-contrast logMAR (logarithm of the minimum angle of resolution) acuity, and through bulbar and limbal hyperemia grading. Corneal and conjunctival staining were assessed after lens removal. RESULTS: Delefilcon A demonstrated a longer noninvasive breakup time (13.4 [±4.4] seconds) than filcon II-3 (11.6 [±3.7] seconds; p < 0.001) and narafilcon A (12.3 [±3.7] seconds; p < 0.001). A greater TMH (0.35 [±0.11] mm) was shown by delefilcon A than filcon II-3 (0.32 [±0.10] mm; p = 0.016). Delefilcon A showed less corneal staining after 16 hours of lens wear (0.7 [±0.6] Efron grade) than filcon II-3 (1.1 [±0.7]; p < 0.001) and narafilcon A (0.9 [±0.7]; p = 0.031). Time was not a significant factor for prelens tear film stability (F = 0.594, p = 0.555) or TMH (F = 0.632, p = 0.534). Lens brand did not affect temperature (F = 1.220, p = 0.308), but temperature decreased toward the end of the day (F = 19.497, p < 0.001). Comfort, quality of vision, visual acuity and contrast acuity, and limbal grading were similar between the lens brands but decreased over the course of the day (p < 0.05). CONCLUSIONS: The tear breakup time over the contact lens surface differed between lens types and may have a role in protecting the ocular surface.
Abstract:
Purpose: Technological devices such as smartphones and tablets are widely available and increasingly used as visual aids. This study evaluated the use of a novel tablet app (MD_evReader) developed as a reading aid for individuals with central field loss resulting from macular degeneration. The MD_evReader app scrolls text as single lines (similar to a news ticker) and is intended to enhance reading performance using the eccentric viewing technique, both by reducing the demands on the eye movement system and by minimising the deleterious effects of perceptual crowding. Reading performance with scrolling text was compared with reading static sentences, also presented on a tablet computer. Methods: Twenty-six people with low vision (diagnosed with macular degeneration) read static or dynamic text (scrolled from right to left), presented as a single line at high contrast on a tablet device. Reading error rates and comprehension were recorded for both text formats, and the participants' subjective experience of reading with the app was assessed using a simple questionnaire. Results: The average reading speeds for static and dynamic text were not significantly different, and both were equal to or greater than 85 words per minute. The comprehension scores for both text formats were also similar, at approximately 95% correct. However, reading error rates were significantly lower (p=0.02) for dynamic text than for static text. The participants' questionnaire ratings of their reading experience with the MD_evReader were highly positive and indicated a preference for reading with this app compared with their usual method. Conclusions: Our data show that reading performance with scrolling text is at least equal to that achieved with static text and in some respects (reading error rate) is better. Bespoke apps informed by an understanding of the underlying sensorimotor processes involved in a cognitive task such as reading have excellent potential as aids for people with visual impairments.
Abstract:
In the article "Menu Analysis: Review and Evaluation" by Lendal H. Kotschevar, Distinguished Professor, School of Hospitality Management, Florida International University, Kotschevar's initial statement reads: "Various methods are used to evaluate menus. Some have quite different approaches and give different information. Even those using quite similar methods vary in the information they give. The author attempts to describe the most frequently used methods and to indicate their value. A correlation calculation is made to see how well certain of these methods agree in the information they give." There is more than one way to look at the word menu. The culinary selections decided upon by the head chef or owner of a restaurant, which ultimately define the type of restaurant, are one way; the physical outline of the food, which a patron actually holds in his or her hand, is another. The author concentrates primarily on the latter, and uses counts of the number of items sold from a menu to measure the popularity of any particular item. This, along with a formula, allows Kotschevar to arrive at a specific value per item. Menu analysis would appear a difficult subject to broach: how does one qualify and quantify a menu? It seems such a subjective exercise. The author offers methods and outlines for approaching menu analysis from empirical perspectives. "Menus are often examined visually through the evaluation of various factors. It is a subjective method but has the advantage of allowing scrutiny of a wide range of factors which other methods do not," says Kotschevar. "The method is also highly flexible. Factors can be given a score value and scores summed to give a total for a menu. This allows comparison between menus. If the one making the evaluations knows menu values, it is a good method of judgment," he further offers. Assigning values is fundamental to a pragmatic menu analysis; it is how the reviewer keeps score, so to speak. Value merit provides reliable criteria against which to gauge a particular menu item, and in the final analysis, menu evaluation provides the mechanism for either keeping or rejecting selected items on a menu. Kotschevar presents at least three different matrix evaluation methods, defined as the Miller method, the Smith and Kasavana method, and the Pavesic method, offering illustrated examples of each in table format; these are helpful tools, since trying to explain the theories behind the tables would be difficult at best. Kotschevar also references analysis methods which are not matrix based, such as the Hayes and Huffman Goal Value Analysis. The author sees no one method as better than another, and suggests that combining two or more of the methods is beneficial.
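The matrix methods are only named above; as one concrete instance, here is a minimal sketch of the Smith and Kasavana menu-engineering classification, assuming its conventional 70% popularity rule. The menu items and figures are invented.

```python
# Sketch: Smith & Kasavana menu-engineering matrix. Items are classified by
# popularity vs. contribution margin (CM). Data are invented for illustration.

items = {               # name: (units sold, contribution margin per unit)
    "steak":  (120, 9.50),
    "salmon": (60,  8.00),
    "pasta":  (200, 4.50),
    "salad":  (40,  3.00),
}

total_sold = sum(qty for qty, _ in items.values())
# Popularity threshold: 70% of an equal share of sales (the "70% rule").
pop_threshold = 0.70 * (1.0 / len(items)) * total_sold
# CM threshold: sales-weighted average contribution margin.
cm_threshold = sum(qty * cm for qty, cm in items.values()) / total_sold

LABELS = {(True, True): "Star", (True, False): "Plowhorse",
          (False, True): "Puzzle", (False, False): "Dog"}

for name, (qty, cm) in items.items():
    label = LABELS[(qty >= pop_threshold, cm >= cm_threshold)]
    print(f"{name:7s} {label}")
```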
Abstract:
Concept evaluation in the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. Managing this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method which combines information entropy theory with rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle vagueness in a group decision-making environment. A rough-number-based information entropy method is proposed to determine the relative weights of the evaluation criteria. Composite performance values based on rough numbers are then calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity across the decision-making process.
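The entropy-weighting step that the paper combines with rough numbers follows the standard information-entropy formula; a minimal sketch on a crisp, non-rough decision matrix with invented scores:

```python
# Sketch: entropy weights for evaluation criteria from a decision matrix
# (rows = design concepts, columns = criteria). In the paper the entries are
# rough numbers; crisp scores are used here to keep the sketch minimal.
import numpy as np

x = np.array([
    [7.0, 6.0, 8.0],
    [5.0, 8.0, 6.0],
    [6.0, 7.0, 9.0],
    [8.0, 5.0, 7.0],
])
n = x.shape[0]

p = x / x.sum(axis=0)                           # column-normalised proportions
entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)
weights = (1 - entropy) / (1 - entropy).sum()   # more dispersion -> more weight
print(weights.round(3))
```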
Abstract:
Same-sex parenting is by no means a new phenomenon but the legal recognition and acceptance of gay and lesbian couples as parents is a relatively recent development in most countries. Traditionally, such recognition has been opposed on the basis of the claim that the best interests of children could not be met by gay and lesbian parents. This thesis examines the validity of this argument and it explores the true implications of the best interests principle in this context. The objective is to move away from subjective or moral conceptions of the best interests principle to an understanding which is informed by relevant sociological and psychological data and which is guided by reference to the rights contained in the UN Convention on the Rights of the Child. Using this perspective, the thesis addresses the overarching issue of whether the law should offer legal recognition and protection to gay and lesbian families and the more discrete matter of how legal protection should be provided. It is argued that the best interests principle can be used to demand that same-sex parenting arrangements should be afforded legal recognition and protection. Suggestions are also presented as to the most appropriate manner of providing for this recognition. In this regard, guidance is drawn from the English and South African experience in this area. Overall, the objective is to assess the current laws from the perspective of the best interests principle so as to ensure that the law operates in a manner which adheres to the rights and interests of children.