965 results for Toric geometry


Relevance: 10.00%

Abstract:

Two perceptions of the marginality of home economics are widespread across educational and other contexts. One is that home economics and those who engage in its pedagogy are inevitably marginalised within patriarchal relations in education and culture. This is because home economics is characterised as women's knowledge, for the private domain of the home. The other perception is that only orthodox epistemological frameworks of inquiry should be used to interrogate this state of affairs. These perceptions have prompted leading theorists in the field to call for non-essentialist approaches to research in order to re-think the thinking that has produced this cul-de-sac positioning of home economics as a body of knowledge and a site of teacher practice. This thesis takes up the challenge of working to locate a space outside the frame of modernist research theory and methods, recognising that this shift in epistemology is necessary to unsettle the idea that home economics is inevitably marginalised. The purpose of the study is to reconfigure how we have come to think about home economics teachers and the profession of home economics as a site of cultural practice, in order to think it otherwise (Lather, 1991). This is done by exploring how the culture of home economics is being contested from within. To do so, the thesis uses a 'posthumanist' approach, which rejects the conception of the individual as a unitary and fixed entity, viewing the subject instead as one in process, shaped by desires and language that are not necessarily consciously determined. This posthumanist project focuses attention on pedagogical body subjects as the 'unsaid' of home economics research. It works to transcend the modernist dualism of mind/body, and other binaries central to modernist work, including private/public, male/female, paid/unpaid, and valued/unvalued. In so doing, it refuses the simple margin/centre geometry so characteristic of current perceptions of home economics itself. Three studies make up this work. Studies one and two serve to document the disciplined body of home economics knowledge, the governance of which works towards normalisation of the 'proper' home economics teacher. The analysis of these accounts of home economics teachers by home economics teachers reveals that home economics teachers are 'skilled' yet they 'suffer' for their profession. Further, home economics knowledge is seen to be complicit in reinforcing the traditional roles of masculinity and femininity, thereby reinforcing the heterosexual normativity which is central to patriarchal society. The third study looks to four 'atypical' subjects who defy the category of 'proper' and 'normal' home economics teacher. These 'atypical' bodies are 'skilled' but fiercely reject the label of 'suffering'. The discussion of the studies is a feminist poststructural account, using Russo's (1994) notion of the grotesque body, which is emergent from Bakhtin's (1968) theory of the carnivalesque. It draws on the 'shreds' of home economics pedagogy, scrutinising them for their subversive, transformative potential. In this analysis, the giving and taking of pleasure and fun in the home economics classroom presents moments of surprise and of carnival. Foucault's notion of the construction of the ethical individual shows these 'atypical' bodies to be 'immoderate' yet striving hard to be 'continent' body subjects.
This research captures moments of transgression which suggest that transformative moments are already embodied in the pedagogical practices of home economics teachers, and that these can be 'seen' when re-looking through postmodernist lenses. Hence, the cultural practices of home economics as inevitably marginalised are being contested from within. Until now, home economics as a lived culture has failed to recognise possibilities for reconstructing its own field beyond the confines of modernity. This research is an example of how to think about home economics teachers and the profession as a reconfigured cultural practice. Future research about home economics as a body of knowledge and a site of teacher practice need not retell a simple story of oppression. Using postmodernist epistemologies is one way to provide opportunities for new ways of looking.

Relevance: 10.00%

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements;
2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique;
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
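The two-component tissue decomposition at the heart of the DEXA models can be sketched numerically. The following is a minimal illustration, not the thesis code: the mass attenuation coefficients are hypothetical placeholders (real values depend on the beam spectra and tissue compositions), and the point is only that the bone and soft-tissue area densities follow from a 2x2 linear solve on the dual-energy log attenuations, which the DPA(+) technique extends to a 3x3 system using the additional path-length measurement.

```python
import numpy as np

# Hypothetical mass attenuation coefficients (cm^2/g) for bone mineral
# and soft tissue at low and high effective energies (placeholder values).
MU = np.array([[0.60, 0.25],   # low energy:  [bone, soft]
               [0.30, 0.20]])  # high energy: [bone, soft]

def dexa_decompose(log_att_low, log_att_high):
    """Recover bone and soft-tissue area densities (g/cm^2) from
    dual-energy log attenuations by solving the 2x2 linear system."""
    return np.linalg.solve(MU, np.array([log_att_low, log_att_high]))

# Forward-project a known composition, then recover it exactly.
true_area_densities = np.array([1.2, 18.0])   # bone, soft (g/cm^2)
log_att = MU @ true_area_densities
print(dexa_decompose(*log_att))               # -> [ 1.2 18. ]
```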

Relevance: 10.00%

Abstract:

Purpose: The aim was to document contact lens prescribing trends in Australia between 2000 and 2009. ---------- Methods: A survey of contact lens prescribing trends was conducted each year between 2000 and 2009. Australian optometrists were asked to provide information relating to 10 consecutive contact lens fittings between January and March each year. ---------- Results: Over the 10-year survey period, 1,462 practitioners returned survey forms representing a total of 13,721 contact lens fittings. The mean age (± SD) of lens wearers was 33.2 ± 13.6 years and 65 per cent were female. Between 2006 and 2009, new rigid lens fittings decreased from 18 per cent to one per cent. Low water content lenses fell from 11.5 to 3.2 per cent of soft lens fittings between 2000 and 2008. Between 2005 and 2009, toric lenses and multifocal lenses represented 26 and eight per cent, respectively, of all soft lenses fitted. Daily disposable, one- to two-week replacement and monthly replacement lenses accounted for 11.6, 30.0 and 46.5 per cent of all soft lens fittings over the survey period, respectively. The proportion of new soft fittings and refittings prescribed as extended wear has generally declined throughout the past decade. Multi-purpose lens care solutions dominate the market. Rigid lenses and monthly replacement soft lenses are predominantly worn on a full-time basis, whereas daily disposable soft lenses are mainly worn part-time. ---------- Conclusions: This survey indicates that technological advances, such as the development of new lens materials, manufacturing methods and lens designs, and the availability of various lens replacement options, have had a significant impact on the contact lens market during the first decade of the 21st century.

Relevance: 10.00%

Abstract:

Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view covariant local feature extractors. The second involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features, and a scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
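The Hessian-based shape estimation idea can be illustrated with a short fragment. This is a hedged sketch rather than the thesis's implementation; `hessian_shape` is a hypothetical helper built from second-order Gaussian derivatives:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_shape(image, x, y, sigma=2.0):
    """Estimate the local elliptical (affine) shape of a blob-like
    feature at pixel (x, y) from the Hessian of image intensity."""
    # Second-order Gaussian derivative responses at scale sigma;
    # axis 0 is y (rows), axis 1 is x (columns).
    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))
    H = np.array([[Lxx[y, x], Lxy[y, x]],
                  [Lxy[y, x], Lyy[y, x]]])
    # Eigenvectors give the ellipse axes; the square root of the
    # absolute eigenvalue ratio gives the anisotropy. For blob-like
    # structures both eigenvalues share the same sign.
    evals, evecs = np.linalg.eigh(H)
    mags = np.abs(evals)
    anisotropy = np.sqrt(mags.max() / max(mags.min(), 1e-12))
    return mags, evecs, anisotropy
```

Only three derivative responses are required, compared with accumulating products of first derivatives over a window for the second moment matrix, which is consistent with the computational saving described above.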

Relevance: 10.00%

Abstract:

Purpose: The aim was to construct and advise on the use of a cost-per-wear model based on contact lens replacement frequency, to form an equitable basis for cost comparison. ---------- Methods: The annual cost of professional fees, contact lenses and solutions when wearing daily, two-weekly and monthly replacement contact lenses is determined in the context of the Australian market for spherical, toric and multifocal prescription types. This annual cost is divided by the number of times lenses are worn per year, resulting in a ‘cost-per-wear’. The model is presented graphically as the cost-per-wear versus the number of times lenses are worn each week for daily replacement and reusable (two-weekly and monthly replacement) lenses. ---------- Results: The cost-per-wear for two-weekly and monthly replacement spherical lenses is almost identical and decreases with increasing frequency of wear. The cost-per-wear of daily replacement spherical lenses is lower than that of reusable spherical lenses when worn one to four days per week, but higher when worn six or seven days per week. The point at which the cost-per-wear is virtually the same for all three spherical lens replacement frequencies (approximately AUD$3.00) is five days of lens wear per week. A similar but upwardly displaced (higher cost) pattern is observed for toric lenses, with the cross-over point occurring between three and four days of wear per week (AUD$4.80). Multifocal lenses have the highest price, with cross-over points for daily versus two-weekly replacement lenses at between four and five days of wear per week (AUD$5.00) and for daily versus monthly replacement lenses at three days per week (AUD$5.50). ---------- Conclusions: This cost-per-wear model can be used to assist practitioners and patients in making an informed decision in relation to the cost of contact lens wear, as one of many considerations that must be taken into account when deciding on the most suitable lens replacement modality.
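The underlying arithmetic can be sketched in a few lines of Python. The cost inputs below are hypothetical values chosen only to reproduce the reported spherical-lens cross-over (about AUD$3.00 at five days of wear per week); the paper's actual inputs are market figures not reproduced here:

```python
def cost_per_wear(annual_cost_aud, wears_per_week):
    """Annual cost (fees + lenses + solutions) divided by wears per year."""
    return annual_cost_aud / (wears_per_week * 52)

# Hypothetical annual inputs (AUD) for spherical lenses.
REUSABLE_ANNUAL = 780.0    # fixed cost: lenses, solutions, professional fees
DAILY_FIXED = 260.0        # professional fees only
DAILY_PER_LENS_PAIR = 2.0  # daily-disposable lens cost scales with use

for wears in range(1, 8):
    reusable = cost_per_wear(REUSABLE_ANNUAL, wears)
    daily = cost_per_wear(DAILY_FIXED, wears) + DAILY_PER_LENS_PAIR
    print(f"{wears} days/week: reusable ${reusable:.2f}, daily ${daily:.2f}")
```

Because the reusable cost is essentially fixed per year while the daily-disposable cost is dominated by a per-wear term, the reusable curve falls with wear frequency and crosses the flatter daily-disposable curve, here at five days per week.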

Relevance: 10.00%

Abstract:

Gel dosimeters are of increasing interest in the field of radiation oncology as the only truly three-dimensional integrating radiation dosimeter. A range of ferrous-sulphate and polymer gel dosimeters exists. To be of use, they must be water-equivalent. In itself, water equivalence relates to a dosimeter's radiological properties, which are determined by its composition. In the context of calibration of gel dosimeters, there is the added complexity of the calibration geometry: the presence of containment vessels may influence the dose absorbed. Five such methods of calibration are modelled here using the Monte Carlo method. It is found that the Fricke gel best matches water for most of the calibration methods, and that the best calibration method involves the use of a large tub into which multiple fields of different dose are directed. The least accurate calibration method involves the use of a long test tube along which a depth dose curve yields multiple calibration points.
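The sensitivity of a depth-dose calibration to imperfect water equivalence can be illustrated with a back-of-envelope primary-beam attenuation calculation. This is not the Monte Carlo model used in the study, and the attenuation coefficients below are hypothetical:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (cm^-1) at a single
# effective energy; a perfectly water-equivalent gel would match MU_WATER.
MU_WATER = 0.070
MU_GEL = 0.072

depth = np.linspace(0.0, 15.0, 31)           # cm along the test tube axis
primary_water = np.exp(-MU_WATER * depth)    # primary-photon depth dose
primary_gel = np.exp(-MU_GEL * depth)

# Relative error incurred by reading gel depth-dose points as if they
# were water depth-dose points: it grows with depth.
rel_error = (primary_gel - primary_water) / primary_water
print(f"error at surface: {rel_error[0]:.3f}, at 15 cm: {rel_error[-1]:.3f}")
```

The error grows with depth, which suggests one plausible reason a calibration whose points are spread along a depth-dose curve in a long test tube fares worst, while a large-tub method delivers each calibration dose under near-identical conditions.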

Relevance: 10.00%

Abstract:

Earlier studies have shown that the influence of fixation stability on bone healing diminishes with advanced age. The goal of this study was to unravel the relationship between mechanical stimulus and age on callus competence at a tissue level. Using 3D in vitro micro-computed tomography derived metrics, 2D in vivo radiography, and histology, we investigated the influences of age and varying fixation stability on callus size, geometry, microstructure, composition, remodeling, and vascularity. Four groups with a 1.5-mm osteotomy gap in the femora of Sprague–Dawley rats were compared: young rigid (YR), young semirigid (YSR), old rigid (OR) and old semirigid (OSR). The hypothesis was that calcified callus microstructure and composition would be impaired with advanced age, and that older individuals would show a reduced response to differing fixation stabilities. Semirigid fixations resulted in a larger ΔCSA (callus cross-sectional area) compared to rigid groups. In vitro μCT analysis at 6 weeks postmortem showed callus bridging scores in younger animals to be superior to those of their older counterparts (p < 0.01). Younger animals showed (i) larger callus strut thickness (p < 0.001), (ii) lower perforation in struts (p < 0.01), and (iii) higher mineralization of callus struts (p < 0.001). Callus mineralization was reduced in young animals with semirigid fracture fixation but remained unaffected in the aged group. While stability influenced callus size and geometry, age did not. Although no differences were observed in relative osteoid areas in the callus ROI, both old and semirigidly fixated animals showed a higher osteoclast count (p < 0.05). Blood vessel density was reduced in animals with semirigid fixation (p < 0.05). In conclusion, in vivo monitoring indicated delayed callus maturation in aged individuals. Callus bridging and callus competence (microstructure and mineralization) were impaired in individuals of advanced age. This matched the increased bone resorption due to higher osteoclast numbers. Varying fixator configurations in older individuals did not alter the dominant effect of advanced age on callus tissue mineralization, unlike in their younger counterparts. Age-associated influences appeared independent of stability. This study illustrates the dominating role of osteoclastic activity in age-related impaired healing, and demonstrates that optimising fixation parameters such as stiffness appears to be less effective in influencing healing in aged individuals.

Relevance: 10.00%

Abstract:

Many studies focused on the development of crash prediction models have resulted in aggregate crash prediction models to quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes (usually called target crashes), and so examination of crash types will lead to an improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle) are associated with road geometry, the environment, and traffic variables in different ways, and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) models and four negative binomial (NB) models. The analysis reveals that factors such as the annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median widths and the presence of lighting are negatively associated. For the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
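As a sketch of how such crash-type models are typically estimated (the column names are hypothetical, and this is not the paper's code), a negative binomial GLM for one crash type might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per intersection, with counts of each crash type and
# covariates such as AADT, turning lanes, driveways, median width
# and lighting (hypothetical column names).
df = pd.read_csv("intersections.csv")

# Negative binomial model for rear-end crashes; a Poisson model is
# obtained by swapping the family. The NB dispersion alpha is fixed
# here for simplicity; in practice it is estimated, e.g. with
# statsmodels' discrete-model NegativeBinomial.
nb_model = smf.glm(
    "rear_end ~ np.log(aadt) + turn_lanes + driveways + median_width + lighting",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),
).fit()
print(nb_model.summary())
```

Fitting one such model per crash type, plus one for total crashes, and comparing the signs and magnitudes of the coefficients is what reveals the type-specific associations described above.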

Relevance: 10.00%

Abstract:

Purpose: To investigate the influence of soft contact lenses on regional variations in corneal thickness and shape while taking account of natural diurnal variations in these corneal parameters. Methods: Twelve young, healthy subjects wore 4 different types of soft contact lenses on 4 different days. The lenses were of two different materials (silicone hydrogel, hydrogel), designs (spherical, toric) and powers (–3.00, –7.00 D). Corneal thickness and topography measurements were taken before and after 8 hours of lens wear, and on two days without lens wear, using the Pentacam HR system. Results: The hydrogel toric contact lens caused the greatest corneal thickening in both the central (20.3 ± 10.0 microns) and peripheral cornea (24.1 ± 9.1 microns) (p < 0.001), with an obvious regional swelling of the cornea beneath the stabilizing zones. The anterior corneal surface generally showed slight flattening. All contact lenses resulted in central posterior corneal steepening, and this was weakly correlated with central corneal swelling (p = 0.03) and peripheral corneal swelling (p = 0.01). Conclusions: There was an obvious regional corneal swelling apparent after wear of the hydrogel soft toric lenses, due to the location of the thicker stabilization zones of these lenses. However, with the exception of the hydrogel toric lens, the magnitude of corneal swelling induced by the contact lenses over the 8 hours of wear was less than the natural diurnal thinning of the cornea over this same period.

Relevance: 10.00%

Abstract:

Thoroughly revised and updated, this popular book provides a comprehensive yet easy to read guide to modern contact lens practice. Beautifully re-designed in a clean, contemporary layout, this second edition presents relevant and up-to-date information in a systematic manner, with a logical flow of subject matter from front to back. This book wonderfully captures the ‘middle ground’ in the contact lens field, somewhere between a dense research-based tome and a basic fitting guide. As such, it is ideally suited for both students and general eye care practitioners who require a practical, accessible and uncluttered account of the contact lens field.
Contents
Part 1 Introduction: Historical perspective; The anterior eye; Visual optics; Clinical instruments
Part 2 Soft contact lenses: Soft lens materials; Soft lens manufacture; Soft lens optics; Soft lens measurement; Soft lens design and fitting; Soft toric lens design and fitting; Soft lens care systems
Part 3 Rigid contact lenses: Rigid lens materials; Rigid lens manufacture; Rigid lens optics; Rigid lens measurement; Rigid lens design and fitting; Rigid toric lens design and fitting; Rigid lens care systems
Part 4 Lens replacement modalities: Unplanned lens replacement; Daily soft lens replacement; Planned soft lens replacement; Planned rigid lens replacement
Part 5 Special lenses and fitting considerations: Scleral lenses; Tinted lenses; Presbyopia; Continuous wear; Sport; Keratoconus; High ametropia; Paediatric fitting; Therapeutic applications; Post-refractive surgery; Post-keratoplasty; Orthokeratology; Diabetes
Part 6 Patient examination and management: History taking; Preliminary examination; Patient education; Aftercare; Complications; Digital imaging; Compliance; Practice management
Appendices; Index

Relevance: 10.00%

Abstract:

An investigation of cylindrical iron rods burning in pressurised oxygen under microgravity conditions is presented. It has been shown that, under similar experimental conditions, the melting rate of a burning, cylindrical iron rod is higher in microgravity than in normal gravity by a factor of 1.8 ± 0.3. This paper presents microanalysis of quenched samples obtained in a microgravity environment in a 2.0 s duration drop tower facility in Brisbane, Australia. These images indicate that the solid/liquid interface is highly convex in reduced gravity, compared to the planar geometry typically observed in normal gravity, which increases the contact area between liquid and solid phases by a factor of 1.7 ± 0.1. Thus, there is good agreement between the proportional increase in solid/liquid interface surface area and that in melting rate in microgravity. This indicates that the cause of the increased melting rate of cylindrical iron rods burning in microgravity is the altered geometry of the solid/liquid interface.
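The stated agreement can be checked with a quick propagation of the reported uncertainties (a consistency check on the abstract's own numbers, not additional data):

```python
import numpy as np

# Reported factors (mean +/- 1 sigma) for microgravity vs normal gravity.
rate, rate_sigma = 1.8, 0.3   # melting-rate increase
area, area_sigma = 1.7, 0.1   # solid/liquid interface area increase

diff = rate - area
diff_sigma = np.hypot(rate_sigma, area_sigma)  # ~0.32
print(f"difference = {diff:.1f} +/- {diff_sigma:.2f}")
# The difference is well within one combined standard deviation,
# consistent with the proportional-increase argument in the text.
```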

Relevance: 10.00%

Abstract:

Computer simulation has been widely accepted as an essential tool for the analysis of many engineering systems. It is nowadays perceived to be the most readily available and feasible means of evaluating operations in real railway systems. Based on practical experience and theoretical models developed in various applications, this paper describes the design of a general-purpose simulation system for train operations. Its prime objective is to provide a single comprehensive computer-aided engineering tool for most studies on railway operations, so that various aspects of railway systems with different operational characteristics can be investigated and analysed in depth. This system consists of three levels of simulation. The first is a single-train simulator, which calculates the running time of a train between specific points under different track geometry and traction conditions. The second is a dual-train simulator, which finds the minimum headway between two trains under different movement constraints, such as signalling systems. The third is a whole-system multi-train simulator, which carries out process simulation of the real operation of a railway system according to a practical or planned train schedule or headway, and produces an overall evaluation of system performance.
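The first level, point-to-point running time, reduces to numerical integration of the train's equation of motion. A minimal sketch is given below; the tractive-effort envelope and Davis-type resistance coefficients are hypothetical placeholders, not values from the system described:

```python
def running_time(distance_m, mass_kg, max_power_w, max_force_n,
                 grade=0.0, dt=0.5, v_limit=30.0):
    """Toy single-train simulator: integrate the equation of motion
    between two points and return the running time in seconds."""
    g = 9.81
    t, s, v = 0.0, 0.0, 0.0
    while s < distance_m:
        # Tractive effort: constant-force region, then constant-power.
        f_trac = min(max_force_n, max_power_w / max(v, 0.1))
        # Davis-type running resistance (hypothetical coefficients).
        resistance = 2000.0 + 30.0 * v + 5.0 * v * v
        # Grade force (a positive grade opposes motion).
        f_grade = mass_kg * g * grade
        a = (f_trac - resistance - f_grade) / mass_kg
        v = min(max(v + a * dt, 0.0), v_limit)  # enforce the speed limit
        s += v * dt
        t += dt
    return t

t = running_time(distance_m=2000.0, mass_kg=4.0e5,
                 max_power_w=3.0e6, max_force_n=3.0e5)
print(f"running time: {t:.0f} s")
```

The dual-train (headway) level then re-runs this kind of integration for a following train subject to the signalling constraints imposed by the leading train's trajectory.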

Relevance: 10.00%

Abstract:

With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation is now proven to be the cheapest means to carry out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system, and it is more appropriate to regard those applications as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design; maintainability and modularity for easy understanding and further development, and portability across various hardware platforms, are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is, in particular, given to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various ‘what-if’ issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.