91 results for Very long path length

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

The majority of tertiary practice-led creative arts disciplines became part of the Australian university system as a result of the creation of the Unified National System of tertiary education in 1988. Over the past two decades, research has grown as the yardstick by which academic performance in the Australian university sector is recognised and rewarded. Academics in artistic disciplines, who have struggled to adapt to a culture and workload expectations different from those of their previous, predominantly teaching-based, employment, continue to see their research under-valued within the established evaluation framework. Despite a late-1990s Australian government-funded inquiry, many of the inequities remain. While the Excellence in Research for Australia (ERA) exercise has acknowledged the non-text outputs of artist-academics in its evaluation of 'research outcomes', much of the process remains resolutely framed by measures that work against creative arts researchers.

Relevance:

100.00%

Publisher:

Abstract:

Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the evolution of combustion aerosols. Specifically, these mechanisms appear to be inadequate to explain the observed particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Due to these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles presumably containing several components such as water, oil, volatile compounds and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source. Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems comprising two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, by determining the Gibbs free energy difference between the coagulated and fragmented states and comparing this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle.
It is shown that the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state. In particular, if this distance is larger than the mean free path of the droplets in air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the completely encapsulated state is highly unlikely, because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and that this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that for some typical components found in aerosols this transformation could take place on time scales of less than 20 s. The analysis showed that factors influencing surface and interfacial tension played an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The results obtained will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
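The central comparison in this analysis lends itself to a short numerical illustration. The Python sketch below computes the Gibbs free energy difference between the fragmented state and the completely encapsulated state from the droplet surface areas and the surface/interfacial tensions, and expresses it in units of kT. The tension values and droplet sizes are illustrative assumptions, not the thesis's parameters, and the spherical-cap geometry of partial encapsulation is omitted.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sphere_area(volume):
    """Surface area of a sphere of given volume (m^3)."""
    r = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    return 4.0 * np.pi * r**2

def dG_complete_encapsulation(v1, v2, s1a, s2a, s12):
    """Gibbs free energy difference (fragmented - encapsulated), in J.

    v1, v2   : droplet volumes (m^3); droplet 1 sits fully inside droplet 2
    s1a, s2a : surface tensions of liquids 1 and 2 against air (N/m)
    s12      : interfacial tension between the two liquids (N/m)
    """
    g_coag = s12 * sphere_area(v1) + s2a * sphere_area(v1 + v2)
    g_frag = s1a * sphere_area(v1) + s2a * sphere_area(v2)
    return g_frag - g_coag

# Illustrative numbers only: two 5 nm radius droplets, water/oil-like tensions.
v = 4.0 / 3.0 * np.pi * (5e-9) ** 3
dG = dG_complete_encapsulation(v, v, s1a=0.072, s2a=0.030, s12=0.050)
T = 300.0
print(f"dG = {dG:.3e} J = {dG / (K_B * T):.1f} kT "
      "(fragmentation is plausible only when dG is of order kT or below)")
```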

Relevance:

100.00%

Publisher:

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-and-under age groups, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal to noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One of the artefacts observed is the step artefact, believed to arise from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method.
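As a concrete illustration of the two segmentation approaches just described, the following Python sketch applies intensity thresholding and Canny edge detection to a single 2D slice using scikit-image. The Otsu fallback, the smoothing parameter and the synthetic test slice are assumptions made for the example, not the study's actual settings.

```python
import numpy as np
from skimage import filters, feature, measure

def threshold_slice(slice_2d, level=None):
    """Binary bone mask by intensity thresholding.

    If no level is given, fall back to Otsu's method; the study instead
    selects levels per region (proximal/diaphyseal/distal) for the
    multilevel variant, or one visually chosen level for the whole bone.
    """
    if level is None:
        level = filters.threshold_otsu(slice_2d)
    return slice_2d > level

def canny_contours(slice_2d, sigma=2.0):
    """Outer/inner bone contours from Canny edges, as coordinate arrays."""
    edges = feature.canny(slice_2d, sigma=sigma)
    # Each contour (outer cortex, medullary canal) comes back as an
    # (N, 2) array of row/column coordinates, ready for 3D stacking.
    return measure.find_contours(edges.astype(float), 0.5)

# Usage on a synthetic slice: a bright ring mimicking cortical bone.
yy, xx = np.mgrid[-64:64, -64:64]
r = np.hypot(xx, yy)
ct_slice = ((r > 20) & (r < 35)).astype(float) + 0.05 * np.random.rand(128, 128)
mask = threshold_slice(ct_slice, level=0.5)
contours = canny_contours(ct_slice)
print(mask.sum(), "bone pixels;", len(contours), "contours found")
```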
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T images to 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner, and was corrected using an iterative closest point (ICP) based alignment method. The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, a statistically significant difference in accuracy between the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared to 0.18 mm for the CT-based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
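The ICP-based step correction can be sketched compactly. The Python example below is a minimal, self-contained ICP (nearest-neighbour matching plus a Kabsch/SVD rigid fit) applied to a point cloud with a simulated 1 mm step; it illustrates the alignment principle only and is not the thesis's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-8):
    """Rigidly align `source` (N,3) to `target` (M,3) by iterative
    closest point: match nearest neighbours, then solve for the best
    rotation/translation with the Kabsch (SVD) method."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)          # closest target point per source point
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                   # proper rotation (det = +1)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err

# Usage: re-align a surface segment offset by a simulated 1 mm "step".
target = np.random.rand(500, 3) * 50.0       # mm, stand-in bone surface points
source = target + np.array([0.0, 0.0, 1.0])  # stepped copy
aligned, residual = icp(source, target)
print(f"mean residual after ICP: {residual:.4f} mm")
```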

Relevance:

100.00%

Publisher:

Abstract:

A generic method for the synthesis of metal-7,7,8,8-tetracyanoquinodimethane (TCNQ) charge-transfer complexes on both conducting and nonconducting substrates is achieved by photoexcitation of TCNQ in acetonitrile in the presence of a sacrificial electron donor and the relevant metal cation. The photochemical reaction leads to reduction of TCNQ to the TCNQ^- monoanion. In the presence of M^x+ (MeCN), reaction with TCNQ^- (MeCN) leads to deposition of M^x+[TCNQ]_x crystals onto a solid substrate with morphologies that are dependent on the metal cation. Thus, CuTCNQ phase I photocrystallizes as uniform microrods, KTCNQ as microrods with a random size distribution, AgTCNQ as very long nanowires up to 30 μm in length and with diameters of less than 180 nm, and Co[TCNQ]_2(H_2O)_2 as nanorods and wires. The described charge-transfer complexes have been characterized by optical and scanning electron microscopy and IR and Raman spectroscopy. The CuTCNQ and AgTCNQ complexes are of particular interest for use in memory storage and switching devices. In principle, this simple technique can be employed to generate all classes of metal-TCNQ complexes and opens up the possibility to pattern them in a controlled manner on any type of substrate.

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Total scatter factor (or output factor) in megavoltage photon dosimetry is a measure of relative dose, relating the dose for a given field size to that for a reference field size. The use of solid phantoms is well established for output factor measurements; however, to date these phantoms have not been tested with small fields. In this work, we evaluate the water equivalency of a number of solid phantoms for small field output factor measurements using the EGSnrc Monte Carlo code. Methods: The following small square field sizes were simulated using BEAMnrc: 5, 6, 7, 8, 10 and 30 mm. Each simulated phantom geometry was created in DOSXYZnrc and consisted of a silicon diode (of length and width 1.5 mm and depth 0.5 mm) embedded in the phantom at a depth of 5 g/cm^2. The source-to-detector distance was 100 cm for all simulations. The dose was scored in a single voxel at the location of the diode. Interaction probabilities and radiation transport parameters for each material were created using custom PEGS4 files. Results: A comparison of the resultant output factors in the solid phantoms with the same factors in a water phantom is shown in Fig. 1. The statistical uncertainty in each point was less than or equal to 0.4%. The results in Fig. 1 show that the density of the phantoms affected the output factor results, with higher density materials (such as PMMA) producing higher output factors. It was also found that scaling the depth for equivalent path length had a negligible effect on the output factor results at these field sizes. Discussion and conclusions: Electron stopping power and photon mass energy absorption change minimally with small field size [1]. Also, it can be seen from Fig. 1 that the difference from water decreases with increasing field size. Therefore, the most likely cause of the observed discrepancies in output factors is differing electron disequilibrium as a function of phantom density. When measuring small field output factors in a solid phantom, it is important that its density is very close to that of water.
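Since the output factor is simply the ratio of the scored dose for each field to that of the reference field, the calculation, with first-order propagation of the Monte Carlo uncertainties, can be sketched in a few lines of Python. The dose values below are illustrative placeholders, not the simulated results.

```python
import numpy as np

def output_factor(dose_field, dose_ref, sigma_field, sigma_ref):
    """Total scatter factor = dose(field) / dose(reference field),
    with first-order propagation of the relative MC uncertainties."""
    of = dose_field / dose_ref
    rel = np.sqrt((sigma_field / dose_field) ** 2 + (sigma_ref / dose_ref) ** 2)
    return of, of * rel

# Illustrative voxel doses (Gy per primary particle) for the simulated
# square fields, with the 30 mm field as the reference.
fields_mm = [5, 6, 7, 8, 10, 30]
doses = np.array([2.10e-16, 2.45e-16, 2.72e-16, 2.95e-16, 3.30e-16, 4.00e-16])
sigmas = 0.004 * doses                      # <= 0.4% statistical uncertainty
for f, d, s in zip(fields_mm, doses, sigmas):
    of, u = output_factor(d, doses[-1], s, sigmas[-1])
    print(f"{f:>2} mm field: OF = {of:.3f} +/- {u:.3f}")
```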

Relevance:

100.00%

Publisher:

Abstract:

Brain connectivity analyses are increasingly popular for investigating brain organization. Many connectivity measures have been proposed; path length, for example, is generally defined as the number of nodes traversed to connect one node in a graph to another. Despite its name, path length is purely topological and does not take into account the physical length of the connections. The distance of the trajectory may also be highly relevant, but is typically overlooked in connectivity analyses. Here we combined genotyping, anatomical MRI and HARDI to understand how our genes influence cortical connections, using whole-brain tractography. We defined a new measure, based on Dijkstra's algorithm, to compute path lengths for tracts connecting pairs of cortical regions. We compiled these measures into matrices whose elements represent the physical distance traveled along tracts. We then analyzed a large cohort of healthy twins and show that our path length measure is reliable, heritable, and influenced even in young adults by the Alzheimer's risk gene, CLU.
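The measure rests on a standard shortest-path computation in which edge weights are physical tract lengths rather than hop counts. A minimal Python sketch follows; the toy graph and its lengths are invented for illustration, and the construction of the graph from tractography is omitted.

```python
import heapq
import numpy as np

def dijkstra_lengths(n, edges, source):
    """Shortest physical distance (e.g. mm along tracts) from `source`
    to every node. `edges[u]` is a list of (v, length_mm) pairs."""
    dist = np.full(n, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale queue entry
        for v, w in edges[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy graph: 4 cortical regions, weights = tract lengths in mm.
edges = {0: [(1, 42.0), (2, 65.0)], 1: [(0, 42.0), (3, 30.0)],
         2: [(0, 65.0), (3, 12.0)], 3: [(1, 30.0), (2, 12.0)]}
D = np.vstack([dijkstra_lengths(4, edges, s) for s in range(4)])
print(D)   # symmetric matrix of physical path lengths between regions
```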

Relevance:

100.00%

Publisher:

Abstract:

The third edition of the Australian Standard AS1742 Manual of Uniform Traffic Control Devices Part 7 provides a method of calculating the sighting distance required to safely proceed at passive level crossings, based on the physics of moving vehicles. This required distance becomes greater with higher line speeds and slower, heavier vehicles, so the method can return quite a long sighting distance. However, at such distances there are also concerns about whether drivers can reliably identify a train in order to make an informed decision about whether it is safe to proceed across the level crossing. To determine whether drivers are able to make reliable judgements in these circumstances, this study assessed the distance at which a train first becomes identifiable to a driver, as well as drivers' ability to detect the movement of the train. A site was selected in Victoria, and 36 participants with good visual acuity observed 4 trains in the 100-140 km/h range. While most participants could detect the train from a very long distance (2.2 km on average), they could only detect that the train was moving at much shorter distances (1.3 km on average). Large variability was observed between participants, with 4 participants consistently detecting trains later than the others. Participants tended to improve in their capacity to detect the presence of the train with practice, but a similar trend was not observed for detection of the movement of the train. Participants were consistently poor at judging the approach speed of trains, with large underestimations at all investigated distances.
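To see why a physics-based method can return long sighting distances, consider a simplified version of the calculation: the driver must see the train from at least the distance it covers while a vehicle starting from rest clears the crossing. The Python sketch below uses this simplified kinematic model; the acceleration, reaction time and vehicle dimensions are assumed round numbers, and this is not the exact AS1742.7 formula.

```python
def sighting_distance(train_speed_kmh, crossing_length_m, vehicle_length_m,
                      accel_ms2=0.6, reaction_s=2.5):
    """Sight distance (m) along the track so that a vehicle starting
    from rest clears the crossing before the train arrives.

    Simplified kinematics (s = 0.5*a*t^2), not the exact AS1742.7 method.
    """
    travel_m = crossing_length_m + vehicle_length_m
    clear_time_s = reaction_s + (2.0 * travel_m / accel_ms2) ** 0.5
    return train_speed_kmh / 3.6 * clear_time_s

# A 19 m articulated truck at a 7 m wide crossing with a 140 km/h train:
print(f"{sighting_distance(140, 7.0, 19.0):.0f} m required")
```

With these assumed values the required sighting distance is roughly 460 m for a single heavy vehicle, and grows quickly for slower or longer vehicles, which is why the distances in question approach the limits of reliable train identification.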

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To investigate whether wearing different presbyopic vision corrections alters the pattern of eye and head movements when viewing and responding to driving-related traffic scenes. Methods: Participants included 20 presbyopes (mean age: 56.1 ± 5.7 years) who had no experience of wearing presbyopic vision corrections apart from single vision (SV) reading spectacles. Each participant wore five different vision corrections: distance SV lenses, progressive addition spectacle lenses (PAL), bifocal spectacle lenses (BIF), monovision (MV) and multifocal contact lenses (MTF CL). For each visual condition, participants were required to view videotape recordings of traffic scenes, track a reference vehicle, and identify a series of peripherally presented targets. Digital numerical display panels were also included as near visual stimuli (simulating the visual displays of a vehicle speedometer and radio). Eye and head movements were measured, and the accuracy of target recognition was recorded. Results: The path length of eye movements while viewing and responding to driving-related traffic scenes was significantly longer when wearing BIF and PAL than MV and MTF CL (both p ≤ 0.013). The path length of head movements was greater with SV, BIF and PAL than MV and MTF CL (all p < 0.001). Target recognition and brake response times were not significantly affected by vision correction; however, target recognition was less accurate when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze (p = 0.008), regardless of vision correction. Conclusions: Different presbyopic vision corrections alter eye and head movement patterns. The longer path length of eye and head movements and the greater number of saccades associated with the spectacle presbyopic corrections may affect some aspects of driving performance.
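The path length reported here is the cumulative distance travelled by the gaze (or head) position over a trial. Below is a minimal Python sketch of this metric, plus a crude velocity-threshold saccade count; the sampling rate, velocity threshold and synthetic trace are assumptions for illustration, not the faceLAB® processing actually used in the study.

```python
import numpy as np

def scan_path_length(gaze_xy):
    """Total path length of a gaze trace: sum of Euclidean distances
    between consecutive samples. `gaze_xy` is (N, 2), e.g. in degrees."""
    steps = np.diff(gaze_xy, axis=0)
    return np.hypot(steps[:, 0], steps[:, 1]).sum()

def count_saccades(gaze_xy, rate_hz=60.0, vel_thresh_deg_s=30.0):
    """Crude saccade count: runs of samples whose velocity exceeds a
    threshold (a common heuristic, not the study's algorithm)."""
    vel = np.hypot(*np.diff(gaze_xy, axis=0).T) * rate_hz
    fast = vel > vel_thresh_deg_s
    return int(np.sum(fast[1:] & ~fast[:-1]))   # rising edges

# Usage on a synthetic trace: fixation, one 10-degree saccade, fixation.
trace = np.vstack([np.zeros((30, 2)),
                   np.linspace([0, 0], [10, 0], 5),
                   np.full((30, 2), [10, 0])])
print(scan_path_length(trace), "deg travelled,", count_saccades(trace), "saccade")
```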

Relevance:

100.00%

Publisher:

Abstract:

A major focus of research in nanotechnology is the development of novel, high-throughput techniques for the fabrication of arbitrarily shaped surface nanostructures of sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though this has typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology. In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances the surface diffusion of adparticles so that they rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of a sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface, and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate and heated by interfering laser beams (at optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation.
Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to this technique where a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for the theoretical investigation of surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, demonstrating practically complete adparticle redistribution into the low temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from the solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare the predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
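The essence of the thermal tweezers effect can be captured in a toy lattice Monte Carlo: non-interacting adparticles hop with an Arrhenius probability set by the local, sinusoidally modulated surface temperature, and accumulate in the cool regions. The Python sketch below uses invented parameter values (barrier height, temperature amplitude, lattice size) purely for illustration, and implements none of the MCIM/RPWM interaction machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
K_B = 8.617e-5                  # Boltzmann constant, eV/K

def local_T(x, n_sites, T0=300.0, dT=100.0):
    """Sinusoidal surface temperature along the lattice (one period)."""
    return T0 + dT * np.sin(2.0 * np.pi * x / n_sites)

def simulate(n_sites=60, n_particles=2000, steps=10000, E_d=0.05):
    """Non-interacting adparticles hopping on a 1D lattice with
    Arrhenius rates exp(-E_d / kT(x)): hot regions empty out because
    escape from them is faster than escape from cool regions."""
    x = rng.integers(0, n_sites, n_particles)
    for _ in range(steps):
        p_hop = np.exp(-E_d / (K_B * local_T(x, n_sites)))
        hop = rng.random(n_particles) < p_hop
        step = rng.choice([-1, 1], n_particles)
        x = np.where(hop, (x + step) % n_sites, x)
    return x

x = simulate()
hot = np.sin(2 * np.pi * x / 60) > 0
print(f"{(~hot).mean():.0%} of adparticles end in the cool half-period")
```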

Relevance:

100.00%

Publisher:

Abstract:

Purpose: First-eye cataract surgery can reduce the rate of falls among older adults, yet the effect of second-eye surgery on the rate of falling remains unclear. The present study investigated the effect of monocular and binocular simulated cataract blur on postural stability among older adults. Methods: Postural stability was assessed in 34 healthy older adults (mean 68.2 years, SD 3.5) with normal vision, using a portable force platform (BT4, HUR Labs, Finland) which collected data on centre of pressure (COP) displacement. Stability was assessed on firm and foam surfaces under four binocular viewing conditions, using Vistech filters to simulate cataract blur: (1) best-corrected vision in both eyes; (2) blur over the non-dominant eye; (3) blur over the dominant eye; and (4) blur over both eyes. Binocular logMAR visual acuity, Pelli-Robson contrast sensitivity and stereoacuity were also measured under these viewing conditions, and ocular dominance was measured using the hole-in-card test. Generalized estimating equations with an exchangeable correlation structure examined the effect of the surface and vision conditions on postural stability. Results: Visual acuity and contrast sensitivity were significantly reduced under monocular and binocular cataract blur compared to normal viewing. All blur conditions resulted in loss of stereoacuity. Binocular cataract blur significantly reduced postural stability compared to normal vision on the firm (COP path length; p=0.013) and foam surfaces (anterior-posterior COP RMS, COP path length and COP area; p<0.01). However, no significant differences in postural stability were found between the monocular blur conditions and normal vision, or between the dominant and non-dominant monocular blur conditions, on either the firm or foam surfaces. Conclusions: The findings indicate that binocular blur significantly impairs postural stability, and suggest that improvements in postural stability may justify first-eye cataract surgery, particularly during somatosensory disruption. Postural stability was not significantly impaired in the monocular cataract blur conditions compared to the normal vision condition, nor was there any effect of ocular dominance on postural stability in the presence of monocular cataract blur.

Relevance:

100.00%

Publisher:

Abstract:

Presbyopia affects individuals from the age of 45 years onwards, resulting in difficulty in accurately focusing on near objects. Many optical corrections are available, including spectacles and contact lenses, designed to enable presbyopes to see clearly at both far and near distances. However, presbyopic vision corrections also disturb aspects of visual function under certain circumstances, and the impact of these changes on activities of daily living, such as driving, is poorly understood. Therefore, the aim of this study was to determine which aspects of driving performance might be affected by wearing different types of presbyopic vision corrections. To achieve this aim, three experiments were undertaken. The first experiment involved administration of a questionnaire to compare the subjective driving difficulties experienced when wearing a range of common presbyopic contact lens and spectacle corrections. The questionnaire was developed and piloted, and included a series of items regarding difficulties experienced while driving under daytime and night-time conditions. Two hundred and fifty-five presbyopic patients responded to the questionnaire and were categorised into five groups: those wearing no vision correction for driving (n = 50), bifocal spectacles (BIF, n = 54), progressive addition lens spectacles (PAL, n = 50), monovision (MV, n = 53) and multifocal contact lenses (MTF CL, n = 48). Overall, ratings of satisfaction during daytime driving were relatively high for all correction types. However, MV and MTF CL wearers were significantly less satisfied with aspects of their vision during night-time than daytime driving, particularly with regard to disturbances from glare and haloes. Progressive addition lens wearers noticed more distortion of peripheral vision, while BIF wearers reported more difficulties with tasks requiring changes in focus, and those who wore no vision correction for driving reported problems with intermediate and near tasks. Overall, the mean level of satisfaction for daytime driving was quite high for all of the groups (over 80%), with the BIF wearers being the least satisfied with their vision for driving. Conversely, at night, MTF CL wearers expressed the least satisfaction. Eye and head movements have become of increasing interest in driving research, as they provide a means of understanding how the driver responds to visual stimuli in traffic. Previous studies have found that wearing PAL can affect eye and head movement performance, resulting in slower eye movement velocities and longer times to stabilize the gaze for fixation. These changes in eye and head movement patterns may have implications for driving safety, given that the visual tasks of driving include a range of dynamic search tasks. Therefore, the second study was designed to investigate the influence of different presbyopic corrections on driving-related eye and head movements under standardised laboratory-based conditions. Twenty presbyopes (mean age: 56.1 ± 5.7 years) who had no experience of wearing presbyopic vision corrections, apart from single vision reading spectacles, were recruited. Each participant wore five different types of vision correction: single vision distance lenses (SV), PAL, BIF, MV and MTF CL.
For each visual condition, participants were required to view videotape recordings of traffic scenes, track a reference vehicle and identify a series of peripherally presented targets while their eye and head movements were recorded using the faceLAB® eye and head tracking system. Digital numerical display panels were also included as near visual stimuli (simulating the visual displays of a vehicle speedometer and radio). The results demonstrated that the path length of eye movements while viewing and responding to driving-related traffic scenes was significantly longer when wearing BIF and PAL than MV and MTF CL. The path length of head movements was greater with SV, BIF and PAL than MV and MTF CL. Target recognition was less accurate when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze, regardless of vision correction type. The third experiment aimed to investigate the real-world driving performance of presbyopes wearing different vision corrections, measured on a closed-road circuit at night-time. Eye movements were recorded using the ASL Mobile Eye tracking system (as the faceLAB® system proved to be impractical for use outside the laboratory). Eleven participants (mean age: 57.25 ± 5.78 years) were fitted with four types of prescribed vision corrections (SV, PAL, MV and MTF CL). The measures of driving performance on the closed-road circuit included sign recognition distance, near target recognition, peripheral light-emitting diode (LED) recognition, recognition and avoidance of low-contrast road hazards, recognition of all the road signs, time to complete the course, and driving behaviours such as braking, accelerating and cornering. The results demonstrated that night-time driving performance was most affected by MTF CL compared to PAL, resulting in shorter sign recognition distances, slower driving speeds and longer times spent fixating road signs. Monovision resulted in worse sign recognition distance than SV and PAL. The SV condition resulted in significantly more errors in interpreting information from in-vehicle devices, despite participants spending longer fixating on these devices. Progressive addition lenses were ranked as the most preferred vision correction, while MTF CL were the least preferred for night-time driving. This thesis addressed the research question of how presbyopic vision corrections affect driving performance, and the results of the three experiments demonstrated that the different types of presbyopic vision corrections (e.g. BIF, PAL, MV and MTF CL) affect driving performance in different ways. Distance-related driving tasks showed reduced performance with MV and MTF CL, while tasks involving viewing of in-vehicle devices were significantly hampered by wearing SV corrections. Wearing spectacles such as SV, BIF and PAL induced greater eye and head movements in the simulated driving condition; however, this did not directly translate to impaired performance on the closed-road circuit tasks. These findings are important for understanding the influence of presbyopic vision corrections on vision under real-world driving conditions.
They will also assist eye care practitioners to understand, and to convey to patients, the potential driving difficulties associated with wearing certain types of presbyopic vision corrections, and accordingly to support the process of matching patients to optical corrections that meet their visual needs.

Relevance:

100.00%

Publisher:

Abstract:

The structure-building phenomena within clay aggregates are governed by forces acting between clay particles. Measurements of such forces are important for understanding how to manipulate the aggregate structure for applications such as dewatering of mineral processing tailings. A parallel particle orientation is required both when conducting XRD investigations on oriented samples and when measuring the forces acting between the basal planes of clay mineral platelets using atomic force microscopy (AFM). To investigate how smectite clay platelets are oriented on a silicon wafer substrate when dried from suspension, a range of methods including SEM, XRD and AFM were employed. From these investigations, we conclude that high clay concentrations and larger particle diameters (up to 5 μm) in suspension result in random orientation of platelets on the substrate. The best laminar orientation in the dry clay film, represented by an XRD 001/020 intensity ratio of 47, was obtained by drying thin layers from 0.02 wt.% clay suspensions at natural pH. The AFM investigations show that the smectite studied in water-based electrolytes exhibits very long-range repulsive forces that are weaker than the electrostatic forces arising from double-layer repulsion. It is suggested that these forces may be structural in nature. Smectite surface layers rehydrate in a water environment, forming a surface gel with a spongy, cellular texture that cushions the approaching AFM probe. This structural effect can be measured at distances greater than 1000 nm from the substrate surface, and when the probe penetrates this gel layer, structural linkages form between the substrate and the clay-covered probe. These linkages subsequently prevent smooth detachment of the AFM probe on retraction; the tearing apart of this newly formed structure gives rise to the larger adhesion-like forces measured on retraction. It is also suggested that these effects may be enhanced by interactions of the nano-scale clay particles.

Relevance:

100.00%

Publisher:

Abstract:

Recently it has been shown that consumption of a diet high in saturated fat is associated with impaired insulin sensitivity and an increased incidence of type 2 diabetes. In contrast, diets that are high in monounsaturated fatty acids (MUFAs) or polyunsaturated fatty acids (PUFAs), especially very long chain n-3 fatty acids (FAs), are protective against disease. However, the molecular mechanisms by which saturated FAs induce the insulin resistance and hyperglycaemia associated with metabolic syndrome and type 2 diabetes are not clearly defined. It is possible that saturated FAs act through alternative mechanisms, compared to MUFAs and PUFAs, to regulate hepatic gene expression and metabolism. It is proposed that, like MUFAs and PUFAs, saturated FAs regulate the transcription of target genes. To test this hypothesis, hepatic gene expression analysis was undertaken in a human hepatoma cell line, Huh-7, after exposure to the saturated FA palmitate. These experiments showed that palmitate is an effective regulator of the expression of a wide variety of genes: a total of 162 genes were differentially expressed in response to palmitate. These changes not only affected the expression of genes related to nutrient transport and metabolism, they also extended to other cellular functions including cytoskeletal architecture, cell growth, protein synthesis and the oxidative stress response. In addition, this thesis has shown that palmitate exposure altered the expression patterns of several genes that have previously been identified in the literature as markers of risk of disease development, including CVD, hypertension, obesity and type 2 diabetes. The genes whose altered expression patterns are associated with an increased risk of disease include apolipoprotein-B100 (apo-B100), apo-CIII, plasminogen activator inhibitor 1, insulin-like growth factor-I and insulin-like growth factor binding protein 3. This thesis reports the first observation that palmitate directly signals in cultured human hepatocytes to regulate the expression of genes involved in energy metabolism as well as other important genes. Prolonged exposure to long-chain saturated FAs reduces glucose phosphorylation and glycogen synthesis in the liver. Decreased glucose metabolism leads to elevated rates of lipolysis, resulting in increased release of free FAs. Free FAs have a negative effect on insulin action in the liver, which in turn results in increased gluconeogenesis and systemic dyslipidaemia. It has been postulated that disruption of glucose transport and insulin secretion by prolonged excessive FA availability might be a non-genetic factor contributing to the staggering rise in the prevalence of type 2 diabetes. As glucokinase (GK) is a key regulatory enzyme of hepatic glucose metabolism, changes in its activity may alter flux through the glycolytic and de novo lipogenic pathways and result in hyperglycaemia and ultimately insulin resistance. This thesis investigated the effects of saturated FAs on the promoter activity of the glycolytic enzyme GK, and on various transcription factors that may influence the regulation of GK gene expression. These experiments showed that the saturated FA palmitate is capable of decreasing GK promoter activity. In addition, quantitative real-time PCR showed that palmitate incubation may also regulate GK gene expression through a known FA-sensitive transcription factor, sterol regulatory element binding protein-1c (SREBP-1c), which upregulates GK transcription.
To parallel the investigations into the mechanisms of FA molecular signalling, further studies of the effect of FAs on metabolic pathway flux were performed. Although certain FAs reduce SREBP-1c transcription in vitro, it is unclear whether this results in decreased GK activity in vivo, where positive effectors of SREBP-1c such as insulin are also present. Under these conditions, it is uncertain whether the inhibitory effects of FAs would be overcome by insulin. The effects of a combination of FAs, insulin and glucose on glucose phosphorylation and metabolism in cultured primary rat hepatocytes, at concentrations that mimic those in the portal circulation after a meal, were examined. It was found that total GK activity was unaffected by an increased concentration of insulin, but palmitate and eicosapentaenoic acid significantly lowered total GK activity in the presence of insulin. Despite the fact that total GK enzyme activity was reduced in response to FA incubation, GK enzyme translocation from the inactive, nuclear-bound state to the active, cytoplasmic state was unaffected. Interestingly, none of the FAs tested inhibited glucose phosphorylation or the rate of glycolysis when insulin was present. These results suggest that in the presence of insulin the levels of active, unbound cytoplasmic GK are sufficient to buffer a slight decrease in GK enzyme activity and the decreased promoter activity caused by FA exposure. Although a high-fat diet has been associated with impaired hepatic glucose metabolism, there is no evidence from this thesis that FAs themselves directly modulate flux through the glycolytic pathway in isolated primary hepatocytes when insulin is also present. Therefore, although FAs affected the expression of a wide range of genes, including GK, this did not affect glycolytic flux in the presence of insulin. However, it is possible that a saturated FA-induced decrease in GK enzyme activity, when combined with the onset of insulin resistance, may promote the dysregulation of glucose homeostasis and the subsequent development of hyperglycaemia, metabolic syndrome and type 2 diabetes.

Relevance:

100.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode in a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard. The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
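The Energy Based Detector described above admits a compact sketch: band-pass the measured signal around the mode of interest, compute windowed energies, and alarm when the energy exceeds a threshold derived from baseline statistics. The Python example below is such a sketch under invented assumptions (a 0.2-0.8 Hz mode band, 10 s windows, a mean-plus-4-sigma threshold); it is not the thesis's statistical model.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def modal_energy(signal, fs, band, win_s=10.0):
    """Energy of the band-passed signal in consecutive windows.

    `band` brackets the oscillating mode of interest (Hz); inter-area
    modes typically sit below 1 Hz, so 0.2-0.8 Hz is assumed here.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filt = sosfiltfilt(sos, signal)
    n = int(win_s * fs)
    windows = filt[: len(filt) // n * n].reshape(-1, n)
    return (windows ** 2).sum(axis=1)

def energy_detector(signal, fs, band=(0.2, 0.8), n_baseline=20, k=4.0):
    """Flag windows whose modal energy exceeds mean + k*std of the
    baseline windows (the 'normal operation' statistical model)."""
    e = modal_energy(signal, fs, band)
    thresh = e[:n_baseline].mean() + k * e[:n_baseline].std()
    return np.nonzero(e > thresh)[0], thresh

# Synthetic test: ambient noise, then a poorly damped 0.5 Hz mode.
fs, t = 10.0, np.arange(0, 600, 1 / 10.0)
x = 0.1 * np.random.randn(t.size)
x[t > 400] += 0.5 * np.sin(2 * np.pi * 0.5 * t[t > 400])
alarms, thr = energy_detector(x, fs)
print("alarm windows:", alarms)   # expect indices past the 40th window
```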

Relevance:

100.00%

Publisher:

Abstract:

This is the first in a series of four articles which will explore different aspects of air pollution, its impact on health, and the challenges in defining the boundaries between impact and non-impact on health. Hardly a new topic, one might say. Indeed, it’s been an issue for centuries, millennia even! For example, Pliny the Elder (AD 23-79), a Roman officer and author of the ‘Natural History’, recommended that: “…quarry slaves from asbestos mines not be purchased because they die young”, and suggested: “…the use of a respirator, made of transparent bladder skin, to protect workers from asbestos dust.” Closer to modern times, a Danish proverb states: "Fresh air impoverishes the doctor". While none of these statements is an air quality guideline in the modern sense, they do illustrate that, for a very long time, we have known that there is a link between air quality and health, and that some measures were taken to reduce the impact of exposure to pollutants. Obviously, we are much more sophisticated now!