894 results for POINT IMAGING TECHNIQUE
Abstract:
In this study, the authors propose a novel video stabilisation algorithm for mobile platforms with moving objects in the scene. The quality of video obtained from mobile platforms, such as unmanned airborne vehicles, suffers from jitter caused by several factors. To remove this undesired jitter, accurate estimation of global motion is essential. However, it is difficult to estimate global motion accurately from mobile platforms because of increased estimation error and noise, and large moving objects in the scene contribute further to the estimation error. Very few motion estimation algorithms have been developed for video collected from mobile platforms, and this paper shows that these algorithms fail when there are large moving objects in the scene. A theoretical proof is provided which demonstrates that the use of delta optical flow can improve the robustness of video stabilisation in the presence of large moving objects. The authors also propose using sorted arrays of local motions, together with feature-point selection, to separate outliers from inliers. The proposed algorithm is tested on six video sequences: one collected from a fixed platform, four from mobile platforms and one synthetic video, of which three contain large moving objects. Experiments show that the proposed algorithm performs well on all of these sequences.
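The abstract does not give the delta-optical-flow formulation itself, but the outlier-separation step it describes can be sketched. Below is a minimal Python illustration of estimating a global translation from sorted arrays of local motion vectors by keeping only the central band of each sorted component (a trimmed mean); the function name, the inlier fraction and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def robust_global_motion(local_motions, inlier_fraction=0.5):
    """Estimate a global 2-D translation from noisy local motion vectors.

    local_motions: (N, 2) array of per-feature (dx, dy) estimates.
    Sorting each component and keeping the central band discards the
    outliers contributed by large moving objects before averaging.
    """
    lm = np.asarray(local_motions, dtype=float)
    n = len(lm)
    keep = int(n * inlier_fraction)
    lo = (n - keep) // 2
    global_motion = []
    for c in range(2):                    # x and y components
        s = np.sort(lm[:, c])             # sorted array of local motions
        global_motion.append(s[lo:lo + keep].mean())  # trimmed mean
    return np.array(global_motion)

# Example: 80 background vectors near (2, -1) plus 20 from a moving object
rng = np.random.default_rng(0)
background = rng.normal([2.0, -1.0], 0.3, size=(80, 2))
mover = rng.normal([8.0, 5.0], 0.5, size=(20, 2))
print(robust_global_motion(np.vstack([background, mover])))  # ~ [2, -1]
```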
Abstract:
In this paper, we consider a modified anomalous subdiffusion equation with a nonlinear source term, describing processes that become less anomalous as time progresses, obtained by including a second fractional time derivative acting on the diffusion term. A new implicit difference method is constructed. Stability and convergence are discussed using a new energy method. Finally, some numerical examples are given. The numerical results demonstrate the effectiveness of the theoretical analysis.
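The paper's scheme targets the modified equation with two fractional operators; as a simpler illustration of how an implicit difference method for time-fractional subdiffusion is assembled, the sketch below applies the standard L1 discretisation of a Caputo derivative to D_t^alpha u = K u_xx + f(u) with homogeneous boundary conditions. All parameter values, and the cubic source term, are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.linalg import solve_banded
from math import gamma

# Minimal sketch: L1 implicit scheme for D_t^alpha u = K u_xx + f(u) on
# [0, 1] with u = 0 at the ends. (The paper's modified equation has an
# extra fractional operator acting on the diffusion term; this simplified
# model only illustrates the implicit time-stepping structure.)
alpha, K = 0.7, 1.0
nx, nt, dt = 64, 200, 1e-3
dx = 1.0 / (nx + 1)
x = np.linspace(dx, 1 - dx, nx)

mu = gamma(2 - alpha) * dt**alpha
j = np.arange(nt + 1)
b = (j + 1)**(1 - alpha) - j**(1 - alpha)   # L1 weights, b_0 = 1 > b_1 > ...

# Banded matrix for (I - mu*K*Laplacian) u^n = rhs
r = mu * K / dx**2
ab = np.zeros((3, nx))
ab[0, 1:] = -r          # superdiagonal
ab[1, :] = 1 + 2 * r    # diagonal
ab[2, :-1] = -r         # subdiagonal

hist = [np.sin(np.pi * x)]                  # u^0, initial condition
for n in range(1, nt + 1):
    # memory term: sum_{j=1}^{n-1} b_j (u^{n-j} - u^{n-j-1})
    mem = sum(b[jj] * (hist[n - jj] - hist[n - jj - 1]) for jj in range(1, n))
    f = -hist[-1]**3    # example nonlinear source, lagged to keep each step linear
    rhs = hist[-1] - mem + mu * f
    hist.append(solve_banded((1, 1), ab, rhs))

print(hist[-1].max())
```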
Abstract:
Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc. The geometric and dosimetric accuracy of CTCombine’s output has been assessed by simulating simple and complex treatments applied to a rotated planar phantom and a rotated humanoid phantom and comparing the resulting virtual EPID images with the images acquired using experimental measurements and independent simulations of equivalent phantoms. It is expected that CTCombine will be useful for Monte Carlo studies of EPID dosimetry as well as other EPID imaging applications.
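CTCombine is purpose-built and its internals are not given in the abstract, but its two core operations, rotating a selected CT data volume and converting CT numbers to mass densities, can be sketched generically. The HU-to-density calibration points below are illustrative assumptions, not CTCombine's actual conversion ramp.

```python
import numpy as np
from scipy.ndimage import rotate

# Assumed piecewise-linear CT-number-to-density calibration (illustrative)
HU_POINTS      = np.array([-1000.0, 0.0, 1000.0, 3000.0])
DENSITY_POINTS = np.array([  0.001, 1.0,    1.6,    2.8])   # g/cm^3

def hu_to_density(ct_volume_hu):
    """Piecewise-linear CT-number (HU) to mass-density conversion."""
    return np.interp(ct_volume_hu, HU_POINTS, DENSITY_POINTS)

def rotate_ct_volume(ct_volume_hu, gantry_angle_deg):
    """Rotate the volume in the axial plane so the beam axis stays fixed.

    order=1 gives trilinear interpolation; voxels rotated in from outside
    the original volume are filled with air (-1000 HU).
    """
    return rotate(ct_volume_hu, angle=gantry_angle_deg, axes=(1, 2),
                  reshape=False, order=1, cval=-1000.0)

# Toy 3-D phantom: air with a water cube in the middle
ct = np.full((32, 64, 64), -1000.0)
ct[:, 24:40, 24:40] = 0.0
density = hu_to_density(rotate_ct_volume(ct, gantry_angle_deg=30.0))
print(density.shape, density.max())
```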
Abstract:
Over the last few years various research groups around the world have employed X-ray Computed Tomography (CT) imaging in the study of mummies – Toronto-Boston (1,2), Manchester (3). Prior to the development of CT scanners, plain X-rays were used in the investigation of mummies. Xeroradiography has also been employed (4). In a xeroradiograph, objects of similar X-ray density (very difficult to see on a conventional X-ray) appear edge-enhanced and so are seen much more clearly. CT scanners became available in the early 1970s. A CT scanner produces cross-sectional X-ray images of objects. On a conventional X-radiograph, individual structures are often very difficult to see because all the structures lying in the path of the X-ray beam are superimposed, a problem that does not occur with CT. Another advantage of CT is that the information in a series of consecutive images may be combined to produce a three-dimensional reconstruction of an object. Slices of different thickness and magnification may be chosen. Why CT a mummy? Prior to the availability of CT scanners, the only way of finding out about the inside of a mummy in any detail was to unwrap and dissect it. This has been done by various research groups – most notably the Manchester, UK and Pennsylvania University, USA mummy projects (5,6). Unwrapping a mummy and carrying out an autopsy is obviously very destructive. CT studies offer the possibility of producing far more information than is possible from plain X-rays and are able to show the undisturbed arrangement of the wrapped body. CT is also able to provide information about the internal structure of bones, organ packs, etc. that would not be obtainable without sawing through them. The mummy we have scanned is encased in a coffin which would have had to be broken open in order to remove the body.
Abstract:
In children, joint hypermobility (typified by structural instability of joints) manifests clinically as neuro-muscular and musculo-skeletal conditions and conditions associated with the development and organization of control of posture and gait (Finkelstein, 1916; Jahss, 1919; Sobel, 1926; Larsson, Mudholkar, Baum and Srivastava, 1995; Murray and Woo, 2001; Hakim and Grahame, 2003; Adib, Davies, Grahame, Woo and Murray, 2005). The process of controlling the relative proportions of joint mobility and stability, whilst maintaining equilibrium in standing posture and gait, depends upon the complex interrelationship between skeletal, muscular and neurological function (Massion, 1998; Gurfinkel, Ivanenko, Levik and Babakova, 1995; Shumway-Cook and Woollacott, 1995). Its efficiency relies upon the integrity of the neuro-muscular and musculo-skeletal components (ligaments, muscles, nerves), the Central Nervous System's capacity to interpret, process and integrate sensory information from visual, vestibular and proprioceptive sources (Crotts, Thompson, Nahom, Ryan and Newton, 1996; Riemann, Guskiewicz and Shields, 1999; Schmitz and Arnold, 1998), and the development and incorporation of this information into a representational scheme (postural reference frame) of body orientation with respect to the internal and external environments (Gurfinkel et al., 1995; Roll and Roll, 1988). Sensory information from the base of support (the feet) makes a significant contribution to the development of reference frameworks (Kavounoudias, Roll and Roll, 1998). Problems with the structure and/or function of any one, or a combination, of these components or systems may result in partial loss of equilibrium and, therefore, ineffectiveness of, or significant reduction in, the capacity to interact with the environment, which may result in disability and/or injury (Crotts et al., 1996; Rozzi, Lephart, Sterner and Kuligowski, 1999b). Whilst literature focusing upon clinical associations between joint hypermobility and conditions requiring therapeutic intervention has been abundant (Crego and Ford, 1952; Powell and Cantab, 1983; Dockery, in Jay, 1999; Grahame, 1971; Childs, 1986; Barton, Bird, Lindsay, Newton and Wright, 1995a; Rozzi et al., 1999b; Kerr, Macmillan, Uttley and Luqmani, 2000; Grahame, 2001), there has been a deficit of controlled studies in which the neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility have been quantified and considered within the context of the organization of postural control in standing balance and gait. This was the aim of this project, undertaken as three studies. The major study (Study One) compared the fundamental neuro-muscular and musculo-skeletal characteristics of 15 children with joint hypermobility and 15 age- (8 and 9 years), gender-, height- and weight-matched non-hypermobile controls. Significant differences were identified between the previously undiagnosed hypermobile (n=15) and non-hypermobile children (n=15) in passive joint ranges of motion of the lower limbs and lumbar spine, muscle tone of the lower leg and foot, barefoot CoP displacement and parameters of barefoot gait. Clinically relevant differences were also noted in barefoot single-leg balance time. There were no differences between the groups in isometric muscle strength in ankle dorsiflexion, knee flexion or knee extension.
The second comparative study investigated foot morphology under non-weight-bearing and weight-bearing load conditions in the same children with and without joint hypermobility, using three-dimensional images (plaster casts) of their feet. The preliminary phase of this study evaluated the casting technique against direct measures of foot length, forefoot width, RCSP and forefoot-to-rearfoot angle. Results indicated accurate representation of elementary foot morphology within the plaster images. The comparative study examined the between- and within-group differences in measures of foot length and width, and in measures above the support surface (heel inclination angle, forefoot-to-rearfoot angle, normalized arch height, height of the widest point of the heel), in the two load conditions. Results of measures from the plaster images identified that hypermobile children have different barefoot weight-bearing foot morphology above the support surface than non-hypermobile children, despite no differences in measures of foot length or width. Based upon the differences in components of control of posture and gait in the hypermobile group identified in Study One and Study Two, the final study (Study Three), using the same subjects, tested the immediate effect of specifically designed, custom-made foot orthoses upon the balance and gait of hypermobile children. The design of the orthoses was evaluated against the direct measures and the measures from the plaster images of the feet; this ascertained the differences in morphology between the modified casts used to mould the orthoses and the original images of the feet. The orthoses were fitted into standardized running shoes. The effect of the shoe alone was tested on the non-hypermobile children as the non-therapeutic equivalent condition. Immediate improvement in balance was noted in single-leg stance and CoP displacement in the hypermobile group, together with significant immediate improvement in the percentage of gait phases and in the percentage of the gait cycle at which maximum plantar flexion of the ankle occurred. The neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility are different from those of non-hypermobile children. The Beighton, Solomon and Soskolne (1973) screening criteria successfully classified joint hypermobility in children. As a result of this study, joint hypermobility has been identified as a variable which must be controlled in studies of foot morphology and function in children. The outcomes of this study provide a basis upon which to further explore the association between joint hypermobility and neuro-muscular and musculo-skeletal conditions, and have relevance for the physical education of children with joint hypermobility, for footwear and orthotic design processes and, in particular, for the clinical identification and treatment of children with joint hypermobility.
Corneal topography with Scheimpflug imaging and videokeratography: comparative study of normal eyes
Abstract:
PURPOSE: To compare the repeatability of anterior corneal topography measurements, and the agreement between measurements, obtained with the Pentacam HR rotating Scheimpflug camera and with a previously validated Placido disk-based videokeratoscope (Medmont E300).
SETTING: Contact Lens and Visual Optics Laboratory, School of Optometry, Queensland University of Technology, Brisbane, Queensland, Australia.
METHODS: Normal eyes of 101 young adult subjects had corneal topography measured using the Scheimpflug camera (6 repeated measurements) and the videokeratoscope (4 repeated measurements). The best-fitting axial power corneal spherocylinder was calculated and converted into power vectors. Corneal higher-order aberrations (HOAs) (up to the 8th Zernike order) were calculated using the corneal elevation data from each instrument.
RESULTS: Both instruments showed excellent repeatability for axial power spherocylinder measurements (repeatability coefficients <0.25 diopter; intraclass correlation coefficients >0.9) and good agreement for all power vectors. Agreement between the 2 instruments was closest when the mean of multiple measurements was used in the analysis. For corneal HOAs, both instruments showed reasonable repeatability for most aberration terms and good correlation and agreement for many aberrations (eg, spherical aberration, coma, higher-order root mean square). For other aberrations (eg, trefoil and tetrafoil), the 2 instruments showed relatively poor agreement.
CONCLUSIONS: For normal corneas, the Scheimpflug system showed excellent repeatability and reasonable agreement with a previously validated videokeratoscope for the anterior corneal axial curvature best-fitting spherocylinder and for several corneal HOAs. However, for certain aberrations with higher azimuthal frequencies, the Scheimpflug system had poor agreement with the videokeratoscope; thus, caution should be used when interpreting these corneal aberrations from the Scheimpflug system.
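The conversion of a best-fitting spherocylinder (sphere S, cylinder C, axis theta) into power vectors mentioned in METHODS is conventionally done with the Thibos transformation: M = S + C/2, J0 = -(C/2)cos(2*theta), J45 = -(C/2)sin(2*theta). A minimal sketch (the function name and example values are illustrative, not taken from the study):

```python
import numpy as np

def power_vectors(sphere, cyl, axis_deg):
    """Convert a spherocylinder (S, C, axis) to Thibos power vectors.

    M   : spherical equivalent power
    J0  : Jackson cross-cylinder, 0/90-degree meridians
    J45 : Jackson cross-cylinder, oblique meridians
    All powers in dioptres; negative-cylinder convention assumed.
    """
    theta = np.radians(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * np.cos(2 * theta)
    J45 = -(cyl / 2.0) * np.sin(2 * theta)
    return M, J0, J45

# Example: a 44.00 D / -1.50 D x 180 corneal spherocylinder
print(power_vectors(44.00, -1.50, 180))   # ~ (43.25, 0.75, 0.0)
```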
Abstract:
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this technique produces a very large amount of digital data, for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions traditionally used for modeling corneal surfaces do not necessarily represent given corneal surface data correctly in terms of optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error and the point spread function cross-correlation. The parameters of the approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform traditional Zernike polynomial approximations with the same number of coefficients.
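The abstract does not give the exact rational-function parameterisation, so the sketch below assumes a simple numerator/denominator form f = (sum a_i Z_i) / (1 + sum b_j Z_j) built from a few low-order Zernike terms and fitted with Levenberg-Marquardt nonlinear least squares, as the abstract describes. The basis size, the synthetic surface and the optimality criterion (rms error only, without the PSF cross-correlation term) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def zernike_basis(rho, theta):
    """A few low-order Zernike terms evaluated on the unit disk."""
    return np.stack([
        np.ones_like(rho),                           # piston
        2 * rho * np.cos(theta),                     # tilt x
        2 * rho * np.sin(theta),                     # tilt y
        np.sqrt(3) * (2 * rho**2 - 1),               # defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta),     # astigmatism
        np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),  # spherical aberration
    ])

def fit_rational_surface(rho, theta, z):
    Z = zernike_basis(rho, theta)       # shape (6, N)
    k = Z.shape[0]

    def residuals(p):
        num = p[:k] @ Z                 # numerator coefficients a_i
        den = 1.0 + p[k:] @ Z           # denominator coefficients b_j
        return num / den - z

    p0 = np.zeros(2 * k)
    # Levenberg-Marquardt nonlinear least squares, as in the paper
    return least_squares(residuals, p0, method='lm')

# Synthetic "corneal" surface on the unit disk
rng = np.random.default_rng(1)
rho = rng.uniform(0, 1, 2000)
theta = rng.uniform(0, 2 * np.pi, 2000)
z = 0.5 * rho**2 / (1.0 + 0.2 * rho**2) + 0.001 * rng.normal(size=rho.size)
fit = fit_rational_surface(rho, theta, z)
print(fit.cost, fit.x[:3])
```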
Abstract:
A major focus of research in nanotechnology is the development of novel, high-throughput techniques for the fabrication of arbitrarily shaped surface nanostructures at scales from sub-100 nm down to atomic. A related pursuit is the development of simple and efficient means for the parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces (adparticle manipulation). These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical or mechanical properties. A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though this has typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for the parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology. In this thesis, we propose such a technique: thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances the surface diffusion of adparticles so that they rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, in which the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of a sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate and heated by interfering laser beams (at optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation.
Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to this technique in which a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. Increased resolution is also predicted when the wavelength of the laser pulses is reduced. In addition, we present two distinctly different, computationally efficient numerical approaches for the theoretical investigation of the surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare the predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low-friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and describe its occurrence as due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
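As a toy illustration of the thermal-tweezers mechanism described above, the sketch below simulates overdamped (Langevin-like) diffusion of non-interacting adparticles on a surface with a sinusoidal temperature modulation and a thermally activated (Arrhenius) diffusion coefficient; particles diffuse rapidly out of the hot regions and accumulate in the cold ones. It is far simpler than the thesis's MCIM, RPWM, Fokker-Planck/FVM and Ermak approaches, and every parameter value is an illustrative assumption.

```python
import numpy as np

# Sinusoidal surface temperature T(x) = T0 + dT*sin(2*pi*x/L) and
# thermally activated hopping D(x) = D0 * exp(-Ea / (kB * T(x))).
# All parameters are dimensionless/illustrative assumptions.
kB = 8.617e-5          # eV/K
T0, dT = 300.0, 100.0  # mean temperature and modulation amplitude (K)
Ea = 0.2               # adparticle-surface activation energy (eV), assumed
D0 = 1.0               # diffusion prefactor (arbitrary units), assumed
L = 1.0                # modulation period (arbitrary units)
dt, nsteps = 0.1, 10000

def diffusion_coeff(x):
    T = T0 + dT * np.sin(2 * np.pi * x / L)
    return D0 * np.exp(-Ea / (kB * T))

rng = np.random.default_rng(42)
x = rng.uniform(0, L, 5000)                  # uniform initial coverage
for _ in range(nsteps):
    D = diffusion_coeff(x)
    # Overdamped random walk with position-dependent diffusivity;
    # hot regions (sin > 0) are rapidly vacated, cold regions trap.
    x = (x + np.sqrt(2 * D * dt) * rng.normal(size=x.size)) % L

# Fraction of adparticles that ended up in the cold half of the period
print(f"fraction in cold half: {(x > L / 2).mean():.2f}")
```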
Abstract:
Free-radical processes underpin the thermo-oxidative degradation of polyolefins. Thus, to extend the lifetime of these polymers, stabilizers are generally added during processing to scavenge the free radicals formed as the polymer degrades. Nitroxide radical precursors, such as hindered amine stabilizers (HAS) (1,2), are common polypropylene additives, as the nitroxide moiety is a potent scavenger of polymer alkyl radicals (R•). Oxidation of HAS by radicals formed during polypropylene degradation yields nitroxide radicals (R2NO•), which rapidly trap the polymer degradation species to produce alkoxyamines, thus retarding oxidative polymer degradation. This increase in polymer stability is demonstrated by a lengthening of the polymer’s “induction period” (the time prior to a sharp rise in the oxidation of the polymer). Instrumental techniques such as chemiluminescence or infrared spectroscopy are somewhat limited in detecting changes in the polymer during the initial stages of degradation. Therefore, other methods for observing polymer degradation have been sought, as the useful life of a polymer does not extend far beyond its “induction period”.
Abstract:
The international focus on embracing daylighting for energy-efficient lighting purposes, and the corporate sector’s indulgence in the perception of workplace and work-practice “transparency”, has spurred an increase in highly glazed commercial buildings. This in turn has renewed issues of visual comfort and daylight-derived glare for occupants. In order to ascertain evidence, or predict risk, of these events, appraisals of these complex visual environments require detailed information on the luminances present in an occupant’s field of view. Conventional luminance meters are an expensive and time-consuming method of achieving these results: to create a luminance map of an occupant’s visual field using such a meter requires too many individual measurements to be a practical measurement technique. The application of digital cameras as luminance measurement devices has solved this problem. With high dynamic range imaging, a single digital image can be created to provide luminances on a pixel-by-pixel level within the broad field of view afforded by a fish-eye lens: virtually replicating an occupant’s visual field and providing rapid yet detailed luminance information for the entire scene. With proper calibration, relatively inexpensive digital cameras can be successfully applied to the task of luminance measurement, placing them in the realm of tools that any lighting professional should own. This paper discusses how a digital camera can become a luminance measurement device and then presents an analysis of results obtained from post-occupancy measurements from building assessments conducted by the Mobile Architecture Built Environment Laboratory (MABEL) project. This discussion leads to the important realisation that the placement of such tools in the hands of lighting professionals internationally will provide new opportunities for the lighting community in terms of research on critical issues in lighting, such as daylight glare and visual quality and comfort.
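A minimal sketch of the two steps implied above: fusing a bracketed exposure sequence into a high dynamic range radiance map, then converting each pixel’s linear RGB to luminance. The hat-shaped weighting, the standard sRGB luminance weights and the CALIBRATION constant (which in practice would come from calibration against a reference luminance meter) are illustrative assumptions rather than the MABEL project’s procedure.

```python
import numpy as np

CALIBRATION = 179.0   # assumed scale factor (cd/m^2 per unit radiance)

def fuse_hdr(exposures, exposure_times):
    """Weighted average of linearised exposures, scaled by exposure time.

    exposures: list of (H, W, 3) float arrays in linear units [0, 1].
    Mid-range pixels are trusted most; near-black and near-saturated
    pixels get low weight (a simple hat-shaped weighting function).
    """
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)    # hat weight, peak at 0.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)       # relative radiance map

def luminance_map(hdr_rgb):
    """Pixel-by-pixel luminance from a linear-RGB HDR image."""
    r, g, b = hdr_rgb[..., 0], hdr_rgb[..., 1], hdr_rgb[..., 2]
    return CALIBRATION * (0.2126 * r + 0.7152 * g + 0.0722 * b)

# Toy example: three synthetic exposures of the same scene
rng = np.random.default_rng(7)
scene = rng.uniform(0.0, 2.0, (4, 4, 3))     # relative scene radiance
times = [1/15, 1/60, 1/250]
shots = [np.clip(scene * t * 15, 0, 1) for t in times]
print(luminance_map(fuse_hdr(shots, times)).round(1))
```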