971 results for linear measurements


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present the outcomes of a project exploring the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited to applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating-point VHDL library supporting addition, subtraction, multiplication and division. The variable-precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of the implementation, its limitations, and future work are also discussed.
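For reference, the forward-elimination/back-substitution structure of the Thomas Algorithm can be sketched in software; the sequential Python version below is a minimal sketch of the algorithm itself, not the pipelined, variable-precision VHDL design described in the paper:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tri-diagonal system via the Thomas Algorithm (TDMA).

    a: sub-diagonal (length n; a[0] is unused)
    b: main diagonal (length n)
    c: super-diagonal (length n; c[-1] is unused)
    d: right-hand side (length n)
    Assumes the system is diagonally dominant (TDMA performs no pivoting).
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each system depends only on its own coefficients, many independent solves of this form can proceed concurrently, which is what the pipelined FPGA circuit exploits.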

There is a growing interest in the use of megavoltage cone-beam computed tomography (MV CBCT) data for radiotherapy treatment planning. To calculate accurate dose distributions, knowledge of the electron density (ED) of the tissues being irradiated is required. In the case of MV CBCT, it is necessary to determine a calibration relating CT number to ED, utilizing the photon beam produced for MV CBCT. A number of different parameters can affect this calibration. This study was undertaken on the Siemens MV CBCT system, MVision, to evaluate the effect of the following parameters on the reconstructed CT pixel value to ED calibration: the number of monitor units (MUs) used (5, 8, 15 and 60 MUs), the image reconstruction filter (head and neck, and pelvis), the reconstruction matrix size (256 by 256 and 512 by 512), and the addition of extra solid water surrounding the ED phantom. A Gammex electron density CT phantom containing EDs from 0.292 to 1.707 was imaged under each of these conditions. A linear relationship between MV CBCT pixel value and ED was demonstrated for all MU settings and over the full range of EDs. Changes in MU number did not dramatically alter the MV CBCT ED calibration. The use of different reconstruction filters was found to affect the MV CBCT ED calibration, as was the addition of solid water surrounding the phantom. Dose distributions were calculated from simulated image data for a 15 MU head-and-neck reconstruction filter MV CBCT image, using both the matching ED calibration curve and one derived with a 15 MU pelvis reconstruction filter; the differences were small and clinically insignificant. Thus, the use of a single MV CBCT ED calibration curve is unlikely to result in any clinical differences. However, to ensure minimal uncertainties in dose reporting, MV CBCT ED calibrations could be carried out using parameter-specific measurements.
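The reported linear relationship between MV CBCT pixel value and ED amounts to a straight-line calibration fit. A minimal sketch, using illustrative pixel values rather than measured MVision data (only the phantom's ED range is taken from the text):

```python
import numpy as np

# Hypothetical calibration points: mean MV CBCT pixel value measured in each
# insert of an electron-density phantom, paired with the insert's known ED.
# Pixel values are invented for illustration; the ED range matches the phantom.
ed = np.array([0.292, 0.480, 0.985, 1.000, 1.280, 1.707])     # relative ED
pixel = np.array([-705.0, -520.0, -18.0, 0.0, 278.0, 705.0])  # pixel values

# Least-squares straight line: ED = slope * pixel + intercept.
slope, intercept = np.polyfit(pixel, ed, 1)

def pixel_to_ed(p):
    """Map a reconstructed pixel value to relative electron density."""
    return slope * p + intercept
```

A separate (slope, intercept) pair would be fitted per imaging condition when parameter-specific calibrations are required.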

In 1991, McNabb introduced the concept of mean action time (MAT) as a finite measure of the time required for a diffusive process to effectively reach steady state. Although this concept was initially adopted by others within the Australian and New Zealand applied mathematics community, it appears to have had little use outside this region until very recently, when in 2010 Berezhkovskii and coworkers rediscovered the concept of MAT in their study of morphogen gradient formation. All previous work in this area has been limited to studying single-species differential equations, such as the linear advection–diffusion–reaction equation. Here we generalise the concept of MAT by showing how the theory can be applied to coupled linear processes. We begin by studying coupled ordinary differential equations and extend our approach to coupled partial differential equations. Our new results have broad applications including the analysis of models describing coupled chemical decay and cell differentiation processes, amongst others.
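The MAT can be illustrated numerically in the simplest linear case. For the scalar decay model dc/dt = -k(c - c_inf), define F(t) = (c(t) - c_inf)/(c(0) - c_inf); the action time has density f(t) = -dF/dt, and integration by parts gives MAT = ∫F(t) dt = 1/k. The model and rate constant below are illustrative, not taken from the paper:

```python
import numpy as np

# Scalar linear decay model: dc/dt = -k (c - c_inf).
# F(t) = (c(t) - c_inf) / (c(0) - c_inf) = exp(-k t) is the remaining
# transient; MAT = integral of F over [0, inf), which equals 1/k here.
k = 2.0
t = np.linspace(0.0, 40.0 / k, 200_001)
F = np.exp(-k * t)

# Trapezoidal rule; the integrand beyond t = 40/k is negligible (~e^-40).
dt = t[1] - t[0]
mat = float(np.sum(0.5 * (F[:-1] + F[1:])) * dt)
```

Coupled problems replace the scalar transient F by components of a vector transient, which is the generalisation developed in the paper.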

Linear adaptive channel equalization using the least mean square (LMS) algorithm and the recursive least-squares (RLS) algorithm is proposed for an innovative multi-user (MU) MIMO-OFDM wireless broadband communications system. The proposed equalization method adaptively compensates for the channel impairments caused by frequency selectivity in the propagation environment. Simulations for the proposed adaptive equalizer are conducted using a training sequence method to determine optimal performance through a comparative analysis. Results show an improvement of 0.15 in BER (at an SNR of 16 dB) when using adaptive equalization with the RLS algorithm compared to the case in which no equalization is employed. In general, adaptive equalization using the LMS and RLS algorithms was shown to be significantly beneficial for MU-MIMO-OFDM systems.
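The LMS tap update at the core of such an equalizer is compact. The baseband sketch below assumes BPSK training symbols and a short illustrative frequency-selective channel; it is a single-user toy model, not the paper's MU-MIMO-OFDM simulator, and the tap count and step size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frequency-selective channel and BPSK training sequence (illustrative).
channel = np.array([1.0, 0.4, 0.2])
symbols = rng.choice([-1.0, 1.0], size=5000)
received = np.convolve(symbols, channel)[: len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))

# LMS adaptive linear equalizer trained on the known sequence.
n_taps, mu = 8, 0.01
w = np.zeros(n_taps)
for i in range(n_taps, len(symbols)):
    x = received[i - n_taps + 1 : i + 1][::-1]  # tap-delay-line input
    e = symbols[i] - w @ x                      # error against training symbol
    w += mu * e * x                             # LMS weight update
```

RLS replaces the scalar step `mu` with a recursively updated inverse correlation matrix, converging faster at a higher per-sample cost.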

Significant wheel-rail dynamic forces occur because of imperfections in the wheels and/or rail. One of the key responses to the transmission of these forces down through the track is the impact force on the sleepers. Dynamic analysis of nonlinear systems is very complicated and does not lend itself easily to a classical solution of multiple equations. Trying to deduce the behaviour of track components from experimental data is very difficult because such data are hard to obtain and apply only to the particular conditions of the track being tested. The finite element method can be the best solution to this dilemma. This paper describes a finite element model, built using the software package ANSYS, for various sized flat defects in the tread of a wheel rolling at a typical speed on heavy haul track. The paper explores the dynamic response of a prestressed concrete sleeper to these defects.

The R statistical environment and language has demonstrated particular strengths for the interactive development of statistical algorithms, as well as for data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty in comparison to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra, or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework, allowing the automatic pooling of several linear algebra operations into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear algebra centered algorithms from R to C++ becomes straightforward. The converted algorithms retain their overall structure, as well as readability, while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
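The algorithm used in the timing comparison has a simple core: a predict/update cycle. The scalar random-walk filter below sketches that cycle (the paper's implementations are in R and C++ with Armadillo; this is an illustrative stand-in, not the authors' code):

```python
import numpy as np

# Minimal linear Kalman filter for a scalar random walk observed in noise.
# Process noise q and measurement noise r are illustrative values.
def kalman_1d(z, q=1e-3, r=0.1):
    """Filter the measurement sequence z; returns the state estimates."""
    x, p = 0.0, 1.0            # initial state estimate and variance
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        p = p + q              # predict: random-walk state transition
        k = p / (p + r)        # Kalman gain
        x = x + k * (zi - x)   # update with the innovation
        p = (1.0 - k) * p
        out[i] = x
    return out
```

The per-sample loop is exactly the kind of interpreter-bound code where a compiled Rcpp/Armadillo version pays off.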

Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms.
Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
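The plan-combination step described above (scaling each beam's Monte Carlo dose grid by its planned monitor units and summing) is simple to state in code; the grid sizes, dose-per-MU values and MU numbers below are illustrative, not outputs of MCDTK:

```python
import numpy as np

# Per-beam Monte Carlo dose grids, assumed normalised to dose per monitor unit.
beam_doses = [np.full((4, 4, 4), 1.0e-3), np.full((4, 4, 4), 2.0e-3)]  # Gy/MU
monitor_units = [120.0, 80.0]  # MUs taken from the exported plan

# Combine according to the monitor units specified in the plan.
total_dose = sum(mu * dose for mu, dose in zip(monitor_units, beam_doses))
```

The resulting 3D grid is the distribution that would be compared against the treatment planning system calculation.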

This chapter presents the analytical solution of a two-dimensional linear stretching sheet problem involving a non-Newtonian liquid and suction, by (a) invoking the boundary layer approximation and (b) using this result to solve the stretching sheet problem without the boundary layer approximation. The basic boundary layer equations for momentum, which are non-linear partial differential equations, are converted into non-linear ordinary differential equations by means of a similarity transformation. The results reveal a new analytical procedure for solving the boundary layer equations arising in a linear stretching sheet problem involving a non-Newtonian liquid (Walters’ liquid B). The present study throws light on the analytical solution of a class of boundary layer equations arising in the stretching sheet problem.
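The similarity transformation referred to above can be illustrated in the Newtonian limit (the Walters' liquid B case carries additional viscoelastic terms; the form below is the classical reduction, shown only to fix ideas). With stretching rate c and kinematic viscosity ν:

```latex
% Classical similarity variables for the linear stretching sheet (Newtonian limit):
\eta = y\sqrt{\tfrac{c}{\nu}}, \qquad u = c\,x\,f'(\eta), \qquad v = -\sqrt{c\,\nu}\,f(\eta),
% which reduce the momentum equation  u\,u_x + v\,u_y = \nu\,u_{yy}  to
f''' + f\,f'' - \left(f'\right)^{2} = 0, \qquad f'(0)=1, \quad f'(\infty)=0,
% with f(0) fixed by the suction velocity at the sheet.
```

The boundary layer PDEs thus collapse to a single ODE in f(η), which is the starting point for the analytical procedure.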

None of the currently used tonometers produces estimated IOP values that are free of errors. Measurement unreliability arises from the indirect measurement of corneal deformation and from the fact that pressure calculations are based on population-averaged parameters of the anterior segment. Reliable IOP values are crucial for the understanding and monitoring of a number of eye pathologies, e.g. glaucoma. We have combined high-speed swept source OCT with an air-puff chamber. The system provides direct measurement of the deformation of the cornea and the anterior surface of the lens. This paper describes in detail the performance of the air-puff ssOCT instrument. We present different approaches to data presentation and analysis. Changes in deformation amplitude appear to be a good indicator of IOP changes. However, it seems that in order to provide accurate intraocular pressure values, additional information on corneal biomechanics is necessary. We believe that such information could be extracted from the data provided by the air-puff ssOCT.

Oxidative stress caused by the generation of free radicals and related reactive oxygen species (ROS) at the sites of deposition has been proposed as a mechanism for many of the adverse health outcomes associated with exposure to particulate matter (PM). Recently, a new profluorescent nitroxide molecular probe (BPEAnit) developed at QUT was applied in an entirely novel, rapid, non-cell-based assay for assessing the oxidative potential of particles (i.e. the potential of particles to induce oxidative stress). The technique was applied to particles produced by several combustion sources, namely cigarette smoke, diesel exhaust and wood smoke. One of the main findings from the initial studies undertaken at QUT was that the oxidative potential per PM mass varies significantly across combustion sources, as well as with the type of fuel used and the combustion conditions. However, possibly the most important finding from our studies was a strong correlation between the organic fraction of particles and the oxidative potential measured by the PFN assay, which clearly highlights the importance of organic species in particle-induced toxicity.

Purpose. The purpose of this article was to present methods capable of estimating the size and shape of the human eye lens without resorting to phakometry or magnetic resonance imaging (MRI). Methods. Previously published biometry and phakometry data of 66 emmetropic eyes of 66 subjects (age range [18, 63] years, spherical equivalent range [−0.75, +0.75] D) were used to define multiple linear regressions for the radii of curvature and thickness of the lens, from which the lens refractive index could be derived. MRI biometry was also available for a subset of 30 subjects, from which regressions could be determined for the vertex radii of curvature, conic constants, equatorial diameter, volume, and surface area. All regressions were compared with the phakometry and MRI data; the radii of curvature regressions were also compared with a method proposed by Bennett and Royston et al. Results. The regressions were in good agreement with the original measurements. This was especially the case for the regressions of lens thickness, volume, and surface area, which each had an R2 > 0.6. The regression for the posterior radius of curvature had an R2 < 0.2, making this regression unreliable. For all other regressions we found 0.25 < R2 < 0.6. The Bennett-Royston method also produced a good estimation of the radii of curvature, provided its parameters were adjusted appropriately. Conclusions. The regressions presented in this article offer a valuable alternative when no measured lens biometry values are available; however, care must be taken for possible outliers.
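A multiple linear regression of the kind used here is a one-line least-squares fit. The sketch below regresses a lens parameter on age and spherical equivalent using synthetic data; the coefficients and noise level are invented for illustration and are not the regressions published in the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the predictors (age in years, spherical equivalent
# in D) and an outcome (lens thickness in mm); illustrative values only.
n = 66
age = rng.uniform(18, 63, n)
sph_eq = rng.uniform(-0.75, 0.75, n)
thickness = 2.93 + 0.024 * age + 0.05 * sph_eq + 0.05 * rng.standard_normal(n)

# Multiple linear regression: thickness ~ 1 + age + sph_eq.
X = np.column_stack([np.ones(n), age, sph_eq])
coef, *_ = np.linalg.lstsq(X, thickness, rcond=None)

# Coefficient of determination, R^2, as reported per regression in the paper.
pred = X @ coef
r2 = 1 - np.sum((thickness - pred) ** 2) / np.sum(
    (thickness - thickness.mean()) ** 2
)
```

Each published regression corresponds to one such fit with its own predictors and its own R2.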

To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low dimensional face appearance manifolds. The choice of representations of the local models is crucial. Most of the existing methods learn each local model individually meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using the heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only the within-class variations are estimated during the training, the separability between classes is also maximized leading to an improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method which achieves better performance than the state-of-the-art technique.
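In the recognition phase described above, matching a probe frame against a Gaussian local model reduces to a point-to-model distance; a Mahalanobis distance is one standard choice (the paper's exact distance and fusion rule are not reproduced here, and the values below are illustrative rather than PLDA-learned):

```python
import numpy as np

# Point-to-model distance for a Gaussian local model: the Mahalanobis
# distance of a probe feature vector from the model's mean under the
# model's (possibly class-specific) within-class covariance.
def mahalanobis(x, mean, cov):
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

Frame-wise distances to each gallery model can then be fused (e.g. averaged) to score a probe video against that gallery identity.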

Background Individual exposure to ultraviolet radiation (UVR) is challenging to measure, particularly for diseases with substantial latency periods between first exposure and diagnosis of outcome, such as cancer. To guide the choice of surrogates for long-term UVR exposure in epidemiologic studies, we assessed how well stable sun-related individual characteristics and environmental/meteorological factors predicted daily personal UVR exposure measurements. Methods We evaluated 123 United States Radiologic Technologists subjects who wore personal UVR dosimeters for 8 hours daily for up to 7 days (N = 837 days). Potential predictors of personal UVR derived from a self-administered questionnaire, and public databases that provided daily estimates of ambient UVR and weather conditions. Factors potentially related to personal UVR exposure were tested individually and in a model including all significant variables. Results The strongest predictors of daily personal UVR exposure in the full model were ambient UVR, latitude, daily rainfall, and skin reaction to prolonged sunlight (R2 = 0.30). In a model containing only environmental and meteorological variables, ambient UVR, latitude, and daily rainfall were the strongest predictors of daily personal UVR exposure (R2 = 0.25). Conclusions In the absence of feasible measures of individual longitudinal sun exposure history, stable personal characteristics, ambient UVR, and weather parameters may help estimate long-term personal UVR exposure.

This paper presents an Image Based Visual Servo control design for Fixed Wing Unmanned Aerial Vehicles tracking locally linear infrastructure in the presence of wind, using a body-fixed imaging sensor. Visual servoing offers improved data collection by posing the tracking task as one of controlling a feature as viewed by the inspection sensor, although it is complicated by the introduction of wind, as aircraft heading and course angle no longer align. In this work it is shown that, under wind, the desired line angle required for continuous tracking becomes equal to the wind correction angle that would be calculated to set a desired course. A control solution is then sought by linearizing the interaction matrix about the new feature pose, such that the kinematics of the feature can be augmented with the lateral dynamics of the aircraft, from which a state feedback control design is developed. Simulation results are presented comparing no compensation, integral control and the proposed controller using the wind correction angle, followed by an assessment of the response to atmospheric disturbances in the form of turbulence and wind gusts.
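The wind correction angle referred to above follows from simple velocity-triangle kinematics: the cross-track component of the wind must be cancelled by the cross-track component of the airspeed vector. A minimal sketch (function name and sign conventions are illustrative; this is not the paper's controller):

```python
import math

# Wind correction (crab) angle for a fixed-wing aircraft holding a ground
# course: offset the heading so the cross-track wind component is cancelled.
def wind_correction_angle(airspeed, wind_speed, wind_from_deg, course_deg):
    """Angle (deg) by which heading must be offset from the desired course."""
    # Angle between the wind direction and the desired course.
    rel = math.radians(wind_from_deg - course_deg)
    # airspeed * sin(wca) must match the cross-track wind component.
    return math.degrees(math.asin(wind_speed * math.sin(rel) / airspeed))
```

For example, a direct crosswind of a quarter of the airspeed requires a crab angle of about 14.5 degrees.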