965 results for Enthalpy calibration
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements that aim to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID and to assess the adequacy of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system, simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of SPR in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, and changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. In this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position.
Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using a (notoriously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to the simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
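As a rough illustration of the zero-field-size extrapolation underlying the Swindell and Evans approach, the sketch below fits transmission against field area and extrapolates to zero area to isolate the primary-only transmission. This is a minimal sketch with assumed, illustrative numbers, not the study's code; the linear scatter-versus-area model used here is exactly the theoretical assumption the abstract examines.

```python
import numpy as np

# Hypothetical transmissions (EPID signal with phantom / signal without
# phantom) measured through one phantom thickness at several field sizes.
field_side_cm = np.array([5.0, 10.0, 15.0, 20.0])
transmission = np.array([0.452, 0.468, 0.483, 0.499])   # illustrative values

# Assume scatter grows linearly with irradiated area, T(A) = T_primary + k*A,
# and extrapolate to zero area to recover the primary-only transmission.
area = field_side_cm ** 2
k, T_primary = np.polyfit(area, transmission, 1)

# The scatter-to-primary ratio at each field size then follows directly.
spr = transmission / T_primary - 1.0
for side, s in zip(field_side_cm, spr):
    print(f"{side:4.1f} cm field: SPR = {s:.3f}")
```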
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any discrepancies between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained either by DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and can interpolate between dose grids for comparison. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
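Among the comparison metrics, the gamma evaluation combines a dose-difference criterion with a distance-to-agreement criterion. The following is a minimal brute-force sketch of a global 2-D gamma on a common grid, not the resolution-independent, interpolating implementation described above; the 3%/3 mm tolerances and the synthetic grids are assumptions for illustration.

```python
import numpy as np

def gamma_index(ref, evl, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global gamma for two 2-D dose grids on the same lattice."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    norm = dose_tol * ref.max()                  # global dose normalisation
    gamma = np.empty_like(ref, dtype=float)
    for j in range(ny):                          # loop over reference points
        for i in range(nx):
            # squared distance (mm^2) and squared dose difference to every
            # point of the evaluated distribution
            dist2 = ((yy - j) ** 2 + (xx - i) ** 2) * spacing_mm ** 2
            dose2 = (evl - ref[j, i]) ** 2
            gamma[j, i] = np.sqrt(dist2 / dist_tol_mm ** 2
                                  + dose2 / norm ** 2).min()
    return gamma

# Illustrative use on small synthetic grids:
ref = np.random.default_rng(0).random((20, 20))
evl = ref + 0.01
print((gamma_index(ref, evl, spacing_mm=2.0) <= 1.0).mean())  # pass rate
```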
Abstract:
Introduction: The motivation for developing megavoltage (and kilovoltage) cone beam CT (MV CBCT) capabilities in the radiotherapy treatment room was primarily based on the need to improve patient set-up accuracy. There has recently been an interest in using the cone beam CT data for treatment planning. Accurate treatment planning, however, requires knowledge of the electron density of the tissues receiving radiation in order to calculate dose distributions. This is obtained from CT, utilising a conversion between CT number and the electron density of various tissues. The use of MV CBCT has particular advantages, compared to treatment planning with kilovoltage CT, in the presence of high atomic number materials, and requires the conversion of pixel values from the image sets to electron density. Therefore, a study was undertaken to characterise the pixel value to electron density (ED) relationship for the Siemens MV CBCT system, MVision, and to determine the effect, if any, of varying the number of monitor units used for acquisition. If a significant difference with the number of monitor units were seen, then pixel value to ED conversions might be required for each of the clinical settings. The calibration of the MV CBCT images for electron density offers the possibility of a daily recalculation of the dose distribution and the introduction of new adaptive radiotherapy treatment strategies. Methods: A Gammex Electron Density CT Phantom was imaged with the MV CBCT system. The pixel value for each of the sixteen inserts, whose electron densities relative to the background solid water ranged from 0.292 to 1.707, was determined by taking the mean value from within a region of interest centred on the insert, over 5 slices within the centre of the phantom. These results were averaged and plotted against the relative electron densities of each insert, and a linear least squares fit was performed. This procedure was performed for images acquired with 5, 8, 15 and 60 monitor units. Results: A linear relationship between MV CBCT pixel value and ED was demonstrated for all monitor unit settings and over the range of electron densities. The number of monitor units utilised was found to have no significant impact on this relationship. Discussion: It was found that the number of MU utilised does not significantly alter the pixel value obtained for different ED materials. However, to ensure the most accurate and reproducible pixel value to ED calibration, one MU setting should be chosen and used routinely. To ensure accuracy for the clinical situation, this MU setting should correspond to that which is used clinically. If more than one MU setting is used clinically, then an average of the CT values acquired with different numbers of MU could be utilised without loss in accuracy. Conclusions: No significant differences have been shown between the pixel value to ED conversions for the Siemens MV CBCT unit with changes in monitor units. Thus a single conversion curve could be utilised for MV CBCT treatment planning. To fully utilise MV CBCT imaging for radiotherapy treatment planning, further work will be undertaken to ensure all corrections have been made and dose calculations verified. These dose calculations may be either for treatment planning purposes or for reconstructing the delivered dose distribution from transit dosimetry measurements made using electronic portal imaging devices.
This would potentially allow the cumulative dose distribution to be determined over the course of the patient's multi-fraction treatment, and adaptive treatment strategies to be developed to optimise the tumour response.
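A minimal sketch of the calibration step described above, a linear least squares fit of mean insert pixel value against relative electron density and its inversion for use in planning, might look as follows. The pixel values and densities below are illustrative placeholders, not measured MVision data.

```python
import numpy as np

# Hypothetical mean pixel values for phantom inserts at one MU setting,
# paired with nominal relative electron densities (RED). Illustrative only.
red         = np.array([0.292, 0.480, 0.950, 1.000, 1.280, 1.707])
pixel_value = np.array([-712., -520.,  -48.,    0.,  278.,  705.])

# Linear least squares fit of pixel value against RED, as in the abstract.
slope, intercept = np.polyfit(red, pixel_value, 1)

def pixel_to_red(pv):
    """Invert the calibration: map a measured pixel value back to RED."""
    return (pv - intercept) / slope

print(pixel_to_red(150.0))
```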
Abstract:
We have developed a new method of calibrating portal images of IMRT beams and used it to measure patient set-up accuracy and delivery errors, such as leaf errors and segment intensity errors, during treatment. A calibration technique was used to remove the intensity modulations from the images, leaving equivalent open-field images that show patient anatomy and can be used for verification of the patient position. The images of the treatment beam can also be used to verify the delivery of the beam in terms of multileaf collimator leaf position and dosimetric errors. A series of controlled experiments delivering an IMRT anterior beam to the head and neck of a humanoid phantom was undertaken. A 2 mm translation in the position of the phantom could be detected. With the intentional introduction of delivery errors into the beam, this method allowed us to detect leaf positioning errors of 2 mm and variations in monitor units of 1%. The method was then applied to the case of a patient who received IMRT treatment to the larynx and cervical nodes. The anterior IMRT beam was imaged during four fractions, and the images were calibrated and examined for the characteristic signs of patient position error and delivery error that were demonstrated in the control experiments. No significant errors were seen. The method of imaging the IMRT beam and calibrating the images to remove the intensity modulations can be a useful tool in verifying both the patient position and the delivery of the beam.
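The core calibration idea, dividing the measured portal image by the planned intensity modulation to leave an equivalent open-field image of the anatomy, can be sketched as below. This is a schematic illustration only; the actual calibration technique presumably also handles scatter and detector response, which are omitted here.

```python
import numpy as np

def equivalent_open_field(portal_image, modulation_map, eps=1e-3):
    """Divide out the planned intensity modulation to recover an
    open-field-like image in which patient anatomy is visible.

    modulation_map: planned relative fluence on the image grid (1.0 = open).
    Pixels with near-zero planned fluence are masked rather than divided.
    """
    open_field = portal_image / np.clip(modulation_map, eps, None)
    open_field[modulation_map < eps] = np.nan    # unreliable: mask out
    return open_field

# Tiny demonstration with synthetic data:
img = np.full((4, 4), 0.5)
mod = np.array([[1.0, 0.5, 0.5, 0.0]] * 4)
print(equivalent_open_field(img, mod))
```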
Abstract:
Stereo visual odometry has received little investigation in high altitude applications due to the generally poor performance of rigid stereo rigs at extremely small baseline-to-depth ratios. Without additional sensing, metric scale is considered lost and odometry is seen as effective only for monocular perspectives. This paper presents a novel modification to stereo-based visual odometry that allows accurate, metric pose estimation from high altitudes, even in the presence of poor calibration and without additional sensor inputs. By relaxing the (typically fixed) stereo transform during bundle adjustment and reducing the dependence on the fixed geometry for triangulation, metrically scaled visual odometry can be obtained in situations where high altitude and structural deformation from vibration would cause traditional algorithms to fail. This is achieved through the use of a novel constrained bundle adjustment routine and an accurately scaled pose initializer. We present visual odometry results demonstrating the technique on a short-baseline stereo pair inside a fixed-wing UAV flying at significant height (~30–100 m).
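A toy sketch of the relaxation idea follows: instead of fixing the stereo transform, the baseline enters the optimisation as a variable tied to its nominal calibrated value by a soft prior. The single-landmark setup, rectified projection model and all numbers are illustrative assumptions, not the paper's constrained bundle adjustment routine.

```python
import numpy as np
from scipy.optimize import least_squares

f = 800.0                         # focal length in pixels (assumed)
b_nominal, b_sigma = 0.30, 0.02   # nominal baseline (m) and allowed drift

def project(point, baseline):
    """Project a 3-D point into a rectified stereo pair (left cam at origin)."""
    x, y, z = point
    uL = f * x / z
    uR = f * (x - baseline) / z
    v = f * y / z
    return np.array([uL, v, uR, v])

# Synthetic observation generated with a slightly deformed baseline plus noise.
obs = project(np.array([1.0, 0.5, 40.0]), 0.28) + 0.1

def residuals(params):
    point, baseline = params[:3], params[3]
    r = project(point, baseline) - obs          # reprojection residuals
    prior = (baseline - b_nominal) / b_sigma    # soft constraint on baseline
    return np.append(r, prior)

x0 = np.array([1.0, 0.5, 30.0, b_nominal])
sol = least_squares(residuals, x0)
print(sol.x)   # recovered point and the relaxed baseline
```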
Abstract:
It is exciting to be living at a time when the big questions in biology can be investigated using modern genetics and computing [1]. Bauzà-Ribot et al. [2] take on one of the fundamental drivers of biodiversity, the effect of continental drift in the formation of the world’s biota [3, 4], employing next-generation sequencing of whole mitochondrial genomes and modern Bayesian relaxed molecular clock analysis. Bauzà-Ribot et al. [2] conclude that vicariance via plate tectonics best explains the genetic divergence between subterranean metacrangonyctid amphipods currently found on islands separated by the Atlantic Ocean. This finding is a big deal in biogeography, and science generally [3], as many other presumed biotic tectonic divergences have been explained as probably due to more recent transoceanic dispersal events [4]. However, molecular clocks can be problematic [5, 6], and we have identified three issues with the analyses of Bauzà-Ribot et al. [2] that cast serious doubt on their results and conclusions. When we reanalyzed their mitochondrial data and attempted to account for problems with calibration [5, 6], modeling rates across branches [5, 7] and substitution saturation [5], we inferred a much younger date for their key node. This implies either a later trans-Atlantic dispersal of these crustaceans or, more likely, a series of later invasions of freshwaters from a common marine ancestor; either way, probably not ancient tectonic plate movements.
Abstract:
A high-performance liquid chromatography method coupled with solid phase extraction was developed for the determination of isofraxidin in rat plasma after oral administration of Acanthopanax senticosus extract (ASE), and the pharmacokinetic parameters of isofraxidin, administered either in ASE or as the pure compound, were measured. The HPLC analysis was performed on a Dikma Diamonsil RP18 column (4.6 mm x 150 mm, 5 µm) with isocratic elution of solvent A (acetonitrile) and solvent B (0.1% aqueous phosphoric acid, v/v) (A:B = 22:78), and the detection wavelength was set at 343 nm. The calibration curve was linear over the range of 0.156-15.625 µg/mL. The limit of detection was 60 ng/mL. The intra-day precision was 5.8%, and the inter-day precision was 6.0%. The recovery was 87.30 ± 1.73%. When the dosage of ASE was equal to that of the pure compound, calculated by the amount of isofraxidin, the extract was found to produce two concentration maxima in plasma, while the pure compound showed only one peak in the plasma concentration-time curve. The content of isofraxidin determined in plasma after oral administration of ASE is the total content of free isofraxidin and its precursors present in ASE in vitro. The pharmacokinetic characteristics of ASE showed the superiority of the extract, consistent with the properties of traditional Chinese medicine.
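The back-calculation of plasma concentrations from a linear calibration curve, common to assays like this one, can be sketched as follows; the standard concentrations span the reported linear range, but the peak areas are synthetic placeholders rather than measured isofraxidin data.

```python
import numpy as np

# Calibration standards across the reported linear range (microgram/mL) and
# synthetic peak areas following an assumed linear response with noise.
conc_std = np.array([0.156, 0.3125, 0.625, 1.25, 2.5, 5.0, 15.625])
peak_area = conc_std * 1.8e4 + 120.0 \
    + np.random.default_rng(1).normal(0, 150, conc_std.size)

# Fit the calibration curve: area = slope * concentration + intercept.
slope, intercept = np.polyfit(conc_std, peak_area, 1)

def area_to_conc(area):
    """Back-calculate a sample concentration from its measured peak area."""
    return (area - intercept) / slope

print(area_to_conc(5.0e4))   # concentration of an unknown plasma sample
```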
Abstract:
A method for the rapid and simultaneous determination of 6,7-dimethylesculetin (CAS 120-08-1) and geniposide (CAS 24512-63-8) in rat plasma has been developed, using validated high performance liquid chromatography (HPLC) with solid phase extraction (SPE). The HPLC analysis was performed on a commercially available column (200 mm x 4.6 mm, 5 µm) with acetonitrile-methanol-0.1% aqueous formic acid as the mobile phase and UV detection at 343 nm and 238 nm for 6,7-dimethylesculetin and geniposide, respectively. The calibration curves for 6,7-dimethylesculetin and geniposide were linear over the ranges 0.4-25.6 µg/mL and 1.12-71.68 µg/mL, respectively. The lower limits of quantitation were 0.40 µg/mL and 1.12 µg/mL, and the lower limits of detection were 0.06 µg/mL and 0.09 µg/mL, respectively. The intra-day and inter-day precisions for 6,7-dimethylesculetin and geniposide were < 5%, whereas the absolute recoveries were > 74%. A successful application of the developed HPLC analysis was demonstrated in a pharmacokinetic study of a traditional Chinese medicine formula, the Yin Chen Hao Tang preparation.
Abstract:
A high performance liquid chromatography (HPLC) method coupled with solid phase extraction was developed for determining cimifugin (a coumarin derivative and one of the constituents of Saposhnikovia divaricata) in rat plasma after oral administration of Saposhnikovia divaricata extract (SDE), and the pharmacokinetics of cimifugin, administered either in SDE or as a single compound, was investigated. The HPLC analysis was performed on a commercially available column (4.6 mm x 200 mm, 5 µm) with isocratic elution of solvent A (methanol) and solvent B (water) (A:B = 60:40), and the detection wavelength was set at 250 nm. The calibration curve was linear over the range of 0.100-10.040 µg/mL. The limit of detection was 30 ng/mL. At rat plasma concentrations of 0.402, 4.016 and 10.040 µg/mL, the intra-day precision was 6.21%, 3.98% and 2.23%, and the inter-day precision was 7.59%, 4.26% and 2.09%, respectively. The absolute recovery was 76.58%, 76.61% and 77.67%, respectively. When the dosage of SDE was equal to that of the pure compound, calculated by the amount of cimifugin, the extract was found to produce two maxima in the plasma concentration-time curve, while the pure compound showed only one peak. The pharmacokinetic characteristics of SDE showed the superiority of the extract and the properties of traditional Chinese medicine.
Abstract:
Results of an interlaboratory comparison on the size characterization of SiO2 airborne nanoparticles using on-line and off-line measurement techniques are discussed. This study was performed in the framework of Technical Working Area (TWA) 34, “Properties of Nanoparticle Populations”, of the Versailles Project on Advanced Materials and Standards (VAMAS), in project no. 3, “Techniques for characterizing size distribution of airborne nanoparticles”. Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters between, respectively, 36.2-46.6 nm and 80.2-89.8 nm, were generated for the characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of the size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy and atomic force microscopy were used as off-line measurement techniques for nanoparticle characterization. Samples were deposited on appropriate supports such as grids, filters and mica plates by electrostatic precipitation and a filtration technique, with SMPS-controlled generation upstream. The results for the main size distribution parameters (mean and mode diameters), obtained from several laboratories, were compared on the basis of metrological approaches, including metrological traceability, calibration and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for airborne SiO2 nanoparticle characterization are proposed.
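One common metrological approach to combining laboratories' results into a comparison reference value is an inverse-variance weighted mean, sketched below. The abstract does not state which consensus estimator was used in this study, and the diameters and uncertainties shown are illustrative.

```python
import numpy as np

# Illustrative mode diameters reported by four laboratories, with their
# standard uncertainties (nm). Not data from the VAMAS comparison.
diam_nm = np.array([36.2, 38.5, 37.1, 39.0])
u_nm    = np.array([1.2, 0.8, 1.5, 1.0])

# Inverse-variance weighting: more certain results count for more.
w = 1.0 / u_nm ** 2
ref = np.sum(w * diam_nm) / np.sum(w)      # weighted-mean reference value
u_ref = 1.0 / np.sqrt(np.sum(w))           # standard uncertainty of the mean

print(f"reference value: {ref:.1f} nm +/- {u_ref:.1f} nm")
```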
Abstract:
Dwell time at a busway station has a significant effect on bus capacity and delay. Dwell time has conventionally been estimated using models developed on the basis of field survey data. However, field surveys are resource- and cost-intensive, so dwell time estimation based on limited observations can be somewhat inaccurate. Most public transport systems are now equipped with Automatic Passenger Count (APC) and/or Automatic Fare Collection (AFC) systems. AFC in particular reduces on-board ticketing time and drivers' workload, and ultimately reduces bus dwell time. AFC systems can record all passenger transactions, providing transit agencies with access to vast quantities of data. AFC data provide transaction timestamps; however, this information differs from dwell time because passengers may tag on or tag off at times other than when doors open and close. This research effort contended that models could be developed to reliably estimate dwell time distributions when measured distributions of transaction times are known. Development of the models required calibration and validation using field survey data of actual dwell times, and an appreciation of another component of transaction time: bus time in queue. This research develops models for a peak period and an off-peak period at a busway station on the South East Busway (SEB) in Brisbane, Australia.
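A minimal sketch of the calibration idea, relating the AFC transaction window at a stop to surveyed dwell time with a simple fitted model, might look like this. The linear form and all data below are illustrative assumptions, not the SEB models developed in this research.

```python
import numpy as np

# Illustrative paired observations: the AFC transaction window (last tag
# minus first tag, seconds) and the surveyed dwell time at the same stop.
txn_window_s = np.array([10., 14., 18., 22., 30., 35.])   # from AFC records
dwell_s      = np.array([16., 19., 24., 27., 36., 40.])   # from field survey

# Calibrate a simple linear model on the field data.
a, b = np.polyfit(txn_window_s, dwell_s, 1)

def estimate_dwell(txn_window):
    """Predict dwell time (s) from an observed transaction window (s)."""
    return a * txn_window + b

print(estimate_dwell(25.0))
```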
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer to a 'plug and play' system.
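A small sketch of the regression stage follows, fitting a Gaussian process to calibration-normalised features, the best-performing configuration reported above. The four-dimensional synthetic features, kernel choice and counts are placeholders, not the paper's actual feature set.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-ins for calibration-normalised features (e.g. object size
# in metres rather than pixels) and the corresponding crowd counts.
rng = np.random.default_rng(0)
features = rng.random((60, 4))        # size / shape / edge / keypoint features
counts = 30 * features[:, 0] + 10 * features[:, 2] + rng.normal(0, 1, 60)

# Gaussian process regression with a smooth kernel plus a noise term.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(features, counts)

# Predict a count (with uncertainty) for features from a new viewpoint.
pred, std = gp.predict(rng.random((1, 4)), return_std=True)
print(pred, std)
```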
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications in vast practical domains such as security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are the calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, of camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen in a way that maximises the utility of the network (e.g. maximum coverage) while meeting user requirements.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
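The flavour of the annealing search can be sketched as below for a fixed number of cameras; the coverage model, cooling schedule and the omission of trans-dimensional moves (adding or removing cameras), which are what distinguish the proposed TDSA, are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
targets = rng.random((200, 2))            # points in the plane to be covered

def coverage(cams, radius=0.25):
    """Fraction of targets within sensing radius of some camera (toy model)."""
    d = np.linalg.norm(targets[:, None, :] - cams[None, :, :], axis=2)
    return (d.min(axis=1) < radius).mean()

# Simulated annealing over camera positions: perturb, and accept worse
# configurations with a temperature-dependent probability.
cams = rng.random((4, 2))                 # fixed camera count in this sketch
T = 1.0
for step in range(2000):
    cand = cams + rng.normal(0, 0.05, cams.shape)
    dE = coverage(cand) - coverage(cams)
    if dE > 0 or rng.random() < np.exp(dE / T):
        cams = cand
    T *= 0.999                            # geometric cooling schedule

print(coverage(cams))
```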
Application of near infrared (NIR) spectroscopy for determining the thickness of articular cartilage
Abstract:
The determination of the characteristics of articular cartilage, such as thickness, stiffness and swelling, especially in a form that can facilitate real-time decisions and diagnostics, is still a matter for research and development. This paper correlates near infrared spectroscopy with mechanically measured cartilage thickness to establish a fast, non-destructive, repeatable and precise protocol for determining this tissue property. Statistical correlation was conducted between the thickness of bovine cartilage specimens (n = 97) and regions of their near infrared spectra. Nine regions were established along the full absorption spectrum of each sample and were correlated with the thickness using partial least squares (PLS) regression multivariate analysis. The coefficient of determination (R²) varied between 53% and 93%, with the most predictive region for cartilage thickness (R² = 93.1%, p < 0.0001) lying in the wavenumber region 5350-8850 cm⁻¹. Our results demonstrate that the thickness of articular cartilage can be measured spectroscopically using NIR light. This protocol is potentially beneficial to clinical practice and surgical procedures in the treatment of joint diseases such as osteoarthritis.
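A brief sketch of the analysis pattern, PLS regression of thickness on a spectral window with a cross-validated R², is given below; the synthetic spectra, component count and the induced thickness relationship are placeholders rather than the bovine NIR data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: 97 "spectra" with 350 absorbance channels, and a
# thickness that depends on one spectral window plus noise.
rng = np.random.default_rng(0)
spectra = rng.random((97, 350))
thickness = spectra[:, 40:60].sum(axis=1) * 0.1 + rng.normal(0, 0.05, 97)

# PLS regression of thickness on the spectra, scored by cross-validated R^2.
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, spectra, thickness, cv=5, scoring="r2")
print(r2.mean())
```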
Abstract:
Railway is one of the most important, reliable and widely used means of transportation, carrying freight, passengers, minerals, grain, etc. Thus, research on railway tracks is extremely important for the development of railway engineering and technologies. The safe operation of a railway track is based on the railway track structure, which includes rails, fasteners, pads, sleepers, ballast, subballast and formation. Sleepers are very important components of the entire structure and may be made of timber, concrete, steel or synthetic materials. Concrete sleepers were first installed around the middle of the last century and are currently installed in great numbers around the world. Consequently, the design of concrete sleepers has a direct impact on the safe operation of railways. The "permissible stress" method is currently the most commonly used method of designing sleepers. However, the permissible stress principle does not consider the ultimate strength of materials, the probabilities of actual loads, or the risks associated with failure, all of which can lead to cost-ineffective and over-designed prestressed concrete sleepers. Recently, the limit states design method, which appeared in the last century and has already been applied in the design of buildings, bridges, etc., has been proposed as a better method for the design of prestressed concrete sleepers. Limit states design has significant advantages over permissible stress design, such as the utilisation of the full strength of the member and a rational analysis of the probabilities related to sleeper strength and applied loads. This research aims to apply ultimate limit states design to the prestressed concrete sleeper, namely to obtain the load factors of both static and dynamic loads for the ultimate limit states design equations. However, sleepers in rail tracks require different safety levels for different types of tracks, which means that different types of tracks require different load factors in the limit states design equations. Therefore, the core tasks of this research are to find the load factors of the static and dynamic components of loads on track, and the strength reduction factor for sleeper bending strength, for the ultimate limit states design equations for four main types of tracks: heavy haul, freight, medium speed passenger and high speed passenger tracks. To find these factors, multiple samples of static loads and dynamic loads, and their distributions, are needed. Of the four types of tracks, the heavy haul track has measured data from the Braeside Line (a heavy haul line in Central Queensland), and the distributions of both static and dynamic loads can be derived from these data. The other three types of tracks have no measured data from sites, and experimental data are scarce. In order to generate the data samples and obtain their distributions, computer-based simulations were employed, with the wheel-track impacts assumed to be induced by wheel flats of different sizes. A validated simulation package named DTrack was first employed to generate the dynamic loads for the freight and medium speed passenger tracks. However, DTrack is only valid for tracks which carry low or medium speed vehicles. Therefore, a 3-D finite element (FE) model was then established for the wheel-track impact analysis of the high speed track.
This FE model has been validated by comparing its simulation results with the DTrack simulation results, and with the results of traditional theoretical calculations, for the case of heavy haul track. The dynamic load data for the high speed track were then obtained from the FE model, and the distributions of both static and dynamic loads were extracted accordingly. All derived load distributions were fitted with appropriate functions. By extrapolating those distributions, the important distribution parameters for the sleeper bending moments induced by static loads and by extreme wheel-rail impact forces were obtained. The load factors were then obtained through limit states design calibration based on reliability analyses with the derived distributions. After that, a sensitivity analysis was performed and the reliability of the resulting limit states design equations was confirmed. It has been found that limit states design can be effectively applied to railway concrete sleepers. This research contributes significantly to railway engineering and track safety. It helps to decrease failures, risks and accidents in track structures; better determines the load range for existing sleepers in track; better rates the strength of concrete sleepers to support larger impacts and loads on railway track; increases the reliability of concrete sleepers; and offers substantial savings to the railway industry. Based on this research, several further lines of research can be pursued. Firstly, it has been found that the 3-D FE model is suitable for the study of track loadings and track structure vibrations. Secondly, equations for the serviceability and damageability limit states can be developed from the concepts underlying the ultimate limit states design equations for concrete sleepers obtained in this research.
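A compact sketch of the reliability calculation underlying such a calibration, estimating the probability that sleeper bending strength is exceeded by the combined static and dynamic bending moments sampled from fitted distributions, is given below; the distribution families and parameters are illustrative assumptions, not the fitted Braeside or simulated-load data.

```python
import numpy as np
from scipy import stats

# Monte Carlo reliability check: sample sleeper bending strength R and the
# static (S) and dynamic impact (D) bending moments from assumed fitted
# distributions, then estimate the probability of failure P(R < S + D).
rng = np.random.default_rng(0)
n = 1_000_000
R = stats.lognorm(s=0.10, scale=70.0).rvs(n, random_state=rng)    # strength, kN*m
S = stats.norm(loc=20.0, scale=2.0).rvs(n, random_state=rng)      # static moment
D = stats.gumbel_r(loc=25.0, scale=5.0).rvs(n, random_state=rng)  # impact moment

pf = np.mean(R < S + D)        # probability of failure
beta = -stats.norm.ppf(pf)     # corresponding reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
```

In a calibration loop, load factors in the design equation would be adjusted until the reliability index reaches the target level set for each track type.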