868 results for temperature-based models


Relevance:

90.00%

Publisher:

Abstract:

This paper proposes a numerical method based on PCA-ANFIS (Adaptive Neuro-Fuzzy Inference System) to address the problem of the uncertain cycle of water injection in oilfields. After PCA reduces the dimensionality of the original data, ANFIS is applied to train and test the reduced data. The PCA-ANFIS models are verified against injection statistics collected from 116 wells in an oilfield; the average absolute testing error is 1.80 months. Compared with non-PCA models, whose average error of 4.33 months is far larger, testing accuracy is greatly enhanced by our approach. These tests show that the PCA-ANFIS method is robust in predicting the effective cycle of water injection, helping oilfield developers design water injection schemes.
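A minimal sketch of the pipeline shape described above, with two stated substitutions: the well data are synthetic, and an MLP regressor stands in for ANFIS (which is not available in scikit-learn). Feature names, sizes and the target construction are illustrative assumptions only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for per-well injection statistics (116 wells, 10 features);
# the target is the effective injection cycle in months.
X = rng.normal(size=(116, 10))
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=116) + 12.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA reduces dimensionality before the neuro-fuzzy stage; the MLP below is a
# stand-in for ANFIS, not an implementation of it.
model = make_pipeline(
    PCA(n_components=3),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()  # average absolute error, "months"
```

The paper's 1.80-month figure comes from real well data; this sketch only shows where PCA sits relative to the learner.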

Relevance:

90.00%

Publisher:

Abstract:

International audience

Relevance:

90.00%

Publisher:

Abstract:

Several models of predictive epidemics of arthropod-vectored plant viruses have been studied in an attempt to bring understanding to the complex but specific relationships within the three-cornered pathosystem (virus, vector and host plant), as well as its interactions with the environment. A large body of work focuses on weather-based models as management tools for monitoring pests and diseases, with very few incorporating the contribution of the vector's life processes to the disease dynamics, an essential aspect when mitigating virus incidence in a crop stand. In this study, we hypothesized that the multiplication and spread of tomato spotted wilt virus (TSWV) in a crop stand is strongly related to its influence on Frankliniella occidentalis preferential behaviour and life expectancy. Model dynamics of important aspects of disease development within the TSWV-F. occidentalis-host plant interaction were developed, focusing on F. occidentalis' life processes as influenced by TSWV. The results show that the influence of TSWV on F. occidentalis preferential behaviour leads to an estimated increase in the relative acquisition rate of the virus, and up to a 33% increase in the transmission rate to healthy plants. Increased life expectancy, which relates to improved fitness, is also dependent on the virus-induced preferential behaviour, consequently promoting multiplication and spread of the virus in a crop stand. The development of vector-based models could further help in elucidating the role of tri-trophic interactions in agricultural disease systems. Using the model to examine the components of the disease process could also boost our understanding of how specific epidemiological characteristics interact to cause disease in crops. With this level of understanding we can develop more precise control strategies for both the virus and the vector.
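The qualitative effect described above can be sketched with a minimal SI-type vector-host system. This is not the authors' model: the compartments, rate constants and the `pref` factor (which scales vector-plant contact to mimic virus-induced preferential behaviour) are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Healthy plants H, infected plants I, viruliferous vectors V.
beta, acq, mu = 0.002, 0.01, 0.1   # transmission, acquisition, vector loss (assumed)

def rhs(t, y, pref):
    H, I, V = y
    new_inf = beta * pref * V * H          # vectors inoculating healthy plants
    return [-new_inf, new_inf, acq * pref * I - mu * V]

y0 = [100.0, 1.0, 1.0]
sol_base = solve_ivp(rhs, (0, 10), y0, args=(1.0,), max_step=0.1)
sol_pref = solve_ivp(rhs, (0, 10), y0, args=(1.33,), max_step=0.1)
# A preference-driven 33% increase in effective contact leaves more plants
# infected at any given time than the baseline run.
```

Comparing `sol_pref.y[1]` with `sol_base.y[1]` shows how a behavioural effect on the vector propagates into faster epidemic growth, which is the mechanism the study quantifies.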

Relevance:

90.00%

Publisher:

Abstract:

The East Asian Monsoon (EAM) is an active component of the global climate system and has a profound social and economic impact in East Asia and the surrounding countries. Its impact on regional hydrological processes may influence society through industrial water supplies, food productivity and energy use. In order to predict future rates of climate change, reliable and accurate reconstructions of regional temperature and rainfall are required from all over the world to test climate models and better predict future climate variability. Hokkaido is a region with limited palaeoclimate data that is sensitive to climate change. Instrumental data show that the climate of Hokkaido is influenced by the EAM; however, instrumental records are limited to the past ~150 years. Down-core climate reconstructions, extending beyond instrumental records, are therefore required to provide a better understanding of the long-term behaviour of the climate drivers in this region (e.g. the EAM, the Westerlies and teleconnections). The present study develops multi-proxy reconstructions to determine past climatic and hydrologic variability in Japan over the past 1000 years and to aid understanding of the effects of the EAM and the Westerlies, both independently and interactively. A 250-cm-long sediment core from Lake Toyoni, Hokkaido, was retrieved to investigate terrestrial and aquatic input, lake temperature and hydrological changes over the past 1000 years within Lake Toyoni and its catchment, using X-ray fluorescence (XRF) data, alkenone palaeothermometry, and the molecular and hydrogen isotopic composition of higher plant waxes (δD(HPW)). We conducted the first survey for alkenone biomarkers in eight lakes in Hokkaido, Japan, and detected alkenones within the sediments of Lake Toyoni. We present the first lacustrine alkenone record from Japan, including genetic analysis of the alkenone producer.
C37 alkenone concentrations in surface sediments are 18 µg C37 g−1 of dry sediment and the dominant alkenone is C37:4. 18S rDNA analysis revealed the presence of a single alkenone producer in Lake Toyoni, and thus a single calibration is used for reconstructing lake temperature from alkenone unsaturation patterns. Temperature reconstructions over the past 1000 years suggest that lake water temperature varied between 8 and 19 °C, which is in line with water temperature changes observed in the modern Lake Toyoni. The alkenone-based temperature reconstruction provides evidence for the variability of the EAM over the past 1000 years. The δD(HPW) record suggests that the large fluctuations (~40‰) represent changes in temperature and precipitation source in this region, which are ultimately controlled by the EAM system; the record therefore serves as a proxy for the EAM. Complementing the biomarker reconstructions, the XRF data strengthen the lake temperature and hydrological reconstructions by providing information on past productivity, which is controlled by the East Asian Summer Monsoon (EASM), and on wind input into Lake Toyoni, which is controlled by the East Asian Winter Monsoon (EAWM) and the Westerlies. By combining the data generated from XRF, alkenone palaeothermometry and the δD(HPW) reconstructions, we provide valuable information on the EAM and the Westerlies, including the timing of intensification and weakening, the teleconnections influencing them and the relationship between them. During the Medieval Warm Period (MWP), we find that the EASM dominated and the EAWM was suppressed, whereas during the Little Ice Age (LIA) the EAWM dominated, with periods of increased EASM and Westerlies intensification. The El Niño Southern Oscillation (ENSO) significantly influenced the EAM: a strong EASM occurred during El Niño conditions and a strong EAWM during La Niña.
The North Atlantic Oscillation (NAO), on the other hand, was a key driver of Westerlies intensification, with strengthening of the Westerlies during a positive NAO phase and weakening during a negative NAO phase. A key finding of this study is that our data support an anti-phase relationship between the EASM and the EAWM (i.e. intensification of the EASM with weakening of the EAWM and vice versa), and that the EAWM and the Westerlies vary independently of each other, rather than coinciding as previously suggested in other studies.
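The unsaturation-based temperature estimate referred to above can be illustrated with the standard tetra-unsaturation-inclusive index. The index formula is the conventional one; the linear calibration coefficients below are placeholders, not the site-specific calibration derived in the thesis.

```python
# Hypothetical illustration of alkenone palaeothermometry.
def u37k(c372, c373, c374):
    """U37K = (C37:2 - C37:4) / (C37:2 + C37:3 + C37:4)."""
    return (c372 - c374) / (c372 + c373 + c374)

def lake_temperature(u, slope=30.0, intercept=20.0):
    # T = slope * U + intercept; the real coefficients would come from the
    # single-producer calibration established for Lake Toyoni.
    return slope * u + intercept

u = u37k(c372=2.0, c373=4.0, c374=12.0)   # C37:4-dominated, as at Lake Toyoni
t = lake_temperature(u)
```

With C37:4 dominant, the index is strongly negative, which is why a single, producer-specific calibration matters for converting it to lake temperature.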

Relevance:

80.00%

Publisher:

Abstract:

Current approaches to managing and supporting staff and addressing turnover in child protection predominantly rely on deficit-based models that focus on limitations, shortcomings and psychopathology. This article explores an alternative approach, drawing on models of resilience, an emerging field linked to trauma and adversity. To date, the concept of resilience has seen limited application to staff and employment issues. In child protection, staff typically face a range of adverse and traumatic experiences whose flow-on implications create difficulties for staff recruitment and retention and reduce service quality. This article commences with a discussion of the multifactorial influences of the troubled state of contemporary child protection systems on staffing problems. Links between these and the difficulties with the predominant deficit models are then considered. The article concludes with a discussion of the relevance and utility of resilience models in developing alternative approaches to child protection staffing issues.

Relevance:

80.00%

Publisher:

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This can be mitigated by considering the AUC or Kappa statistic, as well as by evaluating subsets with the majority class under-sampled. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all time-series-only datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules that are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
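The evaluation strategy described above (majority-class under-sampling, then Kappa and AUC alongside misclassification rate) can be sketched as follows. The data are synthetic, not the anaesthesia dataset, and logistic regression stands in for the range of learners compared in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Imbalanced two-class problem, roughly mimicking a rare-outcome dataset.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Under-sample the majority class in the training split only.
rng = np.random.default_rng(0)
maj = np.flatnonzero(y_tr == 0)
mino = np.flatnonzero(y_tr == 1)
keep = np.concatenate([rng.choice(maj, size=mino.size, replace=False), mino])
clf = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])

pred = clf.predict(X_te)
mr = 100 * (pred != y_te).mean()                      # misclassification rate, %
kappa = cohen_kappa_score(y_te, pred)                 # chance-corrected agreement
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

MR alone can look good on imbalanced data simply by favouring the majority class; reporting Kappa and AUC alongside it, as the study does, guards against that.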

Relevance:

80.00%

Publisher:

Abstract:

Micropolar and RNG-based modelling of industrially relevant boundary layer and recirculating swirling flows is described. Both models contain a number of adjustable parameters and auxiliary conditions that must be either modelled or experimentally determined, and the effects of varying these on the resulting flow solutions are quantified. To these ends, the behaviour of the micropolar model for self-similar flow over a surface that is both stretching and transpiring is explored in depth. The simplified governing equations permit both analytic and numerical approaches to be adopted, and a number of closed-form solutions (both exact and approximate) are obtained using perturbation and order-of-magnitude analyses. Results are compared with the corresponding Newtonian flow solution to highlight the differences between the micropolar and classical models, and significant new insights into the behaviour of the micropolar model are revealed for this flow. The behaviour of the RNG-based models for swirling flow with vortex breakdown zones is explored in depth via computational modelling of two experimental data sets and an idealised breakdown flow configuration. Meticulous modelling of upstream auxiliary conditions is required to correctly assess the behaviour of the models studied in this work. The novel concept of using the results to infer the role of turbulence in the onset and topology of the breakdown zone is employed.
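The Newtonian baseline that the micropolar results are compared against is the classical stretching-sheet similarity problem (Crane flow), which can be solved numerically as a sketch of the approach. The micropolar model adds coupled microrotation equations and extra parameters not included here.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Crane flow: f''' + f f'' - (f')^2 = 0,  f(0)=0, f'(0)=1, f'(inf)=0,
# with exact solution f'(eta) = exp(-eta) in the impermeable Newtonian case.
def rhs(eta, y):                      # y = [f, f', f'']
    return np.vstack([y[1], y[2], y[1] ** 2 - y[0] * y[2]])

def bc(ya, yb):
    return np.array([ya[0], ya[1] - 1.0, yb[1]])

eta = np.linspace(0.0, 10.0, 101)     # truncate "infinity" at eta = 10
guess = np.zeros((3, eta.size))
guess[0] = 1.0 - np.exp(-eta)         # seed with the known solution shape
guess[1] = np.exp(-eta)
guess[2] = -np.exp(-eta)
sol = solve_bvp(rhs, bc, eta, guess)
err = abs(sol.sol(1.0)[1] - np.exp(-1.0))   # deviation from the exact f'
```

Comparing such numerical solutions against the exact profile is how the closed-form and perturbation results in the study can be cross-checked.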

Relevance:

80.00%

Publisher:

Abstract:

In silico experimental modeling of cancer involves combining findings from biological literature with computer-based models of biological systems in order to conduct investigations of hypotheses entirely in the computer laboratory. In this paper, we discuss the use of in silico modeling as a precursor to traditional clinical and laboratory research, allowing researchers to refine their experimental programs with an aim to reducing costs and increasing research efficiency. We explain the methodology of in silico experimental trials before providing an example of in silico modeling from the biomathematical literature with a view to promoting more widespread use and understanding of this research strategy.
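As a toy illustration of the methodology described above (not the model from the biomathematical literature the paper discusses), an in silico "trial" can sweep a treatment parameter over a simple growth model before any laboratory work. All parameters here are hypothetical.

```python
# Toy in silico trial: logistic tumour growth with a hypothetical kill
# rate applied from day 30 onward. Purely illustrative parameters.
def simulate(kill_rate, days=100, dt=0.1, r=0.2, K=1e3, n0=10.0):
    n = n0
    for step in range(int(days / dt)):
        t = step * dt
        treat = kill_rate if t >= 30 else 0.0
        n += dt * (r * n * (1 - n / K) - treat * n)   # forward Euler step
        n = max(n, 0.0)
    return n

untreated = simulate(0.0)
treated = simulate(0.5)
# Sweeping kill_rate cheaply in silico narrows which regimes merit
# follow-up in the clinical or laboratory setting.
```

This is the sense in which in silico experimentation acts as a precursor: whole parameter sweeps cost seconds, so the expensive experiments can target the interesting region.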

Relevance:

80.00%

Publisher:

Abstract:

To accommodate growing air traveller numbers at airports worldwide, it is important to simulate and understand passenger flows in order to predict future capacity constraints and levels of service. We discuss the ability of agent-based models to capture complicated pedestrian movement in built environments. In this paper we propose advanced passenger traits to enable more detailed modelling of behaviours in terminal buildings, particularly in the departure hall around the check-in facilities. To demonstrate the concepts, we perform a series of passenger agent simulations in a virtual airport terminal. In doing so, we generate a spatial distribution of passengers within the departure hall across ancillary facilities such as cafes, information kiosks and phone booths, as well as common check-in facilities, and observe the effects this has on passenger check-in and departure hall dwell times, and facility utilization.
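The idea of passenger traits driving dwell time can be sketched with a minimal agent model. The facilities, visit probabilities and service-time ranges below are assumptions for illustration, not calibrated values from the paper.

```python
import random

random.seed(0)
# Hypothetical per-passenger chance of visiting each ancillary facility.
FACILITIES = {"cafe": 0.3, "kiosk": 0.1, "phone": 0.05}

def passenger_dwell():
    """Check-in time plus any discretionary stops, in minutes."""
    dwell = random.uniform(2, 8)                  # check-in queue + service
    for facility, p in FACILITIES.items():
        if random.random() < p:                   # trait: chance of visiting
            dwell += random.uniform(5, 20)        # time spent at the facility
    return dwell

dwells = [passenger_dwell() for _ in range(1000)]
mean_dwell = sum(dwells) / len(dwells)
```

Even this crude model shows how discretionary traits shift the dwell-time distribution, which is what the richer agent simulations in the paper measure spatially.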

Relevance:

80.00%

Publisher:

Abstract:

Simulating passenger flows within airports is very important, as it can provide an indication of queue lengths, bottlenecks, system capacity and overall level of service. To date, visual simulation tools such as agent-based models have focused on processing formalities such as check-in, and have not incorporated discretionary activities such as duty-free shopping. As airport retail contributes greatly to airport revenue generation, but also has potentially detrimental effects on facilitation efficiency benchmarks, this study developed a simple simulation model which captures common duty-free purchasing opportunities as well as high-level passenger behaviours. It is argued that such a model enables more realistic simulation of passenger facilitation, and provides a platform for simulating real-time revenue generation as well as more complex passenger behaviours within the airport. Simulations are conducted to verify the suitability of the model for inclusion in the international arrivals process for assessing passenger flow and infrastructure utilization.
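The trade-off described above (retail revenue versus facilitation efficiency) can be sketched with a high-level purchasing model. The purchase probability, spend range and added-delay range are illustrative assumptions, not figures from the study.

```python
import random

random.seed(1)

def arriving_passenger(buy_prob=0.25):
    """Return (extra_seconds_in_store, spend_dollars) for one passenger."""
    if random.random() < buy_prob:
        return random.uniform(120, 600), random.uniform(20, 200)  # buyer
    return random.uniform(0, 60), 0.0            # browse or walk through

results = [arriving_passenger() for _ in range(2000)]
revenue = sum(spend for _, spend in results)
mean_delay = sum(delay for delay, _ in results) / len(results)
# Revenue accrues at the cost of added dwell time in the arrivals process,
# which is exactly the tension the simulation model is built to quantify.
```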

Relevance:

80.00%

Publisher:

Abstract:

Background/aim: In response to the high burden of disease associated with chronic heart failure (CHF), in particular the high rates of hospital admissions, dedicated CHF management programs (CHF-MPs) have been developed, and over the past five years there has been rapid growth of CHF-MPs in Australia. Given the apparent mismatch between the demand for, and availability of, CHF-MPs, this paper discusses the accessibility and quality of current CHF-MPs in Australia. Methods: The data presented in this report are combined from the research of the co-authors, in particular a review of inequities in access to chronic heart failure services which utilised geographical information systems (GIS), and a survey of heterogeneity in quality and service provision in Australia. Results: Of the 62 CHF-MPs surveyed, 93% (58 centres) were located in areas rated as Highly Accessible, indicating that most CHF-MPs are located in capital cities or large regional cities. Six percent (4 CHF-MPs) were located in Accessible areas (country towns or cities). No CHF-MPs had been established outside of cities to service the estimated 72,000 individuals with CHF living in rural and remote areas. Sixteen percent of programs recruited NYHA Class I patients, and of these 20% lacked confirmation (echocardiogram) of their diagnosis. Conclusion: Overall, these data highlight the urgent need to provide equitable access to CHF-MPs. When establishing CHF-MPs, consideration should be given to current evidence-based models to ensure quality in practice.

Relevance:

80.00%

Publisher:

Abstract:

Recent theoretical research has shown that ocean currents and wind interact to disperse seeds over long distances among isolated landmasses. Dispersal of seeds among isolated oceanic islands, by birds, oceans and man, is a well-known phenomenon, and many widespread island plants have traits that facilitate this process. Crucially, however, there have been no mechanistic vector-based models of long-distance dispersal for seeds among isolated oceanic islands based on empirical data. Here, we propose a plan to develop seed analogues, or pseudoseeds, fitted with wireless sensor technology that will enable high-fidelity tracking as they disperse across the ocean. The pseudoseeds will be precisely designed to mimic actual seed buoyancy and morphology enabling realistic and accurate, vector-based dispersal models of ocean seed dispersal over vast geographic scales.
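The mechanistic, vector-based dispersal model proposed above can be sketched in its simplest form: a pseudoseed's position advected by the surface current plus a wind "leeway" fraction. The velocities and leeway coefficient are placeholders for the sensor-informed values the project would supply.

```python
import numpy as np

def drift(days, current, wind, leeway=0.03, dt_hours=1.0):
    """Integrate a pseudoseed track under steady current and wind leeway."""
    pos = np.zeros(2)                            # km east, km north
    steps = int(days * 24 / dt_hours)
    for _ in range(steps):
        v = current + leeway * wind              # km/h effective drift velocity
        pos += v * dt_hours
    return pos

current = np.array([0.5, 0.1])                   # km/h, assumed steady
wind = np.array([10.0, -2.0])                    # km/h, assumed steady
track_end = drift(30, current, wind)             # position after 30 days
```

Real tracks from instrumented pseudoseeds would replace the steady fields here with time-varying ones, which is exactly the empirical grounding the proposal argues is missing.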

Relevance:

80.00%

Publisher:

Abstract:

In 1990 the Dispute Resolution Centres Act, 1990 (Qld) (the Act) was passed by the Queensland Parliament. In the second reading speech for the Dispute Resolution Centres Bill in May 1990 the Hon Dean Wells stated that the proposed legislation would make mediation services available “in a non-coercive, voluntary forum where, with the help of trained mediators, the disputants will be assisted towards their own solutions to their disputes, thereby ensuring that the result is acceptable to the parties” (Hansard, 1990, 1718). It was recognised at that time that a method for resolving disputes was necessary for which “the conventional court system is not always equipped to provide lasting resolution” (Hansard, 1990, 1717). In particular, the lasting resolution of “disputes between people in continuing relationships” was seen as made possible through the new legislation; for example, “domestic disputes, disputes between employees, and neighbourhood disputes relating to such issues as overhanging tree branches, dividing fences, barking dogs, smoke, noise and other nuisances are occurring continually in the community” (Hansard, 1990, 1717). The key features of the proposed form of mediation in the Act were articulated as follows: “attendance of both parties at mediation sessions is voluntary; a party may withdraw at any time; mediation sessions will be conducted with as little formality and technicality as possible; the rules of evidence will not apply; any agreement reached is not enforceable in any court; although it could be made so if the parties chose to proceed that way; and the provisions of the Act do not affect any rights or remedies that a party to a dispute has apart from the Act” (Hansard, 1990, 1718). 
Since the introduction of the Act, the Alternative Dispute Resolution Branch of the Queensland Department of Justice and Attorney General has offered mediation services through, first the Community Justice Program (CJP), and then the Dispute Resolution Centres (DRCs) for a range of family, neighbourhood, workplace and community disputes. These services have mirrored those available through similar government agencies in other states such as the Community Justice Centres of NSW and the Victorian Dispute Resolution Centres. Since 1990, mediation has become one of the fastest growing forms of alternative dispute resolution (ADR). Sourdin has commented that "In addition to the growth in court-based and community-based dispute resolution schemes, ADR has been institutionalised and has grown within Australia and overseas” (2005, 14). In Australia, in particular, the development of ADR service provision “has been assisted by the creation and growth of professional organisations such as the Leading Edge Alternative Dispute Resolvers (LEADR), the Australian Commercial Dispute Centres (ACDC), Australian Disputes Resolution Association (ADRA), Conflict Resolution Network, and the Institute of Arbitrators and Mediators Australia (IAMA)” (Sourdin, 2005, 14). The increased emphasis on the use of ADR within education contexts (particularly secondary and tertiary contexts) has “also led to an increasing acceptance and understanding of (ADR) processes” (Sourdin, 2005, 14). Proponents of the mediation process, in particular, argue that much of its success derives from the inherent flexibility and creativity of the agreements reached through the mediation process and that it is a relatively low cost option in many cases (Menkel-Meadow, 1997, 417). 
It is also accepted that one of the main reasons for the success of mediation can be attributed to the high level of participation by the parties involved, which creates a sense of ownership of, and commitment to, the terms of the agreement (Boulle, 2005, 65). These characteristics are associated with some of the core values of mediation, particularly as practised in community-based models as found at the DRCs. These core values include voluntary participation, party self-determination and party empowerment (Boulle, 2005, 65). For this reason mediation is argued to be an effective approach to resolving disputes that creates lasting resolution of the issues. Evaluation of the mediation process, particularly in the context of the growth of ADR, has been an important aspect of the development of the process (Sourdin, 2008). Writing in 2005, for example, Boulle states that “although there is a constant refrain for more research into mediation practice, there has been a not insignificant amount of mediation measurement, both in Australia and overseas” (Boulle, 2005, 575). The positive claims of mediation have been supported to a significant degree by evaluations of the efficiency and effectiveness of the process. A common indicator of the effectiveness of mediation is the settlement rate achieved. High settlement rates for mediated disputes have been found for Australia (Altobelli, 2003) and internationally (Alexander, 2003). Boulle notes that mediation agreement rates claimed by service providers range from 55% to 92% (Boulle, 2005, 590). The annual reports for the Alternative Dispute Resolution Branch of the Queensland Department of Justice and Attorney-General considered prior to the commencement of this study generally indicated achievement of an approximate settlement rate of 86% by the Queensland Dispute Resolution Centres. More recently, the 2008-2009 annual report states that of the 2291 civil disputes mediated in 2007-2008, 86% reached an agreement. 
Further, of the 2693 civil disputes mediated in 2008-2009, 73% reached an agreement. These results are noted in the report as indicating “the effectiveness of mediation in resolving disputes” and as reflecting “the high level of agreement achieved for voluntary mediations” (Annual Report, 2008-2009, online). Whilst the settlement rates for the DRCs are strong, parties are rarely contacted for long term follow-up to assess whether agreements reached during mediation lasted to the satisfaction of each party. It has certainly been the case that the Dispute Resolution Centres of Queensland have not been resourced to conduct long-term follow-up assessments of mediation agreements. As Wade notes, "it is very difficult to compare "success" rates” and whilst “politicians want the comparison studies (they) usually do not want the delay and expense of accurate studies" (1998, 114). To date, therefore, it is fair to say that the efficiency of the mediation process has been evaluated but not necessarily its effectiveness. Rather, the practice at the Queensland DRCs has been to evaluate the quality of mediation service provision and of the practice of the mediation process. This has occurred, for example, through follow-up surveys of parties' satisfaction rates with the mediation service. In most other respects it is fair to say that the Centres have relied on the high settlement rates of the mediation process as a sign of the effectiveness of mediation (Annual Reports 1991 - 2010). Research of the mediation literature conducted for the purpose of this thesis has also indicated that there is little evaluative literature that provides an in-depth analysis and assessment of the longevity of mediated agreements. 
Instead evaluative studies of mediation tend to assess how mediation is conducted, or compare mediation with other conflict resolution options, or assess the agreement rate of mediations, including parties' levels of satisfaction with the service provision of the dispute resolution service provider (Boulle, 2005, Chapter 16).

Relevance:

80.00%

Publisher:

Abstract:

Purpose - Thermo-magnetic convection and heat transfer of a paramagnetic fluid placed in a micro-gravity condition (g = 0) and under a uniform vertical gradient magnetic field in an open square cavity with three cold sidewalls have been studied numerically. Design/methodology/approach - The magnetic force is proportional to the magnetic susceptibility and the gradient of the square of the magnetic induction. The magnetic susceptibility is inversely proportional to the absolute temperature, following Curie's law. Thermal convection of a paramagnetic fluid can therefore take place even in a zero-gravity environment as a direct consequence of temperature differences occurring within the fluid, due to constant internal heat generation, placed within a magnetic field gradient. Findings - Effects of the magnetic Rayleigh number, Ra, Prandtl number, Pr, and paramagnetic fluid parameter, m, on the flow pattern and isotherms, as well as on the heat absorption, are presented graphically. It is found that the heat transfer rate is suppressed with increasing magnetic Rayleigh number and paramagnetic fluid parameter for the present investigation. Originality/value - It is possible to control the buoyancy force by using a superconducting magnet. To the best of the author's knowledge, no literature related to magnetic convection for this configuration is available.
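The driving term described above can be written compactly. This is the standard form of the magnetic body force on a paramagnetic fluid with Curie-law susceptibility, given here for clarity rather than reproduced from the paper:

```latex
% Magnetic body force on a paramagnetic fluid (standard form):
\[
  \mathbf{f}_m \;=\; \frac{\chi}{2\mu_0}\,\nabla B^{2},
  \qquad
  \chi \;=\; \frac{C}{T}\ \text{(Curie's law)},
\]
```

so cooler fluid (higher χ) is drawn more strongly toward the high-field region than warmer fluid, producing convection even at g = 0.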

Relevance:

80.00%

Publisher:

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above 60 year old age group. Thus, despite the fact that half of the seriously injured population comes from the 30 year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal to noise ratio (SNR) gain at the bone - soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs. 
One of the artefacts observed is the step artefact, believed to arise from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method. A surface geometric comparison was conducted between CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. 
In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an aligning method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach in combination with the threshold selection method can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding value for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited 0.23 mm average deviation in comparison to the 0.18 mm average deviation of CT-based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially improving the inaccuracies conferred by poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact that occurred when the volunteer moved the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
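The region-specific multilevel thresholding idea can be sketched as follows. The "volume" here is synthetic and the threshold values are illustrative; the thesis derives its thresholds from a dedicated threshold selection method rather than fixing them by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.uniform(0, 1, size=(30, 64, 64))     # stand-in for MRI slices
volume[:, 20:44, 20:44] += 1.0                    # brighter "bone" block

# One threshold per axial region (proximal / diaphyseal / distal thirds),
# mirroring the per-region thresholds used for the femur.
regions = np.array_split(np.arange(volume.shape[0]), 3)
thresholds = [0.9, 1.0, 0.9]                      # illustrative levels only

mask = np.zeros_like(volume, dtype=bool)
for idx, thr in zip(regions, thresholds):
    mask[idx] = volume[idx] > thr                 # segment each region
bone_fraction = mask.mean()                       # voxels labelled as bone
```

Stacking the per-slice masks and extracting the surface would then yield the 3D model that is compared against the mechanical contact-scan reference.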