900 results for Sites of interest
Abstract:
Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or present the human operator with likely matches from a database. A person tracker is used to speed up subject detection and super-resolution by tracking moving subjects and cropping a region of interest around the subject's face, thereby reducing the number and the size, respectively, of the image frames to be super-resolved. In this paper, experiments have been conducted to demonstrate how the optical flow super-resolution method used improves surveillance imagery for visual inspection as well as for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical flow based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human consumption.
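To make the pipeline concrete, below is a minimal illustrative sketch (not the authors' implementation) of the kind of processing described: detect and crop a face region per frame, register the cropped frames with dense optical flow, and fuse them into a higher-resolution image by simple shift-and-add averaging. The Haar cascade detector, the Farneback flow estimator, the x4 scale factor and the input file name are all assumptions made for this example, and the Eigenface/EBGM recognition stage is not shown.

```python
# Illustrative sketch only: multi-frame super-resolution of a cropped face
# region, registered with dense optical flow and fused by averaging.
# Detector, flow method, scale factor and file names are assumptions.
import cv2
import numpy as np

SCALE = 4  # assumed upsampling factor

def crop_face(gray, cascade):
    """Detect the largest face and return the cropped region of interest (or None)."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return gray[y:y + h, x:x + w]

def super_resolve(frames):
    """Register each upsampled frame to the first with Farneback optical flow
    and average the warped frames (a simple shift-and-add fusion)."""
    ref = cv2.resize(frames[0], None, fx=SCALE, fy=SCALE, interpolation=cv2.INTER_CUBIC)
    acc = ref.astype(np.float64)
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    for frame in frames[1:]:
        up = cv2.resize(frame, (w, h), interpolation=cv2.INTER_CUBIC)
        flow = cv2.calcOpticalFlowFarneback(ref, up, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Warp the frame back onto the reference grid using the estimated flow.
        warped = cv2.remap(up, grid_x + flow[..., 0], grid_y + flow[..., 1],
                           interpolation=cv2.INTER_LINEAR)
        acc += warped
    return (acc / len(frames)).astype(np.uint8)

if __name__ == "__main__":
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input file
    crops = []
    while len(crops) < 8:
        ok, frame = cap.read()
        if not ok:
            break
        face = crop_face(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), cascade)
        if face is not None:
            crops.append(cv2.resize(face, (64, 64)))  # normalise crop size
    if crops:
        cv2.imwrite("face_sr.png", super_resolve(crops))
```

In practice the fusion step is usually more sophisticated (for example regularised reconstruction or learning-based "hallucination"), but the sketch shows where tracking, cropping and optical flow registration sit in the chain.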
Abstract:
Presbyopia affects individuals from the age of 45 years onwards, resulting in difficulty in accurately focusing on near objects. There are many optical corrections available, including spectacles and contact lenses, that are designed to enable presbyopes to see clearly at both far and near distances. However, presbyopic vision corrections also disturb aspects of visual function under certain circumstances. The impact of these changes on activities of daily living, such as driving, is, however, poorly understood. Therefore, the aim of this study was to determine which aspects of driving performance might be affected by wearing different types of presbyopic vision corrections. In order to achieve this aim, three experiments were undertaken. The first experiment involved administration of a questionnaire to compare the subjective driving difficulties experienced when wearing a range of common presbyopic contact lens and spectacle corrections. The questionnaire was developed and piloted, and included a series of items regarding difficulties experienced while driving under daytime and night-time conditions. Two hundred and fifty-five presbyopic patients responded to the questionnaire and were categorised into five groups: those wearing no vision correction for driving (n = 50), bifocal spectacles (BIF, n = 54), progressive addition lens spectacles (PAL, n = 50), monovision (MV, n = 53) and multifocal contact lenses (MTF CL, n = 48). Overall, ratings of satisfaction during daytime driving were relatively high for all correction types. However, MV and MTF CL wearers were significantly less satisfied with aspects of their vision during night-time than daytime driving, particularly with regard to disturbances from glare and haloes. Progressive addition lens wearers noticed more distortion of peripheral vision, while BIF wearers reported more difficulties with tasks requiring changes in focus, and those who wore no vision correction for driving reported problems with intermediate and near tasks. Overall, the mean level of satisfaction for daytime driving was quite high for all of the groups (over 80%), with the BIF wearers being the least satisfied with their vision for driving. Conversely, at night, MTF CL wearers expressed the least satisfaction. Eye and head movements have become of increasing interest in driving research, as they provide a means of understanding how the driver responds to visual stimuli in traffic. Previous studies have found that wearing PAL can affect eye and head movement performance, resulting in slower eye movement velocities and longer times to stabilize the gaze for fixation. These changes in eye and head movement patterns may have implications for driving safety, given that the visual tasks for driving include a range of dynamic search tasks. Therefore, the second study was designed to investigate the influence of different presbyopic corrections on driving-related eye and head movements under standardized laboratory-based conditions. Twenty presbyopes (mean age: 56.1 ± 5.7 years) who had no experience of wearing presbyopic vision corrections, apart from single vision reading spectacles, were recruited. Each participant wore five different types of vision correction: single vision distance lenses (SV), PAL, BIF, MV and MTF CL. 
For each visual condition, participants were required to view videotape recordings of traffic scenes, track a reference vehicle and identify a series of peripherally presented targets while their eye and head movements were recorded using the faceLAB® eye and head tracking system. Digital numerical display panels were also included as near visual stimuli (simulating the visual displays of a vehicle speedometer and radio). The results demonstrated that the path length of eye movements while viewing and responding to driving-related traffic scenes was significantly longer when wearing BIF and PAL than MV and MTF CL. The path length of head movements was greater with SV, BIF and PAL than MV and MTF CL. Target recognition was less accurate when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze, regardless of vision correction type. The third experiment aimed to investigate the real-world driving performance of presbyopes while wearing different vision corrections, measured on a closed-road circuit at night-time. Eye movements were recorded using the ASL Mobile Eye eye-tracking system (as the faceLAB® system proved to be impractical for use outside the laboratory). Eleven participants (mean age: 57.25 ± 5.78 years) were fitted with four types of prescribed vision corrections (SV, PAL, MV and MTF CL). The measures of driving performance on the closed-road circuit included distance to sign recognition, near target recognition, peripheral light-emitting diode (LED) recognition, low-contrast road hazard recognition and avoidance, recognition of all the road signs, time to complete the course, and driving behaviours such as braking, accelerating, and cornering. The results demonstrated that driving performance at night was most affected by MTF CL compared to PAL, resulting in shorter distances to read signs, slower driving speeds, and longer times spent fixating road signs. Monovision resulted in worse performance on the distance to sign recognition task compared to SV and PAL. The SV condition resulted in significantly more errors made in interpreting information from in-vehicle devices, despite longer times spent fixating on these devices. Progressive addition lenses were ranked as the most preferred vision correction, while MTF CL were the least preferred vision correction for night-time driving. This thesis addressed the research question of how presbyopic vision corrections affect driving performance, and the results of the three experiments demonstrated that the different types of presbyopic vision corrections (e.g. BIF, PAL, MV and MTF CL) can affect driving performance in different ways. Distance-related driving tasks showed reduced performance with MV and MTF CL, while tasks which involved viewing in-vehicle devices were significantly hampered by wearing SV corrections. Wearing spectacles such as SV, BIF and PAL induced greater eye and head movements in the simulated driving condition; however, this did not directly translate to impaired performance on the closed-road circuit tasks. These findings are important for understanding the influence of presbyopic vision corrections on vision under real-world driving conditions. 
These findings will also assist eye care practitioners to understand and convey to patients the potential driving difficulties associated with wearing certain types of presbyopic vision correction, and to support the process of matching patients to optical corrections that meet their visual needs.
Abstract:
The call to innovate is ubiquitous across the Australian educational policy context. The claims of innovative practices and environments that occur frequently in university mission statements, strategic plans and marketing literature suggest that this exhortation to innovate appears to have been taken up enthusiastically by the university sector. Throughout the history of universities, a range of reported deficiencies of higher education have worked to produce a notion of crisis. At present, it would seem that innovation is positioned as the solution to the notion of crisis. This thesis is an inquiry into how the insistence on innovation works to both enable and constrain teaching and learning practices in Australian universities. Alongside the interplay between innovation and crisis is the link between resistance and innovation, a link which remains largely unproblematized in the scholarly literature. This thesis works to locate and unsettle understandings of a relationship between innovation and Australian higher education. The aim of this inquiry is to generate new understandings of what counts as innovation within this context and how innovation is enacted. The thesis draws on a number of postmodernist theorists, whose works have informed firstly the research method, and then the analysis and findings. Firstly, there is an assumption that power is capillary and works through discourse to enact power relations which shape certain truths (Foucault, 1990). Secondly, this research scrutinised language practices which frame the capacity for individuals to act, alongside the language practices which encourage an individual to adopt certain attitudes and actions as one’s own (Foucault, 1988). Thirdly, innovation talk is read in this thesis as an example of needs talk, that is, as a medium through which what is considered domestic, political or economic is made and contested (Fraser, 1989). Fourthly, relationships between and within discourses were identified and analysed beyond cause and effect descriptions, and more productively considered to be in a constant state of becoming (Deleuze, 1987). Finally, the use of ironic research methods assisted in producing alternate configurations of innovation talk which are useful and new (Rorty, 1989). The theoretical assumptions which underpin this thesis inform a document analysis methodology, used to examine how certain texts work to shape the ways in which innovation is constructed. The data consisted of three Federal higher education funding policies selected on the rationale that these documents, as opposed to state or locally based policy and legislation, represent the only shared policy context for all Australian universities. The analysis first provided a modernist reading of the three documents, and this was followed by postmodernist readings of these same policy documents. The modernist reading worked to locate and describe the current truths about innovation. The historical context in which the policy was produced as well as the textual features of the document itself were important to this reading. In the first modernist reading, the binaries involved in producing proper and improper notions of innovation were described and analysed. In the process of the modernist analysis and the subsequent location of binary organisation, a number of conceptual collisions were identified, and these sites of struggle were revisited, through the application of a postmodernist reading. 
By applying the theories of Rorty (1989) and Fraser (1989), it became possible to treat these sites not as contradictory and requiring resolution, but rather as spaces in which binary tensions are necessary and productive. This postmodernist reading constructed new spaces for refusing and resisting dominant discourses of innovation which value only certain kinds of teaching and learning practices. By exploring a number of ironic language practices found within the policies, this thesis proposes an alternative way of thinking about what counts as innovation and how it happens. The new readings of innovation made possible through the work of this thesis were in response to a suite of enduring, inter-related questions – what counts as innovation, who or what supports innovation, how does innovation occur, and who are the innovators. The truths presented in response to these questions were treated as the language practices which constitute a dominant discourse of innovation talk. The collisions that occur within these truths were the contested sites of most interest for the analysis. The thesis concludes by presenting a theoretical blueprint which works to shift the boundaries of what counts as innovation and how it happens in a manner which is productive, inclusive and powerful. This blueprint forms the foundation upon which a number of recommendations are made for both my own professional practice and broader contexts. In keeping with the conceptual tone of this study, these recommendations are a suite of new questions which focus attention on the boundaries of innovation talk as an attempt to re-configure what is valued about teaching and learning at university.
Abstract:
An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole-plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements where the effects of varying the initial reactant concentrations are investigated. Careful experimental design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of turbulent diffusivities for the conserved scalar mean and that for the r.m.s. was found to be approximately 1. Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence, as has previously been found theoretically by Klimenko (1995). It is also found that the conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction-dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. 
compare favorably with those found from experiment, except where the measurement position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigating both the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
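As a point of reference for the closure terms discussed above, the following is standard conserved scalar/CMC notation rather than the thesis' own derivation: for the second-order NO–O3 reaction with rate constant k, the mean reaction rate splits into a mean-field product and the reactant covariance, and the CMC model works with means conditioned on the conserved scalar ξ.

```latex
% Standard notation, shown only to illustrate the terms referred to above.
\begin{align}
  \overline{w} &= k\,\overline{c_{\mathrm{NO}}\, c_{\mathrm{O_3}}}
     = k\left(\overline{c}_{\mathrm{NO}}\,\overline{c}_{\mathrm{O_3}}
     + \overline{c'_{\mathrm{NO}} c'_{\mathrm{O_3}}}\right), \\
  Q_i(\eta) &\equiv \bigl\langle c_i \,\big|\, \xi = \eta \bigr\rangle, \qquad
  \overline{c}_i = \int_0^1 Q_i(\eta)\, p(\eta)\,\mathrm{d}\eta .
\end{align}
```

The covariance in the first line is the quantity reported above as negative and significant, and it is this term that closures such as Toor's must approximate; the second line shows how unconditional means are recovered from the conditional means via the conserved scalar p.d.f. p(η).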
Abstract:
The Silk Road Project was a practice-based research project investigating the potential of motion capture technology to inform perceptions of embodiment in dance performance. The project created a multi-disciplinary collaborative performance event using dance performance and real-time motion capture at Deakin University’s Deakin Motion Lab. Several new technological advances in producing real-time motion capture performance were produced, along with a performance event that examined the aesthetic interplay between a dancer’s movement and the precise mappings of its trajectories created by motion capture and real-time motion graphic visualisations.
Abstract:
During the resorbable polymer boom of the 1970s and 1980s, polycaprolactone (PCL) was used in the biomaterials field and in a number of drug-delivery devices. Its popularity was soon superseded by faster-resorbing polymers, which had fewer perceived disadvantages associated with long-term degradation (up to 3-4 years) and intracellular resorption pathways; consequently, PCL was almost forgotten for most of two decades. Recently, a resurgence of interest has propelled PCL back into the biomaterials arena. Its superior rheological and viscoelastic properties over many of its aliphatic polyester counterparts render PCL easy to manufacture and manipulate into a large range of implants and devices. Coupled with relatively inexpensive production routes and FDA approval, this provides a promising platform for the production of longer-term degradable implants which may be manipulated physically, chemically and biologically to possess tailorable degradation kinetics to suit a specific anatomical site. This review discusses the application of PCL as a biomaterial over the last two decades, focusing on the advantages which have propelled its return into the spotlight, with a particular focus on medical devices, drug delivery and tissue engineering.
Abstract:
Over recent decades there has been growing interest in the role of non-motorized modes in the overall transport system (especially walking and cycling for private purposes), and many government initiatives have been taken to encourage these active modes. However, relatively little research attention has been given to the paid form of non-motorized travel, which can be called non-motorized public transport (NMPT). This involves cycle-powered vehicles which can carry several passengers (plus the driver) and a small amount of goods, and which provide flexible hail-and-ride services. Effectively they are non-motorized taxis. Common forms include the cycle-rickshaw (Bangladesh, India), becak (Indonesia), cyclo (Vietnam, Cambodia), bicitaxi (Colombia, Cuba), velo-taxi (Germany, Netherlands), and pedicab (UK, Japan, USA).

The popularity of NMPT is widespread in developing countries, where it caters for a wide range of mobility needs. For instance, in Dhaka, Bangladesh, rickshaws are the preferred mode for non-walk trips and have a higher mode share than cars or buses. Factors that underlie the continued existence and popularity of NMPT in many developing countries include its positive contribution to social equity, micro- and macro-economic significance, employment creation, and suitability for narrow and crowded streets. Although top speeds are lower than motorized modes, NMPT is competitive and cost-effective for the short-distance door-to-door trips that make up the bulk of travel in many developing cities. In addition, NMPT is often the preferred mode for vulnerable groups such as females, children and elderly people. NMPT is more prominent in developing countries, but its popularity and significance are also gradually increasing in several developed countries of Asia, Europe and parts of North America, where there is a trend for the NMPT usage pattern to broaden from tourism to public transport. This shift is due to a number of factors, including the eco-sustainable nature of NMPT; its operating flexibility (such as in areas where motorized vehicle access is restricted or discouraged through pricing); and the dynamics that it adds to the urban fabric. Whereas NMPT may once have been seen as a "dying" mode, in many cities it is maintaining or increasing its significance, with potential for further growth.

This paper will examine and analyze global trends in NMPT in both developing and developed country contexts, covering issues such as usage patterns; NMPT policy and management practices; technological development; and operational integration of NMPT into the overall transport system. It will look at how NMPT policies, practices and usage have changed over time, and at the differing trends in developing and developed countries. In particular, it will use Dhaka, Bangladesh as a case study, in recognition of its standing as the major NMPT city in the world. The aim is to highlight NMPT issues and trends and their significance for shaping future policy towards NMPT in developing and developed countries. The paper will be of interest to transport planners, traffic engineers, urban and regional planners, environmentalists, economists and policy makers.
Abstract:
Various piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large aperture space-based telescopes. Dimensional adjustments of adaptive polymer films depend on charge deposition and require a detailed understanding of the piezoelectric material responses, which are expected to deteriorate owing to strong vacuum UV, γ-ray, X-ray, energetic particle and atomic oxygen exposure. We have investigated the degradation of PVDF and its copolymers under various stress environments detrimental to reliable operation in space. Initial radiation aging studies have shown complex material changes with lowered Curie temperatures and melting points, morphological transformations and significant crosslinking, but little influence on the piezoelectric d33 constants. Complex aging processes have also been observed in accelerated temperature environments, inducing annealing phenomena and cyclic stresses. The results suggest that poling and chain orientation are negatively affected by radiation and temperature exposure. A framework for dealing with these complex material qualification issues and overall system survivability predictions in low Earth orbit conditions has been established. It allows for improved material selection, feedback for manufacturing and processing, and material optimization/stabilization strategies, and it provides guidance on alternative materials.
Abstract:
Piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest as adaptive materials for large aperture space-based telescopes. In this study, two piezoelectric polymers, PVDF and P(VDF-TrFE), were exposed to conditions simulating the thermal, radiative and atomic oxygen conditions of low Earth orbit. The degradation pathways were governed by a combination of chemical and physical degradation processes, with the molecular changes primarily induced via radiative damage and the physical damage arising from temperature and atomic oxygen exposure, as evident from depoling, loss of orientation and surface erosion. The piezoelectric responsiveness of each polymer was strongly dependent on exposure temperature. Radiation and atomic oxygen exposure caused physical and chemical degradation, which would ultimately cause terminal damage of thin films, but did not adversely affect the piezoelectric properties.
Abstract:
Multipotent mesenchymal stem cells (MSCs), first identified in the bone marrow, have subsequently been found in many other tissues, including fat, cartilage, muscle, and bone. Adipose tissue has been identified as an alternative to bone marrow as a source for the isolation of MSCs, as it is neither limited in volume nor as invasive to harvest. This study compares the multipotentiality of bone marrow-derived mesenchymal stem cells (BMSCs) with that of adipose-derived mesenchymal stem cells (AMSCs) from 12 age- and sex-matched donors. Phenotypically, the cells are very similar, with only three surface markers, CD106, CD146, and HLA-ABC, differentially expressed in the BMSCs. Although colony-forming unit-fibroblast numbers were higher in BMSCs than in AMSCs, the expression of multiple stem cell-related genes, such as fibroblast growth factor 2 (FGF2), the Wnt pathway effectors FRAT1 and frizzled 1, and other self-renewal markers, was greater in AMSCs. Furthermore, AMSCs displayed enhanced osteogenic and adipogenic potential, whereas BMSCs formed chondrocytes more readily than AMSCs. However, when the effects of proliferation were removed from the experiment, AMSCs no longer outperformed BMSCs in their ability to undergo osteogenic and adipogenic differentiation. Inhibition of the FGF2/fibroblast growth factor receptor 1 signaling pathway demonstrated that FGF2 is required for the proliferation of both AMSCs and BMSCs, yet blocking FGF2 signaling had no direct effect on osteogenic differentiation. Disclosure of potential conflicts of interest is found at the end of this article.
Abstract:
High renewal and maintenance of multipotency of human adult stem cells (hSCs) are prerequisites for experimental analysis as well as for potential clinical use. The most widely used strategy for hSC culture and proliferation is the use of serum. However, serum is poorly defined and shows a considerable degree of inter-batch variation, which makes large-scale expansion of mesenchymal stem cells (MSCs) under homogeneous culture conditions difficult. Moreover, it is often observed that cells grown in serum-containing media spontaneously differentiate into unknown and/or undesired phenotypes. Another way of maintaining hSC development is the use of cytokines and/or tissue-specific growth factors; this is a very expensive approach and can lead to early unwanted differentiation. In order to circumvent these issues, we investigated the role of sphingosine-1-phosphate (S1P) in the growth and maintenance of multipotency of human bone marrow- and adipose tissue-derived MSCs. We show that S1P induces growth and, in combination with reduced serum or with the growth factors FGF and platelet-derived growth factor-AB, has an enhancing effect on growth. We also show that MSCs cultured in S1P-supplemented media are able to maintain their differentiation potential for at least as long as cells grown in the usual serum-containing media, as shown by their ability to undergo osteogenic as well as adipogenic differentiation. This is of interest, since S1P is a relatively inexpensive natural product that can be obtained in homogeneous, high-purity batches: this will minimize costs and potentially reduce the unwanted side effects observed with serum. Taken together, S1P is able to induce proliferation while maintaining the multipotency of different human stem cells, suggesting a potential role for S1P in developing serum-free or serum-reduced defined media for adult stem cell cultures.
Abstract:
Aim. The paper is a report of a study to demonstrate how the use of schematics can provide procedural clarity and promote rigour in the conduct of case study research. Background. Case study research is a methodologically flexible approach to research design that focuses on a particular case – whether an individual, a collective or a phenomenon of interest. It is known as the 'study of the particular' for its thorough investigation of particular, real-life situations and is gaining increased attention in nursing and social research. However, the methodological flexibility it offers can leave the novice researcher uncertain of suitable procedural steps required to ensure methodological rigour. Method. This article provides a real example of a case study research design that utilizes schematic representation drawn from a doctoral study of the integration of health promotion principles and practices into a palliative care organization. Discussion. The issues discussed are: (1) the definition and application of case study research design; (2) the application of schematics in research; (3) the procedural steps and their contribution to the maintenance of rigour; and (4) the benefits and risks of schematics in case study research. Conclusion. The inclusion of visual representations of design with accompanying explanatory text is recommended in reporting case study research methods.
Abstract:
Standardization is critical to scientists and regulators to ensure the quality and interoperability of research processes, as well as the safety and efficacy of the attendant research products. This is perhaps most evident in the case of “omics science,” which is enabled by a host of diverse high-throughput technologies such as genomics, proteomics, and metabolomics. But standards are of interest to (and shaped by) others far beyond the immediate realm of individual scientists, laboratories, scientific consortia, or governments that develop, apply, and regulate them. Indeed, scientific standards have consequences for the social, ethical, and legal environment in which innovative technologies are regulated, and thereby command the attention of policy makers and citizens. This article argues that standardization of omics science is both technical and social. A critical synthesis of the social science literature indicates that: (1) standardization requires a degree of flexibility to be practical at the level of scientific practice in disparate sites; (2) the manner in which standards are created, and by whom, will impact their perceived legitimacy and therefore their potential to be used; and (3) the process of standardization itself is important to establishing the legitimacy of an area of scientific research.
Abstract:
Government policy can be a strong enabler of the use of e-portfolios to support learning and employability. E-portfolio policy and practice seek to draw together the different elements of integrated education and learning, graduate attributes, employability skills, professional competencies and lifelong learning, ultimately to support an engaged and productive workforce. Drawing on and updating the research findings from a nationwide research study conducted as part of the Australian ePortfolio Project, the present chapter discusses two important areas of the e-portfolio environment: government policy and academic policy. The focus is on those jurisdictions where government and academic policy issues have had a significant impact on e-portfolio practice, such as the European Union, the Netherlands, the Scandinavian countries and the United Kingdom, as well as Australia and New Zealand. These jurisdictions are of interest because government policy discussions are currently focusing on the need for closer integration between the different education and employment sectors. Finally, issues to be considered and strategies for driving policy decision making are presented.
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and defining tumour volumes based on the PET image data for radiotherapy of lung cancer patients, and then to test these protocols with respect to accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom-based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and for setting window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol. 
Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient’s planning CT scans. Conclusions: The developed clinical protocols allow a patient’s whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols which factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume, including 4D target volumes, using PET image data from a stand-alone PET scanner.
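As a concrete illustration of the adaptive threshold contouring idea (a sketch only, not the clinical protocol developed in this work), the fragment below segments a target volume by keeping voxels inside a volume of interest whose uptake exceeds a fixed percentage of the VOI's maximum uptake. The 40% fraction, array shapes and synthetic data are assumptions for the example; as noted above, the appropriate fraction in practice depends on target-to-background ratio and respiratory motion.

```python
# Illustrative sketch only: percentage-of-maximum threshold contouring of a
# PET uptake volume within a clinician-defined VOI. Values are assumptions.
import numpy as np

def threshold_contour(pet: np.ndarray, voi: np.ndarray, fraction: float = 0.40) -> np.ndarray:
    """Return a boolean mask of voxels inside `voi` whose uptake exceeds
    `fraction` of the maximum uptake found within the VOI."""
    if pet.shape != voi.shape:
        raise ValueError("PET volume and VOI mask must have the same shape")
    suv_max = pet[voi].max()              # maximum uptake within the volume of interest
    return voi & (pet >= fraction * suv_max)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pet = rng.gamma(2.0, 0.5, size=(64, 64, 32))   # synthetic uptake volume
    pet[28:36, 28:36, 12:20] += 8.0                # synthetic "hot" lesion
    voi = np.zeros_like(pet, dtype=bool)
    voi[20:44, 20:44, 8:24] = True                 # assumed clinician-drawn VOI
    gtv = threshold_contour(pet, voi)
    print(f"GTV voxels: {gtv.sum()} of {voi.sum()} in the VOI")
```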