959 results for Engineers.
Abstract:
Restoring old buildings to conform to current building policies and standards is a great challenge for engineers and architects. The restoration of the Brisbane City Hall, a heritage building listed by the State of Queensland in Australia, employed an innovative upgrade method called ‘concrete overlay’, following the guidelines of both the International Council on Monuments and Sites and the Burra Charter of Australia. Concrete overlay is a new method of structural strengthening in which new reinforcement is drilled into, and new concrete placed on top of, the existing structure, akin to bone grafting in a human. The method is commonly used on newer bridges that have suffered load stresses; however, it had never been used on a heritage building, built under different conditions and to different standards. The compatibility of the method is currently being monitored. Many modern heritage buildings are rapidly deteriorating and require immediate intervention if they are to be saved. As most of these buildings are at an advanced stage of deterioration, significant efforts are being made, and several innovations applied, to upgrade them to conform to current building requirements. To date, the knowledge and literature regarding ‘concrete cancer’ in relation to rehabilitating reinforced concrete heritage structures are significantly lacking. It is hoped that the concrete overlay method and the case study of the Brisbane City Hall restoration will contribute to the development of restoration techniques and policies for modern heritage buildings.
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasingly large effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining it using only 2 points results in a maximum change of 11.2 cm. Redefining the materials comprising the phantom as water, plastic or tissue results in millimetre-scale changes to the resulting radiological thickness; when the entire phantom is defined as lead, the calculated radiological thickness changes by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into an MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
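A minimal sketch of the voxel conversion described above, assuming illustrative ramp points and material HU ranges (the real calibration is scanner-specific and comes from the QA measurements):

```python
import numpy as np

# Hypothetical CT-density ramp: (HU, mass density in g/cm^3) control points.
# The real ramp is scanner-specific and must come from the QA measurements.
ramp_hu  = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0])
ramp_rho = np.array([0.001, 0.93, 1.00, 1.07, 1.55, 2.50])

# Hypothetical material table: (upper HU bound, intrinsic electron density
# in electrons/gram); material assignment is by HU range, as in CTCREATE.
materials = [(-950.0, 3.01e23),   # air
             (100.0,  3.34e23),   # soft tissue (~water)
             (3000.0, 3.19e23)]   # bone

def voxel_electron_density(hu):
    """Electron density (electrons/cm^3) = mass density * electrons/gram."""
    rho = np.interp(hu, ramp_hu, ramp_rho)   # linear interpolation on the ramp
    for upper_hu, e_per_gram in materials:   # material assignment by HU range
        if hu <= upper_hu:
            return rho * e_per_gram
    return rho * materials[-1][1]            # clamp above the last range

print(voxel_electron_density(40.0))          # a soft-tissue voxel
```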
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which attempt to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID, and to assess the validity of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system, simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID, and used to evaluate the field-size dependence of SPR in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, and changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. In this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position.
Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using a (notoriously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements are much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to the simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the scattered radiation present in the EPID images used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
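A minimal sketch of the zero-field-size extrapolation used in this style of SPR estimation (in the spirit of the Swindell and Evans method cited above; the field sizes and transmission values are illustrative assumptions, not data from this study):

```python
import numpy as np

# Hypothetical transmissions measured through one phantom thickness
# at several square field sizes (side length in cm at the isocentre).
field_side   = np.array([5.0, 10.0, 15.0, 20.0])
transmission = np.array([0.460, 0.478, 0.497, 0.515])   # illustrative values

# Assume scatter grows linearly with irradiated area, so extrapolating
# the transmission to zero area leaves the primary-only transmission.
area = field_side ** 2
slope, t_primary = np.polyfit(area, transmission, 1)

spr = transmission / t_primary - 1.0    # scatter-to-primary ratio per field
for side, ratio in zip(field_side, spr):
    print(f"{side:4.1f} cm field: SPR = {ratio:.3f}")
```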
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any differences between the TPS and MC patient dose distributions in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widely adopted DICOM standard, and the resulting files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS and MC dose distributions. These implementations are independent of spatial resolution and can interpolate between dose grids for comparison. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
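A minimal sketch of a gamma evaluation of the kind employed in the dose comparison step (a brute-force 1D version with 3%/3 mm criteria; the implementations described above are resolution-independent and interpolating, which this sketch is not):

```python
import numpy as np

def gamma_1d(dose_ref, x_ref, dose_eval, x_eval, dd=0.03, dta=0.3):
    """Brute-force 1D gamma index: dd is the dose criterion as a fraction
    of the global maximum, dta the distance-to-agreement criterion (cm)."""
    d_crit = dd * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (d_r, x_r) in enumerate(zip(dose_ref, x_ref)):
        big_gamma = np.sqrt(((x_eval - x_r) / dta) ** 2 +
                            ((dose_eval - d_r) / d_crit) ** 2)
        gamma[i] = big_gamma.min()   # minimise over all evaluated points
    return gamma

# Illustrative Gaussian profiles on a 1 mm grid (positions in cm).
x = np.arange(0.0, 10.0, 0.1)
ref = np.exp(-((x - 5.0) / 2.0) ** 2)
ev  = np.exp(-((x - 5.1) / 2.0) ** 2)   # evaluated profile, shifted by 1 mm
g = gamma_1d(ref, x, ev, x)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```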
Abstract:
Introduction: The motivation for developing megavoltage (and kilovoltage) cone beam CT (MV CBCT) capabilities in the radiotherapy treatment room was primarily based on the need to improve patient set-up accuracy. There has recently been interest in using the cone beam CT data for treatment planning. Accurate treatment planning, however, requires knowledge of the electron density of the tissues receiving radiation in order to calculate dose distributions. This is obtained from CT, utilising a conversion between CT number and the electron density of various tissues. MV CBCT has particular advantages over kilovoltage CT for treatment planning in the presence of high atomic number materials, and requires the conversion of pixel values from the image sets to electron density. Therefore, a study was undertaken to characterise the pixel-value-to-electron-density relationship for the Siemens MV CBCT system, MVision, and to determine the effect, if any, of varying the number of monitor units used for acquisition. If a significant dependence on the number of monitor units were seen, separate pixel-value-to-ED conversions might be required for each of the clinical settings. The calibration of the MV CT images for electron density offers the possibility of a daily recalculation of the dose distribution and the introduction of new adaptive radiotherapy treatment strategies. Methods: A Gammex Electron Density CT Phantom was imaged with the MV CBCT system. The pixel value for each of the sixteen inserts, whose electron densities relative to the background solid water ranged from 0.292 to 1.707, was determined by taking the mean value from within a region of interest centred on the insert, over 5 slices within the centre of the phantom. These results were averaged and plotted against the relative electron densities of each insert, and a linear least-squares fit was performed. This procedure was carried out for images acquired with 5, 8, 15 and 60 monitor units. Results: A linear relationship between MV CBCT pixel value and ED was demonstrated for all monitor unit settings and over the full range of electron densities. The number of monitor units utilised was found to have no significant impact on this relationship. Discussion: It was found that the number of MU utilised does not significantly alter the pixel value obtained for materials of different ED. However, to ensure the most accurate and reproducible pixel-value-to-ED calibration, one MU setting should be chosen and used routinely, and to ensure accuracy for the clinical situation this MU setting should correspond to that used clinically. If more than one MU setting is used clinically, then an average of the CT values acquired with different numbers of MU could be utilised without loss of accuracy. Conclusions: No significant difference has been shown in the pixel-value-to-ED conversion for the Siemens MV cone beam CT unit with a change in monitor units. Thus a single conversion curve could be utilised for MV CBCT treatment planning. To fully utilise MV CBCT imaging for radiotherapy treatment planning, further work will be undertaken to ensure all corrections have been made and dose calculations verified. These dose calculations may be either for treatment planning purposes or for reconstructing the delivered dose distribution from transit dosimetry measurements made using electronic portal imaging devices.
This would potentially allow the cumulative dose distribution to be determined over the patient's multi-fraction treatment, and adaptive treatment strategies to be developed to optimise the tumour response.
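A minimal sketch of the calibration step described in the Methods above (the insert pixel values below are illustrative placeholders, not the measured Gammex data):

```python
import numpy as np

# Hypothetical mean pixel values for inserts of known relative electron
# density (in the study these are ROI means over 5 central slices).
rel_ed       = np.array([0.292, 0.680, 0.990, 1.280, 1.707])
pixel_values = np.array([310.0, 620.0, 865.0, 1095.0, 1430.0])

slope, intercept = np.polyfit(pixel_values, rel_ed, 1)  # linear least squares

def pixel_to_ed(pixel):
    """Convert an MV CBCT pixel value to relative electron density."""
    return slope * pixel + intercept

print(f"relative ED at pixel value 900: {pixel_to_ed(900.0):.3f}")
```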
Abstract:
Nonthermal plasma (NTP) treatment of exhaust gas, in which plasma is introduced into the exhaust stream, is a promising technology for reducing both nitrogen oxides (NOX) and particulate matter (PM). This paper considers the effect of NTP on PM mass reduction, PM size distribution, and PM removal efficiency. The experiments are performed on real exhaust gases from a diesel engine. The NTP is generated by applying high-voltage pulses, using a pulsed power supply, across a dielectric barrier discharge (DBD) reactor. The effects of applied high-voltage pulses up to 19.44 kVpp, with a repetition rate of 10 kHz, are investigated. It is shown that PM removal efficiency and PM size distribution need to be considered together, as it is possible to achieve high PM removal efficiency alongside an undesirable increase in the number of small particles. Taking both factors into account, a voltage level of 17 kVpp is determined to be the optimum for the given configuration. Moreover, particle deposition on the surface of the DBD reactor is found to be a significant phenomenon, which should be considered in all plasma PM removal tests.
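A minimal sketch of the trade-off the paper highlights, computed over illustrative (not measured) size-binned particle counts:

```python
import numpy as np

# Hypothetical particle number concentrations (particles/cm^3) per size
# bin, upstream and downstream of the DBD reactor.
diam_nm    = np.array([20.0, 50.0, 100.0, 200.0, 400.0])
upstream   = np.array([1.0e6, 3.0e6, 5.0e6, 2.0e6, 4.0e5])
downstream = np.array([1.8e6, 2.4e6, 1.5e6, 4.0e5, 6.0e4])

removal = 1.0 - downstream / upstream     # per-bin removal efficiency
for d, r in zip(diam_nm, removal):
    print(f"{d:5.0f} nm: {r:+.1%}")       # negative means more small particles

# Mass-based removal can look good even while small-particle counts rise,
# since particle mass scales roughly with diameter cubed.
mass_up   = np.sum(upstream * diam_nm ** 3)
mass_down = np.sum(downstream * diam_nm ** 3)
print(f"overall mass removal: {1.0 - mass_down / mass_up:.1%}")
```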
Abstract:
An adequate amount of graphene oxide (GO) was first prepared by oxidation of graphite, and GO/epoxy nanocomposites were subsequently prepared by a typical solution-mixing technique. X-ray diffraction (XRD) patterns and X-ray photoelectron (XPS), Raman and Fourier transform infrared (FTIR) spectroscopy confirmed the successful preparation of GO. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images of the graphite oxide showed that it consists of a large number of graphene oxide platelets with a curled morphology and a thin, wrinkled, sheet-like structure. An AFM image of the exfoliated GO indicated that the average thickness of the GO sheets is ~1.0 nm, which closely matches that of a GO monolayer. The mechanical properties of the as-prepared GO/epoxy nanocomposites were investigated. Significant improvements in both Young's modulus and tensile strength were observed for the nanocomposites at very low GO loadings. The Young's modulus of the nanocomposite containing 0.5 wt% GO was 1.72 GPa, 35% higher than that of the pure epoxy resin (1.28 GPa). The effective reinforcement of the GO-based epoxy nanocomposites can be attributed to the good dispersion of the GO sheets and the strong interfacial interactions between them and the epoxy matrix.
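A quick check of the reported stiffness improvement, using the moduli quoted above:

```python
e_nano, e_epoxy = 1.72, 1.28   # Young's moduli (GPa), from the abstract
gain = (e_nano - e_epoxy) / e_epoxy
print(f"modulus improvement: {gain:.1%}")   # 34.4%, i.e. ~35% as reported
```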
Abstract:
Graphene-polymer nanocomposites have attracted considerable attention due to their unique properties, such as high thermal conductivity (~3000 W/(m·K)), mechanical stiffness (~1 TPa) and electronic transport properties. By comparison, the thermal performance of graphene-polymer composites has not been well investigated. The major technical challenge is to understand the interfacial thermal transport between the graphene nanofiller and the polymer matrix at small length scales. To this end, we conducted molecular dynamics simulations to investigate thermal transport in a graphene-polyethylene nanocomposite. The influence of functionalization with hydrocarbon chains on the interfacial thermal conductivity was studied, taking into account the effects of model size and of the thermal conductivity of graphene. The results are expected to contribute to the development of new graphene-polymer nanocomposites with tailored thermal properties.
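A minimal sketch of how an interfacial (Kapitza) conductance is typically extracted from steady-state non-equilibrium MD data of this kind (the flux and temperatures are illustrative assumptions; the abstract does not specify the simulation details):

```python
# Interfacial (Kapitza) conductance from a steady-state NEMD simulation:
# G = q / dT, where q is the heat flux driven across the interface and
# dT is the temperature jump between the filler and matrix sides.
q_flux     = 2.0e9    # imposed heat flux (W/m^2), illustrative
t_graphene = 310.0    # mean temperature on the graphene side (K), illustrative
t_matrix   = 290.0    # mean temperature on the polyethylene side (K), illustrative

g_interface = q_flux / (t_graphene - t_matrix)
print(f"interfacial conductance: {g_interface:.2e} W/(m^2 K)")  # 1.00e+08 here
```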
Abstract:
An effective IT infrastructure can support a business vision and strategy; a poor, decentralized one can break a company. More and more companies are turning to off-the-shelf ERP (enterprise resource planning) solutions for IT planning and legacy systems management. The authors have developed a framework to help managers successfully plan and implement an ERP project.
Abstract:
Voltage rise along low-voltage (LV) networks often limits their capacity to accommodate more renewable energy (RE) sources. This paper proposes a robust and effective approach to coordinating customers' resources to control voltage rise in LV networks, with photovoltaics (PV) considered as the RE source. The proposed coordination algorithm includes both localized and distributed control strategies: the localized strategy determines the PV inverter active and reactive power, while the distributed strategy coordinates customers' energy storage units (ESUs). To verify the effectiveness of the proposed approach, a typical residential LV network is simulated in the PSCAD/EMTDC platform.
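A minimal sketch of a localized inverter response of the general kind described (the droop thresholds, rating and curtailment rule are illustrative assumptions, not the paper's algorithm):

```python
def pv_inverter_setpoints(v_pu, p_avail_kw, s_rating_kva=5.0,
                          v_dead=1.02, v_max=1.08):
    """Localized volt-var/volt-watt droop: absorb reactive power as the
    local voltage rises past a deadband, then curtail active power as it
    approaches the upper limit. Illustrative thresholds and rating."""
    if v_pu <= v_dead:                 # voltage acceptable: full export, no vars
        return p_avail_kw, 0.0
    frac = min((v_pu - v_dead) / (v_max - v_dead), 1.0)
    q_headroom = (s_rating_kva ** 2 - min(p_avail_kw, s_rating_kva) ** 2) ** 0.5
    q_kvar = -frac * q_headroom        # negative = absorbing reactive power
    p_kw = p_avail_kw * (1.0 - 0.5 * frac)   # curtail up to 50% near v_max
    return p_kw, q_kvar

print(pv_inverter_setpoints(v_pu=1.05, p_avail_kw=4.0))   # (3.0, -1.5)
```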
Abstract:
The purpose of this study is to identify the impact of individual differences on service channel selection for e-government services. In a comparative survey of citizens in Germany and Australia (n=1205), we investigate the impact of age, gender, and mobility issues on the selection of personal or mobile communication as channels for service consumption. The results suggest that Australians are more likely to want to use new technology-oriented channels, such as the internet or mobile applications, while Germans tend to use classical channels, such as the telephone or in-person service. Moreover, differences with respect to age, gender, and mobility exist. Implications for practice and issues for future research are discussed.
Abstract:
The internationalization of construction companies has become of significant interest as the global construction market continues to be integrated into a more competitive and turbulent business environment. However, due to the complicated and multifaceted nature of international business and performance, there is as yet no consensus on how to evaluate the performance of international construction firms (ICFs). The purpose of this paper, therefore, is to develop a practical framework for measuring the performance of ICFs. Based on the balanced scorecard (BSC), a framework with detailed measures is developed, investigated, and tested using a three-step research design. In the first step, 27 measures under six dimensions (financial, market, customer, internal business processes, stakeholders, and learning and growth) are determined through a literature review, interviews with academics, and seminar discussions. Subsequently, a questionnaire survey is conducted to determine the weights of these 27 performance measures. The questionnaire survey also supports the importance of measuring the intangible aspects of international construction performance from the practitioner's viewpoint. Additionally, a case study is described to test the framework's robustness and usefulness: the performance of a Chinese ICF is benchmarked against that of nine counterparts worldwide. It is found that the framework provides an effective basis for benchmarking ICFs, allowing them to monitor their performance and support the development of strategies for improved competitiveness in the international arena. This paper is the first attempt to present a balanced and practically tested framework for evaluating the performance of ICFs. It contributes to the practice of performance measurement, and to internationalization more generally, in the construction industry.
Abstract:
Plug-in electric vehicles (PEVs) are increasingly popular amid the global trend toward energy saving and environmental protection. However, the uncoordinated charging of numerous PEVs can have significant negative impacts on the secure and economic operation of the power system concerned. In this context, a hierarchical decomposition approach is presented to coordinate the charging/discharging behaviors of PEVs. The major objective of the upper-level model is to minimize the total cost of system operation by jointly dispatching generators and electric vehicle aggregators (EVAs). The lower-level model, on the other hand, aims at strictly following the dispatching instructions from the upper-level decision-maker by designing appropriate charging/discharging strategies for each individual PEV in a specified dispatching period. Two highly efficient solvers, AMPL/IPOPT and AMPL/CPLEX, are used to solve the developed hierarchical decomposition model. Finally, a modified IEEE 118-bus test system including 6 EVAs is employed to demonstrate the performance of the developed model and method.
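A minimal sketch of a lower-level problem of the kind described: one EVA schedules its PEVs so the aggregate charging power tracks the upper-level dispatch instruction while meeting each vehicle's energy need (toy data and a simple linear program; the paper's actual formulation is not given in the abstract):

```python
import numpy as np
from scipy.optimize import linprog

# Toy setting: one EVA with 3 PEVs over 4 one-hour slots must track the
# upper-level dispatch instruction while delivering each PEV's energy.
n_ev, n_t = 3, 4
dispatch = np.array([5.0, 8.0, 7.0, 4.0])   # kW per slot, from the upper level
energy   = np.array([8.0, 10.0, 6.0])       # kWh required per PEV
p_max    = 7.0                              # per-PEV charger limit (kW)

# Variables: x[i,t] charging power (flattened row-wise), d[t] deviation.
agg = np.tile(np.eye(n_t), (1, n_ev))       # maps x to per-slot aggregate power
A_ub = np.block([[ agg, -np.eye(n_t)],      #  sum_i x[i,t] - d_t <= dispatch_t
                 [-agg, -np.eye(n_t)]])     # -sum_i x[i,t] - d_t <= -dispatch_t
b_ub = np.concatenate([dispatch, -dispatch])
A_eq = np.hstack([np.kron(np.eye(n_ev), np.ones((1, n_t))),   # per-PEV energy
                  np.zeros((n_ev, n_t))])
c = np.concatenate([np.zeros(n_ev * n_t), np.ones(n_t)])  # min total deviation
bounds = [(0.0, p_max)] * (n_ev * n_t) + [(0.0, None)] * n_t

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=energy, bounds=bounds)
print("total tracking deviation (kW):", round(res.fun, 3))
print(res.x[:n_ev * n_t].reshape(n_ev, n_t))   # per-PEV charging schedule
```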
Abstract:
Faculty and reference librarians at the Queensland University of Technology have collaborated in an attempt to improve the quality of literature reviews in civil engineering final-year research projects. This article describes the instructional program devised and the level of faculty support for the librarians' contribution, and presents survey results revealing how students could most benefit from bibliographic instruction (BI) and how the classroom collaboration affected student project work. The authors offer some observations about the possible impact of BI in general, and on engineers in particular, which may provide a focus for future research.
Abstract:
Operational modal analysis (OMA) is prevalent in the modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires that the excitation be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research uses a modified kurtosis index to evaluate the statistical distribution of raw measurement data, and proposes a windowing strategy employing this index to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and of a real cable-stayed bridge have been considered. The analysis employed frequency domain decomposition (FDD) as the target OMA approach for modal identification, and the modal identification results obtained using data segments with different degrees of randomness have been compared. The discrepancies in the FDD spectra indicate that, in order to fulfil the assumptions of an OMA method, special care should be taken in processing long vibration measurement records. The proposed data selection strategy is easy to apply and has been verified to be effective in modal analysis.
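A minimal sketch of a kurtosis-based window selection of the general kind proposed (using plain excess kurtosis as the index and an arbitrary tolerance; the paper's modified kurtosis is not specified in the abstract):

```python
import numpy as np
from scipy.stats import kurtosis

def select_windows(signal, win_len, tol=0.5):
    """Split a long ambient-vibration record into windows and keep those
    whose excess kurtosis is near zero (closest to Gaussian), as a proxy
    for the white-noise excitation assumed by OMA. Illustrative only."""
    n_win = len(signal) // win_len
    windows = signal[:n_win * win_len].reshape(n_win, win_len)
    k = kurtosis(windows, axis=1, fisher=True)   # excess kurtosis per window
    return windows[np.abs(k) < tol], k

# Demo: a Gaussian record contaminated by an impulsive (non-Gaussian) burst.
rng = np.random.default_rng(0)
x = rng.standard_normal(20_000)
x[5_000:5_200] += 8 * rng.standard_normal(200)   # spike burst
good, k = select_windows(x, win_len=2_000)
print(f"kept {len(good)} of {len(k)} windows; kurtosis range "
      f"[{k.min():.2f}, {k.max():.2f}]")
```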