364 results for Quantum-mechanical calculation


Relevance:

20.00%

Publisher:

Abstract:

The texture of agricultural crops changes during the harvesting, post-harvest handling and processing stages due to different loading processes. There are different sources of loading that deform agricultural crop tissues, including impact, compression and tension. Scanning electron microscopy (SEM) is a common way of analysing cellular changes in materials before and after these loading operations. This paper examines the structural changes of pumpkin peel and flesh tissues under mechanical loading. Compression and indentation tests were performed on peel and flesh samples. The samples were then fixed and dehydrated in order to capture the cellular changes under SEM. The results were compared with images of normal peel and flesh tissues. The findings suggest that normal flesh tissue had larger cells, while the cells of the peel were smaller. Structural damage was clearly observed in the tissue structure after compression and indentation. However, the damage resulting from the flat-end indenter was much more severe than that from the spherical-end indenter or the compression test. A single integrated layer of deformed tissue was observed in compressed tissue, whereas the indentation tests produced a deformed area under the indenter and left the rest of the tissue unharmed. There was an obvious broken layer of cells on the walls of the hole after flat-end indentation, whereas the spherical indenter created a squashed layer all around the hole. Furthermore, the influence of loading was lower on peel samples than on flesh samples. The experiments have shown that the damage to tissue under a constant loading rate is highly dependent on the shape of the loading tool. This fact, together with the structural changes observed after loading, underlines the importance of designing post-harvest equipment to reduce damage to agricultural crop tissues.

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Understanding the mechanical properties of tendon is an important step towards improving athletic performance, predicting injury and treating tendinopathies. The speed of sound in a medium is governed by the bulk modulus and density for fluids and isotropic materials. However, for tendon, which is a structural composite of fluid and collagen, there is some anisotropy, requiring an adjustment for Poisson's ratio. In this paper, these relationships are explored and modelled using data collected, in vivo, on the human Achilles tendon. Estimates of elastic modulus and hysteresis based on the speed-of-sound data are then compared against published values from in vitro mechanical tests.

Methods: Clinical ultrasound imaging, inverse dynamics and acoustic transmission techniques were used to determine the dimensions, loading conditions and longitudinal speed of sound for the Achilles tendon during a series of isometric plantar flexion exercises against body weight. Upper and lower bounds for speed of sound versus tensile stress in the tendon were then modelled and estimates derived for elastic modulus and hysteresis.

Results: Axial speed of sound varied between 1850 and 2090 m·s⁻¹, with a non-linear, asymptotic dependency on the level of tensile stress in the tendon (5–35 MPa). Estimates derived for the elastic modulus ranged from 1 to 2 GPa. Hysteresis, derived from models of the stress-strain relationship, ranged from 3% to 11%. These values agree closely with those previously reported from direct measurements obtained via in vitro mechanical tensile tests on major weight-bearing tendons.

Discussion: There is sufficiently good agreement between these indirect (speed-of-sound-derived) and direct (mechanical-tensile-test-derived) measures of tendon mechanical properties to validate the use of this non-invasive acoustic transmission technique. This non-invasive method is suitable for monitoring changes in tendon properties as predictors of athletic performance, injury or therapeutic progression.
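
For orientation, the conversion from measured speed of sound to elastic modulus can be sketched as follows. This is a minimal illustration assuming the standard isotropic relation between longitudinal wave speed and the constrained modulus, with assumed values for tendon density and Poisson's ratio; the paper's upper- and lower-bound models are more detailed.

```python
# Sketch: estimating tendon elastic modulus from axial speed of sound.
# Assumes the standard isotropic relation E = rho*c^2*(1+nu)(1-2nu)/(1-nu);
# density and Poisson's ratio are illustrative assumed values.

def elastic_modulus(c, rho=1100.0, nu=0.4):
    """Young's modulus (Pa) from longitudinal speed of sound c (m/s).

    c   : axial speed of sound in the tendon, m/s
    rho : tissue density, kg/m^3 (assumed)
    nu  : Poisson's ratio adjusting the constrained modulus M = rho*c^2
          for the anisotropy mentioned in the abstract (assumed)
    """
    M = rho * c ** 2                                  # constrained modulus
    return M * (1 + nu) * (1 - 2 * nu) / (1 - nu)     # isotropic correction

for c in (1850.0, 2090.0):                            # measured range, m/s
    print(f"c = {c:.0f} m/s  ->  E ~ {elastic_modulus(c) / 1e9:.1f} GPa")
```

With these assumed inputs the estimates fall near the 1–2 GPa range reported above, which is the consistency check the abstract describes.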

Relevance:

20.00%

Publisher:

Abstract:

We show that SiGe islands are transformed into nanoholes and rings by annealing treatments alone, without Si capping. Rings are produced by rapid flash heating at temperatures above the melting point of Ge, whereas nanoholes are produced by annealing for several minutes. The rings are markedly Si-rich with respect to the pristine islands, suggesting that the evolution path from islands to rings is driven by the selective dissolution of Ge at high temperature.

Relevance:

20.00%

Publisher:

Abstract:

Solution-phase photocatalytic reduction of graphene oxide to reduced graphene oxide (RGO) by titanium dioxide (TiO2) nanoparticles produces an RGO-TiO2 composite with charge transport properties superior to those of pure TiO2 nanoparticle films. These composite films exhibit electron lifetimes up to four times longer than those of intrinsic TiO2 films, due to RGO acting as a highly conducting intraparticle charge transport network within the film. The intrinsic UV-active charge generation (photocurrent) of pure TiO2 was enhanced by a factor of 10 by incorporating RGO; we attribute this both to the highly conductive nature of the RGO and to the improved charge collection facilitated by the intimate contact between RGO and the TiO2, uniquely afforded by the solution-phase photocatalytic reduction method. Integrating RGO into nanoparticle films using this technique should improve the performance of photovoltaic devices that utilize nanoparticle films, such as dye-sensitized and quantum-dot-sensitized solar cells.

Relevance:

20.00%

Publisher:

Abstract:

Recent experiments [F. E. Pinkerton, M. S. Meyer, G. P. Meisner, M. P. Balogh, and J. J. Vajo, J. Phys. Chem. C 111, 12881 (2007); J. J. Vajo and G. L. Olson, Scripta Mater. 56, 829 (2007)] demonstrated that the recycling of hydrogen in the coupled LiBH4/MgH2 system is fully reversible. The rehydrogenation of MgB2 is an important step towards this reversibility. Using ab initio density functional theory calculations, we find that the activation barriers for the dissociation of H2 are 0.49 and 0.58 eV on the B- and Mg-terminated MgB2(0001) surfaces, respectively. This implies that the dissociation kinetics of H2 on a MgB2(0001) surface should be greatly improved compared to those on pure Mg. Additionally, the diffusion of a dissociated H atom on the Mg-terminated MgB2(0001) surface is almost barrierless. Our results shed light on the experimentally observed reversibility and improved kinetics of the coupled LiBH4/MgH2 system.
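
To put these barrier heights in perspective, a simple Arrhenius estimate shows how strongly a modest barrier reduction affects the dissociation rate. This is purely illustrative: a common prefactor is assumed so only ratios are meaningful, and the ~1.0 eV barrier used for pure Mg is a typical literature-scale value, not a result from this paper.

```python
# Sketch: relative Arrhenius rate factors exp(-Ea/kT) for the computed
# H2 dissociation barriers. A common prefactor is assumed, so only the
# ratios are meaningful; the pure-Mg barrier (~1.0 eV) is an assumed
# literature-scale comparison value, not from this paper.
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K
T = 600.0          # illustrative hydrogenation temperature, K

barriers = {"B-terminated MgB2(0001)": 0.49,
            "Mg-terminated MgB2(0001)": 0.58,
            "pure Mg surface (assumed ~1.0 eV)": 1.0}

ref = math.exp(-1.0 / (K_B * T))  # pure-Mg reference factor
for surface, ea in barriers.items():
    rate = math.exp(-ea / (K_B * T))
    print(f"{surface}: Ea = {ea:.2f} eV, rate x{rate / ref:.0f} vs pure Mg")
```

At 600 K the 0.49 eV barrier corresponds to a rate factor roughly four orders of magnitude larger than the assumed pure-Mg case, consistent with the "greatly improved kinetics" claimed above.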

Relevance:

20.00%

Publisher:

Abstract:

In this work, ab initio spin-polarised density functional theory (DFT) calculations are performed to study the interaction of a Ti atom with the NaAlH4(001) surface. We confirm that an interstitial Ti atom in the NaAlH4 subsurface is the most energetically favoured configuration, as recently reported (Chem. Commun. 2006, (17), 1822). On the NaAlH4(001) surface, the Ti atom is most stable when adsorbed between two sodium atoms with an AlH4 unit beneath. A Ti atom on top of an Al atom is also found to be an important structure at low temperatures. The diffusion of Ti from the Al-top site to the Na-bridging site has a low activation barrier of 0.20 eV and may be activated at the experimental temperatures (~323 K). The diffusion of a Ti atom into the energetically favoured subsurface interstitial site occurs via the Na-bridging surface site and is essentially barrierless.

Relevance:

20.00%

Publisher:

Abstract:

Density functional theory (DFT) is a powerful approach to electronic structure calculations in extended systems, but it currently suffers from the inadequate incorporation of long-range dispersion, or van der Waals (VdW), interactions. VdW-corrected DFT is tested for interactions involving molecular hydrogen, graphite, single-walled carbon nanotubes (SWCNTs), and SWCNT bundles. The energy correction, based on an empirical London dispersion term with a damping function at short range, yields a reasonable physisorption energy and equilibrium distance for H2 on a model graphite surface. The VdW-corrected DFT calculation for an (8,8) nanotube bundle accurately reproduces the experimental lattice constant. For H2 inside or outside an (8,8) SWCNT, we find binding energies respectively higher and lower than that on a graphite surface, correctly predicting the well-known curvature effect. We conclude that the VdW correction is a very effective complement to DFT calculations, allowing a reliable description of both short-range chemical bonding and long-range dispersive interactions. The method will find powerful applications in areas of SWCNT research where empirical potential functions either have not been developed or do not capture the necessary range of both dispersion and bonding interactions.
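
The abstract does not spell out the form of the correction; a common Grimme-style (DFT-D) implementation sums damped pairwise London C6/r^6 terms, as sketched below. The scaling factor, damping steepness and per-element parameters are illustrative values from that family of schemes and may differ from the paper's actual parametrisation.

```python
# Sketch of a Grimme-style (DFT-D) empirical dispersion correction:
# a damped pairwise London C6/r^6 sum added to the DFT total energy.
# Parameter values (s6, d, C6, vdW radii) are illustrative only.
import math

S6 = 0.75   # global scaling factor (functional-dependent, assumed)
D = 20.0    # damping steepness (assumed)

def damping(r, r0):
    """Fermi-type damping: suppresses the 1/r^6 term at short range,
    where DFT already describes the (chemical) interaction."""
    return 1.0 / (1.0 + math.exp(-D * (r / r0 - 1.0)))

def e_dispersion(atoms, c6, r_vdw):
    """Pairwise dispersion energy for atoms = [(element, (x, y, z)), ...],
    with coordinates in nm, c6 in J nm^6/mol and r_vdw in nm."""
    e = 0.0
    for i, (el_i, ri) in enumerate(atoms):
        for el_j, rj in atoms[i + 1:]:
            r = math.dist(ri, rj)                     # nm
            c6_ij = math.sqrt(c6[el_i] * c6[el_j])    # geometric mean
            r0 = r_vdw[el_i] + r_vdw[el_j]
            e -= S6 * c6_ij / r ** 6 * damping(r, r0)
    return e  # J/mol

# Illustrative use: an H2 molecule 0.30 nm above a single carbon atom.
C6 = {"C": 1.75, "H": 0.14}          # J nm^6/mol (Grimme-style values)
R_VDW = {"C": 0.1452, "H": 0.1001}   # nm
atoms = [("C", (0.0, 0.0, 0.0)), ("H", (0.0, 0.0, 0.30)),
         ("H", (0.0, 0.0, 0.374))]
print(f"E_disp ~ {e_dispersion(atoms, C6, R_VDW):.0f} J/mol")
```

Note how the damping function almost eliminates the contribution of the covalently bonded H-H pair while leaving the longer-range C-H terms intact; this is the division of labour between short-range DFT bonding and long-range dispersion described above.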

Relevance:

20.00%

Publisher:

Abstract:

Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air).

Methods: DOSXYZnrc phantoms are generated from CT data using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.

Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types.

Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used.

Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
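
A minimal sketch of the voxel conversion described in the Methods above: mass density is linearly interpolated from HU along the CT-density ramp, a material is assigned by HU range, and the voxel electron density is the product of mass density and the material's intrinsic electrons per gram. The ramp points, HU ranges and electrons-per-gram values below are illustrative placeholders, not the clinic's calibration data.

```python
# Sketch of the HU -> voxel electron density conversion described above.
# Ramp points, HU ranges and electrons/gram values are illustrative,
# not the scanner calibration used in the study.
import numpy as np

# 'CT-density ramp': (HU, mass density g/cm^3) points, linearly interpolated.
RAMP_HU = [-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0]
RAMP_RHO = [0.001, 0.93, 1.00, 1.07, 1.60, 2.80]

# Material assignment by HU range: (upper HU bound, name, electrons/gram).
MATERIALS = [(-950.0, "air", 3.01e23),
             (100.0, "soft tissue", 3.31e23),
             (3000.0, "bone", 3.19e23)]

def voxel_electron_density(hu):
    """Electrons/cm^3 for one voxel: interpolated mass density times the
    assigned material's intrinsic electron density (electrons/gram)."""
    rho = np.interp(hu, RAMP_HU, RAMP_RHO)       # g/cm^3
    for upper, name, e_per_g in MATERIALS:
        if hu <= upper:
            return name, rho * e_per_g           # electrons/cm^3
    name, e_per_g = MATERIALS[-1][1], MATERIALS[-1][2]
    return name, rho * e_per_g

for hu in (-1000, -50, 40, 800):
    name, ne = voxel_electron_density(hu)
    print(f"HU {hu:>5}: {name:<11} n_e ~ {ne:.2e} e/cm^3")
```

Simplifying the ramp amounts to deleting points from RAMP_HU/RAMP_RHO, which distorts the interpolated densities; reassigning materials only changes the electrons-per-gram factor, which is why the study finds the ramp definition to be the more sensitive parameter.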

Relevance:

20.00%

Publisher:

Abstract:

Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID and to assess the adequacy of the theoretical SPR calculations.

Methods: This study uses a full MC model of the entire linac-phantom-detector system, simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, and changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID.

Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements are much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to the simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air.

Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations.

Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
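
The linear model challenged above treats SPR as proportional to phantom thickness and field size, and recovers the primary transmission from the measured one. A minimal sketch of that arithmetic follows; the proportionality constant is an illustrative placeholder, not a fitted value from this study.

```python
# Sketch of the simple linear scatter model referred to above: SPR taken
# as proportional to phantom thickness and field area, with the primary
# (scatter-corrected) transmission recovered as T_meas / (1 + SPR).
# The constant K_SPR is illustrative, not a fitted value.

K_SPR = 2.0e-5  # SPR per (cm of phantom x cm^2 of field), assumed

def primary_transmission(t_measured, thickness_cm, field_cm):
    """Scatter-corrected transmission under the linear SPR model.

    t_measured   : measured transmission (detector signal ratio)
    thickness_cm : phantom thickness along the beam, cm
    field_cm     : side length of the square field at the phantom, cm
    """
    spr = K_SPR * thickness_cm * field_cm ** 2   # linear in irradiated volume
    return t_measured / (1.0 + spr)

# Illustrative: 20 cm phantom, 10x10 cm field, measured transmission 0.35.
print(f"T_primary ~ {primary_transmission(0.35, 20.0, 10.0):.3f}")
```

The study's point is precisely that this model has no term for the phantom-to-detector air gap, which is why shifting the phantom off-isocentre disrupts the agreement.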

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the 'gold standard' for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any differences between the TPS patient dose distribution and the MC dose distribution, in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims.

Methods: (1a) Plan importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and able to interpolate for comparisons.

Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes.

Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant.

Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
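
Of the comparison algorithms mentioned in the Methods, the gamma evaluation is the most involved. Below is a minimal one-dimensional sketch of the standard Low et al. formulation, with toy profiles rather than data from this study; the project's own implementations are interpolating and resolution-independent, as noted above.

```python
# Sketch of a gamma evaluation between two dose profiles, one of the
# comparison algorithms named above (standard Low et al. formulation).
# Brute-force 1D version for illustration; toy data, not study results.
import numpy as np

def gamma_index(dose_ref, dose_eval, coords, dose_tol=0.03, dist_tol_mm=3.0):
    """Gamma value at each reference point.

    dose_ref, dose_eval : 1D dose arrays sampled on the same axis
    coords              : sample positions, mm
    dose_tol            : dose criterion as a fraction of the global max
    dist_tol_mm         : distance-to-agreement criterion, mm
    """
    dd_norm = dose_tol * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(coords, dose_ref)):
        # Generalised distance to every evaluated point; gamma is its minimum.
        dist2 = ((coords - x_r) / dist_tol_mm) ** 2
        dose2 = ((dose_eval - d_r) / dd_norm) ** 2
        gamma[i] = np.sqrt((dist2 + dose2).min())
    return gamma

# Illustrative: a Gaussian profile shifted by 1 mm and scaled by 1%.
x = np.arange(0.0, 100.0, 1.0)                  # mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)         # toy reference dose
ev = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)   # toy evaluated dose
g = gamma_index(ref, ev, x)
print(f"gamma pass rate (gamma <= 1): {(g <= 1.0).mean():.1%}")
```

A point passes when its minimum generalised distance is at most 1, i.e. the evaluated distribution agrees within the combined dose and distance-to-agreement criteria (here 3%/3 mm).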

Relevance:

20.00%

Publisher:

Abstract:

This work is motivated by the need to efficiently machine the edges of ophthalmic polymer lenses for mounting in spectacle or instrument frames. The polymer materials used are required to have suitable optical characteristics, such as a high refractive index and Abbe number, combined with low density and high scratch and impact resistance. Edge surface finish is an important aesthetic consideration; its quality is governed by the material removal operation and the physical properties of the material being processed. The wear behaviour of polymer materials is not as straightforward as that of other materials, owing to their molecular and structural complexity and their time-dependent properties. Four commercial ophthalmic polymers have been studied in this work using nanoindentation techniques, which are evaluated as tools for probing surface mechanical properties in order to better understand the grinding response of polymer materials.
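
Nanoindentation data are commonly reduced with the Oliver-Pharr method to extract hardness and modulus from the load-depth curve; the sketch below shows that standard reduction, not necessarily this paper's exact analysis. The ideal Berkovich area function and the sample numbers are illustrative assumptions.

```python
# Sketch: the standard Oliver-Pharr reduction used in nanoindentation to
# extract hardness and reduced modulus from a load-depth curve. The ideal
# Berkovich area function and all sample numbers are illustrative; the
# paper does not specify its analysis parameters.
import math

def oliver_pharr(p_max, s_unload, h_max, eps=0.75):
    """Hardness (GPa) and reduced modulus (GPa) from indentation data.

    p_max    : peak load, mN
    s_unload : contact stiffness dP/dh at peak load, mN/nm
    h_max    : penetration depth at peak load, nm
    eps      : indenter geometry constant (0.75 for a Berkovich tip)
    """
    h_c = h_max - eps * p_max / s_unload       # contact depth, nm
    area = 24.5 * h_c ** 2                     # ideal Berkovich area, nm^2
    hardness = p_max / area * 1e6              # mN/nm^2 -> GPa
    e_reduced = math.sqrt(math.pi) / 2 * s_unload / math.sqrt(area) * 1e6
    return hardness, e_reduced

# Illustrative polymer-scale values.
H, Er = oliver_pharr(p_max=1.0, s_unload=0.01, h_max=1000.0)
print(f"H ~ {H:.2f} GPa, E_r ~ {Er:.1f} GPa")
```

For viscoelastic ophthalmic polymers the time-dependent properties noted above complicate this reduction, which is part of why the paper evaluates nanoindentation as a probe rather than taking it for granted.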

Relevance:

20.00%

Publisher:

Abstract:

A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we report the preparation and characterisation of nanometer-sized TiO2, CdO and ZnO semiconductor particles trapped in zeolite NaY. Preparation of these particles was carried out via the traditional ion-exchange method and a subsequent calcination procedure. It was found that the smaller cations, i.e., Cd2+ and Zn2+, could be readily introduced into the SI′ and SII′ sites located in the sodalite cages through ion exchange, whereas this was not the case for the larger Ti species, i.e., the Ti monomer [TiO]2+ or dimer [Ti2O3]2+, which were predominantly dispersed on the external surface of zeolite NaY. The subsequent calcination procedure promoted the migration of these Ti species onto the internal surface of the supercages. The semiconductor particles confined in the NaY zeolite host exhibited a significant blue shift in their UV-VIS absorption spectra relative to the respective bulk semiconductor materials, due to the quantum size effect (QSE). The particle sizes calculated from the UV-VIS optical absorption spectra using the effective mass approximation model are in good agreement with the atomic absorption data.
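
The effective mass approximation referred to above is commonly applied via the Brus equation, which relates the blue shift to the particle radius. The sketch below inverts that relation numerically; the effective masses and dielectric constant are illustrative literature-scale values for ZnO, and the 1.8/(epsilon*R) Coulomb term is the standard form, not necessarily the paper's exact parametrisation.

```python
# Sketch: estimating particle radius from the UV-VIS blue shift via the
# Brus effective-mass-approximation model. Effective masses and dielectric
# constant are illustrative ZnO-scale values; the paper's exact parameters
# are not given in the abstract.
import math

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # electron rest mass, kg
E_CH = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def brus_shift(radius_m, me_eff=0.26, mh_eff=0.59, eps_r=8.5):
    """Band-gap blue shift (eV) for a spherical particle of given radius:
    quantum confinement term minus the electron-hole Coulomb attraction."""
    confinement = (HBAR * math.pi) ** 2 / (2 * radius_m ** 2) \
        * (1 / (me_eff * M_E) + 1 / (mh_eff * M_E))
    coulomb = 1.8 * E_CH ** 2 / (4 * math.pi * eps_r * EPS0 * radius_m)
    return (confinement - coulomb) / E_CH

def radius_from_shift(shift_ev, lo=0.5e-9, hi=20e-9):
    """Invert brus_shift by bisection (the shift decreases with radius)."""
    mid = 0.5 * (lo + hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if brus_shift(mid) > shift_ev:
            lo = mid      # shift too large -> particle must be bigger
        else:
            hi = mid
    return mid

shift = 0.30  # example observed blue shift, eV (illustrative)
print(f"blue shift {shift} eV -> R ~ {radius_from_shift(shift) * 1e9:.1f} nm")
```

A shift of a few tenths of an eV corresponds to radii of a few nanometres, which is the size regime consistent with particles confined in zeolite cages.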

Relevance:

20.00%

Publisher:

Abstract:

Finite element modelling of bone fracture fixation systems allows computational investigation of the deformation response of the bone to load. Once validated, these models can easily be adapted to explore changes in the design or configuration of a fixator. The deformation of the tissue within the fracture gap determines its healing and is often summarised as the stiffness of the construct. FE models capable of reproducing this behaviour would provide valuable insight into the healing potential of different fixation systems. Current model validation techniques lack depth in 6D load and deformation measurements, and other aspects of FE model creation, such as the definition of interfaces between components, have also not been explored. This project investigated the mechanical testing and FE modelling of a bone-plate construct for the determination of stiffness. In-depth 6D measurement and analysis of the generated forces, moments and movements showed large out-of-plane behaviours which had not previously been characterised. Stiffness calculated from the interfragmentary movement was found to be an unsuitable summary parameter, as the error propagation is too large. Current FE modelling techniques were applied in compression and torsion, mimicking the experimental setup. Compressive stiffness was well replicated, though torsional stiffness was not, and the out-of-plane behaviours prevalent in the experimental work were not replicated in the model. The interfaces between the components were investigated experimentally and through modification of the FE model. Incorporation of the interface modelling techniques into the full construct models had no effect in compression but did act to reduce torsional stiffness, bringing it closer to that of the experiment. The interface definitions had no effect on the out-of-plane behaviours, which were still not replicated. Neither current nor novel FE modelling techniques were able to replicate the out-of-plane behaviours evident in the experimental work. New techniques for modelling loads and boundary conditions need to be developed to mimic the effects of the entire experimental system.
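
The claim that stiffness derived from interfragmentary movement suffers from excessive error propagation can be illustrated with standard first-order uncertainty propagation for k = F/delta: a fixed measurement noise on a small displacement dominates the result. All numbers below are illustrative assumptions, not the project's measurement data.

```python
# Sketch: why stiffness k = F/delta computed from interfragmentary
# movement (IFM) is error-prone. First-order propagation gives
# (sigma_k/k)^2 = (sigma_F/F)^2 + (sigma_d/d)^2, so a fixed displacement
# noise dominates when the IFM is small. All numbers are illustrative.
import math

def stiffness_rel_error(force, sigma_force, disp, sigma_disp):
    """Relative uncertainty of k = force/disp (first-order propagation)."""
    return math.hypot(sigma_force / force, sigma_disp / disp)

F, SIGMA_F = 500.0, 5.0   # N, with 1% load-cell noise (assumed)
SIGMA_D = 0.05            # mm, fixed motion-tracking noise (assumed)

for d in (1.0, 0.5, 0.1):  # IFM magnitude, mm
    rel = stiffness_rel_error(F, SIGMA_F, d, SIGMA_D)
    print(f"IFM = {d:.1f} mm -> stiffness uncertainty ~ {rel:.0%}")
```

With these assumed noise levels, the stiffness uncertainty grows from about 5% at 1 mm of IFM to about 50% at 0.1 mm, which is why a single stiffness value is an unsuitable summary of small interfragmentary movements.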

Relevance:

20.00%

Publisher:

Abstract:

A bulk quantity of graphite oxide was prepared by oxidation of graphite using the modified Hummers method, and its ultrasonication in organic solvents yielded graphene oxide (GO). X-ray diffraction (XRD) patterns and X-ray photoelectron (XPS), Raman and Fourier transform infrared (FTIR) spectroscopy indicated the successful preparation of GO. The XPS survey spectrum of GO revealed the presence of 66.6 at% C and 30.4 at% O. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images showed that the graphene oxide consists of a large number of platelets with a curled morphology of thin, wrinkled, sheet-like structures. AFM imaging of the exfoliated GO indicated that the average thickness of the GO sheets is ~1.0 nm, consistent with monolayer GO. GO/epoxy nanocomposites were prepared by a typical solution-mixing technique, and the influence of GO on the mechanical and thermal properties of the nanocomposites was investigated. In terms of mechanical behaviour, 0.5 wt% GO in the nanocomposite achieved the maximum increases in elastic modulus (~35%) and tensile strength (~7%). The TEM analysis provided clear images of the microstructure, showing homogeneous dispersion of GO in the polymer matrix. The improved strength properties of the GO/epoxy nanocomposites can be attributed to the inherent strength of GO, its good dispersion, and the strong interfacial interactions between the GO sheets and the polymer matrix. However, the incorporation of GO had a significant negative effect on the composite glass transition temperature (Tg), possibly due to GO interfering with the curing reaction of the epoxy.