976 results for computational chemistry


Relevance: 20.00%

Abstract:

The controversy over how to interpret the ages of lunar highland breccias has recently been discussed by James [1]. Are the measured ages testimony to true events in lunar history: do they represent the age of the ancient crustal rocks, mixed ages of unequilibrated matrix-phenocryst relationships, or merely thermal events subsequent to the formational event? It is certain from analyses of terrestrial impact melt breccias that the melt matrix of whole impact melt sheets is isotopically equilibrated, owing to the extensive mixing of the early cratering stage [2,3]. It has been shown that isotopic equilibration takes place between the impact melt matrix and the target rock clasts therein, with the intensity of isotopic exchange depending on the degree of shock metamorphism, thermal metamorphism and the size of the clasts [4]. Therefore impact melt breccias - if they are relatively clast-poor and mineralogically well studied - can be considered the most reliable source of information on the impact history of the lunar highlands.

Relevance: 20.00%

Abstract:

Graphene nanoribbons (GNRs) with free edges can exhibit non-flat morphologies due to pre-existing edge stress. Using molecular dynamics (MD) simulations, we investigate the free-edge effect on the shape transition in GNRs with different edge types, including regular (armchair and zigzag) edges, hydrogen-terminated armchair edges and reconstructed armchair edges. The results show that the initial edge stress and energy depend on the edge configuration. It is confirmed that applying pre-strain to the free edges is a possible way to limit the random shape transition of GNRs. The influence of surface attachment on the shape transition is also investigated; surface attachment is found to lead to periodic ripples in GNRs, depending on the initial edge configuration.
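
As a concrete illustration of the kind of setup such MD studies start from (not the authors' actual workflow), the sketch below builds a hydrogen-saturated armchair GNR with ASE and applies a uniform axial pre-strain as a crude stand-in for edge pre-strain; the ribbon size and the 1% strain value are arbitrary choices.

```python
import numpy as np
from ase.build import graphene_nanoribbon

# Hydrogen-saturated armchair ribbon, 7 dimer lines wide and 6 cells long
# (arbitrary illustrative sizes); ASE builds the ribbon along the z axis.
gnr = graphene_nanoribbon(7, 6, type='armchair', saturated=True, vacuum=10.0)
gnr.pbc = (False, False, True)         # periodic along the ribbon axis only

# Uniform axial pre-strain as a simple proxy for pre-straining the edges.
strain = 0.01                           # 1% tensile pre-strain, illustrative
cell = np.array(gnr.get_cell())
cell[2] *= 1.0 + strain
gnr.set_cell(cell, scale_atoms=True)

# An MD run from here would need a carbon potential (e.g. AIREBO through a
# LAMMPS calculator) attached as gnr.calc before time integration.
print(len(gnr), "atoms; ribbon-axis length:", cell[2, 2], "Å")
```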

Relevance: 20.00%

Abstract:

Hard biological materials such as bone possess a superior combination of high stiffness and toughness. Two unique characteristics of bone microstructure are the large aspect ratio of the mineralized collagen fibrils (MCFs) and the extremely thin, large-area extrafibrillar protein matrix located between the MCFs. The objective of this study is to investigate the effects of (1) the MCF aspect ratio and (2) energy dissipation in the extrafibrillar protein matrix on the mechanical behaviour of MCF arrays. Notched specimens of MCF arrays in extrafibrillar protein matrix are subjected to bending, and a cohesive zone model is implemented to simulate the failure of the extrafibrillar protein matrix. The study reveals that MCF arrays with a higher MCF aspect ratio, and MCF arrays with higher protein energy dissipation in the interface direction, are able to sustain a higher bending force and dissipate more energy.
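
For readers unfamiliar with cohesive zone models, the snippet below sketches a generic bilinear traction-separation law and the fracture energy it dissipates; the peak strength and separation values are illustrative placeholders, not the parameters used in the study.

```python
import numpy as np

# Generic bilinear traction-separation law: traction rises linearly to a
# peak strength t_max at separation d0, then softens to zero at d_f.
# All parameter values are illustrative placeholders.
def bilinear_traction(d, t_max=50e6, d0=1e-9, d_f=20e-9):
    """Traction (Pa) as a function of opening separation d (m)."""
    d = np.asarray(d, dtype=float)
    rising = t_max * d / d0
    softening = t_max * (d_f - d) / (d_f - d0)
    return np.where(d < d0, rising, np.clip(softening, 0.0, None))

# Work of separation = area under the curve (= t_max * d_f / 2 here).
d = np.linspace(0.0, 25e-9, 2001)
t = bilinear_traction(d)
G = float(np.sum((t[:-1] + t[1:]) * np.diff(d)) / 2.0)  # trapezoid rule
print(f"dissipated energy per unit crack area: {G:.3e} J/m^2")  # ~0.5 J/m^2
```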

Relevance: 20.00%

Abstract:

In this chapter, we present a contemporary review of the numerical characterization of nanowires (NWs) to date. The bulk of the research reported in the literature concerns metallic NWs, including Al, Cu, Au, Ag, Ni and their alloys. Some nonmetallic NWs, such as ZnO, GaN, SiC and SiO2, have also been investigated. A wealth of numerical studies of NWs has been conducted. Issues analyzed include structural changes under different loading situations, the formation and propagation of dislocations, and the effect of the magnitude of the applied loading on the deformation mechanics. Efforts have also been made to correlate simulation results with experimental measurements; however, direct comparisons are difficult, since computational limitations force most simulations to use extremely high strain/loading rates and small simulation samples. Despite the large body of numerical studies of NWs, significant work still lies ahead in terms of problem formulation, interpretation of results, identification and delineation of deformation mechanisms, and constitutive characterization of behavior. Section 1 introduces the experimental and numerical approaches commonly adopted in studies of NW deformation. Section 2 presents an overview of findings concerning perfect NWs under different loading situations, such as tension, compression, torsion and bending. Section 3 details some recent results from the authors' own work, with an emphasis on the influence of different pre-existing defects on NWs. Some thoughts on future directions for the computational mechanics of NWs, together with conclusions, are given in the last section.
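
To make the simulation methodology concrete (this sketch is ours, not from the chapter), here is a minimal quasi-static tension test of a copper nanowire using ASE's EMT potential; the wire dimensions, strain increments and convergence threshold are arbitrary illustrative choices, and EMT is a cheap stand-in for the embedded-atom-type potentials typically used in such studies.

```python
import numpy as np
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.optimize import BFGS

# Carve a thin [001]-oriented Cu wire out of replicated fcc bulk.
atoms = bulk("Cu", "fcc", a=3.615, cubic=True).repeat((4, 4, 6))
center = np.array(atoms.cell).sum(axis=0) / 2.0
r = np.linalg.norm(atoms.positions[:, :2] - center[:2], axis=1)
del atoms[[i for i in range(len(atoms)) if r[i] > 6.0]]  # keep a 6 Å cylinder
atoms.center(vacuum=8.0, axis=(0, 1))
atoms.pbc = (False, False, True)       # periodic along the wire axis only
atoms.calc = EMT()

# Quasi-static tension: strain along z in 1% increments, relax, record energy.
for step in range(5):
    cell = np.array(atoms.get_cell())
    cell[2] *= 1.01
    atoms.set_cell(cell, scale_atoms=True)
    BFGS(atoms, logfile=None).run(fmax=0.05)
    e = atoms.get_potential_energy()
    print(f"strain ~{0.01 * (step + 1):.2f}: E = {e:.3f} eV")
# The axial stress could be post-processed as (1/V) dE/d(strain).
```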

Relevance: 20.00%

Abstract:

Recent advances in computational geodynamics are applied to explore the link between Earth's heat, its chemistry and its mechanical behavior. Computational thermal-mechanical solutions now allow us to understand patterns in the Earth by solving the basic physics of heat transfer, and this approach is currently used to solve basic convection patterns of terrestrial planets. Applying the same methodology at smaller scales yields promising similarities between observed and predicted structures, which are often the sites of mineral deposits. The new approach involves a fully coupled solution of the energy, momentum and continuity equations of the system at all scales, allowing the prediction of fractures, shear zones and other typical geological patterns from a randomly perturbed initial state. The results link a global geodynamic mechanical framework, through regional-scale mineral deposits, down to the underlying micro-scale processes. Ongoing work includes the challenge of incorporating chemistry into the formulation.
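
As a toy illustration of the "basic physics of heat transfer" underlying such models (not the solvers actually used), the following sketch time-steps the conductive part of the energy equation, the 2-D heat equation, with an explicit finite-difference scheme on a periodic grid, starting from a randomly perturbed state; real geodynamic codes couple this to the momentum and continuity equations.

```python
import numpy as np

# Explicit finite-difference time-stepping of the 2-D heat equation,
# dT/dt = kappa * (d2T/dx2 + d2T/dy2); np.roll gives periodic boundaries.
def diffuse(T, kappa, dx, dt, steps):
    for _ in range(steps):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
        T = T + dt * kappa * lap
    return T

n, dx, kappa = 64, 1.0, 1.0
dt = 0.2 * dx**2 / kappa            # explicit stability: dt <= dx^2/(4 kappa)
T0 = np.random.rand(n, n)           # randomly perturbed initial state
T = diffuse(T0, kappa, dx, dt, 500)
print("initial std:", T0.std(), "-> final std:", T.std())  # perturbations decay
```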

Relevance: 20.00%

Abstract:

Curriculum developers and researchers have promoted context-based programmes to arrest waning student interest and participation in the enabling sciences at high school and university. Context-based programmes aim for students to make connections between scientific discourse and real-world contexts, elevating curricular relevance without diminishing conceptual understanding. This interpretive study explored the learning transactions in one 11th-grade context-based chemistry classroom where the context was the local creek. The dialectic of agency/structure was used as a lens to examine how the practices in classroom interactions afforded students agency for learning. The results suggest, first, that fluid transitions were evident in the student-student interactions involving successful students and, second, that fluid transitions linking concepts to context were evident in the students' successful reports. The study reveals that the structures of writing and collaborating in groups enabled students' agential, fluent movement between the field of the real-world creek and the field of the formal chemistry classroom. Furthermore, characteristics of academically successful students in context-based chemistry are highlighted. Research, teaching and future directions for context-based science teaching are discussed.

Relevance: 20.00%

Abstract:

Higher-order thinking has featured persistently in the reform agenda for science education. The intended curriculum in various countries sets out aspirational statements for the levels of higher-order thinking to be attained by students. This study reports the extent to which chemistry examinations from four Australian states align with and facilitate the intended higher-order thinking skills stipulated in curriculum documents. Through content analysis, the curriculum goals were identified for each state and compared with the nature of the question items in the corresponding examinations. Categories of higher-order thinking adapted from the OECD's PISA science test were used to analyze the question items. There was considerable variation in the extent to which the examinations from the states supported the curriculum intent of developing and assessing higher-order thinking. Generally, examinations that used a marks-based system tended to emphasize lower-order thinking, with a greater share of marks allocated to lower-order thinking questions, whereas criterion-referenced examinations tended to award greater credit for higher-order thinking questions. The level of complexity of the chemistry content was another factor that limited the extent to which examination questions supported higher-order thinking. Implications of these findings are drawn for the authorities responsible for designing curriculum and assessment procedures, and for teachers.

Relevance: 20.00%

Abstract:

A considerable amount of research has proposed optimization-based approaches that employ various vibration parameters for structural damage diagnosis. Damage detection by these methods is in fact the result of updating the analytical structural model in line with the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures such as beams or plates. When applied to a complex structure, such as a steel truss bridge, a traditional optimization process demands massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome the problem. Unlike the tedious convergence of a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups, each covering a smaller number of damage candidates; the converged population of each group then evolves as an initial population of the next layer, where the groups merge into larger groups. Because the groups can be computed in parallel, a damage detection process featuring ML-GA gains in both optimization performance and computational efficiency. To assess the proposed algorithm, the modal strain energy correlation (MSEC) was adopted as the objective function, and several damage scenarios of a complex steel truss bridge's finite element model were employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency with the proposed ML-GA, whereas the conventional GA converges only to a local solution.
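
A minimal sketch of the layered-population idea as we read it (group, evolve, merge), with a toy objective standing in for the MSEC index; the population sizes, genetic operators and "true damage" vector below are all invented for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in objective: recover a "damage" vector (element stiffness
# reductions). A real implementation would score candidates with an
# MSEC-based index computed from the finite element model.
TRUE = np.array([0.0, 0.3, 0.0, 0.0, 0.5, 0.0, 0.0, 0.1])

def fitness(pop):
    return -np.abs(pop - TRUE).sum(axis=1)        # higher is better

def evolve(pop, generations=60, mut=0.05):
    for _ in range(generations):
        keep = np.argsort(fitness(pop))[-len(pop) // 2:]
        parents = pop[keep]                                  # truncation selection
        mates = parents[rng.permutation(len(parents))]
        children = 0.5 * (parents + mates)                   # arithmetic crossover
        children += rng.normal(0.0, mut, children.shape)     # mutation
        pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])
    return pop

# Layer 1: several small groups evolve independently (parallelisable).
groups = [evolve(rng.random((20, TRUE.size))) for _ in range(4)]

# Layer 2: converged groups merge pairwise and evolve as larger populations.
merged = [evolve(np.vstack(groups[:2])), evolve(np.vstack(groups[2:]))]

# Final layer: one population seeded by all survivors.
final = evolve(np.vstack(merged))
print("best candidate:", final[np.argmax(fitness(final))].round(2))
```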

Relevance: 20.00%

Abstract:

Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, so managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA-signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables secure outsourcing of cryptographic tasks via partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs) and hybrid encryption schemes, encompassing public-key, identity-based and tag-based encryption flavors. We show that some generic transformations and concrete constructions enjoy this property, and we then present a new public-key encryption (PKE) scheme having this property, with a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
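
For readers unfamiliar with modular-exponentiation puzzles, here is a toy sketch in the style of Rivest-Shamir-Wagner time-lock puzzles, which share the sequential-squaring flavor: the generator uses knowledge of phi(n) as a shortcut, while the solver must square repeatedly. This is not the provably secure scheme proposed in the thesis, and the primes are far too small for real use.

```python
import math
import random

# The generator knows phi(n) and reduces the exponent; the solver must
# perform t sequential modular squarings. Toy primes only -- NOT secure.
p, q = 1000003, 1000033
n, phi = p * q, (p - 1) * (q - 1)
t = 100_000                      # puzzle difficulty (number of squarings)

x = random.randrange(2, n)
while math.gcd(x, n) != 1:       # Euler's theorem needs gcd(x, n) = 1
    x = random.randrange(2, n)

# Fast path (generator): x^(2^t) mod n via exponent reduction mod phi(n).
y_fast = pow(x, pow(2, t, phi), n)

# Slow path (solver): t sequential squarings, inherently non-parallel.
y_slow = x
for _ in range(t):
    y_slow = y_slow * y_slow % n

assert y_fast == y_slow
print("puzzle solution:", y_slow)
```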

Relevance: 20.00%

Abstract:

The adsorption of carbon dioxide and nitrogen molecules on aluminum nitride (AlN) nanostructures has been explored using first-principles computational methods. Optimized configurations corresponding to physisorption and, subsequently, chemisorption of CO2 are identified; in contrast, only a physisorption structure is found for N2. Transition-state searches imply a low energy barrier between the physisorption and chemisorption states of CO2, such that the latter is accessible and thermodynamically favored at room temperature. The effective binding energy of the optimized chemisorption structure is appreciably larger than those of other CO2-adsorptive materials, suggesting the potential of aluminum nitride nanostructures for carbon dioxide capture and storage.
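
A back-of-envelope transition-state-theory estimate (ours, not the paper's) shows why a "low" barrier is kinetically accessible at room temperature; the 0.3 eV barrier below is an assumed illustrative value, not the computed one.

```python
import math

# Transition-state-theory rate: k = (kB*T/h) * exp(-Ea / (kB*T)).
kB = 8.617e-5    # Boltzmann constant, eV/K
h = 4.136e-15    # Planck constant, eV*s
T = 300.0        # room temperature, K
Ea = 0.3         # assumed illustrative barrier (eV), not the computed value

k = (kB * T / h) * math.exp(-Ea / (kB * T))
print(f"TST rate for Ea = {Ea} eV at {T:.0f} K: {k:.2e} s^-1")
# ~6e7 s^-1: barrier crossing is fast on laboratory timescales, so the
# chemisorbed state is kinetically accessible at room temperature.
```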

Relevance: 20.00%

Abstract:

Ab initio density functional theory (DFT) calculations are performed to explore possible catalytic effects on the dissociative chemisorption of hydrogen on a Mg(0001) surface when carbon is incorporated into Mg materials. The computational results imply that a C atom located initially on a Mg(0001) surface can migrate into the subsurface and occupy an fcc interstitial site, with charge transfer to the C atom from neighboring Mg atoms. The effect of subsurface C on the dissociation of H2 on the Mg(0001) surface is found to be relatively marginal: a perfect sublayer of interstitial C is calculated to lower the barrier by 0.16 eV compared with that on a pure Mg(0001) surface. Further calculations reveal, however, that sublayer C may have a significant effect in enhancing the diffusion of atomic hydrogen into the sublayers through fcc channels. This contributes new physical understanding toward rationalizing the experimentally observed improvement in the absorption kinetics of H2 when graphite or single-walled carbon nanotubes (SWCNTs) are introduced into the Mg powder during ball milling.
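
To put the 0.16 eV barrier reduction in perspective, a simple Arrhenius ratio (our arithmetic, not the paper's) gives the nominal rate enhancement at room temperature; the remark about the absolute barrier in the comment is our assumption.

```python
import math

kB, T = 8.617e-5, 300.0   # Boltzmann constant (eV/K), temperature (K)
dE = 0.16                 # barrier lowering from the abstract, eV
print(f"nominal rate enhancement: ~{math.exp(dE / (kB * T)):.0f}x")
# ~490x at 300 K. If the remaining dissociation barrier is still of order
# 1 eV (our assumption, not stated above), absolute rates remain low,
# consistent with the effect being called "relatively marginal".
```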

Relevance: 20.00%

Abstract:

Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air).

Methods: DOSXYZnrc phantoms are generated from CT data using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.

Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasingly large effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points instead of 12 results in a maximum radiological thickness change of 0.2 cm, whereas defining it using only 2 points results in a maximum change of 11.2 cm. Changing the definition of the materials comprising the phantom among water, plastic and tissue produces only millimetre-scale changes in the resulting radiological thickness; when the entire phantom is defined as lead, the calculated radiological thickness changes by a maximum of 9.7 cm. Evidently, simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than alteration of the tissue-type assignment.

Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used.

Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
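
The following sketch illustrates the ramp-simplification experiment in miniature (the HU/density pairs are invented placeholders, not the centre's commissioned calibration): mass density is interpolated from a 12-point ramp and from a crude 2-point ramp, and the two are compared.

```python
import numpy as np

# Invented placeholder ramp points (HU, g/cm^3) -- not the commissioned
# calibration of the scanner discussed above.
ramp12_hu = np.array([-1000, -950, -700, -300, -100, 0, 50, 100, 300, 700, 1200, 3000])
ramp12_rho = np.array([0.001, 0.05, 0.30, 0.70, 0.93, 1.00, 1.05, 1.10, 1.20, 1.45, 1.85, 2.90])

# A crude 2-point ramp spanning the same HU range.
ramp2_hu, ramp2_rho = ramp12_hu[[0, -1]], ramp12_rho[[0, -1]]

hu = np.linspace(-1000, 3000, 9)
rho12 = np.interp(hu, ramp12_hu, ramp12_rho)   # CTCREATE-style interpolation
rho2 = np.interp(hu, ramp2_hu, ramp2_rho)
for h, a, b in zip(hu, rho12, rho2):
    print(f"HU {h:7.0f}: 12-point {a:.3f} vs 2-point {b:.3f} g/cm^3")
# Intermediate HU values are badly misestimated by the 2-point ramp, which
# is why coarsening the ramp shifted radiological thickness by up to ~11 cm.
```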

Relevance: 20.00%

Abstract:

Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID, and to test the appropriateness of the theoretical SPR calculations.

Methods: This study uses a full MC model of the entire linac-phantom-detector system, simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of the SPR in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, and changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID.

Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, close but imperfect agreement (differing by up to 5%) is found between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between the simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements are much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to the simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air.

Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations.

Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
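
To fix notation (a sketch of the standard definitions, not the study's code): the SPR follows from the total and primary signals, and the simple theoretical model tested above takes the SPR as linear in irradiated volume; the constant k and all numbers below are invented.

```python
# Scatter-to-primary ratio from a transmission measurement: if I is the
# total EPID signal behind the phantom and I_p its primary-only component
# (obtained, e.g., by extrapolating to zero field size), then:
def spr(total_signal: float, primary_signal: float) -> float:
    return (total_signal - primary_signal) / primary_signal

# The simple theoretical model takes SPR as linear in irradiated volume
# (field area x phantom thickness); k is an empirical constant that would
# be fitted per setup.
def spr_linear(field_area_cm2: float, thickness_cm: float, k: float) -> float:
    return k * field_area_cm2 * thickness_cm

print(spr(1.00, 0.90))                 # 0.111... -> ~11% scatter contribution
print(spr_linear(100.0, 20.0, 5e-5))   # toy numbers -> SPR = 0.1
```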

Relevance: 20.00%

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the 'gold standard' for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims.

Methods: (1a) Plan importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resulting files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS and MC dose distributions. These implementations are independent of spatial resolution and able to interpolate for comparisons.

Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes.

Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant.

Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
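
As an illustration of one of the comparison metrics named above, here is a minimal 1-D gamma-index evaluation (our simplified sketch; the project's implementation is 3-D, resolution-independent and interpolating, and the 3 mm / 3% criteria and toy profiles below are arbitrary choices).

```python
import numpy as np

def gamma_1d(x, d_ref, d_eval, dta=3.0, dd=0.03):
    """1-D gamma index: x in mm; doses normalised to the reference maximum."""
    gam = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dist2 = ((x - xi) / dta) ** 2        # squared distance-to-agreement term
        dose2 = ((d_eval - di) / dd) ** 2    # squared dose-difference term
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam                                # gamma <= 1 -> point passes

x = np.linspace(0.0, 100.0, 101)                      # positions, mm
ref = np.exp(-(((x - 50.0) / 30.0) ** 2))             # toy reference profile
ev = 1.01 * np.exp(-(((x - 51.0) / 30.0) ** 2))       # shifted, rescaled copy
g = gamma_1d(x, ref, ev)
print(f"gamma pass rate (3 mm / 3%): {100.0 * np.mean(g <= 1.0):.1f}%")
```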