967 results for Energy Intensity
Abstract:
In this paper, the collision of a C36 cluster, with D6h symmetry, on the diamond (001)-(2×1) surface was investigated using molecular dynamics (MD) simulation based on the semi-empirical Brenner potential. The incident kinetic energy of the C36 ranges from 20 to 150 eV per cluster. The collision dynamics was investigated as a function of impact energy Ein. The C36 cluster was first impacted towards the center of two dimers with a fixed orientation. It was found that when Ein was lower than 30 eV, the C36 bounced off the surface without breaking up. Increasing Ein to 30-45 eV, bonds were formed between C36 and surface dimer atoms, and the adsorbed C36 retained its original free-cluster structure. Around 50-60 eV, the C36 rebounded from the surface with cage defects. Above 70 eV, fragmentation both in the cluster and on the surface was observed. Our simulation supported the experimental findings that during low-energy cluster beam deposition small fullerenes could keep their original structure after adsorption (i.e. the memory effect), if Ein is within a certain range. Furthermore, we found that the energy threshold for chemisorption is sensitive to the orientation of the incident C36 and its impact position on the asymmetric surface.
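As an illustrative aside (not part of the original abstract): the incident energies quoted above translate into cluster impact speeds via E_in = (1/2)Mv^2. A minimal Python sketch of that conversion, assuming the standard atomic mass of carbon for all 36 atoms:

```python
# Illustrative only: convert incident kinetic energy per C36 cluster (eV)
# into a centre-of-mass impact speed using E = (1/2) M v^2.
import math

EV_TO_J = 1.602176634e-19        # joules per electronvolt
AMU_TO_KG = 1.66053907e-27       # kilograms per atomic mass unit
M_C36 = 36 * 12.011 * AMU_TO_KG  # mass of one C36 cluster (kg)

def impact_speed(e_in_ev: float) -> float:
    """Centre-of-mass speed (m/s) for a given incident energy per cluster (eV)."""
    return math.sqrt(2.0 * e_in_ev * EV_TO_J / M_C36)

for e_in in (20, 30, 50, 70, 150):   # energies spanning the range studied above
    print(f"{e_in:4d} eV  ->  {impact_speed(e_in):6.0f} m/s")
```

At 20 eV per cluster this gives roughly 3 km/s, the speed scale at which the bounce/adsorb/fragment regimes above are distinguished.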
Abstract:
Purpose: Electronic Portal Imaging Devices (EPIDs) are available with most linear accelerators (Antonuk, 2002), the current technology being amorphous silicon flat panel imagers. EPIDs are currently used routinely in patient positioning before radiotherapy treatments. There has been an increasing interest in using EPID technology for dosimetric verification of radiotherapy treatments (van Elmpt, 2008). A straightforward technique involves the EPID panel being used to measure the fluence exiting the patient during a treatment, which is then compared to a prediction of the fluence based on the treatment plan. However, a number of significant limitations exist in this method, resulting in a limited proliferation of this technique in a clinical environment. In this paper, we aim to present a technique of simulating IMRT fields using Monte Carlo to predict the dose in an EPID, which can then be compared to the measured dose in the EPID. Materials: Measurements were made using an iView GT flat panel a-Si EPID mounted on an Elekta Synergy linear accelerator. The images from the EPID were acquired using the XIS software (Heimann Imaging Systems). Monte Carlo simulations were performed using the BEAMnrc and DOSXYZnrc user codes. The IMRT fields to be delivered were taken from the treatment planning system in DICOM-RT format and converted into BEAMnrc and DOSXYZnrc input files using an in-house application (Crowe, 2009). Additionally, all image processing and analysis was performed using another in-house application written in the Interactive Data Language (IDL) (ITT Visual Information Systems). Comparison between the measured and Monte Carlo EPID images was performed using a gamma analysis (Low, 1998) incorporating dose and distance-to-agreement criteria. Results: The fluence maps recorded by the EPID were found to provide good agreement between measured and simulated data. Figure 1 shows an example of measured and simulated IMRT dose images and profiles in the x and y directions. References: D. A. Low et al., "A technique for the quantitative evaluation of dose distributions", Med Phys 25(5), May 1998; S. Crowe, T. Kairn, A. Fielding, "The development of a Monte Carlo system to verify radiotherapy treatment dose calculations", Radiotherapy & Oncology 92(Suppl. 1), August 2009, S71.
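As a hedged illustration (not the authors' in-house IDL tool): the gamma analysis of Low et al. (1998) combines a dose-difference criterion and a distance-to-agreement criterion into a single index. A minimal 1-D Python sketch of that comparison:

```python
# Illustrative 1-D gamma analysis (Low et al., 1998): each measured point is
# compared against all calculated points, normalising dose differences by the
# dose criterion and spatial offsets by the distance-to-agreement criterion.
import numpy as np

def gamma_1d(x_mm, measured, calculated, dose_frac=0.03, dta_mm=3.0):
    """Gamma index of a measured profile against a calculated one on the same grid."""
    dose_crit = dose_frac * measured.max()
    gamma = np.empty_like(measured)
    for i, (xm, dm) in enumerate(zip(x_mm, measured)):
        dist_term = ((x_mm - xm) / dta_mm) ** 2
        dose_term = ((calculated - dm) / dose_crit) ** 2
        gamma[i] = np.sqrt((dist_term + dose_term).min())
    return gamma

# Hypothetical profiles on a 0.5 mm grid (stand-ins for EPID dose profiles)
x = np.arange(-50.0, 50.0, 0.5)
meas = np.exp(-x**2 / 400.0)
calc = np.exp(-(x - 0.8)**2 / 405.0)
print("gamma pass rate:", np.mean(gamma_1d(x, meas, calc) <= 1.0))
```

A point passes when gamma <= 1, i.e. it meets the combined dose/distance criteria.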
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID, and to assess the adequacy of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system, simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position.
Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to the simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
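As a hedged illustration of how SPRs can be pulled out of transmission data (a generic sketch, not the exact formulation of Swindell and Evans [1] or the simulations above): if the scatter contribution is assumed to grow roughly linearly with field area, the transmission extrapolated to zero field area estimates the primary-only transmission, and the SPR follows as T(A)/T_primary - 1.

```python
# Illustrative only: estimate scatter-to-primary ratios from transmissions
# measured (or simulated) at several field areas for one phantom thickness,
# assuming transmission varies linearly with field area near zero area.
import numpy as np

def spr_from_transmissions(field_areas_cm2, transmissions):
    """SPR at each field area, using a linear extrapolation to zero area
    to estimate the primary-only transmission."""
    slope, t_primary = np.polyfit(field_areas_cm2, transmissions, 1)
    return np.asarray(transmissions) / t_primary - 1.0

# Hypothetical numbers (not measured data): 5x5 to 20x20 cm fields
areas = np.array([25.0, 100.0, 225.0, 400.0])   # cm^2
trans = np.array([0.512, 0.520, 0.531, 0.545])  # EPID signal with / without phantom
print(spr_from_transmissions(areas, trans))
```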
Abstract:
Background: Low levels of physical activity and high levels of sedentary behavior (SB) are major public health concerns. This study was designed to develop and validate the 7-day Sedentary (S) and Light Intensity Physical Activity (LIPA) Log (7-day SLIPA Log), a self-report measure of specific daily behaviors. Method: To develop the log, 62 specific SB and LIPA behaviors were chosen from the Compendium of Physical Activities. Face-to-face interviews were conducted with 32 sedentary volunteers to identify domains and behaviors of SB and LIPA. To validate the log, a further 22 sedentary adults were recruited to wear the GT3X accelerometer for 7 consecutive days and nights. Results: Pearson correlations (r) between the 7-day SLIPA Log and the GT3X were significant for sedentary behavior (r = .86, p < 0.001) and for LIPA (r = .80, p < 0.001). Lying and sitting postures were positively correlated with GT3X output (r = .60 and r = .64, p < 0.001, respectively). No significant correlation was found for standing posture (r = .14, p = 0.53). The kappa values between the 7-day SLIPA Log and GT3X variables ranged from 0.09 to 0.61, indicating poor to good agreement. Conclusion: The 7-day SLIPA Log is a valid self-report measure of SB and LIPA in specific behavioral domains.
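For readers wanting to reproduce this kind of validation, a minimal sketch (hypothetical numbers, not the study data) of the two statistics reported above, Pearson's r and Cohen's kappa, using standard Python libraries:

```python
# Illustrative validation of self-reported minutes against accelerometer output.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
gt3x_min = rng.normal(480, 60, size=22)           # hypothetical GT3X sedentary minutes/day
log_min = gt3x_min + rng.normal(0, 40, size=22)   # hypothetical 7-day SLIPA Log reports

r, p = pearsonr(log_min, gt3x_min)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# One way a kappa can be formed: agreement on low/medium/high tertile categories
def tertile(values):
    return np.digitize(values, np.quantile(values, [1/3, 2/3]))

print("kappa =", cohen_kappa_score(tertile(log_min), tertile(gt3x_min)))
```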
Abstract:
Since the first oil crisis in 1974, economic reasons placed energy saving among the top priorities in most industrialised countries. In the decades that followed, another, equally strong driver for energy saving emerged: climate change caused by anthropogenic emissions, a large fraction of which result from energy generation. Intrinsically linked to energy consumption and its related emissions is another problem: indoor air quality. City dwellers in industrialised nations spend over 90% of their time indoors, and exposure to indoor pollutants contributes to ~2.6% of the global burden of disease and nearly 2 million premature deaths per year [1]. Changing climate conditions, together with human expectations of comfortable thermal conditions, elevate building energy requirements for heating, cooling, lighting and the use of other electrical equipment. We believe that these changes elicit a need to understand the nexus between energy consumption and its consequent impact on indoor air quality in urban buildings. In our opinion, the key questions are how energy consumption is distributed between different building services, and how the resulting pollution affects indoor air quality. The energy-pollution nexus has clearly been identified in qualitative terms; however, the quantification of such a nexus, to derive emissions or concentrations per unit of energy consumption, is still weak, inconclusive and requires forward thinking. Of course, various aspects of energy consumption and indoor air quality have been studied in detail separately, but in-depth, integrated studies of the energy-pollution nexus are hard to come by. We argue that such studies could be instrumental in providing sustainable solutions that maintain the trade-off between the energy efficiency of buildings and acceptable levels of air pollution for healthy living.
Abstract:
Electricity cost has become a major expense for running data centers, and server consolidation using virtualization technology has become an important means of improving the energy efficiency of data centers. In this research, a genetic algorithm and a simulated-annealing algorithm are proposed for the static virtual machine placement problem that considers the energy consumption in both the servers and the communication network, and a trading algorithm is proposed for dynamic virtual machine placement. Experimental results show that the proposed methods are more energy efficient than existing solutions.
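As a hedged sketch of the simulated-annealing idea mentioned above (hypothetical energy model; capacity constraints and the paper's actual formulation are omitted), a placement is perturbed one VM at a time and accepted with a temperature-dependent probability:

```python
# Illustrative simulated annealing for static VM placement: the cost charges
# idle power for every server in use plus a network cost for traffic between
# VMs placed on different servers.
import math, random

def energy(placement, traffic, server_power=200.0, link_power=0.05):
    cost = server_power * len(set(placement))
    for (a, b), load in traffic.items():
        if placement[a] != placement[b]:
            cost += link_power * load
    return cost

def anneal(n_vms, n_servers, traffic, t0=100.0, cooling=0.995, steps=20000):
    place = [random.randrange(n_servers) for _ in range(n_vms)]
    best, best_cost = place[:], energy(place, traffic)
    t = t0
    for _ in range(steps):
        cand = place[:]
        cand[random.randrange(n_vms)] = random.randrange(n_servers)  # move one VM
        delta = energy(cand, traffic) - energy(place, traffic)
        if delta < 0 or random.random() < math.exp(-delta / t):
            place = cand
            cost = energy(place, traffic)
            if cost < best_cost:
                best, best_cost = place[:], cost
        t *= cooling
    return best, best_cost

traffic = {(0, 1): 500.0, (1, 2): 300.0, (3, 4): 800.0}  # hypothetical VM-to-VM loads
print(anneal(n_vms=5, n_servers=3, traffic=traffic))
```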
Abstract:
This thesis is a study of whether the Australian Clean Energy Package complies with the rules of the World Trade Organization. It examines the legal framework for the Australian carbon pricing mechanism and related arrangements, using World Trade Organization law as the framework for analysis. In doing so, this thesis deconstructs the Clean Energy Package by considering the legal properties of eligible emissions units, the assistance measures introduced by the Package and the liabilities created by the carbon pricing mechanism.
Abstract:
This paper proposes a distributed control approach to coordinate multiple energy storage units (ESUs) to avoid violation of voltage and thermal constraints, which are among the main power quality challenges for future distribution networks. ESUs are usually connected to a network through voltage source converters. In this paper, both the active and reactive power of the ESU converters are used to deal with the above-mentioned power quality issues. The ESUs' reactive power is used for voltage support, while the active power is utilized in managing network loading. The proposed method is applied to two typical distribution networks, and simulation results are presented to show the effectiveness of this approach.
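As a hedged illustration of the kind of converter-level rules implied above (generic droop-style logic with assumed parameters, not the distributed controller proposed in the paper): reactive power responds to the local voltage leaving a deadband, while active power is used to relieve network loading.

```python
# Illustrative ESU converter setpoints (all parameters hypothetical).
def reactive_power_setpoint(v_pu, q_max_kvar, v_low=0.95, v_high=1.05, gain=20.0):
    """Q > 0 injects vars to support a low voltage; Q < 0 absorbs vars to curb a high voltage."""
    if v_pu < v_low:
        q = gain * (v_low - v_pu) * q_max_kvar
    elif v_pu > v_high:
        q = -gain * (v_pu - v_high) * q_max_kvar
    else:
        q = 0.0
    return max(-q_max_kvar, min(q_max_kvar, q))

def active_power_setpoint(line_loading_pct, p_max_kw, limit_pct=90.0):
    """Charge the storage unit (P < 0 as seen from the grid) in proportion to any overload."""
    overload = max(0.0, line_loading_pct - limit_pct)
    return -min(p_max_kw, p_max_kw * overload / (100.0 - limit_pct))

print(reactive_power_setpoint(0.93, q_max_kvar=100.0))  # low voltage -> inject vars
print(active_power_setpoint(96.0, p_max_kw=50.0))       # overloaded feeder -> absorb power
```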
Abstract:
Severe power quality problems can arise when a large number of single-phase distributed energy resources (DERs) are connected to a low-voltage power distribution system. Due to the random location and size of DERs, it may so happen that a particular phase generates more power than its load demand. In such an event, the excess power will be fed back to the distribution substation and will eventually find its way to the transmission network, causing undesirable voltage-current unbalance. As a solution to this problem, the article proposes the use of a distribution static compensator (DSTATCOM), which regulates voltage at the point of common coupling (PCC), thereby ensuring balanced current flow from and to the distribution substation. Additionally, this device can also support the distribution network in the absence of the utility connection, making the distribution system work as a microgrid. The proposals are validated through extensive digital computer simulation studies using PSCAD™.
Abstract:
Background: Random Breath Testing (RBT) is the main drink driving law enforcement tool used throughout Australia. International comparative research considers Australia to have the most successful RBT program compared to other countries in terms of crash reductions (Erke, Goldenbeld, & Vaa, 2009). This success is attributed to the program's high intensity (Erke et al., 2009). Our review of the extant literature suggests that there is no research evidence that indicates an optimal level of alcohol breath testing. That is, we suggest that no research exists to guide policy regarding whether alcohol-related crashes reach a point of diminishing returns as a result of either saturated or targeted RBT testing. Aims: In this paper we first provide an examination of RBTs and alcohol-related crashes across Australian jurisdictions. We then address the question of whether or not an optimal level of random breath testing exists by examining the relationship between the number of RBTs conducted and the occurrence of alcohol-related crashes over time, across all Australian states. Method: To examine the association between RBT rates and alcohol-related crashes, and to assess whether an optimal ratio of RBT tests per licenced driver can be determined, we draw on three administrative data sources from each jurisdiction. Where possible, the data collected span January 1st 2000 to September 30th 2012. The RBT administrative dataset includes the number of Random Breath Tests (RBTs) conducted per month. The traffic crash administrative dataset contains the aggregated monthly count of traffic crashes where an individual's recorded BAC reached or exceeded 0.05 g/100 mL of alcohol in blood. The licenced driver data were the monthly number of registered licenced drivers spanning January 2000 to December 2011. Results: The data highlight that the Australian story is not reflective of all states and territories. The stable RBT-to-licenced-driver ratio in Queensland (of 1:1) is associated with a stable alcohol-related crash rate of 5.5 per 100,000 licenced drivers. Yet in South Australia, where a relatively stable RBT-to-licenced-driver ratio of 1:2 is maintained, the rate of alcohol-related traffic crashes is substantially lower, at 3.7 per 100,000. We use joinpoint regression techniques and varying regression models to fit the data and compare the different patterns between jurisdictions. Discussion: The results of this study provide an updated review and evaluation of RBTs conducted in Australia and examine the association between RBTs and alcohol-related traffic crashes. We also present an evidence base to guide policy decisions for RBT operations.
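For concreteness (illustrative figures only, not the administrative data described above), the two summary measures quoted in the results reduce to simple ratios:

```python
# Illustrative calculation of the RBT-to-licenced-driver ratio and the
# alcohol-related crash rate per 100,000 licenced drivers.
def rbt_to_driver_ratio(rbts_per_year, licenced_drivers):
    return rbts_per_year / licenced_drivers        # 1.0 ~ one test per driver per year (1:1)

def crash_rate_per_100k(crashes_per_year, licenced_drivers):
    return 100_000 * crashes_per_year / licenced_drivers

drivers = 3_400_000                                 # hypothetical licenced-driver count
print(rbt_to_driver_ratio(3_400_000, drivers))      # -> 1.0, i.e. a 1:1 ratio
print(crash_rate_per_100k(187, drivers))            # -> 5.5 per 100,000 drivers
```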
Abstract:
Energy prices are highly volatile and often feature unexpected spikes. It is the aim of this paper to examine whether the occurrence of these extreme price events displays any regularities that can be captured using an econometric model. Here we treat these price events as point processes and apply Hawkes and Poisson autoregressive models to model the dynamics in the intensity of this process. We use load and meteorological information to model the time variation in the intensity of the process. The models are applied to data from the Australian wholesale electricity market, and a forecasting exercise illustrates both the usefulness of these models and their limitations when attempting to forecast the occurrence of extreme price events.
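As a hedged illustration of the point-process framing (assumed exponential excitation kernel and illustrative parameters; not the fitted models from the paper), the conditional intensity of a Hawkes process rises after each extreme price event and decays back towards a baseline:

```python
# Illustrative Hawkes conditional intensity with an exponential kernel:
#   lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
import numpy as np

def hawkes_intensity(t, event_times, mu=0.02, alpha=0.3, beta=1.5):
    """Conditional intensity at time t given past event times (same time units throughout)."""
    past = np.array([s for s in event_times if s < t])
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

spike_times = [10.0, 10.4, 11.1, 25.0]   # hypothetical spike times (days)
for t in (10.5, 12.0, 30.0):
    print(t, hawkes_intensity(t, spike_times))
```

A time-varying baseline mu(t), driven by load and meteorological covariates as described above, would replace the constant mu in a fuller specification.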
Abstract:
This thesis studied the source of instability in optical phase modulators used in high-accuracy laser measurement systems. Identifying the nonlinear origin of the amplitude noise helped to further reduce this instability in applications that rely on phase modulators. This outcome will have a positive impact on the development of new methods for amplitude noise suppression.
Abstract:
The objective of exercise training is to initiate desirable physiological adaptations that ultimately enhance physical work capacity. Optimal training prescription requires an individualized approach, with an appropriate balance of training stimulus and recovery and optimal periodization. Recovery from exercise involves integrated physiological responses. The cardiovascular system plays a fundamental role in facilitating many of these responses, including thermoregulation and delivery/removal of nutrients and waste products. As a marker of cardiovascular recovery, cardiac parasympathetic reactivation following a training session is highly individualized. It appears to parallel the acute/intermediate recovery of the thermoregulatory and vascular systems, as described by the supercompensation theory. The physiological mechanisms underlying cardiac parasympathetic reactivation are not completely understood. However, changes in cardiac autonomic activity may provide a proxy measure of the changes in autonomic input into organs and (by default) the blood flow requirements to restore homeostasis. Metaboreflex stimulation (e.g. muscle and blood acidosis) is likely a key determinant of parasympathetic reactivation in the short term (0–90 min post-exercise), whereas baroreflex stimulation (e.g. exercise-induced changes in plasma volume) probably mediates parasympathetic reactivation in the intermediate term (1–48 h post-exercise). Cardiac parasympathetic reactivation does not appear to coincide with the recovery of all physiological systems (e.g. energy stores or the neuromuscular system). However, this may reflect the limited data currently available on parasympathetic reactivation following strength/resistance-based exercise of variable intensity. In this review, we quantitatively analyse post-exercise cardiac parasympathetic reactivation in athletes and healthy individuals following aerobic exercise, with respect to exercise intensity and duration, and fitness/training status. Our results demonstrate that the time required for complete cardiac autonomic recovery after a single aerobic-based training session is up to 24 h following low-intensity exercise, 24–48 h following threshold-intensity exercise and at least 48 h following high-intensity exercise. Based on limited data, exercise duration is unlikely to be the greatest determinant of cardiac parasympathetic reactivation. Cardiac autonomic recovery occurs more rapidly in individuals with greater aerobic fitness. Our data lend support to the concept that in conjunction with daily training logs, data on cardiac parasympathetic activity are useful for individualizing training programmes. In the final sections of this review, we provide recommendations for structuring training microcycles with reference to cardiac parasympathetic recovery kinetics. Ultimately, coaches should structure training programmes tailored to the unique recovery kinetics of each individual.