920 results for SQUARE RESONATORS
Abstract:
This paper presents two novel nonlinear models of u-shaped anti-roll tanks for ships, together with their linearizations. In addition, a third, simplified nonlinear model is presented. The models are derived using Lagrangian mechanics. This formulation not only simplifies the modeling process, but also yields models that satisfy energy-related physical properties. The proposed nonlinear models and their linearizations are validated against model-scale experimental data. Unlike other models in the literature, the nonlinear models in this paper are valid for large roll amplitudes. Even at moderate roll angles, the nonlinear models have a mean square error, relative to experimental data, three orders of magnitude lower than that of the linear models.
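The reported error comparison can be illustrated with a short sketch (all traces below are hypothetical stand-ins, not the paper's data): the mean square error of each model's roll prediction is computed against a measured trace, and the models are compared via their MSE ratio.

```python
import numpy as np

def mean_square_error(predicted, measured):
    """Mean square error between a model's roll prediction and measurement."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean((predicted - measured) ** 2))

# Hypothetical roll-angle traces (rad): a "measured" signal plus two model outputs.
t = np.linspace(0.0, 10.0, 500)
measured = 0.30 * np.sin(1.2 * t)
nonlinear_model = 0.30 * np.sin(1.2 * t) + 0.001 * np.sin(3.6 * t)  # small residual
linear_model = 0.33 * np.sin(1.2 * t)                               # amplitude error

# Ratio of linear-model MSE to nonlinear-model MSE (large when the
# nonlinear model tracks the data much more closely).
ratio = mean_square_error(linear_model, measured) / mean_square_error(nonlinear_model, measured)
```

With these made-up traces the ratio comes out in the hundreds, loosely mirroring the "orders of magnitude" comparison in the abstract.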
Abstract:
Introduction The consistency of measured small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1]; this, however, requires the measurement of cross-axis profiles in a water tank, which makes output factor measurements time consuming. This project establishes at which field sizes the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a ‘small’ field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to a theoretical definition of a ‘small’ field. Methods Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition of a small field is as follows: if the output factor changes by ±1.0 % or more given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor.
The collimator scatter factor was further separated into primary source occlusion effects and ‘traditional’ effects (a combination of flattening filter scatter, jaw scatter etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each contributed to the change in output factor at small field sizes. Results The use of our practical definition resulted in field sizes of 15 mm or less being characterised as ‘small’. A change in field size had a greater effect than detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose on the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm. Discussion and conclusions The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to depend on the linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the ‘traditional’ definition of a small field [3]), it does not cause a greater change than photon scatter until the field size falls below 12 mm, at which point it becomes by far the dominant effect.
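The practical definition above lends itself to a simple check. The sketch below (hypothetical output factor values, assuming simulated field sizes spaced 1 mm apart as in the Methods) flags a field size as ‘small’ when an adjacent field size, standing in for a ±1 mm collimator error, changes the output factor by more than 1.0 %:

```python
def is_small_field(field_sizes_mm, output_factors, tolerance=0.01):
    """Flag field sizes as 'small' per the practical definition sketched
    in the abstract: a +/-1 mm collimator error (approximated here by the
    adjacent simulated field sizes, 1 mm apart) changes the output factor
    by more than `tolerance` (default 1.0 %)."""
    small = []
    for i, size in enumerate(field_sizes_mm):
        of = output_factors[i]
        neighbours = []
        if i > 0:
            neighbours.append(output_factors[i - 1])
        if i + 1 < len(output_factors):
            neighbours.append(output_factors[i + 1])
        if any(abs(n - of) / of > tolerance for n in neighbours):
            small.append(size)
    return small

# Hypothetical output factors for 13-17 mm square fields.
sizes = [13, 14, 15, 16, 17]
ofs = [0.80, 0.84, 0.87, 0.875, 0.878]
```

On these invented numbers, `is_small_field(sizes, ofs)` flags 13-15 mm, qualitatively matching the 15 mm boundary reported in the Results.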
Abstract:
Introduction The total scatter factor (or output factor) in megavoltage photon dosimetry is a measure of relative dose relating a certain field size to a reference field size. The use of solid phantoms is well established for output factor measurements; however, to date these phantoms have not been tested with small fields. In this work, we evaluate the water equivalency of a number of solid phantoms for small field output factor measurements using the EGSnrc Monte Carlo code. Methods The following small square field sizes were simulated using BEAMnrc: 5, 6, 7, 8, 10 and 30 mm. Each simulated phantom geometry was created in DOSXYZnrc and consisted of a silicon diode (of length and width 1.5 mm and depth 0.5 mm) submersed in the phantom at a depth of 5 g/cm2. The source-to-detector distance was 100 cm for all simulations. The dose was scored in a single voxel at the location of the diode. Interaction probabilities and radiation transport parameters for each material were created using custom PEGS4 files. Results A comparison of the output factors in the solid phantoms with the same factors in a water phantom is shown in Fig. 1. The statistical uncertainty in each point was less than or equal to 0.4 %. The results in Fig. 1 show that the density of the phantoms affected the output factor results, with higher density materials (such as PMMA) resulting in higher output factors. It was also calculated that scaling the depth for equivalent path length had a negligible effect on the output factor results at these field sizes. Discussion and conclusions Electron stopping power and photon mass energy absorption change minimally with small field size [1]. Also, it can be seen from Fig. 1 that the difference from water decreases with increasing field size. Therefore, the most likely cause of the observed discrepancies in output factors is differing electron disequilibrium as a function of phantom density.
When measuring small field output factors in a solid phantom, it is important that the density is very close to that of water.
Abstract:
Introduction Due to their high spatial resolution, diodes are often used for small field relative output factor measurements. However, a field-size-specific correction factor [1] is required to correct for diode over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water. This is accomplished by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct such a diode by placing an ‘air cap’ on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small field output factor measurements. Methods A water-tight ‘cap’ was constructed so that it could be placed over the upstream end of the diode. The cap could be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm), and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length from 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a manner similar to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated kQclin,Qmsr values [3], was used as the gold standard. Results The optimal air gap thickness for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (kQclin,Qmsr = 1.000 at all field sizes) to within 1 %. Discussion and conclusions The design of Charles et al. [2] has been verified experimentally.
An existing commercial diode has been converted into a correction-less small field diode by the simple addition of an ‘air cap’. Applying a cap to create the new diode also makes it dual purpose: without the cap, it remains an unmodified electron diode.
Abstract:
The purpose of this study was to investigate the effect of very small air gaps (less than 1 mm) on the dosimetry of small photon fields used for stereotactic treatments. Measurements were performed with optically stimulated luminescent dosimeters (OSLDs) for 6 MV photons on a Varian 21iX linear accelerator with a Brainlab µMLC attachment, for square field sizes down to 6 mm × 6 mm. Monte Carlo simulations were performed using the EGSnrc C++ user code cavity. It was found that the Monte Carlo model used in this study accurately simulated the OSLD measurements on the linear accelerator. For the 6 mm field size, a 0.5 mm air gap upstream of the active area of the OSLD caused a 5.3 % dose reduction relative to a Monte Carlo simulation with no air gap...
Abstract:
Numerical simulations of thermomagnetic convection of a paramagnetic fluid placed in a microgravity condition (g ≈ 0) under a uniform vertical gradient magnetic field, in an open-ended square enclosure with a ramp heating temperature condition applied on a vertical wall, are investigated in this study. In the presence of a strong magnetic field gradient, thermal convection of the paramagnetic fluid can take place even in a zero-gravity environment as a direct consequence of temperature differences occurring within the fluid. The thermal boundary layer develops adjacent to the hot wall as soon as the ramp temperature condition is applied. Two scenarios can be observed depending on the ramp heating time: the thermal boundary layer reaches steady state either before the ramp time has finished or after it. If the ramp time is longer than the quasi-steady time, then after the quasi-steady time the thermal boundary layer is in a quasi-steady mode with convection balancing conduction. Further increase of the heat input simply accelerates the flow to maintain the proper thermal balance. Finally, the boundary layer becomes completely steady when the ramp time is finished. The effects of magnetic Rayleigh number, Prandtl number and the paramagnetic fluid parameter on the flow pattern and heat transfer are presented.
Abstract:
Child care centers differ systematically with respect to the quality and quantity of physical activity they provide, suggesting that center-level policies and practices, as well as the center's physical environment, are important influences on children's physical activity behavior. Purpose To summarize and critically evaluate the extant peer-reviewed literature on the influence of child care policy and environment on physical activity in preschool-aged children. Methods A computer database search identified seven relevant studies that were categorized into three broad areas: cross-sectional studies investigating the impact of selected center-level policies and practices on moderate-to-vigorous physical activity (MVPA), studies correlating specific attributes of the outdoor play environment with the level and intensity of MVPA, and studies in which a specific center-level policy or environmental attribute was experimentally manipulated and evaluated for changes in MVPA. Results Staff education and training, as well as staff behavior on the playground, seem to be salient influences on MVPA in preschoolers. Lower playground density (fewer children per square meter) and the presence of vegetation and open play areas also seem to be positive influences on MVPA, although not all studies found these attributes to be significant. The availability and quality of portable play equipment, rather than the amount or type of fixed play equipment, significantly influenced MVPA levels. Conclusions Emerging evidence suggests that several policy and environmental factors contribute to the marked between-center variability in physical activity and sedentary behavior. Intervention studies targeting these factors are thus warranted.
Abstract:
Previous studies have demonstrated that pattern recognition approaches to accelerometer data reduction are feasible and moderately accurate in classifying activity type in children. Whether pattern recognition techniques can be used to provide valid estimates of physical activity (PA) energy expenditure in youth remains unexplored in the research literature. Purpose: The objective of this study is to develop and test artificial neural networks (ANNs) to predict PA type and energy expenditure (PAEE) from processed accelerometer data collected in children and adolescents. Methods: One hundred participants between the ages of 5 and 15 yr completed 12 activity trials that were categorized into five PA types: sedentary, walking, running, light-intensity household activities or games, and moderate-to-vigorous intensity games or sports. During each trial, participants wore an ActiGraph GT1M on the right hip, and V̇O2 was measured using the Oxycon Mobile (Viasys Healthcare, Yorba Linda, CA) portable metabolic system. ANNs to predict PA type and PAEE (METs) were developed using the following features: the 10th, 25th, 50th, 75th, and 90th percentiles and the lag-one autocorrelation. To determine the highest time resolution achievable, we extracted features from 10-, 15-, 20-, 30-, and 60-s windows. Accuracy was assessed by calculating the percentage of windows correctly classified and the root mean square error (RMSE). Results: As window size increased from 10 to 60 s, accuracy for the PA-type ANN increased from 81.3% to 88.4%. RMSE for the MET prediction ANN decreased from 1.1 METs to 0.9 METs. At any given window size, RMSE values for the MET prediction ANN were 30-40% lower than those of conventional regression-based approaches. Conclusions: ANNs can be used to predict both PA type and PAEE in children and adolescents using count data from a single waist-mounted accelerometer.
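The feature set described in the Methods (window percentiles plus the lag-one autocorrelation) can be sketched as follows; the function below is an illustrative reconstruction under stated assumptions (non-overlapping windows, population statistics), not the authors' code:

```python
import numpy as np

def window_features(counts, window_size):
    """Per non-overlapping window of accelerometer counts, compute the
    10th/25th/50th/75th/90th percentiles and the lag-one autocorrelation,
    mirroring the ANN input features described in the abstract."""
    features = []
    n_windows = len(counts) // window_size
    for w in range(n_windows):
        x = np.asarray(counts[w * window_size:(w + 1) * window_size], dtype=float)
        pcts = np.percentile(x, [10, 25, 50, 75, 90])
        # Lag-one autocorrelation of the window (0.0 for a constant window).
        x0, x1 = x[:-1], x[1:]
        denom = x0.std() * x1.std()
        lag1 = float(np.mean((x0 - x0.mean()) * (x1 - x1.mean())) / denom) if denom > 0 else 0.0
        features.append(list(pcts) + [lag1])
    return features
```

Each window thus yields a six-element feature vector that could be fed to a classifier or regressor.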
Abstract:
Objective The present study aimed to develop accelerometer cut points to classify physical activity (PA) by intensity in preschoolers and to investigate discrepancies in PA levels when applying various accelerometer cut points. Methods To calibrate the accelerometer, 18 preschoolers (5.8 ± 0.4 years) performed eleven structured activities and one free play session while wearing a GT1M ActiGraph accelerometer using 15 s epochs. The structured activities were chosen based on the direct observation system Children's Activity Rating Scale (CARS), while the criterion measure of PA intensity during free play was provided by a second-by-second observation protocol (modified CARS). Receiver operating characteristic (ROC) curve analyses were used to determine the accelerometer cut points. To examine the classification differences, accelerometer data from four consecutive days for 114 preschoolers (5.5 ± 0.3 years) were classified by intensity according to previously published cut points and the newly developed ones. Differences in predicted PA levels were evaluated using repeated measures ANOVA and chi-square tests. Results Cut points were identified at 373 counts/15 s for light (sensitivity: 86%; specificity: 91%; area under the ROC curve: 0.95), 585 counts/15 s for moderate (87%; 82%; 0.91) and 881 counts/15 s for vigorous PA (88%; 91%; 0.94). Applying various accelerometer cut points to the same data resulted in statistically and biologically significant differences in PA. Conclusions Accelerometer cut points were developed with good discriminatory power for differentiating between PA levels in preschoolers, and the choice of accelerometer cut points can result in large discrepancies.
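The abstract does not state which ROC criterion was used to select the cut points; a common choice, shown here purely as an assumed illustration, is to maximize Youden's J (sensitivity + specificity - 1) over candidate count thresholds:

```python
def best_cut_point(counts, is_active):
    """Pick the count threshold maximizing Youden's J.
    `counts[i]` is the epoch count; `is_active[i]` is True when the
    direct-observation criterion rated epoch i at or above the target
    intensity. Returns (threshold, J). Illustrative only."""
    positives = sum(is_active)
    negatives = len(is_active) - positives
    best_t, best_j = None, -1.0
    for t in sorted(set(counts)):
        tp = sum(1 for c, a in zip(counts, is_active) if a and c >= t)
        tn = sum(1 for c, a in zip(counts, is_active) if not a and c < t)
        j = tp / positives + tn / negatives - 1.0  # sensitivity + specificity - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

On perfectly separable toy data such as counts `[100, 200, 300, 700, 800, 900]` with the last three epochs active, the chosen threshold is 700 with J = 1.0.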
Abstract:
It has been argued that, for sustainable STEM education and knowledge investment, a human-centered learning design approach is critical. Sustainability in this context means the enduring maintenance of technological trajectories for productive economic and social interactions, demonstrated through life-critical scenarios, life-critical system development and life experiences. Technology influences ways of life and the learning and teaching process. Social software application development is more than learning how to program a software application and extracting information from the Internet. Hence, our research challenge is: how do we attract learners to STEM social software application development? Our realisation process began by comparing science and technology education in developed (e.g., Australia) and developing (e.g., Sri Lanka) countries, with a focus on final-year undergraduates’ industry-ready training programmes. Principal components analysis was performed to separate patterns of important factors. To measure the behavioural intention associated with perceived usefulness of, and attitudes towards, the training, the measurement model was analysed for validity and reliability using partial least squares (PLS) analysis within structural equation modelling (SEM). Our observation is that the relationship is more complex than we argued. Our initial conclusion was that life-critical system development and life-experience trajectories are determinant factors, while technological influences are unavoidable. Further investigation should examine correlations between the human-centered learning design approach and economic development in the long run.
Abstract:
Owing to its enticing properties, graphene has been envisioned for applications in electronics, photonics, sensors, bioapplications and other areas. To facilitate these applications, doping has frequently been used to manipulate the properties of graphene. Despite a number of studies on the electrical and chemical properties of doped graphene, the impact of doping on the mechanical properties of graphene has rarely been discussed. A systematic study of the vibrational properties of graphene doped with nitrogen and boron is performed by means of molecular dynamics simulation. The influence of different densities and species of dopants has been assessed. It is found that the impact on the quality factor, Q, varies greatly with dopant density, while the influence on the resonance frequency is insignificant. The reduction of the resonance frequency caused by doping with boron only is larger than that caused by doping with both boron and nitrogen. This study gives a fundamental understanding of the resonance of graphene with different dopants, which may benefit its application in resonators.
Abstract:
Using a Monte Carlo simulation technique, we have calculated the distribution of ion current extracted from low-temperature plasmas and deposited onto a substrate covered with a nanotube array. We have shown that a free-standing carbon nanotube is enclosed in a circular bead of ion current, whereas in square and hexagonal nanotube patterns the ion current is mainly concentrated along the lines connecting the nearest nanotubes. In a very dense array (with the ratio of inter-nanotube distance to nanotube height less than 0.05), the ions do not penetrate to the substrate surface and are deposited on the side surfaces of the nanotubes.
Abstract:
The results of a hybrid numerical simulation of the growth kinetics of carbon nanowall-like nanostructures in plasma- and neutral-gas-based synthesis processes are presented. The low-temperature plasma-based process was found to have a significant advantage over purely neutral flux deposition in providing a uniform size distribution of the nanostructures. It is shown that the nanowall width uniformity is best (square deviations not exceeding 1.05) in high-density plasmas of 3.0 × 10^18 m^-3, worsens in lower-density plasmas (up to 1.5 at 1.0 × 10^17 m^-3), and is worst (up to 1.9) in the neutral gas-based process. This effect has been attributed to the focusing of ion fluxes by the irregular electric field in the vicinity of plasma-grown nanostructures on a substrate biased at -20 V, and to differences in the two-dimensional adatom diffusion fluxes between the plasma- and neutral gas-based processes. The results of our numerical simulations are consistent with available experimental reports on the effect of plasma process parameters on the sizes and shapes of the relevant nanostructures.
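Assuming the quoted "square deviations" refer to the root-mean-square spread of the nanowall width distribution (an interpretation, not stated in the abstract), the uniformity measure could be computed as:

```python
import math

def width_uniformity(widths):
    """Root-mean-square deviation of nanowall widths about their mean;
    one hedged reading of the 'square deviation' uniformity measure.
    Smaller values indicate a more uniform width distribution."""
    n = len(widths)
    mean = sum(widths) / n
    return math.sqrt(sum((w - mean) ** 2 for w in widths) / n)
```

A perfectly uniform array gives 0.0, and the spread grows as the width distribution broadens, matching the trend plasma vs. neutral-gas comparison described above.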
Abstract:
Slippage in the roller-race contact has always played a central role in the diagnostics of rolling element bearings. Due to this phenomenon, vibrations triggered by a localized damage are not strictly periodic and are therefore not detectable by means of common spectral functions such as the power spectral density or the discrete Fourier transform. Owing to the strong second-order cyclostationary component characterizing these signals, techniques such as the cyclic coherence, its integrated form and the squared envelope spectrum have proven effective in a wide range of applications. An expert user can easily identify a damage and its location within the bearing components by looking for particular patterns of peaks in the output of the selected cyclostationary tool. These peaks are found in the neighborhood of specific frequencies, which can be calculated in advance as functions of the geometrical features of the bearing itself. Unfortunately, the non-periodicity of the vibration signal is not the only consequence of slippage: it often also involves a displacement of the damage characteristic peaks from the theoretically expected frequencies. This issue becomes particularly important when developing highly automated algorithms for bearing damage recognition; in order to correctly set thresholds and tolerances, a quantitative description of the magnitude of the above-mentioned deviations is needed. This paper aims at identifying the dependency of these deviations on the different operating conditions. This has been made possible by an extensive experimental campaign performed on a full-scale bearing test rig able to realistically reproduce the operating and environmental conditions typical of an industrial high-power electric motor and gearbox. The importance of load is investigated in detail for different bearing damages. Finally, some guidelines on how to cope with such deviations are given, based on the experience gained in the experimental activity.
Abstract:
We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above the ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead reckoning information which loses spatial fidelity over time due to error accumulation. We rectify the loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired, robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to the HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal directed path planning results of HiLAM in two different environments, an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.