130 results for geometric mean diameter


Relevance:

20.00%

Publisher:

Abstract:

This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics and to investigate their usefulness as predictors of quality assurance (QA) success and failure. 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements and correlations were investigated. The mean field area factor provided a threshold field size (5 cm², equivalent to a 2.2 x 2.2 cm square field) below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being only weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study provides some simple tests of plan accuracy, which may help minimise time spent on QA assessments of treatments that are unlikely to pass.
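
As a rough illustration of how the reported threshold could be applied as a screening test, the following sketch flags beams whose mean field area falls below the 5 cm² value quoted above. The beam data structure and segment areas are hypothetical, not taken from the study.

```python
# Hypothetical sketch: screen IMRT beams against the mean field area
# threshold reported in the abstract (5 cm^2). Segment areas are
# illustrative values, not real plan data.

MEAN_FIELD_AREA_THRESHOLD_CM2 = 5.0  # below this, all beams failed QA

def mean_field_area(segment_areas_cm2):
    """Average aperture area over all segments of a beam."""
    return sum(segment_areas_cm2) / len(segment_areas_cm2)

beams = {
    "beam_1": [3.1, 4.2, 5.6],    # mean ~4.3 cm^2 -> flagged
    "beam_2": [9.8, 12.4, 10.1],  # mean ~10.8 cm^2 -> ok
}

for name, areas in beams.items():
    mfa = mean_field_area(areas)
    status = "LIKELY QA FAILURE" if mfa < MEAN_FIELD_AREA_THRESHOLD_CM2 else "ok"
    print(f"{name}: mean field area = {mfa:.1f} cm^2 -> {status}")
```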

Relevance:

20.00%

Publisher:

Abstract:

Recently, mean-variance analysis has been proposed as a novel paradigm to model document ranking in Information Retrieval. The main merit of this approach is that it diversifies the ranking of retrieved documents. In its original formulation, the strategy considers both the mean of the relevance estimates of retrieved documents and their variance. However, when this strategy has been empirically instantiated, the concepts of mean and variance are discarded in favour of a point-wise estimation of relevance (to replace the mean) and of a parameter to be tuned or, alternatively, a quantity dependent upon the document length (to replace the variance). In this paper we revisit this ranking strategy by going back to its roots: mean and variance. For each retrieved document, we infer a relevance distribution from a series of point-wise relevance estimates provided by a number of different systems. This is used to compute the mean and the variance of the document's relevance estimates. On the TREC ClueWeb collection, we show that this approach improves retrieval performance. This development could lead to new strategies to address the fusion of relevance estimates provided by different systems.
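
A minimal sketch of the ranking idea, under the assumption that per-document relevance estimates from several systems are already normalised to a common scale; the score matrix and the risk parameter b below are hypothetical, and the mean-minus-weighted-variance objective is the standard mean-variance form rather than the paper's exact instantiation.

```python
import statistics

# Hypothetical sketch of mean-variance ranking: each document's relevance
# distribution is inferred from point-wise estimates given by several
# systems. Scores are illustrative, assumed on a common scale.
scores_by_doc = {
    "doc_a": [0.90, 0.70, 0.80],  # estimates from three systems
    "doc_b": [0.85, 0.84, 0.86],
    "doc_c": [0.60, 0.95, 0.40],
}

b = 1.0  # risk parameter: higher b penalises uncertain documents more

def mean_variance_score(estimates, b):
    """Rank by mean relevance minus b times the variance of the estimates."""
    return statistics.mean(estimates) - b * statistics.pvariance(estimates)

ranking = sorted(scores_by_doc,
                 key=lambda d: mean_variance_score(scores_by_doc[d], b),
                 reverse=True)
print(ranking)  # documents with high, consistent estimates rank first
```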

Relevance:

20.00%

Publisher:

Abstract:

The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. It is therefore vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures that the risk of catastrophic structural failures due to under-design, or of expensive waste due to over-design, is minimised. This paper estimates, for the first time, present-day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth-averaged hydrodynamic model has been configured for the Australian continental shelf region and forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each coastal grid point of the model, extreme value distributions have been fitted to the derived time series of annual maxima, and to the several largest water levels each year, to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia, a region mainly impacted by extra-tropical cyclones. However, as the meteorological forcing only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastline. In a companion paper we build on the work presented here to include tropical cyclone-induced surges more accurately in the estimation of extreme water levels. The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities but could be used more widely in the future for a variety of other research and practical applications.
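
To illustrate the final step described here, the sketch below fits a generalised extreme value (GEV) distribution to a series of annual maximum water levels and converts the fit into exceedance probabilities and a return level. The annual maxima are synthetic, and the choice of a GEV for annual maxima is a standard assumption in this kind of analysis, not a detail confirmed by the abstract.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: fit a GEV distribution to annual maximum water
# levels and estimate exceedance probabilities / return levels.
# The data below are synthetic; the paper's fitting choices may differ.
rng = np.random.default_rng(42)
annual_maxima = 1.5 + 0.3 * rng.gumbel(size=61)  # 61-year hindcast, metres

shape, loc, scale = stats.genextreme.fit(annual_maxima)

# Water level exceeded with 1% probability in any year (100-year level).
level_100yr = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"Estimated 100-year water level: {level_100yr:.2f} m")

# Annual exceedance probability of a given level, e.g. 2.5 m.
p_exceed = stats.genextreme.sf(2.5, shape, loc=loc, scale=scale)
print(f"Annual exceedance probability of 2.5 m: {p_exceed:.4f}")
```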

Relevance:

20.00%

Publisher:

Abstract:

Plant-based dried food products are popular commodities in the global market, and much research is focused on improving the products and processing techniques. Numerical modelling is highly applicable in this regard, and in this work a coupled meshfree particle-based two-dimensional (2-D) model was developed to simulate micro-scale deformations of plant cells during drying. Smoothed Particle Hydrodynamics (SPH) was used to model the viscous cell protoplasm (cell fluid) by approximating it as an incompressible Newtonian fluid. The visco-elastic cell wall was approximated as a neo-Hookean solid material augmented with a viscous term and modelled with a Discrete Element Method (DEM). Compared to a previous work [H. C. P. Karunasena, W. Senadeera, Y. T. Gu and R. J. Brown, Appl. Math. Model., 2014], this study proposes three model improvements: a linearly decreasing positive cell turgor pressure during drying, cell wall contraction forces, and cell wall drying. The improvements made the model more comparable with experimental findings on dried cell morphology and geometric properties such as cell area, diameter, perimeter, roundness, elongation and compactness. This single-cell model could be used as a building block for advanced tissue models, which are highly applicable to product and process optimisation in Food Engineering.
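
To make the geometric properties concrete, here is a small sketch that computes area (shoelace formula), perimeter, roundness and equivalent diameter for a 2-D cell outline represented as ordered boundary points. The boundary coordinates are hypothetical; the paper's models extract such measures from particle positions, which is not reproduced here.

```python
import math

# Hypothetical sketch: geometric properties of a 2-D cell outline given
# as ordered boundary points (x, y), of the kind used to compare
# dried-cell morphology (area, perimeter, roundness, diameter).
def cell_geometry(points):
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace formula term
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    roundness = 4.0 * math.pi * area / perimeter ** 2  # 1.0 for a circle
    equivalent_diameter = 2.0 * math.sqrt(area / math.pi)
    return area, perimeter, roundness, equivalent_diameter

# Regular octagon of circumradius 1 as an illustrative "cell".
octagon = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
           for k in range(8)]
print(cell_geometry(octagon))  # roundness close to, but below, 1.0
```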

Relevance:

20.00%

Publisher:

Abstract:

This paper seeks to explain how the selective securitization of infectious disease arose, and to analyze the policy successes from this move. It is argued that despite some success, such as the revised International Health Regulations (IHR) in 2005, there remain serious deficiencies in the political outputs from the securitization of infectious disease.

Relevance:

20.00%

Publisher:

Abstract:

Precisely controlled reactive chemical vapor synthesis of highly uniform, dense arrays of vertically aligned single-walled carbon nanotubes (SWCNTs) using a tailored trilayered Fe/Al2O3/SiO2 catalyst is demonstrated. Arrays in which more than 90% of the nanotubes are thick (>3 nm in diameter) can be produced by tailoring the thickness and microstructure of the secondary catalyst-supporting SiO2 layer, which is commonly overlooked. The proposed model, based on atomic force microanalysis, suggests that this tailoring leads to uniform and dense arrays of relatively large Fe catalyst nanoparticles on which the thick SWCNTs nucleate, while small nanotubes and amorphous carbon are effectively etched away. Our results resolve a persistent issue: the selective synthesis of thick vertically aligned SWCNTs (while avoiding multiwalled nanotubes and other carbon nanostructures), whose easily switchable thickness-dependent electronic properties enable advanced applications in nanoelectronic, energy, drug delivery, and membrane technologies.

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses an advanced computational technique for steel structures that provides two simulation capabilities simultaneously: a higher-order element formulation with element load effects (geometric nonlinearities) and the refined plastic hinge method (material nonlinearities). This advanced computational technique can capture the real second-order inelastic behaviour of a whole structure, which in turn ensures the structural safety and adequacy of the structure. The emphasis of this paper is therefore to advocate that the advanced computational technique can replace the traditional empirical design approach. At the same time, practitioners should be educated in how to make use of the advanced computational technique for second-order inelastic design, as this approach is the future of structural engineering design. This means the future engineer should understand the computational technique clearly, thoroughly grasp the behaviour of a structure as represented by the numerical analysis, and justify the numerical results correctly, especially since the fool-proof, ultimate finite element, one that is competent in modelling behaviour, user-friendly in numerical modelling and versatile for all structural forms and various materials, is yet to come. Hence, high-quality engineers are required who can confidently apply the advanced computational technique to the design of complex structures, and not vice versa.

Relevance:

20.00%

Publisher:

Abstract:

We consider a discrete agent-based model on a one-dimensional lattice, where each agent occupies L sites and attempts movements over a distance of d lattice sites. Agents obey a strict simple exclusion rule. A discrete-time master equation is derived using a mean-field approximation and careful probability arguments. In the continuum limit, nonlinear diffusion equations that describe the average agent occupancy are obtained. Averaged discrete simulation data are generated and shown to compare very well with the solution to the derived nonlinear diffusion equations. This framework allows us to approach a lattice-free result using all the advantages of lattice methods. Since different cell types have different shapes and speeds of movement, this work offers insight into population-level behavior of collective cellular motion.
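
A minimal simulation sketch of the lattice model described here, with agents of length L attempting jumps of d sites under strict simple exclusion. Lattice size, agent count and parameter values are illustrative, and the averaging over realisations and the continuum-limit derivation from the paper are not reproduced.

```python
import random

# Illustrative sketch of the 1-D exclusion model: each agent occupies L
# contiguous sites and attempts to move d sites left or right per step;
# a move is aborted if any newly required site is occupied.
N, L, d = 100, 2, 1          # lattice sites, agent length, jump distance
occupied = [False] * N

# Place agents at non-overlapping random positions (leftmost site stored).
agents = []
while len(agents) < 20:
    s = random.randrange(N - L + 1)
    if not any(occupied[s:s + L]):
        agents.append(s)
        for k in range(s, s + L):
            occupied[k] = True

def attempt_move(i):
    s = agents[i]
    t = s + random.choice((-d, d))       # proposed new leftmost site
    if t < 0 or t + L > N:
        return                           # reject: off-lattice
    newly_needed = set(range(t, t + L)) - set(range(s, s + L))
    if any(occupied[k] for k in newly_needed):
        return                           # reject: exclusion violated
    for k in range(s, s + L):
        occupied[k] = False
    for k in range(t, t + L):
        occupied[k] = True
    agents[i] = t

for _ in range(1000):        # one time step = one attempt per agent
    for i in random.sample(range(len(agents)), len(agents)):
        attempt_move(i)

print(sorted(agents))        # final leftmost positions of all agents
```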

Relevance:

20.00%

Publisher:

Abstract:

This study compared proximal femoral morphology in patients living in soft and hard water regions. The proximal femoral morphology of two groups of 70 patients living in hard and soft water regions, with a mean age of 72.3 years (range, 50 to 87), was measured using an antero-posterior radiograph of the non-operated hip with magnification adjusted. The medullary canal diameter at the level of the lesser trochanter (LT) was significantly wider in patients living in the hard water region (mean width 1.9 mm wider; p = 0.003). No statistically significant difference was found in the medullary canal width at 10 cm below the level of the LT, the Dorr index, or the Canal Bone Ratio (CBR). In conclusion, proximal femoral morphology does differ between patients living in soft and hard water areas. These results may have an important clinical bearing on patients undergoing total hip replacement surgery. Further research is needed to determine whether implant survivorship is affected in patients living in hard and soft water regions.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: There is a suggestion that the long wavelength-sensitive (LWS)-to-middle wavelength-sensitive (MWS) cone ratio in the retina is associated with myopia. The aim was to measure the LWS/MWS amplitude modulation ratio, an estimate of the LWS/MWS cone ratio, in young adult emmetropes and myopes. Methods: Multifocal visual evoked potentials were measured when the LWS and MWS cone systems were excited separately using the method of silent substitution. The 30 young adult participants (22 to 33 years) included 10 emmetropes (mean [±SD] refraction, +0.3 [±0.4] diopters [D]) and 20 myopes (mean [±SD] refraction, -3.4 [±1.7] D). Results: The LWS/MWS amplitude modulation ratios ranged from 0.56 to 1.80 in the central 3- to 13-degree diameter ring and from 0.94 to 1.91 in the peripheral 13- to 30-degree diameter ring. Within the central ring, the mean (±SD) ratios were 1.20 (±0.26) and 1.20 (±0.33) for the emmetropic and myopic groups, respectively. For the peripheral ring, the mean (±SD) ratios were 1.48 (±0.27) and 1.30 (±0.27), respectively. There were no significant differences in the ratios between the emmetropic and myopic groups for either the central (p = 0.99) or peripheral (p = 0.08) rings. For the peripheral ring, more myopic refractive error was associated with a lower LWS/MWS amplitude modulation ratio; refraction explained 16% (p = 0.02) of the variation in the ratio. Conclusions: The relationship between the LWS/MWS amplitude modulation ratio and refraction at 13 to 30 degrees indicates that a large longitudinal study of changes in refraction in persons with known cone ratio is required to determine whether a low LWS/MWS cone ratio is associated with myopia development.

Relevance:

20.00%

Publisher:

Abstract:

The estimation of the critical gap has been an issue since the 1970s, when gap acceptance was introduced to evaluate the capacity of unsignalized intersections. The critical gap is the shortest gap that a driver is assumed to accept; it cannot be measured directly, and a number of techniques have been developed to estimate the mean critical gap of a sample of drivers. This paper reviews the ability of the Maximum Likelihood technique and the Probability Equilibrium Method (PEM) to predict the mean and standard deviation of the critical gap, using a simulation of 100 drivers repeated 100 times for each flow condition. The Maximum Likelihood method gave consistent and unbiased estimates of the mean critical gap, whereas the Probability Equilibrium Method had a significant bias that was dependent on the flow in the priority stream. Both methods were reasonably consistent, although the Maximum Likelihood method was slightly better. If drivers are inconsistent, then again the Maximum Likelihood method is superior. A criticism levelled at the Maximum Likelihood method is that a distribution of the critical gap has to be assumed; it was shown that this does not significantly affect its ability to predict the mean and standard deviation of the critical gaps. Finally, the Maximum Likelihood method can produce reasonable estimates from observations of 25 to 30 drivers. A spreadsheet procedure for using the Maximum Likelihood method is provided in this paper. The PEM can be improved if the maximum rejected gap is used.
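
As a sketch of the Maximum Likelihood idea: each driver's critical gap is assumed to lie between the largest gap they rejected and the gap they finally accepted, so the likelihood is the product of the probabilities of those intervals under an assumed distribution. The log-normal assumption and the gap data below are illustrative, not the paper's actual data or spreadsheet procedure.

```python
import numpy as np
from scipy import stats, optimize

# Illustrative MLE sketch: driver i rejected all gaps up to r_i seconds
# and accepted a gap of a_i seconds, so their critical gap lies in
# (r_i, a_i). Critical gaps are assumed log-normal across drivers.
r = np.array([3.1, 2.5, 4.0, 3.6, 2.9, 3.3])  # max rejected gap (s), synthetic
a = np.array([5.2, 4.1, 6.3, 5.0, 4.4, 5.8])  # accepted gap (s), synthetic

def neg_log_likelihood(params):
    mu, sigma = params            # parameters of log(critical gap)
    if sigma <= 0:
        return np.inf
    cdf_a = stats.norm.cdf((np.log(a) - mu) / sigma)
    cdf_r = stats.norm.cdf((np.log(r) - mu) / sigma)
    return -np.sum(np.log(np.clip(cdf_a - cdf_r, 1e-12, None)))

res = optimize.minimize(neg_log_likelihood, x0=[np.log(4.0), 0.3],
                        method="Nelder-Mead")
mu, sigma = res.x
mean_tc = np.exp(mu + sigma**2 / 2)   # mean of the log-normal critical gap
print(f"Estimated mean critical gap: {mean_tc:.2f} s")
```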

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To determine the extent to which the accuracy of magnetic resonance imaging (MRI) based virtual 3-dimensional (3D) models of the intact orbit can approach that of the gold standard, computed tomography (CT) based models. The goal was to determine whether MRI is a viable alternative to CT scans in patients with isolated orbital fractures and penetrating eye injuries, pediatric patients, and patients requiring multiple scans in whom radiation exposure is ideally limited. Materials and Methods: Patients who presented with unilateral orbital fractures to the Royal Brisbane and Women’s Hospital from March 2011 to March 2012 were recruited to participate in this cross-sectional study. The primary predictor variable was the imaging technique (MRI vs CT). The outcome measurements were orbital volume (primary outcome) and geometric intraorbital surface deviations (secondary outcome) between the MRI- and CT-based 3D models. Results: Eleven subjects (9 male) were enrolled. The patients’ mean age was 30 years. On average, the MRI models underestimated the orbital volume of the CT models by 0.50 ± 0.19 cm³. The average intraorbital surface deviation between the MRI and CT models was 0.34 ± 0.32 mm, with 78 ± 2.7% of the surface within a tolerance of 0.5 mm. Conclusions: The volumetric differences of the MRI models are comparable to reported results from CT models. The intraorbital MRI surface deviations are smaller than the accepted tolerance for orbital surgical reconstructions. Therefore, the authors believe that MRI is an accurate radiation-free alternative to CT for the primary imaging and 3D reconstruction of the bony orbit.

Relevance:

20.00%

Publisher:

Abstract:

The dynamic nature of tissue temperature and of subcutaneous properties such as blood flow, fatness and metabolic rate leads to variation in local skin temperature. We therefore investigated the effects of using multiple regions of interest when calculating weighted mean skin temperature from four local sites. Twenty-six healthy males completed a single trial in a thermoneutral laboratory (mean ± SD: 24.0 (1.2) °C; 56 (8)% relative humidity; <0.1 m/s air speed). Mean skin temperature was calculated from four local sites (neck, scapula, hand and shin) in accordance with International Standards using digital infrared thermography. A 50 x 50 mm square, defined by strips of aluminium tape, created six unique regions of interest at each of the local sites: the top left quadrant, top right quadrant, bottom left quadrant, bottom right quadrant, centre quadrant and the entire square. The largest potential error in weighted mean skin temperature was calculated using a combination of (a) the coolest and (b) the warmest regions of interest at each of the local sites. Significant differences between the six regions of interest were observed at the neck (P < 0.01), scapula (P < 0.001) and shin (P < 0.05), but not at the hand (P = 0.482). The largest difference (± SEM) at each site was as follows: neck 0.2 (0.1) °C; scapula 0.2 (0.0) °C; shin 0.1 (0.0) °C; and hand 0.1 (0.1) °C. The largest potential error (mean ± SD) in weighted mean skin temperature was 0.4 (0.1) °C (P < 0.001), and the associated 95% limits of agreement for these differences were 0.2 to 0.5 °C. Although we observed differences in local and mean skin temperature depending on the region of interest employed, these differences were minimal and are not considered physiologically meaningful.
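
For reference, a minimal sketch of the four-site weighted mean skin temperature calculation and of how a region-of-interest error propagates into it. The site weightings shown follow the common ISO 9886 four-site scheme, which is an assumption here rather than a detail stated in the abstract, and the temperatures are illustrative.

```python
# Minimal sketch: four-site weighted mean skin temperature. The weights
# follow the common ISO 9886 four-site scheme (an assumption, not stated
# in the abstract); temperatures are illustrative values in deg C.
WEIGHTS = {"neck": 0.28, "scapula": 0.28, "hand": 0.16, "shin": 0.28}

def mean_skin_temperature(local_temps):
    """Weighted mean of local skin temperatures over the four sites."""
    return sum(WEIGHTS[site] * t for site, t in local_temps.items())

coolest = {"neck": 34.1, "scapula": 33.8, "hand": 32.5, "shin": 31.9}
warmest = {"neck": 34.3, "scapula": 34.0, "hand": 32.6, "shin": 32.0}

# The potential error from region-of-interest choice is the difference
# between weighted means built from the coolest and warmest regions.
error = mean_skin_temperature(warmest) - mean_skin_temperature(coolest)
print(f"Potential error in weighted mean skin temperature: {error:.2f} deg C")
```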

Relevance:

20.00%

Publisher:

Abstract:

Purpose: Skin temperature assessment has historically been undertaken with conductive devices affixed to the skin. With the development of technology, infrared devices are increasingly utilised in the measurement of skin temperature. Our purpose was therefore to evaluate the agreement between four skin temperature devices at rest, during exercise in the heat, and during recovery. Methods: Mean skin temperature (T̅sk) was assessed in thirty healthy males during 30 min of rest (24.0 ± 1.2°C, 56 ± 8%), a 30 min cycle in the heat (38.0 ± 0.5°C, 41 ± 2%), and 45 min of recovery (24.0 ± 1.3°C, 56 ± 9%). T̅sk was assessed at four sites using two conductive devices (thermistors, iButtons) and two infrared devices (infrared thermometer, infrared camera). Results: Bland–Altman plots demonstrated mean bias ± limits of agreement between the thermistors and iButtons as follows (rest, exercise, recovery): -0.01 ± 0.04, 0.26 ± 0.85, -0.37 ± 0.98°C; between the thermistors and the infrared thermometer: 0.34 ± 0.44, -0.44 ± 1.23, -1.04 ± 1.75°C; and between the thermistors and the infrared camera (rest, recovery): 0.83 ± 0.77, 1.88 ± 1.87°C. Pairwise comparisons of T̅sk found significant differences (p < 0.05) between the thermistors and both infrared devices during resting conditions, and significant differences between the thermistors and all other devices tested during exercise in the heat and recovery. Conclusions: These results indicate poor agreement between conductive and infrared devices at rest, during exercise in the heat, and during subsequent recovery. Infrared devices may not be suitable for monitoring T̅sk in the presence of, or following, metabolically and environmentally induced heat stress.
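
A minimal sketch of the Bland–Altman calculation underlying these comparisons: the mean bias between two paired measurement series and the 95% limits of agreement. The paired readings are synthetic, not data from the study.

```python
import numpy as np

# Minimal sketch of a Bland-Altman agreement analysis between two devices
# measuring the same quantity; the paired readings below are synthetic.
device_a = np.array([33.9, 34.2, 33.5, 34.8, 33.1, 34.0])  # e.g. thermistors
device_b = np.array([34.1, 34.0, 33.9, 34.6, 33.4, 34.3])  # e.g. infrared

diff = device_a - device_b
bias = diff.mean()                # mean bias between devices
loa = 1.96 * diff.std(ddof=1)     # half-width of 95% limits of agreement

print(f"Mean bias: {bias:.2f} deg C")
print(f"95% limits of agreement: {bias - loa:.2f} to {bias + loa:.2f} deg C")
```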