406 results for Trivariate Normal Distribution


Relevance: 20.00%

Publisher:

Abstract:

Purpose: The component modules in the standard BEAMnrc distribution may appear to be insufficient to model micro-multileaf collimators that have tri-faceted leaf ends and complex leaf profiles. This note indicates, however, that accurate Monte Carlo simulations of radiotherapy beams defined by a complex collimation device can be completed using BEAMnrc's standard VARMLC component module. Methods: That this simple collimator model can produce spatially and dosimetrically accurate micro-collimated fields is illustrated using comparisons with ion chamber and film measurements of the dose deposited by square and irregular fields incident on planar, homogeneous water phantoms. Results: Monte Carlo dose calculations for on- and off-axis fields are shown to produce good agreement with experimental values, even upon close examination of the penumbrae. Conclusions: The use of a VARMLC model of the micro-multileaf collimator, along with a commissioned model of the associated linear accelerator, is therefore recommended as an alternative to the development or use of in-house or third-party component modules for simulating stereotactic radiotherapy and radiosurgery treatments. Simulation parameters for the VARMLC model are provided which should allow other researchers to adapt and use this model to study clinical stereotactic radiotherapy treatments.


Local climate is a critical element in the design of energy efficient buildings. In this paper, ten years of historical weather data for Australia's eight capital cities were profiled and analysed to characterize the variations of climatic variables in Australia. The method of descriptive statistics was employed, and both cumulative distribution patterns and percentage distribution profiles are presented. It was found that although weather variables vary between locations, there is often a good, nearly linear relation between a weather variable and its cumulative percentage over the middle portion of the cumulative curves. By comparing the slopes of these distribution profiles, it may be possible to determine the relative range of variation of a particular weather variable for a given city. The implications of these distribution profiles of key weather variables for energy efficient building design are also discussed.
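The slope comparison described above can be sketched in a few lines: fit a straight line to the near-linear middle portion of a variable's empirical cumulative distribution. This is a minimal illustration with synthetic temperature data, not the paper's dataset; the 20-80% window is an assumed choice of "middle portion".

```python
import numpy as np

def middle_cdf_slope(values, lo=0.2, hi=0.8):
    """Fit a straight line to the middle portion of the empirical
    cumulative distribution and return its slope (fraction per unit)."""
    x = np.sort(np.asarray(values, dtype=float))
    cum = np.arange(1, len(x) + 1) / len(x)           # cumulative fraction
    mask = (cum >= lo) & (cum <= hi)                  # keep the near-linear middle
    slope, _ = np.polyfit(x[mask], cum[mask], 1)
    return slope

# A steeper slope means the variable is concentrated in a narrower range.
temps = np.random.default_rng(0).normal(20, 5, 3650)  # ~ten years of daily temps
print(round(middle_cdf_slope(temps), 3))
```

Comparing this slope across cities gives the relative spread of a weather variable without assuming any particular distribution.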


Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, enabling recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier.
Further, the domains for the modelling of session variation were contrasted and found to share a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques, due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, while being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
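The hybrid GMM mean supervector SVM classifier mentioned above can be sketched as follows: adapt a universal background model's means towards each utterance, stack the adapted means into one "supervector", and train a linear SVM on target versus impostor supervectors. This is a simplified illustration with random stand-in features, not the thesis's system; the relevance factor `r=16` and the 4-component, 12-dimensional UBM are arbitrary assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Universal background model (UBM): a GMM trained on pooled "speech" features.
ubm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
ubm.fit(rng.normal(0, 1, (2000, 12)))                 # stand-in for MFCC features

def supervector(utterance, ubm, r=16.0):
    """MAP-adapt the UBM means to one utterance and stack them
    into a single high-dimensional 'GMM mean supervector'."""
    post = ubm.predict_proba(utterance)               # frame-level responsibilities
    n = post.sum(axis=0)[:, None]                     # zeroth-order statistics
    ex = post.T @ utterance                           # first-order statistics
    alpha = n / (n + r)                               # relevance-factor weighting
    means = alpha * (ex / np.maximum(n, 1e-9)) + (1 - alpha) * ubm.means_
    return means.ravel()

# One supervector per utterance; the SVM separates target from impostors.
target = [supervector(rng.normal(0.5, 1, (200, 12)), ubm) for _ in range(5)]
imps = [supervector(rng.normal(0.0, 1, (200, 12)), ubm) for _ in range(20)]
svm = SVC(kernel="linear").fit(target + imps, [1] * 5 + [0] * 20)
test = supervector(rng.normal(0.5, 1, (200, 12)), ubm)
print(svm.predict([test])[0])  # 1 would mean accepted as the target speaker
```

The impostor examples here correspond to the "background dataset" of the final theme: which candidate impostors to include is exactly the selection problem the thesis addresses.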


On-axis monochromatic higher-order aberrations increase with age. Few studies have examined peripheral refraction along the horizontal meridian of older eyes, and none their off-axis higher-order aberrations. We measured wave aberrations over the central 42° × 32° visual field for a 5 mm pupil in 10 young and 7 older emmetropes. Patterns of peripheral refraction were similar in the two groups. Coma increased linearly with field angle at a significantly higher rate in older than in young emmetropes (−0.018±0.007 versus −0.006±0.002 µm/deg). Spherical aberration was almost constant over the measured field in both age groups, and mean values across the field were significantly higher in older than in young emmetropes (+0.08±0.05 versus +0.02±0.04 µm). Total root-mean-square and higher-order aberrations increased more rapidly with field angle in the older emmetropes. However, the limits to monochromatic peripheral retinal image quality are largely determined by the second-order aberrations, which do not change markedly with age, and under normal conditions the relative importance of the increased higher-order aberrations in older eyes is lessened by the reduction in pupil diameter with age. It is therefore unlikely that peripheral visual performance deficits observed in normal older individuals are primarily attributable to the increased impact of higher-order aberrations.


The existence of any film genre depends on the effective operation of distribution networks. Contingencies of distribution play an important role in determining the content of individual texts and the characteristics of film genres; they enable new genres to emerge at the same time as they impose limits on generic change. This article sets out an alternative way of doing genre studies, based on an analysis of distributive circuits rather than film texts or generic categories. Our objective is to provide a conceptual framework that can account for the multiple ways in which distribution networks leave their traces on film texts and audience expectations, with specific reference to international horror networks, and to offer some preliminary suggestions as to how distribution analysis can be integrated into existing genre studies methodologies.


A plethora of literature exists on irrigation development. However, only a few studies analyse the distributional issues associated with irrigation-induced technological change (IITC) in the context of commodity markets. Furthermore, these studies deal only with theoretical arguments, and to date no proper investigation has been conducted to examine the long-term benefits of adopting modern irrigation technology. This study investigates the long-term benefit changes of irrigation-induced technological change using data from Sri Lanka, with reference to rice farming. The results show that: (1) adopting modern irrigation technology increases overall social welfare through consumption of a larger quantity at a lower cost; (2) the magnitude, sensitivity and distribution of the gains depend on the price elasticities of demand and supply as well as the size of the marketable surplus; (3) non-farm sector gains are larger than farm sector gains; (4) the distribution of the benefits among different types of producers depends on the magnitude of the expansion of the irrigated areas as well as the competition faced by traditional farmers; (5) selective technological adoption and subsidies have a detrimental effect on the welfare of producers who do not enjoy the same benefits; and (6) the short-term distributional effects among different groups of farmers are more severe than the long-term effects.


In this paper, the optimal allocation and sizing of distributed generators (DGs) in a distribution system is studied. To achieve this goal, an optimization problem is solved whose main objectives are to minimize the DG cost and to maximise reliability simultaneously. The active power balance between loads and DGs during the isolation time is used as a constraint. Load shedding is also considered: if the total active power of the DGs in a zone isolated by the sectionalizers because of a fault is less than the total active power of the loads located in that zone, loads are shed one by one according to a priority rule until the active power balance is satisfied. This assumption decreases the reliability index SAIDI compared with the case in which all loads in a zone are shed whenever total DG power is less than total load power. To validate the proposed method, a 17-bus distribution system is employed and the results are analysed.
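The one-by-one priority shedding rule described above can be sketched as a simple loop (the load names, powers and priority values below are hypothetical, not from the paper's 17-bus system):

```python
def shed_loads(dg_power, loads):
    """Shed the lowest-priority loads one by one until the DG capacity
    in the islanded zone covers the remaining demand.
    `loads` is a list of (name, kW, priority); higher priority is kept longer."""
    kept = sorted(loads, key=lambda l: l[2], reverse=True)  # most important first
    shed = []
    while kept and sum(l[1] for l in kept) > dg_power:
        shed.append(kept.pop())                 # drop the least important load
    return kept, shed

kept, shed = shed_loads(300, [("hospital", 120, 3), ("mall", 150, 1),
                              ("homes", 100, 2)])
print([l[0] for l in shed])                     # loads removed to restore balance
```

With 300 kW of DG capacity against 370 kW of demand, only the lowest-priority load is shed, rather than the whole zone, which is what lowers SAIDI relative to all-or-nothing shedding.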


As network capacity has increased over the past decade, individuals and organisations have found it increasingly appealing to make use of remote services in the form of service-oriented architectures and cloud computing services. Data processed by remote services, however, is no longer under the direct control of the individual or organisation that provided the data, leaving data owners at risk of data theft or misuse. This paper describes a model by which data owners can control the distribution and use of their data throughout a dynamic coalition of service providers using digital rights management technology. Our model allows a data owner to establish the trustworthiness of every member of a coalition employed to process data, and to communicate a machine-enforceable usage policy to every such member.


In this thesis, the issue of incorporating uncertainty for environmental modelling informed by imagery is explored by considering uncertainty in deterministic modelling, measurement uncertainty and uncertainty in image composition. Incorporating uncertainty in deterministic modelling is extended for use with imagery using the Bayesian melding approach. In the application presented, slope steepness is shown to be the main contributor to total uncertainty in the Revised Universal Soil Loss Equation. A spatial sampling procedure is also proposed to assist in implementing Bayesian melding given the increased data size with models informed by imagery. Measurement error models are another approach to incorporating uncertainty when data is informed by imagery. These models for measurement uncertainty, considered in a Bayesian conditional independence framework, are applied to ecological data generated from imagery. The models are shown to be appropriate and useful in certain situations. Measurement uncertainty is also considered in the context of change detection when two images are not co-registered. An approach for detecting change in two successive images is proposed that is not affected by registration. The procedure uses the Kolmogorov-Smirnov test on homogeneous segments of an image to detect change, with the homogeneous segments determined using a Bayesian mixture model of pixel values. Using the mixture model to segment an image also allows for uncertainty in the composition of an image. This thesis concludes by comparing several different Bayesian image segmentation approaches that allow for uncertainty regarding the allocation of pixels to different ground components. Each segmentation approach is applied to a data set of chlorophyll values and shown to have different benefits and drawbacks depending on the aims of the analysis.
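The registration-free change detection step described above can be sketched directly: compare the pixel-value distributions of a homogeneous segment at two dates with a Kolmogorov-Smirnov test. This is a minimal illustration on synthetic segments, not the thesis's data; the significance level and segment values are assumptions, and the Bayesian mixture segmentation that produces the homogeneous segments is omitted.

```python
import numpy as np
from scipy.stats import ks_2samp

def segment_changed(seg_t1, seg_t2, alpha=0.01):
    """Flag change in a homogeneous segment by comparing the pixel-value
    distributions of the two dates; co-registration is irrelevant because
    only the distributions, not pixel positions, are compared."""
    stat, p = ks_2samp(seg_t1.ravel(), seg_t2.ravel())
    return p < alpha

rng = np.random.default_rng(2)
stable = rng.normal(0.4, 0.05, (40, 40))    # same ground cover at both dates
cleared = rng.normal(0.7, 0.05, (40, 40))   # reflectance after a change event
print(segment_changed(stable, cleared))
```

Because the test only sees the two value distributions, a shift or rotation between the images leaves the result unchanged, which is the point of the approach.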


This paper reports the distribution of polycyclic aromatic hydrocarbons (PAHs) in urban stormwater wash-off in Gold Coast, Australia. Runoff samples collected from residential, industrial and commercial sites were separated into a dissolved fraction (<0.45 µm) and three particulate fractions (0.45–75 µm, 75–150 µm and >150 µm). Patterns in the distribution of PAHs among the fractions were investigated using Principal Component Analysis. Regardless of land use and particle size fraction characteristics, the presence of organic carbon plays a dominant role in the distribution of PAHs. PAH concentrations were also found to decrease with rainfall duration. Generally, the 1- and 2-year average recurrence interval rainfall events were associated with the majority of the PAHs, and the wash-off was a source-limiting process. In the context of stormwater quality mitigation, targeting the initial part of the rainfall event is the most effective treatment strategy. The implications of the study results for urban stormwater quality management are also discussed.
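The kind of pattern analysis used above, Principal Component Analysis over concentrations in several fractions, can be sketched as follows. The data matrix here is synthetic and deliberately built so that PAH levels track organic carbon, merely to show how a dominant common factor surfaces as the first component; it is not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Hypothetical matrix: rows are runoff samples, columns are PAH
# concentrations in four size fractions plus total organic carbon.
toc = rng.uniform(1, 10, 30)                          # organic carbon content
pahs = toc[:, None] * rng.uniform(0.5, 1.5, (30, 4))  # PAHs track organic carbon
data = np.column_stack([pahs, toc])

# Standardise, then project onto the first two principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform((data - data.mean(0)) / data.std(0))
print(pca.explained_variance_ratio_.round(2))
```

When one driver (here, organic carbon) dominates all fractions, the first component absorbs most of the variance, which is the signature the paper's analysis looks for.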


Objective: To assess the effect of graded increases in exercise-induced energy expenditure (EE) on appetite, energy intake (EI), total daily EE and body weight in men living in their normal environment and consuming their usual diets. Design: Within-subject, repeated measures design. Six men (mean (s.d.) age 31.0 (5.0) y; weight 75.1 (15.96) kg; height 1.79 (0.10) m; body mass index (BMI) 23.3 (2.4) kg/m2) were each studied three times during a 9 day protocol, corresponding to prescriptions of no exercise (control) (Nex; 0 MJ/day), a medium exercise level (Mex; ~1.6 MJ/day) and a high exercise level (Hex; ~3.2 MJ/day). On days 1-2 subjects were given a medium fat (MF) maintenance diet (1.6 × resting metabolic rate (RMR)). Measurements: On days 3-9 subjects self-recorded dietary intake using a food diary and self-weighed intake. EE was assessed by continual heart rate (HR) monitoring, using the modified FLEX method. Subjects' HR was individually calibrated against submaximal VO2 during incremental exercise tests at the beginning and end of each 9 day study period. Respiratory exchange was measured by indirect calorimetry. Subjects completed hourly hunger ratings during waking hours to record subjective sensations of hunger and appetite. Body weight was measured daily. Results: EE amounted to 11.7, 12.9 and 16.8 MJ/day (F(2,10)=48.26; P<0.001 (s.e.d.=0.55)) on the Nex, Mex and Hex treatments, respectively. The corresponding values for EI were 11.6, 11.8 and 11.8 MJ/day (F(2,10)=0.10; P=0.910 (s.e.d.=0.10)). There were no significant treatment effects on hunger, appetite or body weight, although there was evidence of weight loss on the Hex treatment. Conclusion: Increasing EE did not lead to compensation of EI over 7 days. However, total daily EE tended to decrease over time on the two exercise treatments. Lean men appear able to tolerate a considerable exercise-induced negative energy balance over 7 days without invoking compensatory increases in EI.
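The heart-rate-to-EE conversion used above (the FLEX method) can be sketched in a few lines: below an individually determined FLEX heart rate, EE is taken as resting metabolic rate; above it, EE follows the subject's own HR-vs-EE calibration line from the incremental exercise test. All numbers below (calibration points, FLEX point, RMR) are hypothetical, purely to illustrate the arithmetic.

```python
import numpy as np

def flex_ee(hr_minutes, calib_hr, calib_ee, rmr_kj_min, flex_hr):
    """Estimate energy expenditure from minute-by-minute heart rate:
    below the FLEX point assume resting metabolic rate; above it, apply
    the individual's HR-vs-EE calibration line from the exercise test."""
    slope, intercept = np.polyfit(calib_hr, calib_ee, 1)
    hr = np.asarray(hr_minutes, dtype=float)
    ee = np.where(hr > flex_hr, slope * hr + intercept, rmr_kj_min)
    return ee.sum()                                   # kJ over the recorded period

# Hypothetical calibration from an incremental exercise test.
calib_hr = [90, 110, 130, 150]          # beats/min
calib_ee = [15, 25, 35, 45]             # kJ/min, by indirect calorimetry
day_hr = [70] * 60 + [120] * 30         # one sedentary hour + 30 active minutes
print(round(flex_ee(day_hr, calib_hr, calib_ee, rmr_kj_min=5.0, flex_hr=100)))
```

Repeating the calibration at the start and end of each 9 day period, as the study did, guards against drift in the individual HR-EE relationship.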


ROLE OF THE LOW AFFINITY β1-ADRENERGIC RECEPTOR IN NORMAL AND DISEASED HEARTS Background: The β1-adrenergic receptor (AR) has at least two binding sites, β1HAR and β1LAR (the high- and low-affinity sites of the β1AR, respectively), both of which mediate cardiostimulation. Some β-blockers, for example (-)-pindolol and (-)-CGP 12177, can activate β1LAR at higher concentrations than those required to block β1HAR. While β1HAR can be blocked by all clinically used β-blockers, β1LAR is relatively resistant to blockade. Thus, chronic β1LAR activation may occur in the setting of β-blocker therapy, thereby mediating persistent βAR signaling, and it is important to determine the potential significance of β1LAR in vivo, particularly in disease settings. Methods and results: C57Bl/6 male mice were used. Chronic (4 week) β1LAR activation was achieved by treatment with (-)-CGP 12177 via osmotic minipump. Cardiac function was assessed by echocardiography and catheterization. (-)-CGP 12177 treatment in healthy mice increased heart rate and left ventricular (LV) contractility without detectable LV remodelling or hypertrophy. In mice subjected to an 8-week period of aortic banding, (-)-CGP 12177 treatment given during weeks 4-8 led to a positive inotropic effect. However, (-)-CGP 12177 treatment exacerbated LV remodelling, indicated by a worsening of LV hypertrophy by ??% (estimated by weight, wall thickness and cardiomyocyte size) and of interstitial/perivascular fibrosis (by histology). Importantly, (-)-CGP 12177 treatment of aorta-banded mice exacerbated cardiac expression of hypertrophic, fibrogenic and inflammatory genes (all p<0.05 vs. non-treated controls with aortic banding). Conclusion: β1LAR activation provides functional support to the heart in both normal and diseased (pressure overload) settings. Sustained β1LAR activation in the diseased heart exacerbates LV remodelling and may therefore promote disease progression from compensatory hypertrophy to heart failure.


At the Mater Children’s Hospital, approximately 80% of patients presenting with Adolescent Idiopathic Scoliosis requiring corrective surgery receive a fulcrum bending radiograph. The fulcrum bending radiograph provides a measurement of spine flexibility and a better indication of achievable surgical correction than lateral-bending radiographs (Cheung and Luk, 1997; Hay et al 2008). The magnitude and distribution of the corrective force exerted by the bolster on the patient’s body is unknown. The objective of this pilot study was to measure, for the first time, the forces transmitted to the patient’s ribs through the bolster during the fulcrum bending radiograph.


The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia related complications. Stress electrocardiography/exercise testing is predictive of 10 year risk of CVD events and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). 
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This can be mitigated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the MR for time-segmented summary data (dataset F) being 9.8 and that for raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values lying outside the accepted normal range, is associated with some improvement in model performance.
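The evaluation pattern described above, majority-class under-sampling for training plus Kappa and AUC rather than raw misclassification rate for scoring, can be sketched as follows. The data here are synthetic stand-ins with a roughly 5% positive class, not the anaesthesia dataset, and the decision tree stands in for the thesis's J48.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(3)

# Unbalanced stand-in data: ~5% positive "CVD" cases; columns mimic
# time-series summaries plus risk factors.
X = rng.normal(0, 1, (1000, 6))
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 1000) > 2.5).astype(int)

# Majority-class under-sampling: keep all positives, matched negatives.
pos = np.where(y == 1)[0]
neg = rng.choice(np.where(y == 0)[0], size=len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])

# Kappa and AUC are less misleading than raw misclassification rate
# when the class distribution is skewed.
print(round(cohen_kappa_score(y, clf.predict(X)), 2),
      round(roc_auc_score(y, clf.predict_proba(X)[:, 1]), 2))
```

A classifier that simply predicted the majority class would score near 95% accuracy here but Kappa of 0 and AUC of 0.5, which is why the thesis leans on those metrics.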


In this paper, both distributed generators (DGs) and capacitors are allocated and sized optimally to reduce line losses and improve reliability. The objective function comprises the investment cost of the DGs and capacitors along with loss and reliability costs, all converted to dollar terms. Bus voltages and line currents are treated as constraints which must be satisfied during the optimization procedure. Hybrid Particle Swarm Optimization, a heuristic technique, is used as the optimization method. The IEEE 69-bus test system is modified and employed to evaluate the proposed algorithm. The results illustrate that the lowest-cost plan is found by optimizing both DGs and capacitors in distribution networks.
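The cost-minimising particle swarm search described above can be sketched with a toy objective (a basic PSO, not the paper's hybrid variant; the cost coefficients and the notional 2 MW / 1 MVAr feeder demand are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def cost(x):
    """Toy planning objective: investment cost of a DG (x[0], MW) and a
    capacitor (x[1], MVAr) plus a loss/reliability penalty that falls as
    compensation approaches the feeder's notional 2 MW / 1 MVAr demand."""
    dg, cap = x
    return 10 * dg + 4 * cap + 50 * ((dg - 2.0) ** 2 + (cap - 1.0) ** 2)

# Minimal particle swarm: each particle tracks its personal best; the
# swarm's global best steers everyone.
n, dims, w, c1, c2 = 20, 2, 0.6, 1.5, 1.5
pos = rng.uniform(0, 4, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 4)              # keep sizes within feasible range
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print(gbest.round(2))
```

In the paper's formulation the penalty term would be replaced by load-flow-based loss and reliability costs, and voltage/current constraints would be enforced during the update, but the swarm mechanics are the same.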