920 results for probability distribution
Abstract:
In multi-area ATC (Available Transfer Capability) decision-making in an electricity market environment, the existing transmission network resources should be optimally dispatched and employed in a coordinated manner, on the premise that secure system operation is maintained and the associated risk is controllable. Non-sequential Monte Carlo simulation is used to determine the ATC probability density distribution of specified areas under the influence of several uncertainty factors. Based on this, a coordinated probabilistic optimal decision-making model for multi-area ATC is developed, with maximal risk benefit as its objective. The NSGA-II algorithm is applied to calculate the ATC of each area, taking into account the risk cost caused by the relevant uncertainty factors and the synchronous coordination among areas. The essential characteristics of the developed model and the employed algorithm are illustrated using the IEEE 118-bus test system. Simulation results show that the risk of multi-area ATC decision-making is influenced by the uncertainties in power system operation and by the relative importance of the different areas.
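A minimal sketch of the non-sequential Monte Carlo step described above: uncertain quantities (here, hypothetical load deviations and line outages) are sampled independently per trial, a per-trial ATC value is computed by a placeholder evaluation function, and the resulting sample yields an empirical probability density. All names, parameter values and the toy ATC evaluation are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def atc_sample(rng):
    """One non-sequential Monte Carlo trial (toy stand-in for a real ATC evaluation).

    Samples system states independently of time order:
    - area load as a normally distributed deviation around a forecast,
    - line availability as independent Bernoulli outages.
    """
    load = 1.0 + 0.05 * rng.standard_normal()   # per-unit area load
    lines_up = rng.random(10) > 0.02            # assumed 2% forced-outage rate per line
    capacity = 1.6 * lines_up.mean()            # crude transfer-limit proxy
    return max(capacity - load, 0.0)            # remaining transfer capability

samples = np.array([atc_sample(rng) for _ in range(20_000)])

# Empirical probability density of ATC from the Monte Carlo sample.
density, edges = np.histogram(samples, bins=50, density=True)
print(f"mean ATC = {samples.mean():.3f} p.u., 5th percentile = {np.percentile(samples, 5):.3f} p.u.")
```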
Abstract:
In this paper we report a new neutron Compton scattering (NCS) measurement of the ground-state single-atom kinetic energy of polycrystalline beryllium at momentum transfers in the range 27–104 Å⁻¹ and temperatures in the range 110–1150 K. The measurements have been made with the electron-volt spectrometer (eVS) at the ISIS facility, and the measured kinetic energies are shown to be ≈10% higher than calculations made in the harmonic approximation.
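For context, the harmonic-approximation benchmark referred to above is conventionally the mean kinetic energy per atom obtained from the phonon density of states; a standard textbook form (assumed here, not quoted from this abstract) is:

```latex
% Mean single-atom kinetic energy of a harmonic solid, with g(\omega) the
% normalised phonon density of states (standard result, stated as an assumption):
\langle E_K \rangle \;=\; \frac{3}{4} \int_0^{\infty} g(\omega)\,\hbar\omega\,
\coth\!\left(\frac{\hbar\omega}{2 k_B T}\right) d\omega
```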
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the 'gold standard' for predicting dose deposition in the patient [1]. This project has three main aims:
1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine.
2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine.
3. To investigate the radiobiological significance of any differences between the TPS patient dose distribution and the MC dose distribution, in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP).
The work presented here addresses the first two aims.

Methods: (1a) Plan importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resulting files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model, which allows an accurate simulation of the patient treatment field. (1b) Dose simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and the treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS and MC dose distributions. These implementations are independent of spatial resolution and can interpolate where needed for comparison.

Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes.

Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant.

Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
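As an illustration of the plan-import step (aim 1), the sketch below reads the beam parameters named in the abstract from a DICOM RT Plan file. It uses the Python pydicom library rather than the PixelMed Java toolkit the project actually used, and the file name is a placeholder; the attribute names follow the DICOM RT Plan standard.

```python
import pydicom  # pip install pydicom

# "rtplan.dcm" is a placeholder path; any DICOM RT Plan export should work.
plan = pydicom.dcmread("rtplan.dcm")

# Monitor units are stored per referenced beam in the fraction group.
for ref_beam in plan.FractionGroupSequence[0].ReferencedBeamSequence:
    print(f"beam {ref_beam.ReferencedBeamNumber}: {ref_beam.BeamMeterset} MU")

# Gantry orientation and jaw positions live in each beam's control points.
for beam in plan.BeamSequence:
    cp0 = beam.ControlPointSequence[0]
    print(f"{beam.BeamName}: gantry {cp0.GantryAngle} deg")
    for dev in cp0.BeamLimitingDevicePositionSequence:
        # e.g. ASYMX/ASYMY are the jaw pairs; MLCX would give leaf positions.
        print(f"  {dev.RTBeamLimitingDeviceType}: {dev.LeafJawPositions}")
```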
Abstract:
Dehydration of food materials requires the removal of water. This removal of moisture prevents the growth and reproduction of microorganisms that cause decay, and minimizes many of the moisture-driven deterioration reactions (Brennan, 1994). However, during drying many other changes occur simultaneously, resulting in a modified overall quality (Kompany et al., 1993). Among the physical attributes of dried food materials, porosity and microstructure are important ones that can dominate other quality attributes of dried foods (Aguilera et al., 2000). In addition, these two quality attributes are affected by process conditions, material composition and the raw structure of the foodstuff. In this work, the temperature and moisture distributions within food materials during microwave drying are considered, to observe their contribution to the microstructure and porosity of the finished product. Apple was selected as the material for this work. Generally, most food materials are found with non-uniform moisture content. To develop a non-uniform temperature distribution, food materials were dried in a microwave oven at different power levels (Chua et al., 2000). First, a temperature and moisture model is simulated in COMSOL Multiphysics. A digital imaging camera and Image Pro Premier software were then used to observe the moisture distribution, and a thermal imaging camera was used for the temperature distribution. Finally, the microstructure and porosity of the food materials were obtained from scanning electron microscopy and porosity-measuring devices, respectively. The moisture and temperature distributions during drying influence the microstructure and porosity significantly. In particular, regions of high temperature and moisture content show less porosity and more rupture. These findings support the literature of Halder et al. (2011) and Rahman et al. (1990). On the other hand, regions of low temperature and moisture show a uniform microstructure and high porosity. This work therefore assists in a better understanding of the role of moisture and temperature distributions in predicting the microstructure and porosity of dried food materials.
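A minimal sketch of the kind of coupled temperature-moisture simulation described above (the study itself used COMSOL Multiphysics): explicit finite differences on a 1D slab, with assumed diffusivities and a crude microwave volumetric heating term. Every parameter value here is an illustrative assumption, not the study's.

```python
import numpy as np

# 1D slab of apple tissue, explicit finite-difference scheme.
nx, L = 50, 0.02                  # nodes, slab thickness [m]
dx = L / (nx - 1)
dt = 0.05                         # time step [s], chosen to satisfy stability
alpha = 1.4e-7                    # thermal diffusivity [m^2/s] (assumed)
D = 5.0e-9                        # moisture diffusivity [m^2/s] (assumed)
q = 0.5                           # microwave volumetric heating [K/s] (assumed)

T = np.full(nx, 20.0)             # temperature [deg C]
M = np.full(nx, 0.8)              # moisture content (wet basis, uniform start)

for _ in range(12_000):           # ~10 minutes of drying
    lap_T = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    lap_M = (np.roll(M, -1) - 2 * M + np.roll(M, 1)) / dx**2
    T[1:-1] += dt * (alpha * lap_T[1:-1] + q)   # heat diffusion + MW heating
    M[1:-1] += dt * D * lap_M[1:-1]             # moisture diffusion
    T[0] = T[-1] = 20.0                         # surfaces held at ambient
    M[0] = M[-1] = 0.1                          # surface moisture removed by air

print(f"core T = {T[nx // 2]:.1f} C, core moisture = {M[nx // 2]:.2f}")
```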
Abstract:
Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of market share compared with automobiles, particularly in affluent societies with low-density urban forms such as Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in the accurate evaluation of policy proposals, taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users' travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates research into the relationship between urban transit quality of service and user perception and behaviour. This research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a 'natural experiment' for transit users between the CBD and UQ, as they can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus route 412, and the linear fast-ferry CityCat on the Brisbane River. The population of interest was defined as attendees at UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perceptions of transit service quality and their use of public transit in the study area. The first wave collected behaviour and attitude data on respondents' daily transit usage and their direct ratings of the importance of factors of route-level transit quality of service. A series of statistical analyses was conducted to examine the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings on service quality variables regarding transit route preference, to explore users' various perspectives on transit quality of service. Based on the perceptions of service quality collected in the second wave survey, a series of quality criteria of the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using route-level service quality perceptions collected in the second wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, such as access and egress times, seat availability, and the busway system. The parameter estimates were interpreted, in particular the in-vehicle time equivalents of access and egress times and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.
These importance ratios were applied back to the quality perceptions collected as revealed-preference (RP) data, to compare satisfaction levels across service attributes and to generate an action relevance matrix that prioritises attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This drove further investigation and modelling innovation in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time; in the analysis of risk-averse attitudes to missing a desired service run in passengers' access arrival time choice; and in extensions of the utility function specification for modelling the passenger access arrival distribution, using complex expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk aversion. Discussions of this research's contributions to knowledge, its limitations, and recommendations for future research are provided in the concluding section of this thesis.
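Two standard formulations underlie the methods described above (generic textbook forms, not the thesis's exact specifications): the multinomial logit route-choice probability, and the expected waiting time for passengers arriving at random relative to an uncertain headway.

```latex
% Multinomial logit: probability of choosing route i from choice set C,
% where V_j is the systematic utility of route j.
P_i = \frac{e^{V_i}}{\sum_{j \in C} e^{V_j}}

% Expected waiting time under random passenger arrivals, with headway H a
% random variable; timetable-coordinated arrivals (long headways) push
% waiting below this bound.
E[W] = \frac{E[H]}{2}\left(1 + \frac{\operatorname{Var}(H)}{E[H]^2}\right)
```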
Abstract:
Like music and the news media before it, the film and television business is now facing its time of digital disruption. Major changes are being brought about in the global online distribution of film and television by new players, such as Google/YouTube, Apple, Amazon, Yahoo!, Facebook, Netflix and Hulu, some of which massively outrank, in size and growth, the companies that run film and television today. Content, Hollywood has always asserted, is King. But the power and profitability in the screen industries have always resided in distribution. Incumbents in the screen industries tried to control the emerging dynamics of online distribution, but failed. The new, born-digital, globally focused players are developing TV network-like strategies, including commissioning content that has widened the net of what counts as television. Content may be King, but these new players may become the King Kongs of the online world.
Abstract:
Groundwater flow models are usually characterized as either transient flow models or steady-state flow models. Given that steady-state groundwater flow conditions arise as the long-time asymptotic limit of a particular transient response, it is natural to seek a finite estimate of the amount of time required for a particular transient flow problem to effectively reach steady state. Here, we introduce the concept of mean action time (MAT) to address a fundamental question: how long does it take for a groundwater recharge or discharge process to effectively reach steady state? This concept relies on identifying a cumulative distribution function, $F(t;x)$, which varies from $F(0;x)=0$ to $F(t;x) \to 1$ as $t\to \infty$, thereby providing a measure of the progress of the system towards steady state. The MAT corresponds to the mean of the associated probability density function $f(t;x) = \dfrac{dF}{dt}$, and we demonstrate that this framework provides useful analytical insight by explicitly showing how the MAT depends on the parameters in the model and the geometry of the problem. Additional theoretical results relating to the variance of $f(t;x)$, known as the variance of action time (VAT), are also presented. To test our theoretical predictions we include measurements from a laboratory-scale experiment describing flow through a homogeneous porous medium. The laboratory data confirm that the theoretical MAT predictions are in good agreement with measurements from the physical model.
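In the notation above, the MAT and VAT follow directly from $f(t;x)$; a convenient equivalent form of the MAT, obtained by integration by parts, is also shown (standard definitions, consistent with the abstract):

```latex
% Mean action time and variance of action time at position x:
T(x) = \int_0^{\infty} t\, f(t;x)\, dt \;=\; \int_0^{\infty} \big[1 - F(t;x)\big]\, dt,
\qquad
V(x) = \int_0^{\infty} \big[t - T(x)\big]^2 f(t;x)\, dt.
```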
Abstract:
This paper proposes a distributed control approach to coordinate multiple energy storage units (ESUs) so as to avoid violation of voltage and thermal constraints, which are among the main power quality challenges for future distribution networks. ESUs are usually connected to a network through voltage source converters. In this paper, both the active and reactive power of the ESU converters are used to deal with the above-mentioned power quality issues. The ESUs' reactive power is used for voltage support, while the active power is utilized to manage network loading. The proposed method is applied to two typical distribution networks, and the simulation results illustrate the effectiveness of this approach.
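A minimal sketch of the reactive power voltage support idea (a generic volt-var droop characteristic, not the paper's distributed coordination scheme; the gain and limits are assumed values):

```python
def volt_var_setpoint(v_pu: float, q_max: float = 0.3,
                      v_ref: float = 1.0, droop: float = 5.0) -> float:
    """Generic volt-var droop for an ESU converter (illustrative values).

    Injects reactive power when the local voltage sags below v_ref and
    absorbs it when the voltage rises, saturating at +/- q_max (p.u.).
    """
    q = droop * (v_ref - v_pu)           # proportional droop response
    return max(-q_max, min(q_max, q))    # respect the converter rating

# Example: a 3% undervoltage calls for 0.15 p.u. reactive injection.
print(volt_var_setpoint(0.97))  # -> 0.15 (would cap at 0.3)
```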
Abstract:
Background: Falls are one of the most frequently occurring adverse events that impact the recovery of older hospital inpatients. Falls can threaten both immediate and longer-term health and independence. There is a need to identify cost-effective means of preventing falls in hospitals. Hospital-based falls prevention interventions tested in randomized trials have not yet been subjected to economic evaluation.

Methods: Incremental cost-effectiveness analysis was undertaken from the health service provider's perspective, over the period of hospitalization (time horizon), using the Australian Dollar (A$) at 2008 values. Analyses were based on data from a randomized trial among n = 1,206 acute and rehabilitation inpatients. Decision tree modeling with three-way sensitivity analyses was conducted using burden-of-disease estimates developed from trial data and previous research. The intervention was a multimedia patient education program, provided with trained health professional follow-up, shown to reduce falls among cognitively intact hospital patients.

Results: The short-term cost to a health service of one cognitively intact patient becoming a faller could be as high as A$14,591 (2008). The education program cost A$526 (2008) to prevent one cognitively intact patient becoming a faller and A$294 (2008) to prevent one fall, based on primary trial data. These estimates were unstable due to high variability in the hospital costs accrued by individual patients involved in the trial. There was a 52% probability that the complete program was both more effective and less costly (from the health service perspective) than providing usual care alone. Decision tree modeling sensitivity analyses identified that, when provided in real-life contexts, the program would be both more effective in preventing falls among cognitively intact inpatients and cost saving where the proportion of these patients who would otherwise fall under usual care conditions is at least 4.0%.

Conclusions: This economic evaluation was designed to assist health care providers in deciding in what circumstances this intervention should be provided. If the proportion of cognitively intact patients falling on a ward under usual care conditions is 4% or greater, then provision of the complete program in addition to usual care will likely both prevent falls and reduce costs for a health service.
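The core quantity in the incremental cost-effectiveness analysis described above is the standard ICER (textbook definition; the trial-specific figures are those reported in the abstract):

```latex
% Incremental cost-effectiveness ratio of the education program vs usual care:
\mathrm{ICER} = \frac{C_{\text{program}} - C_{\text{usual care}}}
                     {E_{\text{program}} - E_{\text{usual care}}}
% expressed here per faller prevented (A$526) and per fall prevented (A$294).
```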
Abstract:
In the electricity market environment, coordinating the system reliability and economics of a power system is of great significance in determining the available transfer capability (ATC). In addition, the risks associated with uncertainties should be properly addressed in the ATC determination process for risk-benefit maximization. Against this background, it is necessary that the ATC be optimally allocated and utilized within the relevant security constraints. First, non-sequential Monte Carlo simulation is employed to derive the probability density distribution of the ATC of designated areas, incorporating uncertainty factors. Second, on that basis, a multi-objective optimization model is formulated to determine the multi-area ATC so as to maximize the risk benefits. The developed model is then solved by the fast non-dominated sorting genetic algorithm (NSGA-II), which can decrease the risk caused by uncertainties while coordinating the ATCs of different areas. Finally, the IEEE 118-bus test system is used to demonstrate the essential features of the developed model and the employed algorithm.
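A minimal sketch of the multi-objective step: NSGA-II applied to a toy two-area problem that trades transfer benefit against a risk cost. The objectives, bounds and the choice of the pymoo library are illustrative assumptions; the paper's actual model is far richer.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ToyMultiAreaATC(ElementwiseProblem):
    """Two decision variables: ATC allocated to each of two areas (p.u.)."""
    def __init__(self):
        super().__init__(n_var=2, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        benefit = 1.0 * x[0] + 0.8 * x[1]         # assumed area benefit weights
        risk = 0.5 * x[0] ** 2 + 0.9 * x[1] ** 2  # assumed convex risk cost
        out["F"] = [-benefit, risk]               # pymoo minimizes both objectives

res = minimize(ToyMultiAreaATC(), NSGA2(pop_size=40), ("n_gen", 100),
               seed=1, verbose=False)
print(f"{len(res.F)} non-dominated (benefit, risk) trade-offs found")
```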
Abstract:
As a promising response to the shortage and environmental unfriendliness of fossil fuels, plug-in electric vehicles (PEVs) have attracted much public interest. To investigate the problems caused by the integration of numerous PEVs, a considerable body of research has examined the grid impacts of PEVs in aspects including thermal loading, voltage regulation, transformer loss of life, unbalance, losses, and harmonic distortion levels. This paper surveys the state of the art of research in this area and outlines three possible measures by which a power grid company can make full use of PEVs.
Abstract:
This paper details the processes and challenges involved in collecting inventory data from smallholder and community woodlots on Leyte Island, Philippines. Over the period from 2005 through to 2012, 253 woodlots at 170 sites were sampled as part of a large multidisciplinary project, resulting in a substantial timber inventory database. The inventory was undertaken to provide information for three separate but interrelated studies, namely (1) tree growth, performance and timber availability from private smallholder woodlots on Leyte Island; (2) tree growth and performance of mixed-species plantings of native species; and (3) the assessment of reforestation outcomes from various forms of reforestation. A common procedure for establishing plots within each site was developed and applied in each study, although the basis of site selection varied. A two-stage probability proportional to size (PPS) sampling framework was developed to select smallholder woodlots for inclusion in the inventory. In contrast, community-based forestry woodlots were selected using stratified random sampling. Challenges encountered in undertaking the inventory were mostly associated with the need to consult widely before the commencement of the inventory and with problems in identifying woodlots for inclusion. Most smallholder woodlots were capable of producing merchantable volumes of less than 44% of the site potential, owing to a lack of appropriate silviculture. There was a clear bimodal distribution in the proportion of the total smallholding area that the woodlots comprised. This bimodality reflects two major motivations for smallholders to establish woodlots, namely timber production and securing land tenure.
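A minimal sketch of the first stage of a probability proportional to size (PPS) selection, as named above: woodlots are drawn with inclusion probability proportional to their area. The data and the simple weighted draw without replacement are illustrative assumptions, not the project's sampling protocol.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sampling frame: woodlot IDs and their areas in hectares.
woodlot_ids = np.arange(100)
areas_ha = rng.uniform(0.1, 5.0, size=100)

# PPS: selection probability proportional to woodlot area.
weights = areas_ha / areas_ha.sum()
sample = rng.choice(woodlot_ids, size=10, replace=False, p=weights)

print(sorted(sample))  # larger woodlots are more likely to appear
```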
Abstract:
OBJECTIVE: The objective of this study was to describe the distribution of conjunctival ultraviolet autofluorescence (UVAF) in an adult population.

METHODS: We conducted a cross-sectional, population-based study in the genetic isolate of Norfolk Island, South Pacific Ocean. In all, 641 people aged 15 to 89 years were recruited. UVAF and standard (control) photographs were taken of the nasal and temporal interpalpebral regions bilaterally. Differences between groups for non-normally distributed continuous variables were assessed using the Wilcoxon-Mann-Whitney rank-sum test. Trends across categories were assessed using Cuzick's non-parametric test for trend or Kendall's rank correlation τ.

RESULTS: Conjunctival UVAF is a non-normally distributed trait with a positively skewed distribution. The median amount of conjunctival UVAF per person (the sum of four measurements: right nasal/temporal and left nasal/temporal) was 28.2 mm² (interquartile range 14.5-48.2). There was an inverse, linear relationship between UVAF and advancing age (P<0.001). Males had a higher sum of UVAF than females (34.4 mm² vs 23.2 mm², P<0.0001). There were no statistically significant differences in the area of UVAF between right and left eyes or between nasal and temporal regions.

CONCLUSION: We have provided the first quantifiable estimates of conjunctival UVAF in an adult population. Further data are required to provide information about the natural history of UVAF and to characterise other potential disease associations with UVAF. UVR-protective strategies should be emphasised at an early age to prevent the long-term adverse health effects associated with excess UVR.
Abstract:
Aim: To describe the recruitment, ophthalmic examination methods and distribution of ocular biometry of participants in the Norfolk Island Eye Study, who were individuals descended from the English Bounty mutineers and their Polynesian wives.

Methods: All 1,275 permanent residents of Norfolk Island aged over 15 years were invited to participate, including 602 individuals involved in a 2001 cardiovascular disease study. Participants completed a detailed questionnaire and underwent a comprehensive eye assessment including stereo disc and retinal photography, optical coherence tomography and conjunctival autofluorescence assessment. Additionally, blood or saliva was taken for DNA testing.

Results: 781 participants aged over 15 years were seen (54% female), comprising 61% of the permanent Island population. 343 people (43.9%) could trace their family history to the Pitcairn Islanders (Norfolk Island Pitcairn Pedigree). Mean anterior chamber depth was 3.32 mm, mean axial length (AL) was 23.5 mm, and mean central corneal thickness was 546 microns. There were no statistically significant differences in these characteristics between persons with and without Pitcairn Island ancestry. Mean intra-ocular pressure was lower in people with Pitcairn Island ancestry (15.89 mmHg) than in those without (16.49 mmHg; P = .007). The mean keratometry value was also lower in people with Pitcairn Island ancestry (43.22 vs. 43.52, P = .007). The corneas were flatter in people of Pitcairn ancestry, but there was no corresponding difference in AL or refraction.

Conclusion: Our study population is highly representative of the permanent population of Norfolk Island. Ocular biometry was similar to that of other white populations. Heritability estimates, linkage analysis and genome-wide studies will further elucidate the genetic determinants of chronic ocular diseases in this genetic isolate.
Abstract:
A nanoparticle's size is one of its key physical characteristics, and it can affect the particle's fate in the human respiratory tract (in the case of inhalation) and in the environment. Hence, measuring the size distribution of nanoparticles is essential and contributes greatly to their characterization. For years, Scanning Mobility Particle Sizers (SMPS), which rely on measuring the electrical mobility diameter of particles, have been used as one of the most reliable real-time instruments for measuring the size distribution of nanoparticles. Despite its benefits, this instrument has some drawbacks, including equivalency problems for non-spherical particles (i.e. a non-spherical particle is treated as equivalent to a spherical particle of diameter d because it has the same electrical mobility), as well as limitations on its use in workplaces because of its large size and the complexity of its operation...
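For reference, the electrical mobility that the SMPS measurement relies on is conventionally related to the mobility diameter $d_m$ by the following standard expression (not stated in the abstract itself):

```latex
% Electrical mobility Z_p of a particle carrying n elementary charges e,
% in a gas of dynamic viscosity \mu, with Cunningham slip correction C_c:
Z_p = \frac{n\, e\, C_c(d_m)}{3 \pi \mu\, d_m}
```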