440 results for accurate
Abstract:
Purpose: To determine the extent to which the accuracy of magnetic resonance imaging (MRI) based virtual 3-dimensional (3D) models of the intact orbit can approach that of the gold standard, computed tomography (CT) based models. The goal was to determine whether MRI is a viable alternative to CT scans in patients with isolated orbital fractures and penetrating eye injuries, pediatric patients, and patients requiring multiple scans in whom radiation exposure is ideally limited. Materials and Methods: Patients who presented with unilateral orbital fractures to the Royal Brisbane and Women’s Hospital from March 2011 to March 2012 were recruited to participate in this cross-sectional study. The primary predictor variable was the imaging technique (MRI vs CT). The outcome measurements were orbital volume (primary outcome) and geometric intraorbital surface deviations (secondary outcome) between the MRI- and CT-based 3D models. Results: Eleven subjects (9 male) were enrolled. The patients’ mean age was 30 years. On average, the MRI models underestimated the orbital volume of the CT models by 0.50 ± 0.19 cm³. The average intraorbital surface deviation between the MRI and CT models was 0.34 ± 0.32 mm, with 78 ± 2.7% of the surface within a tolerance of 0.5 mm. Conclusions: The volumetric differences of the MRI models are comparable to reported results from CT models. The intraorbital MRI surface deviations are smaller than the accepted tolerance for orbital surgical reconstructions. Therefore, the authors believe that MRI is an accurate, radiation-free alternative to CT for the primary imaging and 3D reconstruction of the bony orbit.
Abstract:
There have been substantial advances in small field dosimetry techniques and technologies over the last decade, which have dramatically improved the achievable accuracy of small field dose measurements. This educational note aims to help radiation oncology medical physicists to apply some of these advances in clinical practice. The evaluation of a set of small field output factors (total scatter factors) is used to exemplify a detailed measurement and simulation procedure and as a basis for discussing the possible effects of simplifying that procedure. Field output factors were measured with an unshielded diode and a micro-ionisation chamber, at the centre of a set of square fields defined by a micro-multileaf collimator. Nominal field sizes investigated ranged from 6×6 to 98×98 mm². Diode measurements in fields smaller than 30 mm across were corrected using response factors calculated from Monte Carlo simulations of the full diode geometry and daisy-chained to match micro-chamber measurements at intermediate field sizes. Diode measurements in fields smaller than 15 mm across were repeated twelve times over three separate measurement sessions, to evaluate the reproducibility of the radiation field size and its correspondence with the nominal field size. The five readings that contributed to each measurement on each day varied by up to 0.26% for the “very small” fields smaller than 15 mm, and 0.18% for the fields larger than 15 mm. The diode response factors calculated for the unshielded diode agreed with previously published results, within 1.6%. The measured dimensions of the very small fields differed by up to 0.3 mm across the different measurement sessions, contributing an uncertainty of up to 1.2% to the very small field output factors. The overall uncertainties in the field output factors were 1.8% for the very small fields and 1.1% for the fields larger than 15 mm across. Recommended steps for acquiring small field output factor measurements for use in radiotherapy treatment planning system beam configuration data are provided.
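To make the daisy-chaining step concrete, here is a minimal Python sketch of how diode readings can be tied to micro-chamber readings at an intermediate field size; all readings, field sizes and correction factors below are illustrative assumptions, not data from the study.

```python
# Minimal sketch of daisy-chained field output factors, assuming
# illustrative (not measured) readings. The diode is normalised to the
# micro-chamber at an intermediate field, and Monte Carlo response
# factors correct the diode in the smallest fields.
ref_field = 98.0          # reference field size (mm)
intermediate = 30.0       # daisy-chaining field size (mm)

chamber = {98.0: 1.000, 30.0: 0.945}            # micro-chamber readings (a.u.)
diode = {30.0: 0.960, 15.0: 0.890, 6.0: 0.760}  # diode readings (a.u.)
mc_correction = {15.0: 0.995, 6.0: 0.975}       # Monte Carlo response factors

def output_factor(field):
    """Field output factor relative to the reference field."""
    if field >= intermediate:
        return chamber[field] / chamber[ref_field]
    # Daisy-chain: diode ratio to the intermediate field, times the
    # chamber-based output factor of the intermediate field itself.
    diode_ratio = diode[field] / diode[intermediate]
    return mc_correction[field] * diode_ratio * output_factor(intermediate)

for f in (30.0, 15.0, 6.0):
    print(f"{f:>5.1f} mm field: OF = {output_factor(f):.3f}")
```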
Abstract:
Business processes are prone to continuous and unexpected changes. Process workers may start executing a process differently in order to adjust to changes in workload, season, guidelines or regulations, for example. Early detection of business process changes based on their event logs – also known as business process drift detection – enables analysts to identify and act upon changes that may otherwise affect process performance. Previous methods for business process drift detection are based on an exploration of a potentially large feature space, and in some cases they require users to manually identify the specific features that characterize the drift. Depending on the explored feature set, these methods may miss certain types of changes. This paper proposes a fully automated and statistically grounded method for detecting process drift. The core idea is to perform statistical tests over the distributions of runs observed in two consecutive time windows. By adaptively sizing the window, the method balances classification accuracy against drift detection delay. A validation on synthetic and real-life logs shows that the method accurately detects typical change patterns and scales up sufficiently to be applicable for online drift detection.
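As a rough illustration of the core idea (a statistical test over run distributions in two adjacent windows), the following Python sketch flags candidate drift points with a chi-square test of homogeneity; the fixed window size and significance level are simplifying assumptions, whereas the paper's method sizes the window adaptively.

```python
# Minimal sketch of drift detection via a statistical test over the
# distributions of runs in two adjacent windows of an event log.
# Window size and significance level are illustrative assumptions.
from collections import Counter
from scipy.stats import chi2_contingency

def detect_drifts(runs, window=100, alpha=0.05):
    """Yield indices where the run distribution changes significantly.

    `runs` is a sequence of hashable run identifiers (e.g. one string
    per completed trace), ordered by completion time.
    """
    drifts = []
    for i in range(window, len(runs) - window):
        ref = Counter(runs[i - window:i])      # reference window
        det = Counter(runs[i:i + window])      # detection window
        cats = sorted(set(ref) | set(det))
        table = [[ref[c] for c in cats], [det[c] for c in cats]]
        # Chi-square test of homogeneity between the two windows
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            drifts.append(i)
    return drifts
```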
Abstract:
In this paper we discuss the use of a series of column experiments to improve understanding of the effect that irrigation water chemistry (saline solutions) has on measurements of saturated hydraulic conductivity (Ksat) of a sodic clay soil. We highlight in particular the use of extended leaching periods to determine whether the duration of leaching affects the results. In the experiments, mixed cation solutions of two different salinity levels, 50 meq/L and 100 meq/L, were applied under constant head to columns of a repacked sodic clay soil, using three replicates for each treatment. The maximum Ksat measured during leaching with the 100 meq/L solution was approximately double the maximum Ksat measured during leaching with the 50 meq/L solution. Measured flow rates were found to increase rapidly after flow commenced and then decrease gradually until flow rates became stable. The final, stable flow rate was roughly 80% less than the maximum flow rate measured. Reasons for these changes in saturated hydraulic conductivity are discussed. The key finding from these experiments is that long-term leaching, involving significantly more pore volumes than is commonly reported in the literature, is required to obtain a ‘stable’ Ksat. We recommend that further studies be carried out to (1) determine whether similar behaviour in Ksat occurs in a wide range of sodic clay soils and (2) help build a better understanding of the causes and implications of the observed behaviour in Ksat.
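For reference, the constant-head Ksat values discussed here follow from Darcy's law; the sketch below shows the calculation with purely illustrative numbers, not data from these experiments.

```python
# Worked example of the constant-head calculation behind reported Ksat
# values, via Darcy's law; the numbers are illustrative assumptions.
def ksat_constant_head(flow_rate_cm3_per_h, length_cm, area_cm2, head_cm):
    """Saturated hydraulic conductivity (cm/h) under a constant head.

    Ksat = Q * L / (A * dH), where Q is the steady flow rate, L the
    column length, A the cross-sectional area and dH the head loss.
    """
    return flow_rate_cm3_per_h * length_cm / (area_cm2 * head_cm)

# e.g. a 10 cm column, 20 cm2 cross-section, 15 cm head, 30 cm3/h flow
print(ksat_constant_head(30.0, 10.0, 20.0, 15.0))  # 1.0 cm/h
```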
Abstract:
Many websites presently provide the facility for users to rate the quality of items based on their opinions. These ratings are later used to produce item reputation scores. The majority of websites apply the mean method to aggregate user ratings. This method is very simple but is not considered an accurate aggregator. Many methods have been proposed to make aggregators produce more accurate reputation scores. In the majority of proposed methods, the authors use extra information about the rating providers or about the context (e.g. time) in which the rating was given. However, this information is not always available. In such cases these methods fall back on the mean method or other simple alternatives. In this paper, we propose a novel reputation model that generates more accurate item reputation scores based on the collected ratings only. Our proposed model embeds previously disregarded statistical information about a given rating dataset in order to enhance the accuracy of the generated reputation scores. In more detail, we use the Beta distribution to produce weights for ratings and aggregate the ratings using the weighted mean method. Experiments show that the proposed model exhibits performance superior to that of current state-of-the-art models.
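The abstract specifies Beta-distribution weights combined by a weighted mean but not the exact weighting scheme; the following Python sketch shows one plausible instantiation, using method-of-moments Beta parameters estimated from the ratings themselves, and should not be read as the paper's precise model.

```python
# One plausible Beta-weighted reputation score (an assumption, not
# necessarily the paper's scheme): fit a Beta distribution to the
# rescaled ratings and weight each rating by its density.
import numpy as np
from scipy.stats import beta

def reputation(ratings, low=1.0, high=5.0):
    r = (np.asarray(ratings, float) - low) / (high - low)  # map to (0, 1)
    r = np.clip(r, 0.01, 0.99)          # keep the pdf finite at the ends
    m, v = r.mean(), r.var()            # (assumes non-zero variance)
    # Method-of-moments estimates of the Beta parameters
    common = m * (1 - m) / v - 1
    a, b = m * common, (1 - m) * common
    w = beta.pdf(r, a, b)               # weight ratings by their density
    return low + (high - low) * np.average(r, weights=w)

# The dissenting low rating receives a low density weight here,
# so the weighted score sits above the plain mean (~4.17).
print(reputation([4, 5, 4, 5, 4, 3]))
```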
Abstract:
This paper examines empirically the relative influence of the degree of endangerment of wildlife species and their stated likeability on individuals' allocation of funds for their conservation. To do this, it utilises data obtained from the IUCN Red List, and likeability and fund allocation data obtained from two serial surveys of a sample of the Australian public who were asked to assess 24 Australian wildlife species from three animal classes: mammals, birds and reptiles. Between the first and second survey, respondents were provided with extra information about the focal species. This information resulted in the dominance of endangerment as the major influence on respondents' allocation of funding for the conservation of the focal wildlife species. Our results cast doubt on the proposition in the literature that the likeability of species is the dominant influence on willingness to pay for conservation of wildlife species. Furthermore, because the public's allocation of funds for conserving wildlife species seems to be more sensitive to information about the conservation status of species than to factors influencing their likeability, greater attention to providing accurate information about the former than the latter seems justified.
Keywords: Conservation of wildlife species; Contingent valuation; Endangerment of species; Likeability of species; Willingness to pay
Abstract:
This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also considered. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed.
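As background for the time-integration comparison, a staggered-grid Leapfrog update for Maxwell's equations in one dimension looks like the sketch below; the normalised units, grid sizes and soft source are illustrative assumptions, not the paper's waveguide test cases.

```python
# A minimal 1D illustration of the staggered-grid Leapfrog (Yee-type)
# update, in normalised units (c = 1); parameters are illustrative.
import numpy as np

nz, nt = 200, 400
dz = 1.0
dt = 0.5 * dz                 # Courant factor of 0.5 for stability

Ex = np.zeros(nz)             # E sampled at integer nodes
Hy = np.zeros(nz - 1)         # H staggered at half nodes

for n in range(nt):
    # Leapfrog: H is advanced half a step out of phase with E
    Hy += (dt / dz) * (Ex[1:] - Ex[:-1])
    Ex[1:-1] += (dt / dz) * (Hy[1:] - Hy[:-1])
    Ex[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```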
Abstract:
An unstructured mesh finite volume discretisation method for simulating diffusion in anisotropic media in two-dimensional space is discussed. This technique is considered as an extension of the fully implicit hybrid control-volume finite-element method and it retains the local continuity of the flux at the control volume faces. A least squares function reconstruction technique together with a new flux decomposition strategy is used to obtain an accurate flux approximation at the control volume face, ensuring that the overall accuracy of the spatial discretisation maintains second order. This paper highlights that the new technique coincides with the traditional shape function technique when the correction term is neglected and that it significantly increases the accuracy of the previous linear scheme on coarse meshes when applied to media that exhibit very strong to extreme anisotropy ratios. It is concluded that the method can be used on both regular and irregular meshes, and appears independent of the mesh quality.
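The least-squares reconstruction step can be illustrated in a few lines: given neighbouring centroid values, solve a small least-squares system for the gradient at a control-volume centroid. The geometry and field values below are illustrative assumptions.

```python
# Minimal sketch of least-squares gradient reconstruction at a
# control-volume centroid from neighbouring cell values (2D).
import numpy as np

def ls_gradient(xc, uc, neighbours, u_neighbours):
    """Least-squares gradient of u at centroid xc."""
    d = np.asarray(neighbours, float) - np.asarray(xc, float)  # offsets
    du = np.asarray(u_neighbours, float) - uc                  # differences
    grad, *_ = np.linalg.lstsq(d, du, rcond=None)  # solve d @ grad ~ du
    return grad

# A linear field u = 2x + 3y should be reconstructed exactly
grad = ls_gradient((0.0, 0.0), 0.0,
                   [(1, 0), (0, 1), (1, 1), (-1, 0.5)],
                   [2.0, 3.0, 5.0, -0.5])
print(grad)   # ~[2. 3.]
```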
Abstract:
In this paper, a singularly perturbed ordinary differential equation with non-smooth data is considered. The numerical method is generated by means of a Petrov-Galerkin finite element method with a piecewise-exponential test function and a piecewise-linear trial function. At the discontinuity of the coefficient, a special technique is used. The method is shown to be first-order accurate and uniformly convergent with respect to the singular perturbation parameter. Finally, numerical results are presented, which are in agreement with the theoretical results.
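A representative model problem for this class of methods (an illustration only, since the abstract does not state the paper's exact problem; the symbols ε, a, d and f are assumed here) is a convection-diffusion equation whose convection coefficient jumps at an interior point d:

```latex
% Representative problem with non-smooth data: the convection
% coefficient a(x) is discontinuous at an interior point d, producing
% an interior layer in addition to the usual boundary layer.
-\varepsilon u''(x) + a(x)\,u'(x) = f(x), \quad x \in (0,1), \qquad
u(0) = u(1) = 0, \qquad 0 < \varepsilon \ll 1,
\qquad a(x) = \begin{cases} a_1, & 0 \le x \le d,\\ a_2, & d < x \le 1. \end{cases}
```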
Abstract:
In this paper, a space fractional diffusion equation (SFDE) with non-homogeneous boundary conditions on a bounded domain is considered. A new matrix transfer technique (MTT) for solving the SFDE is proposed. The method is based on a matrix representation of the fractional-in-space operator, and the novelty of this approach is that a standard discretisation of the operator leads to a system of linear ODEs with the matrix raised to the same fractional power. Analytic solutions of the SFDE are derived. Finally, some numerical results are given to demonstrate that the MTT is a computationally efficient and accurate method for solving the SFDE.
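The idea can be sketched directly: discretise the standard Laplacian as a matrix, raise that matrix to the fractional power, and integrate the resulting linear ODE system. The grid, parameters and time stepping below are illustrative assumptions, not the paper's scheme or test cases.

```python
import numpy as np

# Minimal sketch of the matrix transfer technique for a 1D
# space-fractional diffusion equation u_t = -K (-Lap)^(alpha/2) u with
# homogeneous Dirichlet boundaries; all values are illustrative.
N = 100                      # interior grid points
h = 1.0 / (N + 1)            # grid spacing on (0, 1)
alpha = 1.5                  # fractional order, 1 < alpha <= 2
K = 1.0                      # diffusion coefficient

# Standard second-order discretisation of -Lap (tridiagonal matrix A)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

# Raise A to the fractional power alpha/2 via its eigendecomposition
w, V = np.linalg.eigh(A)
A_frac = V @ np.diag(w ** (alpha / 2)) @ V.T

# The semi-discrete system is du/dt = -K * A_frac @ u, advanced here
# with simple explicit Euler steps for illustration.
x = np.linspace(h, 1 - h, N)
u = np.sin(np.pi * x)        # sample initial condition
dt = 1e-6
for _ in range(1000):
    u = u - dt * K * (A_frac @ u)
```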
Abstract:
In this work, we examine unbalanced computation between an initiator and a responder that leads to resource exhaustion attacks in key exchange protocols. We construct models for two cryptographic protocols; one is the well-known Internet protocol named Secure Socket Layer (SSL) protocol, and the other is the Host Identity Protocol (HIP), which has built-in DoS-resistant mechanisms. To examine such protocols, we develop a formal framework based on Timed Coloured Petri Nets (Timed CPNs) and use a simulation approach provided in CPN Tools to achieve a formal analysis. By adopting the key idea of Meadows' cost-based framework and refining the definition of operational costs during protocol execution, our simulation provides an accurate estimate of protocol execution costs, compared among principals, as well as the percentage of successful connections from legitimate users, under four different strategies of DoS attack.
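The cost-based idea can be illustrated with a toy tally of cumulative computation per principal; the protocol steps and unit costs below are illustrative assumptions, not the calibrated Timed CPN models used in the paper.

```python
# Toy cost accounting in the spirit of Meadows' cost-based framework:
# tally cumulative computational cost per principal at each step.
# Steps and unit costs are illustrative assumptions only.
UNIT_COST = {"cheap": 1, "medium": 10, "expensive": 100}

# (principal, action, cost class) for a simplified handshake
steps = [
    ("initiator", "send hello",      "cheap"),
    ("responder", "generate cookie", "cheap"),
    ("initiator", "solve puzzle",    "expensive"),
    ("responder", "verify puzzle",   "cheap"),
    ("responder", "key exchange",    "expensive"),
    ("initiator", "key exchange",    "expensive"),
]

totals = {"initiator": 0, "responder": 0}
for principal, action, cls in steps:
    totals[principal] += UNIT_COST[cls]
    print(f"{principal:>9} {action:<16} running cost {totals[principal]}")

# Intuition: a protocol resists resource exhaustion when the
# responder's cumulative cost stays at or below the initiator's
# until the initiator has proven investment (e.g. solved a puzzle).
```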
Abstract:
This paper presents a prototype tracking system for tracking people in enclosed indoor environments where there is a high rate of occlusions. The system uses a stereo camera for acquisition and is capable of disambiguating occlusions using a combination of depth map analysis, a two-step ellipse-fitting people detection process, motion models with Kalman filters, and a novel fit metric based on computationally simple object statistics. Testing shows that our fit metric outperforms commonly used position-based metrics and histogram-based metrics, resulting in more accurate tracking of people.
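For context, the motion-model component typically amounts to a constant-velocity Kalman filter of the kind sketched below; the matrices and noise levels are illustrative assumptions, not the tracker's tuned values.

```python
# Minimal constant-velocity Kalman filter for 2D position tracking;
# all matrices and noise levels are illustrative assumptions.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],          # state transition: x, y, vx, vy
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],           # we observe position only
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)                  # process noise covariance
R = 0.5 * np.eye(2)                   # measurement noise covariance

x = np.zeros(4)                       # initial state
P = np.eye(4)                         # initial covariance

def kalman_step(z):
    """One predict/update cycle for measurement z = (x, y)."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (np.asarray(z) - H @ x)           # update state
    P = (np.eye(4) - K @ H) @ P                   # update covariance
    return x[:2]                                  # filtered position

for z in [(0.1, 0.0), (1.1, 0.9), (2.0, 2.1)]:
    print(kalman_step(z))
```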
Abstract:
Despite an ostensibly technology-driven society, the ability to communicate orally is still seen as an essential ability for students at school and university, as it is for graduates in the workplace. The need to develop effective oral communication skills is often tied to future work-related tasks. One tangible way that educators have assessed proficiency in this area is through prepared oral presentations. While some use the terms oral communication and oral presentation interchangeably, other writers question the role more formal presentations play in the overall development of oral communication skills. Adding to the discussion, this paper is part of a larger study examining the knowledge and skills students bring into the academy from previous educational experiences. The study examines some of the teaching and assessment methods used in secondary schools to develop oral communication skills through the use of formal oral presentations. Specifically, it looks at assessment models and how these are used as a form of instruction, as well as how they contribute to an accurate evaluation of student abilities. The purpose of this paper is to explore key terms and identify tensions between expectations and practice. Placing the emphasis on the ‘oral’ aspect of this form of communication, this paper will particularly look at the ‘delivery’ element of the process.
Abstract:
Non-Alcoholic Fatty Liver Disease (NAFLD) is a condition that is frequently seen but seldom investigated. Until recently, NAFLD was considered benign, self-limiting and unworthy of further investigation, an opinion based on retrospective studies with relatively small numbers and scant follow-up of histology data (1). The prevalence in adults in the USA is 30%, and NAFLD is recognised as a common and increasing form of liver disease in the paediatric population (1). Australian data from New South Wales suggest the prevalence of NAFLD in “healthy” 15 year olds is 10% (2). Non-alcoholic fatty liver disease is a condition where fat progressively invades the liver parenchyma. The degree of infiltration ranges from simple steatosis (fat only), to steatohepatitis (fat and inflammation), to steatohepatitis plus fibrosis (fat, inflammation and fibrosis), to cirrhosis (replacement of liver texture by scarred, fibrotic and non-functioning tissue). Non-alcoholic fatty liver is diagnosed by exclusion rather than inclusion. None of the currently available diagnostic techniques (liver biopsy, liver function tests (LFTs), or imaging by ultrasound, computerised tomography (CT) or magnetic resonance imaging (MRI)) is specific for non-alcoholic fatty liver. An association exists between NAFLD, Non-Alcoholic Steatohepatitis (NASH) and irreversible liver damage, cirrhosis and hepatoma. However, a more pervasive aspect of NAFLD is its association with the Metabolic Syndrome. This syndrome is characterised by increased insulin resistance (IR), and NAFLD is thought to be its hepatic representation. Those with NAFLD have an increased risk of death (3), and NAFLD is an independent predictor of atherosclerosis and cardiovascular disease (1). Liver biopsy is considered the gold standard for the diagnosis (4), and for the grading and staging, of non-alcoholic fatty liver disease. Fatty liver is diagnosed when there is macrovesicular steatosis with displacement of the nucleus to the edge of the cell and at least 5% of the hepatocytes are seen to contain fat (4). Steatosis represents fat accumulation in liver tissue without inflammation. However, it is only called non-alcoholic fatty liver disease when alcohol intake above 20-30 g per day (5) has been excluded; non-alcoholic and alcoholic fatty liver are identical on histology (4). LFTs are indicative, not diagnostic: they indicate that a condition may be present, but they cannot establish what that condition is. When a patient presents with raised fasting blood glucose, low HDL (high-density lipoprotein) and elevated fasting triacylglycerols, they are likely to have NAFLD (6). Of the imaging techniques, MRI is the least variable and the most reproducible. With CT scanning, liver fat content can be semi-quantitatively estimated: with increasing hepatic steatosis, liver attenuation values decrease by 1.6 Hounsfield units for every milligram of triglyceride deposited per gram of liver tissue (7). Ultrasound permits early detection of fatty liver, often in the preclinical stages before symptoms are present and serum alterations occur. Earlier, accurate reporting of this condition will allow appropriate intervention, resulting in better patient health outcomes.
References
1. Chalasani N. Does fat alone cause significant liver disease: It remains unclear whether simple steatosis is truly benign. American Gastroenterological Association Perspectives, February/March 2008. www.gastro.org/wmspage.cfm?parm1=5097 Viewed 20th October, 2008.
2. Booth M, George J, Denney-Wilson E. The population prevalence of adverse concentrations with adiposity of liver tests among Australian adolescents. Journal of Paediatrics and Child Health. 2008 November.
3. Catalano D, Trovato GM, Martines GF, Randazzo M, Tonzuso A. Bright liver, body composition and insulin resistance changes with nutritional intervention: a follow-up study. Liver Int. 2008 February; 1280-9.
4. Choudhury J, Sanyal A. Clinical aspects of Fatty Liver Disease. Semin Liver Dis. 2004; 24(4): 349-62.
5. Dionysus Study Group. Drinking factors as cofactors of risk for alcohol induced liver change. Gut. 1997; 41: 845-50.
6. Preiss D, Sattar N. Non-alcoholic fatty liver disease: an overview of prevalence, diagnosis, pathogenesis and treatment considerations. Clin Sci. 2008; 115: 141-50.
7. American Gastroenterological Association. Technical review on nonalcoholic fatty liver disease. Gastroenterology. 2002; 123: 1705-25.