937 results for STEPS
Abstract:
Over the last decade there have been substantial advances in small field dosimetry techniques and technologies, which have dramatically improved the achievable accuracy of small field dose measurements. This educational note aims to help radiation oncology medical physicists apply some of these advances in clinical practice. The evaluation of a set of small field output factors (total scatter factors) is used to exemplify a detailed measurement and simulation procedure and as a basis for discussing the possible effects of simplifying that procedure. Field output factors were measured with an unshielded diode and a micro-ionisation chamber at the centre of a set of square fields defined by a micro-multileaf collimator. Nominal field sizes investigated ranged from 6×6 to 98×98 mm². Diode measurements in fields smaller than 30 mm across were corrected using response factors calculated from Monte Carlo simulations of the full diode geometry and daisy-chained to match micro-chamber measurements at intermediate field sizes. Diode measurements in fields smaller than 15 mm across were repeated twelve times over three separate measurement sessions, to evaluate the reproducibility of the radiation field size and its correspondence with the nominal field size. The five readings that contributed to each measurement on each day varied by up to 0.26% for the “very small” fields smaller than 15 mm, and 0.18% for the fields larger than 15 mm. The diode response factors calculated for the unshielded diode agreed with previously published results within 1.6%. The measured dimensions of the very small fields differed by up to 0.3 mm across the different measurement sessions, contributing an uncertainty of up to 1.2% to the very small field output factors. The overall uncertainties in the field output factors were 1.8% for the very small fields and 1.1% for the fields larger than 15 mm across. Recommended steps for acquiring small field output factor measurements for use in radiotherapy treatment planning system beam configuration data are provided.
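As a rough illustration of the daisy-chaining described above, the sketch below chains diode readings in the small fields to micro-chamber readings at an intermediate field size via Monte Carlo output correction factors, and combines independent relative uncertainties in quadrature. All readings, correction factors and uncertainty components are placeholder values, not data from the study.

```python
import math

# Placeholder detector readings and correction factors (illustrative only).
diode = {"6mm": 0.512, "12mm": 0.731, "30mm": 0.905}    # unshielded diode signal
chamber = {"30mm": 0.902, "98mm": 1.000}                # micro-ionisation chamber signal
k_corr = {"6mm": 0.962, "12mm": 0.985}                  # Monte Carlo output correction factors (assumed)

INTERMEDIATE = "30mm"   # field where diode and chamber measurements are daisy-chained
REFERENCE = "98mm"      # reference field for the output factor

def field_output_factor(field):
    """Daisy-chained field output factor of `field` relative to the reference field."""
    return (diode[field] / diode[INTERMEDIATE]) * k_corr[field] \
           * (chamber[INTERMEDIATE] / chamber[REFERENCE])

def combined_uncertainty(*relative_uncertainties_percent):
    """Combine independent relative uncertainties (%) in quadrature."""
    return math.sqrt(sum(u ** 2 for u in relative_uncertainties_percent))

for f in ("6mm", "12mm"):
    print(f, round(field_output_factor(f), 3))

# e.g. reading repeatability, field-size reproducibility, correction-factor uncertainty
print("combined uncertainty (%):", round(combined_uncertainty(0.26, 1.2, 1.0), 1))
```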
Abstract:
Background The high recurrence rate of chronic venous leg ulcers has a significant impact on an individual’s quality of life and healthcare costs. Objectives This study aimed to identify risk and protective factors for recurrence of venous leg ulcers using a theoretical approach, applying a framework of self and family management of chronic conditions to underpin the study. Design Secondary analysis of combined data collected from three previous prospective longitudinal studies. Setting The contributing studies’ participants were recruited from two metropolitan hospital outpatient wound clinics and three community-based wound clinics. Participants Data were available on a sample of 250 adults with a leg ulcer of primarily venous aetiology, who were followed after ulcer healing for a median of 17 months (range: 3 to 36 months). Methods Data from the three studies were combined. The original participant data were collected through medical records and self-reported questionnaires upon healing and every 3 months thereafter. A Cox proportional-hazards regression analysis was undertaken to determine the factors influencing leg ulcer recurrence based on the proposed conceptual framework. Results The median time to recurrence was 42 weeks (95% CI 31.9–52.0), with 22% (54 of 250 participants) recurring within three months of healing, 39% (91 of 235 participants) for those who were followed for six months, 57% (111 of 193) by 12 months, 73% (53 of 72) by two years and 78% (41 of 52) of those who were followed up for three years. A Cox proportional-hazards regression model revealed that the risk factors for recurrence included a history of deep vein thrombosis (HR 1.7, 95% CI 1.07–2.67, p=0.024), a history of multiple previous leg ulcers (HR 4.4, 95% CI 1.84–10.5, p=0.001), and longer duration (in weeks) of the previous ulcer (HR 1.01, 95% CI 1.003–1.01, p<0.001); the protective factors were elevating the legs for at least 30 minutes per day (HR 0.33, 95% CI 0.19–0.56, p<0.001), higher levels of self-efficacy (HR 0.95, 95% CI 0.92–0.99, p=0.016), and walking around for at least three hours per day (HR 0.66, 95% CI 0.44–0.98, p=0.040). Conclusions Results from this study provide a comprehensive examination of risk and protective factors associated with leg ulcer recurrence based on the chronic disease self and family management framework. These results in turn provide essential steps towards developing and testing interventions to promote optimal prevention strategies for venous leg ulcer recurrence.
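For readers wanting to reproduce this style of analysis, a minimal sketch of a Cox proportional-hazards fit in Python using the `lifelines` package follows. The column names and values are invented for illustration and are not the study’s actual variable coding or data; a small ridge penalty guards against separation in such a tiny example.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented example rows; durations in weeks, event = 1 if the ulcer recurred (0 = censored).
df = pd.DataFrame({
    "weeks_to_recurrence":  [42, 30, 104, 12, 88, 56, 20, 140],
    "recurred":             [1, 1, 0, 1, 0, 1, 1, 0],
    "history_dvt":          [1, 0, 0, 1, 0, 1, 0, 1],
    "multiple_prev_ulcers": [1, 1, 0, 0, 0, 1, 1, 1],
    "leg_elevation_30min":  [0, 0, 1, 0, 1, 1, 0, 1],
    "self_efficacy_score":  [55, 60, 58, 48, 80, 75, 52, 70],
})

cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="weeks_to_recurrence", event_col="recurred")
cph.print_summary()   # the exp(coef) column gives hazard ratios with 95% confidence intervals
```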
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model which may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single receiver’s data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differenced method and the polynomial prediction method, respectively. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is thus applicable to both differenced and undifferenced data processing modes. However, the methods may be limited to normal ionospheric conditions and to GNSS receivers with low noise autocorrelation. Experimental results also indicate that the proposed method can yield more realistic parameter precision estimates.
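A minimal sketch of the variance-estimation step, assuming the time-differenced approach mentioned above: differencing consecutive epochs of a single-receiver series suppresses the slowly varying ionosphere and ambiguity terms, and for uncorrelated noise the differenced series has twice the single-epoch variance. The signal model and scales below are invented for illustration, not the paper's observables.

```python
import numpy as np

def epoch_variance_from_time_differences(observations):
    """
    Estimate the single-epoch observation variance from a single-receiver series.

    Time-differencing consecutive epochs removes slowly varying biases
    (ionosphere, ambiguity); for uncorrelated noise the differenced series
    has twice the single-epoch variance, hence the division by 2.
    """
    obs = np.asarray(observations, dtype=float)
    diffs = np.diff(obs)                       # between-epoch differences
    return np.var(diffs, ddof=1) / 2.0

# Illustrative synthetic series: slowly drifting bias plus white measurement noise (metres).
rng = np.random.default_rng(0)
t = np.arange(600)
series = 0.002 * t + rng.normal(scale=0.003, size=t.size)
print(epoch_variance_from_time_differences(series) ** 0.5)   # recovers roughly 0.003 m
```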
Abstract:
The focus of this paper is on two World Heritage Areas: the Great Barrier Reef in Queensland, Australia and the Everglades in Florida. While both are World Heritage listed by UNESCO, the Everglades is on the "World Heritage in Danger" list and the Great Barrier Reef could be on this list within the next year if present pressures continue. This paper examines the planning approaches and governance structures used in these two areas (Queensland and Florida) to manage growth and development pressures. To make the analysis manageable, given the scale of these World Heritage areas, case studies at the local government level will be used: the Cairns Regional Council in Queensland and Monroe County in Florida. The case study analysis will involve three steps: (1) examining the various plans at the federal, state and local levels that impact upon environmental quality in the Great Barrier Reef and Everglades; (2) assessing the degree to which these plans have been implemented; and (3) determining whether (and how) the plans have improved environmental quality. In addition to the planning analysis we will also examine the governance structures (Lebel et al. 2006) within which planning operates. In any comparative analysis context is important (Hantrais 2009). Contextual differences between Queensland and Florida have previously been examined by Sipe et al. (2007) and will be used as the starting point for this analysis. Our operating hypothesis and preliminary analysis suggest that the planning approaches and governance structures used in Florida and Queensland are considerably different, but the environmental outcomes may be similar. This is based, in part, on Vella (2004), who did a comparative analysis of environmental practices in the sugar industry in Florida and Queensland. This research re-examines this hypothesis and broadens the focus beyond the sugar industry to growth and development more broadly.
Abstract:
Biological systems are typically complex and adaptive, involving large numbers of entities, or organisms, and many-layered interactions between these. System behaviour evolves over time, and typically benefits from previous experience by retaining memory of previous events. Given the dynamic nature of these phenomena, it is non-trivial to provide a comprehensive description of complex adaptive systems and, in particular, to define the importance and contribution of low-level unsupervised interactions to the overall evolution process. In this chapter, the authors focus on the application of the agent-based paradigm in the context of the immune response to HIV. Explicit implementation of lymph nodes and the associated lymph network, including lymphatic chain structure, is a key objective, and requires parallelisation of the model. Steps taken towards an optimal communication strategy are detailed.
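A very reduced illustration of the agent-based paradigm the chapter applies, assuming a single lymph-node compartment with two interacting populations; the class name, rates and update rules are invented placeholders, and the chapter's actual model (lymph network and chain structure, parallel communication, immune memory) is far richer.

```python
import random

class LymphNode:
    """Toy compartment: T cells and virions interact through simple stochastic rules."""
    def __init__(self, t_cells, virions):
        self.t_cells = t_cells
        self.virions = virions

    def step(self, infect_p=0.02, clear_p=0.05, replicate_p=0.1):
        # Low-level, unsupervised interactions per time step: infection, clearance, replication.
        infections = sum(random.random() < infect_p
                         for _ in range(min(self.t_cells, self.virions)))
        cleared = sum(random.random() < clear_p for _ in range(self.virions))
        self.t_cells -= infections
        self.virions += int(self.virions * replicate_p) - cleared

node = LymphNode(t_cells=1000, virions=50)
for day in range(30):
    node.step()
print(node.t_cells, node.virions)   # emergent system-level state after 30 steps
```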
Abstract:
Purpose – In structural, earthquake and aeronautical engineering and mechanical vibration, the solution of dynamic equations for a structure subjected to dynamic loading leads to a high-order system of differential equations. Numerical methods are usually used for integration either when dealing with discrete data or when there is no analytical solution for the equations. Since numerical methods with greater accuracy and stability give more accurate structural responses, there is a need to improve the existing methods or develop new ones. The paper aims to discuss these issues. Design/methodology/approach – In this paper, a new time integration method is proposed mathematically and numerically, and is applied to single-degree-of-freedom (SDOF) and multi-degree-of-freedom (MDOF) systems. Finally, the results are compared to existing methods such as Newmark’s method and the closed-form solution. Findings – It is concluded that, in the proposed method, the data variance of each set of structural responses such as displacement, velocity, or acceleration in different time steps is less than in Newmark’s method, and that the proposed method is more accurate and stable than Newmark’s method and is capable of analyzing the structure in fewer iterations or computation cycles, hence being less time-consuming. Originality/value – A new mathematical and numerical time integration method is proposed for the computation of structural responses with higher accuracy and stability, lower data variance, and fewer iterations or computational cycles.
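The abstract benchmarks against Newmark’s method; for orientation, here is a standard incremental Newmark-beta integrator for an SDOF system (average-acceleration parameters by default). This sketches the comparison baseline, not the paper’s proposed method, and the example load and parameters are arbitrary.

```python
import numpy as np

def newmark_sdof(m, c, k, p, dt, u0=0.0, v0=0.0, gamma=0.5, beta=0.25):
    """
    Newmark-beta time integration of m*u'' + c*u' + k*u = p(t) for an SDOF system.
    Defaults (gamma=1/2, beta=1/4) give the unconditionally stable average-acceleration scheme.
    """
    n = len(p)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (p[0] - c * v0 - k * u0) / m

    k_hat = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    for i in range(n - 1):
        dp = p[i + 1] - p[i]
        dp_hat = (dp
                  + (m / (beta * dt) + gamma / beta * c) * v[i]
                  + (m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c) * a[i])
        du = dp_hat / k_hat
        dv = gamma / (beta * dt) * du - gamma / beta * v[i] + dt * (1 - gamma / (2 * beta)) * a[i]
        da = du / (beta * dt ** 2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# Example: lightly damped SDOF system under a half-sine pulse (illustrative parameters).
dt, m, k, c = 0.01, 1.0, 400.0, 0.8
t = np.arange(0, 2, dt)
p = np.where(t < 0.1, 50 * np.sin(np.pi * t / 0.1), 0.0)
u, v, a = newmark_sdof(m, c, k, p, dt)
print(u.max())
```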
Abstract:
XD: Experience Design Magazine is an interdisciplinary publication that focuses on the concept and practice of ‘experience design’ as a holistic concept separate from the well-known concept of ‘user experience’. The magazine aims to present a mixture of interrelated perspectives from industry and academic researchers alongside practicing designers and managers. The informal, journalistic style of the publication aims to simultaneously provide a platform for researchers and other writers to promote their work in an applied way for global impact, and for industry designers to present practical perspectives to inspire a global research audience. Each issue will feature a series of projects, interviews, visuals, reviews and creative inspiration – all of which help everyone understand why experience design is important, who does it and where, how experience design is done in practice and how experience design research can enhance practice. Contents, Issue 1:
Miller, F. Developing Principles for Designing Optimal Experiences
Lavallee, P. Design for Emotions
Khan, H. The Entropii XD Framework
Bowe, M. & Silvers, A. First Steps in Experience Design
Leaper, N. Learning by Design
Forrest, R. & Roberts, T. Interpretive Design: Think, Do, Feel
Tavakkoli, P. Working Hard at Play
Stow, C. Designing Engaging Learning Experiences
Wood, M. Enhance Your Travel Experience Using Apps
Miller, F. Humanizing It
Wood, M. Designing the White Night Experience
Newberry, P. & Farnham, K. Experience Design Book Excerpt
Abstract:
We present an approach for detecting sensor spoofing attacks on a cyber-physical system. Our approach consists of two steps. In the first step, we construct a safety envelope of the system. Under nominal conditions (that is, when there are no attacks), the system always stays inside its safety envelope. In the second step, we build an attack detector: a monitor that executes synchronously with the system and raises an alarm whenever the system state falls outside the safety envelope. We synthesize safety envelopes using a modified machine learning procedure applied to data collected from the system when it is not under attack. We present experimental results that show the effectiveness of our approach, and also validate several novel features that we introduced in our learning procedure.
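A minimal sketch of the two steps, assuming a simple per-dimension interval envelope rather than the paper’s learned envelopes: fit bounds from attack-free data, then run a synchronous monitor that alarms when the state leaves them. The class, margin and synthetic telemetry are assumptions for illustration.

```python
import numpy as np

class SafetyEnvelope:
    """Interval-style safety envelope: per state variable, keep [min - margin, max + margin] bounds."""
    def __init__(self, margin=0.05):
        self.margin = margin
        self.low = None
        self.high = None

    def fit(self, nominal_states):
        """Learn bounds from data collected while the system is not under attack."""
        x = np.asarray(nominal_states, dtype=float)
        span = x.max(axis=0) - x.min(axis=0)
        self.low = x.min(axis=0) - self.margin * span
        self.high = x.max(axis=0) + self.margin * span
        return self

    def alarm(self, state):
        """Detector step: raise an alarm if the current state leaves the envelope."""
        s = np.asarray(state, dtype=float)
        return bool(np.any(s < self.low) or np.any(s > self.high))

# Usage sketch: train on nominal telemetry, then monitor the live state stream.
rng = np.random.default_rng(1)
nominal = rng.normal([0.0, 20.0], [0.1, 0.5], size=(1000, 2))   # e.g. [speed error, temperature]
env = SafetyEnvelope().fit(nominal)
print(env.alarm([0.05, 20.3]))   # inside the envelope  -> False
print(env.alarm([0.90, 20.3]))   # spoofed sensor value -> True
```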
Abstract:
Automatic Vehicle Identification (AVI) systems are increasingly used as a new source of travel information. Because in past decades these systems relied on expensive new technologies, only a few detectors were scattered across a network, making travel-time and average-speed estimation their main objectives. However, as their price dropped, the opportunity to build dense AVI networks arose, as in Brisbane, where more than 250 Bluetooth detectors are now installed. As a consequence, this technology represents an effective means of acquiring accurate time-dependent Origin-Destination information. In order to obtain reliable estimations, however, a number of issues need to be addressed. Some of these problems stem from the structure of a network of isolated detectors itself, while others are inherent to Bluetooth technology (overlapping detection areas, missing detections, ...). The aim of this paper is threefold. First, after presenting the level of detail that can be reached with a network of isolated detectors, we describe how we modelled Brisbane's network, keeping only the information valuable for retrieving trip information. Second, we give an overview of the issues inherent to Bluetooth technology and propose a method for retrieving the itineraries of individual Bluetooth-equipped vehicles. Last, through a comparison with results from the Brisbane Transport Strategic Model, we highlight the opportunities and limits of Bluetooth detector networks. We further propose a methodology that can be followed to cleanse, correct and aggregate Bluetooth data, and we postulate that the methods introduced in this paper are the first crucial steps towards computing accurate Origin-Destination matrices in urban road networks.
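As a rough sketch of the kind of cleansing and trip-retrieval steps discussed above (not the paper’s actual methodology), the snippet below merges repeated hits inside an overlapping detection zone and splits a device’s detector sequence into trips whose first and last detectors approximate an origin-destination pair; the record layout and thresholds are assumptions.

```python
from datetime import timedelta

# Assumed record format: (device id/MAC hash, detector id, timestamp). Thresholds are illustrative.
OVERLAP_WINDOW = timedelta(seconds=30)   # merge repeated hits at the same detector
TRIP_GAP = timedelta(minutes=30)         # a longer gap between detections starts a new trip

def cleanse(detections):
    """Drop duplicate detections caused by a vehicle lingering in an overlapping antenna zone."""
    detections = sorted(detections, key=lambda d: d[2])
    kept = []
    for mac, det, ts in detections:
        if kept and kept[-1][0] == mac and kept[-1][1] == det and ts - kept[-1][2] < OVERLAP_WINDOW:
            continue
        kept.append((mac, det, ts))
    return kept

def split_trips(detections):
    """Group one device's detector sequence into trips separated by long gaps."""
    trips, current = [], []
    for mac, det, ts in detections:
        if current and ts - current[-1][2] > TRIP_GAP:
            trips.append(current)
            current = []
        current.append((mac, det, ts))
    if current:
        trips.append(current)
    return trips

# OD pair of one trip: (first detector, last detector), e.g. od = (trip[0][1], trip[-1][1])
```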
Abstract:
Study Design: Comparative analysis. Background: Calculations of lower limb kinetics are limited by floor-mounted force-plates. Objectives: Comparison of hip joint moments, power and mechanical work on the prosthetic limb of a transfemoral amputee calculated by inverse dynamics using either the ground reactions (force-plates) or knee reactions (transducer). Methods: Kinematics, ground reactions and knee reactions were collected using a motion analysis system, two force-plates and a multi-axial transducer mounted below the socket, respectively. Results: The inverse dynamics using ground reactions under-estimated the peaks of hip energy generation and absorption occurring at 63% and 76% of the gait cycle (GC) by 28% and 54%, respectively. This method over-estimated a phase of negative work at the hip (from 37%GC to 56%GC) by 24%. It under-estimated the phases of positive (from 57%GC to 72%GC) and negative (from 73%GC to 98%GC) work at the hip by 11% and 58%, respectively. Conclusions: A transducer mounted within the prosthesis has the capacity to provide more realistic kinetics of the prosthetic limb because it enables assessment of multiple consecutive steps and a wide range of activities without issues of foot placement on force-plates. CLINICAL RELEVANCE: The hip is the only joint that an amputee controls directly to set the prosthesis in motion. Hip joint kinetics are associated with joint degeneration, low back pain, risk of falls, etc. Therefore, realistic assessment of hip kinetics over multiple gait cycles and a wide range of activities is essential.
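For context, a single 2D Newton-Euler inverse-dynamics step is sketched below: given the force and moment measured at a segment’s distal joint, it returns them at the proximal joint. Starting from the knee transducer, one thigh-segment step yields the hip load; starting from force-plate ground reactions, the same step is applied foot to shank to thigh. The 2D simplification, symbols and example numbers are assumptions, not the paper’s formulation or data.

```python
import numpy as np

def cross2d(a, b):
    """z-component of the cross product of two 2D vectors."""
    return a[0] * b[1] - a[1] * b[0]

def proximal_joint_load(F_distal, M_distal, r_distal, r_proximal, r_com,
                        mass, inertia, a_com, alpha, g=np.array([0.0, -9.81])):
    """
    2D Newton-Euler step for one segment: force/moment at the distal joint in,
    force/moment at the proximal joint out (global frame, moments taken about the
    segment centre of mass).
    """
    F_proximal = mass * a_com - mass * g - F_distal
    M_proximal = (inertia * alpha - M_distal
                  - cross2d(r_distal - r_com, F_distal)
                  - cross2d(r_proximal - r_com, F_proximal))
    return F_proximal, M_proximal

# Illustrative thigh-segment numbers (all assumed): knee transducer load in, hip load out.
F_hip, M_hip = proximal_joint_load(
    F_distal=np.array([30.0, 380.0]), M_distal=25.0,
    r_distal=np.array([0.05, 0.45]), r_proximal=np.array([0.10, 0.85]),
    r_com=np.array([0.08, 0.68]), mass=7.0, inertia=0.15,
    a_com=np.array([0.4, -0.2]), alpha=1.5)
print(F_hip, M_hip)
```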
Abstract:
Experts are increasingly being called upon to quantify their knowledge, particularly in situations where data is not yet available or of limited relevance. In many cases this involves asking experts to estimate probabilities. For example, experts in ecology or related fields might be called upon to estimate probabilities of incidence or abundance of species, and how they relate to environmental factors. Although many ecologists undergo some training in statistics at undergraduate and postgraduate levels, this does not necessarily focus on the interpretation of probabilities. More accurate elicitation can be obtained by training experts prior to elicitation and, if necessary, tailoring elicitation to address the expert’s strengths and weaknesses. Here we address the first step of diagnosing conceptual understanding of probabilities. We refer to the psychological literature, which identifies several common biases or fallacies that arise during elicitation. These form the basis for developing a diagnostic questionnaire as a tool for supporting accurate elicitation, particularly when several experts or elicitors are involved. We report on a qualitative assessment of results from a pilot of this questionnaire. These results raise several implications for training experts, not only prior to elicitation, but more strategically by targeting them whilst they are still undergraduate or postgraduate students.
Abstract:
Draglines are extremely large machines that are widely used in open-cut coal mines for overburden stripping. Since 1994 we have been working toward the development of a computer control system capable of automatically driving a dragline for a large portion of its operating cycle. This has necessitated the development and experimental evaluation of sensor systems, machine models, closed-loop controllers, and an operator interface. This paper describes our steps toward this goal through scale-model and full-scale field experimentation.
Abstract:
In the internet age, copyright owners are increasingly looking to online intermediaries to take steps to prevent copyright infringement. Sometimes these intermediaries are closely tied to the acts of infringement; sometimes – as in the case of ISPs – they are not. In 2012, the Australian High Court decided the Roadshow Films v iiNet case, in which it held that an Australian ISP was not liable under copyright’s authorization doctrine, which asks whether the intermediary has sanctioned, approved or countenanced the infringement. The Australian Copyright Act 1968 directs a court to consider, in these situations, whether the intermediary had the power to prevent the infringement and whether it took any reasonable steps to prevent or avoid the infringement. It is generally not difficult for a court to find the power to prevent infringement – power to prevent can include an unrefined technical ability to disconnect users from the copyright source, such as an ISP terminating users’ internet accounts. In the iiNet case, the High Court eschewed this broad approach in favor of focusing on a notion of control that was influenced by principles of tort law. In tort, when a plaintiff asserts that a defendant should be liable for failing to act to prevent harm caused to the plaintiff by a third party, there is a heavy burden on the plaintiff to show that the defendant had a duty to act. The duty must be clear and specific, and will often hinge on the degree of control that the defendant was able to exercise over the third party. Control in these circumstances relates directly to control over the third party’s actions in inflicting the harm. Thus, in iiNet’s case, the control would need to be directed to the third party’s infringing use of BitTorrent; control over a person’s ability to access the internet is too imprecise. Further, when considering omissions to act, tort law differentiates between the ability to control and the ability to hinder. The ability to control may establish a duty to act, and the court will then look to small measures taken to prevent the harm to determine whether these satisfy the duty. But the ability to hinder will not suffice to establish liability in the absence of control. This article argues that an inquiry grounded in control as defined in tort law would provide a more principled framework for assessing the liability of passive intermediaries in copyright. In particular, it would set a higher, more stable benchmark for determining the copyright liability of passive intermediaries, based on the degree of actual, direct control that the intermediary can exercise over the infringing actions of its users. This approach would provide greater clarity and consistency than has existed to date in this area of copyright law in Australia.
Abstract:
The surfaces of natural beidellite clay were modified with a cationic surfactant, tetradecyltrimethylammonium bromide, at different concentrations. The organo-beidellites were analysed using thermogravimetric analysis, which shows four thermal oxidation/decomposition steps. The first mass-loss step is observed from room temperature to 130 °C and is due to the dehydration of adsorbed water. The second mass-loss step, between 130 and 400 °C, is attributed to the oxidation of the intercalated organic surfactant with the formation of charcoal. The third mass loss occurs between 400 and 500 °C and is assigned to the loss of hydroxyl groups on the edges of the clay and the further oxidation of charcoal. The fourth step, between 500 and 700 °C, is ascribed to the loss of structural OH units as well as the final oxidation/decomposition of charcoal. Thermogravimetric analysis has proven to be a useful tool for estimating the amount of loaded surfactant.
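A small sketch of how the four mass-loss steps could be quantified from a thermogravimetric curve, using the temperature windows reported above; the function, step labels and any curve fed to it are illustrative, not the study’s data or procedure.

```python
import numpy as np

# Temperature windows (°C) of the four oxidation/decomposition steps described above.
STEPS = {
    "dehydration of adsorbed water":   (25, 130),
    "surfactant oxidation (charcoal)": (130, 400),
    "edge OH loss + charcoal oxidation": (400, 500),
    "structural OH loss + final oxidation": (500, 700),
}

def mass_loss_per_step(temperature, mass_percent, steps=STEPS):
    """Mass loss (% of initial mass) within each temperature window of a TGA curve."""
    T = np.asarray(temperature, dtype=float)
    m = np.asarray(mass_percent, dtype=float)
    losses = {}
    for name, (t_lo, t_hi) in steps.items():
        losses[name] = np.interp(t_lo, T, m) - np.interp(t_hi, T, m)
    return losses

# Usage with a measured curve: losses = mass_loss_per_step(T_data, mass_data)
```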
Abstract:
This thesis considers whether the Australian Privacy Commissioner's use of its powers supports compliance with the requirement to ‘take reasonable steps’ to protect personal information in National Privacy Principle 4 of the Privacy Act 1988 (Cth). Two unique lenses were used: first, the Commissioner's use of powers was assessed against the principles of transparency, balance and vigorousness; and secondly, against alignment with an industry practice approach to securing information. Following a comprehensive review of publicly available materials, interviews and investigation file records, this thesis found that the Commissioner's use of its powers has not been transparent, balanced or vigorous, nor has it been supportive of an industry practice approach to securing data. Accordingly, it concludes that the Privacy Commissioner's use of its regulatory powers is unlikely to result in any significant improvement to the security of personal information held by organisations in Australia.