927 results for Derivación nominal
Abstract:
Considering the wide spectrum of situations it may encounter, a robot navigating autonomously in outdoor environments needs to be endowed with several operating modes, for robustness and efficiency reasons. Indeed, the terrain it has to traverse may be composed of flat or rough areas, low-cohesion soils such as sand dunes, concrete roads, etc. Traversing these various kinds of environment calls for different navigation and/or locomotion functionalities, especially if the robot is endowed with different locomotion abilities, as are the robots WorkPartner, Hylos [4], Nomad or the Marsokhod rovers. Numerous rover navigation techniques have been proposed, each suited to a particular environmental context (e.g. path following, obstacle avoidance in more or less cluttered environments, rough terrain traverses). However, few contributions in the literature tackle the problem of autonomously selecting the most suitable mode [3]. Most of the existing work is indeed devoted to the passive analysis of a single navigation mode, as in [2]. Fault detection is of course essential: one can imagine that proper monitoring of the Mars Exploration Rover Opportunity could have prevented the rover from being stuck in a dune for several weeks, by detecting the non-nominal behavior of some parameters. But the ability to recover from the anticipated problem by switching to a better-suited navigation mode would bring greater autonomy, and therefore better overall efficiency. We propose here a probabilistic framework to achieve this, which fuses environment-related and robot-related information in order to actively control the rover operations.
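The abstract does not give the framework's equations; the sketch below shows, with invented terrain classes, likelihoods and utilities, one way such a probabilistic mode selector could fuse terrain observations with robot-state cues and pick the navigation mode of maximum expected utility.

```python
# Minimal sketch of probabilistic navigation-mode selection (hypothetical values).
# The terrain belief is updated from an observation likelihood, then the mode with
# the highest expected utility under that belief is chosen.

TERRAINS = ["flat", "rough", "sand"]
MODES = ["path_following", "obstacle_avoidance", "rough_terrain"]

# P(observation cue | terrain): hypothetical likelihoods for a single fused cue,
# e.g. combining a vision-based terrain label with wheel-slip measurements.
LIKELIHOOD = {
    "low_slip":  {"flat": 0.7, "rough": 0.2, "sand": 0.1},
    "high_slip": {"flat": 0.1, "rough": 0.3, "sand": 0.6},
}

# Utility of running each mode on each terrain (hypothetical).
UTILITY = {
    "path_following":     {"flat": 1.0, "rough": 0.2, "sand": 0.1},
    "obstacle_avoidance": {"flat": 0.7, "rough": 0.8, "sand": 0.3},
    "rough_terrain":      {"flat": 0.4, "rough": 0.9, "sand": 0.8},
}


def update_belief(belief, cue):
    """Bayesian update of the terrain belief given an observed cue."""
    posterior = {t: LIKELIHOOD[cue][t] * belief[t] for t in TERRAINS}
    norm = sum(posterior.values())
    return {t: p / norm for t, p in posterior.items()}


def select_mode(belief):
    """Pick the navigation mode with maximum expected utility under the belief."""
    return max(MODES, key=lambda m: sum(UTILITY[m][t] * belief[t] for t in TERRAINS))


if __name__ == "__main__":
    belief = {t: 1.0 / len(TERRAINS) for t in TERRAINS}  # uniform prior
    for cue in ["low_slip", "high_slip", "high_slip"]:
        belief = update_belief(belief, cue)
        print(cue, {t: round(p, 2) for t, p in belief.items()}, "->", select_mode(belief))
```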
Abstract:
Purpose The goal of this work was to set out a methodology for measuring and reporting small field relative output and to assess the application of published correction factors across a population of linear accelerators. Methods and materials Measurements were made at 6 MV on five Varian iX accelerators using two PTW T60017 unshielded diodes. Relative output readings and profile measurements were made for nominal square field sizes of side 0.5 to 1.0 cm. The actual in-plane (A) and cross-plane (B) field widths were taken to be the FWHM at the 50% isodose level. An effective field size, defined as $\mathrm{FS}_{\text{eff}} = \sqrt{A \cdot B}$, was calculated and is presented as a field size metric. $\mathrm{FS}_{\text{eff}}$ was used to linearly interpolate between published Monte Carlo (MC) calculated $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ values to correct for the diode over-response in small fields. Results The relative output data reported as a function of the nominal field size differed across the accelerator population by up to nearly 10%. However, using the effective field size for reporting showed that the actual output ratios were consistent across the accelerator population to within the experimental uncertainty of ±1.0%. Correcting the measured relative output using $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ at both the nominal and effective field sizes produced output factors that were not identical but differed by much less than the reported experimental and/or MC statistical uncertainties. Conclusions In general, the proposed methodology removes much of the ambiguity in reporting and interpreting small field dosimetric quantities and facilitates a clear dosimetric comparison across a population of linacs.
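To illustrate the correction step, the sketch below interpolates a correction-factor table at the measured effective field size and applies it to a raw output ratio; the field sizes and $k$ values in the table are placeholders, not the published Monte Carlo data.

```python
# Sketch: apply a small-field output correction factor interpolated at the
# effective field size FS_eff = sqrt(A * B).  Table values are placeholders.
import math
import numpy as np

# Hypothetical correction factors k(Qclin,Qmsr; fclin,fmsr) for an unshielded
# diode, indexed by square field size (cm).
FIELD_SIZE_CM = np.array([0.5, 0.6, 0.8, 1.0])
K_CORRECTION = np.array([0.95, 0.96, 0.98, 0.99])


def effective_field_size(a_cm: float, b_cm: float) -> float:
    """Effective (equivalent square) field size from measured FWHM widths A and B."""
    return math.sqrt(a_cm * b_cm)


def corrected_output(reading_ratio: float, a_cm: float, b_cm: float) -> float:
    """Interpolate the correction factor at FS_eff and apply it to the raw ratio."""
    fs_eff = effective_field_size(a_cm, b_cm)
    k = np.interp(fs_eff, FIELD_SIZE_CM, K_CORRECTION)
    return reading_ratio * k


if __name__ == "__main__":
    # A nominal 0.6 cm field whose measured FWHM widths are 0.68 cm x 0.64 cm.
    print(round(corrected_output(0.62, 0.68, 0.64), 4))
```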
Abstract:
Purpose This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting the acceptable uncertainty on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or if field size uncertainties were reduced to 0.5 mm, field sizes < 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as the output factor measurement for each field size setting and very precise detector alignment, is required at field sizes at least < 12 mm and, more conservatively, < 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
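The practical definition can be expressed as a simple numerical test: flag a field as "very small" when a ±1 mm field-size error shifts the output factor by more than the chosen tolerance. The sketch below uses a toy output-factor curve rather than the paper's Monte Carlo results.

```python
# Sketch: flag field sizes where a +/-1 mm field-size error changes the output
# factor by more than a tolerance (the practical "very small field" test).
# The OPF values below come from a toy curve and are illustrative only.
import numpy as np

FIELD_MM = np.arange(4, 31)                      # square field side length (mm)
OPF = 1.0 - 0.55 * np.exp(-FIELD_MM / 7.0)       # toy output-factor curve


def is_very_small(field_mm: float, error_mm: float = 1.0, tol: float = 0.01) -> bool:
    """True if an error of +/-error_mm in field size shifts the OPF by more than tol."""
    opf = np.interp(field_mm, FIELD_MM, OPF)
    opf_lo = np.interp(field_mm - error_mm, FIELD_MM, OPF)
    opf_hi = np.interp(field_mm + error_mm, FIELD_MM, OPF)
    return max(abs(opf_hi - opf), abs(opf - opf_lo)) / opf > tol


if __name__ == "__main__":
    for f in (8, 12, 15, 20, 30):
        print(f, "mm ->", "very small" if is_very_small(f) else "not very small")
```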
Abstract:
This paper discusses a model of the civil aviation regulation framework and shows how the current assessment of reliability and risk for piloted aircraft has limited applicability for Unmanned Aircraft Systems (UAS) with high levels of autonomous decision making. Then, a new framework for risk management of robust autonomy is proposed, which arises from combining quantified measures of risk with normative decision making. The term Robust Autonomy describes the ability of an autonomous system to either continue or abort its operation whilst not breaching a minimum level of acceptable safety in the presence of anomalous conditions. The decision making associated with risk management requires quantifying probabilities associated with the measures of risk and also consequences of outcomes related to the behaviour of autonomy. The probabilities are computed from an assessment under both nominal and anomalous scenarios described by faults, which can be associated with the aircraft’s actuators, sensors, communication link, changes in dynamics, and the presence of other aircraft in the operational space. The consequences of outcomes are characterised by a loss function which rewards the certification decision.
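The abstract does not specify the loss function or scenario probabilities; the sketch below shows, with invented numbers, how quantified risk under nominal and anomalous scenarios and a loss table could be combined into a continue/abort decision by minimising expected loss.

```python
# Sketch: continue/abort decision from quantified risk (hypothetical values).
# The expected loss of each action is computed from scenario probabilities and a
# loss table, and the action with the smaller expected loss is selected.

SCENARIOS = ["nominal", "actuator_fault", "sensor_fault", "lost_link"]

# P(scenario | current health assessment): hypothetical.
P_SCENARIO = {"nominal": 0.90, "actuator_fault": 0.04, "sensor_fault": 0.04, "lost_link": 0.02}

# Loss of each action in each scenario (hypothetical, arbitrary units).
LOSS = {
    "continue": {"nominal": 0.0, "actuator_fault": 50.0, "sensor_fault": 30.0, "lost_link": 80.0},
    "abort":    {"nominal": 5.0, "actuator_fault": 5.0,  "sensor_fault": 5.0,  "lost_link": 10.0},
}


def expected_loss(action: str) -> float:
    """Expected loss of an action over the scenario distribution."""
    return sum(P_SCENARIO[s] * LOSS[action][s] for s in SCENARIOS)


if __name__ == "__main__":
    losses = {a: expected_loss(a) for a in LOSS}
    print(losses, "->", min(losses, key=losses.get))
```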
Abstract:
Introduction Since 1992 there have been several articles published on research on plastic scintillators for use in radiotherapy. Plastic scintillators are said to be tissue equivalent, temperature independent and dose rate independent [1]. Although their properties were found to be promising for measurements in megavoltage X-ray beams, there were some technical difficulties with regard to commercialisation. Standard Imaging has produced the first commercial system, which is now available for use in a clinical setting. The Exradin W1 scintillator device uses a dual fibre system in which one fibre is connected to the plastic scintillator and the other fibre measures only Cerenkov radiation [2]. This paper presents results obtained during commissioning of this dosimeter system. Methods All tests were performed on a Novalis Tx linear accelerator equipped with a 6 MV SRS photon beam and conventional 6 and 18 MV X-ray beams. The following measurements were performed in a Virtual Water phantom at the depth of dose maximum. Linearity: the dose delivered was varied between 0.2 and 3.0 Gy for the same field conditions. Dose rate dependence: for this test the repetition rate of the linac was varied between 100 and 1,000 MU/min; a nominal dose of 1.0 Gy was delivered for each rate. Reproducibility: a total of five irradiations for the same setup. Results The W1 detector gave a highly linear relationship between dose and the number of Monitor Units delivered for a 10 × 10 cm² field size at an SSD of 100 cm. The linearity was within 1 % at the high dose end and about 2 % at the very low dose end. For the dose rate dependence, the dose measured as a function of repetition rate (100–1,000 MU/min) showed a maximum deviation of 0.9 %. The reproducibility was found to be better than 0.5 %. Discussion and conclusions The results for this system, a new dosimetry system available for clinical use, look promising so far. However, further investigation is needed to produce a full characterisation prior to use in megavoltage X-ray beams.
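Dual-fibre systems of this kind typically remove the Cerenkov contribution by subtracting a scaled reading of the Cerenkov-only channel. The sketch below illustrates that two-channel subtraction with a hypothetical gain and Cerenkov light ratio (CLR); it is not the calibration of the commissioned system.

```python
# Sketch: two-channel Cerenkov subtraction for a dual-fibre scintillator system.
# The gain and Cerenkov light ratio (CLR) values are hypothetical; in practice
# both are determined from calibration irradiations.

GAIN = 2.95   # Gy per corrected charge unit -- hypothetical
CLR = 0.72    # Cerenkov light ratio between the two channels -- hypothetical


def dose_from_channels(ch1: float, ch2: float) -> float:
    """Dose from the scintillator channel (ch1) and Cerenkov-only channel (ch2).

    ch1 carries scintillation plus Cerenkov light; ch2 carries Cerenkov light only,
    so the Cerenkov contribution is removed via the CLR before applying the gain.
    """
    return GAIN * (ch1 - CLR * ch2)


if __name__ == "__main__":
    # Hypothetical channel readings for a delivery of roughly 1.0 Gy.
    print(round(dose_from_channels(0.412, 0.098), 3), "Gy")
```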
Abstract:
Introduction The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field size the accuracy of output factors is no longer affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a ‘small’ field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the cause of the rapid dose reduction is quantified, and leads to the establishment of a theoretical definition of a ‘small’ field. Methods Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition of a small field is as follows: if the output factor changes by ±1.0 % given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and ‘traditional’ effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each affected the change in output factor at small field sizes. Results The use of our practical definition resulted in field sizes of 15 mm or less being characterised as ‘small’. The change in field size had a greater effect than that of detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose on the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm. Discussion and conclusions The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to be dependent on linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the ‘traditional’ definition of a small field [3]), it has been shown that it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the most dominant effect.
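Schematically, and assuming the separation described above is multiplicative (the abstract does not state the exact form), the factorisation can be written as

$$\mathrm{OPF} = S_c \, S_p, \qquad S_c = S_c^{\text{occl}} \, S_c^{\text{trad}}, \qquad S_p = S_p^{\text{phot}} \, S_p^{\text{diseq}},$$

where $S_c$ and $S_p$ are the collimator and phantom scatter factors, $S_c^{\text{occl}}$ and $S_c^{\text{trad}}$ are the source-occlusion and ‘traditional’ (flattening filter and jaw scatter) components, and $S_p^{\text{phot}}$ and $S_p^{\text{diseq}}$ are the photon-scatter and electronic-disequilibrium components.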
Abstract:
Introduction This study investigates uncertainties pertaining to the use of optically stimulated luminescence dosimeters (OSLDs) in radiotherapy dosimetry. The sensitivity of the luminescent material is related to the density of recombination centres [1], which is in the range of 10¹⁵–10¹⁶ cm⁻³. Because of this non-uniform distribution of traps arising in crystal growth, the sensitivity varies substantially within a batch of dosimeters. However, a quantitative understanding of the relationship between the response of an OSLD and its sensitive volume has not yet been investigated or reported in the literature. Methods In this work, OSLDs were scanned with a MicroCT scanner to determine potential sources of the variation in relative sensitivity across a selection of Landauer nanoDot dosimeters. Specifically, the correlation between a dosimeter's relative sensitivity and the loading density of Al₂O₃:C powder was determined. Results When extrapolating the sensitive volume's radiodensity from the CT data, it was shown that there is a non-uniform distribution in crystal growth, as illustrated in Fig. 1. A plot of voxel count versus the element-specific correction factor is shown in Fig. 2, where each point represents a single OSLD. A fitted line has an R²-value of 0.69 and a P-value of 8.21 × 10⁻¹⁹. These data show that the response of a dosimeter decreases proportionally with decreasing sensitive volume. Extrapolating from these data, a quantitative relationship between response and sensitive volume was roughly determined for this batch of dosimeters. A change in volume of 1.176 × 10⁻⁵ cm³ corresponds to a 1 % change in response. In other words, a 0.05 % change in the nominal volume of the chip would result in a 1 % change in response. Discussion and conclusions This work demonstrated that the amount of sensitive material is approximately linked to the total correction factor. Furthermore, the ‘true’ volume of an OSLD's sensitive material is, on average, 17.90 % less than that which has been reported in the literature, mainly due to the presence of air cavities in the material's structure. Finally, the potential effects of the inaccuracy of Al₂O₃:C deposition increase with decreasing chip size. If a luminescent dosimeter were manufactured with a smaller volume than currently employed using the same manufacturing protocol, the variation in response from chip to chip would more than likely exceed the current 5 % range.
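As a rough worked example of the reported slope (1 % response change per 1.176 × 10⁻⁵ cm³ of sensitive material), the sketch below converts a volume deficit into an implied response change; the deficit value is illustrative only.

```python
# Sketch: response change implied by a deficit in OSLD sensitive volume, using the
# slope reported above (1% response per 1.176e-5 cm^3).  The example deficit is
# illustrative only.
SLOPE_CM3_PER_PERCENT = 1.176e-5


def response_change_percent(volume_deficit_cm3: float) -> float:
    """Response change (%) implied by a given deficit in sensitive volume."""
    return -volume_deficit_cm3 / SLOPE_CM3_PER_PERCENT


if __name__ == "__main__":
    print(round(response_change_percent(2.35e-5), 1), "%")  # -> -2.0 %
```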
Abstract:
In Gideona v Nominal Defendant [2005] QCA 261, the Queensland Court of Appeal reconsidered the question of the material time for determining whether registration of a motor vehicle is required. The Court declined to follow the decision in Kelly v Alford [1988] 1 Qd R 404, deciding instead that the material time was the time when the accident occurred.
Abstract:
The decisions in Perdis v The Nominal Defendant [2003] QCA 555, Miller v The Nominal Defendant [2003] QCA 558 and Piper v The Nominal Defendant [2003] QCA 557 were handed down contemporaneously by the Queensland Court of Appeal on 15 December 2003. They consider important issues as to the construction of key provisions of the Motor Accident Insurance Act 1994 (Qld).
Abstract:
In Nominal Defendant v Kisse [2001] QDC 290, a person suffered personal injury caused by a motor vehicle in circumstances where there was a cause of action to which the Motor Accident Insurance Act 1994 applied. The person died before taking the steps required under Pt 4 of the Act and before commencing litigation to enforce that cause of action. The decision also involved a costs order against solicitors on an indemnity basis, providing a timely reminder to practitioners of the importance of ensuring they have proper authority before commencing any court proceedings.
Abstract:
Intelligent Transport System (ITS) technology is seen as a cost-effective way to increase the conspicuity of approaching trains and the effectiveness of train warnings at level crossings by providing an in-vehicle warning of an approaching train. The technology is often seen as a potential low-cost alternative to upgrading passive level crossings with traditional active warning systems (flashing lights and boom barriers). ITS platforms provide sensor, localization and dedicated short-range communication (DSRC) technologies to support cooperative applications such as collision avoidance for road vehicles. In recent years, in-vehicle warning systems based on ITS technology have been trialed at numerous locations around Australia, at level crossing sites with active and passive controls. While significant research has been conducted on the benefits of the technology in nominal operating modes, little research has focused on the effects of the failure modes, the human factors implications of unreliable warnings and the technology adoption process from the railway industry’s perspective. Many ITS technology suppliers originate from the road industry and often have limited awareness of the safety assurance requirements, operational requirements and legal obligations of railway operators. This paper aims to raise awareness of these issues and start a discussion on how such technology could be adopted. This paper will describe several ITS implementation scenarios and discuss failure modes, human factors considerations and the impact these scenarios are likely to have in terms of safety, railway safety assurance requirements and the practicability of meeting these requirements. The paper will identify the key obstacles impeding the adoption of ITS systems for the different implementation scenarios and outline a possible path forward towards the adoption of ITS technology.
Abstract:
Cold-formed steel members have many advantages over hot-rolled steel members. However, they are susceptible to various buckling modes at stresses below the yield stress of the member because of their relatively high width-to-thickness ratio. Web crippling is one of the failure modes that can occur when the members are subjected to high concentrated transverse loadings and/or reactions. The four common loading conditions are the end-one-flange (EOF), interior-one-flange (IOF), end-two-flange (ETF) and interior-two-flange (ITF) loadings. Recently a new test method has been proposed by AISI to obtain the web crippling capacities under these four loading conditions. Using this test method, 38 tests were conducted in this research to investigate the web crippling behaviour and strength of channel beams under ETF and ITF loading. Unlipped channel sections with a nominal yield stress of 450 MPa were tested with different web slenderness ratios and bearing lengths. The flanges of these channel sections were not fastened to the supports. In this research the suitability of the current design rules in AS/NZS 4600 and the AISI S100 Specification for unlipped channels subject to web crippling was investigated, and suitable modifications were proposed where necessary. In addition, a new design rule based on the direct strength method was proposed to predict the web crippling capacities of the tested beams. This paper presents the details of this experimental study and its results.
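For context, the design checks referred to above are based on the unified web crippling expression used in AS/NZS 4600 and AISI S100, sketched below; the coefficient values are placeholders, since the standards tabulate C, C_R, C_N and C_h by section type, loading case (EOF/IOF/ETF/ITF) and flange support condition.

```python
# Sketch: unified web crippling strength equation of the AS/NZS 4600 / AISI S100 form.
# Coefficient values below are placeholders; the standards tabulate C, C_R, C_N and
# C_h by section type, loading case and flange support condition.
import math


def web_crippling_capacity(t, fy, theta_deg, ri, N, h,
                           C=4.0, C_R=0.14, C_N=0.35, C_h=0.02):
    """Nominal web crippling capacity (N) of a single web.

    t: web thickness (mm), fy: yield stress (MPa), theta_deg: web inclination (deg),
    ri: inside bend radius (mm), N: bearing length (mm), h: flat web depth (mm).
    """
    return (C * t**2 * fy * math.sin(math.radians(theta_deg))
            * (1 - C_R * math.sqrt(ri / t))
            * (1 + C_N * math.sqrt(N / t))
            * (1 - C_h * math.sqrt(h / t)))


if __name__ == "__main__":
    # Hypothetical unlipped channel: 1.9 mm web, 450 MPa nominal yield stress,
    # vertical web, 4 mm inside radius, 50 mm bearing length, 100 mm flat web depth.
    print(round(web_crippling_capacity(1.9, 450.0, 90.0, 4.0, 50.0, 100.0) / 1e3, 1), "kN")
```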
Abstract:
Capacity measurement and reduction has emerged as a major international issue in the new millennium. However, there has been limited assessment of the success of capacity reduction schemes (CRS). In this paper, the success of a CRS is assessed for a European fishery characterised by differences in the efficiency levels of individual boats. In such a fishery, given the assumption that the least efficient producers are the first to exit through a CRS, the reduction in harvesting capacity is less than the nominal reduction in physical fleet capacity. Further, there is potential for harvesting capacity to increase if the remaining vessels improve their efficiency.
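A small worked example illustrates the point: if the least efficient boats exit first, the fall in harvesting capacity is smaller than the nominal fall in fleet capacity. All fleet sizes and efficiency scores below are invented.

```python
# Sketch: why removing the least efficient boats cuts harvesting capacity by less
# than the nominal (physical) fleet reduction.  All numbers are invented.

# Each boat: (nominal capacity units, technical efficiency score in [0, 1]).
fleet = [(100, 0.5), (100, 0.6), (100, 0.7), (100, 0.8), (100, 0.9), (100, 1.0)]

def nominal_capacity(boats):
    return sum(cap for cap, _ in boats)

def harvesting_capacity(boats):
    return sum(cap * eff for cap, eff in boats)

# A CRS that removes the two least efficient boats (one third of the fleet).
remaining = sorted(fleet, key=lambda b: b[1])[2:]

nominal_cut = 1 - nominal_capacity(remaining) / nominal_capacity(fleet)
harvest_cut = 1 - harvesting_capacity(remaining) / harvesting_capacity(fleet)

print(f"nominal fleet capacity reduced by {nominal_cut:.0%}")   # ~33%
print(f"harvesting capacity reduced by {harvest_cut:.0%}")      # ~24%
```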
Abstract:
There is an increasing desire and emphasis to integrate assessment tools into the everyday training environment of athletes. These tools are intended to fine-tune athlete development, enhance performance and aid in the development of individualised programmes for athletes. The areas of workload monitoring, skill development and injury assessment are expected to benefit from such tools. This paper describes the development of an instrumented leg press and its application to testing leg dominance with a cohort of athletes. The developed instrumented leg press is a 45° reclining sled-type leg press with dual force plates, a displacement sensor and a CCD camera. A custom software client was developed using C#. The software client enabled near-real-time display of forces beneath each limb together with displacement of the quad track roller system and video feedback of the exercise. In recording mode, the collection of athlete particulars is prompted at the start of the exercise, and pre-set thresholds are used subsequently to separate the data into epochs from each exercise repetition. The leg press was evaluated in a controlled study of a cohort of physically active adults who performed a series of leg press exercises. The leg press exercises were undertaken at a set cadence with nominal applied loads of 50%, 100% and 150% of body weight without feedback. A significant asymmetry in loading of the limbs was observed in healthy adults during both the eccentric and concentric phases of the leg press exercise (P < .05). Mean forces were significantly higher beneath the non-dominant limb (4–10%) and during the concentric phase of the muscle action (5%). Given that symmetrical loading is often emphasized during strength training and remains a common goal in sports rehabilitation, these findings highlight the clinical potential for this instrumented leg press system to monitor symmetry in lower-limb loading during progressive strength training and sports rehabilitation protocols.
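A minimal sketch of the threshold-based repetition segmentation and limb-asymmetry summary described above is given below, shown in Python rather than the C# client; the threshold value, the synthetic force traces and the asymmetry index definition are assumptions, not the system's actual implementation.

```python
# Sketch: repetition segmentation and limb-asymmetry summary from dual force-plate
# data, mirroring the threshold-based epoching described above.  The threshold and
# the synthetic traces are assumptions.
import numpy as np


def segment_repetitions(total_force, threshold=100.0):
    """Return (start, end) index pairs where the total force stays above threshold."""
    active = total_force > threshold
    edges = np.flatnonzero(np.diff(active.astype(int)))
    starts = edges[::2] + 1
    ends = edges[1::2] + 1
    return list(zip(starts, ends))


def asymmetry_index(left, right):
    """Percent difference in mean force between limbs (positive = right higher)."""
    return 100.0 * (right.mean() - left.mean()) / ((right.mean() + left.mean()) / 2.0)


if __name__ == "__main__":
    t = np.linspace(0, 10, 1000)
    left = 400 * np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)   # synthetic left-limb force (N)
    right = 1.06 * left                                           # right limb ~6% higher
    for start, end in segment_repetitions(left + right):
        ai = asymmetry_index(left[start:end], right[start:end])
        print(f"rep {start}-{end}: asymmetry {ai:.1f}%")
```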
Abstract:
This paper deals with constrained image-based visual servoing of circular and conical spiral motion about an unknown object that is approximated by a single image point feature. Effective visual control of such trajectories has many applications for small unmanned aerial vehicles, including surveillance and inspection, forced landing (homing), and collision avoidance. A spherical camera model is used to derive a novel visual-predictive controller (VPC) using stability-based design methods for general nonlinear model-predictive control. In particular, a quasi-infinite horizon visual-predictive control scheme is derived. A terminal region, imposed as a constraint in the controller structure, can be used to guide the choice of appropriate reference image features for spiral tracking with respect to nominal stability and feasibility. Robustness properties are also discussed with respect to parameter uncertainty and additive noise. A comparison with competing visual-predictive control schemes is made, and some experimental results using a small quadrotor platform are given.
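The abstract does not give the controller equations; the following receding-horizon sketch is only in the spirit of visual-predictive control, replacing the spherical camera model with a toy linear image-feature model and the terminal-region constraint with simple box input bounds.

```python
# Sketch: receding-horizon (predictive) tracking of an image-feature reference.
# A toy linear feature model replaces the paper's spherical-camera dynamics, and
# box input bounds stand in for the terminal-region constraint.  Illustrative only.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 8
U_MAX = 0.5                       # input bound (box constraint)


def step(s, u):
    """Toy image-feature dynamics: feature position s driven directly by input u."""
    return s + DT * u


def horizon_cost(u_flat, s0, reference):
    """Quadratic tracking cost over the horizon, plus a small input penalty."""
    u = u_flat.reshape(HORIZON, 2)
    s, cost = s0, 0.0
    for k in range(HORIZON):
        s = step(s, u[k])
        cost += np.sum((s - reference[k]) ** 2) + 0.01 * np.sum(u[k] ** 2)
    return cost


def mpc_control(s0, reference):
    """Solve the finite-horizon problem and return the first control input."""
    u0 = np.zeros(2 * HORIZON)
    bounds = [(-U_MAX, U_MAX)] * (2 * HORIZON)
    res = minimize(horizon_cost, u0, args=(s0, reference), bounds=bounds)
    return res.x[:2]


if __name__ == "__main__":
    s = np.array([1.0, 0.0])
    for t in range(30):
        # Circular (spiral-like) reference trajectory in the image plane.
        ref = np.array([[np.cos(0.2 * (t + k)), np.sin(0.2 * (t + k))]
                        for k in range(1, HORIZON + 1)])
        s = step(s, mpc_control(s, ref))
    print("final feature position:", np.round(s, 3))
```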