164 results for output-only


Relevance: 20.00%

Abstract:

Hot spot identification (HSID) aims to identify potential sites—roadway segments, intersections, crosswalks, interchanges, ramps, etc.—with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to the misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues like underreporting of minor injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and quantile regression to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean as most methods in practice do, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial (NB) regression. Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing with the preponderance-of-zeros problem and right-skewed data.
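The screening step described above can be sketched in a few lines. The sketch below, a minimal illustration rather than the authors' code, assumes a hypothetical site-level file (segments.csv) with hypothetical columns (epdo, aadt, length_km): it fits a conditional 90th-percentile model with statsmodels and flags sites whose observed equivalent-PDO count exceeds that quantile.

```python
# A minimal sketch of quantile-regression hot spot screening.
# File name and column names are hypothetical, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("segments.csv")  # columns: epdo, aadt, length_km

# Fit the 90th-percentile conditional quantile of equivalent PDO crashes.
fit = smf.quantreg("epdo ~ aadt + length_km", df).fit(q=0.90)

# Candidate hot spots: sites whose observed equivalent-PDO count exceeds
# the fitted 90th percentile for comparable sites.
df["q90_pred"] = fit.predict(df)
hot_spots = df[df["epdo"] > df["q90_pred"]]
print(hot_spots.sort_values("epdo", ascending=False).head())
```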

Relevance: 20.00%

Abstract:

Everyone knows there’s a problem with copyright. Artists get paid very little for their work, and legitimate consumers aren’t getting a very fair deal either. Unfortunately, nobody agrees about how we should fix it. Speaking at the Australian Digital Alliance forum last Friday, the Attorney-General and Arts Minister George Brandis said we might have to ask Internet Service Providers (ISPs) to police copyright, in order to deal with “piracy”. In 2012, the High Court in the iiNet case thought it wasn’t a good idea to make ISPs responsible for protecting the rights of third parties...

Relevance: 20.00%

Abstract:

The bed nucleus of the stria terminalis (BNST) is believed to be a critical relay between the central nucleus of the amygdala (CE) and the paraventricular nucleus of the hypothalamus in the control of hypothalamic–pituitary–adrenal (HPA) responses elicited by conditioned fear stimuli. If correct, lesions of CE or BNST should block expression of HPA responses elicited by either a specific conditioned fear cue or a conditioned context. To test this, rats were subjected to cued (tone) or contextual classical fear conditioning. Two days later, electrolytic or sham lesions were placed in CE or BNST. After 5 days, the rats were tested for both behavioral (freezing) and neuroendocrine (corticosterone) responses to tone or contextual cues. CE lesions attenuated conditioned freezing and corticosterone responses to both tone and context. In contrast, BNST lesions attenuated these responses to contextual but not tone stimuli. These results suggest CE is indeed an essential output of the amygdala for the expression of conditioned fear responses, including HPA responses, regardless of the nature of the conditioned stimulus. However, because lesions of BNST only affected behavioral and endocrine responses to contextual stimuli, the results do not support the notion that BNST is critical for HPA responses elicited by conditioned fear stimuli in general. Instead, the BNST may be essential specifically for contextual conditioned fear responses, including both behavioral and HPA responses, by virtue of its connections with the hippocampus, a structure essential to contextual conditioning. The results are also not consistent with the hypothesis that BNST is only involved in unconditioned aspects of fear and anxiety.

Relevance: 20.00%

Abstract:

Purpose This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting the acceptable uncertainty on the OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or the field size uncertainty was reduced to 0.5 mm, field sizes < 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions Extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as the output factor measurement for each field size setting, together with very precise detector alignment, is required at field sizes at least < 12 mm and more conservatively < 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
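The practical definition can be illustrated numerically: given a table of output factors versus field size, flag the field sizes at which a ±1 mm field-size error shifts the OPF by more than 1%. The sketch below uses illustrative placeholder OPF values, not the paper's measured or simulated data.

```python
# Sketch of the practical "very small field" test: find the field sizes
# at which a 1 mm field-size error shifts the output factor (OPF) by
# more than 1%. The OPF values are illustrative placeholders.
import numpy as np

field_mm = np.array([4, 5, 6, 8, 10, 12, 15, 20, 30], dtype=float)
opf = np.array([0.60, 0.68, 0.74, 0.82, 0.87, 0.90, 0.93, 0.96, 0.99])

def worst_case_change(size):
    """Worst-case relative OPF change for a +/-1 mm field-size error."""
    ref = np.interp(size, field_mm, opf)
    lo = np.interp(size - 1.0, field_mm, opf)
    hi = np.interp(size + 1.0, field_mm, opf)
    return max(abs(hi - ref), abs(lo - ref)) / ref

for size in field_mm:
    change = worst_case_change(size)
    tag = "very small" if change > 0.01 else "ok"
    print(f"{size:4.0f} mm: {100 * change:.1f}% -> {tag}")
```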

Relevance: 20.00%

Abstract:

Wind power is one of the world's major renewable energy sources, and its utilization provides an important contribution to solving the energy problems of many countries. After nearly 40 years of development, China's wind power industry now not only manufactures its own massive 6 MW turbines but also has the largest capacity in the world, with a national output of 50 million MW·h in 2010 that is projected to rise to eight times that amount by 2020. This paper investigates this development route by analyzing relevant academic literature, statistics, laws and regulations, policies, and research and industry reports. The main drivers of development in the industry are identified as technologies, turbines, wind farm construction, the pricing mechanism and government support systems, each of which is also divided into different stages with distinctive features. A systematic review of these aspects provides academics and practitioners with a better understanding of the history of the wind power industry in China and the reasons for its rapid development, with a view to enhancing progress in wind power development both in China and worldwide.

Relevance: 20.00%

Abstract:

The failure of medical practitioners to consistently discharge their obligation to report sudden or unnatural deaths to coroners has rightly prompted concern. Following recent public scandals, coroners and health authorities have increasingly developed procedures to ensure that concerning deaths are reported to coroners. However, the negative consequences of deaths being unnecessarily reported have received less attention: unnecessary intrusion into bereavement; a waste of public resources; and added delay and hindrance to the investigation of matters needing a coroner's attention. Traditionally, coroners have largely assumed jurisdiction, without question, over any death for which a medical practitioner has not issued a cause of death certificate. The Office of the State Coroner in Queensland has recently trialled a system to more rigorously assess whether deaths apparently resulting from natural causes, which have been reported to a coroner, should be investigated by the coroner rather than being finalised by a doctor issuing a cause of death certificate. This article describes that trial and its results.

Relevance: 20.00%

Abstract:

Background Doctors have the potential to influence opportunities for normative life experiences in the area of sexuality for individuals with intellectual disability (ID). Method In Study One, 106 doctors completed the Attitudes to Sexuality Questionnaire (Individuals with an Intellectual Disability). In Study Two, 97 doctors completed a modified form of the questionnaire that included additional questions designed to assess their views about sterilisation. Results Attitudes were less positive about parenting than about other aspects of sexuality, and less sexual freedom was seen as desirable for adults with ID. A surprising number of doctors agreed that sterilisation was a desirable practice. Study Two provided data about the conditions under which sterilisation was endorsed. Most doctors reported they had not been approached to perform sterilisations. Only 12% believed medical practitioners receive sufficient training in the area of disability and sexuality. Conclusions The findings have implications for training and professional development for doctors.

Relevance: 20.00%

Abstract:

The current research extends our knowledge of the main effects of attitude, subjective norm, and perceived control on individuals' technology adoption. We propose that social influence plays a critical buffering role, within a collectivistic culture, in the relationship between attitude, perceived behavioral control, and Information Technology (IT) adoption. Adoption behavior was studied among 132 college students being introduced to a new virtual learning system. While past research has mainly treated these three variables as being in parallel relationships, we found a moderating role for subjective norm on the effects of technology attitude and perceived control on adoption intent. Implications and limitations for understanding the role of social influence in a collectivistic society are discussed.
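A moderation effect of this kind is typically tested with interaction terms in a regression model. The sketch below is a generic illustration under assumed variable names (intent, attitude, norm, pbc) and an assumed data file; it is not the study's actual analysis or dataset.

```python
# Generic sketch of a moderation (interaction) test; the file name and
# variable names (intent, attitude, norm, pbc) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # columns: intent, attitude, norm, pbc

# Mean-centre the predictors so the interaction terms are interpretable.
for col in ["attitude", "norm", "pbc"]:
    df[col] = df[col] - df[col].mean()

# Moderation by subjective norm = significant attitude:norm and pbc:norm
# interaction coefficients.
fit = smf.ols("intent ~ attitude * norm + pbc * norm", df).fit()
print(fit.summary())
```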

Relevance: 20.00%

Abstract:

The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. Therefore it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures that the risk of catastrophic structural failures due to under-design, and of expensive waste due to over-design, is minimised. This paper estimates, for the first time, present-day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth-averaged hydrodynamic model has been configured for the Australian continental shelf region and has been forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each coastal grid point of the model, extreme value distributions have been fitted to the derived time series of annual maxima and of the several largest water levels each year to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia, a region mainly impacted by extra-tropical cyclones. However, as the meteorological forcing only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastline. In a companion paper we build on the work presented here to more accurately include tropical cyclone-induced surges in the estimation of extreme water levels. The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities but could be used more widely in the future for a variety of other research and practical applications.
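The annual-maxima step can be illustrated in a few lines: fit a generalized extreme value (GEV) distribution to yearly maximum water levels at one grid point and read off a return level. The sketch below uses a synthetic 61-year series, not the hindcast output, and arbitrary GEV parameters.

```python
# Sketch of the annual-maxima analysis: fit a GEV distribution to 61
# yearly maximum water levels and read off the 100-year return level.
# The series is synthetic, not the hindcast output.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max = genextreme.rvs(-0.1, loc=1.2, scale=0.15, size=61,
                            random_state=rng)  # synthetic 61-year series

# Fit GEV parameters to the annual maxima at one coastal grid point.
shape, loc, scale = genextreme.fit(annual_max)

# The level with a 1% annual exceedance probability (100-year level).
level_100yr = genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"100-year water level: {level_100yr:.2f} m")
```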

Relevance: 20.00%

Abstract:

Motion control systems have a significant impact on the performance of ships and marine structures, allowing them to perform tasks in severe sea states and over long periods of time. Ships are designed to operate with adequate reliability and economy, and in order to achieve this, it is essential to control the motion. For each type of ship and operation performed (transit, landing a helicopter, fishing, deploying and recovering loads, etc.), there are not only desired motion settings, but also limits on the acceptable (undesired) motion induced by the environment. The task of a ship motion control system is therefore to act on the ship so that it follows the desired motion as closely as possible. This book provides an introduction to the field of ship motion control by studying the control system designs for course-keeping autopilots with rudder roll stabilisation and integrated rudder-fin roll stabilisation. These particular designs provide a good overview of the difficulties encountered by designers of ship motion control systems and therefore serve well as an example-driven introduction to the field. The idea of combining the control design of autopilots with that of fin roll stabilisers, and the idea of using rudder-induced roll motion as a sole source of roll stabilisation, seem to have emerged in the late 1960s. Since that time, these control designs have been the subject of continuous and ongoing research. This ongoing interest is a consequence of the significant bearing that the control strategy has on performance, and of the issues associated with control system design. The challenges of these designs lie in devising a control strategy to address the following issues: underactuation, disturbance rejection with a non-minimum-phase system, input and output constraints, model uncertainty, and large unmeasured stochastic disturbances. To date, the majority of the work reported in the literature has focused strongly on some of the design issues, whereas the remaining issues have been addressed using ad hoc approaches. This has provided an additional motivation for revisiting these control designs and looking at the benefits of applying a contemporary design framework, which can potentially address the majority of the design issues.

Relevance: 20.00%

Abstract:

In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near optimal performance. We thus advocate this approach in real applications.
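As a concrete illustration, CE under input constraints amounts to: update a state estimate with an observer, then apply the saturated deterministic feedback law to the estimate as if it were the true state. The sketch below uses an illustrative second-order plant with arbitrary observer and feedback gains and noise level; it is not an example from the book.

```python
# Illustrative certainty-equivalence loop for a constrained single-input
# system: observer update, then saturated feedback on the estimate.
# Plant, gains and noise level are arbitrary, not taken from the book.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # discrete double integrator
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[1.0, 1.6]])              # precomputed feedback gain
L = np.array([[0.5], [0.8]])            # precomputed observer gain
u_max = 1.0                             # input constraint |u| <= u_max

x = np.array([[2.0], [0.0]])            # true (unmeasured) state
x_hat = np.zeros((2, 1))                # state estimate

for k in range(50):
    # CE step: treat x_hat as the true state, then saturate the input.
    u = np.clip(-(K @ x_hat).item(), -u_max, u_max)
    y = C @ x + 0.01 * np.random.randn(1, 1)  # noisy output measurement
    x = A @ x + B * u                         # true plant update
    x_hat = A @ x_hat + B * u + L @ (y - C @ x_hat)  # observer update
```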

Relevance: 20.00%

Abstract:

Introduction The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field size the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a 'small' field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to the establishment of a theoretical definition of a 'small' field. Methods Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition for small field sizes is as follows: if the output factor changes by ±1.0% given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and 'traditional' effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each contributed to the change in output factor at small field sizes. Results The use of our practical definition resulted in field sizes of 15 mm or less being characterised as 'small'. The change in field size had a greater effect than that of detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose to the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm. Discussion and conclusions The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to be dependent on the linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the 'traditional' definition of a small field [3]), it has been shown that it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the most dominant effect.

Relevance: 20.00%

Abstract:

Introduction Total scatter factor (or output factor) in megavoltage photon dosimetry is a measure of relative dose relating a certain field size to a reference field size. The use of solid phantoms has been well established for output factor measurements; however, to date these phantoms have not been tested with small fields. In this work, we evaluate the water equivalency of a number of solid phantoms for small field output factor measurements using the EGSnrc Monte Carlo code. Methods The following small square field sizes were simulated using BEAMnrc: 5, 6, 7, 8, 10 and 30 mm. Each simulated phantom geometry was created in DOSXYZnrc and consisted of a silicon diode (of length and width 1.5 mm and depth 0.5 mm) submerged in the phantom at a depth of 5 g/cm2. The source-to-detector distance was 100 cm for all simulations. The dose was scored in a single voxel at the location of the diode. Interaction probabilities and radiation transport parameters for each material were created using custom PEGS4 files. Results A comparison of the output factors in the solid phantoms with the same factors in a water phantom is shown in Fig. 1. The statistical uncertainty in each point was less than or equal to 0.4%. The results in Fig. 1 show that the density of the phantoms affected the output factor results, with higher density materials (such as PMMA) resulting in higher output factors. Additionally, it was calculated that scaling the depth for equivalent path length had a negligible effect on the output factor results at these field sizes. Discussion and conclusions Electron stopping power and photon mass energy absorption change minimally with small field size [1]. Also, it can be seen from Fig. 1 that the difference from water decreases with increasing field size. Therefore, the most likely cause of the observed discrepancies in output factors is differing electron disequilibrium as a function of phantom density. When measuring small field output factors in a solid phantom, it is important that the density is very close to that of water.
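The water-equivalency check described above reduces to comparing phantom-to-water output factor ratios against their combined statistical uncertainty. The sketch below uses placeholder numbers, not the study's Monte Carlo results, with the stated ~0.4% per-point uncertainty.

```python
# Sketch of the water-equivalency check: ratio of output factors in a
# solid phantom to those in water, flagged against the ~0.4% per-point
# statistical uncertainty. Numbers are placeholders, not the study's.
import numpy as np

fields_mm = np.array([5, 6, 7, 8, 10])
opf_water = np.array([0.62, 0.68, 0.73, 0.77, 0.83])
opf_solid = np.array([0.64, 0.70, 0.74, 0.78, 0.83])  # e.g. denser PMMA
sigma = 0.004  # relative statistical uncertainty per output factor

ratio = opf_solid / opf_water
# Uncertainty of a ratio of two independent equally-uncertain values.
sigma_ratio = ratio * np.sqrt(2.0) * sigma
for f, r, s in zip(fields_mm, ratio, sigma_ratio):
    flag = "differs" if abs(r - 1.0) > 2.0 * s else "water-equivalent"
    print(f"{f:2d} mm: ratio {r:.3f} +/- {s:.3f} -> {flag}")
```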

Relevance: 20.00%

Abstract:

Introduction Due to their high spatial resolution, diodes are often used for small field relative output factor measurements. However, a field-size-specific correction factor [1] is required to correct for diode detector over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water. This is accomplished by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct this diode by placing an 'air cap' on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small field output factor measurements. Methods A water-tight 'cap' was constructed so that it could be placed over the upstream end of the diode. The cap could be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm) and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length from 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a similar manner to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated kQclin,Qmsr values [3], was used as the gold standard. Results The optimal air gap thickness required for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (kQclin,Qmsr = 1.000 at all field sizes) to within 1%. Discussion and conclusions The work of Charles et al. [2] has been verified experimentally. An existing commercial diode has been converted into a correction-less small field diode by the simple addition of an 'air cap'. The method of applying a cap to create the new diode makes the diode dual purpose, as without the cap it is still an unmodified electron diode.
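The selection of the optimal air gap can be illustrated as picking the thickness whose correction factors (reference OPF divided by diode OPF) stay closest to unity across field sizes. The readings below are hypothetical placeholders, not the measured data.

```python
# Sketch of the optimal air-gap selection: choose the thickness whose
# correction factors (reference OPF / diode OPF) stay closest to 1.0
# across field sizes. All readings are hypothetical placeholders.
import numpy as np

fields_mm = np.array([5, 10, 20, 30, 50])
opf_ref = np.array([0.65, 0.80, 0.90, 0.95, 1.00])  # gold-standard OPFs

# Hypothetical diode OPFs for each candidate air-gap thickness (mm).
diode_opf = {
    0.0: np.array([0.70, 0.83, 0.91, 0.95, 1.00]),  # over-response
    0.5: np.array([0.67, 0.81, 0.90, 0.95, 1.00]),
    1.0: np.array([0.65, 0.80, 0.90, 0.95, 1.00]),  # near correction-less
    1.5: np.array([0.63, 0.79, 0.90, 0.95, 1.00]),  # over-corrected
}

best = min(diode_opf,
           key=lambda gap: np.max(np.abs(opf_ref / diode_opf[gap] - 1.0)))
print(f"optimal air gap: {best} mm")
```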

Relevance: 20.00%

Abstract:

There’s a diagram that does the rounds online that neatly sums up the difference between the quality of equipment used in the studio to produce music, and the quality of the listening equipment used by the consumer...