Abstract:
Objective To describe the characteristics of patients presenting to Emergency Departments (EDs) within Queensland, Australia with injuries due to assault with a glass implement (‘glassing’) and to set this within the broader context of presentations due to alcohol-related violence. Methods Analysis of prospectively collected ED injury surveillance data collated by the Queensland Injury Surveillance Unit (QISU) between 1999 and 2011. Cases of injury due to alcohol-related violence were identified and analysed using coded fields supplemented with qualitative data contained within the injury description text. Descriptive statistics were used to assess the characteristics of injury presentations due to alcohol-related violence. Violence included interpersonal violence and aggression (verbal aggression and object violence). Results A total of 4629 cases were studied. The study population was predominantly male (72%) and aged 18 to 24 (36%), with males in this age group comprising more than a quarter of the study population (28%). Nine percent of alcohol-related assault injuries were a consequence of ‘glassing’. The home was the most common location for alcohol-related violence (31%) and alcohol-related ‘glassings’ (33%). Overall, the most common glass object involved was a bottle (75%); within licensed venues, however, drinking glasses (44%) and glass bottles (45%) were identified in roughly equal proportions. Conclusions Contrary to public perception generated by the media, ‘glassing’ incidents, particularly at licensed venues, constitute a relatively small proportion of all alcohol-related violence. The current study highlights the predominance of young men injured following alcohol-related violence, identifying a key population group at which to aim prevention strategies.
Abstract:
Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on their cause-and-effect relationships, making user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
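The dashboard itself is implemented in C++ with the Qt and SMILE libraries; as a rough illustration of the kind of cause-and-effect query it exposes to end-users, the following minimal Python sketch performs posterior inference by enumeration over a hypothetical two-node network (the variable names and probabilities are invented, and the SMILE API is not used here).

```python
# Minimal sketch (not the authors' Qt/SMILE tool): exact inference by enumeration
# over a toy cause-and-effect network, the kind of query a dashboard could expose
# without showing the underlying nodes and arcs.

p_cause = {True: 0.2, False: 0.8}               # prior P(Cause)
p_effect_given_cause = {True: 0.9, False: 0.1}  # P(Effect=True | Cause)

def posterior_cause(effect_observed: bool) -> float:
    """Return P(Cause=True | Effect=effect_observed)."""
    joint = {}
    for cause in (True, False):
        p_e = p_effect_given_cause[cause]
        likelihood = p_e if effect_observed else 1.0 - p_e
        joint[cause] = p_cause[cause] * likelihood
    return joint[True] / (joint[True] + joint[False])

if __name__ == "__main__":
    # An end-user enters evidence ("the effect was observed") and reads off the
    # updated belief about the cause, without ever seeing the graph structure.
    print(f"P(Cause | Effect observed) = {posterior_cause(True):.3f}")
```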
Abstract:
Cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. The rich sources of prior information in IGRT are incorporated into a hidden Markov random field model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk. The voxel labels are estimated using iterated conditional modes. The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom. The mean voxel-wise misclassification rate was 6.2%, with a Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
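As a rough illustration of the voxel-labelling step, the sketch below runs iterated conditional modes on a toy 2D image with a Gaussian intensity likelihood and a Potts smoothness prior. The tissue means, standard deviations and smoothing weight are invented values, and the spatial priors derived from planning contours are not modelled.

```python
# Illustrative sketch only: ICM labelling with a Gaussian likelihood and a Potts
# prior on a synthetic 2D image (not the paper's CBCT model or parameters).
import numpy as np

def icm_segment(image, means, sigmas, beta=1.5, n_iter=5):
    # Initialise each pixel with its maximum-likelihood label.
    labels = np.argmin([((image - m) / s) ** 2 for m, s in zip(means, sigmas)], axis=0)
    rows, cols = image.shape
    for _ in range(n_iter):
        for i in range(rows):
            for j in range(cols):
                best_label, best_energy = labels[i, j], np.inf
                for k, (m, s) in enumerate(zip(means, sigmas)):
                    # Negative log-likelihood of the observed intensity under label k.
                    energy = 0.5 * ((image[i, j] - m) / s) ** 2
                    # Potts prior: penalise disagreement with the 4-neighbourhood.
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            energy += beta * (labels[ni, nj] != k)
                    if energy < best_energy:
                        best_label, best_energy = k, energy
                labels[i, j] = best_label
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((32, 32), dtype=int)
    truth[8:24, 8:24] = 1                                     # a square "organ"
    img = rng.normal(50.0, 10.0, truth.shape)                 # background tissue
    img[truth == 1] = rng.normal(120.0, 10.0, int(truth.sum()))
    seg = icm_segment(img, means=[50.0, 120.0], sigmas=[10.0, 10.0])
    print("voxel agreement with ground truth:", round(float((seg == truth).mean()), 3))
```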
Abstract:
This paper introduces a parallel implementation of an agent-based model applied to electricity distribution grids. A fine-grained shared memory parallel implementation is presented, detailing the way the agents are grouped and executed on a multi-threaded machine, as well as the way the model is built (in a composable manner), which aids the parallelisation. Current results show a moderate speedup of 2.6, but improvements are expected by incorporating newer distributed or parallel ABM schedulers into this implementation. While domain-specific, this parallel algorithm can be applied to similarly structured ABMs (directed acyclic graphs).
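The grouping described above can be pictured with a small sketch: agents attached to a directed acyclic graph are grouped by topological depth and each group is stepped concurrently. The feeder topology and agent behaviour below are hypothetical, and Python threads are used only to illustrate the scheduling pattern (CPU-bound agents would need processes or a GIL-free runtime to obtain real speedups).

```python
# Hedged sketch, not the paper's implementation: agents on a DAG are grouped by
# topological level; agents within a level have no mutual dependencies, so each
# level can be stepped in parallel on shared memory.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Hypothetical distribution-grid topology: node -> downstream nodes.
edges = {
    "substation": ["feeder_a", "feeder_b"],
    "feeder_a": ["house_1"],
    "feeder_b": ["house_2", "house_3"],
    "house_1": [], "house_2": [], "house_3": [],
}

def topological_levels(graph):
    indegree = defaultdict(int)
    for src, dsts in graph.items():
        indegree.setdefault(src, 0)
        for d in dsts:
            indegree[d] += 1
    level = [n for n, deg in indegree.items() if deg == 0]
    levels = []
    while level:
        levels.append(level)
        nxt = []
        for n in level:
            for d in graph[n]:
                indegree[d] -= 1
                if indegree[d] == 0:
                    nxt.append(d)
        level = nxt
    return levels

def step_agent(name):
    # Placeholder for an agent's update rule (e.g. recompute local load/voltage).
    return f"{name} stepped"

with ThreadPoolExecutor(max_workers=4) as pool:
    for group in topological_levels(edges):
        print(list(pool.map(step_agent, group)))
```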
Abstract:
Many factors are identified as contributing to the high demand for emergency department (ED) care. Similarly, many initiatives have been taken to minimise the pressure placed on EDs. Many of these, however, do not consider patients’ opinions and motivations. The aim of this cross-sectional study was to understand patients’ perspectives and the reasons behind their decision to present to EDs. A total of 911 surveys were collected from patients presenting to eight Queensland EDs in 2011. Using principal component analysis, a six-item scale entitled "Best services at emergency departments" (α = 0.729) was extracted to measure patients’ opinions and perspectives. Independent t-tests were then conducted between various groups of ED users. The results suggest that repeat users were more likely than first-time users to view the ED as the best place for their condition (median 10.73 vs 11.56, p<0.001). Moreover, patients who made the decision to present by themselves had a more favourable perception of ED services than those for whom the decision was made by, or involved, others (median 11.38 vs 10.80, p=0.003). Method of arrival did not affect respondents’ perception of the ED (11.13 vs 11.00, p=0.65). The results of this research indicate that patients’ perception of the ED as the best and most appropriate place for attention to their medical condition plays an important role in their decision to present and to keep returning to the ED. Understanding patients’ reasons and decisions enhances the success of planning and implementing alternative services to manage the demand for ED services.
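The scale construction and group comparisons described above follow a standard pattern; the sketch below reproduces it on synthetic data (six Likert-type items and a hypothetical repeat-user flag), using a hand-coded Cronbach's alpha and SciPy's independent t-test. The principal component extraction itself is omitted, and none of the study's survey data is used.

```python
# Illustrative sketch with synthetic data, not the study's 911 surveys: summated
# scale score, internal-consistency check, and an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
items = rng.integers(1, 6, size=(200, 6)).astype(float)   # 6 Likert-type items
repeat_user = rng.integers(0, 2, size=200).astype(bool)   # hypothetical grouping

def cronbach_alpha(item_scores):
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

scale_score = items.sum(axis=1)                            # six-item scale score
t, p = stats.ttest_ind(scale_score[repeat_user], scale_score[~repeat_user])
print(f"alpha = {cronbach_alpha(items):.3f}, t = {t:.2f}, p = {p:.3f}")
```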
Abstract:
Chemical vapor deposition (CVD) is widely utilized to synthesize graphene with controlled properties for many applications, especially when continuous films over large areas are required. Although hydrocarbons such as methane are quite efficient precursors for CVD at high temperature (∼1000 °C), finding less explosive and safer carbon sources is considered beneficial for the transition to large-scale production. In this work, we investigated the CVD growth of graphene using ethanol, which is a harmless and readily processable carbon feedstock that is expected to provide favorable kinetics. We tested a wide range of synthesis conditions (i.e., temperature, time, gas ratios), and on the basis of systematic analysis by Raman spectroscopy, we identified the optimal parameters for producing highly crystalline graphene with different numbers of layers. Our results demonstrate the importance of high temperature (1070 °C) for ethanol CVD and emphasize the significant effects that hydrogen and water vapor, coming from the thermal decomposition of ethanol, have on the crystal quality of the synthesized graphene.
Abstract:
The diagnostics of mechanical components operating in transient conditions is still an open issue, in both research and industry. Indeed, the signal processing techniques developed to analyse stationary data are not applicable, or suffer a loss of effectiveness, when applied to signals acquired in transient conditions. In this paper, an original signal processing tool (named EEMED), which can be used for mechanical component diagnostics under any operating condition and noise level, is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition (EMD), Minimum Entropy Deconvolution (MED) and the analytical approach of the Hilbert transform. The proposed tool supplies diagnostic information on the basis of experimental vibrations measured in transient conditions. The tool was originally developed to detect localized faults on bearings installed in high-speed train traction equipment, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on spectral kurtosis or envelope analysis, which have until now been the benchmark for bearing diagnostics.
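EEMED itself combines EMD, MED and the Hilbert transform; the sketch below reproduces only the final Hilbert-envelope step on a synthetic impulsive vibration signal, with an assumed sampling rate and an invented fault frequency, to show how a bearing fault signature appears in the envelope spectrum.

```python
# Simplified sketch of the envelope step of such a pipeline (EMD and MED are not
# reproduced here); sampling rate and fault frequency are illustrative values.
import numpy as np
from scipy.signal import hilbert

fs = 20_000.0                                   # sampling frequency [Hz], assumed
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 87.0                               # hypothetical fault frequency [Hz]

carrier = np.sin(2 * np.pi * 3_000 * t)                        # structural resonance
impulses = (np.sin(2 * np.pi * fault_freq * t) > 0.99).astype(float)
signal = impulses * carrier + 0.1 * np.random.default_rng(0).normal(size=t.size)

envelope = np.abs(hilbert(signal))              # magnitude of the analytic signal
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency [Hz]:", freqs[spectrum.argmax()])   # ~ fault_freq
```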
Abstract:
The signal processing techniques developed for the diagnostics of mechanical components operating in stationary conditions are often not applicable, or suffer a loss of effectiveness, when applied to signals measured in transient conditions. In this chapter, an original signal processing tool is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition, Minimum Entropy Deconvolution and the analytical approach of the Hilbert transform. The tool has been developed to detect localized faults on bearings in the traction systems of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on envelope analysis or spectral kurtosis, which have until now been the benchmark for bearing diagnostics.
Abstract:
In the field of rolling element bearing diagnostics, envelope analysis has in recent years gained a leading role among digital signal processing techniques. The original constraint of constant operating speed has been relaxed by combining this technique with computed order tracking, which resamples signals at constant angular increments. In this way, the field of application of the technique has been extended to cases in which small speed fluctuations occur, maintaining high effectiveness and efficiency. To make the algorithm suitable for all industrial applications, the constraint on speed has to be removed completely. Indeed, in many applications the coincidence of high bearing loads, and therefore high diagnostic capability, with acceleration-deceleration phases is a further incentive in this direction. This chapter presents a procedure for applying envelope analysis to speed transients. The effect of load variation on the proposed technique is also qualitatively addressed.
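The computed order tracking step mentioned above can be sketched in a few lines: a signal sampled uniformly in time is resampled at constant angular increments using the shaft angle, here obtained by integrating an assumed linear run-up speed profile, so that an order-locked component appears at a fixed order after resampling.

```python
# Minimal sketch of computed order tracking on synthetic data (assumed run-up
# profile and order-5 component); not the chapter's full procedure.
import numpy as np

fs = 10_000.0                                   # time-domain sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
speed_hz = 10.0 + 15.0 * t                      # hypothetical run-up: 10 -> 40 Hz
angle = 2 * np.pi * np.cumsum(speed_hz) / fs    # shaft angle [rad] versus time
vibration = np.sin(5 * angle)                   # component locked to shaft order 5

samples_per_rev = 256
uniform_angle = np.arange(0.0, angle[-1], 2 * np.pi / samples_per_rev)
resampled = np.interp(uniform_angle, angle, vibration)   # constant angular increments

orders = np.fft.rfftfreq(resampled.size, d=1.0 / samples_per_rev)
spectrum = np.abs(np.fft.rfft(resampled))
print("dominant order:", round(float(orders[spectrum.argmax()]), 2))   # ~ 5
```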
Abstract:
Objective The 2010–2011 Queensland floods resulted in the most deaths from a single flood event in Australia since 1916. This article analyses the information on these deaths for comparison with those from previous floods in modern Australia, in an attempt to identify the factors that contributed to them. Haddon's Matrix, originally designed for the prevention of road trauma, offers a framework for understanding the interplay between contributing factors and facilitates a clearer understanding of the varied strategies required to ensure people's safety in particular flood types. Methods Public reports and flood-relevant literature were searched using the key words ‘flood’, ‘fatality’, ‘mortality’, ‘death’, ‘injury’ and ‘victim’ through Google Scholar, PubMed, ProQuest and EBSCO. Data relating to reported deaths during the 2010–2011 Queensland floods, and relevant data on previous Australian flood fatalities (1997–2009), were collected from these available sources. These sources were also used to identify contributing factors. Results There were 33 deaths directly attributed to the event, of which 54.5% occurred when people were swept away in a flash flood on 10 January 2011. A further 15.1% of fatalities were caused by inappropriate behaviours. This differs from previous floods in modern Australia, where over 90% of deaths were related to choices made by individuals. There is no single reason why people drown in floods, but rather a complex interplay of factors. Conclusions The present study and its integration of research findings and conceptual frameworks may assist governments and communities to develop policies and strategies to prevent flood injuries and fatalities.
Abstract:
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision support for radiation therapists, assisting them to make correct inferences about the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific sub-regions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationships between the many interacting factors, such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation for developing better decision-support strategies and automated correction algorithms for IGRT.
Abstract:
This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics, and to investigate their usefulness as predictors of quality assurance (QA) success and failure. A total of 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements, and correlations were investigated. The mean field area factor provided a threshold field size (5 cm², equivalent to a 2.2 × 2.2 cm square field), below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests for plan accuracy, which may help minimise the time spent on QA assessments of treatments that are unlikely to pass.
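As an illustration of the simplest of these metrics, the sketch below computes a mean field area per beam from hypothetical MLC leaf gaps and flags beams falling under the reported 5 cm² threshold; the leaf width and gap values are invented, and the definition used here is an assumption rather than the exact TADA implementation.

```python
# Hedged sketch of a mean-field-area style metric on made-up aperture data.
import numpy as np

LEAF_WIDTH_CM = 0.5   # assumed projected leaf width at isocentre

def mean_field_area(control_points):
    """control_points: list of arrays of per-leaf-pair gaps [cm] for one beam."""
    areas = [LEAF_WIDTH_CM * float(np.sum(gaps)) for gaps in control_points]
    return float(np.mean(areas))

beams = {
    "beam_1": [np.full(10, 1.5), np.full(10, 1.2)],   # larger segments
    "beam_2": [np.full(10, 0.3), np.full(10, 0.2)],   # very small apertures
}

for name, cps in beams.items():
    area = mean_field_area(cps)
    flag = "flag: below 5 cm2 threshold" if area < 5.0 else "ok"
    print(f"{name}: mean field area = {area:.1f} cm2 -> {flag}")
```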
Abstract:
This study investigates the variation of photon field penumbra shape with initial electron beam diameter, for very narrow beams. A Varian Millennium MLC (Varian Medical Systems, Palo Alto, USA) and a Brainlab m3 microMLC (Brainlab AG, Feldkirchen, Germany) were used, with one Varian iX linear accelerator, to produce fields that were (nominally) 0.20 cm across. Dose profiles for these fields were measured using radiochromic film and compared with the results of simulations completed using BEAMnrc and DOSXYZnrc, where the initial electron beam was set to FWHM = 0.02, 0.10, 0.12, 0.15, 0.20 and 0.50 cm. Increasing the electron-beam FWHM produced increasing occlusion of the photon source by the closely spaced collimator leaves and resulted in broadening of the simulated profile widths from 0.26 to 0.64 cm for the MLC and from 0.12 to 0.43 cm for the microMLC. Comparison with measurement data suggested that the electron spot size in the clinical linear accelerator was between FWHM = 0.10 and 0.15 cm, encompassing the result of our previous output-factor based work, which identified a FWHM of 0.12 cm. Investigation of narrow-beam penumbra variation has been found to be a useful procedure, with results varying noticeably with linear accelerator spot size and allowing FWHM estimates obtained using other methods to be verified.
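The blurring effect described above can be illustrated with a purely geometric sketch: an ideal 0.20 cm slit profile is convolved with Gaussian source distributions of increasing FWHM and the resulting profile width is measured. This ignores collimator transmission, scatter and source occlusion, so the numbers differ from the BEAMnrc/DOSXYZnrc results quoted in the abstract; it only shows the qualitative broadening trend.

```python
# Idealised sketch (not a Monte Carlo simulation): Gaussian source blurring of a
# nominal 0.20 cm field, reporting the full width at half maximum of each profile.
import numpy as np

x = np.arange(-1.0, 1.0, 0.001)                 # off-axis position [cm]
ideal = (np.abs(x) <= 0.10).astype(float)       # ideal 0.20 cm wide field

def blurred_fwhm(source_fwhm_cm):
    sigma = source_fwhm_cm / 2.355               # FWHM -> standard deviation
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    profile = np.convolve(ideal, kernel / kernel.sum(), mode="same")
    profile /= profile.max()
    return float(np.ptp(x[profile >= 0.5]))      # width at the 50% level

for fwhm in (0.02, 0.10, 0.12, 0.15, 0.20, 0.50):
    print(f"source FWHM {fwhm:.2f} cm -> profile FWHM {blurred_fwhm(fwhm):.2f} cm")
```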
Abstract:
To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode, and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large: for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.
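The sensitivity check described above reduces to simple arithmetic once the doses are available: the perturbation is the percentage change in detector dose caused by the air gap, and its spread across source-parameter combinations indicates how (in)sensitive it is. The dose values in the sketch below are placeholders chosen only to mirror the magnitudes quoted in the abstract, not EGSnrc results.

```python
# Arithmetic sketch with placeholder doses (not EGSnrc output): quantify the
# air-induced dose perturbation for each source configuration and its spread.
doses = {
    # (energy [MeV], spot FWHM): (dose without air, dose with 2 mm air), normalised
    (5.5, 0.7): (1.000, 0.695),
    (5.5, 2.2): (1.000, 0.700),
    (7.0, 0.7): (1.000, 0.690),
    (7.0, 2.2): (1.000, 0.705),
}

perturbations = {
    params: 100.0 * (d_ref - d_air) / d_ref
    for params, (d_ref, d_air) in doses.items()
}
for (energy, fwhm), p in perturbations.items():
    print(f"E = {energy} MeV, spot FWHM = {fwhm}: dose reduction {p:.1f}%")

spread = max(perturbations.values()) - min(perturbations.values())
print(f"spread across source parameters: {spread:.1f} percentage points")
```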
Abstract:
Nuclei and electrons in condensed matter and/or molecules are usually entangled, due to the prevailing (mainly electromagnetic) interactions. However, the "environment" of a microscopic scattering system (e.g. a proton) causes ultrafast decoherence, thus making atomic and/or nuclear entanglement effects not directly accessible to experiments. Nevertheless, our neutron Compton scattering experiments from protons (H-atoms) in condensed systems and molecules have a characteristic collisional time of about 100–1000 attoseconds. The quantum dynamics of an atom in this ultrashort, but finite, time window is governed by non-unitary time evolution due to the aforementioned decoherence. Unexpectedly, recent theoretical investigations have shown that decoherence can also have the following energetic consequences. Disentangling two subsystems A and B of a quantum system AB is tantamount to erasure of quantum phase relations between A and B. This erasure is widely believed to be an innocuous process, which e.g. does not affect the energies of A and B. However, two independent groups recently proved that disentangling two systems within a sufficiently short time interval causes an increase in their energies. This is also derivable from the simplest Lindblad-type master equation for one particle subject to pure decoherence. Our neutron-proton scattering experiments with H2 molecules provide for the first time experimental evidence of this effect. Our results reveal that the neutron-proton collision, leading to the cleavage of the H-H bond on the attosecond timescale, is accompanied by a larger energy transfer (by about 2–3%) than conventional theory predicts. Preliminary results from current investigations show qualitatively the same effect in neutron-deuteron Compton scattering from D2 molecules. We interpret the experimental findings by treating the neutron-proton (or neutron-deuteron) collisional system as an entangled open quantum system subject to fast decoherence caused by its "environment" (i.e., the two electrons plus the second nucleus of H2 or D2). The presented results seem to be of generic nature and may have considerable consequences for various processes in condensed matter and molecules, e.g. in elementary chemical reactions.