216 results for Homogeneous phantom


Relevance: 10.00%

Abstract:

Detailed investigation of an intermediate member of the reddingite–phosphoferrite series, using infrared and Raman spectroscopy, scanning electron microscopy and electron microprobe analysis, has been carried out on a homogeneous sample from a lithium-bearing pegmatite, the Cigana mine, near Conselheiro Pena, Minas Gerais, Brazil. The determined formula is (Mn1.60Fe1.21Ca0.01Mg0.01)∑2.83(PO4)2.12⋅(H2O2.85F0.01)∑2.86, indicating predominance of the reddingite component. Raman spectroscopy coupled with infrared spectroscopy supports the concept of phosphate, hydrogen phosphate and dihydrogen phosphate units in the structure of reddingite–phosphoferrite. Infrared and Raman bands attributed to water and hydroxyl stretching modes are identified. Vibrational spectroscopy adds useful information on the molecular structure of reddingite–phosphoferrite.

Relevance: 10.00%

Abstract:

We consider a two-dimensional space-fractional reaction diffusion equation with a fractional Laplacian operator and homogeneous Neumann boundary conditions. The finite volume method is used with the matrix transfer technique of Ilić et al. (2006) to discretise in space, yielding a system of equations that requires the action of a matrix function to be evaluated at each timestep. Rather than form this matrix function explicitly, we use Krylov subspace techniques to approximate the action of this matrix function. Specifically, we apply the Lanczos method, after a suitable transformation of the problem to recover symmetry. To improve the convergence of this method, we utilise a preconditioner that deflates the smallest eigenvalues from the spectrum. We demonstrate the efficiency of our approach for a fractional Fisher’s equation on the unit disk.
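The Lanczos step described above, approximating the action of a matrix function on a vector as required by the matrix transfer technique, can be sketched as follows. This is a generic, illustrative implementation only: it uses a plain 1D Laplacian as a stand-in for the finite volume matrix, applies no deflation preconditioning, and all names and parameters are assumptions rather than the authors' code.

import numpy as np

def lanczos_matfunc(A_mv, b, f, m=30):
    # Approximate f(A) @ b for a symmetric matrix A, given only its action A_mv(v).
    # f is a scalar function applied to eigenvalues, e.g. lambda s: s**(alpha/2).
    n = b.shape[0]
    V = np.zeros((n, m))
    diag = np.zeros(m)        # diagonal of the Lanczos tridiagonal matrix T_m
    off = np.zeros(m - 1)     # off-diagonal of T_m

    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A_mv(V[:, j])
        diag[j] = V[:, j] @ w
        w = w - diag[j] * V[:, j]
        if j > 0:
            w = w - off[j - 1] * V[:, j - 1]
        if j < m - 1:
            off[j] = np.linalg.norm(w)
            if off[j] < 1e-12:                      # happy breakdown: subspace is exact
                V, diag, off = V[:, :j + 1], diag[:j + 1], off[:j]
                break
            V[:, j + 1] = w / off[j]

    # T_m is small, so f(T_m) e_1 is evaluated via a full eigendecomposition.
    T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])
    return beta0 * (V @ fT_e1)

# Stand-in operator: 1D Laplacian stencil, with f encoding a fractional power
# L^(alpha/2) as in the matrix transfer technique (illustrative only).
n, frac_alpha = 200, 1.5
A_mv = lambda v: 2.0 * v - np.r_[0.0, v[:-1]] - np.r_[v[1:], 0.0]
u = np.random.rand(n)
y = lanczos_matfunc(A_mv, u, lambda s: np.maximum(s, 0.0) ** (frac_alpha / 2))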

Relevance: 10.00%

Abstract:

A novel multiple regression method (RM) is developed to predict identity-by-descent probabilities at a locus L (IBDL) among individuals without pedigree, given information on surrounding markers and population history. These IBDL probabilities are a function of the increase in linkage disequilibrium (LD) generated by drift in a homogeneous population over generations. Three parameters are sufficient to describe population history: effective population size (Ne), number of generations since foundation (T), and marker allele frequencies among founders (p). IBDL probabilities are used in a simulation study to map a quantitative trait locus (QTL) via variance component estimation. RM is compared to a coalescent method (CM) in terms of power and robustness of QTL detection. Differences between RM and CM are small but significant. For example, RM is more powerful than CM in dioecious populations, but not in monoecious populations. Moreover, RM is more robust than CM when marker phases are unknown, when there is complete LD among founders or when Ne is mis-specified, and less robust when p is mis-specified. CM utilises all marker haplotype information, whereas RM utilises the information contained in each individual marker and in all possible marker pairs, but not in higher-order interactions. RM consists of a family of models encompassing four different population structures and two ways of using marker information, which contrasts with the single model that must cater for all possible evolutionary scenarios in CM.
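For orientation, the level of linkage disequilibrium that drift generates in a closed homogeneous population is often summarised by Sved's (1971) approximation for the expected squared correlation between alleles at two loci separated by recombination fraction c,

\[ E[r^2] \approx \frac{1}{1 + 4 N_e c}, \]

which makes the dependence on effective population size explicit. The regression method summarised above additionally uses the number of generations since foundation (T) and the founder allele frequencies (p), so this formula is only an illustrative special case, not the model used in the paper.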

Relevance: 10.00%

Abstract:

The existence of the Macroscopic Fundamental Diagram (MFD), which relates network space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area-wide traffic state estimation utilizing the MFD concept have been reported. The key requirement for a well-defined MFD is the homogeneity of the area-wide traffic condition, which cannot be universally expected in the real world. For the practical application of the MFD concept, several researchers have identified factors influencing network homogeneity. However, they did not explicitly take into account drivers’ behaviour under real-time information provision, which has a significant impact on the shape of the MFD. This research aims to demonstrate the impact of drivers’ route choice behaviour on network performance by employing the MFD as a measurement. A microscopic simulation is chosen as the experimental platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, as well as by varying the route choice parameters, various scenarios are simulated in order to investigate how drivers’ adaptation to traffic congestion influences the network performance and the MFD shape. This study confirmed the impact of information provision on the MFD shape and highlighted the significance of the route choice parameter setting as an influencing factor in MFD analysis.
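The abstract does not state which route choice model the simulation uses, but a common choice in microscopic simulators is a multinomial logit over route costs, in which a single sensitivity parameter plays the role of the "route choice parameter" varied in the scenarios. The sketch below is purely illustrative; the function name and the numbers are assumptions.

import numpy as np

def logit_route_choice(route_costs, theta):
    # Multinomial-logit probabilities over candidate routes; theta is the
    # cost-sensitivity (route choice) parameter.
    u = -theta * np.asarray(route_costs, dtype=float)
    u -= u.max()                      # for numerical stability
    p = np.exp(u)
    return p / p.sum()

# An informed driver re-evaluating two routes (travel times in minutes,
# illustrative only): low vs. high sensitivity to the cost difference.
print(logit_route_choice([18.0, 15.0], theta=0.1))   # ~[0.43, 0.57]
print(logit_route_choice([18.0, 15.0], theta=1.0))   # ~[0.05, 0.95]

With a small sensitivity the informed drivers spread almost evenly over the two routes; with a large sensitivity nearly all of them switch to the faster route, which is the kind of behavioural shift that reshapes the MFD in the scenarios above.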

Relevance: 10.00%

Abstract:

Detailed spectroscopic and chemical investigation of matioliite, including infrared and Raman spectroscopy, scanning electron microscopy and electron probe microanalysis, has been carried out on homogeneous samples from the Gentil pegmatite, Mendes Pimentel, Minas Gerais, Brazil. The chemical composition is (wt.%): FeO 2.20, CaO 0.05, Na2O 1.28, MnO 0.06, Al2O3 39.82, P2O5 42.7, MgO 4.68, F 0.02 and H2O 9.19; total 100.00. The mineral crystallizes in the monoclinic crystal system, space group C2/c, with a = 25.075(1) Å, b = 5.0470(3) Å, c = 13.4370(7) Å, β = 110.97(3)°, V = 1587.9(4) Å3, Z = 4. Raman spectroscopy coupled with infrared spectroscopy supports the concept of phosphate, hydrogen phosphate and dihydrogen phosphate units in the structure of matioliite. Infrared and Raman bands attributed to water and hydroxyl stretching modes are identified. Vibrational spectroscopy adds useful information on the molecular structure of matioliite.

Relevance: 10.00%

Abstract:

Vibration Based Damage Identification Techniques, which use modal data or their functions, have received significant research interest in recent years due to their ability to detect damage in structures and hence contribute towards structural safety. In this context, Strain Energy Based Damage Indices (SEDIs), based on modal strain energy, have been successful in localising damage in structures made of homogeneous materials such as steel. However, their application to reinforced concrete (RC) structures needs further investigation due to the significant difference in the prominent damage type, the flexural crack. The work reported in this paper is an integral part of a comprehensive research program to develop and apply effective strain energy based damage indices to assess damage in reinforced concrete flexural members. This research program established (i) a suitable flexural crack simulation technique, (ii) four improved SEDIs and (iii) programmable sequential steps to minimise the effects of noise. This paper evaluates and ranks the four newly developed SEDIs and seven existing SEDIs for their ability to detect and localise flexural cracks in RC beams. Based on the results of the evaluations, it recommends suitable SEDIs for use with single and multiple vibration modes.
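For reference, one classical strain-energy damage index for a beam-like member is the Stubbs index, in which element j is scored for mode i by comparing damaged (starred) and undamaged mode-shape curvatures:

\[ \beta_{ij} = \frac{\left(\int_j [\phi_i^{*\prime\prime}(x)]^2\,dx + \int_0^L [\phi_i^{*\prime\prime}(x)]^2\,dx\right)\int_0^L [\phi_i^{\prime\prime}(x)]^2\,dx}{\left(\int_j [\phi_i^{\prime\prime}(x)]^2\,dx + \int_0^L [\phi_i^{\prime\prime}(x)]^2\,dx\right)\int_0^L [\phi_i^{*\prime\prime}(x)]^2\,dx}, \]

where the inner integrals run over element j, values above a normalised threshold flag the element as damaged, and multiple modes are combined by summing numerators and denominators. The four improved SEDIs developed in the research program are refinements of indices of this kind adapted to flexural cracking in RC members; their exact forms are given in the paper, not reproduced here.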

Relevance: 10.00%

Abstract:

Knowledge Management (KM) is a process that focuses on knowledge-related activities to facilitate knowledge creation, capture, transformation and use, with the ultimate aim of leveraging organisations’ intellectual capital to achieve organisational objectives. Organisational culture and climate have been identified as major catalysts of knowledge creation and sharing, and hence are considered important dimensions of KM research. The fragmented and hierarchical nature of the construction industry illustrates the difficulty it has in operating in a co-ordinated and homogeneous way when dealing with knowledge-related issues such as research and development, training and innovation. The culture and climate of organisations operating within the construction industry are profoundly shaped by the long-established characteristics of the industry, whilst also being influenced by changes within the sector. Meanwhile, the special project-based structure of construction organisations poses additional challenges for knowledge production. The study reported in this paper addresses the impact of organisational culture and climate on the intensity of KM activities within construction organisations, with specific focus on the managerial activities that help to manage these challenges and to facilitate KM. A series of semi-structured interviews was undertaken to investigate the KM activities of contractors operating in Hong Kong. The analysis of the qualitative data revealed that leadership on KM, innovation management, communication management and IT development were key factors that impact positively on KM activities within the organisations under investigation.

Relevance: 10.00%

Abstract:

Background: The growing proportion of older adults in Australia is predicted to reach 23% of the population by 2030. Accordingly, an increasing number of older drivers, and of fatal crashes involving these drivers, can also be expected. While the cognitive and physiological limitations of ageing and their road safety implications have been widely documented, research has generally considered older drivers as a homogeneous group. Knowledge of age-related crash trends within the older driver group itself is currently limited. Objective: The aim of this research was to identify age-related differences in serious road crashes of older drivers. This was achieved by comparing crash characteristics between older and younger drivers and between sub-groups of older drivers. Particular attention was paid to serious crashes (crashes resulting in hospitalisation and fatalities) as they place the greatest burden on the Australian health system. Method: Using Queensland crash data, a total of 191,709 crashes of drivers of all ages (17–80+) over a 9-year period were analysed. Crash patterns of drivers aged 17–24, 25–39, 40–49, 50–59, 60–69, 70–79 and 80+ were compared in terms of crash severity (e.g., fatal), at-fault levels, traffic control measures (e.g., stop signs) and road features (e.g., intersections). Crashes of older driver sub-groups (60–69, 70–79, 80+) were also compared to those of middle-aged drivers (40–49 and 50–59 combined, identified as the safest driving cohort) with respect to crash-related traffic control features and other factors (e.g., speed). Confounding factors including speed and crash nature (e.g., sideswipe) were controlled for. Results and discussion: Results indicated that patterns of serious crashes, as a function of crash severity, at-fault levels, road conditions and traffic control measures, differed significantly between age groups. As a group, older drivers (60+) represented the greatest proportion of crashes resulting in fatalities and hospitalisation, as well as those involving uncontrolled intersections and failure to give way. The opposite was found for middle-aged drivers, although they had the highest proportion of alcohol- and speed-related crashes when compared to older drivers. Among all older drivers, those aged 60–69 were least likely to be involved in or the cause of crashes, but most likely to crash at interchanges and as a result of driving while fatigued or after consuming alcohol. Drivers aged 70–79 represented a mid-range level of crash involvement and culpability, and were most likely to crash at stop and give way signs. Drivers aged 80 years and beyond were most likely to be seriously injured or killed in, and at fault for, crashes, and had the greatest number of crashes at both conventional and circular intersections. Overall, our findings highlight the heterogeneity of older drivers’ crash patterns and suggest that age-related differences must be considered in measures designed to improve older driver safety.

Relevance: 10.00%

Abstract:

Speaker diarization is the process of annotating an input audio with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and the systems developed throughout this work are trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains, including telephone conversations and meetings audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving the detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
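The segmentation decisions discussed above ask, at each candidate boundary, whether two adjacent blocks of acoustic features are better modelled by one speaker or two. A widely used baseline for that decision is the delta-BIC criterion with full-covariance Gaussians; the sketch below shows that baseline only, not the heuristic rules or the Bayes-factor criterion proposed in this work, and the names and penalty weight are illustrative assumptions.

import numpy as np

def delta_bic(X, Y, penalty_lambda=1.0):
    # Classical full-covariance Gaussian delta-BIC score for a candidate
    # speaker-change point between adjacent feature segments X and Y
    # (n_frames x n_dims). Positive scores favour a speaker change.
    Z = np.vstack([X, Y])
    n_x, n_y, n_z = len(X), len(Y), len(Z)
    d = Z.shape[1]

    def logdet_cov(A):
        _, logdet = np.linalg.slogdet(np.cov(A, rowvar=False))
        return logdet

    # Log-likelihood gain of modelling X and Y separately instead of jointly,
    # minus a model-complexity penalty for the extra Gaussian.
    gain = 0.5 * (n_z * logdet_cov(Z) - n_x * logdet_cov(X) - n_y * logdet_cov(Y))
    penalty = 0.5 * penalty_lambda * (d + 0.5 * d * (d + 1)) * np.log(n_z)
    return gain - penalty

# Two synthetic "speakers" with shifted means: the boundary scores positive.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, (200, 12)),
                   rng.normal(1.5, 1.0, (200, 12))])
print(delta_bic(feats[:200], feats[200:]))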

Relevance: 10.00%

Abstract:

This paper describes an empirical study to test the proposition that all construction contract bidders are homogeneous, i.e. that they can be treated as behaving collectively in an identical (statistical) manner. Examination of previous analyses of bidding data reveals a flaw in the method of standardising bids across contracts of different sizes, and a new procedure is proposed which involves the estimation of a contract datum. Three independent sets of bidding data were then subjected to this procedure and estimates of the necessary distributional parameters were obtained. These were then tested against the bidder homogeneity assumption, leading to the conclusion that the assumption may be appropriate for the shape parameter of a three-parameter log-normal distribution, but not for its scale and location parameters.
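For reference, the three-parameter log-normal referred to above has density

\[ f(x;\mu,\sigma,\gamma) = \frac{1}{(x-\gamma)\,\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln(x-\gamma)-\mu)^2}{2\sigma^2}\right), \qquad x > \gamma, \]

where γ is the location (threshold) parameter, e^μ the scale and σ the shape. The conclusion above amounts to saying that bidders may plausibly share a common σ while differing in location and scale.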

Relevance: 10.00%

Abstract:

The aim of this work is to develop software that is capable of back-projecting primary fluence images obtained from EPID measurements through phantom and patient geometries in order to calculate 3D dose distributions. In the first instance, we aim to develop a tool for pre-treatment verification in IMRT. In our approach, a Geant4 application is used to back-project primary fluence values from each EPID pixel towards the source. Each beam is considered to be polyenergetic, with a spectrum obtained from Monte Carlo calculations for the LINAC in question. At each step of the ray-tracing process, the energy-differential fluence is corrected for attenuation and beam divergence. Subsequently, the TERMA is calculated and accumulated into an energy-differential 3D TERMA distribution. This distribution is then convolved with monoenergetic point spread kernels, generating energy-differential 3D dose distributions. The resulting dose distributions are accumulated to yield the total dose distribution, which can then be used for pre-treatment verification of IMRT plans. Preliminary results were obtained for a test EPID image comprising 100 × 100 pixels of unity fluence. Back-projection of this field into a 30 cm × 30 cm × 30 cm water phantom was performed, with TERMA distributions obtained in approximately 10 min (running on a single core of a 3 GHz processor). Point spread kernels for monoenergetic photons in water were calculated using a separate Geant4 application. Following convolution and summation, the resulting 3D dose distribution produced familiar build-up and penumbral features. In order to validate the dose model, we will use EPID images recorded without any attenuating material in the beam for a number of MLC-defined square fields. The dose distributions in water will be calculated and compared to TPS predictions.
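A single back-projected ray of the process described above can be sketched as follows: starting from an EPID pixel and stepping towards the source, the energy-differential fluence is corrected for the attenuation it has undergone and for beam divergence, and TERMA (mass attenuation coefficient times energy fluence) is accumulated per energy bin. This is a water-only, one-ray illustration under assumed names, numbers and geometry, not the Geant4 implementation used in the paper.

import numpy as np

def back_project_terma(pixel_fluence, energies, spectrum, mu, mu_over_rho,
                       step_cm, n_steps, source_to_pixel_cm):
    # One back-projected ray: undo the attenuation the primary fluence has
    # undergone, apply the inverse-square divergence correction, then
    # accumulate the energy-differential TERMA = (mu/rho) * energy fluence.
    # energies: photon energy bins (MeV); spectrum: relative bin weights;
    # mu: linear attenuation coefficients per bin (1/cm);
    # mu_over_rho: mass attenuation coefficients per bin (cm^2/g).
    fluence_E = pixel_fluence * np.asarray(spectrum, dtype=float)
    terma = np.zeros((n_steps, len(energies)))
    for i in range(n_steps):
        dist = source_to_pixel_cm - (i + 1) * step_cm   # step position from source
        fluence_E = fluence_E * np.exp(mu * step_cm)    # undo attenuation over step
        divergence = (source_to_pixel_cm / dist) ** 2   # inverse-square correction
        energy_fluence = fluence_E * divergence * energies      # MeV/cm^2 per bin
        terma[i] = mu_over_rho * energy_fluence                 # MeV/g per bin
    return terma    # energy-differential TERMA along the ray

# Illustrative two-bin spectrum for one unit-fluence pixel (water coefficients
# are approximate values for 2 MeV and 6 MeV photons).
energies = np.array([2.0, 6.0])
spectrum = np.array([0.7, 0.3])
mu = np.array([0.049, 0.028])
mu_over_rho = np.array([0.049, 0.028])
print(back_project_terma(1.0, energies, spectrum, mu, mu_over_rho,
                         step_cm=0.5, n_steps=60, source_to_pixel_cm=150.0).shape)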

Relevance: 10.00%

Abstract:

This study aims to open up the black box of the boardroom by directly observing directors’ interactions during meetings to better understand board processes. Design/methodology/approach: We analyse videotaped observations of board meetings at two Australian companies to develop insights into what directors do in meetings and how they participate in decision-making processes. The direct observations are triangulated with semi-structured interviews, mini-surveys and document reviews. Findings: Our analyses lead to two key findings: (i) while board meetings appear similar at a surface level, boardroom interactions vary significantly at a deeper level (i.e. board members participate differently during different stages of discussions), and (ii) factors at multiple levels of analysis explain differences in interaction patterns, revealing the complex and nested nature of boardroom discussions. Research implications: By documenting significant intra- and inter-board meeting differences, our study (i) challenges the widespread notion of board meetings as rather homogeneous and monolithic, (ii) points towards agenda items as a new unit of analysis, and (iii) highlights the need for more multi-level analyses in board settings. Practical implications: While policy makers have been largely occupied with the “right” board composition, our findings suggest that decision outcomes and role execution could be affected by interactions at the board level. Differences in board meeting styles might explain prior ambiguous board structure-performance results, underscoring the need for greater normative consideration of how boards do their work. Originality/value: Our study complements existing research on boardroom dynamics and provides a systematic account of director interactions during board meetings.

Relevance: 10.00%

Abstract:

Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can impact on feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field-based scenarios where vibration, knocks and pressure change affect a robotic vehicle, maintaining an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method of recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between a stereo camera pair to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration which seamlessly integrates into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
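A minimal sketch of the residual at the heart of such a partitioned bundle adjustment is shown below: the left-camera pose and the left-to-right stereo extrinsic are both free parameters, so that minimising the stacked reprojection residuals over many points and frames re-estimates the calibration together with the trajectory. The 4x4 homogeneous-transform parameterisation and all names are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def stereo_reproj_residuals(point_w, pose_left, T_left_right, K_left, K_right,
                            obs_left, obs_right):
    # Reprojection residuals (pixels) of one world point seen by a stereo pair.
    # pose_left: 4x4 world-to-left-camera transform (an optimisation variable);
    # T_left_right: 4x4 transform mapping right-camera coordinates into the
    # left-camera frame (the stereo extrinsic, also an optimisation variable).
    def project(K, T_cam_world, X_w):
        X_c = (T_cam_world @ np.append(X_w, 1.0))[:3]   # point in camera frame
        uv = K @ X_c
        return uv[:2] / uv[2]

    r_left = project(K_left, pose_left, point_w) - obs_left
    pose_right = np.linalg.inv(T_left_right) @ pose_left  # chain through extrinsic
    r_right = project(K_right, pose_right, point_w) - obs_right
    return np.concatenate([r_left, r_right])

Feeding these residuals, stacked over all observations, to a sparse nonlinear least-squares solver with the point, pose and extrinsic blocks partitioned yields both the up-to-scale trajectory and an updated stereo calibration.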

Relevance: 10.00%

Abstract:

Background and purpose: The purpose of the work presented in this paper was to determine whether patient positioning and delivery errors could be detected using electronic portal images of intensity modulated radiotherapy (IMRT). Patients and methods: We carried out a series of controlled experiments delivering an IMRT beam to a humanoid phantom using both the dynamic and the multiple static field methods of delivery. The beams were imaged, the images calibrated to remove the IMRT fluence variation, and then compared with calibrated images of the reference beams without any delivery or position errors. The first set of experiments involved translating the position of the phantom both laterally and in the superior/inferior direction by distances of 1, 2, 5 and 10 mm. The phantom was also rotated by 1° and 2°. For the second set of measurements the phantom position was kept fixed and delivery errors were introduced to the beam. The delivery errors took the form of leaf position and segment intensity errors. Results: The method was able to detect shifts in the phantom position of 1 mm, leaf position errors of 2 mm, and dosimetry errors of 10% on a single segment of a 15-segment step-and-shoot IMRT delivery (significantly less than 1% of the total dose). Conclusions: The results of this work have shown that the method of imaging the IMRT beam and calibrating the images to remove the intensity modulations could be a useful tool in verifying both the patient position and the delivery of the beam.
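The comparison step described above, a calibrated portal image of the delivered beam set against the calibrated reference image, can be illustrated by a simple relative-difference map. The function below is a hypothetical sketch of that idea only, with an assumed 3% tolerance; it is not the analysis code used in the study.

import numpy as np

def epid_error_map(measured, reference, tolerance=0.03):
    # Relative difference between a calibrated portal image of the delivered
    # IMRT beam and the calibrated reference image; pixels whose deviation
    # exceeds the tolerance are flagged as possible position/delivery errors.
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rel_diff = (measured - reference) / np.maximum(reference, 1e-6)
    return rel_diff, np.abs(rel_diff) > tolerance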

Relevance: 10.00%

Abstract:

Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
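The voxel conversion described in this abstract has two steps: HU to mass density via linear interpolation along the CT-density ramp, and HU to material type via range lookup. A minimal sketch of both steps is given below; the ramp points, HU ranges and material names are illustrative assumptions, not the calibration of the scanner studied.

import numpy as np

# (1) HU -> mass density by linear interpolation between ramp points, and
# (2) HU -> material by range lookup. Values below are for illustration only.
ramp_hu      = [-1000, -100,   0,  100, 1000, 3000]   # Hounsfield units
ramp_density = [0.001, 0.92, 1.00, 1.07, 1.55, 2.80]  # g/cm^3

material_ranges = [            # (lower HU, upper HU, material name)
    (-1000, -950, "AIR"),
    (-950,  -100, "LUNG"),
    (-100,   125, "SOFT_TISSUE"),
    (125,   3000, "BONE"),
]

def hu_to_density(hu):
    # Mass density (g/cm^3) from HU via the piecewise-linear ramp.
    return np.interp(hu, ramp_hu, ramp_density)

def hu_to_material(hu):
    # Material label from HU by range lookup.
    for lo, hi, name in material_ranges:
        if lo <= hu < hi:
            return name
    return "BONE"   # fall-through for very high HU

# Example voxel: HU = 60 maps to soft tissue with an interpolated density.
print(hu_to_material(60), hu_to_density(60))

As the study's results suggest, coarsening the ramp (fewer interpolation points) distorts the density far more than renaming the materials does, so the ramp is where the conversion needs to be reproduced most carefully.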