501 results for Research Support


Relevance: 60.00%

Abstract:

Australia lacks a satisfactory, national paradigm for assessing competence and capacity in the context of testamentary, enduring power of attorney and advance care directive documents. Competence/capacity assessments are currently conducted on an ad hoc basis by legal and/or medical professionals, and the reliability of the assessment process depends on the skill set and mutual understanding of the professionals conducting it. The prevalence of diseases such as dementia, which impair cognition, is growing; this increasingly necessitates collaboration between the legal and medical professions when assessing the effect of mentally disabling conditions upon competency/capacity. Miscommunication and lack of understanding between the legal and medical professionals involved could impede the development of a satisfactory paradigm. A qualitative study seeking the views of legal and medical professionals who practise in this area has been conducted, incorporating surveys and interviews of 10 legal and 20 medical practitioners. Some of the results are discussed here. Practitioners were asked whether there is a standard approach and whether national guidelines would be desirable. There was general agreement that uniform guidelines for the assessment of competence/capacity would be desirable. The interviews also canvassed views on the state of the relationship between the professions, and the results of the empirical research support the hypothesis that relations between the professions could be improved. The development of a national paradigm would promote consistency and transparency of process, helping to improve the professional relationship and to maximise the principles of autonomy, participation and dignity.

Relevance: 60.00%

Abstract:

In this paper, a polynomial-time algorithm is presented for solving the Eden problem for graph cellular automata. The algorithm is based on our neighborhood elimination operation, which removes local neighborhood configurations that cannot be used in a pre-image of a given configuration. This paper presents a detailed derivation of our algorithm from first principles, together with a detailed complexity and accuracy analysis. It is shown that the average-case time complexity of the algorithm is Θ(n²), with best and worst cases of Ω(n) and O(n³) respectively. This represents a vast improvement in the upper bound over current methods, without compromising average-case performance.
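The abstract states the neighborhood elimination idea and its complexity but does not reproduce the algorithm itself. Purely as a point of reference, the sketch below shows what the Eden (Garden-of-Eden) decision problem asks, using a brute-force pre-image search for a one-dimensional cellular automaton on a ring; the function names and rule number are illustrative, and this exponential enumeration is the kind of baseline a polynomial-time method improves upon, not the authors' algorithm.

```python
from itertools import product

def eca_step(config, rule=110):
    """Apply one step of an elementary CA (given Wolfram rule number) on a ring."""
    n = len(config)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(config[(i - 1) % n] << 2) | (config[i] << 1) | config[(i + 1) % n]]
            for i in range(n)]

def is_garden_of_eden(config, rule=110):
    """Brute-force Eden check: True if no predecessor maps onto `config`.
    Exponential in len(config); for illustration only."""
    n = len(config)
    return not any(eca_step(list(cand), rule) == list(config)
                   for cand in product([0, 1], repeat=n))

# Example: test a small configuration under rule 110.
print(is_garden_of_eden([1, 0, 1, 1, 0, 0], rule=110))
```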

Relevance: 60.00%

Abstract:

Children with Autism Spectrum Disorder experience difficulty in communication and in understanding the social world, which can have negative consequences for their relationships, for managing emotions, and for dealing with the challenges of everyday life. This thesis examines the effectiveness of the Active and Reflective components of the Get REAL program through detailed coding of video-recorded observations and longitudinal quantitative analysis. The aim of Get REAL is to increase the social, emotional, and cognitive learning of children with High Functioning Autism (HFA). Get REAL is a group program designed specifically for use in inclusive primary school settings, developed in response to the mixed success of existing social skills programs in generalising learning to new contexts. The theoretical foundation of Get REAL combines pedagogical theory and learning theory to facilitate transfer of learning with experiential, individualised, evaluative and organisational approaches. This thesis is by publication and consists of four refereed journal papers: one accepted for publication and three under review. Paper 1 describes the development and theoretical basis of the Get REAL program and details the program structure and learning cycle. The focus of Paper 1 reflects the first question of interest in the thesis, which is the extent to which learning derived from participation in the program can be generalised to other contexts. Participants are 16 children with HFA ranging in age from 8 to 13 years. Results supported the generalisability of learning from Get REAL to home and school, as evidenced by parent and teacher data collected before and after participation in Get REAL. Following establishment of the generalisation of learning from Get REAL, Papers 2 and 3 focus on the Active and Reflective components of the program in order to examine how individual and group learning takes place. Participants (N = 12) in the program were videotaped during the Active and Reflective sessions. Using identical coding protocols for the video data, improvements in prosocial behaviour and a diminishing of inappropriate behaviours were apparent, with the exception of perspective taking. The data also revealed that 2 of the participants had atypical trajectories, and an in-depth case study analysis of these 2 participants was conducted in Paper 4. Data included reports from health care and education professionals within the school and externally (e.g., a paediatrician), and identified the multi-faceted nature of the care needed for children with comorbid diagnoses and extremely challenging family circumstances, and the complexity of effecting change in such cases. Results of this research support the effectiveness of the Get REAL program in promoting prosocial behaviours, such as improvements in engaging with others and emotional regulation, and in diminishing unwanted behaviours such as conduct problems. Further, the gains made by the participating children were found to generalise beyond Get REAL to home and other school settings. The research contained in the thesis adds to current knowledge about how learning can take place for children with HFA. Results show that an experiential learning framework with a focus on social cognition, together with explicit teaching scaffolded with video feedback, provides the key ingredients for the generalisation of social learning to broader contexts.

Relevance: 60.00%

Abstract:

An experimental dataset representing a typical flow field in a stormwater gross pollutant trap (GPT) was visualised. A technique was developed to apply the image-based flow visualisation (IBFV) algorithm to the raw dataset. Particle image velocimetry (PIV) software was previously used to capture the flow field data by tracking neutrally buoyant particles with a high-speed camera. The dataset consisted of scattered 2D point velocity vectors, and the IBFV visualisation facilitated flow feature characterisation within the GPT. The flow features played a pivotal role in understanding stormwater pollutant capture and retention behaviour within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate the possible flow paths of pollutants entering the GPT. The investigated flow paths were compared with the behaviour of pollutants monitored during experiments.
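IBFV itself advects a noise texture along the vector field, which is hard to show briefly; a simpler way to convey the same idea on the scattered PIV vectors described above is to interpolate the point velocities and trace a streamline from a seed point, much as the circular tracer marker does in the animations. The sketch below is an illustrative reconstruction, not the authors' code; the array names, seed point and step size are assumptions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def trace_streamline(points, u, v, seed, step=0.005, n_steps=400):
    """Trace a streamline through scattered 2-D PIV vectors (points: (N, 2);
    u, v: (N,) velocity components) using linear interpolation and RK2 steps."""
    fu = LinearNDInterpolator(points, u)
    fv = LinearNDInterpolator(points, v)

    def velocity(p):
        vel = np.array([float(fu(p)), float(fv(p))])
        return np.zeros(2) if np.any(np.isnan(vel)) else vel

    path = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = path[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * step * k1)
        if not np.any(k2):                      # left the measured region
            break
        path.append(p + step * k2)
    return np.array(path)

# Example with synthetic data standing in for the PIV measurements.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(500, 2))
u, v = -(pts[:, 1] - 0.5), (pts[:, 0] - 0.5)    # a simple swirling field
line = trace_streamline(pts, u, v, seed=(0.7, 0.5))
```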

Relevance: 60.00%

Abstract:

Background: The management of unruptured aneurysms is controversial, with the decision to treat influenced by aneurysm characteristics including size and morphology. Aneurysmal bleb formation is thought to be associated with an increased risk of rupture. Objective: To correlate computational fluid dynamic (CFD) indices with bleb formation. Methods: Anatomical models were constructed from three-dimensional rotational angiogram (3DRA) data in 27 patients with cerebral aneurysms harbouring single blebs. Additional models representing the aneurysm before bleb formation were constructed by digitally removing the bleb. We characterised haemodynamic features of the models both with and without the bleb using CFD. Flow structure, wall shear stress (WSS), pressure and oscillatory shear index (OSI) were analysed. Results: Blebs were located at or adjacent to the point of maximal WSS in a statistically significant proportion of cases (74.1%, p=0.019), irrespective of rupture status. Aneurysmal blebs were related to the inflow or outflow jet in 88.9% of cases (p<0.001), whilst 11.1% were unrelated. Maximal wall pressure and OSI were not significantly related to bleb location. The bleb region attained a lower WSS following its formation in 96.3% of cases (p<0.001), and this WSS was also lower than the average aneurysm WSS in 86% of cases (p<0.001). Conclusion: Cerebral aneurysm blebs generally form at or adjacent to the point of maximal WSS and are aligned with major flow structures. Wall pressure and OSI do not contribute to determining bleb location. The measurement of WSS using CFD models may potentially predict bleb formation and thus improve the assessment of rupture risk in unruptured aneurysms.
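The abstract does not spell out how WSS and OSI are computed; the snippet below applies the standard definitions (time-averaged WSS magnitude, and OSI as half of one minus the ratio of the magnitude of the time-averaged shear vector to the time average of its magnitude) to a time series of wall shear stress vectors such as would be exported from a transient CFD run. The array shapes, names and synthetic data are assumptions, not the study's pipeline.

```python
import numpy as np

def tawss_and_osi(tau):
    """tau: wall shear stress vectors sampled uniformly over one cardiac cycle,
    shape (n_timesteps, n_surface_points, 3).
    Returns time-averaged WSS magnitude and oscillatory shear index per point."""
    mean_vec = tau.mean(axis=0)                           # time-averaged vector, (P, 3)
    mean_mag = np.linalg.norm(tau, axis=2).mean(axis=0)   # time-averaged magnitude, (P,)
    tawss = mean_mag
    osi = 0.5 * (1.0 - np.linalg.norm(mean_vec, axis=1) / mean_mag)
    return tawss, osi

# Example: 40 timesteps, 1000 surface points of synthetic shear data.
rng = np.random.default_rng(0)
tau = rng.normal(size=(40, 1000, 3)) + np.array([2.0, 0.0, 0.0])
tawss, osi = tawss_and_osi(tau)
```

OSI is 0 where the shear vector never changes direction and approaches 0.5 where it is purely oscillatory, which is why it is reported alongside WSS when characterising disturbed flow.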

Relevance: 60.00%

Abstract:

Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed during situations where a mission should be aborted due to mechanical or other failure. On-board cameras provide information that can be used in the determination of potential landing sites, which are continually updated and ranked to prevent injury and minimize damage. Pulse Coupled Neural Networks (PCNNs) have been used for the detection of features in images that assist in the classification of vegetation and can be used to minimize damage to the aerial vehicle. However, a significant drawback in the use of PCNNs is that they are computationally expensive and have been more suited to off-line applications on conventional computing architectures. As heterogeneous computing architectures are becoming more common, an OpenCL implementation of a PCNN feature generator is presented and its performance is compared across OpenCL kernels designed for CPU, GPU and FPGA platforms. This comparison examines the compute times required for network convergence under a variety of images obtained during unmanned aerial vehicle trials to determine the plausibility for real-time feature detection.
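For readers unfamiliar with pulse-coupled neural networks, the sketch below implements the standard discrete PCNN iteration (feeding, linking, internal activity and dynamic threshold) over an image and records the number of neurons firing at each iteration, the kind of per-iteration signature commonly used as a feature vector. The parameter values and 3x3 coupling kernel are illustrative assumptions, and this NumPy version only stands in for the OpenCL kernels evaluated in the paper; in a sketch like this, convergence can be judged from when the firing counts settle into a repeating pattern.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_signature(img, n_iter=40, beta=0.2,
                   alpha_f=0.1, alpha_l=1.0, alpha_t=0.5,
                   v_f=0.5, v_l=0.2, v_t=20.0):
    """Run a standard pulse-coupled neural network over a normalised image
    and return the number of firing neurons at each iteration."""
    s = img.astype(float) / img.max()          # stimulus
    k = np.array([[0.5, 1.0, 0.5],             # coupling kernel (illustrative)
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    f = np.zeros_like(s)                       # feeding input
    l = np.zeros_like(s)                       # linking input
    y = np.zeros_like(s)                       # pulse output
    theta = np.ones_like(s)                    # dynamic threshold
    signature = []
    for _ in range(n_iter):
        f = np.exp(-alpha_f) * f + v_f * convolve(y, k, mode='constant') + s
        l = np.exp(-alpha_l) * l + v_l * convolve(y, k, mode='constant')
        u = f * (1.0 + beta * l)               # internal activity
        y = (u > theta).astype(float)
        theta = np.exp(-alpha_t) * theta + v_t * y
        signature.append(int(y.sum()))
    return signature
```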

Relevance: 60.00%

Abstract:

Objectives: This study introduces and assesses the precision of a standardized protocol for anthropometric measurement of the juvenile cranium using three-dimensional surface rendered models, for implementation in forensic investigation or paleodemographic research. Materials and methods: A subset of multi-slice computed tomography (MSCT) DICOM datasets (n=10) of modern Australian subadults (birth to 10 years) was accessed from the “Skeletal Biology and Forensic Anthropology Virtual Osteological Database” (n>1200), obtained from retrospective clinical scans taken at Brisbane children's hospitals (2009–2013). The capabilities of Geomagic Design X™ form the basis of this study, which introduces standardized protocols using triangle surface mesh models to (i) ascertain linear dimensions using reference plane networks and (ii) calculate the area of complex regions of interest on the cranium. Results: The protocols described in this paper demonstrate high levels of repeatability between five observers of varying anatomical expertise and software experience. Intra- and inter-observer error was indiscernible, with total technical error of measurement (TEM) values ≤0.56 mm, constituting <0.33% relative error (rTEM) for linear measurements, and a TEM value of ≤12.89 mm², equating to <1.18% (rTEM) of the total area of the anterior fontanelle and contiguous sutures. Conclusions: Exploiting the advances of MSCT in routine clinical assessment, this paper assesses the application of this virtual approach to acquire highly reproducible morphometric data in a non-invasive manner for human identification and population studies in growth and development. The protocols and precision testing presented are imperative for the advancement of “virtual anthropology” into routine Australian medico-legal death investigation.
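The precision statistics quoted (TEM and rTEM) follow standard anthropometric definitions; as a point of reference, the snippet below computes the two-observer technical error of measurement and its relative form for repeated linear measurements. The variable names and example values are illustrative, not data from the study.

```python
import numpy as np

def technical_error_of_measurement(obs1, obs2):
    """Two-observer TEM: sqrt(sum of squared differences / 2N)."""
    d = np.asarray(obs1, dtype=float) - np.asarray(obs2, dtype=float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

def relative_tem(obs1, obs2):
    """rTEM (%): TEM expressed as a percentage of the grand mean."""
    tem = technical_error_of_measurement(obs1, obs2)
    grand_mean = np.mean(np.concatenate([obs1, obs2]).astype(float))
    return 100.0 * tem / grand_mean

# Illustrative cranial breadth measurements (mm) from two observers.
a = np.array([118.2, 119.0, 121.5, 117.8])
b = np.array([118.0, 119.4, 121.1, 118.1])
print(technical_error_of_measurement(a, b), relative_tem(a, b))
```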

Relevance: 60.00%

Abstract:

Reconfigurable computing devices can increase the performance of compute-intensive algorithms by implementing application-specific co-processor architectures. The power cost for this performance gain is often an order of magnitude less than that of modern CPUs and GPUs. Exploiting the potential of reconfigurable devices such as Field-Programmable Gate Arrays (FPGAs) is typically a complex and tedious hardware engineering task. Recently the major FPGA vendors (Altera and Xilinx) have released their own high-level design tools, which have great potential for rapid development of FPGA-based custom accelerators. In this paper, we evaluate Altera's OpenCL Software Development Kit and Xilinx's Vivado High Level Synthesis tool. These tools are compared for their performance, logic utilisation, and ease of development for the test case of a tri-diagonal linear system solver.
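The test case used to compare the two tool flows is a tri-diagonal linear system solver; for a concrete reference point, the sketch below is the standard Thomas algorithm in Python. The paper's kernels are written for Altera OpenCL and Vivado HLS, so this is only an algorithmic baseline, not the evaluated code; the loop-carried dependency in the forward sweep is part of what makes this kernel an interesting test for high-level synthesis tools.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tri-diagonal system with the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. All length n."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a small 1-D Poisson-style system.
n = 5
a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
d = np.ones(n)
print(thomas_solve(a, b, c, d))
```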

Relevance: 60.00%

Abstract:

Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed during situations where the mission should be aborted due to mechanical or other failure. This article presents a pulse-coupled neural network (PCNN) to assist in vegetation classification within a vision-based landing site detection system for an unmanned aircraft. We propose a heterogeneous computing architecture and an OpenCL implementation of a PCNN feature generator. Its performance is compared across OpenCL kernels designed for CPU, GPU, and FPGA platforms. This comparison examines the compute times required for network convergence under a variety of images to determine the plausibility for real-time feature detection.

Relevance: 60.00%

Abstract:

For clinical use, it is important in electrocardiogram (ECG) signal analysis to detect not only the centre of the P wave, the QRS complex and the T wave, but also the time intervals, such as the ST segment. Much research has focused entirely on QRS complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, or the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and from beat to beat. To address this variability we propose the use of Markov-chain Monte Carlo statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study investigating the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal, where details such as wave morphology and noise levels are variable.
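To make the proposed approach concrete, the sketch below fits a simple sum-of-Gaussians beat model (one bump each for the P wave, QRS complex and T wave) to a noisy synthetic beat with a random-walk Metropolis sampler. The model, priors and tuning constants are illustrative assumptions, not the parameterisation used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def beat_model(t, params):
    """Sum of three Gaussian bumps: (amplitude, centre, width) for P, QRS, T."""
    out = np.zeros_like(t)
    for amp, mu, sig in params.reshape(3, 3):
        out += amp * np.exp(-0.5 * ((t - mu) / sig) ** 2)
    return out

def log_posterior(params, t, y, noise_sd=0.05):
    mus = params.reshape(3, 3)[:, 1]
    sigs = params.reshape(3, 3)[:, 2]
    if np.any(sigs <= 0) or np.any(mus < 0) or np.any(mus > 1):
        return -np.inf                               # flat priors with bounds
    resid = y - beat_model(t, params)
    return -0.5 * np.sum((resid / noise_sd) ** 2)

def metropolis(t, y, init, n_samples=20000, step=0.01):
    """Random-walk Metropolis over the 9 beat-model parameters."""
    current = np.asarray(init, dtype=float)
    current_lp = log_posterior(current, t, y)
    samples = []
    for _ in range(n_samples):
        proposal = current + step * rng.standard_normal(current.size)
        lp = log_posterior(proposal, t, y)
        if np.log(rng.uniform()) < lp - current_lp:
            current, current_lp = proposal, lp
        samples.append(current.copy())
    return np.array(samples)

# Synthetic beat: P, QRS and T bumps plus noise, then recover the parameters.
t = np.linspace(0.0, 1.0, 400)
true = np.array([0.15, 0.20, 0.03,      # P wave:  amplitude, centre, width
                 1.00, 0.45, 0.02,      # QRS complex
                 0.30, 0.75, 0.06])     # T wave
y = beat_model(t, true) + 0.05 * rng.standard_normal(t.size)
chain = metropolis(t, y, init=true + 0.05 * rng.standard_normal(9))
print(chain[len(chain) // 2:].mean(axis=0))          # posterior means after burn-in
```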

Relevance: 60.00%

Abstract:

Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc...
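CTCombine itself is not documented in the abstract beyond its three tasks (rotating the selected CT volume, converting CT numbers to mass densities, and writing output readable by DOSXYZnrc); the snippet below sketches the first two steps with SciPy, using an illustrative piecewise-linear HU-to-density ramp rather than any calibration used by the authors. Names, angles and ramp values are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_ct_volume(volume, gantry_angle_deg):
    """Rotate a CT volume (z, y, x) about the z axis so the simulated beam
    can remain at zero degrees; order-1 interpolation keeps HU values bounded."""
    return rotate(volume, gantry_angle_deg, axes=(1, 2),
                  reshape=False, order=1, mode='nearest')

def hu_to_density(volume_hu):
    """Map CT numbers (HU) to mass density (g/cm^3) with an illustrative
    piecewise-linear ramp (air, lung, water, soft tissue, bone)."""
    hu_points = np.array([-1000.0, -700.0, 0.0, 60.0, 1500.0])
    rho_points = np.array([0.001, 0.30, 1.00, 1.06, 1.85])
    return np.interp(volume_hu, hu_points, rho_points)

# Example: rotate a dummy volume by 30 degrees and convert to densities.
ct = np.full((32, 64, 64), -1000.0)           # air everywhere
ct[:, 20:44, 20:44] = 40.0                    # a soft-tissue block
density = hu_to_density(rotate_ct_volume(ct, 30.0))
```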

Relevance: 60.00%

Abstract:

Computational optimisation of clinically important electrocardiogram (ECG) signal features, within a single heart beat, using a Markov-chain Monte Carlo (MCMC) method is undertaken. A detailed, efficient, data-driven software implementation of an MCMC algorithm is presented. Software parallelisation is explored first, and it is shown that parallelisation is possible despite the large amount of inter-dependency between model parameters. An initial reconfigurable hardware approach is also explored for future applicability to real-time computation on a portable ECG device under continuous extended use.

Relevance: 60.00%

Abstract:

Realistic plant models are important for leaf area and plant volume estimation, reconstruction of growth canopies, structure generation of the plant, reconstruction of leaf surfaces and agrichemical spray droplet modelling. This article investigates several different scanning devices for obtaining a three-dimensional digitisation of plant leaves with a point cloud resolution of 200-500 μm. The devices tested were a Roland MDX-20, Microsoft Kinect, Roland LPX-250, Picoscan and Artec S. The applicability of each of these devices for scanning plant leaves is discussed. Of the devices tested, the most suitable for scanning plant leaves was found to be the Artec S scanner.

Relevance: 60.00%

Abstract:

Realistic virtual models of leaf surfaces are important for a number of applications in the plant sciences, such as modelling agrichemical spray droplet movement and spreading on the surface. In this context, the virtual surfaces are required to be sufficiently smooth to facilitate the use of the mathematical equations that govern the motion of the droplet. While an effective approach is to apply discrete smoothing D2-spline algorithms to reconstruct the leaf surfaces from three-dimensional scanned data, difficulties arise when dealing with wheat leaves that tend to twist and bend. To overcome this topological difficulty, we develop a parameterisation technique that rotates and translates the original data, allowing the surface to be fitted using the discrete smoothing D2-spline methods in the new parameter space. Our algorithm uses finite element methods to represent the surface as a linear combination of compactly supported shape functions. Numerical results confirm that the parameterisation, along with the use of discrete smoothing D2-spline techniques, produces realistic virtual representations of wheat leaves.
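As an illustration of the rotate-then-fit idea (though with a generic smoothing spline standing in for the discrete smoothing D2-spline finite element formulation used in the paper), the sketch below aligns a scanned point cloud with its principal axes so the twisted leaf becomes approximately a single-valued surface z = f(x, y), and then fits a smoothing spline in that parameter space. The function names, smoothing factor and synthetic data are assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_leaf_surface(points, smoothing=1e-4):
    """points: (N, 3) scanned leaf coordinates. Rotate/translate into the
    principal-axis frame, then fit a smooth surface z = f(x, y) there."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    # Principal axes via SVD: the last right-singular vector is the direction
    # of least variance, treated here as the local surface normal (z axis).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    local = centred @ vt.T                     # coordinates in the new frame
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    spline = SmoothBivariateSpline(x, y, z, s=smoothing * len(points))
    return spline, vt, centroid

# Example: a gently curved synthetic "leaf" standing in for scan data.
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 2000)
v = rng.uniform(-0.2, 0.2, 2000)
pts = np.column_stack([u, v, 0.1 * u ** 2 + 0.01 * rng.standard_normal(u.size)])
spline, frame, origin = fit_leaf_surface(pts)
print(spline.ev(0.0, 0.0))                    # surface height at the centre
```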

Relevance: 60.00%

Abstract:

This thesis presents an empirical study of the effects of topology on cellular automata rule spaces. The classical definition of a cellular automaton is restricted to that of a regular lattice, often with periodic boundary conditions; this definition is extended here to allow for arbitrary topologies. The dynamics of cellular automata within the triangular tessellation were analysed when transformed to 2-manifolds of topological genus 0, genus 1 and genus 2. Cellular automata dynamics were analysed from a statistical mechanics perspective. The sample sizes required to obtain accurate entropy calculations were determined by an entropy error analysis, which observed the error in the computed entropy against increasing sample sizes. Each cellular automata rule space was sampled repeatedly and the selected cellular automata were simulated over many thousands of trials for each topology, resulting in an entropy distribution for each rule space. The computed entropy distributions are indicative of the distribution of cellular automata dynamical classes. Through comparison of these dynamical class distributions using the E-statistic, it was identified that such topological changes cause the distributions to alter. This is a significant result, implying that both global structure and local dynamics play an important role in defining the long-term behaviour of cellular automata.
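The thesis works with triangular tessellations mapped onto manifolds of genus 0 to 2, which is beyond a short snippet, but the entropy-based sampling itself can be illustrated on a plain one-dimensional CA: sample rules, run each from many random initial conditions, and estimate the Shannon entropy of the long-run cell-state distribution. The sketch below is a generic estimator of that kind, not the exact measure or E-statistic comparison used in the thesis; rule choice, lattice size and trial counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def eca_run(rule, n_cells=128, n_steps=200):
    """Evolve an elementary CA (Wolfram rule number) on a ring from a random
    initial condition and return the final configuration."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = rng.integers(0, 2, n_cells).astype(np.uint8)
    for _ in range(n_steps):
        idx = (np.roll(state, 1) << 2) | (state << 1) | np.roll(state, -1)
        state = table[idx]
    return state

def mean_site_entropy(rule, n_trials=50):
    """Average Shannon entropy (bits) of the cell-state distribution in the
    final configurations, over many random initial conditions."""
    entropies = []
    for _ in range(n_trials):
        p1 = eca_run(rule).mean()
        p = np.array([1.0 - p1, p1])
        p = p[p > 0]
        entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))

# Sample the rule space and build an entropy distribution.
sampled_rules = rng.integers(0, 256, size=20)
distribution = [mean_site_entropy(int(r)) for r in sampled_rules]
```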