838 results for computer-based instrumentation
Abstract:
Echocardiography is the commonest form of non-invasive cardiac imaging and is fundamental to patient management. However, due to its methodology, it is also operator dependent. There are well-defined pathways in training and ongoing accreditation to achieve and maintain competency. To satisfy these requirements, significant time has to be dedicated to scanning patients, often in a time-pressured clinical environment. Alternative, computer-based training methods are therefore being considered to augment echocardiographic training. Numerous advances in technology have resulted in the development of interactive programmes and simulators to teach trainees the skills to perform particular procedures, including transthoracic and transoesophageal echocardiography. In this study, 82 sonographers and TOE proceduralists used an echocardiographic simulator and assessed its utility against defined criteria. Forty trainee sonographers assessed the simulator and were taught how to obtain an apical two-chamber (A2C) view and image the superior vena cava (SVC); 100% and 88% found the simulator useful in obtaining the SVC and A2C views, respectively. All users found it easy to use, and the majority found that it helped with image acquisition and interpretation. Forty-two attendees of a TOE training day assessed the simulator, with 100% finding it easy to use and agreeing that the augmented reality graphics benefited image acquisition; 90% felt that it was realistic. This study revealed that both trainee sonographers and TOE proceduralists found the simulation process realistic and that it helped with image acquisition and improved assessment of spatial relationships. Echocardiographic simulators may play an important role in the future training of echocardiographic skills.
Abstract:
This thesis is concerned with creating and evaluating interactive art systems that facilitate emergent participant experiences. For the purposes of this research, interactive art is computer-based art involving physical participation from the audience, while emergence is when a new form or concept appears that was not directly implied by the context from which it arose: this emergent ‘whole’ is more than a simple sum of its parts. The research aims to develop understanding of the nature of emergent experiences that might arise during participant interaction with interactive art systems. It also aims to understand the design issues surrounding the creation of these systems. The approach used is practice-based, integrating practice, evaluation and theoretical research. The practice used methods from reflection-in-action and iterative design to create two interactive art systems: Glass Pond and +-now. Creation of +-now resulted in a novel method for instantiating emergent shapes. Both artworks were also evaluated in exploratory studies. In addition, a main study with 30 participants was conducted on participant interaction with +-now. These sessions were video recorded and participants were interviewed about their experience. Recordings were transcribed and analysed using grounded theory methods. Emergent participant experiences were identified and classified using a taxonomy of emergence in interactive art that draws on the theoretical research. The outcomes of this practice-based research are summarised as follows. Two interactive art systems, of which the second clearly facilitates emergent interaction, were created. Their creation involved the development of a novel method for instantiating emergent shapes and informed aesthetic and design issues surrounding interactive art systems for emergence. A taxonomy of emergence in interactive art was also created. Other outcomes are the evaluation findings about participant experiences, including the different types of emergence experienced, and the coding schemes produced during data analysis.
Abstract:
By the end of the 20th century the shift from the professional recording studio to personal computer-based recording systems was well established (Chadabe 1997), and musicians could increasingly see the benefits of adding value to the musical process by producing their own musical endeavours. At the Queensland University of Technology (QUT), where we were teaching, the need for a musicianship program that took account of these trends was becoming clear. The Sound Media Musicianship unit described in this chapter was developed to fill this need and ran from 1999 through 2010.
Abstract:
Identifying the design features that impact construction is essential to developing cost-effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
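As a rough illustration of the grouping step described above, the sketch below clusters components by simple geometric attributes and cuts the resulting dendrogram at a tolerance to form similarity groups. The attribute set, component values and distance threshold are invented for demonstration; they are not the paper's ontology or reasoning process.

```python
# Minimal sketch of the grouping step: hierarchical clustering of components
# by geometric attributes. Attribute names, values, and the distance
# threshold are illustrative assumptions, not the paper's ontology.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical components described by (length, width, depth) in metres.
components = {
    "beam_A": [6.0, 0.30, 0.45],
    "beam_B": [6.0, 0.30, 0.45],
    "beam_C": [6.1, 0.30, 0.45],
    "beam_D": [9.0, 0.40, 0.60],
}
names = list(components)
X = np.array([components[n] for n in names])

# Pairwise geometric distances, then average-linkage clustering.
Z = linkage(pdist(X), method="average")

# Cut the dendrogram at an assumed tolerance to form similarity groups.
labels = fcluster(Z, t=0.2, criterion="distance")
for name, label in zip(names, labels):
    print(name, "-> group", label)
```

In the paper's framework the distance computation would also incorporate topological and symbolic attributes and the practitioners' preferences, rather than geometry alone.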
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients: from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories, and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, just as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship: accuracy and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs. Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimise the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date; knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
Abstract:
The project investigated the relationships between diversification in modes of delivery, use of information and communication technologies, academics’ teaching practices, and the context in which those practices are employed, in two of the three large universities in Brisbane: Griffith University and the Queensland University of Technology (QUT). The project’s initial plan involved the investigation of two sites: Queensland University of Technology’s Faculty of Education (Kelvin Grove campus) and Griffith University’s Faculty of Humanities (Nathan campus). Interviews associated with the Faculty of Education led to a decision to include a third site, the School of Law within Queensland University of Technology’s Faculty of Law, which is based on the Gardens Point campus. Here the investigation focused on the use of computer-based flexible learning practices, as distinct from the more text-based practices identified within the original two sites.
Abstract:
Railway is one of the most important, reliable and widely used means of transportation, carrying freight, passengers, minerals, grain and other commodities. Research on railway tracks is therefore extremely important for the development of railway engineering and technologies. The safe operation of a railway track depends on the track structure, which includes rails, fasteners, pads, sleepers, ballast, subballast and formation. Sleepers are very important components of the entire structure and may be made of timber, concrete, steel or synthetic materials. Concrete sleepers were first installed around the middle of the last century and are currently installed in great numbers around the world. Consequently, the design of concrete sleepers has a direct impact on the safe operation of railways. The "permissible stress" method is currently the most commonly used approach to sleeper design. However, the permissible stress principle does not consider the ultimate strength of materials, the probabilities of actual loads, or the risks associated with failure, all of which can lead to cost-ineffective and over-designed prestressed concrete sleepers. Recently, the limit states design method, which emerged in the last century and has already been applied in the design of buildings, bridges and other structures, has been proposed as a better method for the design of prestressed concrete sleepers. Limit states design has significant advantages over permissible stress design, such as utilisation of the full strength of the member and a rational analysis of the probabilities related to sleeper strength and applied loads. This research aims to apply ultimate limit states design to the prestressed concrete sleeper, namely to obtain the load factors for both static and dynamic loads in the ultimate limit states design equations. However, sleepers in rail tracks require different safety levels for different types of track, which means that different types of track require different load factors in the limit states design equations. Therefore, the core tasks of this research are to find the load factors for the static and dynamic components of loads on track, and the strength reduction factor for sleeper bending strength, in the ultimate limit states design equations for four main types of track: heavy haul, freight, medium speed passenger and high speed passenger tracks. To find these factors, multiple samples of static loads, dynamic loads and their distributions are needed. Of the four types of track, only the heavy haul track has measured data, from the Braeside Line (a heavy haul line in Central Queensland), from which the distributions of both static and dynamic loads can be derived. The other three types of track have no measured site data, and experimental data are hardly available. In order to generate the data samples and obtain their distributions, computer-based simulations were employed, with the wheel-track impacts assumed to be induced by wheel flats of different sizes. A valid simulation package named DTrack was first employed to generate the dynamic loads for the freight and medium speed passenger tracks. However, DTrack is only valid for tracks that carry low or medium speed vehicles. Therefore, a 3-D finite element (FE) model was then established for the wheel-track impact analysis of the high speed track.
This FE model was validated by comparing its simulation results with the DTrack simulation results and with results from traditional theoretical calculations, based on the case of the heavy haul track. The dynamic load data for the high speed track were then obtained from the FE model, and the distributions of both static and dynamic loads were extracted accordingly. All derived load distributions were fitted by appropriate functions. By extrapolating those distributions, the key parameters of the distributions of the static-load-induced sleeper bending moments and of the extreme wheel-rail impact-induced dynamic bending moments were obtained. The load factors were then obtained by limit states design calibration based on reliability analyses with the derived distributions. After that, a sensitivity analysis was performed and the reliability of the resulting limit states design equations was confirmed. It has been found that limit states design can be effectively applied to railway concrete sleepers. This research contributes significantly to railway engineering and track safety: it helps to reduce track structure failures, risks and accidents; better determines the load range for existing sleepers in track; better rates the strength of concrete sleepers to support larger impacts and loads on railway track; increases the reliability of concrete sleepers; and substantially reduces costs for the railway industry. Based on this research, much further work can be pursued. Firstly, the 3-D FE model has been found suitable for the study of track loadings and track structure vibrations. Secondly, equations for the serviceability and damageability limit states can be developed from the concepts underlying the ultimate limit states design equations for concrete sleepers obtained in this research.
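To make the calibration step concrete, the sketch below fits an extreme-value distribution to a synthetic sample of impact loads and extracts a characteristic design load for a chosen exceedance probability. The synthetic data, the Gumbel model and the 2% exceedance level are illustrative assumptions, not the distributions or target reliabilities used in the thesis.

```python
# Illustrative sketch of one calibration step: fit an extreme-value
# distribution to (synthetic) dynamic impact loads and extract a
# characteristic design load. The data, the Gumbel model and the 2%
# exceedance level are assumptions for demonstration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for simulated impact forces (kN), e.g. from DTrack or an FE model.
impact_kN = rng.gumbel(loc=250.0, scale=40.0, size=5000)

loc, scale = stats.gumbel_r.fit(impact_kN)

# Characteristic load: the value exceeded with 2% probability.
q_design = stats.gumbel_r.ppf(0.98, loc=loc, scale=scale)

# A simple load factor relative to the mean load (illustrative definition).
load_factor = q_design / impact_kN.mean()
print(f"design load ~ {q_design:.0f} kN, load factor ~ {load_factor:.2f}")
```

In the actual calibration, the fitted distributions of sleeper bending moments would feed a reliability analysis that selects load factors to meet a target probability of failure.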
Abstract:
This thesis addresses process simulation and validation in Business Process Management (BPM). It proposes that a hybrid Multi-Agent System (MAS) / 3D Virtual World approach is a valid method for better simulating the behaviour of human resources in business processes, supporting a wide range of rich visualization applications that can facilitate communication between business analysts and stakeholders. It is expected that the findings of this thesis may be fruitfully extended from BPM to other application domains, such as social simulation in video games and computer-based training animations.
Abstract:
Scientific visualisations such as computer-based animations and simulations are increasingly a feature of high school science instruction. Visualisations are adopted enthusiastically by teachers and embraced by students, and there is good evidence that they are popular and well received. There is limited evidence, however, of how effective they are in enabling students to learn key scientific concepts. This paper reports the results of a quantitative study conducted in Australian chemistry classrooms. The visualisations chosen were from free online sources, intended to model the ways in which classroom teachers use visualisations, but were found to have serious flaws for conceptual learning. There were also challenges in the degree of interactivity available to students using the visualisations. Within these limitations, no significant difference was found for teaching with and without these visualisations. Further study using better designed visualisations and with explicit attention to the pedagogy surrounding the visualisations will be required to gather high quality evidence of the effectiveness of visualisations for conceptual development.
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely done through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. The creative work titled “Contours in Motion” was the first in a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. The aim of the creative work was to diverge from the standard practice of using particle systems and/or simply re-targeting the motion data to drive a 3D character as a means of producing abstracted visual forms. To facilitate this divergence, a virtual dynamic object was tethered to a selection of data points from a captured performance. The properties of the dynamic object were then adjusted to balance the influence of the human movement data against the influence of computer-based randomisation. The resulting outcome was a visual form that surpassed simple data visualisation to project the intent of the performer’s movements into the visual shape itself. The reported outcomes from this investigation have contributed to a larger study on the use of motion capture in the generative arts, furthering the understanding of, and generating theories on, practice.
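The tethering idea can be sketched as a spring-damper pulling a virtual mass toward a moving capture point, with an adjustable dose of randomness blended in. The stiffness, damping and noise values below, and the circular stand-in path, are illustrative assumptions rather than the settings or data used in “Contours in Motion”.

```python
# Sketch of the tethering idea: a virtual mass is pulled toward a moving
# capture point by a spring-damper, with an adjustable dose of randomness.
# Stiffness, damping, noise gain and the circular stand-in path are
# illustrative assumptions, not the settings used in "Contours in Motion".
import numpy as np

rng = np.random.default_rng(1)
dt, k, c, noise = 1 / 60, 30.0, 4.0, 0.5  # timestep (s), stiffness, damping, noise gain

# Stand-in for one captured data point: a circular path over 5 seconds.
t = np.arange(0.0, 5.0, dt)
target = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.0 * t], axis=1)

pos, vel, trail = target[0].copy(), np.zeros(3), []
for p in target:
    # Spring-damper pull toward the capture point, plus a random perturbation.
    acc = k * (p - pos) - c * vel + noise * rng.standard_normal(3)
    vel += acc * dt
    pos += vel * dt
    trail.append(pos.copy())

trail = np.array(trail)  # the generated 'contour' to be rendered
print(trail.shape)
```

Raising the noise gain shifts the balance from faithful tracking of the performer toward computer-generated variation, which is the trade-off the work explores.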
Abstract:
Too often the relationship between client and external consultant is perceived as one of protagonist versus antagonist. Stories of dramatic, failed consultancies abound, as do related anecdotal quips. A contributing factor in many "apparently" failed consultancies is a poor appreciation, by both the client and the consultant, of the client's true goals for the project and of how to assess progress toward those goals. This paper presents and analyses a measurement model for assessing client success when engaging an external consultant. Three main areas of assessment are identified: (1) the consultant's recommendations, (2) client learning, and (3) consultant performance. Engagement success is empirically measured along these dimensions through a series of case studies and a subsequent survey of clients and consultants involved in 85 computer-based information system selection projects. Validation of the model constructs suggests the existence of six distinct and individually important dimensions of engagement success. Both clients and consultants are encouraged to attend to these dimensions in pre-engagement proposal and selection processes, and in post-engagement evaluation of outcomes.
Abstract:
This study explored the interaction between physical and psychosocial factors in the workplace on neck pain and disability in female computer users. A self-report survey was used to collect data on physical risk factors (monitor location, duration of time spent using the keyboard and mouse) and psychosocial domains (as assessed by the Job Content Questionnaire). The Neck Disability Index was the outcome measure. Interactions among the physical and psychosocial factors were examined in analysis of covariance. High supervisor support, decision authority and skill discretion protect against the negative impact of (1) time spent on computer-based tasks, (2) non-optimal placement of the computer monitor, and (3) long duration of mouse use. Office workers with greater neck pain experience a combination of high physical and low psychosocial stressors at work. Prevention and intervention strategies that target both sets of risk factors are likely to be more successful than single-intervention programmes. Statement of Relevance: The results of this study demonstrate that the interaction of physical and psychosocial factors in the workplace has a stronger association with neck pain and disability than the presence of either factor alone. This finding has important implications for strategies aimed at the prevention of musculoskeletal problems in office workers.
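The interaction test at the heart of the analysis can be sketched as a linear model with a physical-by-psychosocial product term, a simplified analogue of the study's analysis of covariance. The variable names and simulated data below are invented for illustration; the study's actual measures were the Job Content Questionnaire scales and the Neck Disability Index.

```python
# Simplified analogue of the interaction analysis: a linear model with a
# physical x psychosocial product term. Variable names and simulated data
# are invented; the study used Job Content Questionnaire scales and the
# Neck Disability Index.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "mouse_hours": rng.uniform(0, 6, n),         # physical exposure
    "supervisor_support": rng.uniform(1, 5, n),  # psychosocial resource
})
# Simulated outcome: exposure raises disability less when support is high.
df["ndi"] = (5 + 2.0 * df["mouse_hours"]
             - 0.4 * df["mouse_hours"] * df["supervisor_support"]
             + rng.normal(0, 2, n))

model = smf.ols("ndi ~ mouse_hours * supervisor_support", data=df).fit()
print(model.summary().tables[1])  # the interaction term carries the key test
```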
Abstract:
Effects of physical activity interventions in youth: a review. International SportMed Journal, Vol. 2, No. 5, 2001. The purpose of this paper is to review the peer-reviewed literature pertinent to physical activity interventions for children and adolescents. In order to provide a more quantitative conclusion regarding the effectiveness of these interventions, a meta-analytic approach was utilized, in which the effect sizes from each study (the magnitude of each intervention effect expressed as a standardized effect size, representing the influence of the treatment on the dependent variable) are pooled to provide a global estimate of effectiveness. A search of the relevant peer-reviewed literature was conducted using several computer-based databases, including MEDLINE, PsycLIT, SOCIAL SCIENCE INDEX, and SPORTS DISCUS. Manual searches were also made using the reference lists from recovered articles. Applying strict criteria for quality of design and assessment of physical activity, 10 studies were located, yielding a total of 44 effect sizes. The mean effect size was 0.47 (95% C.I. 0.28–0.66), suggesting that interventions have produced moderate increases in physical activity behavior. Effect sizes ranged from –0.61 to 2.5. Interventions focusing on increasing the amount of physical activity performed during regular physical education were more effective than those targeting overall levels of physical activity. Interventions were almost entirely school-based. Accordingly, the development and evaluation of community-based approaches for promoting physical activity among young people, especially older adolescents, remains an urgent priority for future research.
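The pooling step can be illustrated with a minimal fixed-effect computation: weight each effect size by its inverse variance, combine, and report a 95% confidence interval. The five effect sizes and variances below are invented placeholders, not the 44 effects analysed in the review.

```python
# Minimal fixed-effect pooling of standardized effect sizes with
# inverse-variance weights and a 95% confidence interval. The five effects
# and variances below are invented placeholders, not the review's 44 effects.
import numpy as np

effects = np.array([0.2, 0.8, 0.5, 0.4, 0.6])    # per-study effect sizes (d)
variances = np.array([0.05, 0.10, 0.04, 0.08, 0.06])

weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled d = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```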
Abstract:
PURPOSE: Compensatory responses may attenuate the effectiveness of exercise training in weight management. The aim of this study was to compare the effects of moderate- and high-intensity interval training on eating behavior compensation. METHODS: Using a crossover design, 10 overweight and obese men participated in 4 weeks each of moderate (MIIT) and high (HIIT) intensity interval training. MIIT consisted of 5-min cycling stages at ±20% of the mechanical work at 45% VO2peak, and HIIT consisted of alternating 30-s work at 90% VO2peak and 30-s rests, for 30 to 45 min. Assessments included a constant-load exercise test at 45% VO2peak for 45 min followed by 60 min of recovery. Appetite sensations were measured during the exercise test using a visual analogue scale. Food preferences (liking and wanting) were assessed using a computer-based paradigm that uses 20 photographic food stimuli varying along two dimensions: fat (high or low) and taste (sweet or non-sweet). An ad libitum test meal was provided after the constant-load exercise test. RESULTS: Exercise-induced hunger and desire to eat decreased after HIIT, and the difference between MIIT and HIIT in desire to eat approached significance (p = .07). Exercise-induced liking for high-fat non-sweet food tended to increase after MIIT and decreased after HIIT (p = .09). Fat intake decreased by 16% after HIIT and increased by 38% after MIIT, with the difference between MIIT and HIIT approaching significance (p = .07). CONCLUSIONS: This study provides evidence that energy intake compensation differs between MIIT and HIIT.
Abstract:
In this paper, we propose a new load distribution strategy called 'send-and-receive' for scheduling divisible loads in a linear network of processors with communication delay. This strategy is designed to optimally utilize the network resources and thereby minimize the processing time of the entire processing load. Closed-form expressions for the optimal sizes of the load fractions and the processing time are derived for the cases where the processing load originates at a processor located at the boundary or in the interior of the network. A condition on processor and link speeds is also derived to ensure that the processors are continuously engaged in the load distribution. This paper also presents a parallel implementation of the 'digital watermarking problem' on a personal computer-based Pentium Linear Network (PLN) topology. Experiments are carried out to study the performance of the proposed strategy, and the results are compared with other strategies found in the literature.
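For context, the sketch below solves a simplified classical divisible-load model for a daisy chain with the load originating at the boundary processor: the fractions are chosen so that all processors finish at the same instant. This is a generic textbook formulation, not the paper's send-and-receive strategy, and the per-unit computation and communication times are invented.

```python
# Simplified classical divisible-load model for a daisy chain with the load
# originating at the boundary processor P1: choose fractions so that all
# processors finish simultaneously (store-and-forward communication).
# This is a generic textbook formulation, not the paper's send-and-receive
# strategy; the w and z values are invented.
import numpy as np

w = np.array([1.0, 1.2, 0.8, 1.5])  # per-unit computation times of P1..P4
z = np.array([0.2, 0.3, 0.25])      # per-unit communication times of links

n = len(w)
A = np.zeros((n, n))
b = np.zeros(n)
# Equal-finish conditions: a_i*w_i = z_i*sum_{j>i} a_j + a_{i+1}*w_{i+1}.
for i in range(n - 1):
    A[i, i] = w[i]
    A[i, i + 1:] -= z[i]
    A[i, i + 1] -= w[i + 1]
# Normalisation: the fractions sum to 1.
A[n - 1, :] = 1.0
b[n - 1] = 1.0

alpha = np.linalg.solve(A, b)
print("load fractions:", alpha.round(3))
print("processing time:", round(alpha[0] * w[0], 3))
```

The paper's contribution is a different distribution protocol ('send-and-receive') with closed-form fractions for both boundary and interior origination, whereas this sketch simply solves the equal-finish-time system numerically.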