772 results for Computer based training
Abstract:
Previous studies have shown that exercise (Ex) interventions create a stronger coupling between energy intake (EI) and energy expenditure (EE), leading to greater homeostasis of the energy-balance (EB) regulatory system than diet interventions, in which an uncoupling between EI and EE occurs. The weight-loss benefits of Ex and diet interventions depend greatly on compensatory responses. The present study investigated an 8-week medium-term Ex and diet intervention program (the Ex intervention comprised 500 kcal of EE five days per week over four weeks at 65-75% of maximal heart rate, whereas the diet intervention comprised a 500 kcal decrease in EI five days per week over four weeks) and its effects on compensatory responses and appetite regulation among healthy individuals, using a between- and within-subjects design. The study tested the effects of an acute dietary manipulation on appetite and compensatory behaviours, and whether a diet and/or Ex intervention predisposes individuals to disturbances in EB homeostasis. Energy intake at an ad libitum lunch test meal after a high-energy (HE; 556 kcal) or low-energy (LE; 239 kcal) breakfast pre-load was measured during the Baseline (Weeks -4 to 0) and Intervention (Weeks 0 to 4) phases in 13 healthy volunteers (three males and ten females; mean age 35 years [SD ± 9]; mean BMI 25 kg/m2 [SD ± 3.8]; Ex group n = 7, diet group n = 5; one female in the diet group dropped out midway, so 12 participants completed the study). At Weeks -4, 0 and 4, visual analogue scales (VAS) were used to assess hunger and satiety, and liking and wanting (L&W) for nutrient and taste preferences were assessed using a computer-based system (E-Prime v1.1.4). Ad libitum test meal EI was consistently lower after the HE pre-load than after the LE pre-load; however, this pattern was not consistent during the diet intervention.
A pre-load × group interaction on ad libitum test meal EI indicated that during the intervention phase the Ex group showed improved sensitivity in detecting the energy content difference between the two pre-loads and improved compensation at the ad libitum test meal, whereas the diet group showed a reduced ability to differentiate between the two pre-loads and poorer compensation (F[1,10] = 2.88, p not significant). This study supports previous findings on the effects of Ex and diet interventions on appetite and compensatory responses: Ex increases, and diet decreases, energy-balance sensitivity.
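The compensation described in this abstract is commonly quantified as the percentage of the pre-load energy difference that is offset at the test meal. A minimal Python sketch using the stated pre-load energies (556 kcal HE, 239 kcal LE); the intake values in the example are hypothetical, not study data:

```python
def percent_compensation(ei_after_low, ei_after_high,
                         preload_high_kcal=556, preload_low_kcal=239):
    """Percent energy compensation at an ad libitum test meal.

    100% means the extra intake after the low-energy pre-load exactly
    offsets the energy difference between the two pre-loads.
    """
    preload_diff = preload_high_kcal - preload_low_kcal  # 317 kcal here
    return 100.0 * (ei_after_low - ei_after_high) / preload_diff


# Hypothetical lunch intakes (kcal): full compensation vs. 50% compensation.
print(percent_compensation(900, 583))    # intake gap of 317 kcal -> 100.0
print(percent_compensation(900, 741.5))  # intake gap of 158.5 kcal -> 50.0
```

Values near 100% indicate accurate detection of the pre-load energy content; values near 0% indicate poor compensation.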
Abstract:
This paper presents a number of characteristics of the Internet that make it attractive to online groomers. Relevant Internet characteristics include disconnected personal communication, mediating technology, universality, network externalities, distribution channel, time moderator, low-cost standard, electronic double, electronic double manipulation, information asymmetry, infinite virtual capacity, independence in time and space, cyberspace, and dynamic social network. Potential sex offenders join virtual communities, where they meet other persons who share the same interests. A virtual community provides an online meeting place where people with similar interests can communicate and find useful information. Communication between members may be via email, bulletin boards, online chat, web-based conferencing or other computer-based media.
Abstract:
A quantitative, quasi-experimental study of the effectiveness of computer-based scientific visualizations for concept learning on the part of Year 11 physics students (n=80) was conducted in six Queensland high school classrooms. Students’ gender and academic ability were also considered as factors in relation to the effectiveness of teaching with visualizations. Learning with visualizations was found to be as effective as learning without them for all students, with no statistically significant difference in outcomes being observed for the group as a whole or on the academic ability dimension. Male students were found to learn significantly better with visualizations than without, while no such effect was observed for female students. This may give rise to some concern about the equity issues raised by introducing visualizations. Given that other research shows that students enjoy learning with visualizations and that their engagement with learning is enhanced, the finding that learning outcomes are the same as for teaching without visualizations supports teachers’ use of visualizations.
Consecutive days of cold water immersion: effects on cycling performance and heart rate variability.
Abstract:
We investigated performance and heart rate (HR) variability (HRV) over consecutive days of cycling with post-exercise cold water immersion (CWI) or passive recovery (PAS). In a crossover design, 11 cyclists completed two separate 3-day training blocks (120 min cycling per day, 66 maximal sprints, 9 min time trialling [TT]), followed by 2 days of recovery-based training. The cyclists recovered from each training session by standing in cold water (10 °C) or at room temperature (27 °C) for 5 min. Mean power for sprints, total TT work and HR were assessed during each session. Resting vagal-HRV (natural logarithm of square-root of mean squared differences of successive R-R intervals; ln rMSSD) was assessed after exercise, after the recovery intervention, during sleep and upon waking. CWI allowed better maintenance of mean sprint power (between-trial difference [90 % confidence limits] +12.4 % [5.9; 18.9]), cadence (+2.0 % [0.6; 3.5]), and mean HR during exercise (+1.6 % [0.0; 3.2]) compared with PAS. ln rMSSD immediately following CWI was higher (+144 % [92; 211]) compared with PAS. There was no difference between the trials in TT performance (-0.2 % [-3.5; 3.0]) or waking ln rMSSD (-1.2 % [-5.9; 3.4]). CWI helps to maintain sprint performance during consecutive days of training, whereas its effects on vagal-HRV vary over time and depend on prior exercise intensity.
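The vagal-HRV index defined in parentheses above (ln rMSSD: the natural logarithm of the root mean square of successive R-R interval differences) can be computed directly from a series of R-R intervals. A minimal sketch; the example intervals are illustrative, not trial data:

```python
import math

def ln_rmssd(rr_intervals_ms):
    """Natural log of rMSSD, a common vagal-HRV index.

    rMSSD = square root of the mean of the squared differences
    between successive R-R intervals (in milliseconds).
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return math.log(rmssd)


# Illustrative R-R series (ms): successive differences are 10, -20, 15, -10.
print(round(ln_rmssd([800, 810, 790, 805, 795]), 3))
```

Higher values reflect greater parasympathetic (vagal) modulation, which is why the index rises immediately after cold water immersion in the study above.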
Abstract:
This thesis is concerned with creating and evaluating interactive art systems that facilitate emergent participant experiences. For the purposes of this research, interactive art refers to computer-based art involving physical participation from the audience, while emergence occurs when a new form or concept appears that was not directly implied by the context from which it arose. This emergent ‘whole’ is more than a simple sum of its parts. The research aims to develop understanding of the nature of emergent experiences that might arise during participant interaction with interactive art systems. It also aims to understand the design issues surrounding the creation of these systems. The approach used is practice-based, integrating practice, evaluation and theoretical research. Practice used methods from reflection-in-action and iterative design to create two interactive art systems: Glass Pond and +-now. Creation of +-now resulted in a novel method for instantiating emergent shapes. Both artworks were also evaluated in exploratory studies. In addition, a main study with 30 participants was conducted on participant interaction with +-now. These sessions were video recorded and participants were interviewed about their experience. Recordings were transcribed and analysed using grounded theory methods. Emergent participant experiences were identified and classified using a taxonomy of emergence in interactive art. This taxonomy draws on theoretical research. The outcomes of this practice-based research are summarised as follows. Two interactive art systems, the second of which clearly facilitates emergent interaction, were created. Their creation involved the development of a novel method for instantiating emergent shapes and informed aesthetic and design issues surrounding interactive art systems for emergence. A taxonomy of emergence in interactive art was also created.
Other outcomes are the evaluation findings about participant experiences, including different types of emergence experienced and the coding schemes produced during data analysis.
Abstract:
By the end of the 20th century the shift from the professional recording studio to personal computer-based recording systems was well established (Chadabe 1997), and musicians could increasingly see the benefits of adding value to the musical process by producing their own musical endeavours. At the Queensland University of Technology (QUT), where we were teaching, the need for a musicianship program that took account of these trends was becoming clear. The Sound Media Musicianship unit described in this chapter was developed to fill this need and ran from 1999 through 2010.
Abstract:
Purpose The aim was to assess the effects of a Tai Chi-based program on health-related quality of life (HR-QOL) in people with elevated blood glucose or diabetes who were not on medication for glucose control. Method 41 participants were randomly allocated to either a Tai Chi intervention group (N = 20) or a usual medical care control group (N = 21). The Tai Chi group involved three 1.5-hour supervised, group-based training sessions per week for 12 weeks. Indicators of HR-QOL were assessed by self-report survey immediately before and after the intervention. Results There were significant improvements in favour of the Tai Chi group for the SF36 subscales of physical functioning (mean difference = 5.46, 95% CI = 1.35-9.57, P < 0.05), role physical (mean difference = 18.60, 95% CI = 2.16-35.05, P < 0.05), bodily pain (mean difference = 9.88, 95% CI = 2.06-17.69, P < 0.05) and vitality (mean difference = 9.96, 95% CI = 0.77-19.15, P < 0.05). Conclusions The findings show that this Tai Chi program improved indicators of HR-QOL, including physical functioning, role physical, bodily pain and vitality, in people with elevated blood glucose or diabetes who were not on diabetes medication.
Abstract:
Identifying the design features that impact construction is essential to developing cost effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of the component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
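The grouping step described above (evaluate similarity between components, create groupings, quantify the degree of similarity) can be sketched as follows. The attribute names, the relative-variation score and the greedy single-link grouping are illustrative assumptions, not the paper's ontology or reasoning process:

```python
def degree_of_similarity(a, b, attrs):
    """Average per-attribute similarity in [0, 1]; 1.0 means identical.

    Each attribute contributes 1 - |relative variation|, a simple
    stand-in for the paper's direction/degree-of-variation ontology.
    """
    scores = []
    for k in attrs:
        hi = max(abs(a[k]), abs(b[k]))
        scores.append(1.0 if hi == 0 else 1.0 - abs(a[k] - b[k]) / hi)
    return sum(scores) / len(scores)

def group_similar(components, attrs, threshold=0.9):
    """Greedy single-link grouping: a component joins the first group
    containing any sufficiently similar member (a simplified stand-in
    for full cluster analysis)."""
    groups = []
    for c in components:
        for g in groups:
            if any(degree_of_similarity(c, m, attrs) >= threshold for m in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups


# Hypothetical column components: width, depth, height in mm.
columns = [
    {"w": 400, "d": 400, "h": 3000},
    {"w": 400, "d": 400, "h": 3050},  # near-identical to the first
    {"w": 600, "d": 600, "h": 3000},  # distinctly larger section
]
print([len(g) for g in group_similar(columns, ["w", "d", "h"])])  # [2, 1]
```

An estimating application could then assign one construction method and productivity rate per group rather than per component.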
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients - from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories, and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship - accuracy and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs.
Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible so as to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
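Estimating accuracy of the kind discussed above is conventionally summarised by the bias and consistency of estimate-to-actual ratios. A minimal sketch with hypothetical figures, not the paper's data:

```python
import statistics

def accuracy_measures(estimates, actuals):
    """Two usual summaries of estimating accuracy:

    - bias: mean estimate/actual ratio (1.0 = unbiased on average)
    - cv:   coefficient of variation of the ratios, in percent
            (lower = more consistent estimating)
    """
    ratios = [e / a for e, a in zip(estimates, actuals)]
    bias = statistics.mean(ratios)
    cv = statistics.stdev(ratios) / bias * 100
    return bias, cv


# Hypothetical tender estimates vs. actual (lowest bid) prices.
bias, cv = accuracy_measures([95, 105, 110], [100, 100, 100])
print(f"bias = {bias:.3f}, cv = {cv:.1f}%")
```

Factors such as the amount of design information available would be studied by comparing these measures across subsets of projects.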
Abstract:
The project investigated the relationships between diversification in modes of delivery, use of information and communication technologies, academics’ teaching practices, and the context in which those practices are employed, in two of the three large universities in Brisbane: Griffith University and the Queensland University of Technology (QUT). The project’s initial plan involved the investigation of two sites: Queensland University of Technology’s Faculty of Education (Kelvin Grove campus) and Griffith University’s Faculty of Humanities (Nathan campus). Interviews associated with the Faculty of Education led to a decision to include a third site: the School of Law within Queensland University of Technology’s Faculty of Law, which is based at the Gardens Point campus. Here the investigation focused on the use of computer-based flexible learning practices, as distinct from the more text-based practices identified within the original two sites.
Abstract:
Most individuals have more than one job or occupation in their working lives. Most employees are repeatedly faced with the choice of whether to remain in their present job (with the possibility of promotion), to quit to another job in the same occupation with a different firm, or, more radically, to change occupation. At each stage in an individual's career, the scope for future job or occupational mobility is largely conditioned by the type and quantity of their human capital. This paper presents an empirical study of the factors which link occupational mobility and the acquisition of firm-based, occupation-specific or general human capital. The data employed are from a cohort of 1980 UK graduates drawn from the Department of Employment Survey 1987. The econometric work presents estimates of the role of firm-based training and occupation-specific training in the career mobility of qualified manpower in their first seven years in the labour market.
Abstract:
Railway is one of the most important, reliable and widely used means of transportation, carrying freight, passengers, minerals, grain, etc. Research on railway tracks is therefore extremely important for the development of railway engineering and technologies. The safe operation of a railway track depends on the railway track structure, which includes rails, fasteners, pads, sleepers, ballast, subballast and formation. Sleepers are very important components of the entire structure and may be made of timber, concrete, steel or synthetic materials. Concrete sleepers were first installed around the middle of the last century and are currently installed in great numbers around the world. Consequently, the design of concrete sleepers has a direct impact on the safe operation of railways. The "permissible stress" method is currently the most commonly used method for designing sleepers. However, the permissible stress principle does not consider the ultimate strength of materials, the probabilities of actual loads, or the risks associated with failure, which suggests that current prestressed concrete sleepers may be cost-ineffective and over-designed. Recently, the limit states design method, which appeared in the last century and has already been applied in the design of buildings, bridges, etc., has been proposed as a better method for the design of prestressed concrete sleepers. Limit states design has significant advantages over permissible stress design, such as the utilisation of the full strength of the member and a rational analysis of the probabilities related to sleeper strength and applied loads. This research aims to apply ultimate limit states design to the prestressed concrete sleeper, namely to obtain the load factors for both static and dynamic loads in the ultimate limit states design equations.
However, sleepers in rail tracks require different safety levels for different types of tracks, which means that different track types need different load factors in the limit states design equations. Therefore, the core tasks of this research are to find the load factors for the static and dynamic components of loads on track, and the strength reduction factor for sleeper bending strength, in the ultimate limit states design equations for four main types of tracks: heavy haul, freight, medium speed passenger and high speed passenger tracks. To find those factors, multiple samples of static loads, dynamic loads and their distributions are needed. Of the four track types, the heavy haul track has measured data from the Braeside Line (a heavy haul line in Central Queensland), and the distributions of both static and dynamic loads can be derived from these data. The other three track types have no measured site data, and experimental data are hardly available. In order to generate data samples and obtain their distributions, computer-based simulations were employed, with wheel-track impacts assumed to be induced by wheel flats of different sizes. A validated simulation package named DTrack was first employed to generate the dynamic loads for the freight and medium speed passenger tracks. However, DTrack is only valid for tracks which carry low or medium speed vehicles. Therefore, a 3-D finite element (FE) model was then established for wheel-track impact analysis of the high speed track. This FE model was validated by comparing its simulation results with the DTrack simulation results, and with results from traditional theoretical calculations, based on the case of the heavy haul track. The dynamic load data for the high speed track were then obtained from the FE model, and the distributions of both static and dynamic loads were extracted accordingly.
All derived load distributions were fitted by appropriate functions. By extrapolating those distributions, the important distribution parameters for the static-load-induced sleeper bending moment and for the extreme wheel-rail impact-force-induced sleeper dynamic bending moments were obtained. The load factors were then obtained by limit states design calibration based on reliability analyses with the derived distributions. After that, a sensitivity analysis was performed and the reliability of the resulting limit states design equations was confirmed. It was found that limit states design can be effectively applied to railway concrete sleepers. This research contributes significantly to railway engineering and track safety. It helps to decrease failures, risks and accidents in the track structure; better determines the load range for existing sleepers in track; better rates the strength of concrete sleepers to support larger impacts and loads on railway track; increases the reliability of concrete sleepers; and offers substantial savings for the railway industry. Based on this research, further work can proceed in several directions. Firstly, the 3-D FE model has been found suitable for the study of track loadings and track structure vibrations. Secondly, equations for the serviceability and damageability limit states can be developed from the concepts behind the ultimate limit states design equations for concrete sleepers obtained in this research.
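The reliability-analysis step at the heart of the calibration described above can be illustrated with a minimal Monte Carlo sketch: sample a capacity (sleeper bending strength) and a load effect (bending moment) from assumed distributions and count how often the load exceeds the capacity. The distribution choices and all numbers are illustrative assumptions, not the thesis's fitted distributions:

```python
import math
import random

def probability_of_failure(capacity_mean, capacity_cov,
                           load_mean, load_cov, n=200_000, seed=1):
    """Monte Carlo estimate of P(load effect > capacity).

    Assumes, purely for illustration, a lognormal capacity and a
    normal load effect, each defined by mean and coefficient of
    variation (cov = standard deviation / mean).
    """
    rng = random.Random(seed)
    # Lognormal parameters from the mean and cov of the capacity.
    sigma2 = math.log(1 + capacity_cov ** 2)
    mu = math.log(capacity_mean) - sigma2 / 2
    failures = 0
    for _ in range(n):
        r = math.exp(rng.gauss(mu, math.sqrt(sigma2)))   # capacity sample
        s = rng.gauss(load_mean, load_cov * load_mean)   # load-effect sample
        if s > r:
            failures += 1
    return failures / n


# Illustrative bending moments in kN·m (not data from the thesis).
pf = probability_of_failure(capacity_mean=70, capacity_cov=0.10,
                            load_mean=40, load_cov=0.25)
print(f"estimated probability of failure: {pf:.4f}")
```

Calibration then amounts to adjusting the load factors (scaling the design load effect) until this failure probability meets the target reliability chosen for each track type.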
Abstract:
Scientific visualisations such as computer-based animations and simulations are increasingly a feature of high school science instruction. Visualisations are adopted enthusiastically by teachers and embraced by students, and there is good evidence that they are popular and well received. There is limited evidence, however, of how effective they are in enabling students to learn key scientific concepts. This paper reports the results of a quantitative study conducted in Australian chemistry classrooms. The visualisations chosen were from free online sources, intended to model the ways in which classroom teachers use visualisations, but were found to have serious flaws for conceptual learning. There were also challenges in the degree of interactivity available to students using the visualisations. Within these limitations, no significant difference was found for teaching with and without these visualisations. Further study using better designed visualisations and with explicit attention to the pedagogy surrounding the visualisations will be required to gather high quality evidence of the effectiveness of visualisations for conceptual development.
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely done by rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. The creative work titled “Contours in Motion” was the first in a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The source, or ‘seed’, came from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. The aim of the creative work was to diverge from the standard practice of using particle systems and/or a simple re-targeting of the motion data to drive a 3D character as a means to produce abstracted visual forms. To facilitate this divergence, a virtual dynamic object was tethered to a selection of data points from a captured performance. The properties of the dynamic object were then adjusted to balance the influence of the human movement data against the influence of computer-based randomisation. The resulting outcome was a visual form that surpassed simple data visualisation to project the intent of the performer’s movements into a visual shape itself. The reported outcomes from this investigation have contributed to a larger study on the use of motion capture in the generative arts, furthering the understanding of, and generating theories on, practice.
Abstract:
Too often the relationship between client and external consultants is perceived as one of protagonist versus antagonist. Stories of dramatic, failed consultancies abound, as do related anecdotal quips. A contributing factor in many "apparently" failed consultancies is a poor appreciation, by both the client and the consultant, of the client's true goals for the project and of how to assess progress towards those goals. This paper presents and analyses a measurement model for assessing client success when engaging an external consultant. Three main areas of assessment are identified: (1) the consultant's recommendations, (2) client learning, and (3) consultant performance. Engagement success is empirically measured along these dimensions through a series of case studies and a subsequent survey of clients and consultants involved in 85 computer-based information system selection projects. Validation of the model constructs suggests the existence of six distinct and individually important dimensions of engagement success. Both clients and consultants are encouraged to attend to these dimensions in pre-engagement proposal and selection processes, and in post-engagement evaluation of outcomes.