931 results for time-motion
Abstract:
For humans and robots to communicate using natural language, it is necessary for robots to develop concepts and associated terms that correspond to the human use of words. Time and space are foundational concepts in human language, and to develop a set of words that corresponds to human notions of time and space, it is necessary to take into account the way they are used in natural human conversations, where terms and phrases such as 'soon', 'in a while', or 'near' are often used. We present language-learning robots called Lingodroids that can learn and use simple terms for time and space. In previous work, the Lingodroids were able to learn terms for space. In this work we extend their abilities by adding temporal variables that allow them to learn terms for time. The robots build their own maps of the world and interact socially to form a shared lexicon for location and duration terms. The robots successfully use the shared lexicons to communicate places and times to meet again.
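As a toy illustration of the kind of language game described above (the class names, word-coining scheme, and game structure here are hypothetical sketches, not the Lingodroids implementation):

```python
import random

class Agent:
    # Toy language-game agent: maps experienced concepts
    # (e.g. a map location, a duration bin) to invented words.
    def __init__(self):
        self.lexicon = {}  # concept -> word

    def word_for(self, concept):
        # Speaker: coin a word for an unnamed concept, else reuse it.
        if concept not in self.lexicon:
            self.lexicon[concept] = "".join(random.choices("aeioukstrn", k=4))
        return self.lexicon[concept]

    def hear(self, word, concept):
        # Hearer: adopt the speaker's word for its own experience,
        # so repeated games converge on a shared lexicon.
        self.lexicon[concept] = word

def play_game(speaker, hearer, concept):
    # One "where/when" game about a jointly experienced place or duration.
    hearer.hear(speaker.word_for(concept), concept)
```

Repeated pairwise games of this sort are what allow two agents with independently built maps to end up agreeing on terms well enough to arrange a meeting place and time.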
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results, however, often require orders of magnitude more calculation time to attain high precision, reducing the method's utility within the clinical environment. This work aims to improve that utility by developing techniques that enable faster Monte Carlo simulation of radiotherapy geometries, principally through the use of new high-performance computing environments and through simpler, alternative yet equivalent representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n with n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer-aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing up to 3 orders of magnitude performance improvement through the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts and patient geometry. Static patient CT datasets, like those found in typical radiotherapy treatment plans, are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to that used in computer-aided design, the above optimisation techniques can be applied to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, the mesh-based representation allows direct manipulation of the geometry, enabling, for example, motion augmentation for time-dependent dose calculation. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like those made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow for manipulations to be performed on otherwise static and rigid geometry.
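To make concrete why tetrahedral meshes can be navigated far faster than large triangular surface meshes, here is a minimal sketch of "walking" point location on a tetrahedral mesh (the data layout is a hypothetical illustration, not the thesis's algorithm or GEANT4's API): each step costs a single barycentric test, and the walk exits through the face whose coordinate is most negative.

```python
import numpy as np

def barycentric(p, a, b, c, d):
    # Barycentric coordinates of point p in tetrahedron (a, b, c, d).
    T = np.column_stack((b - a, c - a, d - a))
    lam = np.linalg.solve(T, p - a)
    return np.concatenate(([1.0 - lam.sum()], lam))

def locate(p, tets, verts, neighbours, start=0):
    # Walk tet-to-tet until all four coordinates are non-negative
    # (point is inside). neighbours[t][k] is the tetrahedron sharing
    # the face opposite vertex k of tetrahedron t.
    t = start
    while True:
        a, b, c, d = verts[tets[t]]
        w = barycentric(p, a, b, c, d)
        if (w >= -1e-12).all():
            return t
        t = neighbours[t][int(np.argmin(w))]
```

Because each particle step starts the walk from the previously occupied tetrahedron, navigation cost is nearly independent of the total mesh size, in contrast to testing a track against many surface triangles.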
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving the ST-FBTE in three dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider the ST-FBTE on a finite domain, where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for the ST-FBTE are given to confirm our theoretical findings.
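For orientation, a schematic form of the equation class being solved (an assumption-laden sketch; coefficients and exact operator definitions vary between formulations and are not quoted from the paper): with a Caputo–Djrbashian derivative of order $\alpha$ in time and sequential Riesz derivatives of order $\beta$ in each spatial direction,

\[
{}^{C}_{0}D^{\alpha}_{t}\,M(\mathbf{r},t)
  = \lambda\,M(\mathbf{r},t)
  + K_{\beta}\left(\frac{\partial^{\beta}}{\partial |x|^{\beta}}
  + \frac{\partial^{\beta}}{\partial |y|^{\beta}}
  + \frac{\partial^{\beta}}{\partial |z|^{\beta}}\right) M(\mathbf{r},t),
\qquad 0 < \alpha \le 1,\; 1 < \beta \le 2,
\]

where $M$ is the transverse magnetization. The alternating direction idea is to split the three spatial fractional operators into successive one-dimensional sweeps per time step, which is what makes the three-dimensional problem tractable.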
Abstract:
In this paper we present a new simulation methodology to obtain exact or approximate Bayesian inference for models of low-valued count time series data that have computationally demanding likelihood functions. The algorithm fits within the framework of particle Markov chain Monte Carlo (PMCMC) methods. The particle filter requires only model simulations and, in this regard, our approach has connections with approximate Bayesian computation (ABC). However, an advantage of the PMCMC approach in this setting is that simulated data can be matched with observed data one-at-a-time, rather than attempting to match on the full dataset simultaneously or on a low-dimensional non-sufficient summary statistic, as is common practice in ABC. For low-valued count time series data we find that it is often computationally feasible to match simulated data with observed data exactly. Our particle filter maintains $N$ particles by repeating the simulation until $N+1$ exact matches are obtained. Our algorithm yields an unbiased estimate of the likelihood, resulting in exact posterior inference when included in an MCMC algorithm. In cases where exact matching is computationally prohibitive, a tolerance is introduced as per ABC. A novel aspect of our approach is that we introduce auxiliary variables into the particle filter so that partially observed and/or non-Markovian models can be accommodated. We demonstrate that Bayesian model choice problems can be easily handled in this framework.
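A minimal sketch of the matching scheme, using a hypothetical INAR(1) count simulator in place of a real model (the simulator and all names are illustrative assumptions; the estimator $N/(m-1)$, with $m$ the total number of draws needed to reach $N+1$ matches, is the standard unbiased inverse-binomial estimate of the per-step likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_next(y_prev, theta):
    # Stand-in simulator: binomial thinning plus Poisson immigration,
    # a common low-valued count time series model.
    return rng.binomial(y_prev, theta) + rng.poisson(1.0)

def filter_step(particles, y_obs, theta, N):
    # Repeat simulation until N + 1 exact matches with y_obs are found.
    # With m total draws, N / (m - 1) is unbiased for p(y_t | y_{1:t-1});
    # the first N matching draws become the new particle set.
    kept, draws = [], 0
    while len(kept) < N + 1:
        y_sim = simulate_next(rng.choice(particles), theta)
        draws += 1
        if y_sim == y_obs:           # replace with |y_sim - y_obs| <= tol
            kept.append(y_sim)       # when exact matching is prohibitive
    return np.array(kept[:N]), N / (draws - 1)

def log_likelihood(y, theta, N=100):
    # Product of unbiased per-step estimates is an unbiased likelihood
    # estimate, usable inside a pseudo-marginal MCMC chain.
    particles = np.full(N, y[0])
    total = 0.0
    for y_t in y[1:]:
        particles, p_hat = filter_step(particles, y_t, theta, N)
        total += np.log(p_hat)
    return total
```

Plugging `log_likelihood` into a Metropolis-Hastings acceptance ratio gives the exact-posterior behaviour the abstract refers to, since the likelihood estimate is unbiased.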
Abstract:
Introduction: Participants may respond to phases of a workplace walking program at different rates. This study evaluated the factors that contribute to the number of steps taken through phases of the program. The intervention was automated through a web-based program designed to increase workday walking. Methods: The study reviewed independent variable influences throughout phases I–III. A convenience sample of university workers (n=56; 43.6±1.7 years; BMI 27.44±2.15 kg/m²; 48 female) was recruited at worksites in Australia. These workers were given a pedometer (Yamax SW 200) and access to the website program. For analyses, step counts entered by workers into the website were downloaded and mean workday steps were compared using a seemingly unrelated regression (SUR), a model employed to capture the contemporaneous correlation within individuals across the observed time periods. Results: The model predicts that the 36 subjects with complete information took an average of 7460 steps in the baseline two-week period. After phase I, statistically significant increases in steps (from baseline) were explained by age, working status (full or part time), occupation (academic or professional), and self-reported public transport (PT) use (marginally significant). Full-time workers walked more than part-time workers by about 440 steps, professionals walked about 300 steps more than academics, and PT users walked about 400 steps more than non-PT users. The ability to differentiate steps among participants after two weeks suggests a differential effect of the program after only two weeks. On average, participants increased steps from week two to week four by about 525 steps, but regular auto users took nearly 750 fewer steps than non-auto users at week four. The effect of age was diminished in the fourth week of observation, accounting for 34 steps per year of age. In phase III, discriminating between participants became more difficult, with only age effects differentiating their increase over baseline. The marginal effect of age by phase III, compared with phase I, increased from 36 to 50, suggesting a 14-step-per-year increase from the second to the sixth week. Discussion: The findings suggest that participants responded to the program at different rates, with uniformity of effect achieved by the sixth week. Participants increased steps; however, a tapering off occurred over time. Age played the most consistent role in predicting steps over the program. PT use was associated with increased step counts, while auto use was associated with decreased step counts.
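For readers unfamiliar with the estimator, here is a minimal numpy sketch of feasible GLS for a SUR system, the model class used above (design matrices and variable choices are hypothetical; one equation per observation period, with errors correlated across equations for the same individual):

```python
import numpy as np

def sur_fgls(Xs, ys):
    # Xs, ys: one (X, y) pair per time period, rows = individuals.
    n = ys[0].shape[0]
    # Stage 1: equation-by-equation OLS residuals.
    resid = np.column_stack([
        y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        for X, y in zip(Xs, ys)
    ])
    sigma = resid.T @ resid / n          # cross-equation error covariance
    # Stage 2: GLS on the stacked block-diagonal system,
    # with Omega = Sigma kron I_n capturing contemporaneous correlation.
    X_blk = np.zeros((len(Xs) * n, sum(X.shape[1] for X in Xs)))
    col = 0
    for i, X in enumerate(Xs):
        X_blk[i * n:(i + 1) * n, col:col + X.shape[1]] = X
        col += X.shape[1]
    y_stk = np.concatenate(ys)
    omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
    return np.linalg.solve(X_blk.T @ omega_inv @ X_blk,
                           X_blk.T @ omega_inv @ y_stk)
```

Exploiting the cross-period error correlation is what lets the phase-by-phase coefficients (age, working status, PT use) be estimated more efficiently than separate per-phase regressions would allow.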
Abstract:
In this paper we demonstrate passive vision-based localization in environments more than two orders of magnitude darker than the current benchmark using a 100 webcam and a 500 camera. Our approach uses the camera’s maximum exposure duration and sensor gain to achieve appropriately exposed images even in unlit night-time environments, albeit with extreme levels of motion blur. Using the SeqSLAM algorithm, we first evaluate the effect of variable motion blur caused by simulated exposures of 132 ms to 10000 ms duration on localization performance. We then use actual long exposure camera datasets to demonstrate day-night localization in two different environments. Finally we perform a statistical analysis that compares the baseline performance of matching unprocessed greyscale images to using patch normalization and local neighbourhood normalization – the two key SeqSLAM components. Our results and analysis show for the first time why the SeqSLAM algorithm is effective, and demonstrate the potential for cheap camera-based localization systems that function across extreme perceptual change.
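A minimal sketch of patch normalization, the first of the two SeqSLAM components evaluated above (a generic rendering of the technique, not the authors' exact code): each non-overlapping patch is standardised to zero mean and unit variance, discarding absolute brightness so that day and night images of the same place become comparable.

```python
import numpy as np

def patch_normalise(image, patch=8):
    # image: 2D greyscale array. Standardise each patch independently.
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = img[i:i + patch, j:j + patch]
            std = block.std()
            out[i:i + patch, j:j + patch] = (
                (block - block.mean()) / std if std > 0 else 0.0
            )
    return out
```

Local neighbourhood normalisation plays the analogous role on the image-difference matrix, standardising match scores within a sliding window of candidate locations.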
Abstract:
Deploying wireless networks in networked control systems (NCSs) has become increasingly popular in recent years. As a typical type of real-time control system, an NCS is sensitive to long and nondeterministic time delays and to packet losses, and the nature of the wireless channel can degrade NCS network performance in precisely these respects. Transport layer protocols could play an important role in providing a reliable and fast transmission service to fulfil an NCS's real-time transmission requirements. Unfortunately, none of the existing transport protocols, including the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), was designed for real-time control applications. Moreover, periodic data and sporadic data are two types of real-time data traffic with different priorities in an NCS. Due to the lack of support for prioritized transmission service, the real-time performance for periodic and sporadic data in an NCS network is often degraded significantly, particularly under congested network conditions. To address these problems, a new transport layer protocol called the Reliable Real-Time Transport Protocol (RRTTP) is proposed in this thesis. As a UDP-based protocol, RRTTP inherits UDP's simplicity and fast transmission. To improve reliability, retransmission and acknowledgement mechanisms are designed into RRTTP to compensate for packet losses; they avoid unnecessary retransmission of out-of-date packets, make collisions unlikely, and keep transmission delays small. A prioritized transmission mechanism is also designed into RRTTP to improve the real-time performance of NCS networks under congested traffic conditions. Furthermore, the proposed RRTTP is implemented in Network Simulator 2 for comprehensive simulations. The simulation results demonstrate that RRTTP outperforms TCP and UDP for real-time transmission in an NCS over wireless networks.
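A sketch of the kind of deadline-aware retransmission the thesis describes, as it might look over a plain UDP socket (the header layout, field choices, and ACK format are hypothetical illustrations, not the actual RRTTP wire format):

```python
import socket
import struct
import time

# Hypothetical header: sequence number, priority (0 = sporadic/high,
# 1 = periodic/low), and a validity deadline so that stale control
# packets are never retransmitted.
HEADER = struct.Struct("!IBd")

def send_with_deadline(sock, addr, seq, priority, deadline, payload,
                       retries=3, timeout=0.05):
    packet = HEADER.pack(seq, priority, deadline) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        if time.time() > deadline:
            return False             # out of date: drop, do not retransmit
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(16)
            if struct.unpack("!I", ack[:4])[0] == seq:
                return True          # acknowledged
        except socket.timeout:
            continue                 # lost: retry only while still fresh
    return False
```

In this sketch the priority byte is carried but not acted on; a fuller model would dequeue sporadic high-priority packets ahead of queued periodic traffic, which is the prioritized transmission mechanism the thesis adds on top of retransmission.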
Abstract:
Objective: To estimate the time spent by researchers preparing grant proposals, and to examine whether spending more time increases the chances of success. Design: Observational study. Setting: The National Health and Medical Research Council (NHMRC) of Australia. Participants: Researchers who submitted one or more NHMRC Project Grant proposals in March 2012. Main outcome measures: Total researcher time spent preparing proposals; funding success as predicted by the time spent. Results: The NHMRC received 3727 proposals, of which 3570 were reviewed and 731 (21%) were funded. Among our 285 participants, who submitted 632 proposals, 21% of the proposals were successful. Preparing a new proposal took an average of 38 working days of researcher time and a resubmitted proposal took 28 working days, an overall average of 34 days per proposal. An estimated 550 working years of researchers' time (95% CI 513 to 589) was spent preparing the 3727 proposals, which translates into annual salary costs of AU$66 million. More time spent preparing a proposal did not increase the chances of success for the lead researcher (prevalence ratio (PR) of success for a 10-day increase=0.91, 95% credible interval 0.78 to 1.04) or other researchers (PR=0.89, 95% CI 0.67 to 1.17). Conclusions: Considerable time is spent preparing NHMRC Project Grant proposals. As success rates are historically 20–25%, much of this time has no immediate benefit to either the researcher or society, and there are large opportunity costs in lost research output. The application process could be shortened so that only information relevant for peer review, not administration, is collected. This would have little impact on the quality of peer review, and the time saved could be reinvested into research.
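The headline "550 working years" figure can be sanity-checked with simple arithmetic (the working-days-per-year divisor below is our assumption for illustration, not a figure from the paper):

```python
proposals = 3727
avg_days_per_proposal = 34         # mix of 38 (new) and 28 (resubmitted)
working_days_per_year = 230        # assumed for this back-of-envelope check

total_days = proposals * avg_days_per_proposal   # 126,718 working days
print(total_days / working_days_per_year)        # ~551 years, matching ~550
```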
Abstract:
Objective: To evaluate the time course of the recovery of transverse strain in the Achilles and patellar tendons following a bout of resistance exercise. Methods: Seventeen healthy adults underwent sonographic examination of the right patellar (n=9) and Achilles (n=8) tendons immediately prior to and following 90 repetitions of weight-bearing quadriceps- and gastrocnemius-resistance exercise performed against an effective resistance of 175% and 250% body weight, respectively. Sagittal tendon thickness was determined 20 mm from the enthesis, and transverse strain, as defined by the stretch ratio, was repeatedly monitored over a 24 h recovery period. Results: Resistance exercise resulted in an immediate decrease in Achilles (t(7)=10.6, p<0.01) and patellar (t(8)=8.9, p<0.01) tendon thickness, resulting in average transverse stretch ratios of 0.86±0.04 and 0.82±0.05, which were not significantly different between tendons. The magnitude of the immediate transverse strain response, however, was reduced with advancing age (r=0.63, p<0.01). Recovery in transverse strain was prolonged compared with the duration of loading and exponential in nature. The average primary recovery time was not significantly different between the Achilles (6.5±3.2 h) and patellar (7.1±3.2 h) tendons. Body weight accounted for 62% and 64% of the variation in recovery time, respectively. Conclusions: Despite structural and biochemical differences between the Achilles and patellar tendons, the mechanisms underlying transverse creep recovery in vivo appear similar and are highly time dependent. These novel findings have important implications concerning the time required for the mechanical recovery of high-stress tendons following an acute bout of exercise.
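The recovery described as "exponential in nature" is consistent with a mono-exponential model of the form below (a schematic assumption; the paper's exact fitting function is not reproduced here):

\[
\lambda(t) = 1 - \left(1 - \lambda_0\right) e^{-t/\tau},
\]

where $\lambda_0$ is the immediate post-exercise stretch ratio (about 0.86 for the Achilles and 0.82 for the patellar tendon) and $\tau$ the primary recovery time (about 6.5 h and 7.1 h, respectively), so that tendon thickness returns asymptotically to its pre-exercise value.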
Abstract:
Mobile and tower cranes are among the most essential items of construction plant, but they are also the subject of several safety issues. Of these, blind lifting has been found to be one of the most hazardous crane operations. To improve the situation, a real-time monitoring system that integrates a Global Positioning System (GPS) and Radio Frequency Identification (RFID) is developed. The system identifies unauthorized work or entry of personnel within a pre-defined risk zone, currently defined as 3 m from the crane, by obtaining positioning data for both site workers and the crane. When an unauthorized worker is detected within the risk zone, the system suspends the power of the crane and sends a warning signal to the safety management team. In this way the system helps the safety management team manage the safety of hundreds of workers simultaneously. An on-site trial with debriefing interviews is presented to illustrate and validate the system in use.
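The core geofencing check is simple; a minimal sketch under stated assumptions (positions already projected to metres, worker positions from RFID and the crane position from GPS; all names hypothetical):

```python
import math

RISK_RADIUS_M = 3.0  # pre-defined risk zone around the crane

def within_risk_zone(worker_xy, crane_xy, radius=RISK_RADIUS_M):
    # True if a worker is inside the circular risk zone.
    return math.dist(worker_xy, crane_xy) <= radius

def check_site(workers, authorised_ids, crane_xy):
    # workers: {worker_id: (x, y)}. Returns the IDs that should trigger
    # crane power suspension and a warning to the safety management team.
    return [wid for wid, pos in workers.items()
            if within_risk_zone(pos, crane_xy) and wid not in authorised_ids]
```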
Abstract:
In Australia, as in some other Western nations, governments impose accountability measures on educational institutions (Earl, 2005). One such accountability measure is the National Assessment Program - Literacy and Numeracy (NAPLAN), from which high-stakes assessment data are generated. In this article, a practical method of data analysis known as the Over Time Assessment Data Analysis (OTADA) is offered as an analytical process by which schools can monitor their current and over-time performance. This analysis, developed by the author, is currently used extensively in schools throughout Queensland. By analysing in this way, teachers, and in particular principals, can obtain a quick and insightful performance overview. For those seeking to track the achievements and progress of year-level cohorts, the OTADA should be considered.
Abstract:
This project researched the performance of emerging digital technology for high-voltage electricity substations that significantly improves safety for staff and reduces the potential environmental impact of equipment failure. The experimental evaluation used a scale model of a substation control system that incorporated real substation control and networking equipment with real-time simulation of the power system. The outcomes confirm that it is possible to implement Ethernet networks in high-voltage substations that meet the needs of utilities; however, component-level testing of devices is necessary to achieve this. The assessment results have been used to further develop international standards for substation communication and precision timing.
Abstract:
The work presented in this thesis investigates the mathematical modelling of charge transport in electrolyte solutions, within the nanoporous structures of electrochemical devices. We compare two approaches found in the literature by developing one-dimensional transport models based on the Nernst–Planck and Maxwell–Stefan equations. The development of the Nernst–Planck equations relies on the assumption that the solution is infinitely dilute. However, this is typically not the case for the electrolyte solutions found within electrochemical devices. Furthermore, ionic concentrations much higher than the bulk concentrations can arise near the electrode/electrolyte interfaces due to the development of an electric double layer. Hence, multicomponent interactions, which are neglected by the Nernst–Planck equations, may become important. The Maxwell–Stefan equations account for these multicomponent interactions, and thus they should provide a more accurate representation of transport in electrolyte solutions. To allow for the effects of the electric double layer in both the Nernst–Planck and Maxwell–Stefan equations, we do not assume local electroneutrality in the solution. Instead, we model the electrostatic potential as a continuously varying function by way of Poisson's equation. Importantly, we show that for a ternary electrolyte solution at high interfacial concentrations, the Maxwell–Stefan equations predict behaviour that is not recovered from the Nernst–Planck equations. The main difficulty in applying the Maxwell–Stefan equations to charge transport in electrolyte solutions is knowledge of the transport parameters. In this work, we apply molecular dynamics simulations to obtain the required diffusivities, and thus we are able to incorporate microscopic behaviour into a continuum-scale model. This is important given the small length scales we are concerned with, as we are still able to retain the computational efficiency of continuum modelling. This approach provides an avenue by which the microscopic behaviour may ultimately be incorporated into a full device-scale model. The one-dimensional Maxwell–Stefan model is extended to two dimensions, representing an important first step towards a fully coupled interfacial charge transport model for electrochemical devices. It allows us to begin investigating ambipolar diffusion effects, where the motion of the ions in the electrolyte is affected by the transport of electrons in the electrode. As we do not consider modelling in the solid phase in this work, this is simulated by applying a time-varying potential to one interface of our two-dimensional computational domain, thus allowing a flow field to develop in the electrolyte. Our model facilitates the observation of the transport of ions near the electrode/electrolyte interface. For the simulations considered in this work, we show that while there is some motion in the direction parallel to the interface, the interfacial coupling is not sufficient for the ions in solution to be "dragged" along the interface for long distances.
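Schematically, the two flux descriptions being compared, together with the Poisson closure used in place of local electroneutrality, take the following standard forms (notation assumed for illustration, not quoted from the thesis):

\[
\mathbf{N}_i = -D_i\left(\nabla c_i + \frac{z_i F}{RT}\, c_i \nabla\phi\right) \quad \text{(Nernst–Planck, infinitely dilute)},
\]
\[
-\frac{x_i}{RT}\,\nabla\tilde{\mu}_i = \sum_{j\neq i}\frac{x_j\,\mathbf{N}_i - x_i\,\mathbf{N}_j}{c_T\,\mathcal{D}_{ij}} \quad \text{(Maxwell–Stefan)},
\]
\[
\nabla^2\phi = -\frac{F}{\varepsilon}\sum_i z_i c_i \quad \text{(Poisson)},
\]

where $\tilde{\mu}_i = \mu_i + z_i F\phi$ is the electrochemical potential, $x_i$ the mole fraction, $c_T$ the total concentration, and $\mathcal{D}_{ij}$ the Maxwell–Stefan diffusivities, which are the parameters obtained here from molecular dynamics. The coupling terms in $\mathbf{N}_j$ are exactly the multicomponent interactions that the Nernst–Planck description drops.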
Abstract:
The statutory demand procedure has been a part of our corporate law from its earliest modern formulations, and it has been suggested, albeit anecdotally, that under the current regime it gives rise to more litigation than any other part of the Corporations Act. Despite this, there has been a lack of consideration of the underlying policy behind the procedure in both the case law and the literature, both of which are largely centred on the technical aspects of the process. The purpose of this article is to examine briefly the process of the statutory demand in the context of current insolvency law in Australia.
Abstract:
Applying different EMS current thresholds to a muscle activates not only the muscle but also peripheral sensory axons that send proprioceptive and pain signals to the cerebral cortex. A 32-channel time-domain fNIRS instrument was employed to map regional cortical activity under varied EMS current intensities applied to the right wrist extensor muscle. Eight healthy volunteers underwent four EMS sessions at different current thresholds based on their individual maximal tolerated intensity (MTI), i.e., 10% < 50% < 100% < over 100% MTI. Time courses of the absolute oxygenated and deoxygenated haemoglobin concentrations, primarily over the bilateral sensorimotor cortical (SMC) regions, were extracted, and cortical activation maps were determined by a general linear model using the NIRS-SPM software. The stimulation-induced wrist extension paradigm significantly increased activation of the contralateral SMC region in line with the EMS intensities, while the ipsilateral SMC region showed no significant changes. This could be due in part to a nociceptive response to the higher EMS current intensities and may also result from increased sensorimotor integration in these cortical regions.
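The NIRS-SPM analysis referred to here fits a general linear model per channel; in schematic form (a standard formulation, assumed rather than quoted from the paper):

\[
y(t) = \beta\,(s * h)(t) + \varepsilon(t),
\]

where $y(t)$ is the oxy- or deoxyhaemoglobin time course for a channel, $s(t)$ the EMS stimulation boxcar, $h(t)$ a canonical haemodynamic response function, $*$ denotes convolution, and the estimated $\beta$ values across channels form the activation maps that are then tested for significance at each current intensity.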