931 results for TIME-MOTION
Abstract:
Into the Bends of Time is a 40-minute work in seven movements for a large chamber orchestra with electronics, utilizing real-time computer-assisted processing of music performed by live musicians. The piece explores various combinations of interactive relationships between players and electronics, ranging from relatively basic processing effects to musical gestures achieved through stages of computer analysis, in which resulting sounds are crafted according to parameters of the incoming musical material. Additionally, some elements of interaction are multi-dimensional, in that they rely on the participation of two or more performers fulfilling distinct roles in the interactive process with the computer in order to generate musical material. Through processes of controlled randomness, several electronic effects introduce elements of chance into their realization so that no two performances of this work are exactly alike. The piece gets its name from the notion that real-time computer-assisted processing, in which sound pressure waves are transduced into electrical energy, converted to digital data, artfully modified, converted back into electrical energy and transduced into sound waves, represents a “bending” of time.
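To illustrate the kind of signal chain and controlled randomness described above, here is a minimal, hypothetical sketch (not the composer's actual patch): live input is processed block by block, and a chance procedure draws one effect parameter per block so that no two renderings are identical. The block size, effect, and parameter ranges are assumptions for illustration only.

```python
import numpy as np

SR = 48_000          # sample rate (Hz)
BLOCK = 1024         # samples per processing block
rng = np.random.default_rng()

def process_block(dry: np.ndarray, state: dict) -> np.ndarray:
    """Apply a simple randomized ring modulation to one block of audio."""
    # Controlled randomness: the modulator frequency is drawn from a bounded range.
    freq = rng.uniform(80.0, 400.0)            # Hz, chance element
    t = (state["n"] + np.arange(len(dry))) / SR
    wet = dry * np.sin(2 * np.pi * freq * t)   # ring-modulated signal
    state["n"] += len(dry)
    return 0.5 * dry + 0.5 * wet               # mix processed and unprocessed sound

# Offline stand-in for a live input: one second of noise processed block by block.
signal = rng.standard_normal(SR)
state = {"n": 0}
out = np.concatenate([process_block(signal[i:i + BLOCK], state)
                      for i in range(0, len(signal), BLOCK)])
```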
Abstract:
The Bill Evans Trio featuring bassist Scott LaFaro and drummer Paul Motian is widely regarded as one of the most important and influential piano trios in the history of jazz, lauded for its unparalleled level of group interaction. Most analyses of Bill Evans’ recordings, however, focus on his playing alone and fail to take group interaction into account. This paper examines one performance in particular, of Victor Young’s “My Foolish Heart” as recorded in a live performance by the Bill Evans Trio in 1961. In Part One, I discuss Steve Larson’s theory of musical forces (expanded by Robert S. Hatten) and its applicability to jazz performance. I examine other recordings of ballads by this same trio in order to draw observations about normative ballad performance practice. I discuss meter and phrase structure and show how the relationship between the two is fixed in a formal structure of repeated choruses. I then develop a model of perpetual motion based on the musical forces inherent in this structure. In Part Two, I offer a full transcription and close analysis of “My Foolish Heart,” showing how elements of group interaction work with and against the musical forces inherent in the model of perpetual motion to achieve an unconventional, dynamic use of double-time. I explore the concept of a unified agential persona and discuss its role in imparting the song’s inherent rhetorical tension to the instrumental musical discourse.
Abstract:
Head motion during a Positron Emission Tomography (PET) brain scan can considerably degrade image quality. External motion-tracking devices have proven successful in minimizing this effect, but the associated time, maintenance, and workflow changes inhibit their widespread clinical use. List-mode PET acquisition allows for the retroactive analysis of coincidence events on any time scale throughout a scan, and therefore potentially offers a data-driven motion detection and characterization technique. An algorithm was developed to parse list-mode data, divide the full acquisition into short scan intervals, and calculate the line-of-response (LOR) midpoint average for each interval. These LOR midpoint averages, known as “radioactivity centroids,” were presumed to represent the center of the radioactivity distribution in the scanner, and it was thought that changes in this metric over time would correspond to intra-scan motion.
Several scans were taken of the 3D Hoffman brain phantom on a GE Discovery IQ PET/CT scanner to test the ability of the radioactivity centroid to indicate intra-scan motion. Each scan incrementally surveyed motion in a different degree of freedom (2 translational and 2 rotational). The radioactivity centroids calculated from these scans correlated linearly with phantom positions/orientations. Centroid measurements over 1-second intervals performed on scans with ~1 mCi of activity in the center of the field of view had standard deviations of 0.026 cm in the x- and y-dimensions and 0.020 cm in the z-dimension, which demonstrates high precision and repeatability in this metric. Radioactivity centroids are thus shown to successfully represent discrete motions on the submillimeter scale. It is also shown that while the radioactivity centroid can precisely indicate the amount of motion during an acquisition, it fails to distinguish what type of motion occurred.
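As a hedged sketch of the centroid idea described above: given decoded list-mode coincidence events (timestamp plus the two LOR endpoints), split the scan into fixed intervals and average the LOR midpoints in each one. The event-record layout here is hypothetical; real GE list-mode files require vendor-specific decoding not shown.

```python
import numpy as np

def radioactivity_centroids(t, p1, p2, interval_s=1.0):
    """t: (N,) event times in s; p1, p2: (N, 3) LOR endpoints in cm.
    Returns (n_intervals, 3) array of LOR-midpoint averages ("radioactivity centroids")."""
    midpoints = 0.5 * (p1 + p2)                       # midpoint of each line of response
    bins = np.floor((t - t.min()) / interval_s).astype(int)
    n = bins.max() + 1
    centroids = np.full((n, 3), np.nan)
    for b in range(n):
        sel = bins == b
        if sel.any():
            centroids[b] = midpoints[sel].mean(axis=0)
    return centroids

# A drift of the centroid trace over successive intervals is then read as intra-scan motion.
```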
Abstract:
The purpose of this review was to examine the utility and accuracy of commercially available motion sensors to measure step-count and time spent upright in frail older hospitalized patients. A database search (CINAHL and PubMed, 2004–2014) and a further hand search of papers’ references yielded 24 validation studies meeting the inclusion criteria. Fifteen motion sensors (eight pedometers, six accelerometers, and one sensor system) have been tested in older adults. Only three have been tested in hospital patients, two of which detected postures and postural changes accurately, but none estimated step-count accurately. Only one motion sensor remained accurate at speeds typical of frail older hospitalized patients, but it has yet to be tested in this cohort. Time spent upright can be accurately measured in the hospital, but further validation studies are required to determine which, if any, motion sensor can accurately measure step-count.
Abstract:
Clinical optical motion capture allows us to obtain kinematic and kinetic outcome measures that aid clinicians in diagnosing and treating different pathologies affecting healthy gait. The long-term aim for gait centres is for subject-specific analyses that can predict, prevent, or reverse the effects of pathologies through gait retraining. To track the body, anatomical segment coordinate systems are commonly created by applying markers to the surface of the skin over specific, bony anatomy that is manually palpated. The location and placement of these markers is subjective, and precision errors of up to 25 mm have been reported [1]. Additionally, the selection of which anatomical landmarks to use in segment models can result in large angular differences; for example, angular differences in the trunk can range up to 53° for the same motion depending on marker placement [2]. These errors can result in erroneous kinematic outcomes that either diminish or increase the apparent effects of a treatment or pathology compared to healthy data. Our goal was to improve the accuracy and precision of optical motion capture outcome measures. This thesis describes two separate studies. In the first study we aimed to establish an approach that would allow us to independently quantify the error among trunk models. Using this approach we determined whether there was a best model to accurately track trunk motion. In the second study we designed a device to improve precision for test-retest protocols that would also reduce the set-up time for motion capture experiments. Our method of comparing a kinematically derived centre of mass velocity to one derived kinetically was successful in quantifying error among trunk models. Our findings indicate that models that use lateral shoulder markers and limit the translational degrees of freedom of the trunk through shared pelvic markers result in the least error for the tasks we studied. We also successfully reduced intra- and inter-operator anatomical marker placement errors using a marker alignment device. The improved accuracy and precision resulting from the methods established in this thesis may lead to increased sensitivity to changes in kinematics, and ultimately result in more consistent treatment outcomes.
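A minimal sketch of the comparison used in the first study, under simplifying assumptions: the kinematic centre-of-mass velocity is differentiated from a marker-based COM position, and the kinetic one is obtained by integrating the net ground reaction force minus body weight. Array names, the constant-mass assumption, and the RMS error metric are illustrative, not the thesis' exact pipeline.

```python
import numpy as np

def com_velocity_kinematic(com_pos, dt):
    """com_pos: (T, 3) whole-body COM position from the segment model (m)."""
    return np.gradient(com_pos, dt, axis=0)

def com_velocity_kinetic(grf, mass, dt, v0=np.zeros(3), g=9.81):
    """grf: (T, 3) net ground reaction force (N); returns (T, 3) velocity (m/s)."""
    accel = grf / mass - np.array([0.0, 0.0, g])     # z assumed vertical
    return v0 + np.cumsum(accel, axis=0) * dt

def model_error(com_pos, grf, mass, dt):
    """RMS difference between the two velocity estimates, one number per trial."""
    v_kin = com_velocity_kinematic(com_pos, dt)
    v_dyn = com_velocity_kinetic(grf, mass, dt)
    return np.sqrt(np.mean(np.sum((v_kin - v_dyn) ** 2, axis=1)))
```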
Abstract:
INTRODUCTION Zero-G parabolic flight reproduces the weightlessness of space for short periods of time; however, motion sickness may affect some fliers. The aim was to assess the extent of this problem and to find possible predictors and modifying factors. METHODS Airbus Zero-G flights consist of 31 parabolas performed in blocks. Each parabola consisted of 20 s of 0 g sandwiched between 20 s periods of hypergravity at 1.5-1.8 g. The survey covered n = 246 person-flights (193 males, 53 females), aged (M ± SD) 36.0 ± 11.3 years. An anonymous questionnaire included a motion sickness rating (1 = OK to 6 = vomiting), the Motion Sickness Susceptibility Questionnaire (MSSQ), anti-motion sickness medication, prior Zero-G experience, anxiety level, and other characteristics. RESULTS Participants had lower MSSQ percentile scores (27.4 ± 28.0) than the population norm of 50. Motion sickness was experienced by 33%, and 12% vomited. Less motion sickness was predicted by older age, greater prior Zero-G flight experience, medication with scopolamine, and lower MSSQ scores, but not by gender or anxiety. Sickness ratings in fliers pre-treated with scopolamine (1.81 ± 1.58) were lower than in non-medicated fliers (2.93 ± 2.16), and the incidence of vomiting in fliers using scopolamine was reduced by half to a third. Possible confounding factors, including age, sex, flight experience, and MSSQ score, could not account for this. CONCLUSION Motion sickness affected one third of Zero-G fliers, despite their being intrinsically less susceptible to motion sickness than the general population; susceptible individuals probably try to avoid such a provocative environment. Risk factors for motion sickness included younger age and higher MSSQ scores. Protective factors included prior Zero-G flight experience (habituation) and anti-motion sickness medication.
Abstract:
FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once, at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system, and use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared to learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static time periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W of extra power in our testbed desktop system.
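An illustrative sketch (not the authors' scheduler) of the run-time mapping idea: each available implementation of a detector is characterised offline by power, latency and accuracy, and the current behavioural-anomaly score selects which one runs next. The thresholds, figures, and the Implementation record are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Implementation:
    name: str        # e.g. "HOG-FPGA", "HOG-GPU", "MoG-CPU"
    power_w: float   # measured power draw
    time_ms: float   # per-frame processing time
    accuracy: float  # detection accuracy on a validation set

IMPLS = [
    Implementation("CPU-power-optimised", power_w=15.0, time_ms=120.0, accuracy=0.78),
    Implementation("FPGA-balanced",       power_w=25.0, time_ms=40.0,  accuracy=0.85),
    Implementation("GPU-accurate",        power_w=90.0, time_ms=15.0,  accuracy=0.93),
]

def select_implementation(anomaly_score: float) -> Implementation:
    """High behavioural anomaly -> faster, more accurate (more power-hungry) version;
    routine or static periods -> power-optimised version."""
    if anomaly_score > 0.7:
        return max(IMPLS, key=lambda i: i.accuracy)
    if anomaly_score > 0.3:
        return min(IMPLS, key=lambda i: i.time_ms * i.power_w)  # balanced trade-off
    return min(IMPLS, key=lambda i: i.power_w)
```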
Abstract:
The aim of this work was to track and verify the delivery of respiratory-gated irradiations, performed with three versions of TrueBeam linac, using a novel phantom arrangement that combined the OCTAVIUS® SRS 1000 array with a moving platform. The platform was programmed to generate sinusoidal motion of the array. This motion was tracked using the real-time position management (RPM) system and four amplitude gating options were employed to interrupt MV beam delivery when the platform was not located within set limits. Time-resolved spatial information extracted from analysis of x-ray fluences measured by the array was compared to the programmed motion of the platform and to the trace recorded by the RPM system during the delivery of the x-ray field. Temporal data recorded by the phantom and the RPM system were validated against trajectory log files, recorded by the linac during the irradiation, as well as oscilloscope waveforms recorded from the linac target signal. Gamma analysis was employed to compare time-integrated 2D x-ray dose fluences with theoretical fluences derived from the probability density function for each of the gating settings applied, where gamma criteria of 2%/2 mm, 1%/1 mm and 0.5%/0.5 mm were used to evaluate the limitations of the RPM system. Excellent agreement was observed in the analysis of spatial information extracted from the SRS 1000 array measurements. Comparisons of the average platform position with the expected position indicated absolute deviations of <0.5 mm for all four gating settings. Differences were observed when comparing time-resolved beam-on data stored in the RPM files and trajectory logs to the true target signal waveforms. Trajectory log files underestimated the cycle time between consecutive beam-on windows by 10.0 ± 0.8 ms. All measured fluences achieved 100% pass-rates using gamma criteria of 2%/2 mm and 50% of the fluences achieved pass-rates >90% when criteria of 0.5%/0.5 mm were used. Results using this novel phantom arrangement indicate that the RPM system is capable of accurately gating x-ray exposure during the delivery of a fixed-field treatment beam.
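The gamma comparison used above combines a dose-difference criterion with a distance-to-agreement. The following is a hedged, brute-force sketch of a global 2D gamma pass-rate (e.g. 2%/2 mm) with no interpolation or low-dose threshold; grid spacing and normalisation choices are assumptions, not the study's exact analysis.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_pct=2.0, dta_mm=2.0):
    """ref, meas: 2D dose arrays on the same grid; spacing_mm: pixel size in mm.
    Returns the percentage of evaluated points with gamma <= 1."""
    dd = dose_pct / 100.0 * ref.max()                 # global dose-difference criterion
    ny, nx = ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    ref_pts = np.column_stack([ys.ravel(), xs.ravel()]) * spacing_mm
    ref_dose = ref.ravel()
    gamma = np.empty(meas.size)
    for k, (iy, ix) in enumerate(zip(ys.ravel(), xs.ravel())):
        # Squared distance from this evaluated pixel to every reference pixel.
        dist2 = np.sum((ref_pts - np.array([iy, ix]) * spacing_mm) ** 2, axis=1)
        ddose2 = (meas[iy, ix] - ref_dose) ** 2
        gamma[k] = np.sqrt(np.min(dist2 / dta_mm**2 + ddose2 / dd**2))
    return 100.0 * np.mean(gamma <= 1.0)
```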
Abstract:
We introduce a hybrid method for dielectric-metal composites that describes the dynamics of the metallic system classically whilst retaining a quantum description of the dielectric. The time-dependent dipole moment of the classical system is mimicked by the introduction of projected equations of motion (PEOM) and the coupling between the two systems is achieved through an effective dipole-dipole interaction. To benchmark this method, we model a test system (semiconducting quantum dot-metal nanoparticle hybrid). We begin by examining the energy absorption rate, showing agreement between the PEOM method and the analytical rotating wave approximation (RWA) solution. We then investigate population inversion and show that the PEOM method provides an accurate model for the interaction under ultrashort pulse excitation where the traditional RWA breaks down.
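For orientation, the analytical benchmark referred to above is the textbook two-level RWA result (stated here up to sign conventions, not the PEOM equations themselves), with Rabi frequency Ω and detuning Δ of the driving field:

```latex
% Textbook RWA benchmark for a driven two-level system (sign conventions vary).
\begin{align}
  H_{\mathrm{RWA}} &= \hbar\Delta\,\lvert e\rangle\langle e\rvert
      - \frac{\hbar\Omega}{2}\bigl(\lvert e\rangle\langle g\rvert
      + \lvert g\rangle\langle e\rvert\bigr),\\
  P_e(t) &= \frac{\Omega^{2}}{\Omega^{2}+\Delta^{2}}
      \sin^{2}\!\Bigl(\tfrac{1}{2}\sqrt{\Omega^{2}+\Delta^{2}}\,t\Bigr).
\end{align}
```

Under ultrashort pulse excitation the assumptions behind this solution (slowly varying envelope, near-resonant drive) no longer hold, which is the regime where the abstract reports the RWA breaking down.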
Abstract:
Active Vision Systems can be considered dynamical systems that close the loop around artificial visual perception, controlling camera parameters and motion, and also controlling processing in order to simplify, accelerate and make visual perception more robust. Research and development in Active Vision Systems [Aloi87], [Bajc88] is a main area of interest in Computer Vision, chiefly because of its potential application in scenarios where real-time performance is needed, such as robot navigation, surveillance and visual inspection, among many others. Several systems using robotic heads have been developed for this purpose in recent years...
Abstract:
Surface flow types (SFT) are advocated as ecologically relevant hydraulic units, often mapped visually from the bankside to characterise rapidly the physical habitat of rivers. SFT mapping is simple, non-invasive and cost-efficient. However, it is also qualitative, subjective and plagued by difficulties in recording accurately the spatial extent of SFT units. Quantitative validation of the underlying physical habitat parameters is often lacking, and does not consistently differentiate between SFTs. Here, we investigate explicitly the accuracy, reliability and statistical separability of traditionally mapped SFTs as indicators of physical habitat, using independent, hydraulic and topographic data collected during three surveys of a c. 50 m reach of the River Arrow, Warwickshire, England. We also explore the potential of a novel remote sensing approach, comprising a small unmanned aerial system (sUAS) and Structure-from-Motion photogrammetry (SfM), as an alternative method of physical habitat characterisation. Our key findings indicate that SFT mapping accuracy is highly variable, with overall mapping accuracy not exceeding 74%. Results from analysis of similarity (ANOSIM) tests found that strong differences did not exist between all SFT pairs. This leads us to question the suitability of SFTs for characterising physical habitat for river science and management applications. In contrast, the sUAS-SfM approach provided high resolution, spatially continuous, spatially explicit, quantitative measurements of water depth and point cloud roughness at the microscale (spatial scales ≤1 m). Such data are acquired rapidly, inexpensively, and provide new opportunities for examining the heterogeneity of physical habitat over a range of spatial and temporal scales. Whilst continued refinement of the sUAS-SfM approach is required, we propose that this method offers an opportunity to move away from broad, mesoscale classifications of physical habitat (spatial scales 10-100 m), and towards continuous, quantitative measurements of the continuum of hydraulic and geomorphic conditions which actually exists at the microscale.
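One common way to compute a "point cloud roughness" metric like the one mentioned above is, for each point, to fit a plane to its neighbours within a radius and take the standard deviation of the residuals. The sketch below illustrates that idea only; the search radius, column layout, and units are assumptions, and the study's exact SfM workflow is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_roughness(points, radius=0.25):
    """points: (N, 3) x, y, z in metres. Returns per-point roughness (m)."""
    tree = cKDTree(points[:, :2])                 # neighbourhood search in plan view
    rough = np.full(len(points), np.nan)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p[:2], r=radius)
        if len(idx) < 4:
            continue                              # too few neighbours to fit a plane
        nb = points[idx]
        # Least-squares plane z = a*x + b*y + c fitted to the neighbourhood.
        A = np.column_stack([nb[:, 0], nb[:, 1], np.ones(len(nb))])
        coef, *_ = np.linalg.lstsq(A, nb[:, 2], rcond=None)
        rough[i] = np.std(nb[:, 2] - A @ coef)    # detrended elevation spread
    return rough
```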
Abstract:
We investigate the application of time-reversed electromagnetic wave propagation to transmit energy in a wireless power transmission system. “Time reversal” is a signal focusing method that exploits the time reversal invariance of the lossless wave equation to direct signals onto a single point inside a complex scattering environment. In this work, we explore the properties of time reversed microwave pulses in a low-loss ray-chaotic chamber. We measure the spatial profile of the collapsing wavefront around the target antenna, and demonstrate that time reversal can be used to transfer energy to a receiver in motion. We demonstrate how nonlinear elements can be controlled to selectively focus on one target out of a group. Finally, we discuss the design of a rectenna for use in a time reversal system. We explore the implications of these results and how they may be applied in future technologies.
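A conceptual sketch of time-reversal focusing (not the authors' experimental chain): record the impulse response between source and target inside a scattering cavity, time-reverse it, and retransmit; the channel then recompresses the energy into a sharp peak at the target at one instant. The synthetic multipath channel below is an assumption used purely to show the recompression effect.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000                                       # samples per unit time
h = np.zeros(fs)
taps = rng.integers(10, fs, size=60)             # random multipath arrival times
h[taps] = rng.standard_normal(60)                # random tap amplitudes

# Transmitting the time-reversed response through the same channel yields the
# autocorrelation of h, which peaks sharply at the focusing instant.
tx = h[::-1]
received = np.convolve(tx, h)
peak = np.max(np.abs(received)) / np.sqrt(np.mean(received ** 2))
print(f"focusing gain (peak over RMS): {peak:.1f}")
```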
Abstract:
Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. Motion capture of a free-flying insect is considered by using three synchronized high-speed cameras. A solid finite element representation is used as a reference body, and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. An objective function is formulated, and various shape difference definitions are considered. The proposed methodology is first studied for a synthetic case of a flexible cantilever structure undergoing large deformations, and then applied to a Manduca sexta (hawkmoth) in free flight. The three-dimensional motions of this flapping system are reconstructed from image data collected by the three cameras. The complete deformation geometry of this system is analyzed. Finally, a computational investigation is carried out to understand the flow physics and aerodynamic performance by prescribing the body and wing motions in a fluid-body code. This thesis contains one of the first sets of such motion visualization and deformation analyses carried out for a hawkmoth in free flight. The tools and procedures used in this work are widely applicable to studies of other flying animals with flexible wings as well as synthetic systems with flexible body elements.
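A minimal sketch of the reconstruction idea: find nodal displacements of a reference finite-element surface that minimise a shape difference between the deformed model and points triangulated from the camera views. The least-squares objective, the regularisation term, and the optimiser settings are illustrative assumptions; the thesis' actual objective and parametrisation may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def reconstruct_displacements(ref_nodes, observed_pts, reg=1e-2):
    """ref_nodes: (N, 3) reference mesh nodes; observed_pts: (M, 3) image-based points."""
    tree = cKDTree(observed_pts)

    def objective(u_flat):
        u = u_flat.reshape(ref_nodes.shape)
        deformed = ref_nodes + u
        d, _ = tree.query(deformed)                    # nearest observed point per node
        return np.sum(d ** 2) + reg * np.sum(u ** 2)   # shape difference + smoothness penalty

    res = minimize(objective, np.zeros(ref_nodes.size), method="L-BFGS-B")
    return res.x.reshape(ref_nodes.shape)
```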
Abstract:
To gain a better understanding of fluid–structure interaction, especially when dealing with flow around an arbitrarily moving body, it is essential to develop measurement tools enabling the instantaneous detection of a moving deformable interface during flow measurements. A particularly useful application is the determination of the unsteady turbulent flow velocity field around a moving porous fishing net structure, which is of great interest for selectivity and for numerical code validation, which needs a realistic database. To do this, a representative piece of fishing net structure is used to investigate both the Turbulent Boundary Layer (TBL) developing over the horizontal porous moving fishing net structure and the turbulent flow passing through the moving porous structure. For such an investigation, Time-Resolved PIV measurements are carried out and combined with a motion tracking technique allowing the measurement of the instantaneous motion of the deformable fishing net during the PIV measurements. Once the two-dimensional motion of the porous structure is obtained, the PIV velocity measurements are analyzed in connection with the detected motion. Finally, the TBL is characterized and the effect of the structure motion on the volumetric flow rate passing through the moving porous structure is clearly demonstrated.
Abstract:
Small particles and their dynamics are of widespread interest due both to their unique properties and to their ubiquity. Here, we investigate several classes of small particles: colloids, polymers, and liposomes. All of these particles, with sizes on the order of microns, exhibit significant similarity in that they are large enough to be visualized in microscopes, but small enough to be significantly influenced by thermal (or Brownian) motion. Further, similar optical microscopy and experimental techniques are commonly employed to investigate all of these particles. In this work, we develop single particle tracking techniques, which allow thorough characterization of individual particle dynamics, observing many behaviors that would be overlooked by methods that time- or ensemble-average. The various particle systems are also similar in that the signal-to-noise ratio frequently represented a significant concern; in many cases, development of image analysis and particle tracking methods optimized for low signal-to-noise was critical to performing the experimental observations. The simplest particles studied, in terms of their interaction potentials, were chemically homogeneous (though optically anisotropic) hard-sphere colloids. Using these spheres, we explored the comparatively underdeveloped conjunction of translation, rotation, and particle hydrodynamics. Building on this, the dynamics of clusters of spherical colloids were investigated, exploring how shape anisotropy influences translation and rotation respectively. Transitioning away from uniform hard-sphere potentials, the interactions of amphiphilic colloidal particles were explored, observing the effects of hydrophilic and hydrophobic interactions upon pattern assembly and inter-particle dynamics. Interaction potentials were altered in a different fashion by working with suspensions of liposomes, which, while homogeneous, introduce the possibility of deformation. Even further degrees of freedom were introduced by observing the interaction of particles, and then polymers, within polymer suspensions or along lipid tubules. Throughout, while by some measures the averaged behaviors accorded with expectation, closer examination made possible by single particle tracking often revealed novel and unexpected phenomena.
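One routine that single-particle-tracking analyses of Brownian motion typically rely on is the time-averaged mean squared displacement (MSD) of a trajectory, from which a diffusion coefficient can be estimated (MSD = 2dDτ in d dimensions). The sketch below is a generic illustration of that calculation; the fit range and the two-dimensional assumption are not taken from this thesis.

```python
import numpy as np

def msd(traj):
    """traj: (T, d) positions of a single particle. Returns MSD for lags 1..T-1."""
    T = len(traj)
    out = np.empty(T - 1)
    for lag in range(1, T):
        disp = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return out

def diffusion_coefficient(traj, dt, fit_lags=10):
    """Estimate D from the initial slope of the MSD curve (2-D trajectories)."""
    lags = np.arange(1, fit_lags + 1) * dt
    slope = np.polyfit(lags, msd(traj)[:fit_lags], 1)[0]
    return slope / 4.0        # MSD = 4*D*tau for d = 2
```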
Abstract:
Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment, while constructing a map of that environment at the same time. Mobile platforms which make use of SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM consists of four main components, namely, experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of identifying significant features of the unknown environment, such as corners, edges, walls, and interior features. In this work, an original feature extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. This algorithm has been constructed by combining the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The reconstructed maps obtained through simulations and experimental data with the fusion algorithm are compared to the maps obtained with existing feature extraction algorithms. Based on the results obtained, it is suggested that the proposed algorithm can be employed as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The algorithm fusion for feature extraction requires the vehicle pose estimate as an input, which is obtained from a vehicle pose estimation model. For the vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle. Different combinations of these sensors are studied (e.g., encoder, gyroscope, or encoder and gyroscope). The different sensor fusion techniques for pose estimation are experimentally studied and compared. The vehicle pose estimation model which produces the least error is used to generate inputs for the feature extraction algorithm fusion. In the experimental studies, two different environmental configurations are used, one without interior features and another with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms for performing concurrent localization and mapping in harsh environments.
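As an illustrative sketch of the encoder-plus-gyroscope pose estimate that feeds the feature extraction stage: the encoder supplies the travelled distance and the gyroscope the yaw rate, integrated into (x, y, theta). This is generic planar dead reckoning under assumed units and update intervals, not the thesis' specific fusion model.

```python
import numpy as np

def update_pose(pose, d_dist, yaw_rate, dt):
    """pose: (x, y, theta); d_dist: distance from wheel encoders over dt (m);
    yaw_rate: gyroscope reading (rad/s). Returns the updated pose."""
    x, y, theta = pose
    theta_new = theta + yaw_rate * dt
    theta_mid = theta + 0.5 * yaw_rate * dt      # midpoint heading for the step
    x += d_dist * np.cos(theta_mid)
    y += d_dist * np.sin(theta_mid)
    return np.array([x, y, theta_new])

# Example: a straight segment followed by a gentle turn.
pose = np.zeros(3)
for step in range(100):
    pose = update_pose(pose, d_dist=0.05, yaw_rate=0.0 if step < 50 else 0.2, dt=0.1)
```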