690 results for Computer Controlled Signals.


Relevance: 100.00%

Abstract:

Hyperthermia, raised temperature, has been used as a means of treating cancer for centuries. Hippocrates (400 BC) and Galen (200 BC) used red-hot irons to treat small tumours. Much later, after the Renaissance, there were many reports of spontaneous tumour regression in patients with fevers produced by erysipelas, malaria, smallpox, tuberculosis and influenza. These illnesses produce fevers of about 40 °C which last for several days. Temperatures of at least 40 °C were found to be necessary for tumour regression. Towards the end of the nineteenth century pyrogenic bacteria were injected into patients with cancer. In 1896, Coley used a mixture of erysipelas and B. prodigiosus, with some success...

Relevance: 80.00%

Abstract:

Knowmore (House of Commons) is a large-scale generative interactive installation that incorporates embodied interaction, dynamic image creation, new furniture forms, touch sensitivity, innovative collaborative processes and multichannel generative sound creation. A large circular table is spun by hand while a computer-controlled video projection falls onto its top, creating an uncanny blend of physical object and virtual media. Participants’ presence around the table, and how they touch it, is registered, allowing up to five people to collaboratively ‘play’ this deeply immersive audiovisual work. Set within an ecological context, the work subtly asks what kind of resources and knowledges might be necessary to move us past simply knowing what needs to be changed to instead actually embodying that change, whilst hinting at other deeply relational ways of understanding and knowing the world. The work has successfully operated in two high-traffic public environments, generating a subtle form of interactivity that allows different people to interact at different paces and with differing intentions, each contributing towards dramatic public outcomes. The research field involved developing new interaction and engagement strategies for eco-political media arts practice. The context was the creation of improved embodied, performative and improvisational experiences for participants, further informed by ‘Sustainment’ theory. The central question was: what ontological shifts may be necessary to better envision and align our everyday life choices in ways that respect that which is shared by all - 'The Commons'? The methodology was primarily practice-led, working in concert with the underlying theories. The work’s knowledge contribution was to question how new media interactive experience and embodied interaction might prompt participants to reflect upon the kind of resources and knowledges required to move past simply knowing what needs to be changed to instead actually embodying that change. This was achieved by focusing on the power of embodied learning implied by the work’s strongly physical interface (i.e. the spinning of a full-size table) in concert with the complex field of layered imagery and sound. The work was commissioned by the State Library of Queensland and Queensland Artworkers Alliance and significantly funded by The Australia Council for the Arts, Arts Queensland, QUT, RMIT Centre for Animation and Interactive Media and industry partners E2E Visuals. After premiering for three months at the State Library of Queensland it was curated into the significant ‘Mediations Biennial of Modern Art’ in Poznan, Poland. The work formed the basis of two papers, was reviewed in Realtime (90), was overviewed at Subtle Technologies (2010) in Toronto, was shortlisted for ISEA 2011 Istanbul, and was included in the edited book/catalogue ‘Art in Spite of Economics’, a collaboration between Leonardo/ISAST (MIT Press); Goldsmiths, University of London; ISEA International; and Sabanci University, Istanbul.

Relevance: 80.00%

Abstract:

In the field of tissue engineering new polymers are needed to fabricate scaffolds with specific properties depending on the targeted tissue. This work aimed at designing and developing a 3D scaffold with variable mechanical strength, a fully interconnected porous network, controllable hydrophilicity and degradability. For this, a desktop-robot-based melt-extrusion rapid prototyping technique was applied to a novel tri-block co-polymer, namely poly(ethylene glycol)-block-poly(ε-caprolactone)-block-poly(DL-lactide), PEG-PCL-P(DL)LA. This co-polymer was melted by electrical heating and directly extruded, under computer-controlled rapid prototyping and by means of compressed purified air, to build porous scaffolds. Various lay-down patterns (0/30/60/90/120/150°, 0/45/90/135°, 0/60/120° and 0/90°) were produced by appropriate positioning of the robotic control system. Scanning electron microscopy and micro-computed tomography showed that the 3D scaffold architectures were honeycomb-like, with completely interconnected and controlled channel characteristics. Compression tests were performed and the data obtained agreed well with the typical behavior of a porous material undergoing deformation. Preliminary cell response to the as-fabricated scaffolds was studied with primary human fibroblasts. The results demonstrated the suitability of the process and the cell biocompatibility of the polymer, two important properties among the many required for effective clinical use and efficient tissue-engineering scaffolding.
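
As a rough illustration of how such lay-down patterns translate into per-layer deposition angles, the following sketch (our own hypothetical code, not the authors'; the function name and layer count are invented) cycles each pattern across successive layers:

```python
# Hypothetical sketch: cycle a lay-down pattern of deposition angles
# (in degrees) across successive layers, as in 0/60/120-style scaffolds.
from itertools import cycle, islice

def layer_angles(pattern_deg, n_layers):
    """Return the deposition angle used for each of n_layers layers."""
    return list(islice(cycle(pattern_deg), n_layers))

# The four patterns reported above:
patterns = {
    "0/30/60/90/120/150": [0, 30, 60, 90, 120, 150],
    "0/45/90/135": [0, 45, 90, 135],
    "0/60/120": [0, 60, 120],
    "0/90": [0, 90],
}

for name, p in patterns.items():
    print(name, "->", layer_angles(p, 8))
```

A pattern such as 0/60/120°, for instance, repeats every three layers, which is consistent with the honeycomb-like architectures reported above.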

Relevance: 80.00%

Abstract:

Purpose – This paper aims to present a novel rapid prototyping (RP) fabrication method and preliminary characterization for chitosan scaffolds. Design – A desktop rapid prototyping robot dispensing (RPBOD) system has been developed to fabricate scaffolds for tissue engineering (TE) applications. The system is a computer-controlled four-axis machine with a multiple-dispenser head. Neutralization of the acetic acid by sodium hydroxide causes the chitosan to precipitate, forming a gel-like strand. The scaffold properties were characterized by scanning electron microscopy, porosity calculation and compression testing. An example of the fabrication of a freeform hydrogel scaffold is demonstrated. The required geometric data for the freeform scaffold were obtained from CT-scan images, and the dispensing path control data were converted from its volume model. The applications of the scaffolds are discussed with respect to their potential for TE. Findings – It is shown that the RPBOD system can be interfaced with imaging techniques and computational modeling to produce scaffolds which can be customized in overall size and shape, allowing tissue-engineered grafts to be tailored to specific applications or even to individual patients. Research limitations/implications – Important challenges for further research are the incorporation of growth factors, as well as cell seeding, into the 3D dispensing plotting materials. Improvements to the mechanical properties of the scaffolds are also necessary. Originality/value – One of the important aspects of TE is the design of scaffolds. For customized TE, it is essential to be able to fabricate 3D scaffolds of various geometric shapes in order to repair tissue defects. RP or solid free-form fabrication techniques hold great promise for designing 3D customized scaffolds; yet traditional cell-seeding techniques may not provide enough cell mass for larger constructs. This paper presents a novel attempt to fabricate 3D scaffolds using hydrogels, which in the future can be combined with cells.
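
To make the volume-model-to-path conversion concrete, here is a minimal, hypothetical sketch (not the authors' software) that rasters one binary slice of a segmented volume into parallel strand segments for a dispenser to follow; the strand spacing and data layout are assumptions:

```python
# Illustrative sketch: raster a binary slice of a segmented volume
# (e.g. derived from CT images) into parallel dispensing strand segments.
import numpy as np

def slice_to_strands(mask_2d, strand_spacing_px):
    """Return (row, start_col, end_col) segments covering the slice."""
    strands = []
    for row in range(0, mask_2d.shape[0], strand_spacing_px):
        cols = np.flatnonzero(mask_2d[row])
        if cols.size:
            strands.append((row, int(cols.min()), int(cols.max())))
    return strands

# Toy example: a circular 20 x 20 slice with strands every 4 pixels.
yy, xx = np.mgrid[:20, :20]
disc = (yy - 10) ** 2 + (xx - 10) ** 2 <= 64
for segment in slice_to_strands(disc, 4):
    print(segment)
```

A real system would additionally handle concave slices (multiple strand runs per row), alternate the raster direction between layers, and convert pixel coordinates into machine-axis motions.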

Relevance: 80.00%

Abstract:

We aim to fabricate computer-controlled hydrogel structures containing viable encapsulated cells to overcome the low seeding densities which are inherent to most pre-fabricated scaffold systems.

Relevance: 80.00%

Abstract:

Additive manufacturing techniques offer the potential to fabricate organized tissue constructs to repair or replace damaged or diseased human tissues and organs. Using these techniques, spatial variations of cells along multiple axes with high geometric complexity in combination with different biomaterials can be generated. The level of control offered by these computer-controlled technologies to design and fabricate tissues will accelerate our understanding of the governing factors of tissue formation and function. Moreover, it will provide a valuable tool to study the effect of anatomy on graft performance. In this review, we discuss the rationale for engineering tissues and organs by combining computer-aided design with additive manufacturing technologies that encompass the simultaneous deposition of cells and materials. Current strategies are presented, particularly with respect to limitations due to the lack of suitable polymers, and requirements to move the current concepts to practical application.

Relevance: 80.00%

Abstract:

There have been only minor developments in rigid lens materials since silicone acrylates and fluoro-silicone acrylates were introduced over a quarter of a century ago. Although there have been enhancements in mechanical lathing technology in the rigid lens field - primarily as a result of developments in computer-controlled systems - rigid lenses are still manufactured using labour-intensive lathing processes, which is why the lens unit cost remains much higher than that of disposable soft lenses.

Relevance: 80.00%

Abstract:

The current study sought to identify the impact of whether teammates in a cooperative videogame were controlled by other humans (avatars) or by the game (agents). The impact on player experience was explored through both subjective questionnaire measures and brain-wave activity measurement (electroencephalography). Play with human teammates was associated with a greater sense of relatedness, but less competence and flow, than play with computer-controlled teammates. In terms of brain activity, play with human teammates was associated with greater activity in the alpha, theta and beta power bands than play with computer-controlled teammates. Overall, the results suggest that play with human teammates involves greater cognitive activity, in terms of 'mentalising', than play with computer-controlled teammates. Additionally, the associations between subjective measures of player experience and brain activity are described. Limitations of the current study are identified and key directions for future research are discussed.
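
For readers unfamiliar with the EEG measures involved, the sketch below (a minimal illustration under assumed parameters, not the study's analysis pipeline) shows one common way to compute theta, alpha and beta band power from a raw EEG trace using Welch's method:

```python
# Minimal sketch: estimate EEG band power via Welch's PSD (scipy).
# The sampling rate, band edges and input signal are assumptions.
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, f_lo, f_hi):
    """Integrate the Welch PSD of `eeg` between f_lo and f_hi (Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum() * (freqs[1] - freqs[0])

fs = 256                          # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 60)    # stand-in for one minute of EEG
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    print(name, band_power(eeg, fs, lo, hi))
```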

Relevance: 30.00%

Abstract:

Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore), where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit the perceptual ambiguities of, de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically the work investigates Paul Virilio’s theory that technology ‘collapses space’ via the relationship of technology to speed. Technically this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997) by demonstrating how, through specific audio processing techniques, sounds may be shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time and frequency domain processing techniques may be used to achieve this shift.

Relevance: 30.00%

Abstract:

This paper presents an analysis of the phasor measurement method for tracking the fundamental power frequency, to determine whether its performance can meet the requirements of power system protection and control. Several computer simulations representing the conditions of a typical power system signal, especially signals highly distorted by harmonics, noise and offset, are used to evaluate the response of the Phasor Measurement (PM) technique. A new method that shortens the estimation delay is also proposed, allowing the PM method to work with signals free of even-order harmonics.
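
The paper's specific PM formulation is not reproduced in this abstract, so the following sketch shows the standard one-cycle DFT phasor estimator that such work typically builds on; the sampling parameters and test signal are assumptions:

```python
# Minimal sketch of a one-cycle DFT phasor estimator: given exactly one
# cycle of samples, return the fundamental's RMS magnitude and phase.
import numpy as np

def dft_phasor(window):
    N = len(window)
    n = np.arange(N)
    X = (2.0 / N) * np.sum(window * np.exp(-2j * np.pi * n / N))
    return abs(X) / np.sqrt(2), np.angle(X)

# Toy check: a 50 Hz signal with a 3rd harmonic, 64 samples per cycle.
N = 64
t = np.arange(N) / (50.0 * N)
x = 100 * np.cos(2 * np.pi * 50 * t + 0.3) + 10 * np.cos(3 * 2 * np.pi * 50 * t)
print(dft_phasor(x))  # ~ (70.71, 0.3): the integer harmonic is rejected
```

The full-cycle DFT rejects integer harmonics exactly, but its one-cycle window is the source of the estimation delay; shorter (e.g. half-cycle) windows remain accurate only when even-order harmonics are absent, which is consistent with the restriction noted above.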

Relevance: 30.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals).

Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients).

We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients under the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations.

Using these analytical results, we show a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated; we show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer.

The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which yields good results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes, because it relies on the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss how the impulsiveness of stable processes generates misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
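
As a concrete, simplified illustration of the stochastic gradient lattice algorithm discussed above (real-valued signals, unnormalized updates; a sketch rather than the thesis's exact formulation):

```python
# Simplified sketch of a stochastic gradient (adaptive) lattice filter:
# each stage adapts one reflection coefficient from its forward and
# backward prediction errors by descending the gradient of f^2 + b^2.
import numpy as np

def sg_lattice(x, order, mu):
    """Run an adaptive lattice predictor over x; return the trajectory
    of the reflection coefficients (len(x) x order)."""
    k = np.zeros(order)
    b_prev = np.zeros(order + 1)          # delayed backward errors
    history = np.zeros((len(x), order))
    for t, sample in enumerate(x):
        f = np.zeros(order + 1)
        b = np.zeros(order + 1)
        f[0] = b[0] = sample
        for m in range(order):
            f[m + 1] = f[m] + k[m] * b_prev[m]
            b[m + 1] = b_prev[m] + k[m] * f[m]
            # stochastic gradient step on f^2 + b^2
            k[m] -= mu * (f[m + 1] * b_prev[m] + b[m + 1] * f[m])
        b_prev = b.copy()
        history[t] = k
    return history

# AR(1) toy input: x(t) = 0.8 x(t-1) + w(t); k[0] should approach -0.8.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
print(sg_lattice(x, order=2, mu=0.001)[-1])
```

For this Gaussian AR(1) input the first reflection coefficient settles near -0.8; the least-mean p-norm variants proposed in the thesis instead descend a fractional lower-order moment of the errors so that the update remains well behaved for infinite-variance alpha-stable inputs.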

Relevance: 30.00%

Abstract:

The role of cardiopulmonary signals in the dynamics of wavefront aberrations in the eye has been examined. Synchronous measurements of the eye’s wavefront aberrations, cardiac function, blood pulse, and respiration signals were taken for a group of young, healthy subjects. Two focusing stimuli, three breathing patterns, and both natural and cycloplegic eye conditions were examined. A set of tools, including time-frequency coherence and its metrics, is proposed to acquire a detailed picture of the interactions of the cardiopulmonary system with the eye’s wavefront aberrations. The results showed that the coherence of the blood pulse and its harmonics with the eye’s aberrations was, on average, weak (0.4 ± 0.15), while the coherence of the respiration signal with the eye’s aberrations was, on average, moderate (0.53 ± 0.14). It was also revealed that there were significant intervals during which high coherence occurred: on average, the coherence was high (>0.75) during 16% of the recorded time for the blood pulse, and 34% of the time for the respiration signal. A statistically significant decrease in average coherence was noted for the eye’s aberrations with respiration in the case of fast controlled breathing (0.5 Hz). The coherence between the blood pulse and defocus was significantly larger for the far-target than for the near-target condition. After cycloplegia, the coherence of defocus with the blood pulse significantly decreased, while this was not the case for the other aberrations. There was also a noticeable, but not statistically significant, increase in the coherence of the comatic term and respiration in that case. By using nonstationary measures of signal coherence, a more detailed picture of the interactions between the cardiopulmonary signals and the eye’s wavefront aberrations has emerged.
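
The paper defines its own time-frequency coherence tools; as a rough stand-in, the sketch below estimates magnitude-squared coherence over sliding windows, which gives a coarse picture of how the coherence between, say, the blood pulse and the defocus term evolves in time (all signals and parameters here are synthetic assumptions):

```python
# Illustrative sketch: sliding-window magnitude-squared coherence
# between two signals, as a coarse stand-in for time-frequency coherence.
import numpy as np
from scipy.signal import coherence

def sliding_coherence(x, y, fs, win_s=10.0, step_s=2.0):
    """Return (times, freqs, C) with C[i, j] = coherence in window i at freq j."""
    win, step = int(win_s * fs), int(step_s * fs)
    times, rows, freqs = [], [], None
    for start in range(0, len(x) - win + 1, step):
        freqs, cxy = coherence(x[start:start + win], y[start:start + win],
                               fs=fs, nperseg=win // 4)
        times.append((start + win / 2) / fs)
        rows.append(cxy)
    return np.array(times), freqs, np.array(rows)

# Stand-ins for a blood-pulse trace and a defocus (Zernike) time series.
fs = 50.0
t = np.arange(0, 120, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
defocus = 0.5 * np.sin(2 * np.pi * 1.2 * t + 0.8) + 0.3 * np.random.randn(t.size)
times, freqs, C = sliding_coherence(pulse, defocus, fs)
print(C.shape, C.max())
```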

Relevance: 30.00%

Abstract:

Background: An estimated 285 million people worldwide have diabetes and its prevalence is predicted to increase to 439 million by 2030. For the year 2010, it is estimated that 3.96 million excess deaths in the age group 20-79 years are attributable to diabetes around the world. Self-management is recognised as an integral part of diabetes care. This paper describes the protocol of a randomised controlled trial of an automated interactive telephone system aiming to improve the uptake and maintenance of essential diabetes self-management behaviours.

Methods/Design: A total of 340 individuals with type 2 diabetes will be randomised, either to the routine care arm, or to the intervention arm in which participants receive the Telephone-Linked Care (TLC) Diabetes program in addition to their routine care. The intervention requires the participants to telephone the TLC Diabetes phone system weekly for 6 months. They receive the study handbook and a glucose meter linked to a data uploading device. The TLC system consists of a computer with software designed to provide monitoring, tailored feedback and education on key aspects of diabetes self-management, based on answers voiced or entered during the current or previous conversations. Data collection is conducted at baseline (Time 1), 6-month follow-up (Time 2), and 12-month follow-up (Time 3). The primary outcomes are glycaemic control (HbA1c) and quality of life (Short Form-36 Health Survey version 2). Secondary outcomes include anthropometric measures, blood pressure, blood lipid profile and psychosocial measures, as well as measures of diet, physical activity, blood glucose monitoring, foot care and medication taking. Information on utilisation of healthcare services, including hospital admissions, medication use and costs, is collected. An economic evaluation is also planned.

Discussion: Outcomes will provide evidence concerning the efficacy of a telephone-linked care intervention for self-management of diabetes. Furthermore, the study will provide insight into the potential for more widespread uptake of automated telehealth interventions, globally.

Relevance: 30.00%

Abstract:

This paper is directed towards answering the question, “Can you control the trajectory of a Lagrangian float?” Because such floats have minimal actuation (only buoyancy control), their horizontal trajectory is dictated by drifting with ocean currents. However, with appropriate vertical actuation, and by utilising spatio-temporal variations in water speed and direction, we show here that broad controllability results can be achieved, such as waypoint following to keep a float inside a bay or out of a designated region. This paper extends theory experimentally evaluated on horizontally actuated Autonomous Underwater Vehicles (AUVs) for trajectory control utilising ocean forecast models, and presents an initial investigation into the controllability of these minimally actuated drifting AUVs. Simulated results, both offshore-coastal and within highly dynamic tidal bays, illustrate two techniques that promise an affirmative answer to the question posed above.
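
As an illustration of how buoyancy-only actuation can yield horizontal control, the sketch below (our own toy example, not the authors' planner) picks the depth layer whose forecast current best advances the float toward a waypoint:

```python
# Conceptual sketch: with buoyancy-only actuation, choose the depth
# layer whose forecast current has the largest component toward the
# next waypoint. Depths and velocities are invented toy values.
import numpy as np

def best_depth(currents, position, waypoint):
    """currents: dict mapping depth (m) -> (u, v) forecast velocity (m/s)
    at `position`. Return the depth whose current points most strongly
    toward the waypoint."""
    goal = np.asarray(waypoint, float) - np.asarray(position, float)
    goal = goal / np.linalg.norm(goal)
    return max(currents, key=lambda d: np.dot(currents[d], goal))

# Toy forecast: eastward flow near the surface, westward flow at depth.
currents = {5: np.array([0.3, 0.0]), 50: np.array([-0.2, 0.05])}
print(best_depth(currents, position=(0.0, 0.0), waypoint=(1.0, 0.0)))  # -> 5
```

Repeating this choice as the forecast evolves is the essence of exploiting spatio-temporal current variation: the float cannot swim toward the waypoint, but it can choose which current to ride.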