917 results for post-processing method


Relevance:

30.00%

Publisher:

Abstract:

Background and Purpose. Cardiorespiratory fitness is increasingly being recognized as an impairment requiring physiotherapy intervention after stroke. The present study seeks to investigate if routine physiotherapy treatment is capable of inducing a cardiorespiratory training effect and if stroke patients attending physiotherapy who are unable to walk experience less cardiorespiratory stress during physiotherapy when compared to those who are able to walk. Method. A descriptive, observational study, with heart rate monitoring and video-recording of physiotherapy rehabilitation, was conducted. Thirty consecutive stroke patients from a geriatric and rehabilitation unit of a tertiary metropolitan hospital, admitted for rehabilitation, and requiring physiotherapy were included in the study. The main measures of the study were duration (time) and intensity (percentage of heart rate reserve) of standing and walking activities during physiotherapy rehabilitation for non-walking and walking stroke patients. Results. Stroke patients spent an average of 21 minutes participating in standing and walking activities that were capable of inducing a cardiorespiratory training effect. Stroke patients who were able to walk spent longer in these activities during physiotherapy rehabilitation than non-walking stroke patients (p < 0.05). An average intensity of 24% heart rate reserve (HRR) during standing and walking activities was insufficient to result in a cardiorespiratory training effect, with a maximum of 35% achieved for the stroke patients able to walk and 30% for those unable to walk. Conclusions. Routine physiotherapy rehabilitation had insufficient duration and intensity to result in a cardiorespiratory training effect in our group of stroke patients. Copyright © 2006 John Wiley & Sons, Ltd.

Relevance:

30.00%

Publisher:

Abstract:

This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to the landmark, this system maps views to poses. The challenge is to produce a system that produces the same view for small changes in robot pose, but provides different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
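
The paper does not give its template scheme in code, but the core idea of filtering and pooling camera views into learned templates that are matched against later views can be sketched as below. The class and function names, the pooling factor and the matching threshold are illustrative assumptions, not the authors' implementation; a 2-D grayscale frame of fixed resolution is assumed.

```python
import numpy as np

def preprocess(image, pool=4):
    """Layered filtering and pooling: contrast-normalise the frame, then
    spatially pool it into a small, pose-tolerant template vector."""
    img = image.astype(float)
    img = (img - img.mean()) / (img.std() + 1e-6)          # contrast normalisation
    h, w = img.shape
    img = img[:h - h % pool, :w - w % pool]                 # crop to a multiple of the pool size
    pooled = img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return pooled.ravel()

class ViewTemplateMemory:
    """Maps camera views to learned templates; a new template is created
    when the current view is too different from all stored ones."""
    def __init__(self, threshold=0.12):
        self.templates = []          # list of learned template vectors
        self.threshold = threshold   # mean absolute difference counted as a 'match'

    def match_or_learn(self, image):
        view = preprocess(image)
        if self.templates:
            diffs = [np.abs(view - t).mean() for t in self.templates]
            best = int(np.argmin(diffs))
            if diffs[best] < self.threshold:
                return best          # same view for small changes in pose
        self.templates.append(view)  # larger change in pose -> new template
        return len(self.templates) - 1
```

Matching an incoming frame either returns the index of an existing template (small pose change) or learns a new one (larger pose change), which is the behaviour the abstract describes for interfacing the visual system with RatSLAM.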

Relevance:

30.00%

Publisher:

Abstract:

Indexing high-dimensional datasets has attracted extensive attention from many researchers in the last decade. Since R-tree-type index structures are known to suffer from the curse of dimensionality, Pyramid-tree-type index structures, which are based on the B-tree, have been proposed to break the curse of dimensionality. However, for high-dimensional data the number of pyramids is often insufficient to discriminate data points, and the effectiveness of these structures degrades dramatically as dimensionality increases. In this paper we focus on one particular aspect of the curse of dimensionality: the region near the surface of a hypercube approaches 100% of the total hypercube volume as the number of dimensions approaches infinity. We propose a new indexing method based on this surface property. We prove that the Pyramid-tree technique is a special case of our method. The results of our experiments demonstrate a clear advantage of our novel method.
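
The surface-concentration effect the abstract relies on is easy to verify numerically: the fraction of a unit hypercube's volume that lies within a thin shell of width eps of its surface is 1 - (1 - 2*eps)^d, which tends to 1 as the dimension d grows. A short illustrative check (not from the paper):

```python
def surface_fraction(d, eps=0.01):
    """Fraction of the unit hypercube [0, 1]^d lying within eps of its surface.
    The inner cube [eps, 1 - eps]^d has volume (1 - 2*eps)^d; the shell is the rest."""
    return 1.0 - (1.0 - 2.0 * eps) ** d

for d in (2, 10, 50, 100, 500):
    print(f"d = {d:4d}: {surface_fraction(d):.4f} of the volume lies near the surface")
```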

Relevance:

30.00%

Publisher:

Abstract:

Identification and prediction of non-technical losses (NTL) are important tasks for many utilities. Data from a customer information system (CIS) can be used for NTL analysis; however, to perform the analysis accurately and efficiently, the original CIS data need to be pre-processed before any detailed NTL analysis is carried out. In this paper, we propose a feature-selection-based method for CIS data pre-processing that extracts the most relevant information for further analysis such as clustering and classification. By removing irrelevant and redundant features, feature selection is an essential step of the data mining process: it finds an optimal subset of features that improves the quality of results, giving faster processing, higher accuracy and simpler outputs with fewer features. A detailed feature selection analysis is presented in the paper, and both time-domain and load-shape data are compared in terms of accuracy, consistency and the statistical dependencies between features.
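
The paper's own selection procedure is not reproduced here; as a hedged illustration of the filter idea it describes (drop irrelevant and redundant features), the sketch below ranks features by absolute correlation with the target and discards features that are highly correlated with an already selected one. The function name, thresholds and synthetic data are assumptions for the example only.

```python
import numpy as np

def select_features(X, y, k=5, redundancy_threshold=0.9):
    """Filter-style feature selection: rank features by |corr(feature, target)|,
    then skip features that are nearly collinear with an already selected one."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    order = np.argsort(relevance)[::-1]          # most relevant first
    selected = []
    for j in order:
        redundant = any(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) > redundancy_threshold
                        for s in selected)
        if not redundant:
            selected.append(j)
        if len(selected) == k:
            break
    return selected

# usage with synthetic CIS-like data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200) > 0).astype(int)
print(select_features(X, y, k=3))
```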

Relevance:

30.00%

Publisher:

Abstract:

This work has, as its objective, the development of non-invasive and low-cost systems for monitoring and automatically diagnosing specific neonatal diseases by means of the analysis of suitable video signals. We focus on monitoring infants potentially at risk of diseases characterized by the presence or absence of rhythmic movements of one or more body parts. Seizures and respiratory diseases are specifically considered, but the approach is general. Seizures are defined as sudden neurological and behavioural alterations. They are age-dependent phenomena and the most common sign of central nervous system dysfunction. Neonatal seizures have onset within the 28th day of life in newborns at term and within the 44th week of conceptional age in preterm infants. Their main causes are hypoxic-ischaemic encephalopathy, intracranial haemorrhage, and sepsis. Studies indicate an incidence rate of neonatal seizures of 0.2% of live births, 1.1% for preterm neonates, and 1.3% for infants weighing less than 2500 g at birth. Neonatal seizures can be classified into four main categories: clonic, tonic, myoclonic, and subtle. Seizures in newborns have to be promptly and accurately recognized in order to establish timely treatments that could avoid an increase of the underlying brain damage. Respiratory diseases related to the occurrence of apnoea episodes may be caused by cerebrovascular events. Among the wide range of causes of apnoea, besides seizures, a relevant one is Congenital Central Hypoventilation Syndrome (CCHS) [Healy]. With a reported prevalence of 1 in 200,000 live births, CCHS, formerly known as Ondine's curse, is a rare life-threatening disorder characterized by a failure of the automatic control of breathing, caused by mutations in the PHOX2B gene. CCHS manifests itself, in the neonatal period, with episodes of cyanosis or apnoea, especially during quiet sleep. The reported mortality rates range from 8% to 38% of newborns with genetically confirmed CCHS. Nowadays, CCHS is considered a disorder of autonomic regulation, with a related risk of sudden infant death syndrome (SIDS). Currently, the standard method of diagnosis for both diseases is polysomnography, which relies on a set of sensors such as ElectroEncephaloGram (EEG) sensors, ElectroMyoGraphy (EMG) sensors, ElectroCardioGraphy (ECG) sensors, elastic belt sensors, a pulse oximeter and nasal flow meters. This monitoring system is very expensive, time-consuming, moderately invasive and requires particularly skilled medical personnel, not always available in a Neonatal Intensive Care Unit (NICU). Therefore, automatic, real-time and non-invasive monitoring equipment able to reliably recognize these diseases would be of significant value in the NICU. A very appealing monitoring tool to automatically detect neonatal seizures or breathing disorders may be based on acquiring, through a network of sensors, e.g., a set of video cameras, the movements of the newborn's body (e.g., limbs, chest) and properly processing the relevant signals. An automatic multi-sensor system could be used to permanently monitor every patient in the NICU or specific patients at home. Furthermore, a wire-free technique may be more user-friendly and highly desirable when used with infants, in particular with newborns. This work has focused on a reliable method to estimate the periodicity in pathological movements based on the use of the Maximum Likelihood (ML) criterion.
In particular, average differential luminance signals from multiple Red, Green and Blue (RGB) cameras or depth-sensor devices are extracted and the presence or absence of a significant periodicity is analysed in order to detect possible pathological conditions. The efficacy of this monitoring system has been measured on the basis of video recordings provided by the Department of Neurosciences of the University of Parma. Concerning clonic seizures, a kinematic analysis was performed to establish a relationship between neonatal seizures and the human inborn pattern of quadrupedal locomotion. Moreover, we decided to build simulators able to replicate the symptomatic movements characteristic of the diseases under consideration; the reason is, essentially, the opportunity to have, at any time, a 'subject' on which to test the continuously evolving detection algorithms. Finally, we have developed a smartphone app, called 'Smartphone based contactless epilepsy detector' (SmartCED), able to detect neonatal clonic seizures and warn the user of their occurrence in real time.
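
The thesis does not spell out the estimator here, but for the classical model of a single sinusoidal (periodic) component in white Gaussian noise, the ML frequency estimate coincides with the peak of the periodogram. A hedged sketch of that detection idea applied to an average luminance signal follows; the signal model, frequency band and decision threshold are illustrative assumptions, not the values used in the work.

```python
import numpy as np

def detect_periodicity(luminance, fs, f_band=(0.5, 5.0), threshold=8.0):
    """Decide whether an average differential luminance signal contains a
    significant periodic component.  Under a single-sinusoid-in-white-noise
    model the ML frequency estimate is the periodogram peak; a peak-to-mean
    ratio serves as a simple (illustrative) significance test."""
    x = np.asarray(luminance, dtype=float)
    x = x - x.mean()
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / n            # periodogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f_band[0]) & (freqs <= f_band[1])    # plausible rhythmic-movement band
    peak_idx = np.argmax(spectrum[band])
    f_hat = freqs[band][peak_idx]                         # ML-style frequency estimate
    score = spectrum[band][peak_idx] / (spectrum[band].mean() + 1e-12)
    return score > threshold, f_hat, score

# usage on a synthetic 2 Hz rhythmic movement sampled at 25 frames/s
fs = 25.0
t = np.arange(0, 20, 1 / fs)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.random.randn(len(t))
print(detect_periodicity(signal, fs))
```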

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To demonstrate the application of low-coherence reflectometry to the study of biometric changes during disaccommodation responses in human eyes after cessation of a near task and to evaluate the effect of contact lenses on low-coherence reflectometry biometric measurements. METHODS: Ocular biometric parameters of crystalline lens thickness (LT) and anterior chamber depth (ACD) were measured with the LenStar device during and immediately after a 5 D accommodative task in 10 participants. In a separate trial, accommodation responses were recorded with a Shin-Nippon WAM-5500 optometer in a subset of two participants. Biometric data were interleaved to form a profile of post-task anterior segment changes. In a further experiment, the effect of soft contact lenses on LenStar measurements was evaluated in 15 participants. RESULTS: In 10 adult participants, increased LT and reduced ACD were seen during the 5 D task. Post-task, during fixation of a 0 D target, a profile of the change in LT and ACD against time was observed. In the two participants with accommodation data (one a sufferer of nearwork-induced transient myopia and the other a non-sufferer), the post-task changes in refraction compared favorably with the interleaved LenStar biometry data. The insertion of soft contact lenses did not have a significant effect on LenStar measures of ACD or LT (mean change: -0.007 mm, p = 0.265 and +0.001 mm, p = 0.875, respectively). CONCLUSIONS: With the addition of a relatively simple stimulus modification, the LenStar instrument can be used to produce a profile of post-task changes in LT and ACD. The spatial and temporal resolution of the system is sufficient for the investigation of nearwork-induced transient myopia from a biometric viewpoint. LenStar measurements of ACD and LT remain valid after the fitting of soft contact lenses.

Relevance:

30.00%

Publisher:

Abstract:

Having a fixed differential group delay (DGD) term in the coarse step method results in a periodic pattern in the simulated statistics. This artefact can be avoided by inserting a varying DGD term at each integration step, drawn from a Gaussian distribution. Simulation results are given to illustrate the phenomenon and provide some evidence about its statistical nature.
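
As a hedged illustration of the setting (not the authors' code), the sketch below builds a coarse-step emulator as a concatenation of birefringent sections with random polarisation scattering, computes the frequency-dependent DGD from the Jones matrix, and compares a fixed per-section DGD with a Gaussian-distributed one. The section count, DGD spread and frequency grid are assumptions chosen only to make the periodic repetition of the fixed-DGD case visible.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    """Random SU(2) element: scatters the polarisation state between sections."""
    a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
    q = np.array([[a, -np.conj(b)], [b, np.conj(a)]])
    return q / np.sqrt(abs(a) ** 2 + abs(b) ** 2)

def transfer_matrix(omega, dgds, rotations):
    """Coarse-step Jones matrix: birefringent sections interleaved with scattering."""
    T = np.eye(2, dtype=complex)
    for tau, R in zip(dgds, rotations):
        B = np.diag([np.exp(1j * omega * tau / 2), np.exp(-1j * omega * tau / 2)])
        T = B @ R @ T
    return T

def dgd_at(omega, dgds, rotations, d_omega=1e-3):
    """Total DGD from the eigenvalue phases of T(w + dw) T(w)^H."""
    G = (transfer_matrix(omega + d_omega, dgds, rotations)
         @ transfer_matrix(omega, dgds, rotations).conj().T)
    phases = np.angle(np.linalg.eigvals(G))
    return abs(phases[0] - phases[1]) / d_omega

n_sections, mean_tau = 50, 1.0
rotations = [random_rotation() for _ in range(n_sections)]
fixed = [mean_tau] * n_sections                              # fixed DGD at every step
varying = rng.normal(mean_tau, 0.2 * mean_tau, n_sections)   # Gaussian-distributed DGD
omegas = np.linspace(0, 4 * np.pi, 13)                       # spans two periods 2*pi/tau for the fixed case
for label, dgds in (("fixed  ", fixed), ("varying", varying)):
    taus = [dgd_at(w, dgds, rotations) for w in omegas]
    print(label, np.round(taus, 2))   # fixed case repeats every 2*pi/tau; varying case does not
```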

Relevance:

30.00%

Publisher:

Abstract:

With the extensive use of pulse modulation methods in telecommunications, much work has been done in the search for a better utilisation of the transmission channel. The present research is an extension of these investigations. A new modulation method, 'Variable Time-Scale Information Processing' (VTSIP), is proposed. The basic principles of this system have been established, and the main advantages and disadvantages investigated. With the proposed system, comparison circuits detect the instants at which the input signal voltage crosses predetermined amplitude levels. The time intervals between these occurrences are measured digitally and the results are temporarily stored, before being transmitted. After reception, an inverse process enables the original signal to be reconstituted. The advantage of this system is that the irregularities in the rate of information contained in the input signal are smoothed out before transmission, allowing the use of a smaller transmission bandwidth. A disadvantage of the system is the time delay necessarily introduced by the storage process. Another disadvantage is a type of distortion caused by the finite store capacity. A simulation of the system has been made using a standard speech signal, to make some assessment of this distortion. It is concluded that the new system should be an improvement on existing pulse transmission systems, allowing the use of a smaller transmission bandwidth, but introducing a time delay.
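
The level-crossing principle described above can be sketched as a toy encoder/decoder pair: record which predetermined level was crossed and the time interval since the previous crossing, then reconstitute an approximation by joining the crossing points. This is only an illustration of the comparison-and-interval idea under simplifying assumptions (sample-resolution timing, no storage or transmission model); the names and levels are invented for the example.

```python
import numpy as np

def vtsip_encode(signal, levels):
    """Record (level crossed, samples since previous crossing) pairs each time
    the input crosses one of the predetermined amplitude levels."""
    events = []
    last_crossing = 0
    for n in range(1, len(signal)):
        for lv in levels:
            if (signal[n - 1] - lv) * (signal[n] - lv) < 0:   # sign change => crossing
                events.append((lv, n - last_crossing))
                last_crossing = n
                break
    return events

def vtsip_decode(events, length):
    """Reconstitute an approximation by joining the recorded crossings with straight lines."""
    times, values, t = [0], [events[0][0] if events else 0.0], 0
    for lv, dt in events:
        t += dt
        times.append(t)
        values.append(lv)
    return np.interp(np.arange(length), times, values)

# usage on a toy waveform
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)
levels = np.linspace(-1.2, 1.2, 9)
events = vtsip_encode(x, levels)
x_hat = vtsip_decode(events, len(x))
print(len(events), "crossings, reconstruction MSE =", float(np.mean((x - x_hat) ** 2)))
```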

Relevance:

30.00%

Publisher:

Abstract:

One of the main objectives of this study was to functionalise various rubbers (i.e. ethylene propylene copolymer (EP), ethylene propylene diene terpolymer (EPDM), and natural rubber (NR)) using the functional monomers maleic anhydride (MA) and glycidyl methacrylate (GMA) via reactive processing routes. The functionalisation of the rubber was carried out via different reactive processing methods in an internal mixer. GMA was free-radically grafted onto EP and EPDM in the melt state in the absence and presence of a comonomer, trimethylolpropane triacrylate (TRIS). To optimise the grafting conditions and the compositions, the effects of various parameters on the grafting yields and the extent of side reactions were investigated. A precipitation method and a Soxhlet extraction method were established to purify the GMA-modified rubbers, and the grafting degree was determined by FTIR and titration. It was found that without TRIS the grafting degree of GMA increased with increasing peroxide concentration. However, grafting was low, and homopolymerisation of GMA and crosslinking of the polymers were identified as the main side reactions competing with the desired grafting reaction for EP and EPDM, respectively. The use of the tri-functional comonomer TRIS was shown to greatly enhance the GMA grafting and reduce the side reactions, in terms of a higher GMA grafting degree, less alteration of the rheological properties of the polymer substrates and very little formation of polyGMA. The grafting mechanisms were investigated. MA was grafted onto NR using both thermal initiation and peroxide initiation. The results showed clearly that the reaction of MA with NR could be thermally initiated above 140°C in the absence of peroxide. At a preferable temperature of 200°C, the grafting degree increased with increasing MA concentration. The grafting reaction could also be initiated with peroxide. It was found that 2,5-dimethyl-2,5-bis(tert-butylperoxy)hexane (T101) was a suitable peroxide to initiate the reaction efficiently above 150°C. The second objective of the work was to utilize the functionalised rubbers in a second step to achieve an in-situ compatibilisation of blends based on poly(ethylene terephthalate) (PET), in particular with GMA-grafted-EP and -EPDM; the reactive blending was carried out in an internal mixer. The effects of the GMA grafting degree, the viscosities of GMA-grafted-EP and -EPDM, and the presence of polyGMA in the rubber samples on the compatibilisation of PET blends, in terms of morphology, dynamic mechanical properties and tensile properties, were investigated. It was found that the GMA-modified rubbers were very efficient in compatibilising the PET blends, and this was supported by the much finer morphology and the better tensile properties. The evidence obtained from the analysis of the PET blends strongly supports the existence of copolymers formed through interfacial reactions between the grafted epoxy group in the GMA-modified rubber and the terminal groups of PET in the blends.

Relevance:

30.00%

Publisher:

Abstract:

An interoperable web processing service (WPS) for the automatic interpolation of environmental data has been developed in the frame of the INTAMAP project. In order to assess the performance of the interpolation method implemented, a validation WPS has also been developed. This validation WPS can be used to perform leave-one-out and K-fold cross-validation: a full dataset is submitted and a range of validation statistics and diagnostic plots (e.g. histograms, variogram of residuals, mean errors) is received in return. This paper presents the architecture of the validation WPS, and a case study is used to briefly illustrate its use in practice. We conclude with a discussion of the current limitations of the system and make proposals for further developments.
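
The kind of computation the validation service performs can be sketched offline: split the submitted dataset into K folds, re-interpolate each held-out fold from the remaining observations, and summarise the residuals. The inverse-distance interpolator below is only a stand-in for the interpolation methods the INTAMAP WPS actually selects, and all names, parameters and synthetic data are illustrative; leave-one-out corresponds to k equal to the number of observations.

```python
import numpy as np

def idw_interpolate(xy_train, z_train, xy_test, power=2.0):
    """Stand-in interpolator (inverse-distance weighting) used only for this sketch."""
    d = np.linalg.norm(xy_test[:, None, :] - xy_train[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * z_train).sum(axis=1) / w.sum(axis=1)

def k_fold_validation(xy, z, k=5, seed=0):
    """K-fold cross-validation returning the kind of summary statistics a
    validation service might report (mean error, RMSE) plus the residuals."""
    idx = np.random.default_rng(seed).permutation(len(z))
    residuals = np.empty(len(z))
    for fold in range(k):
        test = idx[fold::k]
        train = np.setdiff1d(idx, test)
        z_hat = idw_interpolate(xy[train], z[train], xy[test])
        residuals[test] = z[test] - z_hat
    return {"mean_error": residuals.mean(),
            "rmse": np.sqrt((residuals ** 2).mean()),
            "residuals": residuals}

# usage with synthetic environmental observations
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(200, 2))
z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.normal(size=200)
stats = k_fold_validation(xy, z, k=5)
print(round(stats["mean_error"], 4), round(stats["rmse"], 4))
```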

Relevance:

30.00%

Publisher:

Abstract:

The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially-available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially-available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
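
The data path of the second prototype (a 1-D detector array sampled at each step of a single-axis scan, assembled into the 2-D grid an iso-lux diagram needs) can be illustrated with a short sketch. The driver call `read_array`, the calibration constant and the simulated beam pattern are all invented for the example and do not reflect the project's actual hardware interface.

```python
import numpy as np

def acquire_iso_lux(read_array, n_steps, calibration=1.0):
    """Assemble a 2-D illuminance grid from a one-dimensional detector array
    scanned along the remaining axis.  `read_array(step)` is a hypothetical
    driver call returning one column of raw detector readings per scan step."""
    columns = [calibration * np.asarray(read_array(step), dtype=float)
               for step in range(n_steps)]
    return np.column_stack(columns)      # rows: vertical angle, cols: horizontal angle

# usage with a simulated beam pattern in place of real hardware
def fake_read_array(step, n_detectors=64, n_steps=128):
    v = np.linspace(-1, 1, n_detectors)
    h = -1 + 2 * step / (n_steps - 1)
    return 1000.0 * np.exp(-(v ** 2 + (h + 0.2) ** 2) / 0.1)   # lux-like hot spot

grid = acquire_iso_lux(fake_read_array, n_steps=128)
for lv in (50, 100, 200, 400, 800):        # iso-lux contour levels (lux)
    print(lv, int((grid >= lv).sum()), "grid points at or above this level")
```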

Relevance:

30.00%

Publisher:

Abstract:

The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
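
The thesis targets a Transputer network; purely as an illustration of the same data-parallel argument (more response channels simply require more processors), here is a minimal sketch using Python's multiprocessing, with a placeholder per-channel least-squares fit standing in for the actual extraction algorithms, which operate globally across the measured FRFs.

```python
import numpy as np
from multiprocessing import Pool

def fit_channel(args):
    """Placeholder per-channel 'curve fit': a least-squares polynomial fit to the
    FRF magnitude, standing in for the real modal extraction step."""
    freqs, frf = args
    return np.polyfit(freqs / freqs[-1], np.abs(frf), deg=6)

def parallel_extract(freqs, frf_matrix, n_workers=4):
    """Distribute response channels across worker processes, mirroring the
    one-channel-per-processor decomposition discussed above."""
    jobs = [(freqs, frf_matrix[:, ch]) for ch in range(frf_matrix.shape[1])]
    with Pool(n_workers) as pool:
        return pool.map(fit_channel, jobs)

if __name__ == "__main__":
    freqs = np.linspace(1.0, 100.0, 400)
    # synthetic single-degree-of-freedom FRFs for 8 response channels
    frf = np.array([1.0 / (50.0 ** 2 - freqs ** 2 + 1j * 2.0 * freqs)] * 8).T
    results = parallel_extract(freqs, frf)
    print(len(results), "channels fitted")
```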

Relevance:

30.00%

Publisher:

Abstract:

Background: Some Australian pharmacists use continuing education to maintain knowledge and acquire new information. There has been a progression from continuing education to continuing professional development (CPD) - a mandatory requirement for pharmacists in all jurisdictions of Australia. Aim: To identify post-registration learning trends of community pharmacists in Western Australia. Method: A questionnaire was developed and administered by face-to-face interviews with community pharmacists in metropolitan Perth. Pharmacists registered for less than 12 months and pharmacists working in hospitals were excluded. Results: 103 pharmacists were approached with a response rate of 95%. Journals (41%), reference books (23%) and the Internet (18%) were the most commonly used educational resources cited by pharmacists. Keeping scientific information up-to-date (39%) and gathering practical knowledge (22%) were the leading motivators for pharmacists to participate in continuing education. Factors that hindered participation in continuing education included lack of time (34%), family commitments (21%) and business commitments (21%). 79% of pharmacists agreed with the concept of mandatory CPD. 47% of pharmacists suggested that the primary sanction for not complying with mandatory CPD should be counselling to determine reasons for non-compliance. Conclusion: Community pharmacists preferred educational resources that were easily accessible at convenient times. Most pharmacists were able to fulfil the requirements of CPD, however, further educational support and promotion would ensure the successful uptake of CPD by community pharmacists in Western Australia.

Relevance:

30.00%

Publisher:

Abstract:

All-optical technologies for data processing and signal manipulation are expected to play a major role in future optical communications. Nonlinear phenomena occurring in optical fibre have many attractive features and great, but not yet fully exploited, potential in optical signal processing. Here, we overview our recent results and advances in developing novel photonic techniques and approaches to all-optical processing based on fibre nonlinearities. Amongst other topics, we will discuss phase-preserving optical 2R regeneration, the possibility of using parabolic/flat-top pulses for optical signal processing and regeneration, and nonlinear optical pulse shaping. A method for passive nonlinear pulse shaping based on pulse pre-chirping and propagation in a normally dispersive fibre will be presented. The approach provides a simple way of generating various temporal waveforms of fundamental and practical interest. Particular emphasis will be given to the formation and characterization of pulses with a triangular intensity profile. A new technique for doubling/copying optical pulses in both the frequency and time domains using triangular-shaped pulses will also be introduced.
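
The pre-chirp-plus-normal-dispersion shaping route can be explored numerically with a standard split-step Fourier solver for the nonlinear Schrödinger equation. The sketch below uses normalised units; the dispersion, nonlinearity, chirp and power values are purely illustrative and are not the parameters of the cited work. With suitably chosen chirp and launch power, the output intensity evolves toward the flat-top, parabolic or triangular profiles discussed above.

```python
import numpy as np

# normalised NLSE: dA/dz = -i*(beta2/2)*d2A/dt2 + i*gamma*|A|^2*A
n, t_half = 4096, 40.0
t = np.linspace(-t_half, t_half, n, endpoint=False)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)

beta2, gamma = +1.0, 1.0           # normal dispersion and Kerr nonlinearity (normalised)
chirp_C, peak_power = -2.0, 10.0   # illustrative pre-chirp and launch power
z_total, n_steps = 2.0, 2000
dz = z_total / n_steps

# pre-chirped Gaussian input pulse
A = np.sqrt(peak_power) * np.exp(-(1 + 1j * chirp_C) * t ** 2 / 2)
in_intensity = np.abs(A) ** 2

half_disp = np.exp(1j * (beta2 / 2) * omega ** 2 * dz / 2)   # half-step dispersion operator
for _ in range(n_steps):                                     # symmetric split-step loop
    A = np.fft.ifft(half_disp * np.fft.fft(A))
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)         # nonlinear (SPM) step
    A = np.fft.ifft(half_disp * np.fft.fft(A))

out_intensity = np.abs(A) ** 2
rms = lambda I: np.sqrt(np.sum(t ** 2 * I) / np.sum(I))
print(f"peak power {in_intensity.max():.2f} -> {out_intensity.max():.2f}, "
      f"RMS width {rms(in_intensity):.2f} -> {rms(out_intensity):.2f}")
```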