960 results for Time-Frequency Methods


Relevance: 90.00%

Abstract:

Background: The somatosensory cortex has been inconsistently activated in pain studies, and the functional properties of subregions within this cortical area are poorly understood. To address this, we used magnetoencephalography (MEG), a brain imaging technique capable of recording changes in cortical neural activity in real time, to investigate the functional properties of the somatosensory cortex during different phases of the visceral pain experience.

Methods: In eight participants (4 male), 151-channel whole-cortex MEG was used to detect cortical neural activity during 25 trials lasting 20 seconds each. Each trial comprised four separate periods of 5 seconds in duration. During each period, a different visual cue was presented, indicating that period 1 = rest, period 2 = anticipation, period 3 = pain and period 4 = post pain. During period 3, participants received painful oesophageal balloon distensions (four at 1 Hz). Regions of cortical activity were identified using Synthetic Aperture Magnetometry (SAM), and time-frequency wavelet plots were generated by placing virtual electrodes in regions of interest within the somatosensory cortex.

Results: SAM analysis revealed significant activation within the primary (S1) and secondary (S2) somatosensory cortices. The time-frequency wavelet spectrograms showed that activation in S1 increased during the anticipation phase and continued during presentation of the stimulus. In S2, activation was tightly time- and phase-locked to the stimulus within the pain period. Activations in both regions occurred predominantly within the 10–15 Hz and 20–30 Hz frequency bands.

Discussion: These data are consistent with the role of S1 and S2 in the sensory-discriminative aspects of pain processing. Activation of S1 during anticipation and then pain may be linked to its proposed role in attentional as well as sensory processing. The stimulus-related phasic activity seen in S2 suggests that this region predominantly encodes information pertaining to the nature and intensity of the stimulus.
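
The kind of time-frequency wavelet plot described here can be sketched by convolving a virtual-electrode trace with complex Morlet wavelets. The following is a minimal illustration, not the authors' analysis pipeline; the function name, sampling rate and width parameter are assumptions for the example.

```python
import numpy as np

def morlet_tf_power(signal, fs, freqs, w=6.0):
    """Time-frequency power via convolution with complex Morlet wavelets.

    signal : 1-D array (e.g. a virtual-electrode time course)
    fs     : sampling rate in Hz
    freqs  : centre frequencies to analyse
    w      : Morlet width parameter in cycles (6 is a common choice)
    """
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = w / (2 * np.pi * f)               # temporal std of the Gaussian envelope
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))   # unit-energy normalisation
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same"))**2
    return power

# Toy example: a 25 Hz burst in the second half should light up the 20-30 Hz rows
fs = 600.0
t = np.arange(0, 2, 1/fs)
sig = np.sin(2*np.pi*25*t) * (t > 1.0)
p = morlet_tf_power(sig, fs, freqs=np.arange(10, 31, 5))
```

Stacking such power maps per frequency row gives the spectrogram in which anticipation-phase versus stimulus-locked activity can be read off along the time axis.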

Relevance: 90.00%

Abstract:

The objective of this work was to explore the performance of a recently introduced source extraction method, Functional Source Separation (FSS), in recovering induced oscillatory change responses from extra-cephalic magnetoencephalographic (MEG) signals. Unlike algorithms used to solve the inverse problem, FSS does not make any assumption about the underlying biophysical source model; instead, it uses task-related features (functional constraints) to estimate the source(s) of interest. FSS was compared with blind source separation (BSS) approaches such as Principal and Independent Component Analysis (PCA and ICA), which are not subject to any explicit forward solution or functional constraint but require source uncorrelatedness (PCA) or independence (ICA). A visual MEG experiment was analyzed, with signals recorded from six subjects viewing a set of static horizontal black/white square-wave grating patterns at different spatial frequencies. The beamforming technique Synthetic Aperture Magnetometry (SAM) was applied to localize task-related sources; the resulting spatial filters were used to automatically select BSS and FSS components in the spatial area of interest. Source spectral properties were investigated using Morlet-wavelet time-frequency representations, and significant task-induced changes were evaluated by means of a resampling technique; the resulting spectral behaviours in the gamma frequency band of interest (20–70 Hz), as well as the spatial frequency-dependent gamma reactivity, were quantified and compared among methods. Among the tested approaches, only FSS was able to estimate the expected sustained gamma activity enhancement in primary visual cortex throughout the whole duration of the stimulus presentation for all subjects, and to obtain sources comparable to invasively recorded data.
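
The contrast between the methods can be made concrete with PCA, the simplest of the three: it extracts mutually uncorrelated time courses but imposes no independence or functional constraint. A minimal sketch on synthetic channels-by-time data follows; the function name and the toy mixing matrix are assumptions, not part of the study.

```python
import numpy as np

def pca_components(X, k):
    """Return the k dominant principal-component time courses of
    channels-x-time data X. PCA only enforces uncorrelatedness among
    components; ICA would additionally demand statistical independence,
    and FSS would instead score candidate sources against a task-related
    functional constraint."""
    Xc = X - X.mean(axis=1, keepdims=True)       # remove per-channel mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return s[:k, None] * Vt[:k]                  # scaled component time courses

# Two sinusoidal "sources" mixed into four sensor channels
t = np.arange(0, 1, 1/1000)
S = np.vstack([np.sin(2*np.pi*8*t), np.sin(2*np.pi*30*t)])
A = np.array([[1.0, 0.2], [0.5, 1.0], [0.3, 0.7], [0.9, 0.1]])  # toy mixing
comps = pca_components(A @ S, k=2)
```

The recovered components are exactly uncorrelated by construction, but nothing guarantees that either corresponds to a physiologically meaningful source; that gap is what FSS's functional constraint addresses.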

Relevance: 90.00%

Abstract:

Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated from current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions.

This dissertation research investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on traffic detector data collected by radar traffic detectors installed along a freeway corridor. DNN comprises a class of neural networks that are particularly suitable for predicting variables like travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods for data imputation to account for the missing data usually encountered when collecting data with traffic detectors. It was also necessary to identify a method to estimate the travel time on the freeway corridor based on data collected by point traffic detectors. A new travel time estimation method, referred to as the Piecewise Constant Acceleration Based (PCAB) method, was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) can work as well as the PCAB method, and both outperform other methods. This study also compared the travel time prediction performance of three different DNN topologies with different memory setups. The results show that one DNN topology (the time-delay neural network) outperforms the other two DNN topologies for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) neural network topology that has been used in a number of previous studies for travel time prediction.
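
The defining feature of a time-delay neural network is the tapped delay line feeding a static network. The sketch below builds the lagged input matrix and, to stay self-contained, substitutes a linear least-squares readout for the trained network; the function names and the synthetic travel-time series are illustrative assumptions, not the dissertation's models or data.

```python
import numpy as np

def tapped_delay_matrix(series, n_lags):
    """Build the sliding-window inputs a time-delay network sees:
    row t = [x(t), ..., x(t+n_lags-1)], target = x(t+n_lags)."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

# Synthetic smoothly varying travel times (minutes) standing in for detector-derived data
travel_times = 10 + np.sin(np.linspace(0, 6*np.pi, 200))
X, y = tapped_delay_matrix(travel_times, n_lags=6)

# Linear readout on the delay line as a stand-in for the trained network
Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w
```

In the actual TDNN the readout is a nonlinear hidden layer, but the memory structure — prediction from a fixed window of recent observations — is exactly this delay line.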

Relevance: 90.00%

Abstract:

The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. 
The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
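
The Mid-Point method referenced above can be illustrated in a few lines: each inter-detector link is split at its midpoint and each half is assigned the spot speed of the nearer detector. This is a simplified sketch (uniform speeds per interval, SI units); the function signature is an assumption.

```python
def midpoint_travel_time(spacings_m, speeds_ms):
    """Mid-point travel time estimate from point detectors.

    spacings_m[i] : distance in metres between detector i and detector i+1
    speeds_ms[i]  : spot speed in m/s measured at detector i
    Each link is split at its midpoint; the upstream half is traversed at
    the upstream detector's speed, the downstream half at the downstream one's.
    """
    total = 0.0
    for d, (v_up, v_down) in zip(spacings_m, zip(speeds_ms, speeds_ms[1:])):
        total += (d / 2) / v_up + (d / 2) / v_down
    return total
```

For example, a 1000 m link with 20 m/s upstream and 25 m/s downstream gives 500/20 + 500/25 = 45 s. The hybrid models switch away from this speed-based estimate when clustering analysis flags congested or queued conditions.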

Relevance: 90.00%

Abstract:

Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperature, which not only raises packaging/cooling costs but also degrades the performance, reliability and life span of computing systems. Moreover, high chip temperature also greatly increases leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems.

In this dissertation, we address the power/thermal issues from a system-level perspective. Specifically, we seek to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We then proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform. Finally, we conclude the dissertation with a discussion of future extensions of our research.
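
Work in this area commonly rests on a lumped RC thermal model, in which chip temperature follows dT/dt = aP − bT and dynamic power grows roughly cubically with processor speed under DVS. The forward-Euler sketch below (normalised units; the constants and the cubic power law are illustrative assumptions, and leakage is omitted) shows how a speed schedule maps to a peak temperature — the quantity the scheduling methods above constrain.

```python
def simulate_temp(speeds, dt=0.01, a=1.0, b=0.5, T0=0.0):
    """Forward-Euler simulation of a lumped RC thermal model
    dT/dt = a*P - b*T with dynamic power P = s**3 (normalised units).
    speeds gives the processor speed in each dt-long step.
    Returns (final temperature, peak temperature over the schedule)."""
    T, peak = T0, T0
    for s in speeds:
        P = s ** 3                     # cubic dynamic-power law under DVS
        T += dt * (a * P - b * T)
        peak = max(peak, T)
    return T, peak

speeds = [1.0] * 2000                  # run 20 s at full speed
T_end, T_peak = simulate_temp(speeds)  # steady state approaches a*s**3/b = 2.0 here
```

Lowering the speed for part of the schedule trades execution time for a lower peak of this trajectory, which is the basic lever that DVS-based peak-temperature scheduling exploits.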


Relevance: 90.00%

Abstract:

Analogous to sunspots and solar photospheric faculae, whose visibility is modulated by stellar rotation, stellar active regions consist of cool spots and bright faculae caused by the magnetic field of the star. Such starspots are now well established as major tracers used to estimate the stellar rotation period, but their dynamic behavior may also be used to analyze other relevant phenomena, such as the presence of magnetic activity and its cycles. To calculate the stellar rotation period, identify the presence of active regions, and investigate whether the star exhibits differential rotation, we apply two methods: a wavelet analysis and a spot model. The wavelet procedure is also applied here to study pulsation, in order to identify specific signatures of this particular stellar variability for different types of pulsating variable stars. The wavelet transform has been used as a powerful tool for treating several problems in astrophysics. In this work, we show that time-frequency analysis of stellar light curves using the wavelet transform is a practical tool for identifying rotation, magnetic activity, and pulsation signatures. We present the wavelet spectral composition and multiscale variations of the time series for four classes of stars: targets dominated by magnetic activity, stars with transiting planets, those with binary transits, and pulsating stars. We applied the 6th-order Morlet wavelet, which offers high time and frequency resolution. By applying the wavelet transform to the signal, we obtain the local and global wavelet power spectra. The first is interpreted as the energy distribution of the signal in time-frequency space, and the second is obtained by time integration of the local map. Since the wavelet transform is a useful mathematical tool for nonstationary signals, this technique applied to Kepler and CoRoT light curves allows us to clearly identify particular signatures of different phenomena. In particular, patterns were identified for the temporal evolution of the rotation period and other periodicities due to active regions affecting these light curves. In addition, a beat-pattern signature in the local wavelet map of pulsating stars over the entire time span was also detected.

The second method is based on starspot detection during transits of an extrasolar planet orbiting its host star. As a planet eclipses its parent star, we can detect physical phenomena on the surface of the star. If a dark spot on the disk of the star is partially or totally eclipsed, the integrated stellar luminosity will increase slightly. By analyzing the transit light curve it is possible to infer the physical properties of starspots, such as size, intensity, position and temperature. By detecting the same spot on consecutive transits, it is possible to obtain additional information, such as the stellar rotation period at the planetary transit latitude, differential rotation, and magnetic activity cycles. Transit observations of CoRoT-18 and Kepler-17 were used to implement this model.
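
As a cross-check on rotation periods derived from the global wavelet spectrum, the period can also be read off the light curve's autocorrelation function, a standard complementary technique for spot-modulated light curves. A minimal sketch on a synthetic spotted-star light curve follows; the function name, cadence and 12-day toy period are assumptions, not values from this work.

```python
import numpy as np

def rotation_period_acf(flux, cadence_days, max_lag_days=50.0):
    """Estimate a rotation period as the lag of the highest autocorrelation
    peak beyond the first zero crossing."""
    f = flux - flux.mean()
    acf = np.correlate(f, f, mode="full")[len(f) - 1:]   # lags 0, 1, 2, ...
    acf /= acf[0]
    max_lag = int(max_lag_days / cadence_days)
    k0 = int(np.argmax(acf[:max_lag] < 0))      # first zero crossing
    k = k0 + int(np.argmax(acf[k0:max_lag]))    # strongest peak beyond it
    return k * cadence_days

# 90 d of synthetic flux: 12-day spot modulation plus photometric noise
rng = np.random.default_rng(0)
t = np.arange(0, 90, 0.02)                      # roughly 30-min cadence
flux = 1 - 0.01*np.cos(2*np.pi*t/12) + 0.001*rng.standard_normal(t.size)
```

Agreement between this lag-domain estimate and the dominant period in the global wavelet spectrum is a useful sanity check before interpreting finer features such as differential rotation.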

Relevance: 90.00%

Abstract:

Costs related to inventory usually account for a significant share of a company's total assets. Despite this, companies in general pay little attention to inventory management, even though the benefits of effective inventory control are obvious in terms of less tied-up capital, increased customer satisfaction and a better working environment. Permobil AB, Timrå is in an intense period of revenue growth; the production unit is aiming for a 30% increase in output over the next two years. To make this possible, the company has to improve the way it distributes and handles material. The purpose of the study is to provide useful information and concrete proposals for action, so that the company can build a strategy for an effective and sustainable inventory management solution. Alternative forecasting methods are suggested in order to reach a more nuanced view of the different articles and how they should be managed. The Analytic Hierarchy Process (AHP) was used to let specially selected persons decide the criteria by which articles should be valued. The criteria they agreed on were annual volume value, lead time, frequency rate and purchase price. The other proposed method was a two-dimensional model in which annual volume value and frequency were the criteria determining the class in which an article should be placed. Both methods resulted in significant changes compared with the current solution. For the spare-part inventory, different forecasting methods were tested and compared with the current solution. It turned out that the current forecasting method performed worse than both the moving average and exponential smoothing with trend. The small sample of ten random articles is not big enough to reject the current solution, but the result is still reason enough for the company to check the quality of its forecasts.
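
Exponential smoothing with trend (Holt's method), which outperformed the current forecasts here, can be sketched in a few lines. The smoothing constants and the demand series below are illustrative assumptions, not the study's data.

```python
def holt_forecasts(series, alpha=0.3, beta=0.1):
    """One-step-ahead forecasts via exponential smoothing with trend (Holt).
    Returns forecasts where forecasts[t] is the prediction for series[t]
    (forecasts[0] is None: no forecast exists for the first observation)."""
    level, trend = series[0], series[1] - series[0]   # simple initialisation
    forecasts = [None, level + trend]
    for x in series[1:-1]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)   # update level with new demand
        trend = beta * (level - prev) + (1 - beta) * trend  # update trend estimate
        forecasts.append(level + trend)
    return forecasts

demand = list(range(10, 50, 2))     # steadily growing spare-part demand
f = holt_forecasts(demand)
```

A k-period moving average, by contrast, lags a trending series by about (k+1)/2 periods, which is why Holt's method tends to win on articles with trending demand.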

Relevance: 90.00%

Abstract:

Biofilms are the primary cause of clinical bacterial infections and are impervious to typical amounts of antibiotics, necessitating very high doses for treatment. Therefore, it is highly desirable to develop new alternate methods of treatment that can complement or replace existing approaches using significantly lower doses of antibiotics. Current standards for studying biofilms are based on end-point studies that are invasive and destroy the biofilm during characterization. This dissertation presents the development of a novel real-time sensing and treatment technology to aid in the non-invasive characterization, monitoring and treatment of bacterial biofilms. The technology is demonstrated through the use of a high-throughput bifurcation based microfluidic reactor that enables simulation of flow conditions similar to indwelling medical devices. The integrated microsystem developed in this work incorporates the advantages of previous in vitro platforms while attempting to overcome some of their limitations. Biofilm formation is extremely sensitive to various growth parameters that cause large variability in biofilms between repeated experiments. In this work we investigate the use of microfluidic bifurcations for the reduction in biofilm growth variance. The microfluidic flow cell designed here spatially sections a single biofilm into multiple channels using microfluidic flow bifurcation. Biofilms grown in the bifurcated device were evaluated and verified for reduced biofilm growth variance using standard techniques like confocal microscopy. This uniformity in biofilm growth allows for reliable comparison and evaluation of new treatments with integrated controls on a single device. Biofilm partitioning was demonstrated using the bifurcation device by exposing three of the four channels to various treatments. 
We studied a novel bacterial biofilm treatment independent of traditional antibiotics using only small molecule inhibitors of bacterial quorum sensing (analogs) in combination with low electric fields. Studies using the bifurcation-based microfluidic flow cell integrated with real-time transduction methods and macro-scale end-point testing of the combination treatment showed a significant decrease in biomass compared to the untreated controls and well-known treatments such as antibiotics. To understand the possible mechanism of action of electric field-based treatments, fundamental treatment efficacy studies focusing on the effect of the energy of the applied electrical signal were performed. It was shown that the total energy and not the type of the applied electrical signal affects the effectiveness of the treatment. The linear dependence of the treatment efficacy on the applied electrical energy was also demonstrated. The integrated bifurcation-based microfluidic platform is the first microsystem that enables biofilm growth with reduced variance, as well as continuous real-time threshold-activated feedback monitoring and treatment using low electric fields. The sensors detect biofilm growth by monitoring the change in impedance across the interdigitated electrodes. Using the measured impedance change and user inputs provided through a convenient and simple graphical interface, a custom-built MATLAB control module intelligently switches the system into and out of treatment mode. Using this self-governing microsystem, in situ biofilm treatment based on the principles of the bioelectric effect was demonstrated by exposing two of the channels of the integrated bifurcation device to low doses of antibiotics.
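
The threshold-activated feedback described above can be sketched as a hysteresis controller on the measured impedance change. This is an illustrative stand-in, not the MATLAB control module itself; the thresholds, function name and the hysteresis (two thresholds rather than one, to avoid rapid toggling) are assumptions.

```python
def treatment_controller(impedance_changes, on_threshold, off_threshold):
    """Threshold-activated feedback: switch the low-electric-field treatment
    on when the relative impedance change (a proxy for biofilm growth on the
    interdigitated electrodes) exceeds on_threshold, and off once it falls
    back below off_threshold. Returns the treatment state per sample."""
    treating = False
    states = []
    for dz in impedance_changes:
        if not treating and dz >= on_threshold:
            treating = True
        elif treating and dz <= off_threshold:
            treating = False
        states.append(treating)
    return states
```

For example, with on/off thresholds of 0.45 and 0.2, a rising-then-falling impedance trace switches treatment on at the peak and keeps it on until the signal has clearly recovered.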


Relevance: 90.00%

Abstract:

BACKGROUND: Errors in the decision-making process are probably the main threat to patient safety in the prehospital setting. One reason may be the change of focus in prehospital care from the traditional "scoop and run" practice to more complex assessment, a shift that places real demands on clinical judgment. The use of Clinical Guidelines (CG) is a common strategy for cognitively supporting prehospital providers. However, studies suggest that compliance with CG in the prehospital setting is in some cases low. One possible way to increase compliance with guidelines could be to implement them in a Computerized Decision Support System (CDSS). There is limited evidence on the effect of CDSS in the prehospital setting. The present study aimed to evaluate the effect of a CDSS on compliance with the basic assessment process described in the prehospital CG, and its effect on On Scene Time (OST). METHODS: In this time-series study, data from prehospital medical records were collected on a weekly basis during the study period. Medical records were rated with the guidance of a rating protocol, and data on OST were collected. The difference between the baseline and the intervention period was assessed by segmented regression. RESULTS: In this study, 371 patients were included. Compliance with the assessment process described in the prehospital CG was stable during the baseline period. Following the introduction of the CDSS, compliance rose significantly, and the post-intervention slope was stable. The CDSS had no significant effect on OST. CONCLUSIONS: The use of a CDSS in prehospital care can increase compliance with the assessment process for patients with a medical emergency. This study was unable to demonstrate any effect on OST.
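
Segmented regression for an interrupted time series, as used in this study, fits a baseline level and slope plus a level change and slope change at the intervention point. A minimal least-squares sketch on synthetic weekly compliance scores follows; the variable names, week counts and jump size are illustrative assumptions, not the study's data.

```python
import numpy as np

def segmented_fit(t, y, t0):
    """Segmented (interrupted time-series) regression.
    Model: y = b0 + b1*t + b2*post + b3*post*(t - t0), where post indicates
    the intervention period. Returns
    [baseline intercept, baseline slope, level change, slope change]."""
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, post * (t - t0)])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

# Toy weekly compliance scores: flat baseline, step up after CDSS introduction
t = np.arange(40, dtype=float)
y = 60.0 + np.where(t >= 20, 15.0, 0.0)
b0, b1, jump, dslope = segmented_fit(t, y, t0=20)
```

In this toy case the fit recovers a 15-point level change with zero baseline and post-intervention slope change, mirroring the paper's finding of a significant rise with a stable post-intervention slope.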

Relevance: 80.00%

Abstract:

Health economic evaluations require estimates of expected survival from patients receiving different interventions, often over a lifetime. However, data on the patients of interest are typically only available for a much shorter follow-up time, from randomised trials or cohorts. Previous work showed how to use general population mortality to improve extrapolations of the short-term data, assuming a constant additive or multiplicative effect on the hazards for all-cause mortality for study patients relative to the general population. A more plausible assumption may be a constant effect on the hazard for the specific cause of death targeted by the treatments. To address this problem, we use independent parametric survival models for cause-specific mortality among the general population. Because causes of death are unobserved for the patients of interest, a polyhazard model is used to express their all-cause mortality as a sum of latent cause-specific hazards. Assuming proportional cause-specific hazards between the general and study populations then allows us to extrapolate mortality of the patients of interest to the long term. A Bayesian framework is used to jointly model all sources of data. By simulation, we show that ignoring cause-specific hazards leads to biased estimates of mean survival when the proportion of deaths due to the cause of interest changes through time. The methods are applied to an evaluation of implantable cardioverter defibrillators for the prevention of sudden cardiac death among patients with cardiac arrhythmia. After accounting for cause-specific mortality, substantial differences are seen in estimates of life years gained from implantable cardioverter defibrillators.
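
The polyhazard construction — all-cause survival obtained from a sum of latent cause-specific cumulative hazards, with a proportional effect applied only to the targeted cause — can be sketched with Weibull components. Parameter values and function names below are illustrative assumptions, not estimates from the evaluation.

```python
import math

def weibull_cum_hazard(t, shape, scale):
    """Weibull cumulative hazard H(t) = (t/scale)**shape."""
    return (t / scale) ** shape

def polyhazard_survival(t, causes, hr_on_cause=None):
    """All-cause survival under a polyhazard model: the all-cause cumulative
    hazard is the sum of latent cause-specific cumulative hazards.
    hr_on_cause, if given, is a (cause_index, hazard_ratio) pair applying a
    proportional-hazards multiplier to that cause only, as for a study
    population relative to the general population."""
    H = 0.0
    for i, (shape, scale) in enumerate(causes):
        h = weibull_cum_hazard(t, shape, scale)
        if hr_on_cause is not None and hr_on_cause[0] == i:
            h *= hr_on_cause[1]
        H += h
    return math.exp(-H)

causes = [(1.5, 30.0), (2.5, 40.0)]   # toy cardiac vs other-cause components
s_gen = polyhazard_survival(20.0, causes)
s_icd = polyhazard_survival(20.0, causes, hr_on_cause=(0, 0.5))  # halve cause-0 hazard
```

Because the treatment effect scales only the targeted cause's hazard, extrapolated survival gains shrink as the share of deaths from other causes grows — the mechanism behind the bias the simulation study demonstrates.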

Relevance: 80.00%

Abstract:

OBJECTIVE: To describe the frequency of fruit and vegetable consumption among adults and to analyze the factors associated with this consumption. METHODS: Cross-sectional study carried out between October and December 2003 in the municipality of São Paulo, Brazil. Telephone interviews were conducted with a probabilistic sample of the adult population (>18 years) living in households served by fixed telephone lines, totaling 1,267 women and 855 men. The frequency of fruit and vegetable consumption was measured using a script of short, simple questions. To assess the factors associated with consumption, multivariate hierarchical linear regression analysis was performed, with sociodemographic variables at the first hierarchical level, behavioral variables at the second, and variables related to dietary patterns at the third. RESULTS: The frequency of fruit and vegetable consumption was higher among women. For both sexes, the frequency of consumption increased with the individual's age and schooling. Women who reported having dieted in the previous year consumed more fruits and vegetables. Consumption of foods indicative of an unhealthy dietary pattern, such as sugars and fats, was inversely associated with fruit and vegetable consumption in both sexes. CONCLUSIONS: Fruit and vegetable consumption among the adult population of São Paulo was higher among women and was influenced by age, schooling and dieting.

Relevance: 80.00%

Abstract:

The canonical representation of speech constitutes a perfect reconstruction (PR) analysis-synthesis system. Its parameters are the autoregressive (AR) model coefficients, the pitch period, and the voiced and unvoiced components of the excitation represented as transform coefficients. Each set of parameters may be operated on independently. A time-frequency unvoiced excitation (TFUNEX) model is proposed that has high time resolution and selective frequency resolution. An improved time-frequency fit is obtained by using the clustering of pitch-synchronous transform tracks, defined in the modulation transform domain, for anti-aliasing cancellation. The TFUNEX model delivers high-quality speech while compressing the unvoiced excitation representation about 13-fold relative to its raw transform-coefficient representation for wideband speech.
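
The AR coefficients of the analysis stage are classically obtained with the autocorrelation method and the Levinson-Durbin recursion. A minimal sketch follows; the function name and the AR(2) test signal are assumptions, and production speech codecs additionally apply windowing and bandwidth expansion.

```python
import numpy as np

def lpc(x, order):
    """AR (LPC) coefficients by the autocorrelation method via the
    Levinson-Durbin recursion. Returns a with a[0] = 1 such that
    x[n] ~= -sum(a[k] * x[n-k] for k in 1..order), plus the final
    prediction-error power."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:][:order + 1]  # r[0..order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]       # reflection numerator
        k = -acc / err                            # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= 1.0 - k * k
    return a, err

# Fit an AR(2) process x[n] = 0.5*x[n-1] - 0.3*x[n-2] + e[n]
rng = np.random.default_rng(1)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for n in range(2, len(e)):
    x[n] = 0.5 * x[n - 1] - 0.3 * x[n - 2] + e[n]
a, err = lpc(x, 2)
```

With a[0] = 1, the synthesis filter 1/A(z) regenerates the signal from the excitation, which is the perfect-reconstruction property the abstract refers to.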