940 results for Time-frequency analysis
Abstract:
The original cefepime product was withdrawn from the Swiss market in January 2007, and replaced by a generic 10 months later. The goals of the study were to assess the impact of this cefepime shortage on the use and costs of alternative broad-spectrum antibiotics, on antibiotic policy, and on resistance of Pseudomonas aeruginosa towards carbapenems, ceftazidime and piperacillin-tazobactam. A generalized regression-based interrupted time series model assessed how much the shortage changed the monthly use and costs of cefepime and of selected alternative broad-spectrum antibiotics (ceftazidime, imipenem-cilastatin, meropenem, piperacillin-tazobactam) in 15 Swiss acute care hospitals from January 2005 to December 2008. Resistance of P. aeruginosa was compared before and after the cefepime shortage. There was a statistically significant increase in the consumption of piperacillin-tazobactam in hospitals with definitive interruption of cefepime supply, and of meropenem in hospitals with transient interruption of cefepime supply. Consumption of each alternative antibiotic tended to increase during the cefepime shortage and to decrease when the cefepime generic was released. These shifts were associated with significantly higher overall costs. There was no significant change in hospitals with uninterrupted cefepime supply. The alternative antibiotics for which an increase in consumption showed the strongest association with a progression of resistance were the carbapenems. The use of alternative antibiotics after cefepime withdrawal was associated with a significant increase in piperacillin-tazobactam and meropenem use and in overall costs, and with a decrease in susceptibility of P. aeruginosa in hospitals. This warrants caution with regard to shortages and withdrawals of antibiotics.
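The interrupted time-series model described above can be sketched with ordinary least squares: a level term for the interruption and a slope-change term after it. The series, dates and coefficients below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical monthly antibiotic consumption; all numbers are made up.
rng = np.random.default_rng(0)
months = np.arange(48.0)                    # Jan 2005 .. Dec 2008
shortage = (months >= 24).astype(float)     # supply interrupted from month 24 on
use = 10 + 0.05 * months + 4.0 * shortage + rng.normal(0, 0.5, 48)

# Segmented regression:
# use_t = b0 + b1*t + b2*I(shortage) + b3*(t - 24)*I(shortage) + e_t
X = np.column_stack([np.ones(48), months, shortage, (months - 24) * shortage])
beta, *_ = np.linalg.lstsq(X, use, rcond=None)
level_change = beta[2]   # immediate jump in consumption at the interruption
```

The coefficient on the indicator gives the immediate level change; the interaction term captures any change in trend after the interruption.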
Abstract:
Background: oscillatory activity, which can be separated into background and oscillatory burst pattern activities, is thought to reflect local synchronies of neural assemblies. Oscillatory burst events should consequently play a specific functional role, distinct from background EEG activity, especially in cognitive tasks (e.g. working memory tasks), binding mechanisms and perceptual dynamics (e.g. visual binding), or in clinical contexts (e.g. effects of brain disorders). However, extracting oscillatory events from single trials with a reliable and consistent method is not a simple task. Results: In this work we propose a user-friendly stand-alone toolbox which, in a reasonable time, fits a bump time-frequency model to the wavelet representations of a set of signals. The software is provided with a Matlab toolbox which can compute wavelet representations before automatically calling the stand-alone application. Conclusion: The tool is publicly available as freeware at: http://www.bsp.brain.riken.jp/bumptoolbox/toolbox_home.html
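As a rough illustration of the wavelet representations the toolbox starts from (not the bump-modelling step itself), a complex-Morlet time-frequency map can be computed in plain NumPy; the signal, sampling rate and frequency set below are invented.

```python
import numpy as np

def morlet_tf(signal, fs, freqs, w=6.0):
    """Time-frequency power map via complex Morlet wavelets (NumPy sketch)."""
    out = np.empty((len(freqs), len(signal)), dtype=complex)
    for i, f in enumerate(freqs):
        sigma = w / (2 * np.pi * f)                 # wavelet time spread
        tau = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * tau) * np.exp(-tau**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))   # unit energy
        out[i] = np.convolve(signal, wavelet, mode="same")
    return np.abs(out)**2

fs = 250.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) * (t > 1)     # a 10 Hz burst in the second half
power = morlet_tf(sig, fs, np.array([5.0, 10.0, 20.0]))
```

On such a map, an oscillatory burst shows up as a localized blob of power, which is what a bump model then parameterizes.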
Abstract:
[Abstract]
Abstract:
The Wigner higher order moment spectra (WHOS) are defined as extensions of the Wigner-Ville distribution (WD) to higher order moment spectra domains. A general class of time-frequency higher order moment spectra is also defined in terms of arbitrary higher order moments of the signal as generalizations of Cohen's general class of time-frequency representations. The properties of the general class of time-frequency higher order moment spectra can be related to the properties of WHOS which are, in fact, extensions of the properties of the WD. Discrete time and frequency Wigner higher order moment spectra (DTF-WHOS) distributions are introduced for signal processing applications and are shown to be implemented with two FFT-based algorithms. One application is presented where the Wigner bispectrum (WB), which is a WHOS in the third-order moment domain, is utilized for the detection of transient signals embedded in noise. The WB is compared with the WD in terms of simulation examples and analysis of real sonar data. It is shown that better detection schemes can be derived, in low signal-to-noise ratio, when the WB is applied.
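A minimal discrete Wigner-Ville distribution, the second-order member of the family extended in this paper, can be sketched as follows. This is a textbook-style implementation for an analytic signal, not the paper's DTF-WHOS algorithms; the test tone is invented.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal
    (plain-NumPy sketch; one FFT over the lag variable per time sample)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.empty((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # lags available at this time sample
        kernel = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real       # bin k <-> frequency k*fs/(2N)
    return W

# A complex exponential at normalized frequency 0.125 concentrates at bin 2*N*0.125.
N = 64
x = np.exp(2j * np.pi * 0.125 * np.arange(N))
W = wigner_ville(x)
```

Because the product term advances in steps of two samples, the frequency axis is scaled by a factor of two relative to an ordinary DFT, as the bin comment notes.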
Abstract:
Chaotic behaviour is one of the hardest problems that can occur in nonlinear dynamical systems with severe nonlinearities. It makes the system's responses unpredictable, so that they behave similarly to noise, and in some applications it should be avoided. One approach to detecting chaotic behaviour is finding the Lyapunov exponent by examining the dynamical equation of the system, which requires a model of the system. The goal of this study is the diagnosis of chaotic behaviour by exploring only the data (signal), without using any dynamical model of the system. In this work two methods are tested on time series data collected from the sensors of an AMB (Active Magnetic Bearing) system. The first method finds the largest Lyapunov exponent by the Rosenstein method. The second method is the 0-1 test for identifying chaotic behaviour. These two methods are used to detect whether the data are chaotic. Using the Rosenstein method requires finding the minimum embedding dimension, for which the Cao method is used. The Cao method does not give just the minimum embedding dimension; it also gives the order of the nonlinear dynamical equation of the system and shows how the system's signals are corrupted with noise. At the end of this research a test called the runs test is introduced to show that the data are not excessively noisy.
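The 0-1 test mentioned above can be sketched as follows. This is a simplified version of the Gottwald-Melbourne statistic applied to an invented logistic-map series and a sinusoid, not the AMB sensor data.

```python
import numpy as np

def zero_one_test(x, c):
    """Gottwald-Melbourne 0-1 test for chaos (simplified sketch).
    Returns K ~ 0 for regular dynamics and K ~ 1 for chaotic dynamics."""
    N = len(x)
    j = np.arange(1, N + 1)
    p = np.cumsum(x * np.cos(c * j))             # translation variables
    q = np.cumsum(x * np.sin(c * j))
    ncut = N // 10                               # mean-square displacement lags
    M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                  for n in range(1, ncut + 1)])
    return np.corrcoef(np.arange(1, ncut + 1), M)[0, 1]

# Chaotic series: logistic map at r = 4; regular series: a pure sinusoid.
x = np.empty(2000)
x[0] = 0.3
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
cs = np.random.default_rng(1).uniform(0.5, 3.0, 20)   # median over random c
K_chaotic = np.median([zero_one_test(x, c) for c in cs])
K_regular = np.median([zero_one_test(np.sin(0.7 * np.arange(2000.0)), c) for c in cs])
```

For chaotic data the (p, q) trajectory diffuses, so the mean-square displacement grows linearly in the lag and K approaches 1; for regular data it stays bounded and K stays near 0. Taking the median over random values of c guards against resonances.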
Abstract:
The aim of the present study was to determine the effect of volume and composition of fluid replacement on the physical performance of male football referees. Ten referees were evaluated during three official matches. In one match the participants were asked to consume mineral water ad libitum, and in the others they consumed a pre-determined volume of mineral water or a carbohydrate electrolyte solution (6.4% carbohydrate and 22 mM Na+) equivalent to 1% of their baseline body mass (half before the match and half during the interval). Total water loss, sweat rate and match physiological performance were measured. When rehydrated ad libitum (pre-match and at half time) participants lost 1.97 ± 0.18% of their pre-match body mass (2.14 ± 0.19 L). This parameter was significantly reduced when they consumed a pre-determined volume of fluid. Sweat rate was significantly reduced when the referees ingested a pre-determined volume of a carbohydrate electrolyte solution, 0.72 ± 0.12 vs 1.16 ± 0.11 L/h ad libitum. The high percentage (74.1%) of movements at low speed (walking, jogging) observed when they ingested fluid ad libitum was significantly reduced to 71% with mineral water and to 69.9% with carbohydrate solution. An increase in percent movement expended in backward running was observed when they consumed a pre-determined volume of carbohydrate solution, 7.7 ± 0.5 vs 5.5 ± 0.5% ad libitum. The improved hydration status achieved with the carbohydrate electrolyte solution reduced the length of time spent in activities at low-speed movements and increased the time spent in activities demanding high-energy expenditure.
Abstract:
Findings on the effects of weather on health, especially the effects of ambient temperature on overall morbidity, remain inconsistent. We conducted a time series study to examine the acute effects of meteorological factors (mainly air temperature) on daily hospital outpatient admissions for cardiovascular disease (CVD) in Zunyi City, China, from January 1, 2007 to November 30, 2009. We used the generalized additive model with penalized splines to analyze hospital outpatient admissions, climatic parameters, and covariate data. Results show that, in Zunyi, air temperature was associated with hospital outpatient admission for CVD. When air temperature was less than 10°C, hospital outpatient admissions for CVD increased 1.07-fold with each increase of 1°C, and when air temperature was more than 10°C, an increase in air temperature by 1°C was associated with a 0.99-fold decrease in hospital outpatient admissions for CVD over the previous year. Our analyses provided statistically significant evidence that in China meteorological factors have adverse effects on the health of the general population. Further research with consistent methodology is needed to clarify the magnitude of these effects and to show which populations and individuals are vulnerable.
Abstract:
This study aimed to examine the time course of endothelial function after a single handgrip exercise session combined with blood flow restriction in healthy young men. Nine participants (28±5.8 years) completed a single session of bilateral dynamic handgrip exercise (20 min with 60% of the maximum voluntary contraction). To induce blood flow restriction, a cuff was placed 2 cm below the antecubital fossa in the experimental arm. This cuff was inflated to 80 mmHg before initiation of exercise and maintained through the duration of the protocol. The experimental arm and control arm were randomly selected for all subjects. Brachial artery flow-mediated dilation (FMD) and blood flow velocity profiles were assessed using Doppler ultrasonography before initiation of the exercise, and at 15 and 60 min after its cessation. Blood flow velocity profiles were also assessed during exercise. There was a significant increase in FMD 15 min after exercise in the control arm compared with before exercise (64.09%±16.59%, P=0.001), but there was no change in the experimental arm (-12.48%±12.64%, P=0.252). FMD values at 15 min post-exercise were significantly higher for the control arm in comparison to the experimental arm (P=0.004). FMD returned to near baseline values at 60 min after exercise, with no significant difference between arms (P=0.424). A single handgrip exercise bout provoked an acute increase in FMD 15 min after exercise, returning to near baseline values at 60 min. This response was blunted by the addition of an inflated pneumatic cuff to the exercising arm.
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of using the area under the curve (AUC) as a biomarker in the therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the method of calculating the AUC, its use in the clinical setting is impractical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for a satisfactory estimation of the AUC. However, for these methodologies to be applied effectively, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations collected over a short sampling period. Moreover, particular attention should be paid to ensuring their adequate development and validation. It is also important to note that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS; yet, to date, this impact had not been studied. This doctoral thesis addresses these issues in order to allow a precise and practical estimation of the AUC. The studies were carried out in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression approaches as well as population pharmacokinetic (Pop-PK) analysis were used constructively to develop and adequately validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use in the context of AUC estimation.
The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach covering diverse, realistic scenarios of potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. Once the Pop-PK analysis was performed, a two-compartment structural model with a lag time was retained. However, the final model (notably, with covariates) did not improve the performance of B-LSS compared with the structural models (without covariates). Furthermore, we demonstrated that B-LSS perform better for the AUC derived from simulated concentrations that exclude residual errors, which we termed the "underlying AUC", than for the observed AUC calculated directly from measured concentrations. Finally, our results showed that irregularity in blood sample collection times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved. We also showed that sampling-time errors committed at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even though different R-LSS can perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely.
In fact, adequate consideration of the impact of these errors can lead to a more reliable selection and use of R-LSS. Through an in-depth investigation of different aspects underlying limited sampling strategies, this thesis has provided notable methodological improvements and proposed new avenues to ensure their reliable and informed use, while promoting their fit with clinical practice.
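The regression-based limited sampling idea (R-LSS) can be sketched as follows: regress the full trapezoidal AUC on a few early concentrations. The one-compartment profiles, sampling times and coefficients below are invented for illustration and are not the thesis's CsA data or models.

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 9.0, 12.0])  # h post-dose

# Invented one-compartment oral profiles (dose-normalized).
n = 40
ka = rng.uniform(1.0, 2.0, n)          # absorption rate constants (1/h)
ke = rng.uniform(0.1, 0.3, n)          # elimination rate constants (1/h)
profiles = np.array([k1 / (k1 - k2) * (np.exp(-k2 * times) - np.exp(-k1 * times))
                     for k1, k2 in zip(ka, ke)])

# Reference AUC0-12 by the linear trapezoidal rule.
auc_full = ((profiles[:, 1:] + profiles[:, :-1]) * np.diff(times) / 2).sum(axis=1)

# R-LSS: AUC ~ b0 + b1*C(1h) + b2*C(2h) + b3*C(4h), all within 4 h post-dose.
X = np.column_stack([np.ones(n), profiles[:, 2], profiles[:, 3], profiles[:, 5]])
beta, *_ = np.linalg.lstsq(X, auc_full, rcond=None)
rel_err = np.abs(X @ beta - auc_full) / auc_full
```

A B-LSS would instead combine the same few concentrations with a Pop-PK model as a Bayesian prior; the sampling-time-error question studied in the thesis amounts to perturbing `times` away from these nominal values and re-evaluating `rel_err`.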
Abstract:
We report a time-resolved study of C2 emission from a laser-produced carbon plasma in the presence of ambient helium gas. The 1.06 µm radiation from a Nd:YAG laser was focused onto a graphite target, where it produced a transient plasma. We observed a double-peak structure in the time profile of the C2 species. The twin peaks were observed only above a threshold laser fluence. It is proposed that the faster velocity component in the temporal profiles originates mainly from recombination processes. The dependence of the double-peak intensity distribution on laser fluence and ambient gas is also reported.
Abstract:
The emission bands of CN molecules have been analysed in the plasma generated from a graphite target irradiated with 1.06 µm radiation pulses from a Q-switched Nd:YAG laser. Depending on the position of the sampled volume of the plasma plume, the intensity distribution in the emission spectra is found to change drastically. The vibrational temperature and population distribution in the different vibrational levels have been studied as a function of distance from the target for different time delays with respect to the incidence of the laser pulse. The translational temperature calculated from time of flight is found to be higher than the observed vibrational temperature for CN molecules, and the reason for this is explained.
Abstract:
Sonar signal processing comprises a large number of signal processing algorithms implementing functions such as Target Detection, Localisation, Classification, Tracking and Parameter Estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, primarily meant for stationary signals. Interestingly, the signals received by sonar sensors are often non-stationary, and hence processing methods capable of handling the non-stationarity will fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are among the best DSP tools for non-stationary signal processing, with which one can analyze signals in the time and frequency domains simultaneously. But, other than the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in the fields of speech and image processing and biomedical applications, but not many in sonar processing. A structured effort to fill these lacunae by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, Fractional Fourier Transform, Wigner-Ville Distribution and Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each of the applications.
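The STFT, the one TFM the thesis notes is already in wide use, can be sketched in plain NumPy; the sonar-like test signal below is invented.

```python
import numpy as np

def stft_mag(x, fs, nperseg=128, hop=64):
    """Magnitude short-time Fourier transform with a sliding Hann window
    (plain-NumPy sketch of the simplest time-frequency method)."""
    win = np.hanning(nperseg)
    starts = np.arange(0, len(x) - nperseg + 1, hop)
    frames = np.array([x[s:s + nperseg] * win for s in starts])
    spec = np.abs(np.fft.rfft(frames, axis=1))        # one spectrum per frame
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    frame_times = (starts + nperseg // 2) / fs        # frame centres in seconds
    return frame_times, freqs, spec

# A nonstationary "ping": 100 Hz in the first second, 300 Hz in the second.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.where(t < 1, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 300 * t))
frame_times, freqs, spec = stft_mag(x, fs)
f_early = freqs[np.argmax(spec[1])]     # a frame well inside the first second
f_late = freqs[np.argmax(spec[-2])]     # a frame well inside the second second
```

A plain FFT of the whole record would show both tones with no indication of when each occurred, which is exactly the nonstationarity problem the thesis addresses with TFMs.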
Abstract:
Soil fertility constraints to crop production have been recognized widely as a major obstacle to food security and agro-ecosystem sustainability in sub-Saharan West Africa. As such, they have led to a multitude of research projects and policy debates on how best they should be overcome. Conclusions, based on long-term multi-site experiments, are lacking with respect to a regional assessment of phosphorus and nitrogen fertilizer effects, surface mulched crop residues, and legume rotations on total dry matter of cereals in this region. A mixed model time-trend analysis was used to investigate the effects of four nitrogen and phosphorus rates, annually applied crop residue dry matter at 500 and 2000 kg ha^-1, and cereal-legume rotation versus continuous cereal cropping on the total dry matter of cereals and legumes. The multi-factorial experiment was conducted over four years at eight locations, with annual rainfall ranging from 510 to 1300 mm, in Niger, Burkina Faso, and Togo. With the exception of phosphorus, treatment effects on legume growth were marginal. At most locations, except for typical Sudanian sites with very low base saturation and high rainfall, phosphorus effects on cereal total dry matter were much lower with rock phosphate than with soluble phosphorus, unless the rock phosphate was combined with an annual seed-placement of 4 kg ha^-1 phosphorus. Across all other treatments, nitrogen effects were negligible at 500 mm annual rainfall but at 900 mm, the highest nitrogen rate led to total dry matter increases of up to 77% and, at 1300 mm, to 183%. Mulch-induced increases in cereal total dry matter were larger with lower base saturation, reaching 45% on typical acid sandy Sahelian soils. Legume rotation effects tended to increase over time but were strongly species-dependent.
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints, which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analyzing compositional time series consists of applying an initial transform to break the positive and unit-sum constraints, followed by analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling, as well as the quality of the forecasts.
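The additive log-ratio transform discussed above, which breaks the unit-sum constraint before ARIMA modelling, can be sketched as follows; the 3-part series is invented for illustration.

```python
import numpy as np

def alr(comp):
    """Additive log-ratio transform; the last part is the reference."""
    comp = np.asarray(comp, dtype=float)
    return np.log(comp[..., :-1] / comp[..., -1:])

def alr_inv(y):
    """Inverse alr: map the unconstrained series back onto the simplex."""
    y = np.asarray(y, dtype=float)
    e = np.exp(np.concatenate([y, np.zeros(y.shape[:-1] + (1,))], axis=-1))
    return e / e.sum(axis=-1, keepdims=True)

# A 3-part compositional series (each row sums to one); values are illustrative.
series = np.array([[0.50, 0.30, 0.20],
                   [0.40, 0.35, 0.25],
                   [0.45, 0.30, 0.25]])
y = alr(series)        # 2-D unconstrained series, ready for multivariate ARIMA
back = alr_inv(y)      # round-trips exactly to the original composition
```

After fitting an ARIMA model to `y` and forecasting, the inverse transform maps the forecasts back onto the simplex, so the predicted parts automatically remain positive and sum to one.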