44 results for Mean Power Frequency
Abstract:
The western North Pacific (WNP) is the area of the world most frequently affected by tropical cyclones (TCs). However, little is known about the socio-economic impacts of TCs in this region, probably because of the limited relevant loss data. Here, we use loss data from Munich Re's NatCatSERVICE database, a high-quality and widely consulted database of natural disasters. In the country-level loss normalisation technique we apply, the original loss data are normalised to present-day exposure levels by using the respective country's nominal gross domestic product at purchasing power parity as a proxy for wealth. The main focus of our study is on the question of whether the decadal-scale TC variability observed in the Northwest Pacific region in recent decades can be shown to manifest itself economically in an associated variability in losses. We show that since 1980 the frequency of TC-related loss events in the WNP has exhibited, apart from seasonal and interannual variations, interdecadal variability with a period of about 22 yr, driven primarily by corresponding variations of Northwest Pacific TCs. Compared to the long-term mean, the number of loss events was found to be higher (lower) by 14% (9%) in the positive (negative) phase of the decadal-scale WNP TC frequency variability. This was identified for the period 1980–2008 by applying a wavelet analysis technique. The same low-frequency variability was also demonstrated in normalised direct economic losses from TCs in the WNP region. The identification of possible physical mechanisms responsible for the observed decadal-scale Northwest Pacific TC variability will be the subject of future research, even though suggestions have already been made in earlier studies.
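The normalisation step described above can be sketched in a few lines of Python. The GDP figures and country codes below are illustrative placeholders, not values from NatCatSERVICE:

```python
# Sketch of country-level loss normalisation: scale each historical
# loss by the growth in nominal GDP (PPP) between the event year and
# the reference year. All numbers here are invented placeholders.

GDP_PPP = {  # country -> {year: nominal GDP at PPP, arbitrary units}
    "PHL": {1985: 100.0, 2008: 320.0},
    "JPN": {1991: 2500.0, 2008: 4300.0},
}

def normalise_loss(loss, country, event_year, ref_year=2008):
    """Scale an original loss to ref_year exposure levels, using
    nominal GDP at PPP as a proxy for wealth growth."""
    gdp = GDP_PPP[country]
    return loss * gdp[ref_year] / gdp[event_year]

# A 1985 loss of 10 units becomes 10 * 320/100 = 32 units at 2008 levels
print(normalise_loss(10.0, "PHL", 1985))  # 32.0
```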
Abstract:
Frequency-transformed resting EEG data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. The topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographical maps, and it allows the inclusion of user-defined, specific EEG elements such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, which include artificial data and multichannel EEG during different physiological and pathological conditions.
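As a minimal illustration of the single-channel time-frequency decomposition the method builds on, the sketch below computes power at one frequency in sliding windows via a plain DFT term. This is a crude spectrogram for intuition only, not the topographic decomposition itself:

```python
import math

def band_power(x, fs, f0, win, step):
    """Power at frequency f0 (Hz) in sliding windows of `win` samples,
    stepped by `step`: a crude single-channel time-frequency sketch."""
    out = []
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win]
        re = sum(s * math.cos(2 * math.pi * f0 * n / fs) for n, s in enumerate(seg))
        im = sum(s * math.sin(2 * math.pi * f0 * n / fs) for n, s in enumerate(seg))
        out.append((re * re + im * im) / win)  # power in this window
    return out

fs = 128.0
# Synthetic signal: a 10 Hz "alpha" burst only in the second half of 4 s
x = [math.sin(2 * math.pi * 10 * t / fs) if t >= 256 else 0.0 for t in range(512)]
p = band_power(x, fs, 10.0, win=128, step=64)
# Power stays near zero early on and becomes large once the burst starts,
# which is exactly the temporal information a pure FFT of the whole epoch loses.
print(p[0], p[-1])
```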
Abstract:
The aim of this study was to evaluate state-dependent effects of diazepam on the frequency characteristics of 47-channel spontaneous EEG maps. A novel method, the FFT-Dipole-Approximation (Lehmann and Michel, 1990), was used to study effects on the strength and the topography of the maps in the different frequency bands. Map topography was characterized by the 3-dimensional location of the equivalent dipole source, and map strength was defined as the spatial standard deviation (the Global Field Power) of the maps at each frequency point. The Global Field Power can be considered a measure of the amount of energy produced by the system, while the source location gives an estimate of the center of gravity of all sources in the brain that were active at a certain frequency. State-dependency was studied by evaluating the drug effects before and after a continuous performance task of 25 min duration. Clear interactions between drug (diazepam vs. placebo) and time after drug intake (before and after the task) were found, especially in the inferior-superior location of the dipole sources. This supports the hypothesis that diazepam, like other drugs, has different effects on brain functions depending on the momentary functional state of the brain. In addition to the drug effects, clearly different source locations and Global Field Power were found for the different frequency bands, replicating earlier reports (Michel et al., 1992).
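The abstract defines map strength as the spatial standard deviation of the scalp map, the Global Field Power. A minimal sketch of that computation, with invented channel values:

```python
import math

def global_field_power(values):
    """Global Field Power: the spatial standard deviation of a scalp
    map, i.e. of the voltages at all electrodes for one time or
    frequency point."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

# A spatially flat map has zero GFP; a map with large spatial
# contrast between electrodes has high GFP (47 channels, as above).
flat = [5.0] * 47
contrast = [(-1.0) ** i for i in range(47)]  # alternating +1/-1 uV
print(global_field_power(flat))      # 0.0
print(global_field_power(contrast))  # close to 1
```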
Abstract:
We investigated brain electric field signatures of subjective feelings after chewing regular gum or gum base without flavor. 19-channel eyes-closed EEG from 20 healthy males before and after 5 minutes of chewing the two gum types in random sequence was source modeled in the frequency domain using the FFT-Dipole-Approximation. 3-dimensional brain locations and strengths (Global Field Power, GFP) of the equivalent sources of five frequency bands were computed as changes from the pre-chewing baseline. Gum types differed (ANOVA) in pre-post changes of source locations for the alpha-2 band (to anterior and right after regular gum, opposite after gum base) and beta-2 band (to anterior and inferior after regular gum, opposite after gum base), and of GFP for delta-theta, alpha-2 and beta-1 (regular gum: increase, gum base: decrease). Subjective feeling changed to more positive values after regular gum than gum base (ANOVA). Thus, chewing gum with and without taste-smell activates different brain neuronal populations.
Abstract:
The comprehension of stories requires the reader to imagine the cognitive and affective states of the characters. The content of many stories is unpleasant, as they often deal with conflict, disturbance or crisis. Nevertheless, unpleasant stories can be liked and enjoyed. In this fMRI study, we used a parametric approach to examine (1) the capacity of increasing negative valence of story contents to activate the mentalizing network (cognitive and affective theory of mind, ToM), and (2) the neural substrate of liking negatively valenced narratives. A set of 80 short narratives was compiled, ranging from neutral to negative emotional valence. For each story mean rating values on valence and liking were obtained from a group of 32 participants in a prestudy, and later included as parametric regressors in the fMRI analysis. Another group of 24 participants passively read the narratives in a three Tesla MRI scanner. Results revealed a stronger engagement of affective ToM-related brain areas with increasingly negative story valence. Stories that were unpleasant, but simultaneously liked, engaged the medial prefrontal cortex (mPFC), which might reflect the moral exploration of the story content. Further analysis showed that the more the mPFC becomes engaged during the reading of negatively valenced stories, the more coactivation can be observed in other brain areas related to the neural processing of affective ToM and empathy.
Abstract:
Repetitive transcranial magnetic stimulation (rTMS) is a novel research tool in neurology and psychiatry. It is currently being evaluated as a conceivable alternative to electroconvulsive therapy for the treatment of mood disorders. Eight healthy young (age range 21-25 years) right-handed men without sleep complaints participated in the study. Two sessions at a 1-week interval, each consisting of an adaptation night (sham stimulation) and an experimental night (rTMS in the left dorsolateral prefrontal cortex or sham stimulation; crossover design), were scheduled. In each subject, 40 trains of 2-s duration of rTMS (inter-train interval 28 s) were applied at a frequency of 20 Hz (i.e. 1600 pulses per session) and at an intensity of 90% of the motor threshold. Stimulations were scheduled 80 min before lights off. The waking EEG was recorded for 10-min intervals approximately 30 min prior to and after the 20-min stimulations, and polysomnographic recordings were obtained during the subsequent sleep episode (23.00-07.00 h). The power spectra of two referential derivations, as well as of bipolar derivations along the antero-posterior axis over the left and right hemispheres, were analyzed. rTMS induced a small reduction of sleep stage 1 (in min and percentage of total sleep time) over the whole night and a small enhancement of sleep stage 4 during the first non-REM sleep episode. Other sleep variables were not affected. rTMS of the left dorsolateral cortex did not alter the topography of EEG power spectra in waking following stimulation, in the all-night sleep EEG, or during the first non-REM sleep episode. Our results indicate that a single session of rTMS using parameters like those used in depression treatment protocols has no detectable side effects with respect to sleep in young healthy males.
Abstract:
Home dream recall frequencies and nightmare frequencies show great inter-individual differences. Most of the studies trying to explain these differences, however, studied young participants, so their findings might not hold for persons older than 25 years. The present study investigated the relationship between dream recall, nightmare frequency, age, gender, sleep parameters, stress, and subjective health in a community-based sample (N = 455) with a mean age of about 55 years. Some of the factors that have been shown to be associated with dream recall and nightmare frequency were also associated with these variables in this non-student sample, such as the frequency of nocturnal awakenings, current stress, and tiredness during the day. We were not able to replicate the effect of sex-role orientation on dream recall and nightmare frequency, supporting the idea that age might mediate the effect of daytime variables on dream recall and nightmare frequency. As nightmare frequency was related to sleep quality, stress, health problems, and tiredness during the day, it would be desirable for clinicians to include a question about nightmares in their anamneses.
Abstract:
We invoke the ideal of tolerance in response to conflict, but what does it mean to answer conflict with a call for tolerance? Is tolerance a way of resolving conflicts or a means of sustaining them? Does it transform conflicts into productive tensions, or does it perpetuate underlying power relations? To what extent does tolerance hide its involvement with power and act as a form of depoliticization? Wendy Brown and Rainer Forst debate the uses and misuses of tolerance, an exchange that highlights the fundamental differences in their critical practice despite a number of political similarities. Both scholars address the normative premises, limits, and political implications of various conceptions of tolerance. Brown offers a genealogical critique of contemporary discourses on tolerance in Western liberal societies, focusing on their inherent ties to colonialism and imperialism, and Forst reconstructs an intellectual history of tolerance that attempts to redeem its political virtue in democratic societies. Brown and Forst work from different perspectives and traditions, yet they each remain wary of the subjection and abnegation embodied in toleration discourses, among other issues. The result is a dialogue rich in critical and conceptual reflections on power, justice, discourse, rationality, and identity.
Abstract:
RATIONALE In biomedical journals authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. OBJECTIVE To assess the frequency of incorrect use of SEM in articles in three selected cardiovascular journals. METHODS AND RESULTS All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of SEM when providing descriptive information of empirical data. We also assessed whether the authors state in the methods section that the SEM will be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of SEM, the authors had explicitly stated that they use the SEM for data description and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). LIMITATIONS The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative for all cardiovascular journals. CONCLUSION In three selected cardiovascular journals we found a high level of inappropriate SEM use and explicit methods statements to use it for data description, especially in basic science studies. To improve on this situation, these and other journals should provide clear instructions to authors on how to report descriptive information of empirical data.
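The distinction at issue (the SD describes the spread of the data, while the SEM describes the precision of the estimated mean) can be made concrete in a few lines; the sample values below are invented:

```python
import math
import statistics

def sd_sem_ci(sample, z=1.96):
    """Contrast the sample SD (spread of the observations) with the
    SEM (precision of the mean) and a normal-approximation ~95% CI."""
    n = len(sample)
    sd = statistics.stdev(sample)          # sample standard deviation
    sem = sd / math.sqrt(n)                # standard error of the mean
    mean = statistics.mean(sample)
    ci = (mean - z * sem, mean + z * sem)  # ~95% confidence interval
    return sd, sem, ci

data = [4.8, 5.1, 5.4, 4.9, 5.2, 5.0, 5.3, 4.7, 5.5, 5.1]
sd, sem, ci = sd_sem_ci(data)
# The SEM shrinks with sqrt(n), so SEM bars always look tighter than
# SD bars; using them to describe the data therefore understates the
# variability of the observations, which is the misuse surveyed above.
print(sd, sem, ci)
```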
Abstract:
BACKGROUND For patients with acute iliofemoral deep vein thrombosis, it remains unclear whether the addition of intravascular high-frequency, low-power ultrasound energy facilitates the resolution of thrombosis during catheter-directed thrombolysis. METHODS AND RESULTS In a controlled clinical trial, 48 patients (mean age 50±21 years, 52% women) with acute iliofemoral deep vein thrombosis were randomized to receive ultrasound-assisted catheter-directed thrombolysis (N=24) or conventional catheter-directed thrombolysis (N=24). Thrombolysis regimen (20 mg r-tPA over 15 hours) was identical in all patients. The primary efficacy end point was the percentage of thrombus load reduction from baseline to 15 hours according to the length-adjusted thrombus score, obtained from standardized venograms and evaluated by a core laboratory blinded to group assignment. The percentage of thrombus load reduction was 55%±27% in the ultrasound-assisted catheter-directed thrombolysis group and 54%±27% in the conventional catheter-directed thrombolysis group (P=0.91). Adjunctive angioplasty and stenting was performed in 19 (80%) patients and in 20 (83%) patients, respectively (P>0.99). Treatment-related complications occurred in 3 (12%) and 2 (8%) patients, respectively (P>0.99). At 3-month follow-up, primary venous patency was 100% in the ultrasound-assisted catheter-directed thrombolysis group and 96% in the conventional catheter-directed thrombolysis group (P=0.33), and there was no difference in the severity of the post-thrombotic syndrome (mean Villalta score: 3.0±3.9 [range 0-15] versus 1.9±1.9 [range 0-7]; P=0.21), respectively. CONCLUSIONS In this randomized controlled clinical trial of patients with acute iliofemoral deep vein thrombosis treated with a fixed-dose catheter thrombolysis regimen, the addition of intravascular ultrasound did not facilitate thrombus resolution. CLINICAL TRIAL REGISTRATION URL http://www.clinicaltrials.gov. Unique identifier: NCT01482273.
Abstract:
Introduction: The ProAct study has shown that a pump switch to the Accu-Chek® Combo system (Roche Diagnostics Deutschland GmbH, Mannheim, Germany) in type 1 diabetes patients results in stable glycemic control, with significant improvements in glycated hemoglobin (HbA1c) in patients with unsatisfactory baseline HbA1c and shorter pump usage time. Patients and Methods: In this post hoc analysis of the ProAct database, we investigated glycemic control and glycemic variability at baseline by determination of several established parameters and scores (HbA1c, hypoglycemia frequency, J-score, Hypoglycemia and Hyperglycemia Indexes, and Index of Glycemic Control) in participants with different daily bolus and blood glucose measurement frequencies (fewer than four per day, four or five per day, and more than five per day, for both measures). The data were derived from up to 299 patients (172 females, 127 males; age [mean±SD], 39.4±15.2 years; pump treatment duration, 7.0±5.2 years). Results: Participants with frequent glucose readings had better glycemic control than those with few readings (more than five readings per day vs. fewer than four readings per day: HbA1c, 7.2±1.1% vs. 8.0±0.9%; mean daily blood glucose, 151±22 mg/dL vs. 176±30 mg/dL; percentage of readings per month >300 mg/dL, 10±4% vs. 14±5%; percentage of readings in target range [80-180 mg/dL], 59% vs. 48% [P<0.05 in all cases]) and had lower glycemic variability (J-score, 49±13 vs. 71±25 [P<0.05]; Hyperglycemia Index, 0.9±0.5 vs. 1.9±1.2 [P<0.05]; Index of Glycemic Control, 1.9±0.8 vs. 3.1±1.6 [P<0.05]; Hypoglycemia Index, 0.9±0.8 vs. 1.2±1.3 [not significant]). Frequent self-monitoring of blood glucose was associated with a higher number of bolus applications (6.1±2.2 boluses/day vs. 4.5±2.0 boluses/day [P<0.05]). Therefore, a similar but less pronounced effect on glycemic variability in favor of more daily bolus applications was observed (more than five vs. fewer than four boluses per day: J-score, 57±17 vs. 63±25 [not significant]; Hypoglycemia Index, 1.0±1.0 vs. 1.5±1.4 [P<0.05]; Hyperglycemia Index, 1.3±0.6 vs. 1.6±1.1 [not significant]; Index of Glycemic Control, 2.3±1.1 vs. 3.1±1.7 [P<0.05]). Conclusions: Pump users who perform frequent daily glucose readings achieve better glycemic control with lower glycemic variability.
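The J-score referenced above is commonly defined (Wójcicki, 1995) as J = 0.001 × (mean + SD)² with glucose in mg/dL. Assuming that definition, which may differ in detail from the one used in the ProAct analysis, a sketch with invented glucose profiles:

```python
import statistics

def j_index(glucose_mg_dl):
    """J-index glycemic-variability score, assuming the common
    definition J = 0.001 * (mean + SD)^2 with glucose in mg/dL
    (Wojcicki 1995). Treat the formula as an assumption here."""
    m = statistics.mean(glucose_mg_dl)
    sd = statistics.pstdev(glucose_mg_dl)
    return 0.001 * (m + sd) ** 2

stable = [150, 155, 148, 152, 150, 149]   # low variability profile
swinging = [80, 240, 100, 220, 90, 230]   # similar mean, large swings
# Both penalize a high mean AND high variability, so the swinging
# profile scores markedly worse despite a comparable average.
print(j_index(stable), j_index(swinging))
```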
Abstract:
BACKGROUND: Several parameters of heart rate variability (HRV) have been shown to predict the risk of sudden cardiac death (SCD) in cardiac patients. There is consensus that risk prediction is improved when HRV is measured during specific provocations such as an orthostatic challenge. For the first time, we provide data on the reproducibility of such a test in patients with a history of acute coronary syndrome. METHODS: Sixty male patients (65±8 years) with a history of acute coronary syndrome on stable medication were included. HRV was measured in supine (5 min) and standing (5 min) positions on 2 occasions separated by two weeks. For risk assessment, the relevant time-domain parameters [standard deviation of all R-R intervals (SDNN) and root mean square of successive differences between adjacent R-R intervals (RMSSD)], frequency-domain parameters [low-frequency power (LF), high-frequency power (HF), and the LF/HF power ratio], and the short-term fractal scaling component (DF1) were computed. Absolute reproducibility was assessed with the standard errors of the mean (SEM) and 95% limits of random variation, and relative reproducibility by the intraclass correlation coefficient (ICC). RESULTS: We found comparable SEMs and ICCs in the supine position and after the orthostatic challenge test. All ICCs were good to excellent (between 0.636 and 0.869). CONCLUSIONS: Reproducibility of HRV parameters during orthostatic challenge is good and comparable to that in the supine position.
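The time-domain parameters SDNN and RMSSD named above are straightforward to compute from an R-R interval series; a sketch with an invented 10-beat series (not patient data):

```python
import math
import statistics

def sdnn(rr_ms):
    """SDNN: standard deviation of all R-R intervals (ms)."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive differences between
    adjacent R-R intervals (ms); reflects beat-to-beat variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative 10-beat R-R series in milliseconds
rr = [812, 830, 805, 844, 821, 809, 836, 818, 802, 827]
print(sdnn(rr), rmssd(rr))
```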
Abstract:
We describe the recovery of three daily meteorological records for the southern Alps (Domodossola, Riva del Garda, and Rovereto), all starting in the second half of the nineteenth century. We use these new data, along with additional records, to study regional changes in the mean temperature and extreme indices of heat waves and cold spells frequency and duration over the period 1874–2015. The records are homogenized using subdaily cloud cover observations as a constraint for the statistical model, an approach that has never been applied before in the literature. A case study based on a record of parallel observations between a traditional meteorological window and a modern screen shows that the use of cloud cover can reduce the root-mean-square error of the homogenization by up to 30% in comparison to an unaided statistical correction. We find that mean temperature in the southern Alps has increased by 1.4°C per century over the analyzed period, with larger increases in daily minimum temperatures than maximum temperatures. The number of hot days in summer has more than tripled, and a similar increase is observed in duration of heat waves. Cold days in winter have dropped at a similar rate. These trends are mainly caused by climate change over the last few decades.
Abstract:
Aims. We derive for the first time the size-frequency distribution of boulders on a comet, 67P/Churyumov-Gerasimenko (67P), computed from the images taken by the Rosetta/OSIRIS imaging system. We highlight the possible physical processes that lead to these boulder size distributions. Methods. We used images acquired by the OSIRIS Narrow Angle Camera, NAC, on 5 and 6 August 2014. The scale of these images (2.44–2.03 m/px) is such that boulders ≥7 m can be identified and manually extracted from the datasets with the software ArcGIS. We derived both global and localized size-frequency distributions. The three-pixel sampling detection, coupled with the favorable shadowing of the surface (observation phase angle ranging from 48° to 53°), enables the unequivocal detection of boulders scattered all over the illuminated side of 67P. Results. We identify 3546 boulders larger than 7 m on the imaged surface (36.4 km2), with a global number density of nearly 100/km2 and a cumulative size-frequency distribution represented by a power law with an index of −3.6 +0.2/−0.3. The two lobes of 67P appear to have slightly different distributions, with an index of −3.5 +0.2/−0.3 for the main lobe (body) and −4.0 +0.3/−0.2 for the small lobe (head). The steeper distribution of the small lobe might be due to more pervasive fracturing. The difference in the distribution for the connecting region (neck) is much more significant, with an index value of −2.2 +0.2/−0.2. We propose that the boulder field located in the neck area is the result of blocks falling from the contiguous Hathor cliff. The shallower slope of the size-frequency distribution we see today in the neck area might be due to concurrent processes acting on the smallest boulders, such as i) disintegration or fragmentation and vanishing through sublimation; ii) uplifting by gas drag and consequent redistribution; and iii) burial beneath a debris blanket. We also derived the cumulative size-frequency distribution per km2 of localized areas on 67P. By comparing the cumulative size-frequency distributions of similar geomorphological settings, we derived similar power-law index values. This suggests that, despite the selected locations lying on different and often opposite sides of the comet, similar sublimation or activity processes, pit formation or collapses, as well as thermal stresses or fracturing events occurred in multiple areas of the comet, shaping its surface into the appearance we see today.
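A cumulative size-frequency power-law index like the −3.6 reported above can be estimated by least-squares in log-log space. The sketch below uses synthetic boulder sizes drawn from a known power law; the actual OSIRIS fitting procedure may use different binning and error treatment:

```python
import math

def power_law_index(diameters, bins):
    """Estimate the cumulative size-frequency power-law index by
    least-squares on log10(N >= D) versus log10(D)."""
    xs, ys = [], []
    for d in bins:
        n = sum(1 for x in diameters if x >= d)  # cumulative count
        if n > 0:
            xs.append(math.log10(d))
            ys.append(math.log10(n))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope  # the (negative) power-law index

# Synthetic sizes built so that N(>=D) ~ D^-3.6 above a 7 m cutoff,
# via inverse-CDF sampling on a regular grid
sizes = [7.0 * (1 - k / 2000.0) ** (-1 / 3.6) for k in range(2000)]
idx = power_law_index(sizes, bins=[7, 10, 15, 20, 30])
print(round(idx, 1))  # -3.6
```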