983 results for Power Mean
Abstract:
Bullying is characterized by an inequality of power between perpetrator and target. Findings that bullies can be highly popular have helped redefine the old conception of the maladjusted school bully into a powerful individual exerting influence on his peers from the top of the peer status hierarchy. Study I is a conceptual paper that explores the conditions under which a skillful, socially powerful bully can use the peer group as a means of aggression and suggests that low cohesion and low quality of friendships make groups easier to manipulate. School bullies’ high popularity should be a major obstacle for antibullying efforts, as bullies are unlikely to cease negative actions that are rewarding, and their powerful position could discourage bystanders from interfering. Using data from the Finnish program KiVa, Study II supported the hypothesis that antibullying interventions are less effective with popular bullies than with their unpopular counterparts. In order to design interventions that can address the positive link between popularity and aggression, it is necessary to determine in which contexts bullies achieve higher status. Using an American sample, Study III examined the effects of five classroom features on the social status that peers accord to aggressive children, including classroom status hierarchy, academic level, and grade level, controlling for classroom mean levels of aggression and ethnic distribution. Aggressive children were more popular and better liked in fifth grade relative to fourth grade and in classrooms with a steeper status hierarchy. This is surprising, as the natural emergence of status hierarchies in children’s peer groups has long been assumed to minimize aggression. Whether status hierarchies hinder or promote bullying is a controversial question in the peer relations literature. Study IV aimed at clarifying this debate by testing the effects of the degree of classroom status hierarchy on bullying. Higher hierarchy was concurrently associated with bullying and predictive of higher bullying six months later. As bullies’ quest for power is increasingly acknowledged, some researchers suggest teaching bullies to attain the elevated status they yearn for through prosocial acts. Study V cautions against such solutions by reviewing evidence that prosocial behaviors enacted with the intention of controlling others can be as harmful as aggression.
Abstract:
Nineteen-channel EEGs were recorded from the scalp surface of 30 healthy subjects (16 males and 14 females, mean age: 34 years, SD: 11.7 years) at rest and under trains of intermittent photic stimulation (IPS) at rates of 5, 10 and 20 Hz. Digitized data were submitted to spectral analysis with the fast Fourier transform, providing the basis for the computation of global field power (GFP). For quantification, GFP values in the frequency ranges of 5, 10 and 20 Hz at rest were divided by the corresponding data obtained under IPS. All subjects showed a photic driving effect at each rate of stimulation. GFP data were normally distributed, whereas ratios from photic driving effect data showed no uniform behavior due to high interindividual variability. Suppression of alpha-power after IPS at 10 Hz was observed in about 70% of the volunteers. In contrast, ratios of alpha-power were unequivocal in all subjects: IPS at 20 Hz always led to a suppression of alpha-power. Dividing alpha-GFP under 20-Hz IPS by alpha-GFP at rest (R = alpha-GFP(IPS) / alpha-GFP(rest)) thus resulted in ratios lower than 1. We conclude that ratios from GFP data with 20-Hz IPS may provide a suitable paradigm for further investigations.
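As a rough illustration of the ratio described above, the following sketch uses synthetic signals, an illustrative sampling rate, and a simple FFT band-power estimate standing in for the full GFP computation; it shows how alpha-band suppression under 20-Hz IPS yields a ratio R below 1.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Spectral power of x within [f_lo, f_hi] Hz via the FFT."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= f_lo) & (freqs <= f_hi)].sum()

fs = 256.0                      # illustrative sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
# Strong 10-Hz alpha at rest; attenuated alpha under 20-Hz IPS (synthetic).
rest = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
ips20 = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)

# R = alpha-GFP(IPS) / alpha-GFP(rest); suppression gives R < 1.
R = band_power(ips20, fs, 8, 12) / band_power(rest, fs, 8, 12)
```

Because the alpha component is attenuated under simulated IPS, the ratio comes out well below 1, matching the suppression pattern the abstract reports for 20-Hz stimulation.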
Abstract:
Increased heart rate variability (HRV) and high-frequency content of the terminal region of the ventricular activation of the signal-averaged ECG (SAECG) have been reported in athletes. The present study investigates HRV and SAECG parameters as predictors of maximal aerobic power (VO2max) in athletes. HRV, SAECG and VO2max were determined in 18 high-performance long-distance runners (25 ± 6 years; 17 males) 24 h after a training session. Clinical visits, ECG and VO2max determination were scheduled for all athletes during the training period. A group of 18 untrained healthy volunteers matched for age, gender, and body surface area was included as controls. SAECG was acquired in the resting supine position for 15 min and processed to extract the average RR interval (Mean-RR) and the root mean square of successive differences between consecutive normal RR intervals (RMSSD). SAECG variables analyzed in the vector magnitude with 40-250 Hz band-pass bi-directional filtering were: total and 40-µV terminal (LAS40) duration of ventricular activation, and RMS voltage of the total (RMST) and of the 40-ms terminal region of ventricular activation. Linear and multivariate stepwise logistic regressions oriented by inter-group comparisons were adjusted on significant variables in order to predict VO2max, with P < 0.05 considered significant. VO2max correlated significantly (P < 0.05) with RMST (r = 0.77), Mean-RR (r = 0.62), RMSSD (r = 0.47), and LAS40 (r = -0.39). RMST was the only independent predictor of VO2max. In athletes, HRV and high-frequency components of the SAECG correlate with VO2max, and the high-frequency content of the SAECG is an independent predictor of VO2max.
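The two time-domain HRV measures named above have standard definitions; a minimal sketch, using made-up RR intervals:

```python
import numpy as np

def mean_rr(rr_ms):
    """Mean of the normal RR intervals, in ms (Mean-RR)."""
    return float(np.mean(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive differences between
    consecutive normal RR intervals, in ms (RMSSD)."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Made-up RR series (ms) for illustration.
rr = np.array([810.0, 790.0, 820.0, 800.0, 830.0])
m = mean_rr(rr)   # 810.0
r = rmssd(rr)     # sqrt(650) ≈ 25.5
```

Higher RMSSD reflects greater beat-to-beat (vagally mediated) variability, which is the quantity the study correlates with VO2max.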
Abstract:
This study aimed to analyze the agreement between measurements of unloaded oxygen uptake and peak oxygen uptake based on equations proposed by Wasserman and real measurements obtained directly with the ergospirometry system. We performed an incremental cardiopulmonary exercise test (CPET) in two groups of sedentary male subjects: one apparently healthy (HG, n=12) and one with stable coronary artery disease (CG, n=16). The mean age was 47±4 years in the HG and 57±8 years in the CG. Both groups performed CPET on a cycle ergometer with a ramp-type protocol at an intensity calculated according to the Wasserman equation. In the HG, there was no significant difference between measurements predicted by the formula and real measurements obtained in CPET in the unloaded condition. However, at peak effort, a significant difference was observed between V̇O2peak(predicted) and V̇O2peak(real) (nonparametric Wilcoxon test). In the CG, there was a significant difference of 116.26 mL/min between the values predicted by the formula and the real values obtained in the unloaded condition. A significant difference was also found at peak effort, where V̇O2peak(real) was 40% lower than V̇O2peak(predicted) (nonparametric Wilcoxon test). There was no agreement between the real and predicted measurements as analyzed by Lin’s coefficient or the Bland and Altman model. The Wasserman formula therefore does not appear to be appropriate for predicting the functional capacity of these volunteers, and cannot precisely predict the increase in power in incremental CPET on a cycle ergometer.
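The Bland and Altman agreement analysis mentioned above can be sketched as follows; the VO2 values are invented for illustration, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference a - b) and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented VO2peak values (mL/min): real measurements vs. formula predictions.
vo2_real = [1500, 1620, 1480, 1550, 1700]
vo2_pred = [2100, 2250, 2050, 2180, 2300]
bias, (lo, hi) = bland_altman(vo2_real, vo2_pred)  # negative bias: formula overpredicts
```

A large bias with narrow limits of agreement, as in this toy example, indicates a systematic over-prediction of the kind the study reports for the Wasserman formula at peak effort.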
Abstract:
Regulatory light chain (RLC) phosphorylation in fast-twitch muscle is catalyzed by skeletal myosin light chain kinase (skMLCK), a reaction known to increase muscle force, work, and power. The purpose of this study was to explore the contribution of RLC phosphorylation to the power of mouse fast muscle during high-frequency (100 Hz) concentric contractions. To determine peak power, shortening ramps (1.05 to 0.90 Lo) were applied to wildtype (WT) and skMLCK knockout (skMLCK-/-) EDL muscles at a range of shortening velocities between 0.05 and 0.65 of maximal shortening velocity (Vmax), before and after a conditioning stimulus (CS). As a result, mean power was increased to 1.28 ± 0.05 and 1.11 ± 0.05 of pre-CS values, when collapsed across shortening velocity, in WT and skMLCK-/-, respectively (n = 10). In addition, fitting each data set to a second-order polynomial revealed that WT mice had significantly higher peak power output (27.67 ± 1.12 W·kg-1) than skMLCK-/- (25.97 ± 1.02 W·kg-1) (p < 0.05). No significant differences in optimal velocity for peak power were found between conditions and genotypes (p > 0.05). Analysis with urea-glycerol PAGE determined that RLC phosphate content was elevated in WT muscles from 8 to 63%, while minimal changes were observed in skMLCK-/- muscles: 3 and 8%, respectively. Therefore, the lack of a stimulation-induced increase in RLC phosphate content resulted in a ~40% smaller enhancement of mean power in skMLCK-/-. The increase in power output in WT mice suggests that RLC phosphorylation is a major potentiating component required for achieving peak muscle performance during brief high-frequency concentric contractions.
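The second-order polynomial fit used above to locate peak power can be sketched as below; the velocity and power values are illustrative, not the study's data:

```python
import numpy as np

# Illustrative power-velocity data (velocity as a fraction of Vmax; power in W/kg).
v = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65])
p = np.array([8.0, 18.0, 24.5, 27.5, 27.0, 23.0, 16.0])

a, b, c = np.polyfit(v, p, 2)          # second-order polynomial fit
v_opt = -b / (2 * a)                   # vertex: optimal shortening velocity
p_peak = np.polyval([a, b, c], v_opt)  # fitted peak power
```

Fitting a parabola and reading off its vertex gives both the peak power and the optimal velocity, the two quantities compared between genotypes in the study.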
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one’s situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as is an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
Adaptive filtering is a primary method for denoising the electrocardiogram (ECG) because it does not require prior knowledge of the signal's statistical characteristics. In this paper, an adaptive filtering technique for denoising the ECG based on a Genetic Algorithm (GA) tuned Sign-Data Least Mean Square (SD-LMS) algorithm is proposed. This technique minimizes the mean-squared error between the primary input, which is a noisy ECG, and a reference input, which can be either noise that is correlated in some way with the noise in the primary input or a signal that is correlated only with the ECG in the primary input. Noise is used as the reference signal in this work. The algorithm was applied to records from the MIT-BIH Arrhythmia database to remove baseline wander and 60-Hz power-line interference. The proposed algorithm gave an average signal-to-noise ratio improvement of 10.75 dB for baseline wander and 24.26 dB for power-line interference, which is better than previously reported results.
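A minimal sketch of a sign-data LMS noise canceller follows, using synthetic signals (a crude spike train standing in for the ECG and a 60-Hz sinusoid as interference); the paper's GA tuning step is omitted here, and the filter length and step size are fixed by hand as assumptions:

```python
import numpy as np

def sd_lms(primary, reference, n_taps=8, mu=0.01):
    """Sign-data LMS adaptive noise canceller.

    primary   : noisy ECG (signal + interference)
    reference : input correlated with the interference
    Returns the error signal, i.e. the cleaned ECG estimate."""
    w = np.zeros(n_taps)
    e = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # tap-delay line
        y = w @ x                           # current interference estimate
        e[n] = primary[n] - y
        w += mu * e[n] * np.sign(x)         # sign-data weight update
    return e

fs = 360                                     # MIT-BIH sampling rate (Hz)
t = np.arange(0, 6, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 31      # crude QRS-like spike train (synthetic)
mains = 0.5 * np.sin(2 * np.pi * 60 * t)     # 60-Hz power-line interference
ref = np.sin(2 * np.pi * 60 * t + 0.3)       # phase-shifted correlated reference
clean = sd_lms(ecg + mains, ref)
```

The sign-data variant replaces the input vector in the LMS update with its sign, which cheapens each iteration at some cost in convergence speed; after adaptation the residual 60-Hz content in `clean` is much smaller than in the primary input.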
Abstract:
Over recent years there has been an increasing deployment of renewable energy generation technologies, particularly large-scale wind farms. As wind farm deployment increases, it is vital to gain a good understanding of how the energy produced is affected by climate variations, over a wide range of time-scales, from short (hours to weeks) to long (months to decades) periods. By relating wind speed at specific sites in the UK to a large-scale climate pattern (the North Atlantic Oscillation or "NAO"), the power generated by a modelled wind turbine under three different NAO states is calculated. It was found that the wind conditions under these NAO states may yield a difference in the mean wind power output of up to 10%. A simple model is used to demonstrate that forecasts of future NAO states can potentially be used to improve month-ahead statistical forecasts of monthly-mean wind power generation. The results confirm that the NAO has a significant impact on the hourly-, daily- and monthly-mean power output distributions from the turbine with important implications for (a) the use of meteorological data (e.g. their relationship to large scale climate patterns) in wind farm site assessment and, (b) the utilisation of seasonal-to-decadal climate forecasts to estimate future wind farm power output. This suggests that further research into the links between large-scale climate variability and wind power generation is both necessary and valuable.
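The conversion from wind speed to turbine power underlying the above can be sketched with an idealized power curve; the cut-in, rated, and cut-out speeds, the Weibull wind distributions, and the NAO-state scalings below are all illustrative assumptions, not values from the study:

```python
import numpy as np

def turbine_power(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Idealized turbine power curve (MW): cubic ramp below rated speed,
    constant at rated power, zero outside the operating range."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p
    return rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3

# Hypothetical wind-speed distributions under two NAO states (Weibull shape 2).
rng = np.random.default_rng(2)
winds_pos = rng.weibull(2.0, 10000) * 9.0   # NAO+: windier (illustrative scale)
winds_neg = rng.weibull(2.0, 10000) * 7.5   # NAO-: calmer
p_pos = np.mean([turbine_power(v) for v in winds_pos])
p_neg = np.mean([turbine_power(v) for v in winds_neg])
```

Because the power curve is strongly nonlinear below rated speed, even a modest shift in the wind-speed distribution between climate states produces a disproportionate change in mean power output, which is the mechanism behind the up-to-10% difference reported above.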
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, and distributions of model parameters and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We here unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. 
Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
Abstract:
A recently proposed mean-field theory of mammalian cortex rhythmogenesis describes the salient features of electrical activity in the cerebral macrocolumn, with the use of inhibitory and excitatory neuronal populations (Liley et al 2002). This model is capable of producing a range of important human EEG (electroencephalogram) features such as the alpha rhythm, the 40 Hz activity thought to be associated with conscious awareness (Bojak & Liley 2007) and the changes in EEG spectral power associated with general anesthetic effect (Bojak & Liley 2005). From the point of view of nonlinear dynamics, the model entails a vast parameter space within which multistability, pseudoperiodic regimes, various routes to chaos, fat fractals and rich bifurcation scenarios occur for physiologically relevant parameter values (van Veen & Liley 2006). The origin and the character of this complex behaviour, and its relevance for EEG activity will be illustrated. The existence of short-lived unstable brain states will also be discussed in terms of the available theoretical and experimental results. A perspective on future analysis will conclude the presentation.
Abstract:
With a rapidly increasing fraction of electricity generation being sourced from wind, extreme wind power generation events such as prolonged periods of low (or high) generation and ramps in generation, are a growing concern for the efficient and secure operation of national power systems. As extreme events occur infrequently, long and reliable meteorological records are required to accurately estimate their characteristics. Recent publications have begun to investigate the use of global meteorological “reanalysis” data sets for power system applications, many of which focus on long-term average statistics such as monthly-mean generation. Here we demonstrate that reanalysis data can also be used to estimate the frequency of relatively short-lived extreme events (including ramping on sub-daily time scales). Verification against 328 surface observation stations across the United Kingdom suggests that near-surface wind variability over spatiotemporal scales greater than around 300 km and 6 h can be faithfully reproduced using reanalysis, with no need for costly dynamical downscaling. A case study is presented in which a state-of-the-art, 33 year reanalysis data set (MERRA, from NASA-GMAO), is used to construct an hourly time series of nationally-aggregated wind power generation in Great Britain (GB), assuming a fixed, modern distribution of wind farms. The resultant generation estimates are highly correlated with recorded data from National Grid in the recent period, both for instantaneous hourly values and for variability over time intervals greater than around 6 h. This 33 year time series is then used to quantify the frequency with which different extreme GB-wide wind power generation events occur, as well as their seasonal and inter-annual variability. 
Several novel insights into the nature of extreme wind power generation events are described, including (i) that the number of prolonged low or high generation events is well approximated by a Poisson-like random process, and (ii) that whilst in general there is large seasonal variability, the magnitude of the most extreme ramps is similar in both summer and winter. An up-to-date version of the GB case study data as well as the underlying model are freely available for download from our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/.
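Counting prolonged low-generation events of the kind described above can be sketched as a run-length scan over an hourly capacity-factor series; the threshold, minimum duration, and synthetic data below are illustrative assumptions:

```python
import numpy as np

def count_low_events(cf, threshold=0.1, min_hours=12):
    """Count runs in which the hourly capacity factor stays below
    `threshold` for at least `min_hours` consecutive hours."""
    events, run = 0, 0
    for value in cf:
        run = run + 1 if value < threshold else 0
        if run == min_hours:        # count each qualifying run exactly once
            events += 1
    return events

# Synthetic year of hourly capacity factors, for illustration only.
rng = np.random.default_rng(3)
cf = rng.beta(2, 3, size=24 * 365)
n_events = count_low_events(cf)
```

Applied to a multi-decade reanalysis-derived series instead of synthetic data, annual counts from a scan like this are the quantity whose distribution the study finds to be approximately Poisson.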
Abstract:
Objective: The aim of this study was to verify the discriminative power of the most widely used pain assessment instruments. Methods: The sample consisted of 279 subjects divided into a fibromyalgia group (FM, 205 patients with fibromyalgia) and a control group (CG, 74 healthy subjects), mean age 49.29 ± 10.76 years. Only 9 subjects were male, 6 in the FM and 3 in the CG. FM subjects were outpatients from the Rheumatology Clinic of the University of Sao Paulo - Hospital das Clinicas (HCFMUSP); the CG included people accompanying patients and hospital staff with similar socio-demographic characteristics. Three instruments were used to assess pain: the McGill Pain Questionnaire (MPQ), the Visual Analog Scale (VAS), and dolorimetry, which measures the pain threshold at tender points (generating the TP index). In order to assess the discriminative power of the instruments, the measurements obtained were submitted to descriptive analysis and to inferential analysis using the ROC curve - sensitivity (Se), specificity (Sp) and area under the curve (AUC) - and contingency tables with the chi-square test and odds ratio. The significance level was 0.05. Results: The highest sensitivity, specificity and area under the curve were obtained by the VAS (80%, 80% and 0.864, respectively), followed by dolorimetry (Se 77%, Sp 77%, AUC 0.851), McGill sensory (Se 72%, Sp 67%, AUC 0.765) and McGill affective (Se 69%, Sp 67%, AUC 0.753). Conclusions: The VAS presented the highest sensitivity, specificity and AUC, showing the greatest discriminative power among the instruments. However, these values are considerably similar to those of dolorimetry.
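The AUC values reported above can be computed with the rank-sum formulation of the ROC area; the VAS scores below are invented for illustration:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum formulation: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = 0.0
    for s in pos:
        wins += (s > neg).sum() + 0.5 * (s == neg).sum()  # ties count half
    return wins / (len(pos) * len(neg))

# Invented VAS scores (0-10): first five are patients, last five are controls.
vas = [8, 7, 9, 6, 3, 2, 5, 1, 4, 2]
is_patient = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
auc = roc_auc(vas, is_patient)   # 0.92 for these made-up scores
```

An AUC of 0.5 means no discrimination and 1.0 means perfect separation, so the study's VAS value of 0.864 indicates strong but imperfect discriminative power.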
Abstract:
This paper presents a method for calculating the power flow in distribution networks considering uncertainties in the distribution system. Active and reactive power are used as uncertain variables and are probabilistically modeled through probability distribution functions. Uncertainty about the connection of users to the different feeders is also considered. A Monte Carlo simulation is used to generate the possible load scenarios of the users. The results of the power flow considering uncertainty are the mean values and standard deviations of the variables of interest (voltages at all nodes, active and reactive power flows, etc.), giving the user valuable information about how the network will behave under uncertainty, rather than the traditional fixed values at one point in time. The method is tested using real data from a primary feeder system, and results are presented considering uncertainty in demand and also in the connection. To demonstrate the usefulness of the approach, the results are then used in a probabilistic risk analysis to identify potential undervoltage problems in distribution systems.
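A minimal sketch of the Monte Carlo idea follows, assuming a drastically simplified two-node feeder with a linearized voltage drop (the paper's full power-flow model is not reproduced); all parameter values are illustrative:

```python
import numpy as np

# Two-node feeder: source -> line (R, X) -> aggregated load.
# Linearized per-unit voltage drop: V ≈ V0 - (R*P + X*Q) / V0.
rng = np.random.default_rng(4)
V0, R, X = 1.0, 0.05, 0.03                  # illustrative per-unit parameters
P = rng.normal(0.8, 0.1, 5000)              # uncertain active power demand (p.u.)
Q = rng.normal(0.3, 0.05, 5000)             # uncertain reactive power demand (p.u.)
V = V0 - (R * P + X * Q) / V0               # node voltage in each sampled scenario

v_mean, v_std = V.mean(), V.std()           # summary statistics of the output
p_undervoltage = np.mean(V < 0.95)          # probabilistic undervoltage risk
```

Instead of a single deterministic voltage, the simulation yields a distribution, so the undervoltage risk can be read off directly as the fraction of scenarios violating the limit.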
Abstract:
The purpose of this study was to evaluate the influence of different light sources for in-office bleaching on the surface microhardness of human enamel. One hundred and five blocks of third molars were distributed among seven groups. The facial enamel surface of each block was polished and the baseline Knoop microhardness of the enamel was assessed with a load of 25 g for 5 s. Subsequently, the enamel was treated with a 35% hydrogen peroxide bleaching agent and photo-activated with halogen light (group A) for 38 s, LED (group B) for 360 s, or high-intensity diode laser (group C) for 4 s. Groups D (38 s), E (360 s), and F (4 s) were treated with the bleaching agent without photo-activation. The control (group G) was only kept in saliva without any treatment. Microhardness was reassessed 1 day after the bleaching treatment, and after 7 and 21 days of storage in artificial saliva. The mean percentages and standard deviations of microhardness, in Knoop Hardness Number, were: A 97.8 ± 13.1 KHN; B 95.5 ± 12.7 KHN; C 84.2 ± 13.6 KHN; D 128.6 ± 20.5 KHN; E 133.9 ± 14.2 KHN; F 123.9 ± 14.2 KHN; G 129.8 ± 18.8 KHN. Statistical analysis (p < 0.05; Tukey test) showed that microhardness percentage values were significantly lower in the groups irradiated with light than in the non-irradiated groups. Furthermore, in the non-irradiated groups, saliva enhanced microhardness over the measurement times. Enamel microhardness decreased when light sources were used during the bleaching process, and artificial saliva was able to increase microhardness when no light was used.