991 results for Error Vector Magnitude (EVM)
Abstract:
BACKGROUND: Traditionally, epidemiologists have considered electrification to be a positive factor. In fact, electrification and plumbing are typical initiatives that represent the integration of an isolated population into modern society, ensuring the control of pathogens and promoting public health. Nonetheless, electrification is always accompanied by night lighting, which attracts insect vectors and changes people's behavior. Although this may lead to new modes of infection and increased transmission of insect-borne diseases, epidemiologists rarely consider the role of night lighting in their surveys. OBJECTIVE: We reviewed the epidemiological evidence concerning the role of lighting in the spread of vector-borne diseases to encourage other researchers to consider it in future studies. DISCUSSION: We present three vector-borne infectious diseases (Chagas disease, leishmaniasis, and malaria) and discuss evidence suggesting that the use of artificial lighting results in behavioral changes among human populations, as well as changes in the prevalence of vector species and in the modes of transmission. CONCLUSION: Despite a surprising lack of studies, existing evidence supports our hypothesis that artificial lighting leads to a higher risk of infection from vector-borne diseases. We believe that this is related not only to the simple attraction of traditional vectors to light sources but also to changes in the behavior of both humans and insects that result in new modes of disease transmission. Considering the ongoing expansion of night lighting in developing countries, additional research on this subject is urgently needed.
Abstract:
The highly expressed D7 protein family of mosquito saliva has previously been shown to act as an anti-inflammatory mediator by binding host biogenic amines and cysteinyl leukotrienes (CysLTs). In this study we demonstrate that AnSt-D7L1, a two-domain member of this group from Anopheles stephensi, retains the CysLT binding function seen in the homolog AeD7 from Aedes aegypti but has lost the ability to bind biogenic amines. Unlike any previously characterized members of the D7 family, AnSt-D7L1 has acquired the important function of binding thromboxane A(2) (TXA(2)) and its analogs with high affinity. When administered to tissue preparations, AnSt-D7L1 abrogated leukotriene C(4) (LTC(4))-induced contraction of guinea pig ileum and contraction of rat aorta induced by the TXA(2) analog U46619. The protein also inhibited platelet aggregation induced by both collagen and U46619 when administered to stirred platelets. The crystal structure of AnSt-D7L1 contains two OBP-like domains and is similar to that of AeD7. In AnSt-D7L1, the binding pocket of the C-terminal domain has been rearranged relative to AeD7, making the protein unable to bind biogenic amines. Structures of the ligand complexes show that CysLTs and TXA(2) analogs both bind in the same hydrophobic pocket of the N-terminal domain. The TXA(2) analog U46619 is stabilized by hydrogen-bonding interactions of its omega-5 hydroxyl group with the phenolic hydroxyl group of Tyr 52. LTC(4) occupies a position very similar to that of LTE(4) in the previously determined structure of its complex with AeD7. As yet, it is not known what, if any, new function has been acquired by the rearranged C-terminal domain. This article presents, to our knowledge, the first structural characterization of a protein from mosquito saliva that inhibits collagen-mediated platelet activation.
Abstract:
Using the published KTeV samples of K(L) -> pi(+/-)e(-/+)nu and K(L) -> pi(+/-)mu(-/+)nu decays, we perform a reanalysis of the scalar and vector form factors based on the dispersive parametrization. We obtain the phase-space integrals I(K)(e) = 0.15446 +/- 0.00025 and I(K)(mu) = 0.10219 +/- 0.00025. For the scalar form factor parametrization, the only free parameter is the normalized form factor value at the Callan-Treiman point, C; our best fit yields ln C = 0.1915 +/- 0.0122. We also study the sensitivity of C to different parametrizations of the vector form factor. The results for the phase-space integrals and C are then used to perform tests of the standard model. Finally, we compare our results with lattice QCD calculations of F(K)/F(pi) and f(+)(0).
Abstract:
We calculate the nuclear cross section for coherent and incoherent vector meson production within the QCD color dipole picture, including saturation effects. Theoretical estimates for scattering on both light and heavy nuclei are given over a wide range of energy.
Abstract:
The energy barrier distribution E(b) of five samples with different concentrations x of Ni nanoparticles has been determined using scaling plots from ac magnetic susceptibility data. The scaling of the imaginary part of the susceptibility, chi''(nu, T) versus T ln(t/tau(0)), remains valid for all samples, which display Ni nanoparticles of similar shape and size. The mean value <E(b)> increases appreciably with increasing x, or, more appropriately, with increasing dipolar interactions between Ni nanoparticles. We argue that such an increase in <E(b)> constitutes a powerful tool for quality control in magnetic recording media technology, where the dipolar interaction plays an important role. (c) 2011 American Institute of Physics. [doi: 10.1063/1.3533911]
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines applied to the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems less important. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
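The two radial basis kernels compared in the study above have standard closed forms. Conventions for the exponential RBF vary across the literature, so the following is a hedged sketch using common definitions, with `radius` standing in for the kernel parameter (the function names are illustrative, not the paper's notation):

```python
import math

def gaussian_rbf(x, y, radius):
    """Gaussian RBF kernel: exp(-||x - y||^2 / (2 * radius^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * radius ** 2))

def exponential_rbf(x, y, radius):
    """Exponential RBF kernel: exp(-||x - y|| / radius).
    (Some authors place 2 * radius^2 in the denominator instead.)"""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-d / radius)

# Both kernels equal 1 for identical inputs and decay with distance;
# the radius controls how fast that decay occurs.
print(gaussian_rbf((0.0, 0.0), (1.0, 0.0), 1.0))  # exp(-0.5)
```

Sweeping the radius over a grid of values, as done with the 26 values above, changes how sharply similarity decays and hence the smoothness of the resulting decision surface.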
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the increased number of biomechanical gait variables reported and, subsequently, by the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that, with only six features, the linear kernel SVM achieved a 100% classification performance rate, showing that these features provided powerful combined information to distinguish age groups. The results of the present work demonstrate the potential of applying this approach to improve knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts. (C) 2010 Elsevier Ltd. All rights reserved.
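The forward feature selection described above follows a standard greedy pattern: starting from an empty set, repeatedly add the feature whose inclusion most improves a score (in the study this score would be SVM classification accuracy). A minimal sketch under that assumption, with a purely illustrative toy scoring function; none of the names below come from the paper:

```python
def forward_select(features, score, k):
    """Greedy forward selection: grow the selected set one feature at a
    time, always adding the candidate that maximizes the score of the
    resulting subset, until k features are chosen."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: pick 2 "features" whose sum lands closest to 6.
print(forward_select([1, 2, 3, 4, 5], lambda s: -abs(sum(s) - 6), 2))  # [5, 1]
```

With 31 candidate variables and k = 6, this procedure evaluates at most 31 + 30 + ... + 26 subsets, which is far cheaper than the roughly 736,000 six-element subsets an exhaustive search would require.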
Abstract:
ARTIOLI, G. G., B. GUALANO, E. FRANCHINI, F. B. SCAGLIUSI, M. TAKESIAN, M. FUCHS, and A. H. LANCHA. Prevalence, Magnitude, and Methods of Rapid Weight Loss among Judo Competitors. Med. Sci. Sports Exerc., Vol. 42, No. 3, pp. 436-442, 2010. Purpose: To identify the prevalence, magnitude, and methods of rapid weight loss among judo competitors. Methods: Athletes (607 males and 215 females; age = 19.3 +/- 5.3 yr, weight = 70 +/- 7.5 kg, height = 170.6 +/- 9.8 cm) completed a previously validated questionnaire developed to evaluate rapid weight loss in judo athletes, which provides a score; the higher the score, the more aggressive the weight loss behaviors. Data were analyzed using descriptive statistics and frequency analyses. Mean questionnaire scores were used to compare specific groups of athletes using, when appropriate, the Mann-Whitney U-test or general linear model one-way ANOVA followed by the Tamhane post hoc test. Results: Eighty-six percent of athletes reported having already lost weight to compete. When heavyweights are excluded, this percentage rises to 89%. Most athletes reported reductions of up to 5% of body weight (mean +/- SD: 2.5 +/- 2.3%). Regarding the most weight ever lost, the majority reported 2%-5%, whereas a considerable fraction of athletes reported reductions of 5%-10% (mean +/- SD: 6 +/- 4%). The number of reductions undergone in a season was 3 +/- 5, and the reductions usually occurred within 7 +/- 7 d. Athletes began cutting weight at 12.6 +/- 6.1 yr. No significant differences were found between the scores obtained by male and female athletes, nor among athletes from different weight classes. Elite athletes scored significantly higher than nonelite athletes, and athletes who began cutting weight earlier also scored higher than those who began later. Conclusions: Rapid weight loss is highly prevalent among judo competitors. The level of aggressiveness in weight management behaviors seems not to be influenced by gender or weight class, but it does seem to be influenced by competitive level and by the age at which athletes began cutting weight.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
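As described, the Identification Index of a branch is simply the fraction of its adjacent measurements whose normalized residuals exceed the threshold. A minimal sketch of that ratio (the function name and the use of absolute residual values are assumptions, not the paper's notation):

```python
def identification_index(adjacent_residuals, threshold):
    """Identification Index (II) of a branch: the fraction of
    measurements adjacent to the branch whose normalized residuals
    exceed the given threshold."""
    if not adjacent_residuals:
        return 0.0
    flagged = sum(1 for r in adjacent_residuals if abs(r) > threshold)
    return flagged / len(adjacent_residuals)

# Two of the four measurements adjacent to this branch exceed a
# typical threshold of 3.0, so the branch gets II = 0.5 and would be
# flagged as suspicious for Stage 2.
print(identification_index([0.5, 3.2, 4.1, 1.0], 3.0))  # 0.5
```

Branches with a high II would then have their parameters appended to the state vector for the Stage 2 joint estimation.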
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies how much new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked; in other words, that measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalized measurement residual amplitude, the corresponding normalized composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general, the most important for newer machines. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
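The abstract does not state which model form is used; as a hedged illustration, a common first step in thermal error modelling is an ordinary least-squares fit of measured drift against a temperature reading taken during warm-up (all data values below are synthetic):

```python
def fit_linear(temps, errors):
    """Ordinary least-squares fit of error ~ a + b * temp, a simple
    first-order model of thermally induced drift."""
    n = len(temps)
    mt, me = sum(temps) / n, sum(errors) / n
    sxx = sum((t - mt) ** 2 for t in temps)
    sxy = sum((t - mt) * (e - me) for t, e in zip(temps, errors))
    b = sxy / sxx          # slope: drift per degree
    a = me - b * mt        # intercept
    return a, b

# Synthetic warm-up data: drift grows 1 um per degree C above 20 C.
a, b = fit_linear([20.0, 25.0, 30.0], [0.0, 5.0, 10.0])
print(a, b)  # -20.0 1.0
```

Once fitted, the model's prediction at the current temperature can be subtracted from the commanded position to compensate for the thermal error.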
Abstract:
Artesian confined aquifers do not need pumping energy, and water from the aquifer flows naturally at the wellhead. This study proposes a correction to the method for analyzing flowing well tests presented by Jacob and Lohman (1952), accounting for the head losses due to friction in the well casing. The application of the proposed correction allowed the determination of a transmissivity (T = 411 m(2)/d) and a storage coefficient (S = 3 x 10(-4)) that appear to be representative of the confined Guarani Aquifer in the study area. Ignoring the correction for head losses in the well casing, the error in the transmissivity evaluation is about 18%; for the storage coefficient, the error is of 5 orders of magnitude, resulting in a physically unacceptable value. The effect of the proposed correction on the calculated radius of the cone of depression and the corresponding well interference is also discussed.
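The paper's exact correction is not reproduced here, but friction loss in a well casing is conventionally estimated with the Darcy-Weisbach equation. A sketch under that assumption, with made-up geometry and friction-factor values for illustration:

```python
import math

def casing_head_loss(q, length, diameter, friction_factor, g=9.81):
    """Darcy-Weisbach head loss h_f = f * (L/D) * v^2 / (2g) in a
    circular casing, with mean velocity v = q / (pi * D^2 / 4)."""
    velocity = q / (math.pi * diameter ** 2 / 4.0)
    return friction_factor * (length / diameter) * velocity ** 2 / (2.0 * g)

# Hypothetical well: 100 m casing of 0.2 m diameter, f = 0.02, with the
# discharge chosen so the mean casing velocity is 1 m/s.
q = 1.0 * math.pi * 0.2 ** 2 / 4.0
h_f = casing_head_loss(q, 100.0, 0.2, 0.02)
# The measured drawdown would then be reduced by h_f before applying
# the Jacob-Lohman flowing-well analysis.
```

Because transmissivity and storage coefficient are both inferred from the drawdown, even a sub-meter friction loss can shift the fitted parameters appreciably, consistent with the errors quoted above.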
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
The purpose of this article is to present a quantitative analysis of the human failure contribution to collisions and/or groundings of oil tankers, considering the recommendation of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the methodology employed is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. This methodology is then applied to a ship operating on the Brazilian coast, and thereafter the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. An operator will therefore be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Although this study does not present a new methodology, it can be considered a reference for human reliability analysis in the maritime industry, which, despite having some guides for risk analysis, has few studies on human reliability effectively applied to the sector.