87 results for Linear Entropy
Abstract:
Objective To evaluate the influence of oral contraceptives (OCs) containing 20 µg ethinylestradiol (EE) and 150 µg gestodene (GEST) on the autonomic modulation of heart rate (HR) in women. Methods One hundred and fifty-five women aged 24 +/- 2 years were divided into four groups according to their physical activity and the use or not of an OC: active-OC, active-non-OC (NOC), sedentary-OC, and sedentary-NOC. The heart rate was registered in real time from the electrocardiogram signal for 15 minutes in the supine position. Heart rate variability (HRV) was analysed using Shannon's entropy (SE), conditional entropy (complexity index [CInd] and normalised CInd [NCI]), and symbolic analysis (0V%, 1V%, 2LV%, and 2ULV%). For statistical analysis, the Kruskal-Wallis test with Dunn's post hoc test and the Wilcoxon test were applied (p < 0.05 was considered significant). Results Treatment with this OC caused no significant changes in SE, CInd, NCI, or symbolic analysis in either the active or the sedentary groups. Active groups presented higher values for SE and 2ULV%, and lower values for 0V%, when compared to sedentary groups (p < 0.05). Conclusion HRV patterns differed depending on lifestyle; the non-linear methods applied were highly reliable for identifying these changes. The use of OCs containing 20 µg EE and 150 µg GEST does not influence HR autonomic modulation.
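A minimal sketch of the kind of non-linear HRV analysis described above, combining a Shannon entropy estimate with Porta-style symbolic analysis (0V/1V/2LV/2ULV pattern families). The six-level quantization, the three-beat pattern length and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def symbolic_hrv(rr_ms, n_levels=6, pattern_len=3):
    """Quantize an RR-interval series, form overlapping 3-beat symbolic patterns,
    and return the Shannon entropy (bits) of the pattern distribution together
    with the percentages of the 0V, 1V, 2LV and 2ULV pattern families."""
    rr = np.asarray(rr_ms, dtype=float)
    # Uniform quantization of the full RR range into n_levels symbols
    edges = np.linspace(rr.min(), rr.max(), n_levels + 1)
    symbols = np.digitize(rr, edges[1:-1])

    # Overlapping patterns of length pattern_len
    patterns = np.stack([symbols[i:len(symbols) - pattern_len + i + 1]
                         for i in range(pattern_len)], axis=1)

    # Shannon entropy of the pattern distribution
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    shannon_entropy = -np.sum(p * np.log2(p))

    # Classify each pattern by its number and kind of variations
    fam = {"0V": 0, "1V": 0, "2LV": 0, "2ULV": 0}
    for a, b, c in patterns:
        d1, d2 = b - a, c - b
        if d1 == 0 and d2 == 0:
            fam["0V"] += 1            # no variation
        elif d1 == 0 or d2 == 0:
            fam["1V"] += 1            # one variation
        elif d1 * d2 > 0:
            fam["2LV"] += 1           # two like variations
        else:
            fam["2ULV"] += 1          # two unlike variations
    total = len(patterns)
    percentages = {k: 100.0 * v / total for k, v in fam.items()}
    return shannon_entropy, percentages
```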
Abstract:
Context: Melanocortin 4 receptor (MC4R) deficiency is characterized by increased linear growth greater than expected for the degree of obesity. Objective: The objective of the investigation was to study the somatotroph axis in obese MC4R-deficient patients and equally obese controls. Patients and Methods: We obtained anthropometric measurements and insulin concentrations in 153 MC4R-deficient subjects and 1392 controls matched for age and severity of obesity. We measured fasting IGF-I, IGF-II, IGF binding protein (IGFBP)-1, IGFBP-3, and acid-labile subunit levels in a subset of 33 MC4R-deficient patients and 36 control subjects. We examined pulsatile GH secretion in six adult MC4R-deficient subjects and six obese controls. Results: Height SD score was significantly greater in MC4R-deficient children under 5 yr of age compared with controls (mean +/- SEM: 2.3 +/- 0.06 vs. 1.8 +/- 0.04, P < 0.001), an effect that persisted throughout childhood. Final height (cm) was greater in MC4R-deficient men (mean +/- SEM 173 +/- 2.5 vs. 168 +/- 2.1, P < 0.001) and women (mean 165 +/- 2.1 vs. 158 +/- 1.9, P < 0.001). Fasting IGF-I, IGF-II, acid-labile subunit, and IGFBP-3 concentrations were similar in the two groups. GH levels were markedly suppressed in obese controls, but pulsatile GH secretion was retained in MC4R deficiency. The mean maximal GH secretion rate per burst (P < 0.05) and mass per burst (P < 0.05) were increased in MC4R deficiency, consistent with increased pulsatile and total GH secretion. Fasting insulin levels were markedly elevated in MC4R-deficient children. Conclusions: In MC4R deficiency, increased linear growth in childhood leads to increased adult final height, greater than predicted by obesity alone. GH pulsatility is maintained in MC4R deficiency, a finding consistent with animal studies, suggesting a role for MC4R in controlling hypothalamic somatostatinergic tone. Fasting insulin levels are significantly higher in children carrying MC4R mutations. Both of these factors may contribute to the accelerated growth phenotype characteristic of MC4R deficiency. (J Clin Endocrinol Metab 96: E181-E188, 2011)
Abstract:
Background and Purpose: Functional MRI is a powerful tool to investigate recovery of brain function in patients with stroke. An inherent assumption in functional MRI data analysis is that the blood oxygenation level-dependent (BOLD) signal is stable over the course of the examination. In this study, we evaluated the validity of this assumption in patients with chronic stroke. Methods: Fifteen patients performed a simple motor task with repeated epochs using the paretic and the unaffected hand in separate runs. The corresponding BOLD signal time courses were extracted from the primary and supplementary motor areas of both hemispheres. Statistical maps were obtained by the conventional General Linear Model and by a parametric General Linear Model. Results: Stable BOLD amplitude was observed when the task was executed with the unaffected hand. Conversely, the BOLD signal amplitude in both primary and supplementary motor areas was progressively attenuated in every patient when the task was executed with the paretic hand. The conventional General Linear Model analysis failed to detect brain activation during movement of the paretic hand. However, the proposed parametric General Linear Model corrected the misdetection problem and showed robust activation in both primary and supplementary motor areas. Conclusions: The use of data analysis tools that are built on the premise of a stable BOLD signal may lead to misdetection of functional regions and underestimation of brain activity in patients with stroke. The present data urge caution when relying on the BOLD response as a marker of brain reorganization in patients with stroke. (Stroke. 2010; 41:1921-1926.)
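The abstract does not spell out the form of the parametric General Linear Model, so the sketch below only illustrates one common way to handle an epoch-to-epoch attenuation of the BOLD amplitude: adding a mean-centred, linearly modulated copy of the task regressor to the design matrix. The HRF shape, the event-style onset coding and all names are assumptions, not the authors' method.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """A double-gamma haemodynamic response function (assumed canonical-like form)."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

def parametric_glm(bold, onsets, tr, n_scans):
    """Fit a GLM with a conventional task regressor plus the same events
    linearly modulated across epochs, so a BOLD amplitude that decays from
    epoch to epoch loads on the modulated column instead of being missed."""
    h = hrf(np.arange(0, 32, tr))

    task = np.zeros(n_scans)
    modulated = np.zeros(n_scans)
    for k, onset in enumerate(onsets):                 # k indexes the epoch
        i = int(round(onset / tr))
        task[i] = 1.0
        modulated[i] = k - (len(onsets) - 1) / 2.0     # mean-centred linear trend

    X = np.column_stack([
        np.convolve(task, h)[:n_scans],                # conventional regressor
        np.convolve(modulated, h)[:n_scans],           # parametric (attenuation) regressor
        np.ones(n_scans),                              # intercept
    ])
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas, X
```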
Abstract:
Background: This study of a chronic porcine postinfarction model examined whether linear epicardial cryoablation was capable of creating large, homogeneous lesions in regions of the myocardium including scarred ventricle. Endocardial and epicardial focal cryolesions were also compared to determine whether there were significant differences in lesion characteristics. Methods: Eighty focal endocardial and 28 focal epicardial cryoapplications were delivered to the normal ventricular myocardium of eight goats and four pigs, and 21 linear cryolesions were applied along the border of infarcted epicardial tissue in a chronic porcine infarct model (six swine). Results: Focal endocardial cryolesions in normal animals measured 9.7 +/- 0.4 mm (length) by 7.3 +/- 1.4 mm (width) by 4.8 +/- 0.2 mm (depth), while epicardial lesions measured 10.2 +/- 1.4 mm (length) by 7.7 +/- 2 mm (width) by 4.6 +/- 0.9 mm (depth); P > 0.05. Linear epicardial cryolesions in the chronic porcine infarct model measured 36.5 +/- 7.8 mm (length) by 8.2 +/- 1.3 mm (width) by 6.0 +/- 1.2 mm (depth). The mean depth of linear cryolesions applied to the border of the infarct scar was 7 +/- 0.7 mm, as measured by magnetic resonance imaging. Conclusions: Cryoablation can create deep lesions when delivered to the ventricular epicardium. Endocardial and epicardial cryolesions created by a focal cryoablation catheter are similar in size and depth. The ability to rapidly create deep linear cryolesions may prove to be beneficial in substrate-based catheter ablation of ventricular arrhythmias.
Abstract:
The objective was to evaluate the influence of dental metallic artefacts on implant sites using multislice and cone-beam computed tomography techniques. Ten dried human mandibles were scanned twice by each technique, with and without dental metallic artefacts. Metallic restorations were placed at the top of the alveolar ridge adjacent to the mental foramen region for the second scanning. Linear measurements (thickness and height) for each cross-section were performed by a single examiner using computer software. All mandibles were analysed at both the right and the left mental foramen regions. For the multislice technique, the dental metallic artefact produced an increase of 5% in bone thickness and a reduction of 6% in bone height; no significant differences (p > 0.05) were detected when comparing measurements performed with and without metallic artefacts. With respect to the cone-beam technique, the dental metallic artefact produced an increase of 6% in bone thickness and a reduction of 0.68% in bone height. No significant differences (p > 0.05) were observed when comparing measurements performed with and without metallic artefacts. The presence of dental metallic artefacts did not alter the linear measurements obtained with either technique, although their presence made locating the alveolar bone crest more difficult.
Abstract:
Objective. The purpose of this research was to provide further evidence to demonstrate the precision and accuracy of maxillofacial linear and angular measurements obtained from cone-beam computed tomography (CBCT) images. Study design. The study population consisted of 15 dry human skulls that were submitted to CBCT, and 3-dimensional (3D) images were generated. Linear and angular measurements were based on conventional craniometric anatomical landmarks and were identified on the 3D-CBCT images twice by each of 2 radiologists, independently. Subsequently, physical measurements were made by a third examiner using a digital caliper and a digital goniometer. Results. The results demonstrated no statistically significant difference between inter- and intra-examiner analyses. Regarding the accuracy test, no statistically significant differences were found in the comparison between the physical and the CBCT-based linear and angular measurements for either examiner (P = .968 and .915, P = .844 and .700, respectively). Conclusions. 3D-CBCT images can be used to obtain dimensionally accurate linear and angular measurements from bony maxillofacial structures and landmarks. (Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2009; 108: 430-436)
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may influence the follow-up of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results have shown that even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method could successfully point out the lack of precession.
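A compact sketch of the cross-entropy method for continuous multi-extremal optimization of the kind described above: sample candidate parameter vectors from a Gaussian, keep the lowest-cost elite fraction, and re-fit the Gaussian to the elite. The sinusoidal toy cost standing in for a precession model, and all parameter values, are illustrative assumptions.

```python
import numpy as np

def cross_entropy_minimize(cost, mu0, sigma0, n_samples=200, elite_frac=0.1,
                           n_iter=100, smoothing=0.7, rng=None):
    """Generic cross-entropy minimizer: iteratively sample, select the elite
    set, and update the mean and spread of the Gaussian sampling distribution."""
    rng = np.random.default_rng(rng)
    mu, sigma = np.array(mu0, float), np.array(sigma0, float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        costs = np.array([cost(s) for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        # Smoothed update keeps the search from collapsing too early
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
    return mu

# Toy use: recover (amplitude, period, phase) of a sinusoidal offset model
# from synthetic data (a hypothetical stand-in for the precession model).
t = np.linspace(0, 10, 50)
data = 1.5 * np.sin(2 * np.pi * t / 4.0 + 0.3)
cost = lambda p: np.sum((p[0] * np.sin(2 * np.pi * t / p[1] + p[2]) - data) ** 2)
best = cross_entropy_minimize(cost, mu0=[1, 3, 0], sigma0=[2, 2, 2])
```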
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. These images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with accuracy similar to that obtained from the traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
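The model image described above can be sketched as a sum of elliptical Gaussian components, each with six parameters, together with the squared-difference performance function that the cross-entropy search would minimize. Interpreting "amplitude" as the major-axis width and relating the minor axis to eccentricity as below are illustrative assumptions, not necessarily the authors' parameterization.

```python
import numpy as np

def elliptical_gaussian(x, y, x0, y0, peak, major, ecc, theta):
    """One elliptical Gaussian component: peak position (x0, y0), peak
    intensity, major-axis width, eccentricity (assumed minor = major *
    sqrt(1 - ecc**2)) and orientation angle theta of the major axis."""
    minor = major * np.sqrt(1.0 - ecc ** 2)
    dx, dy = x - x0, y - y0
    u = dx * np.cos(theta) + dy * np.sin(theta)     # coordinate along major axis
    v = -dx * np.sin(theta) + dy * np.cos(theta)    # coordinate along minor axis
    return peak * np.exp(-0.5 * ((u / major) ** 2 + (v / minor) ** 2))

def model_image(params, shape):
    """Sum of N_s elliptical Gaussian sources; params is a flat array of
    six parameters per source, in the order used by elliptical_gaussian."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for p in np.reshape(params, (-1, 6)):
        img += elliptical_gaussian(x, y, *p)
    return img

def performance(params, observed):
    """Cost proportional to the sum of squared differences between the
    model and observed images, as minimized by the cross-entropy search."""
    return np.sum((model_image(params, observed.shape) - observed) ** 2)
```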
Abstract:
Electromagnetic induction (EMI) results obtained with EM38 equipment in the vertical magnetic dipole (VMD) configuration are presented. Performance in locating metallic pipes and electrical cables is compared as a function of instrumental drift correction using linear and quadratic adjustments under controlled conditions. The metallic pipes and electrical cables are buried at the IAG/USP shallow geophysical test site in Sao Paulo City, Brazil. Results show that apparent electrical conductivity and magnetic susceptibility data were affected by ambient temperature variation. In order to obtain better contrast between background and metallic targets it was necessary to correct the drift. This correction was accomplished by using linear and quadratic relations between conductivity/susceptibility and temperature, allowing comparative studies. The correction of temperature drift using a quadratic relation was effective: all metallic targets were located, and the response of deeper targets was also improved. (C) 2010 Elsevier B.V. All rights reserved.
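A hedged sketch of the kind of drift correction discussed: fit a linear (degree 1) or quadratic (degree 2) relation between the measured quantity and ambient temperature and remove the fitted trend. The paper's exact procedure may differ (for example in how corrected values are re-referenced); the function name and the re-centring step are assumptions.

```python
import numpy as np

def drift_correct(values, temperature, degree=2):
    """Remove instrumental drift by fitting a polynomial relation between the
    measured quantity (apparent conductivity or susceptibility) and ambient
    temperature, then subtracting the fitted trend."""
    values = np.asarray(values, dtype=float)
    temperature = np.asarray(temperature, dtype=float)
    coeffs = np.polyfit(temperature, values, degree)
    trend = np.polyval(coeffs, temperature)
    # Re-centre on the mean so corrected values stay in physical units
    return values - trend + values.mean()
```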
Abstract:
Prestes, J, Frollini, AB, De Lima, C, Donatto, FF, Foschini, D, de Marqueti, RC, Figueira Jr, A, and Fleck, SJ. Comparison between linear and daily undulating periodized resistance training to increase strength. J Strength Cond Res 23(9): 2437-2442, 2009-Determining the most effective periodization model for strength and hypertrophy is an important step for strength and conditioning professionals. The aim of this study was to compare the effects of linear (LP) and daily undulating periodized (DUP) resistance training on body composition and maximal strength levels. Forty men aged 21.5 +/- 8.3 years and with a minimum of 1 year of strength training experience were assigned to an LP (n = 20) or a DUP group (n = 20). Subjects were tested for maximal strength in the bench press, leg press 45 degrees, and arm curl (1 repetition maximum [RM]) at baseline (T1), after 8 weeks (T2), and after 12 weeks of training (T3). Increases of 18.2% and 25.08% in bench press 1 RM were observed for the LP and DUP groups at T3 compared with T1, respectively (p <= 0.05). In the leg press 45 degrees, the LP group exhibited an increase of 24.71% and the DUP group of 40.61% at T3 compared with T1. Additionally, DUP showed an increase of 12.23% at T2 compared with T1 and of 25.48% at T3 compared with T2. For the arm curl exercise, the LP group increased by 14.15% and the DUP group by 23.53% at T3 when compared with T1. An increase of 20% was also found at T2 when compared with T1 for DUP. Although the DUP group increased strength the most in all exercises, no statistical differences were found between groups. In conclusion, undulating periodized strength training induced higher increases in maximal strength than the linear model in strength-trained men. For maximizing strength increases, daily intensity and volume variations were more effective than weekly variations.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions for the Kautz basis and for generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
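A simplified sketch of the Wiener/OBF structure described above, restricted to a Laguerre basis with a single real pole and a second-order Volterra polynomial fitted by least squares. The exact pole gradients and the Levenberg-Marquardt pole optimization, which are the paper's contribution, are not reproduced here; the model order and all names are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_filter_bank(u, pole, n_functions):
    """Outputs of a discrete Laguerre orthonormal filter bank with real pole
    (|pole| < 1) driven by input u: the linear dynamics of the OBF model."""
    gain = np.sqrt(1.0 - pole ** 2)
    outputs = []
    x = lfilter([gain], [1.0, -pole], u)             # L_0(z) = gain / (1 - pole z^-1)
    outputs.append(x)
    for _ in range(1, n_functions):
        x = lfilter([-pole, 1.0], [1.0, -pole], x)   # all-pass factor (z^-1 - pole)/(1 - pole z^-1)
        outputs.append(x)
    return np.column_stack(outputs)

def obf_volterra_fit(u, y, pole, n_functions=4):
    """Second-order OBF Volterra model: Laguerre filter outputs feed a static
    polynomial (constant, linear and quadratic cross terms) fitted by least squares."""
    L = laguerre_filter_bank(u, pole, n_functions)
    cols = [np.ones(len(u))]
    cols += [L[:, i] for i in range(n_functions)]                    # first-order terms
    cols += [L[:, i] * L[:, j] for i in range(n_functions)
             for j in range(i, n_functions)]                         # second-order terms
    Phi = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta, Phi
```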
Abstract:
In this article, we are interested in evaluating different parameter estimation strategies for a multiple linear regression model. To estimate the model parameters, data from a clinical trial were used in which the interest was to verify whether the mechanical test of the maximum force property (EM-FM) is associated with femoral mass, femoral diameter and the experimental group of ovariectomized rats of the species Rattus norvegicus albinus, Wistar variety. For the estimation of the model parameters, three methodologies are compared: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the Bootstrap method, based on resampling procedures.
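A small illustration, on simulated data, of two of the three estimation strategies compared: classical least squares and a case-resampling bootstrap (the Bayesian approach would additionally require a prior and a sampler). The simulated covariates standing in for femoral mass, femoral diameter and group are hypothetical.

```python
import numpy as np

def ols(X, y):
    """Classical least-squares estimates for a multiple linear regression."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bootstrap_ols(X, y, n_boot=2000, rng=None):
    """Case-resampling bootstrap: refit the regression on row-resampled
    datasets to approximate the sampling distribution of the coefficients."""
    rng = np.random.default_rng(rng)
    n = len(y)
    betas = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        betas[b] = ols(X[idx], y[idx])
    return betas.mean(axis=0), betas.std(axis=0, ddof=1)

# Hypothetical simulated data standing in for the maximum-force outcome
# regressed on femoral mass, femoral diameter and experimental group.
rng = np.random.default_rng(0)
n = 60
mass, diam = rng.normal(10, 1, n), rng.normal(4, 0.5, n)
group = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), mass, diam, group])
y = X @ np.array([5.0, 2.0, 1.5, -3.0]) + rng.normal(0, 1, n)
beta_ols = ols(X, y)
beta_boot_mean, beta_boot_se = bootstrap_ols(X, y)
```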
Abstract:
Linear mixed models were developed to handle clustered data and have been a topic of increasing interest in statistics for the past 50 years. Generally, the normality (or symmetry) of the random effects is a common assumption in linear mixed models, but it may sometimes be unrealistic, obscuring important features of among-subjects variation. In this article, we utilize skew-normal/independent distributions as a tool for robust modeling of linear mixed models under a Bayesian paradigm. The skew-normal/independent distributions are an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions as special cases, providing an appealing robust alternative to the routine use of symmetric distributions in this type of model. The methods developed are illustrated using a real data set from the Framingham cholesterol study. (C) 2009 Elsevier B.V. All rights reserved.
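To make the skew-normal/independent class concrete, the sketch below draws from a skew-normal and from a skew-t using their standard stochastic representations (scaling a skew-normal draw by the square root of a mean-one Gamma mixing variable gives the skew-t member of the family). Fitting the full Bayesian linear mixed model is beyond this sketch; function names and defaults are assumptions.

```python
import numpy as np

def rvs_skew_normal(n, loc=0.0, scale=1.0, shape=0.0, rng=None):
    """Skew-normal draws via the representation delta*|T0| + sqrt(1-delta^2)*T1,
    with delta = shape / sqrt(1 + shape^2) and T0, T1 independent standard normals."""
    rng = np.random.default_rng(rng)
    delta = shape / np.sqrt(1.0 + shape ** 2)
    t0, t1 = np.abs(rng.standard_normal(n)), rng.standard_normal(n)
    return loc + scale * (delta * t0 + np.sqrt(1.0 - delta ** 2) * t1)

def rvs_skew_t(n, loc=0.0, scale=1.0, shape=0.0, df=4.0, rng=None):
    """Skew-t as a skew-normal/independent distribution: divide a centred
    skew-normal draw by the square root of a Gamma(df/2, rate=df/2) mixing
    variable to thicken the tails."""
    rng = np.random.default_rng(rng)
    z = rvs_skew_normal(n, 0.0, 1.0, shape, rng)
    u = rng.gamma(df / 2.0, 2.0 / df, size=n)   # mean-one Gamma mixing variable
    return loc + scale * z / np.sqrt(u)
```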
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first one consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other process consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present some sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
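The column generation idea can be sketched for the cutting-stock side alone: solve the LP master over the current set of cutting patterns, read the dual prices of the demand constraints, and price a new pattern with an unbounded-knapsack dynamic program, stopping when no pattern has negative reduced cost. The coupling with lot sizing, setups and integer pattern frequencies treated in the paper is omitted; piece widths and the roll width are assumed integer, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def best_pattern(values, widths, capacity):
    """Pricing step: unbounded-knapsack DP for the cutting pattern (integer
    count per piece type) of maximum total dual value fitting in one roll."""
    best = [0.0] * (capacity + 1)
    take = [-1] * (capacity + 1)
    for c in range(1, capacity + 1):
        best[c], take[c] = best[c - 1], -1
        for i, w in enumerate(widths):
            if w <= c and best[c - w] + values[i] > best[c]:
                best[c], take[c] = best[c - w] + values[i], i
    pattern, c = [0] * len(widths), capacity
    while c > 0:
        if take[c] == -1:
            c -= 1
        else:
            pattern[take[c]] += 1
            c -= widths[take[c]]
    return best[capacity], pattern

def cutting_stock_lp(widths, demands, roll_width, max_iter=50):
    """Gilmore-Gomory column generation for the LP relaxation of the
    cutting stock problem (minimize the number of rolls cut)."""
    n = len(widths)
    patterns = []
    for i in range(n):                               # trivial starting patterns
        col = [0] * n
        col[i] = roll_width // widths[i]
        patterns.append(col)
    for _ in range(max_iter):
        A = np.array(patterns, dtype=float).T        # demand rows x pattern columns
        res = linprog(c=np.ones(A.shape[1]), A_ub=-A,
                      b_ub=-np.array(demands, dtype=float),
                      bounds=(0, None), method="highs")
        duals = -res.ineqlin.marginals               # dual prices of A x >= d
        value, pattern = best_pattern(duals, widths, roll_width)
        if value <= 1.0 + 1e-9:                      # no column with negative reduced cost
            break
        patterns.append(pattern)
    return res.fun, patterns

# Example: roll width 100, piece widths 45, 36, 31, demands 97, 610, 395
rolls_lp, cols = cutting_stock_lp([45, 36, 31], [97, 610, 395], 100)
```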
Abstract:
We introduce a problem called maximum common characters in blocks (MCCB), which arises in applications of approximate string comparison, particularly in the unification of possibly erroneous textual data coming from different sources. We show that this problem is NP-complete, but can nevertheless be solved satisfactorily using integer linear programming for instances of practical interest. Two integer linear formulations are proposed and compared in terms of their linear relaxations. We also compare the results of the approximate matching with other known measures such as the Levenshtein (edit) distance. (C) 2008 Elsevier B.V. All rights reserved.
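The Levenshtein (edit) distance used above as a comparison measure can be computed with the classic dynamic program below; this is only the reference measure mentioned in the text, not the MCCB formulation or the proposed integer linear programs.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insertions, deletions,
    substitutions) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Example: two possibly erroneous renderings of the same textual record
print(levenshtein("Rua Sao Paulo 123", "R. Sao Paulo, 123"))
```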