69 results for linear complexity
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools, i.e., the classification of the temporal organization of a data set might indicate that one series is relatively less ordered than another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window length and matching-error tolerance. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control-parameter values, and random noise). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn correctly does. In order to validate the tool, we performed shuffled and surrogate data analysis. Statistical analysis confirmed the consistency of the method. (C) 2008 Elsevier Ltd. All rights reserved.
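For orientation, a minimal sketch of the underlying ApEn statistic that vApEn evaluates repeatedly (a standard textbook implementation, not the authors' code; the choice of r as a fraction of the series' standard deviation is an assumption):

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series (minimal sketch).

    m : embedding (window) length; r : matching tolerance, taken here
    as a fraction of the series' standard deviation (an assumption)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def phi(m):
        n = len(x) - m + 1
        # All length-m templates, compared with the Chebyshev distance.
        templates = np.array([x[i:i + m] for i in range(n)])
        counts = np.array([
            np.sum(np.max(np.abs(templates - t), axis=1) <= tol)
            for t in templates
        ])
        return np.mean(np.log(counts / n))

    return phi(m) - phi(m + 1)

# A regular sine wave should yield lower ApEn than white noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 300)
print(apen(np.sin(t)) < apen(rng.standard_normal(300)))  # True
```

Sweeping `m` and `r` over a grid of values, as the abstract describes, is what turns this single statistic into the volumetric variant.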
Abstract:
To analyze the differential recruitment of the raphe nuclei during different phases of feeding behavior, rats were subjected to a food-restriction schedule (food for 2 h/day for 15 days). The animals were then exposed to different feeding conditions, constituting the experimental groups: search for food (MFS), food ingestion (MFI), satiety (MFSa) and food-restriction control (MFC). A baseline condition (BC) group was included as a further control. The MFI and MFC groups, which presented greater autonomic and somatic activation, had more FOS-immunoreactive (FOS-IR) neurons. The MFI group presented more labeled cells in the linear (LRN) and dorsal (DRN) nuclei; the MFC group showed more labeling in the median (MRN), pontine (PRN), magnus (NRM) and obscurus (NRO) nuclei; and the MFSa group had more labeled cells in the pallidus (NRP). The BC group exhibited the lowest number of reactive cells. The PRN presented the highest percentage of activation in the raphe, while the DRN presented the lowest. Additional experiments revealed few double-labeled (FOS-IR + 5-HT-IR) cells within the raphe nuclei in the MFI group, suggesting little serotonergic activation in the raphe during food ingestion. These findings suggest a differential recruitment of raphe nuclei during various phases of feeding behavior. Such findings may reflect changes in behavioral state (e.g., food-induced arousal versus sleep) that lead to greater motor activation and, consequently, increased FOS expression. While these data are consistent with the idea that the raphe system acts as a gain setter for autonomic and somatic activities, the functional complexity of the raphe is not completely understood. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
Prestes, J, Frollini, AB, De Lima, C, Donatto, FF, Foschini, D, de Marqueti, RC, Figueira Jr, A, and Fleck, SJ. Comparison between linear and daily undulating periodized resistance training to increase strength. J Strength Cond Res 23(9): 2437-2442, 2009-Determining the most effective periodization model for strength and hypertrophy is an important step for strength and conditioning professionals. The aim of this study was to compare the effects of linear (LP) and daily undulating periodized (DUP) resistance training on body composition and maximal strength levels. Forty men aged 21.5 +/- 8.3 years and with a minimum of 1 year of strength-training experience were assigned to an LP (n = 20) or a DUP group (n = 20). Subjects were tested for maximal strength in the bench press, leg press 45 degrees, and arm curl (1 repetition maximum [RM]) at baseline (T1), after 8 weeks (T2), and after 12 weeks of training (T3). Increases of 18.2% and 25.08% in bench press 1RM were observed for the LP and DUP groups at T3 compared with T1, respectively (p <= 0.05). In the leg press 45 degrees, the LP group exhibited an increase of 24.71% and the DUP group of 40.61% at T3 compared with T1. Additionally, DUP showed an increase of 12.23% at T2 compared with T1 and 25.48% at T3 compared with T2. For the arm curl exercise, the LP group increased 14.15% and the DUP group 23.53% at T3 compared with T1. An increase of 20% was also found at T2 compared with T1 for DUP. Although the DUP group increased strength the most in all exercises, no statistical differences were found between groups. In conclusion, undulating periodized strength training induced higher increases in maximal strength than the linear model in strength-trained men. For maximizing strength increases, daily intensity and volume variations were more effective than weekly variations.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost-function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
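As a concrete illustration of the orthonormal-basis filters whose poles are being optimized, the following is a minimal sketch of a discrete-time Laguerre filter bank (the single-real-pole particular case the abstract mentions; this is the standard textbook construction, not the authors' implementation):

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_bank(u, pole, n_filters):
    """Outputs of a discrete-time Laguerre filter bank (minimal sketch).

    'pole' is the single real pole parameterizing the basis, |pole| < 1.
    First section: sqrt(1 - p^2) / (z - p); each subsequent section
    cascades the all-pass (1 - p z) / (z - p)."""
    p = pole
    gain = np.sqrt(1.0 - p**2)
    y = lfilter([0.0, gain], [1.0, -p], u)     # first Laguerre filter
    outs = [y]
    for _ in range(n_filters - 1):
        y = lfilter([-p, 1.0], [1.0, -p], y)   # all-pass cascade
        outs.append(y)
    return np.array(outs)

# Driving the bank with an impulse returns the Laguerre functions
# themselves, which form an orthonormal set in l2.
impulse = np.zeros(400); impulse[0] = 1.0
phis = laguerre_bank(impulse, pole=0.5, n_filters=4)
print(np.round(np.sum(phis**2, axis=1), 6))   # each ~ 1.0
```

In the OBF Volterra setting, the static polynomial nonlinearity is then applied to these filter outputs; the paper's contribution is computing exact gradients of such outputs with respect to the pole.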
Abstract:
In this article, we are interested in evaluating different parameter-estimation strategies for a multiple linear regression model. To estimate the model parameters, we used data from a clinical trial whose goal was to verify whether the mechanical test of the maximum-force property (EM-FM) is associated with femoral mass, femoral diameter, and the experimental group of ovariectomized rats of the species Rattus norvegicus albinus, Wistar variety. Three methodologies are compared for estimating the model parameters: the classical methodology, based on the least-squares method; the Bayesian methodology, based on Bayes' theorem; and the bootstrap method, based on resampling procedures.
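A minimal sketch of the third strategy, the case-resampling bootstrap for a multiple linear regression (synthetic data, not the rat-femur trial; the number of replicates and the confidence level are illustrative assumptions):

```python
import numpy as np

def bootstrap_ols(X, y, n_boot=2000, seed=0):
    """Case-resampling bootstrap for multiple linear regression
    (minimal sketch): refit OLS on rows resampled with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])      # add intercept column
    betas = np.empty((n_boot, Xd.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        betas[b] = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)[0]
    return betas                               # rows: bootstrap replicates

# Synthetic illustration: y = 1 + 2 x + noise.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1 + 2 * x + rng.normal(scale=0.5, size=100)
reps = bootstrap_ols(x.reshape(-1, 1), y)
lo, hi = np.percentile(reps[:, 1], [2.5, 97.5])  # percentile CI for the slope
```

The spread of the replicate slopes gives standard errors and percentile intervals without the normality assumptions the classical approach relies on.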
Abstract:
Linear mixed models were developed to handle clustered data and have been a topic of increasing interest in statistics for the past 50 years. Generally, the normality (or symmetry) of the random effects is a common assumption in linear mixed models, but it may sometimes be unrealistic, obscuring important features of among-subject variation. In this article, we utilize skew-normal/independent distributions as a tool for robust modeling of linear mixed models under a Bayesian paradigm. The skew-normal/independent distributions are an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions as special cases, providing an appealing robust alternative to the routine use of symmetric distributions in this type of model. The methods developed are illustrated using a real data set from the Framingham cholesterol study. (C) 2009 Elsevier B.V. All rights reserved.
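To see what the symmetry assumption misses, here is a minimal illustration (not the authors' model) contrasting the skew-normal, the simplest member of the skew-normal/independent class, with the symmetric normal routinely assumed for random effects; the shape parameter `a=5` is an arbitrary choice:

```python
import numpy as np
from scipy import stats

# Draw from a skew-normal (shape a > 0 gives right skew) and from
# a standard normal, then compare sample skewness.
rng = np.random.default_rng(0)
skewed = stats.skewnorm.rvs(a=5, size=10_000, random_state=0)
symmetric = rng.standard_normal(10_000)

print(stats.skew(skewed))      # clearly positive
print(stats.skew(symmetric))   # near zero
```

Fitting a symmetric normal to random effects that actually look like the first sample pulls the variance estimate up and hides the asymmetry, which is the failure mode the skewed heavy-tailed class is designed to avoid.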
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper, the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed, together with the integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
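For reference, the classical Gilmore-Gomory master problem for the cutting stock component (the standard formulation that column generation is built around, not the authors' combined model) is:

```latex
\min \sum_{j \in J} x_j
\quad \text{s.t.} \quad
\sum_{j \in J} a_{ij}\, x_j \ \ge\ d_i \quad (i = 1,\dots,m),
\qquad x_j \ge 0,
```

where \(a_{ij}\) is the number of parts of type \(i\) cut in pattern \(j\) and \(d_i\) is the demand for part \(i\). Given duals \(\pi_i\) of the demand constraints, the pricing subproblem that generates a new column is the knapsack \(\max \{\sum_i \pi_i a_i : \sum_i \ell_i a_i \le L,\ a_i \in \mathbb{Z}_{\ge 0}\}\) over the stock length \(L\); a pattern with reduced cost below zero enters the master problem.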
Abstract:
We introduce a problem called maximum common characters in blocks (MCCB), which arises in applications of approximate string comparison, particularly in the unification of possibly erroneous textual data coming from different sources. We show that this problem is NP-complete, but can nevertheless be solved satisfactorily using integer linear programming for instances of practical interest. Two integer linear formulations are proposed and compared in terms of their linear relaxations. We also compare the results of the approximate matching with other known measures such as the Levenshtein (edit) distance. (C) 2008 Elsevier B.V. All rights reserved.
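For comparison, the Levenshtein (edit) distance mentioned at the end has a classical dynamic-programming solution; this is the standard algorithm, not the paper's ILP formulation of MCCB:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions turning a into b (classical DP, one rolling row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Unlike this polynomial-time measure, MCCB is NP-complete, which is why the paper turns to integer linear programming for practical instances.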
Abstract:
The estimation of a data transformation is very useful for yielding response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types; these models are based on exponential dispersion models. We propose a new class of transformed generalized linear models to extend the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data to extend the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214-223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets. (C) 2009 Elsevier B.V. All rights reserved.
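A minimal illustration of the starting point being generalized, the Box-Cox transform with its index parameter estimated by maximum likelihood (synthetic data; this shows the classical transform, not the authors' extended class):

```python
import numpy as np
from scipy import stats

# A right-skewed (lognormal) response: the ML estimate of the Box-Cox
# index lambda should come out near 0, i.e. the log transform, which
# restores approximate normality.
rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.7, size=500)

y_t, lam = stats.boxcox(y)   # lambda estimated by maximum likelihood
print(lam)                   # near 0 for lognormal data
```

The transformed GLMs of the abstract embed such an index parameter into the exponential-dispersion framework instead of assuming normality after transformation.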