940 results for linear model
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference beyond the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
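For intuition, the following sketch computes a generic Laplace-style Gaussian approximation to a logistic-regression posterior: the mean is the posterior mode and the covariance comes from the inverse Hessian at the mode. This is only a toy analogue of the optimal approximation derived in the chapter; the simulated data, logit model, and prior precision tau are assumptions for illustration.

```python
# A Laplace-style Gaussian approximation to a posterior (toy example;
# NOT the optimal approximation of the abstract). Data and prior assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # assumed design matrix
y = rng.binomial(1, 0.5, size=200)     # assumed binary responses
tau = 1.0                              # assumed Gaussian prior precision

def neg_log_post(beta):
    eta = X @ beta
    # Bernoulli log-likelihood with logit link, plus N(0, 1/tau) prior
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * tau * beta @ beta
    return -(loglik + logprior)

fit = minimize(neg_log_post, np.zeros(3), method="BFGS")
mode = fit.x        # mean of the Gaussian approximation
cov = fit.hess_inv  # covariance from the (BFGS-estimated) inverse Hessian
print(mode, np.diag(cov))
```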
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
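As a concrete example of an approximating kernel of this kind, the sketch below runs a random-walk Metropolis chain whose log-likelihood is estimated from a random subset of the data. It only illustrates the random-subset approximation class in the simplest Gaussian location model; the subsample size, proposal scale, and flat prior are invented, and the resulting kernel is approximate by design.

```python
# Approximate MCMC via a subsampled log-likelihood (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)  # assumed data

def subsampled_loglik(theta, m=500):
    # Estimate the full log-likelihood from m random points, rescaled by
    # n/m; plugging this into MH yields an approximating transition kernel.
    sub = rng.choice(data, size=m, replace=False)
    return (len(data) / m) * np.sum(-0.5 * (sub - theta) ** 2)

theta, chain = 0.0, []
for _ in range(2000):
    prop = theta + 0.1 * rng.normal()                 # random-walk proposal
    log_accept = subsampled_loglik(prop) - subsampled_loglik(theta)
    if np.log(rng.uniform()) < log_accept:
        theta = prop
    chain.append(theta)
print(np.mean(chain[500:]))  # posterior mean under the approximate kernel
```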
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Pólya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
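A minimal version of the Pólya-Gamma data augmentation sampler for Bayesian logistic regression (Polson, Scott and Windle, 2013) is sketched below, with a rare-events outcome like the one studied in Chapter 7. It assumes the third-party polyagamma package for the PG draws; the data, prior, and chain length are illustrative only.

```python
# Pólya-Gamma Gibbs sampler for logistic regression (minimal sketch).
# Assumes the `polyagamma` package (pip install polyagamma).
import numpy as np
from polyagamma import random_polyagamma

rng = np.random.default_rng(2)
n, p = 1000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # assumed design
y = rng.binomial(1, 0.05, size=n)    # rare events: few successes
B_inv = np.eye(p)                    # N(0, I) prior precision (assumed)
kappa = y - 0.5

beta = np.zeros(p)
for _ in range(1000):
    omega = random_polyagamma(1, X @ beta)      # omega_i ~ PG(1, x_i' beta)
    V = np.linalg.inv(X.T * omega @ X + B_inv)  # conditional covariance
    m = V @ (X.T @ kappa)                       # conditional mean
    beta = rng.multivariate_normal(m, V)        # beta | omega, y draw
print(beta)
```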
Abstract:
A class of multi-process models is developed for collections of time-indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
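A moment-matched version of such a Gaussian approximation can be sketched from the standard mean and variance of the PG(b, c) distribution, which become accurate as the count b grows; this is a generic illustration under that assumption, not necessarily the approximation developed in the dissertation.

```python
# Gaussian approximation to a Pólya-Gamma draw by moment matching.
# Uses the standard PG(b, c) moments; accuracy improves as b grows.
import numpy as np

def pg_moments(b, c):
    # E[w] and Var[w] for w ~ PG(b, c), valid for c != 0
    mean = b / (2.0 * c) * np.tanh(c / 2.0)
    var = b / (4.0 * c**3) * (np.sinh(c) - c) / np.cosh(c / 2.0) ** 2
    return mean, var

def approx_pg_draw(b, c, rng):
    # Replace an exact PG draw with a normal draw when counts are high
    mean, var = pg_moments(b, c)
    return rng.normal(mean, np.sqrt(var))

rng = np.random.default_rng(3)
print(approx_pg_draw(b=500, c=1.2, rng=rng))
```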
Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.
The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron level firing patterns in rhesus monkeys. The monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.
The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age-specific latent natural ability class and a performance-enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When a player's performance deviates significantly from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.
All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.
Abstract:
It is crucial to understand the role that labor market positions might play in creating gender differences in work–life balance. One theoretical approach to understanding this relationship is spillover theory, which argues that an individual's life domains are integrated, meaning that well-being can be transmitted between domains. Based on data collected in Hungary in 2014, this paper shows that work-to-family spillover does not affect both genders the same way: the effect of work on family life tends to be more negative for women than for men. Two explanations are formulated to understand this gender inequality. According to the findings of the analysis, gender is conditionally independent of spillover once financial status and flexibility of work are also incorporated into the analysis. This means that the relative disadvantage of women in terms of spillover can be attributed to their lower financial status and their relatively low access to flexible jobs. In other words, gender inequalities in work-to-family spillover are deeply affected by individual labor market positions. The observation of the labor market's effect on work–life balance is especially important in Hungary, since Hungary has one of the least flexible labor arrangements in Europe. A marginal log-linear model, a method for categorical multivariate analysis, is applied in this analysis.
Abstract:
LINS, Filipe C. A. et al. Modelagem dinâmica e simulação computacional de poços de petróleo verticais e direcionais com elevação por bombeio mecânico. In: CONGRESSO BRASILEIRO DE PESQUISA E DESENVOLVIMENTO EM PETRÓLEO E GÁS, 5. 2009, Fortaleza, CE. Anais... Fortaleza: CBPDPetro, 2009.
Abstract:
We analyze a real data set pertaining to reindeer fecal pellet-group counts obtained from a survey conducted in a forest area in northern Sweden. In the data set, over 70% of counts are zeros, and there is high spatial correlation. We use conditionally autoregressive random effects to model spatial correlation in a Poisson generalized linear mixed model (GLMM), a quasi-Poisson hierarchical generalized linear model (HGLM), a zero-inflated Poisson (ZIP) model, and hurdle models. The quasi-Poisson HGLM allows for both under- and overdispersion with excessive zeros, while the ZIP and hurdle models allow only for overdispersion. In analyzing the real data set, we see that the quasi-Poisson HGLMs can perform better than the other commonly used models, for example, ordinary Poisson HGLMs, spatial ZIP, and spatial hurdle models, and that the underdispersed Poisson HGLMs with spatial correlation fit the reindeer data best. We develop R code for fitting these models using a unified algorithm for the HGLMs. Spatial count responses with an extremely high proportion of zeros and underdispersion can be successfully modeled using the quasi-Poisson HGLM with spatial random effects.
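As a minimal illustration of the ZIP model mentioned above (without the spatial random effects or the HGLM machinery), the following sketch fits a zero-inflated Poisson by maximum likelihood in Python with scipy; the simulated data and starting values are assumptions.

```python
# Zero-inflated Poisson (ZIP) maximum likelihood, no covariates or
# spatial effects: a far simpler model than the paper's spatial HGLMs.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(4)
n = 2000
z = rng.uniform(size=n) < 0.7                  # structural zeros w.p. 0.7
y = np.where(z, 0, rng.poisson(2.0, size=n))   # ~70%+ zeros overall

def neg_loglik(params):
    # Transform to keep pi in (0,1) and lambda positive
    pi, lam = expit(params[0]), np.exp(params[1])
    logp_zero = np.log(pi + (1 - pi) * np.exp(-lam))          # P(Y = 0)
    logp_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, logp_zero, logp_pos))

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print(expit(fit.x[0]), np.exp(fit.x[1]))  # estimated zero-inflation, rate
```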
Abstract:
Objective: 1) to assess preparedness to practice and satisfaction with the learning environment amongst new graduates of European osteopathic institutions; 2) to compare preparedness to practice and satisfaction with the learning environment between and within countries where osteopathy is regulated and where regulation is still to be achieved; 3) to identify possible correlations between learning environment and preparedness to practice. Method: Osteopathic providers of full-time education located in Europe were enrolled, and their final-year students were contacted to complete a survey. Measures used were the Dundee Ready Educational Environment Measure (DREEM), the Association of American Medical Colleges (AAMC) questionnaire, and a demographic questionnaire. Scores were compared across institutions using one-way ANOVA and a generalised linear model. Results: Nine European osteopathic education institutions participated in the study (4 located in Italy, 2 in the UK, 1 in France, 1 in Belgium and 1 in the Netherlands) and 243 (77%) of their final-year students completed the survey. The DREEM total score mean was 121.4 (SEM: 1.66) whilst the AAMC mean was 17.58 (SEM: 0.35). A generalised linear model found a significant association between non-regulated countries and the total DREEM score as well as its subscale scores (p<0.001). Learning environment and preparedness to practice were significantly positively correlated (r=0.76; p<0.01). Discussion: A higher perceived level of preparedness and satisfaction was found amongst students from osteopathic institutions located in countries without regulation compared to those located in countries where osteopathy is regulated; however, all institutions obtained a 'more positive than negative' result. Moreover, in general, cohorts with fewer than 20 students scored significantly higher compared to larger student cohorts. Finally, an overall positive correlation between students' preparedness and satisfaction was found across all institutions recruited.
Abstract:
Estuaries are areas which, by their structure, their functioning, and their location, are subject to significant inputs of nutrients. One of the objectives of the RNO, the French network for coastal water quality monitoring, is to assess the levels and trends of nutrient concentrations in estuaries. A linear model was used to describe and explain the evolution of total dissolved nitrogen concentration in the three most important estuaries on the Channel-Atlantic front (Seine, Loire and Gironde). As a first step, a reliable data set was selected. Total dissolved nitrogen evolution patterns in the estuarine environment were then studied graphically, allowing a reasonable choice of covariates. Salinity played a major role in explaining nitrogen concentration variability in the estuaries, and dilution lines proved to be a useful tool for detecting outlying observations and for modeling the nitrogen/salinity relationship. Increasing trends were detected by the model, with a high magnitude in the Seine, intermediate in the Loire, and lower in the Gironde. The non-linear trends estimated in the Loire and Seine estuaries could be due to the important interannual variations suggested by the graphics. With a view to the valorization of the QUADRIGE database, a discussion of the statistical model and of the RNO hydrological data sampling strategy led to suggestions for a better exploitation of the nutrient data.
Abstract:
Maps depicting spatial pattern in the stability of summer greenness could advance understanding of how forest ecosystems will respond to global changes such as a longer growing season. Declining summer greenness, or "greendown", is spectrally related to declining near-infrared reflectance and is observed in most remote sensing time series to begin shortly after peak greenness at the end of spring and extend until the beginning of leaf coloration in autumn. Understanding spatial patterns in the strength of greendown has recently become possible with the advancement of Landsat phenology products, which show that greendown patterns vary at scales appropriate for linking these patterns to proposed environmental forcing factors. This study tested two non-mutually exclusive hypotheses for how leaf measurements and environmental factors correlate with greendown and decreasing NIR reflectance across sites. At the landscape scale, we used linear regression to test the effects of maximum greenness, elevation, slope, aspect, solar irradiance and canopy rugosity on greendown. Secondly, we used leaf chemical traits and reflectance observations to test the effect of nitrogen availability and intrinsic water use efficiency on leaf-level greendown, and on landscape-level greendown measured from Landsat. The study was conducted using Quercus alba canopies across 21 sites of an eastern deciduous forest in North America between June and August 2014. Our linear model explained greendown variance with an R² = 0.47, with maximum greenness as the greatest model effect. Subsequent models excluding one model effect revealed that elevation and aspect were the two topographic factors that explained the greatest amount of greendown variance. Regression results also demonstrated important interactions between all three variables, the greatest interaction showing that aspect had greater influence on greendown at sites with steeper slopes. Leaf-level reflectance was correlated with foliar δ13C (a proxy for intrinsic water use efficiency), but foliar δ13C did not translate into correlations with landscape-level variation in greendown from Landsat. Therefore, we conclude that Landsat greendown is primarily indicative of landscape position, with a small effect of canopy structure and no measurable effect of leaf reflectance. With this understanding of Landsat greendown we can better explain the effects of landscape factors on vegetation reflectance and perhaps on phenology, which would be very useful for studying phenology in the context of global climate change.
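For readers who want the shape of such a landscape-scale regression, a hypothetical statsmodels sketch with an aspect-by-slope interaction is given below; the variable names and simulated data are placeholders, not the study's data.

```python
# Hypothetical landscape regression with an interaction term, in the
# spirit of the analysis described above (simulated placeholder data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "greendown": rng.normal(size=100),
    "max_greenness": rng.normal(size=100),
    "elevation": rng.normal(size=100),
    "slope": rng.uniform(0, 30, size=100),
    "aspect": rng.uniform(0, 360, size=100),
})
# 'aspect * slope' expands to both main effects plus their interaction
fit = smf.ols("greendown ~ max_greenness + elevation + aspect * slope", df).fit()
print(fit.rsquared)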
Abstract:
This study investigated how the distribution of species and the biomass production of aquatic macrophytes are influenced by the physico-chemical conditions of the environment. It also assessed how a species with greater competitive potential can affect species diversity in the macrophyte community. To this end, six transects perpendicular to the bank were laid out in each of three streams. In each transect, three 1 m² sampling units were marked, in which the phytosociological parameters relative cover, relative frequency and importance value were recorded. Species diversity was estimated with the Shannon index, using the species cover values. To determine the biomass of the aquatic macrophytes, 0.25 m² quadrats were placed within the 1 m² sampling units used to quantify the phytosociological data, at the same points where the vegetation cover survey was carried out. Current velocity, incident solar radiation, shading coefficient, adjacent riparian tree vegetation, dissolved organic nitrogen, dissolved organic carbon and electrical conductivity were used as predictor variables. Thirty-two species of aquatic macrophytes were recorded, distributed across 19 families and 28 genera. According to a Canonical Correspondence Analysis (CCA), the species with the highest biomass values were associated with sampling units with high light incidence. Sampling units dominated by Pistia stratiotes showed lower species diversity, indicating that this species, when it finds conditions that allow its proliferation, can exclude species with lower competitive potential. According to a GLM (Generalized Linear Model), the absence of riparian vegetation, or its presence on only one bank, together with low current velocities, creates favorable conditions for the establishment and development of aquatic macrophytes, enabling higher biomass production.
Abstract:
Breast milk is regarded as an ideal source of nutrients for the growth and development of neonates, but it can also be a potential source of pollutants. Mothers can be exposed to different contaminants as a result of their lifestyle and environmental pollution. Mercury (Hg) and arsenic (As) could adversely affect the development of the fetal and neonatal nervous system. Some fish and shellfish are rich in selenium (Se), an essential trace element that forms part of several enzymes related to the detoxification process, including glutathione S-transferase (GST). The goal of this study was to determine the interaction between Hg, As and Se and to analyze its effect on the activity of GST in breast milk. Milk samples were collected from women between days 7 and 10 postpartum. The GST activity was determined spectrophotometrically; total Hg, As and Se concentrations were measured by atomic absorption spectrometry. To explain the possible association of Hg, As and Se concentrations with GST activity in breast milk, generalized linear models (GLMs) were constructed. The model explained 44% of the GST activity measured in breast milk. The GLM suggests that GST activity was positively correlated with Hg, As and Se concentrations. The activity of the enzyme was also explained by the frequency of consumption of marine fish and shellfish in the diet of the breastfeeding women.
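A hypothetical sketch of this kind of GLM is shown below using statsmodels; the column names, simulated concentrations, and the Gaussian family with identity link are assumptions rather than the study's exact specification.

```python
# Hypothetical GLM: GST activity on Hg, As, Se (simulated placeholder data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "hg": rng.lognormal(size=60),   # assumed mercury concentrations
    "as_": rng.lognormal(size=60),  # assumed arsenic concentrations
    "se": rng.lognormal(size=60),   # assumed selenium concentrations
})
df["gst"] = (1.0 + 0.3 * df.hg + 0.2 * df.as_ + 0.1 * df.se
             + rng.normal(scale=0.2, size=60))

X = sm.add_constant(df[["hg", "as_", "se"]])
model = sm.GLM(df["gst"], X, family=sm.families.Gaussian()).fit()
print(model.summary())
```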
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new error correction techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error reduction technique (proportional rectangular) resulted in 15% and 30% more accurate load estimates, compared to the most accurate uncorrected load estimation method (the ratio estimator), for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
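For concreteness, the ratio estimator named above can be sketched in a few lines: the average sampled load is scaled by the ratio of total to sampled discharge. The simulated discharge and concentration series and the biweekly sampling schedule are invented, and the Beale bias correction and the paper's error-correction step are omitted.

```python
# Simple ratio estimator for an annual load from subsampled data.
import numpy as np

rng = np.random.default_rng(7)
q = rng.lognormal(mean=1.0, size=365)  # daily discharge (assumed)
c = rng.lognormal(mean=0.0, size=365)  # daily concentration (assumed)
load = q * c                           # daily true load

sampled = np.arange(0, 365, 14)        # biweekly sampling days
# Flow-weighted mean concentration from the sampled days...
ratio = load[sampled].mean() / q[sampled].mean()
# ...scaled up by the (continuously gauged) annual discharge
annual_estimate = ratio * q.sum()
print(annual_estimate, load.sum())     # estimate vs. true annual total
```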
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and huge matrix inversion needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme. An external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal: only a few lines which facilitate input and output. Apart from being simple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of VEnKF were compared with measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, the two could not be well matched.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; with DA, this will help in better understanding environmental hazard variables, for instance. We found that using a very high ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, and the existence of an ensemble size limit for performance, point toward the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, it may yield a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
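A toy version of the file-based, non-intrusive coupling described in this thesis is sketched below: a driver alternates a "model" step and a "DA" step that communicate only through a state file, so neither code needs to know about the other. The file name, random-walk forecast, and averaging "analysis" are stand-ins, not VEnKF or COHERENS.

```python
# Non-intrusive model/DA coupling through a file (toy illustration).
import json
from pathlib import Path
import numpy as np

STATE = Path("state.json")  # the only shared interface between the codes

def model_step():
    state = np.array(json.loads(STATE.read_text()))
    state = state + np.random.normal(scale=0.1, size=state.shape)  # forecast
    STATE.write_text(json.dumps(state.tolist()))

def da_step(obs):
    state = np.array(json.loads(STATE.read_text()))
    analysis = 0.5 * state + 0.5 * obs   # trivial stand-in analysis update
    STATE.write_text(json.dumps(analysis.tolist()))

STATE.write_text(json.dumps([0.0, 0.0]))  # initial state
for cycle in range(5):
    model_step()                # "model" reads the state file, writes forecast
    da_step(obs=np.ones(2))     # "DA" reads forecast, corrects, writes back
print(json.loads(STATE.read_text()))
```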
Abstract:
This dissertation deals with conceptions of the relationship between science, technology, innovation, development and society. We aimed to analyze how these relations are conceived in documents of the Entrepreneurship and Innovation Programme (Proem) of the Federal Technological University of Paraná (UTFPR), and how they are conceived by managers and participants of the Program at the Cornélio Procópio campus, the locus of the research. It is recognized that the concepts of science, technology, innovation and development are polysemic, so varied are the definitions given by different theorists. For the research, these conceptions were grouped into two currents: one called the traditional or conservative current, supported by the classical theories, that is, those developed by authors recognized as classics; and the other, referred to as the critical current, whose concepts are presented by authors recognized as critical, among them Science, Technology and Society (STS) studies. This categorization guided the analysis of the Proem documents and of the statements collected through interviews with three managers of Proem and eleven participants in the Program. As a result of the research, although there is evidence in the Proem documents of social concern about the Program's role in society, it was observed that the Program is based on the traditional and hegemonic view of the subject. In the document analysis, conceptions aligned with STS studies were present in relation to the multidimensional concept of development. However, the texts analyzed mostly showed strong indications of thought supported by logical positivism, in propositions that refer to marketing issues: a proposal to generate an entrepreneurial culture guided by the development of technological innovations designed to meet and/or induce market demands, through methods of producing popular goods. As for how Proem managers and participants at the Cornélio Procópio campus conceive the relationship between science, technology, innovation, development and society, a closeness to the classical view was also identified, although the respondents often expressed an apparent concern with social issues, such as conceiving development from a multidimensional view as critical STS studies do. In the views of the participants, it was possible to identify the strengthening of a concept connected to the linear model of development, in which the more science is generated, the more technology is generated, and more technology in turn produces more wealth, which, in the Schumpeterian view, is the basis of social welfare.
Abstract:
Since rugby union turned professional in 1995, there have been considerable advances in research on the demands of the game, largely using Global Positioning System (GPS) analysis over the last 10 years. A systematic review on the use of GPS, particularly the setting of absolute (ABS) and individual (IND) velocity bands in field-based, intermittent, high-intensity (HI) team sports, was undertaken. From 3669 records identified, 38 studies were included for qualitative analysis. Little agreement on the definition of movement intensities within team sports was found; only three papers, all on rugby union, had used IND bands, with only one comparing ABS and IND methods. Thus, the aim of this study was to determine whether there is a difference in the demands within positions when comparing ABS and IND methods for GPS analysis, and whether these differences are significantly different between the forward and back positional groups. A total of 214 data files were recorded from 26 players in 17 matches of the 2015/2016 Scottish BT Premiership. ABS velocity zones 1-7 were set at 1) 0-6, 2) 6.1-11, 3) 11.1-15, 4) 15.1-18, 5) 18.1-21, 6) 21.1-25 and 7) 25.1-40 km.h-1, while IND zones 1-7 were 1) <20, 2) 20-40, 3) 40-50, 4) 50-70, 5) 70-80, 6) 80-95 and 7) 95-100% of a player's individually determined maximum velocity (Vmax). A 40 m sprint test measured Vmax using OptaPro S4 10 Hz (Catapult, Australia) GPS units to derive the IND bands. The same GPS units were worn during matches. The GPS outputs analysed were % distance, % time, high-intensity efforts (HIEs) over 18.1 km.h-1 / 70% of maximum velocity, and repeated high-intensity efforts (RHIEs), which consist of three HIEs within 21 s. General linear model (GLM) analysis identified a significant difference in the measurement of % total distance covered between the ABS and IND methods in all zones for forwards (p<0.05) and backs (p<0.05). This difference was also significant between forwards and backs in zones 1 (mean difference ± standard deviation, 3.7±0.7%), 6 (1.2±0.4%) and 7 (1.0±0.0%) (p<0.05). Percentage time estimations were significantly different between ABS and IND analysis within forwards in zones 1 (1.7±1.7%), 2 (-2.9±1.3%), 3 (1.9±0.8%), 4 (-1.4±0.8%) and 5 (0.2±0.4%), and within backs in zones 1 (-10±1.5%), 2 (-1.2±1.1%), 3 (1.8±0.9%) and 5 (0.6±0.5%) (p<0.05). The difference between groups was significant in zones 1, 2, 4 and 5 (p<0.05). The number of HIEs was significantly different between forwards and backs in zones 6 (6±2) and 7 (3±2). RHIEs were significantly different between ABS and IND methods for forwards (1±2, p<0.05), although not between groups. Until more research on the differences between ABS and IND methods is carried out, neither can be deemed a criterion method. In conclusion, there are significant differences between the ABS and IND methods of GPS analysis of the physical demands of rugby union, which must be considered when they are used to inform training load and recovery to improve performance and reduce injuries.
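To make the ABS/IND comparison concrete, the sketch below classifies simulated 10 Hz GPS speed samples into the absolute zones listed above and into individualized zones defined as fractions of a player's Vmax, then compares the percentage of time spent in each zone; the speed data and the example Vmax are invented.

```python
# ABS vs. IND velocity-band classification of GPS speed samples.
import numpy as np

ABS_EDGES = [0, 6, 11, 15, 18, 21, 25, 40]   # km/h boundaries, zones 1-7
IND_FRACTIONS = [0, 0.20, 0.40, 0.50, 0.70, 0.80, 0.95, 1.00]

rng = np.random.default_rng(8)
speeds = rng.gamma(shape=2.0, scale=3.0, size=6000)  # simulated speeds, km/h
vmax = 32.0                                          # example 40 m sprint Vmax

abs_zone = np.digitize(speeds, ABS_EDGES[1:-1])      # indices 0..6 = zones 1..7
ind_edges = [f * vmax for f in IND_FRACTIONS]        # player-specific boundaries
ind_zone = np.digitize(speeds, ind_edges[1:-1])

for z in range(7):
    pct_abs = 100 * np.mean(abs_zone == z)
    pct_ind = 100 * np.mean(ind_zone == z)
    print(f"zone {z + 1}: ABS {pct_abs:5.1f}%  IND {pct_ind:5.1f}%")
```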