919 results for hierarchical generalized linear model
Abstract:
This thesis addresses the modelling of dependence between risks in non-life insurance, in particular within loss reserving and ratemaking. We set out the current context and the issues surrounding dependence modelling, and the importance of such an approach given the new standards and requirements of regulatory bodies concerning the solvency of general insurance companies. Recently, Shi and Frees (2011) suggested incorporating dependence between two lines of business through a bivariate copula that captures the dependence between two equivalent cells of two development triangles. We propose two different approaches to generalize this model. The first is based on hierarchical Archimedean copulas, the second on random effects and the bivariate Sarmanov family of distributions. In Chapter 2 we first consider a model using the class of hierarchical Archimedean copulas, more precisely the family of partially nested copulas, to capture dependence within and between two lines of business through calendar-year effects. We then consider an alternative model, drawn from another class of the hierarchical Archimedean family, the fully nested copulas, to model dependence between more than two lines of business. A risk-aggregation approach based on a tree of bivariate copulas is also explored. An important feature of the approach described in Chapter 3 is that inference on the dependence structure is carried out on the ranks of the residuals, to guard against possible misspecification of the marginal distributions and of the copula governing the dependence. As a second approach, we also model dependence through random effects. To do so, we consider the bivariate Sarmanov family of distributions, which allows flexible modelling within and between lines of business through calendar-year, accident-year and development-period effects. Closed-form expressions for the joint distribution, together with an empirical illustration on development triangles, are presented in Chapter 4. We also propose a model with dynamic random effects, in which more weight is given to the most recent years and the information from the correlated line is used to improve risk prediction. This last approach is studied in Chapter 5 through a numerical application to claim counts, illustrating the usefulness of such a model for ratemaking. The thesis concludes with a review of its scientific contributions and with possible directions for extending this work.
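To make the Sarmanov construction above concrete: a bivariate Sarmanov density multiplies two marginals by a correction term 1 + ω φ1(x) φ2(y) with centred kernels. A minimal Python sketch, assuming exponential marginals and Laplace-transform kernels; the marginals, kernel choice and all parameter values are illustrative assumptions, not the thesis's fitted model:

```python
import numpy as np
from scipy import stats
from scipy.integrate import dblquad

lam1, lam2, omega = 1.0, 2.0, 0.5        # hypothetical parameters

def sarmanov_pdf(x, y):
    """Bivariate Sarmanov density: f1 * f2 * (1 + omega * phi1 * phi2)."""
    f1 = stats.expon(scale=1 / lam1)
    f2 = stats.expon(scale=1 / lam2)
    m1 = lam1 / (lam1 + 1.0)             # E[exp(-X)] under Exp(lam1)
    m2 = lam2 / (lam2 + 1.0)
    phi1 = np.exp(-x) - m1               # centred Laplace-transform kernels
    phi2 = np.exp(-y) - m2
    return f1.pdf(x) * f2.pdf(y) * (1.0 + omega * phi1 * phi2)

# sanity check: the density integrates to ~1 over the positive quadrant
total, _ = dblquad(lambda y, x: sarmanov_pdf(x, y), 0, 50, 0, 50)
print(round(total, 4))
```

With these kernels the correction term stays positive for moderate ω, which is the usual admissibility constraint on the Sarmanov mixing parameter.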
Abstract:
Objective: 1) to assess preparedness to practice and satisfaction with the learning environment amongst new graduates of European osteopathic institutions; 2) to compare preparedness to practice and satisfaction with the learning environment between and within countries where osteopathy is regulated and countries where regulation is still to be achieved; 3) to identify possible correlations between learning environment and preparedness to practice. Method: Providers of full-time osteopathic education located in Europe were enrolled, and their final-year students were contacted to complete a survey. Measures used were: the Dundee Ready Educational Environment Measure (DREEM), the Association of American Medical Colleges (AAMC) preparedness questionnaire and a demographic questionnaire. Scores were compared across institutions using one-way ANOVA and a generalised linear model. Results: Nine European osteopathic education institutions participated in the study (4 located in Italy, 2 in the UK, 1 in France, 1 in Belgium and 1 in the Netherlands) and 243 (77%) of their final-year students completed the survey. The mean DREEM total score was 121.4 (SEM: 1.66) whilst the mean AAMC score was 17.58 (SEM: 0.35). A generalised linear model found a significant association between non-regulated countries and both the total and subscale DREEM scores (p<0.001). Learning environment and preparedness to practice were significantly positively correlated (r=0.76; p<0.01). Discussion: A higher perceived level of preparedness and satisfaction was found amongst students from osteopathic institutions located in countries without regulation compared with those in countries where osteopathy is regulated; however, all institutions obtained a 'more positive than negative' result. Moreover, cohorts with fewer than 20 students generally scored significantly higher than larger cohorts. Finally, an overall positive correlation between students' preparedness and satisfaction was found across all institutions recruited.
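The institution-level comparison above reduces to a one-way ANOVA across schools plus a correlation between the two instruments. A minimal sketch with simulated scores; the group sizes, means and spreads are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# nine institutions, simulated DREEM totals and AAMC preparedness scores
dreem = [rng.normal(121.4, 18.0, 27) for _ in range(9)]
aamc = [rng.normal(17.6, 3.0, 27) for _ in range(9)]

f_stat, p_anova = stats.f_oneway(*dreem)          # across-institution ANOVA
r, p_corr = stats.pearsonr(np.concatenate(dreem),
                           np.concatenate(aamc))  # environment vs preparedness
print(f_stat, p_anova, r, p_corr)
```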
Abstract:
Seasonal and interannual changes (1993–2012) of water temperature and transparency, river discharge, salinity, water quality properties, chlorophyll a (chl-a) and the carbon biomass of the main taxonomic phytoplankton groups were evaluated at a shallow station (~2 m) in the subtropical Patos Lagoon Estuary (PLE), Brazil. Large variations in salinity (0–35), due to a complex balance between Patos Lagoon outflow and oceanic inflows, significantly affected other water quality variables and phytoplankton dynamics, masking seasonal and interannual variability. Therefore, the salinity effect was filtered out by means of a Generalized Additive Model (GAM). River discharge and salinity had a significant negative relationship, with river discharge highest and salinity lowest from July to October. Diatoms comprised the dominant phytoplankton group, contributing substantially to the seasonal cycle of chl-a, with higher values in austral spring/summer (September to April) and lowest in autumn/winter (May to August). The PLE is a nutrient-rich estuary and the phytoplankton seasonal cycle was largely driven by light availability, with few exceptions in winter. Most variables exhibited large interannual variability. When the varying salinity effect was accounted for, chl-a concentration and diatom biomass showed less irregularity over time, and significant increasing trends emerged for dinoflagellates and cyanobacteria. Long-term changes in phytoplankton and water quality were strongly related to variations in salinity, largely driven by freshwater discharge influenced by climatic variability, most pronounced during ENSO events. However, the significant increasing trend in the N:P ratio indicates that important environmental changes related to anthropogenic effects are under way in the PLE, in addition to the hydrological ones.
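Filtering a covariate's effect with a GAM, as done above for salinity, amounts to regressing the response on a smooth of the covariate and working with the residuals. A sketch using statsmodels' penalized B-splines as a stand-in for the paper's GAM; the data are simulated and the smoothing choices arbitrary:

```python
import numpy as np
import pandas as pd
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(1)
n = 240                                   # ~20 years of monthly samples
df = pd.DataFrame({'salinity': rng.uniform(0, 35, n)})
df['chla'] = 8 - 0.15 * df['salinity'] + rng.normal(0, 1.5, n)

bs = BSplines(df[['salinity']], df=[10], degree=[3])
res = GLMGam.from_formula('chla ~ 1', data=df, smoother=bs).fit()

# salinity-filtered chl-a: residuals after removing the fitted smooth
df['chla_filtered'] = res.resid_response
```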
Abstract:
Estuaries are areas which, owing to their structure, their functioning and their location, are subject to significant inputs of nutrients. One of the objectives of the RNO, the French network for coastal water quality monitoring, is to assess the levels and trends of nutrient concentrations in estuaries. A linear model was used to describe and explain the evolution of total dissolved nitrogen concentration in the three most important estuaries of the Channel-Atlantic seaboard (Seine, Loire and Gironde). As a first step, a reliable data set was selected. Patterns of total dissolved nitrogen evolution in the estuarine environment were then studied graphically, allowing a reasonable choice of covariates. Salinity played a major role in explaining nitrogen concentration variability in the estuaries, and dilution lines proved a useful tool for detecting outlying observations and for modelling the nitrogen/salinity relationship. Increasing trends were detected by the model, with a high magnitude in the Seine, intermediate in the Loire and lower in the Gironde. The non-linear trends estimated in the Loire and Seine estuaries could be due to the important interannual variations suggested by the graphs. With a view to making better use of the QUADRIGE database, a discussion of the statistical model and of the RNO hydrological sampling strategy led to suggestions for a better exploitation of nutrient data.
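The dilution-line idea above is a straight line of nitrogen against salinity: points far from the fitted line flag outliers, and a year term captures the long-term trend. A hedged sketch with simulated data; the variable names and coefficients are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({'salinity': rng.uniform(0, 34, n),
                   'year': rng.integers(1990, 2005, n)})
df['tdn'] = (120 - 3.2 * df['salinity']          # dilution line
             + 0.8 * (df['year'] - 1990)         # increasing trend
             + rng.normal(0, 6, n))

fit = smf.ols('tdn ~ salinity + year', data=df).fit()
print(fit.params)

# samples far off the dilution line are outlier candidates
outliers = df[np.abs(fit.resid) > 3 * fit.resid.std()]
```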
Abstract:
Maps depicting spatial pattern in the stability of summer greenness could advance understanding of how forest ecosystems will respond to global changes such as a longer growing season. Declining summer greenness, or "greendown", is spectrally related to declining near-infrared reflectance and is observed in most remote sensing time series, beginning shortly after peak greenness at the end of spring and extending until the onset of leaf coloration in autumn. Understanding spatial patterns in the strength of greendown has recently become possible with the advancement of Landsat phenology products, which show that greendown patterns vary at scales appropriate for linking them to proposed environmental forcing factors. This study tested two non-mutually exclusive hypotheses for how leaf measurements and environmental factors correlate with greendown and decreasing NIR reflectance across sites. At the landscape scale, we used linear regression to test the effects of maximum greenness, elevation, slope, aspect, solar irradiance and canopy rugosity on greendown. Secondly, we used leaf chemical traits and reflectance observations to test the effect of nitrogen availability and intrinsic water use efficiency on leaf-level greendown, and on landscape-level greendown measured from Landsat. The study was conducted using Quercus alba canopies across 21 sites of an eastern deciduous forest in North America between June and August 2014. Our linear model explained greendown variance with an R² of 0.47, with maximum greenness as the greatest model effect. Subsequent models excluding one effect at a time revealed that elevation and aspect were the two topographic factors explaining the greatest amount of greendown variance. Regression results also demonstrated important interactions between all three variables, the greatest being that aspect had more influence on greendown at sites with steeper slopes. Leaf-level reflectance was correlated with foliar δ13C (a proxy for intrinsic water use efficiency), but foliar δ13C did not translate into correlations with landscape-level variation in greendown from Landsat. We therefore conclude that Landsat greendown is primarily indicative of landscape position, with a small effect of canopy structure and no measurable effect of leaf reflectance. With this understanding of Landsat greendown we can better explain the effects of landscape factors on vegetation reflectance, and perhaps on phenology, which would be very useful for studying phenology in the context of global climate change.
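The landscape-scale analysis described above is an ordinary least-squares regression of greendown on maximum greenness and topographic factors with interaction terms. A sketch of that kind of model with simulated site data; the predictor names follow the abstract, but all numbers are made up:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 21                                       # one row per site
sites = pd.DataFrame({
    'max_greenness': rng.uniform(0.7, 0.95, n),
    'elevation': rng.uniform(200, 900, n),   # m
    'slope': rng.uniform(0, 30, n),          # degrees
    'aspect': rng.uniform(0, 360, n),        # degrees
})
sites['greendown'] = (0.5 * sites['max_greenness']
                      + 1e-4 * sites['elevation']
                      + 1e-3 * sites['slope']
                      * np.cos(np.radians(sites['aspect']))
                      + rng.normal(0, 0.02, n))

# slope * aspect expands to both main effects plus their interaction,
# mirroring the "aspect matters more on steeper slopes" finding
fit = smf.ols('greendown ~ max_greenness + elevation + slope * aspect',
              data=sites).fit()
print(fit.rsquared)
```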
Abstract:
LINS, Filipe C. A. et al. Modelagem dinâmica e simulação computacional de poços de petróleo verticais e direcionais com elevação por bombeio mecânico. In: CONGRESSO BRASILEIRO DE PESQUISA E DESENVOLVIMENTO EM PETRÓLEO E GÁS, 5. 2009, Fortaleza, CE. Anais... Fortaleza: CBPDPetro, 2009.
Abstract:
Endogenous and environmental variables are fundamental in explaining variations in fish condition. Based on more than 20 yr of fish weight and length data, relative condition indices were computed for anchovy and sardine caught in the Gulf of Lions. Classification and regression trees (CART) were used to identify endogenous factors affecting fish condition, and to group years of similar condition. Both species showed a similar annual cycle with condition being minimal in February and maximal in July. CART identified 3 groups of years where the fish populations generally showed poor, average and good condition and within which condition differed between age classes but not according to sex. In particular, during the period of poor condition (mostly recent years), sardines older than 1 yr appeared to be more strongly affected than younger individuals. Time series were analyzed using generalized linear models (GLMs) to examine the effects of oceanographic abiotic (temperature, Western Mediterranean Oscillation [WeMO] and Rhone outflow) and biotic (chlorophyll a and 6 plankton classes) factors on fish condition. The selected models explained 48 and 35% of the variance of anchovy and sardine condition, respectively. Sardine condition was negatively related to temperature but positively related to the WeMO and mesozooplankton and diatom concentrations. A positive effect of mesozooplankton and Rhone runoff on anchovy condition was detected. The importance of increasing temperatures and reduced water mixing in the NW Mediterranean Sea, affecting planktonic productivity and thus fish condition by bottom-up control processes, was highlighted by these results. Changes in plankton quality, quantity and phenology could lead to insufficient or inadequate food supply for both species.
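The two-stage approach above (CART for endogenous structure, then a GLM for environmental drivers) can be sketched as follows; the data are simulated and the covariate set is a simplified stand-in for the paper's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({'age': rng.integers(0, 5, n),
                   'sex': rng.integers(0, 2, n),
                   'year': rng.integers(1992, 2014, n),
                   'sst': rng.normal(18, 2, n),        # temperature
                   'wemo': rng.normal(0, 1, n),        # WeMO index
                   'mesozoo': rng.lognormal(0, 0.5, n)})
df['kn'] = (1.0 - 0.01 * df['sst'] + 0.02 * df['wemo']
            + 0.03 * np.log(df['mesozoo']) + rng.normal(0, 0.05, n))

# step 1: CART on endogenous factors (age, sex, year groupings)
tree = DecisionTreeRegressor(max_depth=3).fit(df[['age', 'sex', 'year']],
                                              df['kn'])

# step 2: GLM on environmental drivers of condition
glm = smf.glm('kn ~ sst + wemo + np.log(mesozoo)', data=df,
              family=sm.families.Gaussian()).fit()
print(glm.params)
```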
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new error correction techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) resulted in 15% and 30% more accurate load estimates for the two watersheds when compared to the most accurate uncorrected load estimation method (the ratio estimator). Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
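Of the three estimators compared above, the ratio estimator is the simplest to state: the flow-weighted mean concentration of the sampled days is scaled by the complete annual flow record. A sketch with simulated daily data and weekly sampling; the units and the concentration-flow relation are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
q = rng.lognormal(3.0, 0.6, 365)            # daily discharge, m^3/s
idx = np.arange(0, 365, 7)                  # weekly sampling days
conc = 5 + 0.02 * q[idx] + rng.normal(0, 0.5, idx.size)  # nitrate-N, mg/L

inst_load = conc * q[idx]                   # instantaneous loads, g/s
ratio = inst_load.sum() / q[idx].sum()      # average load per unit flow
annual_load_kg = ratio * q.sum() * 86.4     # g/s -> kg/day, summed over days
print(round(annual_load_kg, 1))
```

The error correction step in the paper then adjusts such estimates using residuals at the sampled dates; only the uncorrected ratio estimator is sketched here.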
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied to solve the problems of memory storage and the huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to constrain the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme, in which an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal: only a few lines which facilitate input and output. Apart from being simple, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized; nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009; the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The VEnKF results were compared with measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; combined with DA, this would help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to performance. The successful implementation of the non-intrusive VEnKF, together with this ensemble-size limit, points towards the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. Applying ROM with the non-intrusive DA approach might yield a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
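For readers unfamiliar with the ensemble machinery VEnKF builds on, the following sketch implements one stochastic ensemble Kalman filter analysis step: a sample covariance from the ensemble anomalies, a Kalman gain, and an update against perturbed observations. This is the generic EnKF update, not the VEnKF resampling scheme itself, and all dimensions are toy values:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step; X is n_state x n_ens."""
    m = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (m - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, m).T
    return X + K @ (Y - H @ X)                     # perturbed-obs update

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 20))                      # toy prior ensemble
H = np.zeros((7, 50)); H[np.arange(7), np.arange(7) * 7] = 1.0  # 7 gauges
R = 0.01 * np.eye(7)                               # gauge error covariance
Xa = enkf_update(X, rng.normal(size=7), H, R, rng)
```

For realistic state sizes (such as the 30 171-dimensional vector above) one would avoid forming P explicitly and work directly with the anomaly matrix.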
Abstract:
This dissertation deals with conceptions of the relationship between science, technology, innovation, development and society. We aimed to analyze how these relations are conceived in documents of the Entrepreneurship and Innovation Programme (Proem) of the Federal Technological University of Paraná (UTFPR), and how they are conceived by managers and participants of the Program at the Cornélio Procópio campus, the locus of the research. It is recognized that the concepts of science, technology, innovation and development are polysemic, so varied are the definitions given by different theorists. For the research, these conceptions were grouped into two currents: one called the traditional or conservative current, supported by the classical theories, that is, those developed by authors recognized as classics; and the other, referred to as the critical current, whose sustaining concepts are presented by authors recognized as critical, among them Science, Technology and Society (STS) studies. This categorization guided the analysis of the Proem documents and of the statements collected through interviews with three Proem managers and eleven participants in the Program. As a result of the research, although there is evidence in the Proem documents of social concern about the Program's role in society, it was observed that the Program rests on the traditional and hegemonic view of the subject. In the document analysis, conceptions aligned with STS studies were found in relation to the multidimensional concept of development. However, most of the texts analyzed showed strong indications of thinking supported by logical positivism, in propositions referring to marketing issues: a proposal to generate an entrepreneurial culture guided by the development of technological innovations designed to meet and/or induce market demands, through production methods for popular goods. As for how Proem managers and participants at the Cornélio Procópio campus conceive the relationship between science, technology, innovation, development and society, a closeness to the classical view was also identified, although the respondents often displayed in their statements an apparent concern with social issues, such as conceiving development from the multidimensional view embraced by critical STS studies. In the participants' views it was possible to identify a strengthening of the concept tied to the linear model of development, in which the more science is generated, the more technology is generated, and more technology in turn produces more wealth, which, in the Schumpeterian view, is the basis of social welfare.
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second proposed technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS, implemented by Pingli Huang. The prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm² of silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. It employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL) and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
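The common thread of the three designs above is that conversion decisions are made with the physical, mismatched DAC weights, while the output code is reassembled digitally with calibrated weight estimates. A toy numpy sketch of that code-domain correction; it assumes the weight estimates are already available, whereas the designs above obtain them by perturbation, bit-wise correlation, or adaptive equalization:

```python
import numpy as np

rng = np.random.default_rng(7)
nbits = 12
ideal = 2.0 ** np.arange(nbits - 1, -1, -1)           # 2048, 1024, ..., 1
actual = ideal * (1 + rng.normal(0, 0.003, nbits))    # capacitor mismatch

def sar_convert(vin, weights):
    """Greedy successive approximation against the physical weights."""
    bits, residue = [], vin
    for w in weights:
        if residue >= w:
            bits.append(1)
            residue -= w
        else:
            bits.append(0)
    return np.array(bits)

vin = 1234.567                        # toy input in LSB units
bits = sar_convert(vin, actual)
raw = bits @ ideal                    # decode assuming ideal binary weights
corrected = bits @ actual             # code-domain correction with estimates
print(abs(raw - vin), abs(corrected - vin))
```

Reassembling the same comparator decisions with the estimated weights shrinks the static error, which is the sense in which the calibration acts as a code-domain equalizer.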
Abstract:
Several recent offsite recreational fishing surveys have used public landline telephone directories as a sampling frame. Sampling biases inherent in this method are recognised, but are assumed to be corrected through demographic data expansion. However, the rising prevalence of mobile-only households has potentially increased these biases by skewing raw samples towards households that maintain relatively high levels of coverage in telephone directories. For biases to be corrected through demographic expansion, both the fishing participation rate and the fishing activity must be similar among listed and unlisted fishers within each demographic group. In this study, we tested for a difference in the fishing activity of listed and unlisted fishers within demographic groups by comparing their avidity (number of fishing trips per year), as well as the platform used (boat or shore) and the species targeted on their most recent fishing trip. 3062 recreational fishers were interviewed at 34 tackle stores across 12 residential regions of Queensland, Australia. For each fisher, the data collected included their fishing avidity, the platform used and species targeted on their most recent trip, their gender, age, residential region, and whether their household had a listed telephone number. Although the most avid fishers were younger and less likely to have a listed phone number, cumulative link models revealed that avidity was not affected by an interaction of phone listing status, age group and residential region (p > 0.05). Likewise, binomial generalized linear models revealed that there was no interaction between phone listing, age group and avidity acting on platform (p > 0.05), and platform was not affected by an interaction of phone listing status, age group and residential region (p > 0.05). Ordination of target species using Bray-Curtis dissimilarity indices found a statistically significant but practically negligible difference (i.e. a small effect size) between listed and unlisted fishers (ANOSIM R < 0.05, p < 0.05). These results suggest that, at this time, the fishing activity of listed and unlisted fishers in Queensland is similar within demographic groups. Future research seeking to validate the assumptions of recreational fishing telephone surveys should investigate the fishing participation rates of listed and unlisted fishers within demographic groups.
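The binomial GLM test described above asks whether phone-listing status interacts with age group and avidity in predicting the platform used. A sketch with simulated interview records; the field names and effect sizes are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 3062
df = pd.DataFrame({
    'listed': rng.integers(0, 2, n),                     # phone listing status
    'age_group': rng.choice(['18-34', '35-54', '55+'], n),
    'avidity': rng.poisson(12, n),                       # trips per year
})
# simulated truth: platform depends on avidity but not on listing
p = 1 / (1 + np.exp(-(-0.5 + 0.03 * df['avidity'])))
df['boat'] = (rng.random(n) < p).astype(int)

fit = smf.glm('boat ~ listed * age_group * avidity', data=df,
              family=sm.families.Binomial()).fit()
print(fit.pvalues.filter(like='listed'))                 # interaction terms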
Abstract:
My thesis consists of three essays that investigate strategic interactions between individuals engaging in risky collective action in uncertain environments. The first essay analyzes a broad class of incomplete information coordination games with a wide range of applications in economics and politics. The second essay draws on the general model developed in the first to study individuals' decisions about whether to engage in protest/revolution/coup/strike. The final essay explicitly integrates state response into the analysis. The first essay, Coordination Games with Strategic Delegation of Pivotality, exhaustively analyzes a class of binary-action, two-player coordination games in which players receive stochastic payoffs only if both players take a "stochastic-coordination action". Players receive conditionally independent noisy private signals about the normally distributed stochastic payoffs. With this structure, each player can exploit the information contained in the other player's action only when he takes the "pivotalizing action". This feature has two consequences: (1) when the fear of miscoordination is not too large, each player takes the "pivotalizing action" more often than he would based solely on his private information, in order to utilize the other player's information; and (2) best responses feature both strategic complementarities and strategic substitutes, implying that the game is neither supermodular nor a typical global game. This class of games has applications to a wide range of economic and political phenomena, including war and peace, protest/revolution/coup/strike, interest-group lobbying, international trade, and the adoption of new technology. My second essay, Collective Action with Uncertain Payoffs, studies the decision problem of citizens who must decide whether to submit to the status quo or mount a revolution. If they coordinate, they can overthrow the status quo; otherwise, the status quo is preserved and participants in a failed revolution are punished. Citizens face two types of uncertainty: (a) non-strategic, being uncertain about the relative payoffs of the status quo and revolution, and (b) strategic, being uncertain about each other's assessments of the relative payoffs. I draw on the existing literature and historical evidence to argue that uncertainty in the payoffs of status quo and revolution is intrinsic to politics. Several counter-intuitive findings emerge: (1) better communication between citizens can lower the likelihood of revolution; in fact, when the punishment for failed protest is not too harsh and citizens' private knowledge is accurate, further communication reduces incentives to revolt. (2) Increasing strategic uncertainty can increase the likelihood of revolution attempts, and even the likelihood of successful revolution; in particular, revolt may be more likely when citizens obtain information privately than when they receive it from a common media source. (3) Two dilemmas arise concerning the intensity and frequency of punishment (repression) and the frequency of protest. Punishment Dilemma 1: harsher punishments may increase the probability that punishment is materialized; that is, as the state increases the punishment for dissent, it might also have to punish more dissidents. Only when the punishment is sufficiently harsh does harsher punishment reduce the frequency of its application.
Punishment Dilemma 1 leads to Punishment Dilemma 2: the frequencies of repression and protest can be positively or negatively correlated, depending on the intensity of repression. My third essay, The Repression Puzzle, investigates the relationship between the intensity of grievances and the likelihood of repression. First, I observe that the occurrence of state repression is a puzzle: if repression is to succeed, dissidents should not rebel; if it is to fail, the state should concede in order to save the costs of unsuccessful repression. I then propose an explanation for the "repression puzzle" that hinges on information asymmetries between the state and dissidents about the costs of repression to the state, and hence about the likelihood of its application. I present a formal model that combines the insights of grievance-based and political-process theories to investigate the consequences of this information asymmetry for the dissidents' contentious actions and for the relationship between the magnitude of grievances (formulated here as the extent of inequality) and the likelihood of repression. The main contribution of the paper is to show that this relationship is non-monotone: as the magnitude of grievances increases, the likelihood of repression might decrease. I investigate the relationship between inequality and the likelihood of repression in all country-years from 1981 to 1999. To mitigate specification problems, I estimate the probability of repression using a generalized additive model with thin-plate splines (GAM-TPS). This technique allows for a flexible relationship between inequality, the proxy for the costs of repression and revolutions (income per capita), and the likelihood of repression. The empirical evidence supports my prediction that the relationship between the magnitude of grievances and the likelihood of repression is non-monotone.
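The GAM-TPS estimation above fits the probability of repression as smooth functions of inequality and income per capita. statsmodels offers penalized B-splines rather than mgcv-style thin-plate splines, so the following logistic-GAM sketch is a stand-in under that substitution; the country-year panel is simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(9)
n = 2000
df = pd.DataFrame({'gini': rng.uniform(20, 60, n),
                   'gdp_pc': rng.lognormal(8.5, 1.0, n)})
# simulated truth: a non-monotone effect of inequality
eta = (-4 + 0.25 * df['gini'] - 0.003 * df['gini'] ** 2
       - 0.1 * np.log(df['gdp_pc']))
df['repress'] = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

bs = BSplines(df[['gini', 'gdp_pc']], df=[8, 8], degree=[3, 3])
res = GLMGam.from_formula('repress ~ 1', data=df, smoother=bs,
                          family=sm.families.Binomial()).fit()
print(res.summary())
```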
Abstract:
Undoubtedly, statistics has become one of the most important subjects in the modern world, where its applications are ubiquitous. The importance of statistics is not limited to statisticians, but extends to non-statisticians who have to use statistics within their own disciplines. Several studies have indicated that most academic departments around the world have realized the importance of statistics to non-specialist students. Therefore, the number of students enrolled in statistics courses has vastly increased, coming from a variety of disciplines, and research in statistics education has developed considerably over the last few years. One important issue is how statistics is best taught to, and learned by, non-specialist students. This issue is shaped by several factors that affect the learning and teaching of statistics to non-specialist students, such as the use of technology, the role of the English language (especially for those whose first language is not English), the effectiveness of statistics teachers and their approach to teaching statistics courses, students' motivation to learn statistics and the relevance of statistics courses to the main subjects of non-specialist students. Several studies focused on aspects of learning and teaching statistics have been conducted in different countries around the world, particularly in Western countries. Conversely, the situation in Arab countries, especially in Saudi Arabia, is different: there is very little research in this area, and what exists does not meet those countries' needs for developing the learning and teaching of statistics to non-specialist students. This research was instituted in order to develop the field of statistics education. The purpose of this mixed-methods study was to generate new insights into the subject by investigating how statistics courses are currently taught to non-specialist students in Saudi universities; the study thereby contributes towards filling the knowledge gap that exists in Saudi Arabia. The study used multiple data collection approaches, including questionnaire surveys of 1053 non-specialist students who had completed at least one statistics course in different colleges of universities in Saudi Arabia. These surveys were followed up with qualitative data collected via semi-structured interviews with 16 teachers of statistics from colleges within all six universities where statistics is taught to non-specialist students in Saudi Arabia's Eastern Region. The questionnaire data included several types, so different analysis techniques were used. Descriptive statistics were used to identify the demographic characteristics of the participants, and the chi-square test was used to determine associations between variables. Based on the main issues raised in the literature review, the questions (item scales) were grouped into five key groups: 1) Effectiveness of Teachers; 2) English Language; 3) Relevance of Course; 4) Student Engagement; 5) Using Technology. Exploratory data analysis was used to explore these issues in more detail. Furthermore, given the clustering in the data (students within departments within colleges within universities), multilevel generalized linear models for dichotomous outcomes were used to account for the effects of clustering at those levels.
Factor analysis was conducted, confirming the dimension reduction of the variables (item scales). The data from the teachers' interviews were analysed on an individual basis. The responses were assigned to one of eight themes that emerged from the data: 1) the lack of students' motivation to learn statistics; 2) students' participation; 3) students' assessment; 4) the effective use of technology; 5) the level of previous mathematical and statistical skills of non-specialist students; 6) the English language ability of non-specialist students; 7) the need for extra time for teaching and learning statistics; and 8) the role of administrators. All the data from students and teachers indicated that the learning and teaching of statistics to non-specialist students in Saudi universities needs to be improved in order to meet those students' needs. The findings suggested a weakness in the use of statistical software applications in these courses: there is a lack of technology such as statistical software programs, which would otherwise allow non-specialist students to consolidate their knowledge. The results also indicated that the English language is considered one of the main challenges in learning and teaching statistics, particularly in institutions where English is not the main language of instruction, and that students' weak mathematical skills are another major challenge. Additionally, the results indicated a need to tailor statistics courses to the needs of non-specialist students based on their main subjects, and that statistics teachers need to choose appropriate methods when teaching statistics courses.
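The chi-square association tests mentioned above can be reproduced with a contingency table; the sketch below uses invented counts purely to show the mechanics:

```python
import numpy as np
from scipy.stats import chi2_contingency

# invented counts: college type vs. reported difficulty with
# English-medium statistics teaching (low / medium / high)
table = np.array([[120, 85, 40],
                  [95, 110, 60],
                  [70, 130, 90]])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)
```

The multilevel dichotomous models described above would require a mixed-effects logistic framework with random effects for department, college and university; that step is beyond this small sketch.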
Abstract:
Since rugby union turned professional in 1995 there have been considerable advances in research on its demands, largely using Global Positioning System (GPS) analysis over the last 10 years. A systematic review was undertaken on the use of GPS, particularly the setting of absolute (ABS) and individual (IND) velocity bands, in field-based, intermittent, high-intensity (HI) team sports. From 3669 records identified, 38 studies were included for qualitative analysis. Little agreement was found on the definition of movement intensities within team sports; only three papers, all on rugby union, had used IND bands, and only one compared the ABS and IND methods. Thus, the aim of this study was to determine whether there is a difference in the demands within positions when comparing the ABS and IND methods of GPS analysis, and whether these differences are significantly different between the forward and back positional groups. A total of 214 data files were recorded from 26 players in 17 matches of the 2015/2016 Scottish BT Premiership. ABS velocity zones 1-7 were set at 1) 0-6, 2) 6.1-11, 3) 11.1-15, 4) 15.1-18, 5) 18.1-21, 6) 21.1-25 and 7) 25.1-40 km.h-1, while IND zones 1-7 were 1) <20, 2) 20-40, 3) 40-50, 4) 50-70, 5) 70-80, 6) 80-95 and 7) 95-100% of the player's individually determined maximum velocity (Vmax). A 40 m sprint test measured Vmax, using OptaPro S4 10 Hz (Catapult, Australia) GPS units, to derive the IND bands; the same GPS units were worn during matches. The GPS outputs analysed were % distance, % time, high-intensity efforts (HIEs) over 18.1 km.h-1 / 70% of maximum velocity, and repeated high-intensity efforts (RHIEs), which consist of three HIEs within 21 s. General linear model (GLM) analysis identified a significant difference in the measurement of % total distance covered between the ABS and IND methods in all zones for forwards (p<0.05) and backs (p<0.05). This difference was also significant between forwards and backs in zones 1 (mean difference ± standard deviation: 3.7±0.7%), 6 (1.2±0.4%) and 7 (1.0±0.0%) (p<0.05). Percentage time estimations were significantly different between ABS and IND analysis within forwards in zones 1 (1.7±1.7%), 2 (-2.9±1.3%), 3 (1.9±0.8%), 4 (-1.4±0.8%) and 5 (0.2±0.4%), and within backs in zones 1 (-10±1.5%), 2 (-1.2±1.1%), 3 (1.8±0.9%) and 5 (0.6±0.5%) (p<0.05). The difference between groups was significant in zones 1, 2, 4 and 5 (p<0.05). The number of HIEs was significantly different between forwards and backs in zones 6 (6±2) and 7 (3±2). RHIEs were significantly different between ABS and IND for forwards (1±2, p<0.05), although not between groups. Until more research on the differences between the ABS and IND methods is carried out, neither can be deemed a criterion method. In conclusion, there are significant differences between the ABS and IND methods of GPS analysis of the physical demands of rugby union, which must be considered when these are used to inform training load and recovery to improve performance and reduce injuries.
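The ABS/IND comparison above reduces to binning a velocity trace twice: once with fixed km/h edges, once with edges at percentages of the player's Vmax. A sketch of the % distance computation; the band edges follow the abstract, while the trace and the Vmax value are simulated:

```python
import numpy as np

abs_edges = np.array([0, 6, 11, 15, 18, 21, 25, 40.0])    # km/h, zones 1-7
ind_pct = np.array([0, 20, 40, 50, 70, 80, 95, 100.0])    # % of Vmax

def zone_distance_pct(speed_kmh, edges, hz=10):
    """% of total distance in each zone for a GPS speed trace."""
    step_m = speed_kmh / 3.6 / hz                 # metres per 10 Hz sample
    zones = np.digitize(speed_kmh, edges[1:-1])   # 0..6 for zones 1-7
    totals = np.bincount(zones, weights=step_m, minlength=len(edges) - 1)
    return 100 * totals / totals.sum()

rng = np.random.default_rng(10)
speed = np.clip(rng.gamma(2.0, 3.0, 80 * 60 * 10), 0, 34)  # toy 80-min match
vmax = 34.0                                 # hypothetical 40 m sprint Vmax
print(zone_distance_pct(speed, abs_edges))                 # ABS bands
print(zone_distance_pct(speed, vmax * ind_pct / 100))      # IND bands
```

Comparing the two outputs for the same trace makes the ABS-vs-IND discrepancies in % distance per zone directly visible.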