932 results for Multilevel linear model
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, as well as the interference among competing workloads, makes it difficult to understand the VMs' resource demands for meeting their Quality of Service (QoS) targets. Second, the dynamics of the applications and the system also make it difficult to maintain the desired QoS target while the environment changes. Third, the transparency of virtualization presents a hurdle for guest-layer applications and the host-layer VM scheduler to cooperate in improving application QoS and system efficiency. This dissertation proposes to address these challenges through fuzzy modeling and control-theory-based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to automatically and accurately capture a VM's complex demands for multiple types of resources online, based on the observed workload and resource usage. Second, to enable fast adaptation of resource management, the fuzzy modeling approach is integrated with a predictive controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach, which can quickly track applications' QoS targets and optimize resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM's host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks including TPC-H, RUBiS, and TerraFly.
The results demonstrate that the fuzzy-modeling-based approach improves the accuracy in resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system. It is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when the resources are contended by dynamic workloads.
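The fuzzy-rule idea underlying the modeling approach can be sketched as a toy Sugeno-style rule base (the membership functions, rules, and outputs below are invented for illustration; the dissertation learns its model online from observed workloads):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def predict_cpu_share(load):
    """Weighted (Sugeno-style) combination of three hypothetical fuzzy rules:
    low load -> 20% CPU, medium -> 50%, high -> 90%."""
    rules = [
        (tri(load, -0.5, 0.0, 0.5), 0.20),  # rule 1: load is low
        (tri(load,  0.0, 0.5, 1.0), 0.50),  # rule 2: load is medium
        (tri(load,  0.5, 1.0, 1.5), 0.90),  # rule 3: load is high
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Intermediate loads blend adjacent rules, which is what makes the model nonlinear yet smooth in the observed workload.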
Abstract:
In this thesis, four different methods were used to diagnose precipitation extremes over Northeastern Brazil (NEB): generalized linear models via logistic and Poisson regression, extreme value theory via the generalized extreme value (GEV) and generalized Pareto (GPD) distributions, and vectorial generalized linear models via the GEV (MVLG GEV). The logistic and Poisson regression models were used to identify interactions between precipitation extremes and other variables based on odds ratios and relative risks. Outgoing longwave radiation was found to be the indicator variable for the occurrence of extreme precipitation over eastern, northern, and semi-arid NEB, while relative humidity played this role over southern NEB. The GEV and GPD distributions (based on the 95th percentile) showed that the location and scale parameters reached their maxima along the eastern and northern coasts of NEB; the GEV also identified a maximum core over western Pernambuco influenced by weather systems and topography. For the GEV and GPD shape parameter, the data in most regions were fitted by the negative Weibull and Beta distributions (ξ < 0), respectively. The GEV (GPD) return levels and periods indicate that northern Maranhão (central Bahia) may experience at least one extreme precipitation event exceeding 160.9 mm/day (192.3 mm/day) within the next 30 years. The MVLG GEV model found that the zonal and meridional wind components, evaporation, and Atlantic and Pacific sea surface temperatures boost precipitation extremes. The GEV parameters show the following results: a) location (μ), the highest value was 88.26 ± 6.42 mm over northern Maranhão; b) scale (σ), most regions showed positive values, except southern Maranhão; and c) shape (ξ), most of the selected regions were fitted by the negative Weibull distribution (ξ < 0). Southern Maranhão and southern Bahia showed the greatest accuracy. From the return levels, it was estimated that central Bahia may experience at least one extreme precipitation event equal to or exceeding 571.2 mm/day within the next 30 years.
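The return levels quoted above follow from the standard GEV quantile formula, sketched below (parameter values passed to it here would be illustrative, not the thesis's fitted estimates):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once every T years under an annual-maxima
    GEV with location mu, scale sigma, and shape xi."""
    y = -math.log(1.0 - 1.0 / T)          # reduced variate at probability 1 - 1/T
    if abs(xi) < 1e-9:                    # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)
```

By construction, the GEV cumulative distribution function evaluated at the returned level equals 1 - 1/T.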
Abstract:
CHAPTER 1 - This study histologically evaluated two implant designs, a classic thread design versus one specifically designed for healing chamber formation, placed with two drilling protocols. Forty dental implants (4.1 mm diameter) with two different macrogeometries were inserted in the tibiae of 10 Beagle dogs, and maximum insertion torque was recorded. The drilling techniques were: up to 3.75 mm diameter (regular group) and up to 4.0 mm diameter (overdrilling group) for both implant designs. At 2 and 4 weeks, samples were retrieved and processed for histomorphometric analysis. For torque, BIC (bone-to-implant contact), and BAFO (bone area fraction occupied), a general linear model was employed with instrumentation technique and time in vivo as independent variables. The insertion torque significantly decreased as a function of increasing drilling diameter for both implant designs (p<0.001). No significant differences were detected between implant designs for each drilling technique (p>0.18). A significant increase in BIC was observed from 2 to 4 weeks only for implants placed with the overdrilling technique (p<0.03), but not for those placed in the 3.75 mm drilling sites (p>0.32). Despite the differences between implant designs and drilling techniques, an intramembranous-like healing mode with newly formed woven bone prevailed. CHAPTER 2 - The objective of this preliminary histologic study was to determine whether different drilling protocols (oversized, intermediate, undersized) produce different biologic responses at an early healing period of 2 weeks in vivo in a beagle dog model. Ten beagle dogs were acquired and subjected to surgeries in the tibia 2 weeks before euthanasia. During surgery, 3 implants, 4 mm in diameter by 10 mm in length, were placed in bone sites drilled to 3.5 mm, 3.75 mm, and 4.0 mm in final diameter. The insertion and removal torques were recorded for all samples.
Statistical significance was set at the 95% level of confidence, and the number of dogs was considered the statistical unit for all comparisons. For torque, BIC, and BAFO, a general linear model was employed with instrumentation technique and time in vivo as independent variables. Overall, the insertion torque increased as a function of decreasing drilling diameter from 4.0 mm, to 3.75 mm, to 3.5 mm, with a significant difference in torque levels between all groups (p<0.001). Statistical assessment of BIC and BAFO showed significantly higher values for the 3.75 mm (recommended) drilling group relative to the other two groups (p<0.001). Different drilling dimensions resulted in variations in insertion torque values (primary stability), and a different pattern of healing and interfacial remodeling was observed for the different groups. CHAPTER 3 - The present study evaluated the effect of different drilling dimensions (undersized, regular, and oversized) on the insertion and removal torques of dental implants in a beagle dog model. Six beagle dogs were acquired and subjected to bilateral surgeries in the radii 1 and 3 weeks before euthanasia. During surgery, 3 implants, 4 mm in diameter by 10 mm in length, were placed in bone sites drilled to 3.2 mm, 3.5 mm, and 3.8 mm in final diameter. The insertion and removal torques were recorded for all samples. Statistical analysis was performed by paired t tests for repeated measures and by t tests assuming unequal variances (all at the 95% level of significance). Overall, the insertion and removal torque levels obtained were inversely proportional to the drilling dimension, with a significant difference detected between the 3.2 mm and 3.5 mm groups relative to the 3.8 mm group (P < 0.03).
Although the insertion torque–removal torque paired observations were statistically maintained for the 3.5 mm and 3.8 mm groups, a significant decrease in removal torque values relative to insertion torque levels was observed for the 3.2 mm group. A different pattern of healing and interfacial remodeling was observed for the different groups. Different drilling dimensions resulted in variations in insertion torque values (primary stability) and in stability maintenance over the first weeks of bone healing.
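A paired comparison of insertion versus removal torque like the one described can be sketched with SciPy (torque values below are invented for illustration, not the study's measurements):

```python
from scipy.stats import ttest_rel

# Hypothetical insertion vs. removal torque (N·cm) for six implants in an
# undersized-drilling group; a paired t test checks whether removal torque
# dropped relative to insertion torque in the same implants.
insertion = [40.0, 42.0, 38.0, 41.0, 39.0, 43.0]
removal   = [30.0, 31.0, 29.0, 32.0, 28.0, 33.0]
t_stat, p_value = ttest_rel(insertion, removal)
```

A positive t statistic with a small p-value indicates a systematic decrease from insertion to removal torque.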
Abstract:
Pupil light reflex can be used as a non-invasive ocular predictor of cephalic autonomic nervous system integrity. Spectral sensitivity of the pupil's response to light has, for some time, been an interesting issue. It has generally, however, only been investigated with the use of white light, and studies with monochromatic wavelengths are scarce. This study investigates the effects of wavelength and age on three parameters of the pupil light reflex (amplitude of response, latency, and velocity of constriction) in a large sample of younger and older adults (N = 97), in mesopic conditions. Subjects were exposed to a single light stimulus at four different wavelengths: white (5600 K), blue (450 nm), green (510 nm), and red (600 nm). Data were analyzed appropriately and, when applicable, using the General Linear Model (GLM), Randomized Complete Block Design (RCBD), Student's t-test, and/or ANCOVA. Across all subjects, pupillary response to light had the greatest amplitude and shortest latency in the white and green light conditions. With regard to age, older subjects (46-78 years) showed an increased latency in white light and a decreased velocity of constriction in green light compared to younger subjects (18-45 years). This study provides data patterns on parameters of wavelength-dependent pupil reflexes to light in adults and contributes to the large body of pupillometric research. It is hoped that this study will add to the overall evaluation of cephalic autonomic nervous system integrity.
Abstract:
The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix exponential transformation guarantees the positive definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann’s seminal work in terms of the estimation of highly non-linear model specifications (“Causality tests and observationally equivalent representations of econometric models”, Journal of Econometrics, 1988, 39(1-2), 69–104), especially for developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of RMESV-ALM, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
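The positive-definiteness guarantee mentioned above rests on a basic linear-algebra fact: the matrix exponential of any real symmetric matrix is symmetric positive definite. A minimal numerical check (with an arbitrary symmetric matrix, unrelated to the estimated model):

```python
import numpy as np
from scipy.linalg import expm, logm

# An unconstrained symmetric "log-covariance" matrix; its matrix exponential
# is automatically a valid (symmetric positive definite) covariance matrix.
A = np.array([[ 0.2, -1.3,  0.7],
              [-1.3,  0.0,  0.4],
              [ 0.7,  0.4, -2.1]])
S = expm(A)                          # candidate covariance matrix
eigvals = np.linalg.eigvalsh(S)      # all strictly positive
```

This is why the model can place dynamics on the log-covariance matrix without any positivity constraints: `logm` recovers the unconstrained matrix from the covariance.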
Abstract:
Background: Several theories, such as biological width formation, inflammatory reactions due to implant-abutment microgap contamination, and periimplant stress/strain concentration causing bone microdamage accumulation, have been suggested to explain early periimplant bone loss. However, it is not yet well understood to what extent the implant-abutment connection type may influence the remodeling process around dental implants. Aim: to evaluate clinical, bacteriological, and biomechanical parameters related to periimplant bone loss at the crestal region, comparing external hexagon (EH) and Morse-taper (MT) connections. Materials and methods: Twelve patients with totally edentulous mandibles received four custom-made Ø 3.8 x 13 mm implants in the interforaminal region of the mandible, with the same design but different prosthetic connections (two of them EH or MT, randomly placed based on a split-mouth design), and an immediate implant-supported prosthesis. Clinical parameters (periimplant probing pocket depth, modified gingival index, and mucosal thickness) were evaluated at 6 sites around the implants at a 12-month follow-up. The distance from the top of the implant to the first bone-to-implant contact (IT-FBIC) was evaluated on standardized digital peri-apical radiographs acquired at 1, 3, 6, and 12 months of follow-up. Samples of the subgingival microbiota were collected 1, 3, and 6 months after implant loading. DNA was extracted and used for the quantification of Tannerella forsythia, Porphyromonas gingivalis, Aggregatibacter actinomycetemcomitans, Prevotella intermedia, and Fusobacterium nucleatum. Comparisons among multiple periods of observation were performed using repeated-measures Analysis of Variance (ANOVA) followed by a Tukey post-hoc test, while two-period comparisons were made using paired t-tests. Further, 36 computer-tomography-based finite element (FE) models were built, simulating each patient under 3 loading conditions.
The results for the peak EQV strain in periimplant bone were interpreted by means of a general linear model (ANOVA). Results: The variation in periimplant bone loss assessed by means of radiographs was significantly different between the connection types (P<0.001). Mean IT-FBIC was 1.17±0.44 mm for EH and 0.17±0.54 mm for MT, considering all evaluated time periods. All clinical parameters showed no significant differences. No significant microbiological differences could be observed between the two connection types. Most of the collected samples had very few pathogens, meaning that these regions were healthy from a microbiological point of view. In the FE analysis, a significantly higher peak EQV strain (P=0.005) was found for the EH (mean 3438.65 µε) compared to the MT (mean 840.98 µε) connection. Conclusions: Varying the implant-abutment connection type will result in diverse periimplant bone remodeling, regardless of clinical and microbiological conditions. This is most likely attributable to the distinct load transmission through different implant-abutment connections to the periimplant bone. The present findings suggest that a Morse-taper connection is more efficient at preventing periimplant bone loss than an external hexagon connection.
Abstract:
Background: Identifying biological markers to aid diagnosis of bipolar disorder (BD) is critically important. To be considered a possible biological marker, neural patterns in BD should be discriminant from those in healthy individuals (HI). We examined patterns of neuromagnetic responses revealed by magnetoencephalography (MEG) during implicit emotion-processing using emotional (happy, fearful, sad) and neutral facial expressions, in sixteen individuals with BD and sixteen age- and gender-matched healthy individuals. Methods: Neuromagnetic data were recorded using a 306-channel whole-head MEG ELEKTA Neuromag System, and preprocessed using Signal Space Separation as implemented in MaxFilter (ELEKTA). Custom Matlab programs removed EOG and ECG signals from filtered MEG data, and computed means of epoched data (0-250ms, 250-500ms, 500-750ms). A generalized linear model with three factors (individual, emotion intensity and time) compared BD and HI. A principal component analysis of normalized mean channel data in selected brain regions identified principal components that explained 95% of data variation. These components were used in a quadratic support vector machine (SVM) pattern classifier. SVM classifier performance was assessed using the leave-one-out approach. Results: BD and HI showed significantly different patterns of activation for 0-250ms within both left occipital and temporal regions, specifically for neutral facial expressions. PCA revealed significant differences between BD and HI for mild fearful, happy, and sad facial expressions within 250-500ms. The quadratic SVM classifier showed greatest accuracy (84%) and sensitivity (92%) for neutral faces, in left occipital regions within 500-750ms. Conclusions: MEG responses may be used in the search for disease-specific neural markers.
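The dimension-reduction step, retaining the smallest number of principal components that explain at least 95% of the variation, can be sketched via the singular value decomposition (on synthetic data, not the MEG recordings):

```python
import numpy as np

# Synthetic stand-in for normalized mean channel data: 32 observations of
# 10 channels with rapidly decaying variance across directions.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 10)) @ np.diag([5, 3, 2, 1, .5, .4, .3, .2, .1, .05])
Xc = X - X.mean(axis=0)                      # center each channel
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)              # variance share per component
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1  # components kept
```

The retained `k` columns of the projection would then feed the downstream classifier.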
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
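The reduced-rank tensor factorization induced by a latent structure model can be made concrete: with k latent classes, the joint PMF of three categorical variables is a nonnegative rank-k PARAFAC tensor. A small sketch with invented parameters:

```python
import numpy as np

# Latent class model: P(x1, x2, x3) = sum_h pi_h * psi1[h, x1] * psi2[h, x2]
# * psi3[h, x3], i.e. a nonnegative rank-k PARAFAC factorization of the PMF.
rng = np.random.default_rng(3)
k, d = 2, 4                                   # latent classes, categories per variable
pi = np.array([0.6, 0.4])                     # class weights
psi = rng.dirichlet(np.ones(d), size=(3, k))  # per-variable conditional PMFs
P = np.einsum('h,hi,hj,hl->ijl', pi, psi[0], psi[1], psi[2])
```

The resulting tensor is a valid joint PMF whose nonnegative rank is at most k, which is the quantity the chapter relates to the support of a log-linear model.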
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
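The flavor of such a Gaussian approximation can be illustrated on a much simpler posterior, a Beta distribution, by matching the mode and the curvature of the log-density there (a Laplace-style illustration only; the chapter's optimal-KL Gaussian approximation for log-linear models is more involved):

```python
def laplace_beta(a, b):
    """Gaussian (Laplace) approximation to a Beta(a, b) density: normal
    centered at the mode, with variance from the negative log-density
    curvature at the mode. Requires a > 1 and b > 1."""
    mode = (a - 1.0) / (a + b - 2.0)
    curvature = (a - 1.0) / mode**2 + (b - 1.0) / (1.0 - mode)**2
    return mode, 1.0 / curvature          # (mean, variance) of the Gaussian
```

For concentrated posteriors the approximate variance is very close to the exact Beta variance, mirroring the accuracy-in-moderate-samples point made in the chapter.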
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
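The truncated-normal (Albert and Chib) data augmentation sampler for a probit model, whose mixing the chapter studies, can be sketched for the intercept-only case with a flat prior (synthetic, balanced data; in this regime the sampler mixes well, unlike the rare-event regime analyzed in the chapter):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
y = np.array([1] * 60 + [0] * 40)             # 60 successes out of n = 100
n = y.size
beta, draws = 0.0, []
for _ in range(1000):
    # latent z_i ~ N(beta, 1), truncated to z > 0 if y_i = 1, z < 0 otherwise
    lo = np.where(y == 1, -beta, -np.inf)     # bounds for e = z - beta
    hi = np.where(y == 1, np.inf, -beta)
    z = beta + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # with a flat prior, beta | z ~ N(mean(z), 1/n)
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)
post = np.array(draws[200:])                  # discard burn-in
```

With 60% successes the posterior concentrates near the true intercept Φ⁻¹(0.6) ≈ 0.25; the chapter's point is that when successes are rare relative to n, the same scheme's spectral gap collapses.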
Abstract:
A class of multi-process models is developed for collections of time indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
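A moment-matching Gaussian approximation in this spirit uses the known mean and variance of a PG(b, c) random variable (the formulas below are the standard Pólya-Gamma moments; the dissertation's construction is along these lines but not necessarily identical):

```python
import math

def pg_moments(b, c):
    """Mean and variance of a Polya-Gamma PG(b, c) random variable, usable
    as the parameters of a moment-matched Gaussian approximation."""
    if c == 0.0:
        return b / 4.0, b / 24.0          # limits as c -> 0
    mean = b / (2.0 * c) * math.tanh(c / 2.0)
    var = b / (4.0 * c**3) * (math.sinh(c) - c) / math.cosh(c / 2.0)**2
    return mean, var
```

Since b grows with the number of trials in binomial models, a central-limit argument makes the Gaussian stand-in increasingly accurate as counts rise.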
Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.
The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron level firing patterns in rhesus monkeys. The monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.
The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age specific latent natural ability class and a performance enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When the performance of the player significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.
All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.
Abstract:
Safeguarding organizations against opportunism and severe deception in computer-mediated communication (CMC) presents a major challenge to CIOs and IT managers. New insights into linguistic cues of deception derive from the speech acts innate to CMC. Applying automated text analysis to archival email exchanges in a CMC system as part of a reward program, we assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe deception by business partners. We empirically assess the predictive ability of our framework using an ordinal multilevel regression model. Results indicate that deceivers minimize the use of referencing and self-deprecation but include more superfluous descriptions and flattery. Deceitful channel partners also over-structure their arguments and rapidly mimic the linguistic style of the account manager across dyadic email exchanges. Thanks to its diagnostic value, the proposed framework can support firms’ decision-making and guide compliance monitoring system development.
Abstract:
It is crucial to understand the role that labor market positions might play in creating gender differences in work–life balance. One theoretical approach to understanding this relationship is the spillover theory. The spillover theory argues that an individual’s life domains are integrated; meaning that well-being can be transmitted between life domains. Based on data collected in Hungary in 2014, this paper shows that work-to-family spillover does not affect both genders the same way. The effect of work on family life tends to be more negative for women than for men. Two explanations have been formulated in order to understand this gender inequality. According to the findings of the analysis, gender is conditionally independent of spillover if financial status and flexibility of work are also incorporated into the analysis. This means that the relative disadvantage for women in terms of spillover can be attributed to their lower financial status and their relatively low access to flexible jobs. In other words, the gender inequalities in work-to-family spillover are deeply affected by individual labor market positions. The observation of the labor market’s effect on work–life balance is especially important in Hungary since Hungary has one of the least flexible labor arrangements in Europe. A marginal log-linear model, which is a method for categorical multivariate analysis, has been applied in this analysis.
Abstract:
LINS, Filipe C. A. et al. Modelagem dinâmica e simulação computacional de poços de petróleo verticais e direcionais com elevação por bombeio mecânico. In: CONGRESSO BRASILEIRO DE PESQUISA E DESENVOLVIMENTO EM PETRÓLEO E GÁS, 5. 2009, Fortaleza, CE. Anais... Fortaleza: CBPDPetro, 2009.
Abstract:
We analyze a real data set pertaining to reindeer fecal pellet-group counts obtained from a survey conducted in a forest area in northern Sweden. In the data set, over 70% of counts are zeros, and there is high spatial correlation. We use conditionally autoregressive random effects for modeling of spatial correlation in a Poisson generalized linear mixed model (GLMM), quasi-Poisson hierarchical generalized linear model (HGLM), zero-inflated Poisson (ZIP), and hurdle models. The quasi-Poisson HGLM allows for both under- and overdispersion with excessive zeros, while the ZIP and hurdle models allow only for overdispersion. In analyzing the real data set, we see that the quasi-Poisson HGLMs can perform better than the other commonly used models, for example, ordinary Poisson HGLMs, spatial ZIP, and spatial hurdle models, and that the underdispersed Poisson HGLMs with spatial correlation fit the reindeer data best. We develop R codes for fitting these models using a unified algorithm for the HGLMs. Spatial count responses with an extremely high proportion of zeros and underdispersion can be successfully modeled using the quasi-Poisson HGLM with spatial random effects.
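The excess zeros and overdispersion that motivate these models are easy to see in a simulated zero-inflated Poisson sample (parameters invented, not fitted to the reindeer data):

```python
import numpy as np

# Zero-inflated Poisson: with probability pi the count is a structural zero,
# otherwise it is Poisson(lam). The zero fraction then far exceeds the plain
# Poisson prediction exp(-lam), and the variance exceeds the mean.
rng = np.random.default_rng(7)
pi, lam, n = 0.7, 2.0, 100_000
structural_zero = rng.random(n) < pi
counts = np.where(structural_zero, 0, rng.poisson(lam, n))
zero_frac = (counts == 0).mean()
```

Here the theoretical zero probability is pi + (1 - pi)·exp(-lam) ≈ 0.74, versus exp(-2) ≈ 0.14 for an ordinary Poisson with the same rate.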
Abstract:
Objective: 1) to assess preparedness to practice and satisfaction with the learning environment amongst new graduates from European osteopathic institutions; 2) to compare preparedness to practice and satisfaction with the learning environment between and within countries where osteopathy is regulated and where regulation is still to be achieved; 3) to identify possible correlations between learning environment and preparedness to practice. Method: Osteopathic providers of full-time education located in Europe were enrolled, and their final-year students were contacted to complete a survey. The measures used were the Dundee Ready Educational Environment Measure (DREEM), the Association of American Medical Colleges (AAMC) questionnaire, and a demographic questionnaire. Scores were compared across institutions using one-way ANOVA and a generalised linear model. Results: Nine European osteopathic education institutions participated in the study (4 located in Italy, 2 in the UK, 1 in France, 1 in Belgium, and 1 in the Netherlands), and 243 (77%) of their final-year students completed the survey. The mean DREEM total score was 121.4 (SEM: 1.66), whilst the mean AAMC score was 17.58 (SEM: 0.35). A generalised linear model found a significant association between non-regulated countries and the total as well as subscale DREEM scores (p<0.001). Learning environment and preparedness to practice were significantly positively correlated (r=0.76; p<0.01). Discussion: A perceived higher level of preparedness and satisfaction was found amongst students from osteopathic institutions located in countries without regulation compared to those located in countries where osteopathy is regulated; however, all institutions obtained a 'more positive than negative' result. Moreover, in general, cohorts with fewer than 20 students scored significantly higher than larger student cohorts. Finally, an overall positive correlation between students' preparedness and satisfaction was found across all institutions recruited.
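The one-way ANOVA used to compare scores across institutions can be sketched as follows (cohort scores below are invented for illustration, not the study's DREEM data):

```python
from scipy.stats import f_oneway

# Hypothetical total scores from three small cohorts; the F test asks
# whether the between-cohort variation exceeds the within-cohort variation.
cohort_a = [118, 121, 125, 119, 123]
cohort_b = [130, 134, 129, 132, 131]
cohort_c = [115, 117, 112, 116, 114]
F, p = f_oneway(cohort_a, cohort_b, cohort_c)
```

A large F with a small p-value indicates that at least one cohort's mean score differs from the others.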
Abstract:
Estuaries are areas which, owing to their structure, their functioning, and their location, receive significant inputs of nutrients. One of the objectives of the RNO, the French network for coastal water quality monitoring, is to assess the levels and trends of nutrient concentrations in estuaries. A linear model was used to describe and explain the evolution of total dissolved nitrogen concentration in the three most important estuaries on the Channel-Atlantic seaboard (Seine, Loire, and Gironde). As a first step, a reliable data set was selected. Total dissolved nitrogen evolution patterns in the estuarine environment were then studied graphically, allowing a reasonable choice of covariates. Salinity played a major role in explaining nitrogen concentration variability in the estuaries, and dilution lines proved to be a useful tool for detecting outlying observations and modeling the nitrogen/salinity relation. Increasing trends were detected by the model, with a high magnitude in the Seine, intermediate in the Loire, and lower in the Gironde. The nonlinear trends estimated in the Loire and Seine estuaries could be due to the important interannual variations suggested by the graphics. With a view to making the most of the QUADRIGE database, a discussion of the statistical model and of the RNO hydrological sampling strategy led to suggestions for better exploitation of the nutrient data.
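A dilution line of the kind described, total dissolved nitrogen falling linearly with salinity under conservative mixing, can be sketched as an ordinary least-squares fit (synthetic values, not RNO measurements):

```python
import numpy as np

# Hypothetical transect: nitrogen (µmol/L) decreasing linearly with salinity
# plus small measurement noise; OLS recovers the dilution line.
salinity = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
nitrogen = (300.0 - 8.0 * salinity
            + np.array([2.0, -1.0, 0.5, -0.5, 1.0, -2.0, 0.0, 0.0]))
A = np.column_stack([np.ones_like(salinity), salinity])   # design matrix
(intercept, slope), *_ = np.linalg.lstsq(A, nitrogen, rcond=None)
```

Observations far from the fitted line are the candidate outliers the paper mentions, and the intercept extrapolates the freshwater end-member concentration.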