922 results for Generalized Linear Model


Relevance:

80.00%

Abstract:

Traditional wave kinetics describes the slow evolution of systems with many degrees of freedom to equilibrium via numerous weak non-linear interactions, and fails for a very important class of dissipative (active) optical systems with cyclic gain and losses, such as lasers with non-linear intracavity dynamics. Here we introduce a conceptually new class of cyclic wave systems, characterized by non-uniform double-scale dynamics with strong periodic changes of the energy spectrum and slow evolution from cycle to cycle to a statistically steady state. Taking a practically important example, the random fibre laser, we show that a model describing such a system is close to the integrable non-linear Schrödinger equation and requires a new formalism of wave kinetics, which we develop here. We derive a non-linear kinetic theory of the laser spectrum, generalizing the seminal linear model of Schawlow and Townes. Experimental results agree with our theory. The work has implications for describing the kinetics of cyclic systems beyond photonics.
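For reference, the integrable core the laser model stays close to is the standard non-linear Schrödinger equation; this is its usual fibre-optics form (a textbook statement, not quoted from the paper), with group-velocity dispersion beta_2 and Kerr coefficient gamma:

```latex
% Standard NLSE for the slowly varying envelope A(z, t) in an optical
% fibre; the cyclic laser model adds periodic gain and loss on top of
% this integrable core.
i \frac{\partial A}{\partial z}
  - \frac{\beta_2}{2} \frac{\partial^2 A}{\partial t^2}
  + \gamma \, |A|^2 A = 0
```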

Relevance:

80.00%

Abstract:

Evidence of the relationship between altered cognitive function and depleted Fe status is accumulating in women of reproductive age, but the degree of Fe deficiency associated with negative neuropsychological outcomes needs to be delineated. Data are limited regarding this relationship in university women, in whom optimal cognitive function is critical to academic success. The aim of the present study was to examine the relationship between body Fe, in the absence of Fe-deficiency anaemia, and neuropsychological function in young college women. Healthy, non-anaemic undergraduate women (n = 42) provided a blood sample and completed a standardised cognitive test battery consisting of one manual task (Tower of London (TOL), a measure of central executive function) and five computerised tasks (Bakan vigilance task, mental rotation, simple reaction time, immediate word recall and two-finger tapping). Women's body Fe ranged from -4.2 to 8.1 mg/kg. General linear model ANOVA revealed a significant effect of body Fe on TOL planning time (P = 0.002). Spearman's correlation coefficients showed a significant inverse relationship between body Fe and TOL planning time for move categories 4 (r = -0.39, P = 0.01) and 5 (r = -0.47, P = 0.002). Performance on the computerised cognitive tasks was not affected by body Fe level. These findings suggest that Fe status in the absence of anaemia is positively associated with central executive function in otherwise healthy college women. Copyright © The Authors 2012.
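A minimal sketch of the two analyses named above (general linear model ANOVA and Spearman correlation); the column names and values below are hypothetical toy data, not the study's:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

df = pd.DataFrame({
    "body_fe": [-2.1, 0.5, 3.2, 6.8, 1.1, 4.0],                # toy mg/kg values
    "tol_planning_time": [14.2, 12.8, 11.5, 9.9, 12.1, 10.4],  # toy seconds
})

# Spearman rank correlation between body iron and TOL planning time
rho, p = spearmanr(df["body_fe"], df["tol_planning_time"])
print(f"rho = {rho:.2f}, P = {p:.3f}")

# General linear model: planning time as a function of body iron
fit = smf.ols("tol_planning_time ~ body_fe", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```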

Relevance:

80.00%

Abstract:

The objective of this study is to demonstrate the use of the weak form partial differential equation (PDE) method for finite-element (FE) modeling of a new constitutive relation without the need for user subroutine programming. Viscoelastic asphalt mixtures were modeled with the weak form PDE-based FE method as the examples in the paper. A solid-like generalized Maxwell model was used to represent the deformation mechanism of a viscoelastic material; its constitutive relations were derived and implemented in the weak form PDE module of Comsol Multiphysics, a commercial FE program. The weak form PDE modeling of viscoelasticity was verified by comparing Comsol and Abaqus simulations, which employed the same loading configurations and material property inputs in virtual laboratory test simulations. Both produced identical results in terms of axial and radial strain responses. The weak form PDE modeling of viscoelasticity was further validated by comparing the weak form PDE predictions with real laboratory test results for six types of asphalt mixtures with two air void contents and three aging periods. The viscoelastic material properties, such as the coefficients of a Prony series model for the relaxation modulus, were obtained by conversion from the master curves of dynamic modulus and phase angle. Strain responses in compressive creep tests at three temperatures and in cyclic load tests were predicted with the weak form PDE modeling and found to be comparable with the measurements from the real laboratory tests. It was demonstrated that weak form PDE-based FE modeling can serve as an efficient method to implement new constitutive models and can free engineers from user subroutine programming.
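The Prony series mentioned above has the standard generalized Maxwell form E(t) = E_inf + sum_i E_i * exp(-t / tau_i); here is a minimal sketch with hypothetical coefficients (not the paper's fitted values):

```python
import numpy as np

def relaxation_modulus(t, e_inf, e_i, tau_i):
    """Prony series E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    return e_inf + np.sum(
        e_i[:, None] * np.exp(-t[None, :] / tau_i[:, None]), axis=0
    )

e_inf = 50.0                              # long-term modulus, MPa (toy value)
e_i = np.array([2000.0, 800.0, 300.0])    # branch moduli, MPa (toy values)
tau_i = np.array([0.1, 10.0, 1000.0])     # relaxation times, s (toy values)

print(relaxation_modulus(np.logspace(-2, 4, 7), e_inf, e_i, tau_i))
```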

Relevance:

80.00%

Abstract:

Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduces additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components with the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface of the small components. To reduce the uncertainty of the plane measurement, an evaluation index for the measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
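A minimal sketch of linearized error propagation for normally distributed errors, the mechanism behind the model described above: if y is approximately J x, then Cov(y) = J Cov(x) J^T. The Jacobian and variances below are hypothetical:

```python
import numpy as np

def propagate_covariance(jacobian, cov_in):
    """First-order propagation of an input covariance through a
    linearized error transmission function: Cov(y) = J Cov(x) J^T."""
    return jacobian @ cov_in @ jacobian.T

J = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.5]])         # toy linearized transmission function
cov_x = np.diag([0.01, 0.04, 0.02])     # toy measurement/fixture variances
print(propagate_covariance(J, cov_x))
```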

Relevance:

80.00%

Abstract:

Generalizing from his experience in solving practical problems, Koopmans set about devising the linear activity analysis model. He was surprised to find that the economics of his day possessed no uniform, sufficiently exact theory of production or system of concepts for it. In a pioneering study he therefore also laid down, as the theoretical framework for the linear activity analysis model, the foundations of an axiomatic production theory resting on the concept of technological sets. He is associated with the exact definition of the concepts of production efficiency and efficiency prices, and with the proof of their mutually conditioning relationship within the linear activity analysis model. Koopmans treated the purely technical definition of efficiency in use today only as a special case; his aim was to introduce and analyse the concept of economic efficiency. This paper reconstructs his results on the latter with the help of the duality theorems of linear programming. It shows, first, that his proofs are equivalent to proving the duality theorems of linear programming and, second, that economic efficiency prices are in fact shadow prices in today's sense. It also points out that his model for interpreting economic efficiency can be regarded as a direct predecessor of the Arrow-Debreu-McKenzie models of general equilibrium theory, containing almost all of their essential elements and concepts: the equilibrium prices are none other than Koopmans' efficiency prices. Finally, Koopmans' model is reinterpreted as a possible tool for the microeconomic description of firm-level technology. Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50.
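A minimal illustration of the duality link drawn above, showing the dual variables (shadow prices) of a small linear program; the numbers are toy values, not from the paper:

```python
from scipy.optimize import linprog

# Primal: max 3*x1 + 5*x2  s.t.  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3.0, -5.0],
              A_ub=[[1.0, 1.0], [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              method="highs")

print("primal optimum:", -res.fun)   # 14.0 at x = (3, 1)
# Lagrange multipliers of the resource constraints; their absolute
# values are the shadow prices (2 and 1) of the two resources.
print("duals:", res.ineqlin.marginals)
```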

Relevance:

80.00%

Abstract:

Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function for time held fixed is called the yield curve; the aggregate of these sections is the evolution of the yield curve. This dissertation studies aspects of this evolution. There are two complementary approaches to the study of yield curve evolution here: the first is principal components analysis; the second is wavelet analysis. In both approaches, both the time and maturity variables are discretized. In principal components analysis, the vectors of yield curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized, and the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution. In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market. Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary. Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact, the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and the specially adapted wavelet construction. The result is more robust statistics, which provide balance to the more fragile principal components analysis.
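A minimal sketch of the principal components step described above: eigendecompose the covariance matrix of discretized yield-curve shifts and read off how much variation the top components capture. The data here are synthetic, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
shifts = rng.normal(size=(500, 10))   # 500 days x 10 maturities (toy data)

cov = np.cov(shifts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # returned in ascending order
order = np.argsort(eigvals)[::-1]        # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of total interest-rate variation captured by the top k components
explained = np.cumsum(eigvals) / eigvals.sum()
print("top 6 components explain:", explained[5])
```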

Relevance:

80.00%

Abstract:

This dissertation studied the determinants and consequences of corporate reputation. It explored how firm-, industry-, and country-level factors influenced the general public's assessment of a firm's reputation and how this reputation assessment impacted the firm's strategic actions and organizational outcomes. The three empirical essays are grounded in separate theoretical paradigms in strategy, organizational theory, and corporate governance. The first essay used signaling theory to investigate firm-, industry-, and country-level determinants of individual-level corporate reputation assessments. Using a hierarchical linear model, it tested the theory on individual evaluations of the largest companies across countries. Results indicated that variables at multiple analysis levels simultaneously impact individual-level reputation assessments. Interactions were also found between industry- and country-level factors. Results confirmed the multi-level nature of signaling influences on reputation assessments. Building on a stakeholder-power approach to corporate governance, the second essay studied how differences in the power and preferences of three stakeholder groups (shareholders, creditors, and workers) across countries influence the general public's reputation assessments of corporations. Examining the largest companies across countries, the study found that while the influence of stock market return is stronger in societies where shareholders have more power, social performance has a more significant role in shaping reputation evaluations in societies with stronger labor rights. Unexpectedly, when creditors have greater power, the influence of financial stability on reputation assessment becomes weaker. Exploring the consequences of reputation, the third essay investigated the specific effects of intangible assets on strategic actions and organizational outcomes. In particular, it separately studied the impacts of acquirer acquisition experience, corporate reputation, and approach toward social responsibilities, as well as their combined effect, on market reactions to acquisition announcements. Using an event study of acquisition announcements, it confirmed the significant impacts of both action-specific (acquisition experience) and general (reputation and social performance) intangible assets on market expectations of acquisition outcomes. Moreover, the analysis demonstrated that reputation magnifies the impact of acquisition experience on market response to acquisition announcements. In conclusion, this dissertation sought to advance and extend the application of management and organizational theories by explaining the mechanisms underlying the antecedents and consequences of corporate reputation.
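A minimal sketch of the kind of hierarchical (mixed-effects) linear model the first essay describes: individual reputation ratings nested within countries, with a random country intercept. Variable names and data are hypothetical, not the dissertation's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "country": rng.choice(["US", "DE", "JP", "BR"], size=n),
    "firm_performance": rng.normal(size=n),
    "industry_visibility": rng.normal(size=n),
})
df["reputation"] = (0.5 * df["firm_performance"]
                    + 0.3 * df["industry_visibility"]
                    + rng.normal(scale=0.5, size=n))

# Random intercept per country; firm- and industry-level fixed effects
model = smf.mixedlm("reputation ~ firm_performance + industry_visibility",
                    data=df, groups=df["country"]).fit()
print(model.summary())
```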

Relevance:

80.00%

Abstract:

Chronic disease affects 80% of adults over the age of 65 and is expected to increase in prevalence. To address the burden of chronic disease, self-management programs have been developed to increase self-efficacy and improve quality of life by reducing or halting disease symptoms. Two programs that have been developed to address chronic disease are the Chronic Disease Self-Management Program (CDSMP) and Tomando Control de su Salud (TCDS). CDSMP and TCDS both focus on improving participant self-efficacy but use different curricula, as TCDS is culturally tailored for the Hispanic population. Few studies have evaluated the effectiveness of CDSMP and TCDS when translated to community settings. In addition, little is known about the correlation between demographic, baseline health status, and psychosocial factors and completion of either CDSMP or TCDS. This study used secondary data collected by agencies of the Healthy Aging Regional Collaborative from 10/01/2008 to 12/31/2010. The aims of this study were to examine six-week differences in self-efficacy, time spent performing physical activity, and social/role activity limitations, and to identify correlates of program completion using baseline demographic and psychosocial factors. To examine whether differences existed, a general linear model was used. Additionally, logistic regression was used to examine correlates of program completion. Study findings show that all measures improved at week six. For CDSMP, self-efficacy to manage disease (p = .001), self-efficacy to manage emotions (p = .026), social/role activity limitations (p = .001), and time spent walking (p = .008) were statistically significant. For TCDS, self-efficacy to manage disease (p = .006), social/role activity limitations (p = .001), and time spent walking (p = .016) and performing other aerobic activity (p = .005) were significant. For CDSMP, no correlates predicting program completion were found to be significant. For TCDS, participants who were male (OR=2.3, 95%CI: 1.15–4.66), from Broward County (OR=2.3, 95%CI: 1.27–4.25), or living alone (OR=2.0, 95%CI: 1.29–3.08) were more likely to complete the program. CDSMP and TCDS, when implemented through a collaborative effort, can result in improvements for participants. Effective chronic disease management can improve health and quality of life and reduce health care expenditures among older adults.
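A minimal sketch of the completion analysis named above: logistic regression of program completion on baseline factors, reported as odds ratios with confidence intervals. Column names and data are hypothetical, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "male": rng.integers(0, 2, size=n),
    "broward": rng.integers(0, 2, size=n),
    "lives_alone": rng.integers(0, 2, size=n),
})
# Toy completion outcome generated from the toy predictors
logits = -0.5 + 0.8 * df["male"] + 0.8 * df["broward"] + 0.7 * df["lives_alone"]
df["completed"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

fit = smf.logit("completed ~ male + broward + lives_alone", data=df).fit()
# Exponentiate coefficients and their 95% CIs to get odds ratios
print(np.exp(pd.concat([fit.params, fit.conf_int()], axis=1)))
```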

Relevance:

80.00%

Abstract:

The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas. Minority populations experience the effects of lead poisoning disproportionately. Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment. Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use. Statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation. A second, unaged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant. Graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A. The application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years. This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
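A generic accelerated-life sketch of the kind of quantity reported above: fit a parametric failure-time distribution and read off the 80%-reliable life (the time by which only 20% of units have failed). A Weibull fit is used here as a common stand-in; it is not the authors' Cox log-linear model, and the failure times are toy values:

```python
import numpy as np
from scipy.stats import weibull_min

# Toy extrapolated times to failure under normal use, in years
failures = np.array([18.2, 22.5, 25.1, 20.3, 27.8, 23.4, 19.9, 26.0])
shape, loc, scale = weibull_min.fit(failures, floc=0.0)

# 80% reliability <=> 20th percentile of the failure-time distribution
t_80 = weibull_min.ppf(0.20, shape, loc=loc, scale=scale)
print(f"80%-reliable life: {t_80:.1f} years")
```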

Relevance:

80.00%

Abstract:

Prior to 2000, there were fewer than 1.6 million students enrolled in at least one online course. By fall 2010, student enrollment in online distance education showed a phenomenal 283% increase to 6.1 million. Two years later, this number had grown to 7.1 million. In light of this significant growth and skepticism about quality, there have been calls for greater oversight of this format of educational delivery. Accrediting bodies tasked with this oversight have developed guidelines and standards for online education. There is a lack of empirical studies that examine the relationship between accrediting standards and student success. The purpose of this study was to examine the relationship between the presence of Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) standards for online education in online courses, (a) student support services and (b) curriculum and instruction, and student success. An original 24-item survey with an overall reliability coefficient of .94 was administered to students (N = 464) at Florida International University, enrolled in 24 university-wide undergraduate online courses during fall 2014, who rated the presence of these standards in their online courses. The general linear model was utilized to analyze the data. The results of the study indicated that the two standards, student support services and curriculum and instruction, were both significantly and positively correlated with student success, but with small R² values (strengths of association below .35 and .20, respectively). Mixed results were produced from chi-square tests for differences in student success between higher- and lower-rated online courses when controlling for various covariates such as discipline, gender, race/ethnicity, GPA, age, and number of online courses previously taken. A multiple linear regression analysis revealed that the curriculum and instruction standard was the only variable that accounted for a significant amount of unique variance in student success. Another regression test revealed that no significant interaction effect exists between the two SACSCOC standards and GPA in predicting student success. The results of this study are useful for administrators, faculty, and researchers who are interested in accreditation standards for online education and how these standards relate to student success.
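A minimal sketch of the final regression step described above: testing whether the two standards interact with GPA in predicting student success. Column names and data are hypothetical, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 464
df = pd.DataFrame({
    "support_services": rng.uniform(1, 5, size=n),        # toy survey rating
    "curriculum_instruction": rng.uniform(1, 5, size=n),  # toy survey rating
    "gpa": rng.uniform(2.0, 4.0, size=n),
})
df["success"] = (0.3 * df["curriculum_instruction"]
                 + 0.1 * df["support_services"]
                 + rng.normal(scale=0.8, size=n))

# Main effects plus standard-by-GPA interaction terms
fit = smf.ols("success ~ support_services * gpa"
              " + curriculum_instruction * gpa", data=df).fit()
print(fit.summary())
```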

Relevance:

80.00%

Abstract:

Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, as well as the interference among competing workloads, makes it difficult to understand the VMs' resource demands for meeting their Quality of Service (QoS) targets. Second, the dynamics of the applications and the system also make it difficult to maintain the desired QoS target as the environment changes. Third, the transparency of virtualization presents a hurdle for the guest-layer application and the host-layer VM scheduler to cooperate to improve application QoS and system efficiency. This dissertation proposes to address the above challenges through fuzzy modeling and control theory based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM's complex demands for multiple types of resources, automatically and online, based on the observed workload and resource usage. Second, to enable fast adaptation in resource management, the fuzzy modeling approach is integrated with a predictive-control-based controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach, which can quickly track the applications' QoS targets and optimize the resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM's host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped and evaluated on a Xen-based virtualized system with representative benchmarks including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy of resource prediction by up to 31.4% compared with conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system, and it is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when the resources are contended by dynamic workloads.
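A minimal fuzzy-inference sketch in the spirit of the fuzzy-logic-based demand model described above: map observed CPU utilization to an allocation adjustment via triangular membership functions and a weighted-average (Sugeno-style) defuzzification. All membership functions, rules, and outputs here are illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def cpu_adjustment(util):
    """Fraction by which to shrink or grow the VM's CPU allocation."""
    weights = np.array([
        tri(util, -0.4, 0.0, 0.5),   # "low" utilization
        tri(util, 0.2, 0.5, 0.8),    # "medium" utilization
        tri(util, 0.5, 1.0, 1.4),    # "high" utilization
    ])
    outputs = np.array([-0.10, 0.0, 0.20])   # rule consequents (toy values)
    return float(weights @ outputs / weights.sum())

print(cpu_adjustment(0.85))   # high utilization -> grow allocation by ~20%
```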

Relevance:

80.00%

Abstract:

CHAPTER 1 - This study histologically evaluated two implant designs, a classic thread design versus one specifically designed for healing chamber formation, placed with two drilling protocols. Forty dental implants (4.1 mm diameter) with two different macrogeometries were inserted in the tibia of 10 Beagle dogs, and maximum insertion torque was recorded. The drilling techniques were: up to 3.75 mm diameter (regular group) and up to 4.0 mm diameter (overdrilling group) for both implant designs. At 2 and 4 weeks, samples were retrieved and processed for histomorphometric analysis. For torque, BIC (bone-to-implant contact), and BAFO (bone area fraction occupied), a general linear model was employed with instrumentation technique and time in vivo as independent variables. The insertion torque significantly decreased as a function of increasing drilling diameter for both implant designs (p<0.001). No significant differences were detected between implant designs for each drilling technique (p>0.18). A significant increase in BIC from 2 to 4 weeks was observed only for implants placed with the overdrilling technique (p<0.03), not for those placed in the 3.75 mm drilling sites (p>0.32). Despite the differences between implant designs and drilling techniques, an intramembranous-like healing mode with newly formed woven bone prevailed. CHAPTER 2 - The objective of this preliminary histologic study was to determine whether different drilling protocols (oversized, intermediate, undersized) produce different biologic responses at the early healing period of 2 weeks in vivo in a beagle dog model. Ten beagle dogs were acquired and subjected to surgeries in the tibia 2 weeks before euthanasia. During surgery, 3 implants, 4 mm in diameter by 10 mm in length, were placed in bone sites drilled to 3.5 mm, 3.75 mm, and 4.0 mm in final diameter. The insertion and removal torques were recorded for all samples. Statistical significance was set at the 95% level of confidence, and the number of dogs was considered the statistical unit for all comparisons. For torque, BIC, and BAFO, a general linear model was employed with instrumentation technique and time in vivo as independent variables. Overall, the insertion torque increased as a function of decreasing drilling diameter from 4.0 mm, to 3.75 mm, to 3.5 mm, with a significant difference in torque levels between all groups (p<0.001). Statistical assessment of BIC and BAFO showed significantly higher values for the 3.75 mm (recommended) drilling group relative to the other two groups (p<0.001). Different drilling dimensions resulted in variations in insertion torque values (primary stability), and a different pattern of healing and interfacial remodeling was observed for the different groups. CHAPTER 3 - The present study evaluated the effect of different drilling dimensions (undersized, regular, and oversized) on the insertion and removal torques of dental implants in a beagle dog model. Six beagle dogs were acquired and subjected to bilateral surgeries in the radii 1 and 3 weeks before euthanasia. During surgery, 3 implants, 4 mm in diameter by 10 mm in length, were placed in bone sites drilled to 3.2 mm, 3.5 mm, and 3.8 mm in final diameter. The insertion and removal torques were recorded for all samples. Statistical analysis was performed by paired t-tests for repeated measures and by t-tests assuming unequal variances (all at the 95% level of significance). Overall, the insertion and removal torque levels obtained were inversely proportional to the drilling dimension, with a significant difference detected between the 3.2 mm and 3.5 mm groups relative to the 3.8 mm group (P < 0.03). Although the paired insertion torque-removal torque relationship was statistically maintained for the 3.5 mm and 3.8 mm groups, a significant decrease in removal torque relative to insertion torque was observed for the 3.2 mm group. A different pattern of healing and interfacial remodeling was observed for the different groups. Different drilling dimensions resulted in variations in insertion torque values (primary stability) and in stability maintenance over the first weeks of bone healing.
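A minimal sketch of the general linear model used across the chapters: a histomorphometric outcome (BIC) as a function of drilling technique and time in vivo, with their interaction. Column names and data are hypothetical, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 40
df = pd.DataFrame({
    "technique": rng.choice(["regular", "overdrilling"], size=n),
    "weeks": rng.choice([2, 4], size=n),
})
# Toy BIC percentages with a small time-in-vivo effect
df["bic"] = 30 + 5 * (df["weeks"] == 4) + rng.normal(scale=4, size=n)

fit = smf.ols("bic ~ C(technique) * C(weeks)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```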

Relevance:

80.00%

Abstract:

The pupil light reflex can be used as a non-invasive ocular predictor of cephalic autonomic nervous system integrity. The spectral sensitivity of the pupil's response to light has, for some time, been an interesting issue. It has generally, however, only been investigated with white light, and studies with monochromatic wavelengths are scarce. This study investigates the effects of wavelength and age on three parameters of the pupil light reflex (amplitude of response, latency, and velocity of constriction) in a large sample of younger and older adults (N = 97), in mesopic conditions. Subjects were exposed to a single light stimulus at four different wavelengths: white (5600 K), blue (450 nm), green (510 nm), and red (600 nm). Data were analyzed, when applicable, using the general linear model (GLM), a randomized complete block design (RCBD), Student's t-test, and/or ANCOVA. Across all subjects, the pupillary response to light had the greatest amplitude and shortest latency in the white and green light conditions. With regard to age, older subjects (46-78 years) showed increased latency in white light and decreased velocity of constriction in green light compared with younger subjects (18-45 years). This study provides data on parameters of wavelength-dependent pupil reflexes to light in adults, and it contributes to the large body of pupillometric research. It is hoped that this study will add to the overall evaluation of cephalic autonomic nervous system integrity.
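A minimal sketch of an ANCOVA like the one named above (constriction latency as a function of stimulus wavelength, with age as a covariate), plus a Welch t-test between the two age groups. Column names and data are hypothetical, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
n = 97
df = pd.DataFrame({
    "wavelength": rng.choice(["white", "blue", "green", "red"], size=n),
    "age": rng.integers(18, 79, size=n),
})
df["latency_ms"] = 230 + 0.4 * df["age"] + rng.normal(scale=15, size=n)

# ANCOVA: wavelength effect on latency, adjusting for age
fit = smf.ols("latency_ms ~ C(wavelength) + age", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))

# Welch t-test: younger (18-45) vs. older (46-78) latencies
young = df.loc[df["age"] <= 45, "latency_ms"]
older = df.loc[df["age"] > 45, "latency_ms"]
print(ttest_ind(young, older, equal_var=False))
```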

Relevance:

80.00%

Abstract:

The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix-exponential transformation guarantees the positive-definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann's seminal work on the estimation of highly non-linear model specifications ("Causality tests and observationally equivalent representations of econometric models", Journal of Econometrics, 1988, 39(1-2), 69–104), especially in developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of the RMESV-ALM model, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with that of a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
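A quick numerical illustration of why the matrix-exponential transformation guarantees positive-definiteness: for any real symmetric A, expm(A) has eigenvalues exp(lambda_i) > 0 and is therefore symmetric positive definite. The matrix below is arbitrary, purely for demonstration:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
B = rng.normal(size=(3, 3))
A = (B + B.T) / 2.0            # arbitrary symmetric "log-covariance" matrix

cov = expm(A)                  # candidate dynamic covariance matrix
print(np.linalg.eigvalsh(cov)) # eigenvalues are all strictly positive
```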

Relevance:

80.00%

Abstract:

The authors would like to thank the College of Life Sciences of Aberdeen University and Marine Scotland Science, which funded CP's PhD project. Skate tagging experiments were undertaken as part of Scottish Government project SP004. We thank Ian Burrett for help in catching the fish, and the other fishermen and anglers who returned tags. We thank José Manuel Gonzalez-Irusta for extracting and making available the environmental layers used as environmental covariates in the environmental suitability modelling procedure. We also thank Jason Matthiopoulos for insightful suggestions on habitat utilization metrics, as well as Stephen C.F. Palmer and three anonymous reviewers for useful suggestions to improve the clarity and quality of the manuscript.