928 results for general regression model
Abstract:
Given the growing impact of human activities on the sea, managers are increasingly turning to marine protected areas (MPAs) to protect marine habitats and species. Many MPAs have been unsuccessful, however, and lack of income has been identified as a primary reason for failure. In this study, data from a global survey of 79 MPAs in 36 countries were analysed and attempts made to construct predictive models to determine the income requirements of any given MPA. Statistical tests were used to uncover possible patterns and relationships in the data, with two basic approaches. In the first of these, an attempt was made to build an explanatory "bottom-up" model of the cost structures that might be required to pursue various management activities. This proved difficult in practice owing to the very broad range of applicable data, spanning many orders of magnitude. In the second approach, a "top-down" regression model was constructed using logarithms of the base data, in order to address the breadth of the data ranges. This approach suggested that MPA size and visitor numbers together explained 46% of the minimum income requirements (P < 0.001), with area being the slightly more influential factor. The significance of area to income requirements was of little surprise, given its profile in the literature. However, the relationship between visitors and income requirements might go some way to explaining why northern hemisphere MPAs with apparently high incomes still claim to be under-funded. The relationship between running costs and visitor numbers has important implications not only in determining a realistic level of funding for MPAs, but also in assessing from where funding might be obtained. Since a substantial proportion of the income of many MPAs appears to be utilized for amenity purposes, a case may be made for funds to be provided from the typically better resourced government social and educational budgets as well as environmental budgets. Similarly visitor fees, already an important source of funding for some MPAs, might have a broader role to play in how MPAs are financed in the future. (C) 2007 Elsevier Ltd. All rights reserved.
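A minimal sketch of the kind of "top-down" log-log regression described above, fitting the logarithm of minimum income requirements on the logarithms of MPA area and visitor numbers by ordinary least squares. The column names and figures below are hypothetical placeholders, not the survey data.

```python
# Sketch of a log-log "top-down" regression of minimum income requirements on
# MPA area and visitor numbers. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "area_km2":   [12.0, 350.0, 4.5, 2100.0, 80.0, 640.0],
    "visitors":   [1500, 42000, 300, 250000, 9000, 60000],
    "min_income": [2.0e4, 4.1e5, 8.0e3, 2.3e6, 1.1e5, 7.5e5],   # e.g. USD per year
})

# Logarithms cope with data spanning many orders of magnitude.
X = sm.add_constant(np.log10(df[["area_km2", "visitors"]]))
y = np.log10(df["min_income"])

fit = sm.OLS(y, X).fit()
print(fit.summary())   # coefficients, R-squared (cf. the 46% reported), p-values
```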
Abstract:
Fixed transactions costs that prohibit exchange engender bias in supply analysis due to censoring of the sample observations. The associated bias in conventional regression procedures applied to censored data and the construction of robust methods for mitigating bias have been preoccupations of applied economists since Tobin [Econometrica 26 (1958) 24]. This literature assumes that the true point of censoring in the data is zero; when this is not the case, a bias is imparted to the parameter estimates of the censored regression model. We conjecture that this bias can be significant; affirm this through experiments; and suggest techniques for mitigating it using Bayesian procedures. The bias-mitigating procedures are based on modifications of the key step that facilitates Bayesian estimation of the censored regression model; are easy to implement; work well in both small and large samples; and lead to significantly improved inference in the censored regression model. These findings are important in light of the widespread use of the zero-censored Tobit regression, and we investigate their consequences using data on milk-market participation in the Ethiopian highlands. (C) 2004 Elsevier B.V. All rights reserved.
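The "key step" referred to above is data augmentation: during Bayesian estimation of a censored regression, latent outcomes for the censored observations are drawn from a truncated normal distribution. The sketch below illustrates that step in a plain Gibbs sampler for a Tobit-type model with a known, non-zero censoring point and simulated data; the paper's bias-mitigating modifications are not reproduced here.

```python
# Minimal Gibbs sampler for a censored (Tobit-type) regression, showing the
# data-augmentation step: latent outcomes for censored cases are drawn from a
# truncated normal. The censoring point c is known and non-zero here; sigma is
# fixed and the prior on beta is flat, purely for brevity.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulate data censored below at c (not zero).
n, c = 500, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma = np.array([0.5, 1.2]), 0.8
y_star = X @ beta_true + rng.normal(scale=sigma, size=n)
y = np.maximum(y_star, c)          # observed outcome
cens = y_star <= c                 # censoring indicator

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(2)
draws = []
for it in range(2000):
    # 1) Data augmentation: impute latent y* for censored observations.
    mu = X @ beta
    z = y.copy()
    upper = (c - mu[cens]) / sigma           # latent value lies below c
    z[cens] = truncnorm.rvs(-np.inf, upper, loc=mu[cens], scale=sigma,
                            random_state=rng)
    # 2) Draw beta from its conditional normal given the augmented data.
    beta_hat = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(beta_hat, sigma**2 * XtX_inv)
    draws.append(beta)

print("posterior mean of beta:", np.mean(draws[500:], axis=0))
```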
Abstract:
A 2-year longitudinal survey was carried out to investigate factors affecting milk yield in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of milk yield, body condition score (BCS) and heart girth measurements were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in milk yield were investigated using a general linear model by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. Daily milk yield (including milk sucked by calves) was determined by calving year (p < 0.0001), calf rearing method (p = 0.044) and BCS at calving (p < 0.0001). Only BCS at calving contributed to variation in volume of milk sucked by the calf, lactation length and lactation milk yield. BCS at 3 months after calving was improved on farms where labour was hired (p = 0.041) and BCS change from calving to 6 months was more than twice as likely to be negative on U than SU and PU farms. It was concluded that milk production was predominantly associated with BCS at calving, lactation milk yield increasing quadratically from score 1 to 3. BCS at calving may provide a simple, single indicator of the nutritional status of a cow population.
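A sketch of the stepwise idea described above, here reduced to forward selection scored by AIC for a general linear model of daily milk yield. Variable names such as `calving_year` and `bcs_calving` are hypothetical placeholders for the survey data.

```python
# Greedy forward selection by AIC for a general linear model (OLS).
# The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def forward_select(df, response, candidates):
    """Add, at each step, the candidate term that most reduces AIC."""
    selected, current_aic = [], smf.ols(f"{response} ~ 1", data=df).fit().aic
    candidates = list(candidates)
    while candidates:
        scores = [(smf.ols(f"{response} ~ " + " + ".join(selected + [t]),
                           data=df).fit().aic, t) for t in candidates]
        best_aic, best_term = min(scores)
        if best_aic >= current_aic:
            break                                  # no further improvement
        selected.append(best_term)
        candidates.remove(best_term)
        current_aic = best_aic
    return selected

# df = pd.read_csv("smallholder_visits.csv")       # hypothetical survey file
# terms = ["C(calving_year)", "C(rearing)", "bcs_calving", "C(location)",
#          "C(season)", "C(labour)", "parity"]
# print(forward_select(df, "daily_milk_yield", terms))
```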
Abstract:
A 2-year longitudinal survey was carried out to investigate factors affecting reproduction in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of reproductive traits and body condition score (BCS) were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in reproductive traits were investigated using a general linear model (GLM) by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), type of insemination, calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. The effects of the independent variables identified were then investigated using a non-parametric survival technique. The number of days to first oestrus was increased on the U site (p = 0.045) and when family labour was used (p = 0.02). The non-parametric test confirmed the effect of site (p = 0.059), but the effect of labour was not significant. The number of days from calving to conception was reduced by hiring labour (p = 0.003) and using natural service (p = 0.028). The non-parametric test confirmed the effects of type of insemination (p = 0.0001) while also identifying extended calving intervals on U and SU sites (p = 0.014). Labour source was again non-significant. Calving interval was prolonged on U and SU sites (p = 0.021), by the use of AI (p = 0.031) and by the use of family labour (p = 0.001). The non-parametric test confirmed the effect of site (p = 0.008) and insemination type (p < 0.0001) but not of labour source. It was concluded that under favourable conditions (PU site, hired labour and natural service) calving intervals of around 440 days could be achieved.
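The non-parametric survival technique is not named above; a Kaplan-Meier product-limit estimate is the usual basis for comparing, for example, days from calving to conception between sites, with a log-rank test providing p-values of the kind quoted. A hand-rolled sketch with made-up durations:

```python
# Kaplan-Meier product-limit estimator; events = 1 if the event (e.g. conception)
# was observed, 0 if the record was censored. Durations below are invented.
import numpy as np

def kaplan_meier(times, events):
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

pu = kaplan_meier([380, 395, 410, 455, 500], [1, 1, 1, 1, 0])   # hypothetical PU site
u  = kaplan_meier([430, 470, 505, 520, 560], [1, 1, 1, 0, 1])   # hypothetical U site
print("PU:", pu)
print("U: ", u)
```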
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on binary measures of undesirable events and continuous measures of therapeutic benefit. The methods generalize earlier approaches where undesirable events and therapeutic benefit are both binary. A logistic regression model is used to model the binary responses, while a linear regression model is used to model the continuous responses. Prior distributions for the unknown model parameters are suggested. A gain function is discussed and an optional safety constraint is included. Copyright (C) 2006 John Wiley & Sons, Ltd.
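A grid-approximation sketch of the binary-toxicity half of such a model, with the probability of an undesirable event logistic in log dose and a simple safety constraint of the kind mentioned above. The priors, doses and counts are illustrative assumptions; the linear model for continuous therapeutic benefit and the gain function are not reproduced.

```python
# Grid posterior for a logistic dose-toxicity model:
#   P(toxicity | dose) = logistic(a + b * log(dose)).
# Doses, counts, grids and priors below are purely illustrative.
import numpy as np
from scipy.special import expit

doses   = np.array([1.0, 2.0, 4.0])       # doses administered so far
n_tox   = np.array([0,   1,   1])         # undesirable events observed per dose
n_given = np.array([3,   3,   3])         # patients treated per dose

a_grid = np.linspace(-6.0, 2.0, 201)
b_grid = np.linspace(0.05, 4.0, 200)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

# Independent normal priors (weakly informative, for illustration only).
log_prior = -0.5 * (A / 3.0) ** 2 - 0.5 * ((B - 1.0) / 1.0) ** 2

# Binomial log-likelihood accumulated over the dose levels.
log_lik = np.zeros_like(A)
for d, k, n in zip(doses, n_tox, n_given):
    p = expit(A + B * np.log(d))
    log_lik += k * np.log(p) + (n - k) * np.log1p(-p)

post = np.exp(log_prior + log_lik)
post /= post.sum()

# Optional safety constraint: only escalate if toxicity at the next dose is
# unlikely to exceed 30%.
p_next = expit(A + B * np.log(8.0))
print("P(toxicity rate < 0.30 at dose 8):", post[p_next < 0.30].sum())
```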
Abstract:
Objectives: To assess the potential source of variation that surgeons may add to patient outcomes in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials, involving 43 surgeons, were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, giving a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS®. Results: There were many convergence problems. These were resolved using a variety of approaches, including treating all effects as fixed during initial model building, modelling the variance of a parameter on a logarithmic scale, and centring continuous covariates. The initial model building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that their precision may be questionable.
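A minimal sketch of the initial model-building step described above: centring a continuous covariate and fitting an ordinary logistic regression with operation type and surgeon treated as fixed effects. The data, variable names and effect sizes are simulated for illustration; the trial's mixed model, with surgeon as a random effect fitted by maximum likelihood in SAS, is not reproduced here.

```python
# Fixed-effects logistic regression with a centred continuous covariate,
# mimicking the initial model-building step. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1380
df = pd.DataFrame({
    "surgeon":   np.arange(n) % 43,                         # 43 surgeons
    "operation": rng.choice(["laparoscopic", "abdominal"], n),
    "age":       rng.normal(45, 8, n),
})
# Sparse binary outcome with a modest 'type of operation' effect.
lin = -1.6 + 0.4 * (df["operation"] == "abdominal")
df["major_complication"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

df["age_c"] = df["age"] - df["age"].mean()                  # centre the covariate
fit = smf.logit("major_complication ~ C(operation) + C(surgeon) + age_c",
                data=df).fit(method="bfgs", maxiter=500, disp=False)
print(fit.summary())
```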
Abstract:
Aim: To describe the geographical pattern of mean body size of the non-volant mammals of the Nearctic and Neotropics and evaluate the influence of five environmental variables that are likely to affect body size gradients. Location: The Western Hemisphere. Methods: We calculated mean body size (average log mass) values in 110 × 110 km cells covering the continental Nearctic and Neotropics. We also generated cell averages for mean annual temperature, range in elevation, their interaction, actual evapotranspiration, and the global vegetation index and its coefficient of variation. Associations between mean body size and environmental variables were tested with simple correlations and ordinary least squares multiple regression, complemented with spatial autocorrelation analyses and split-line regression. We evaluated the relative support for each multiple-regression model using AIC. Results: Mean body size increases to the north in the Nearctic and is negatively correlated with temperature. In contrast, across the Neotropics mammals are largest in the tropical and subtropical lowlands and smaller in the Andes, generating a positive correlation with temperature. Finally, body size and temperature are nonlinearly related in both regions, and split-line linear regression found temperature thresholds marking clear shifts in these relationships (Nearctic 10.9 °C; Neotropics 12.6 °C). The increase in body sizes with decreasing temperature is strongest in the northern Nearctic, whereas a decrease in body size in mountains dominates the body size gradients in the warmer parts of both regions. Main conclusions: We confirm previous work finding strong broad-scale Bergmann trends in cold macroclimates but not in warmer areas. For the latter regions (i.e. the southern Nearctic and the Neotropics), our analyses also suggest that both local and broad-scale patterns of mammal body size variation are influenced in part by the strong mesoscale climatic gradients existing in mountainous areas. A likely explanation is that reduced habitat sizes in mountains limit the presence of larger-sized mammals.
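A sketch of split-line regression as used above: candidate thresholds are scanned, separate lines are fitted below and above each, and the breakpoint that minimises the total squared error is retained. The data here are synthetic; the thresholds quoted above come from the actual analysis.

```python
# Split-line (two-segment) regression by brute-force breakpoint search.
import numpy as np

def split_line_fit(x, y, candidates):
    best_sse, best_t = np.inf, None
    for t in candidates:
        lo, hi = x <= t, x > t
        if lo.sum() < 3 or hi.sum() < 3:            # need points on both sides
            continue
        sse = 0.0
        for m in (lo, hi):
            coef = np.polyfit(x[m], y[m], 1)
            sse += np.sum((y[m] - np.polyval(coef, x[m])) ** 2)
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_t

rng = np.random.default_rng(0)
temp = rng.uniform(-15, 28, 300)                    # mean annual temperature
size = np.where(temp < 11, 3.5 - 0.05 * temp, 2.95) + rng.normal(0, 0.1, 300)
print("estimated threshold:", split_line_fit(temp, size, np.linspace(-10, 25, 71)))
```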
Abstract:
Background: Robot-mediated therapies offer entirely new approaches to neurorehabilitation. In this paper, we present the results obtained from trialling the GENTLE/S neurorehabilitation system, assessed using the upper limb section of the Fugl-Meyer (FM) outcome measure. Methods: We describe the design of our clinical trial and its results, analysed using a novel statistical approach based on a multivariate analytical model. This paper provides the rationale for using multivariate models in robot-mediated clinical trials and draws conclusions from the clinical data gathered during the GENTLE/S study. Results: The FM outcome measures recorded during the baseline (8 sessions), robot-mediated therapy (9 sessions) and sling-suspension (9 sessions) phases were analysed using a multiple regression model. The results indicate positive but modest recovery trends favouring both interventions used in the GENTLE/S clinical trial. The modest recovery shown occurred at a time late after stroke when changes are not clinically anticipated. Conclusion: This study has applied a new method for analysing clinical data obtained from rehabilitation robotics studies. While the data obtained during the clinical trial are multivariate, multipoint and progressive in nature, the multiple regression model used showed great potential for drawing conclusions from this study. An important conclusion is that both the intervention and control phases produced changes over a period of 9 sessions in comparison to the baseline. This might indicate that the use of new, challenging and motivational therapies can influence the outcome of therapy at a point when clinical changes are not expected. Further work is required to investigate the effects arising from early intervention, longer exposure and intensity of the therapies. Finally, more function-oriented robot-mediated therapies or sling-suspension therapies are needed to clarify the effects resulting from each intervention for stroke recovery.
Abstract:
Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix that is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.
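A much-simplified sketch of selecting a parsimonious kernel model by forward selection under a leave-one-out (LOO) criterion computed from the hat matrix (the PRESS statistic). Unlike the proposed method, the Gaussian regressors here share a single fixed width, and no boosting-based covariance tuning, orthogonalisation or local regularisation is included.

```python
# Forward selection of Gaussian kernel regressors by leave-one-out MSE.
import numpy as np

def loo_mse(Phi, y):
    """LOO mean-square error of least squares on design matrix Phi (PRESS)."""
    H = Phi @ np.linalg.pinv(Phi.T @ Phi) @ Phi.T
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def forward_kernel_select(X, y, width=1.0, max_terms=20):
    # Candidate regressors: Gaussian kernels centred on the training points.
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * width**2))
    chosen, remaining, best = [], list(range(len(y))), np.inf
    while remaining and len(chosen) < max_terms:
        score, j = min((loo_mse(K[:, chosen + [j]], y), j) for j in remaining)
        if score >= best:
            break                                  # LOO error stops improving
        best = score
        chosen.append(j)
        remaining.remove(j)
    return chosen, best

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=80)
centres, err = forward_kernel_select(X, y)
print(f"selected {len(centres)} kernel regressors, LOO MSE = {err:.4f}")
```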
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
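A sketch of the two parametrized quantities, under stated assumptions: the overlap decorrelation scale varying linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles, and a crude two-region split of a layer's mean water content using the fractional standard deviation of 0.75. The simple mean × (1 ± FSD) split is an illustrative assumption, not the percentile-based split used in the actual Tripleclouds scheme.

```python
# Latitude-dependent overlap decorrelation scale and a crude two-region split
# of layer-mean water content (illustrative assumptions noted above).
import numpy as np

def decorrelation_scale_km(lat_deg):
    """Linear in |latitude|: 2.9 km at the Equator, 0.4 km at the poles."""
    return 2.9 - (2.9 - 0.4) * np.abs(lat_deg) / 90.0

def two_region_split(mean_wc, fsd=0.75):
    """Return (lower, upper) water contents for the two cloudy regions."""
    return mean_wc * (1.0 - fsd), mean_wc * (1.0 + fsd)

print(decorrelation_scale_km(np.array([0.0, 45.0, 90.0])))   # [2.9, 1.65, 0.4]
print(two_region_split(0.20))                                # e.g. g m-3: (0.05, 0.35)
```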
Abstract:
Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on the radiation budget. Horizontal inhomogeneity is accounted for using the “Tripleclouds” scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using “exponential-random” overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found to be in areas of marine stratocumulus; the effect of vertical overlap is found to be fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep, tropical convection. The combined effect of the two parameterisations is found to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m−2, with shifts of up to 10 W m−2 in areas of marine stratocumulus. The effects of the uncertainty in our parameterisations on the radiation budget are also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests an insensitivity of the radiation budget to the exact nature of the global decorrelation height distribution derived in Part One.
Abstract:
The winter climate of Europe and the Mediterranean is dominated by the weather systems of the mid-latitude storm tracks. The behaviour of the storm tracks is highly variable, particularly in the eastern North Atlantic, and has a profound impact on the hydroclimate of the Mediterranean region. A deeper understanding of the storm tracks and the factors that drive them is therefore crucial for interpreting past changes in Mediterranean climate and the civilizations it has supported over the last 12 000 years (broadly the Holocene period). This paper presents a discussion of how changes in climate forcing (e.g. orbital variations, greenhouse gases, ice sheet cover) may have impacted on the ‘basic ingredients’ controlling the mid-latitude storm tracks over the North Atlantic and the Mediterranean on intermillennial time scales. Idealized simulations using the HadAM3 atmospheric general circulation model (GCM) are used to explore the basic processes, while a series of timeslice simulations from a similar atmospheric GCM coupled to a thermodynamic slab ocean (HadSM3) are examined to identify the impact these drivers have on the storm track during the Holocene. The results suggest that the North Atlantic storm track has moved northward and strengthened with time since the Early to Mid-Holocene. In contrast, the Mediterranean storm track may have weakened over the same period. It is, however, emphasized that much remains still to be understood about the evolution of the North Atlantic and Mediterranean storm tracks during the Holocene period.
Abstract:
This paper describes the impact of changing the current imposed ozone climatology upon the tropical Quasi-Biennial Oscillation (QBO) in a high-top climate configuration of the Met Office U.K. general circulation model. The aim is to help distinguish between QBO changes in chemistry climate models that result from temperature-ozone feedbacks and those that might be forced by differences in climatology between previously fixed and newly interactive ozone distributions. Different representations of zonal mean ozone climatology under present-day conditions are taken to represent the level of change expected between acceptable model realizations of the global ozone distribution and thus indicate whether more detailed investigation of such climatology issues might be required when assessing ozone feedbacks. Tropical stratospheric ozone concentrations are enhanced relative to the control climatology between 20 and 30 km, reduced between 30 and 40 km and enhanced above, impacting the model profile of clear-sky radiative heating, in particular warming the tropical stratosphere between 15 and 35 km. The outcome is consistent with a localized equilibrium response in the tropical stratosphere that generates increased upwelling between 100 and 4 hPa, sufficient to account for a 12-month increase in the modeled mean QBO period. This response has implications for analysis of the tropical circulation in models with interactive ozone chemistry because it highlights the possibility that plausible changes in the ozone climatology could have a sizable impact upon the tropical upwelling and QBO period that ought to be distinguished from other dynamical responses such as ozone-temperature feedbacks.
Abstract:
Little has so far been reported on the performance of near-far resistant CDMA detectors in the presence of synchronization errors. Starting with the general mathematical model of matched filters, this paper examines the effects of three classes of synchronization errors (i.e. time-delay errors, carrier phase errors, and carrier frequency errors) on the performance (bit error rate and near-far resistance) of an emerging type of near-far resistant coherent DS/SSMA detector, namely the linear decorrelating detector (LDD). For comparison, the corresponding results for the conventional detector are also presented. It is shown that the LDD can still maintain a considerable performance advantage over the conventional detector even when some synchronization errors exist. Finally, several computer simulations are carried out to verify the theoretical conclusions.
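A toy numerical illustration of the two detectors being compared: in a synchronous two-user system, the conventional detector decides directly on the matched-filter outputs, while the linear decorrelating detector first applies the inverse of the code cross-correlation matrix. Perfect synchronization and illustrative signature codes are assumed; the paper's analysis of time-delay, carrier-phase and carrier-frequency errors is not reproduced.

```python
# Synchronous two-user CDMA: conventional matched-filter detector vs the
# linear decorrelating detector (LDD) in a near-far scenario.
import numpy as np

rng = np.random.default_rng(0)
N, n_bits = 16, 50_000                          # spreading gain, bits per user

c1 = np.ones(N)
c2 = np.ones(N); c2[:4] = -1.0                  # normalized cross-correlation 0.5
codes = np.vstack([c1, c2]) / np.sqrt(N)        # unit-energy signature sequences
R = codes @ codes.T                             # cross-correlation matrix

amps = np.array([1.0, 4.0])                     # user 2 is a strong interferer
bits = rng.choice([-1.0, 1.0], size=(2, n_bits))
noise = 0.4 * rng.normal(size=(N, n_bits))
received = codes.T @ (amps[:, None] * bits) + noise

y = codes @ received                            # matched-filter outputs (conventional)
z = np.linalg.solve(R, y)                       # LDD output: R^{-1} y

print("conventional BER, user 1:", np.mean(np.sign(y[0]) != bits[0]))
print("LDD BER, user 1:        ", np.mean(np.sign(z[0]) != bits[0]))
```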
Abstract:
Little has been reported on the performance of near-far resistant CDMA detectors in the presence of system parameter estimation errors (SPEEs). Starting with the general mathematical model of matched filters, the paper examines the effects of three classes of SPEEs, i.e., time-delay, carrier phase, and carrier frequency errors, on the performance (BER) of an emerging type of near-far resistant coherent DS/SSMA detector, i.e., the linear decorrelating detector. For comparison, the corresponding results for the conventional detector are also presented. It is shown that the linear decorrelating detector can still maintain a considerable performance advantage over the conventional detector even when some SPEEs exist.