48 results for PROPORTIONAL HAZARD AND ACCELERATED FAILURE MODELS
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Accelerated failure time models with a shared random component are described, and are used to evaluate the effect of explanatory factors and different transplant centres on survival times following kidney transplantation. Different combinations of the distribution of the random effects and baseline hazard function are considered and the fit of such models to the transplant data is critically assessed. A mixture model that combines short- and long-term components of a hazard function is then developed, which provides a more flexible model for the hazard function. The model can incorporate different explanatory variables and random effects in each component. The model is straightforward to fit using standard statistical software, and is shown to be a good fit to the transplant data. Copyright (C) 2004 John Wiley & Sons, Ltd.
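For readers who want to experiment with models of this kind, the following is a minimal sketch (not the authors' implementation) of one of the combinations mentioned above: a log-normal accelerated failure time model with a normally distributed shared random effect per transplant centre, fitted by maximum likelihood with Gauss-Hermite quadrature over the centre effects. The simulated data, covariates and quadrature settings are illustrative assumptions.

```python
# Minimal sketch: log-normal AFT model with a shared per-centre normal random
# effect, fitted by maximum likelihood using Gauss-Hermite quadrature over the
# random effect. Data, covariates and parameter names are illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import roots_hermite
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- simulate toy "transplant centre" data: 20 centres, 30 patients each ---
n_centres, n_per = 20, 30
centre = np.repeat(np.arange(n_centres), n_per)
X = rng.normal(size=(n_centres * n_per, 2))            # two explanatory variables
u_true = rng.normal(scale=0.4, size=n_centres)         # centre random effects
log_t = 2.0 + X @ np.array([0.5, -0.3]) + u_true[centre] + 0.8 * rng.normal(size=len(centre))
t_event = np.exp(log_t)
t_cens = rng.exponential(scale=np.exp(3.0), size=len(t_event))
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens)                            # True = failure observed

Xd = np.column_stack([np.ones(len(time)), X])          # design matrix with intercept
nodes, weights = roots_hermite(15)                     # Gauss-Hermite rule

def neg_loglik(params):
    beta, sigma, tau = params[:3], np.exp(params[3]), np.exp(params[4])
    lp = Xd @ beta
    total = 0.0
    for i in range(n_centres):
        idx = centre == i
        u = np.sqrt(2.0) * tau * nodes                 # quadrature points for the centre effect
        z = (np.log(time[idx])[:, None] - lp[idx][:, None] - u[None, :]) / sigma
        ll_obs = norm.logpdf(z) - np.log(sigma) - np.log(time[idx])[:, None]   # observed failures
        ll_cen = norm.logsf(z)                                                 # right-censored
        ll = np.where(event[idx][:, None], ll_obs, ll_cen).sum(axis=0)
        total += np.log(weights @ np.exp(ll) / np.sqrt(np.pi) + 1e-300)
    return -total

fit = minimize(neg_loglik, x0=np.zeros(5), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-4, "fatol": 1e-4})
print("beta:", np.round(fit.x[:3], 3),
      "sigma:", round(float(np.exp(fit.x[3])), 3),
      "tau:", round(float(np.exp(fit.x[4])), 3))
```

In practice the same marginal likelihood can be handed to purpose-built survival or mixed-model software; the point of the sketch is only to make the structure of a shared-frailty AFT likelihood explicit.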
Abstract:
We analyse by simulation the impact of model-selection strategies (sometimes called pre-testing) on forecast performance in both constant- and non-constant-parameter processes. Restricted, unrestricted and selected models are compared when either of the first two might generate the data. We find little evidence that strategies such as general-to-specific induce significant over-fitting, or thereby cause forecast-failure rejection rates to greatly exceed nominal sizes. Parameter non-constancies put a premium on correct specification, but in general, model-selection effects appear to be relatively small, and progressive research is able to detect the mis-specifications.
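A stylised version of this kind of pre-testing experiment can be reproduced in a few lines. The sketch below is an illustration, not the paper's design: restricted, unrestricted and t-test-selected ("general-to-specific" style) forecasting models are compared on one-step-ahead forecasts when the data are generated either with or without the extra regressor. The DGP, sample size and 5% selection rule are assumptions.

```python
# Monte Carlo comparison of restricted, unrestricted and selected models on
# one-step-ahead forecasts under two data-generating processes (illustrative).
import numpy as np

rng = np.random.default_rng(1)
T, n_rep = 100, 2000

def one_rep(beta_x):
    x = rng.normal(size=T + 1)
    y = np.zeros(T + 1)
    for t in range(1, T + 1):
        y[t] = 0.5 * y[t - 1] + beta_x * x[t] + rng.normal()
    # estimation sample: observations 1..T-1 are used to forecast y[T]
    Y = y[1:T]
    Z_unres = np.column_stack([y[0:T - 1], x[1:T]])    # lagged y and x
    Z_res = y[0:T - 1].reshape(-1, 1)                  # lagged y only
    b_u, *_ = np.linalg.lstsq(Z_unres, Y, rcond=None)
    b_r, *_ = np.linalg.lstsq(Z_res, Y, rcond=None)
    # t-statistic on the x coefficient drives the selection rule
    e_u = Y - Z_unres @ b_u
    s2 = e_u @ e_u / (len(Y) - 2)
    cov = s2 * np.linalg.inv(Z_unres.T @ Z_unres)
    keep_x = abs(b_u[1] / np.sqrt(cov[1, 1])) > 1.96   # "selected" model
    f_u = b_u @ np.array([y[T - 1], x[T]])
    f_r = b_r[0] * y[T - 1]
    f_s = f_u if keep_x else f_r
    return (y[T] - f_u) ** 2, (y[T] - f_r) ** 2, (y[T] - f_s) ** 2

for beta_x, label in [(0.0, "restricted DGP (beta_x = 0)"),
                      (0.5, "unrestricted DGP (beta_x = 0.5)")]:
    errs = np.array([one_rep(beta_x) for _ in range(n_rep)])
    rmse = np.sqrt(errs.mean(axis=0))
    print(label, "RMSE unrestricted/restricted/selected:", np.round(rmse, 3))
```

Comparing the three root-mean-square forecast errors across the two DGPs mimics, in miniature, the question the paper asks: how much forecast accuracy is lost or gained by selecting the model rather than imposing or freely estimating it.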
Abstract:
A university degree is effectively a prerequisite for entering the archaeological workforce in the UK. Archaeological employers consider that new entrants to the profession are insufficiently skilled, and hold university training to blame. University archaeology departments, however, do not consider it their responsibility to deliver fully formed archaeological professionals, but rather to provide an education that can then be applied in different workplaces, within and outside archaeology. The number of individuals studying archaeology at university exceeds the total number working in professional practice, with many more new graduates emerging each year than archaeological jobs advertised. Over-supply of practitioners is also a contributing factor to low pay in archaeology. Steps are being taken to provide opportunities for vocational training, both within and outside the university system, but archaeological training and education within the universities, and subsequently the archaeological labour market, may be adversely affected by the introduction of variable top-up student fees.
Abstract:
1. We compared the baseline phosphorus (P) concentrations inferred by diatom-P transfer functions and export coefficient models at 62 lakes in Great Britain to assess whether the techniques produce similar estimates of historical nutrient status.
2. There was a strong linear relationship between the two sets of values over the whole total P (TP) gradient (2-200 μg TP L-1). However, a systematic bias was observed, with the diatom model producing the higher values in 46 lakes (of which values differed by more than 10 μg TP L-1 in 21). The export coefficient model gave the higher values in 10 lakes (of which the values differed by more than 10 μg TP L-1 in only 4).
3. The difference between baseline and present-day TP concentrations was calculated to compare the extent of eutrophication inferred by the two sets of model output. There was generally poor agreement between the amounts of change estimated by the two approaches. The discrepancy in both the baseline values and the degree of change inferred by the models was greatest in the shallow and more productive sites.
4. Both approaches were applied to two lakes in the English Lake District where long-term P data exist, to assess how well the models track measured P concentrations since approximately 1850. There was good agreement between the pre-enrichment TP concentrations generated by the models. The diatom model paralleled the steeper rise in maximum soluble reactive P (SRP) more closely than the gradual increase in annual mean TP in both lakes. The export coefficient model produced a closer fit to observed annual mean TP concentrations for both sites, tracking the changes in total external nutrient loading.
5. A combined approach is recommended, with the diatom model employed to reflect the nature and timing of the in-lake response to changes in nutrient loading, and the export coefficient model used to establish the origins and extent of changes in the external load and to assess potential reductions in loading under different management scenarios.
6. However, caution must be exercised when applying these models to shallow lakes, where the export coefficient model TP estimate will not include internal P loading from lake sediments and where the diatom TP inferences may over-estimate TP concentrations because of the high abundance of benthic taxa, many of which are poor indicators of trophic state.
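As a rough illustration of the export coefficient side of such a comparison (not the models used in the study), the sketch below scales assumed land-use export coefficients by catchment areas to obtain an annual external P load, then converts that load to a mean in-lake TP concentration with a Vollenweider-type retention term. All coefficients, areas, lake dimensions and the comparison values are invented for illustration.

```python
# Illustrative export coefficient calculation: land-use export coefficients x
# catchment areas -> annual external P load -> predicted mean in-lake TP via a
# Vollenweider-type retention term. All numbers are made-up example values.
import numpy as np

# assumed export coefficients (kg P ha-1 yr-1) and catchment areas (ha)
export_kg_per_ha = {"arable": 0.65, "improved_grassland": 0.30,
                    "rough_grazing": 0.02, "woodland": 0.02, "urban": 0.83}
area_ha = {"arable": 400.0, "improved_grassland": 900.0,
           "rough_grazing": 1500.0, "woodland": 300.0, "urban": 50.0}
people, per_capita_kg = 600, 0.4                 # illustrative sewage contribution

load_kg = sum(export_kg_per_ha[k] * area_ha[k] for k in area_ha) \
          + people * per_capita_kg               # total external load, kg P yr-1

# illustrative lake: area 2 km2, mean depth 5 m, water residence time 0.8 yr
lake_area_m2, mean_depth_m, tau_yr = 2.0e6, 5.0, 0.8
areal_load = load_kg * 1e6 / lake_area_m2        # mg P m-2 yr-1
qs = mean_depth_m / tau_yr                       # areal hydraulic load, m yr-1

tp_lake = areal_load / (qs * (1.0 + np.sqrt(tau_yr)))   # mg m-3, i.e. ug TP L-1
print(f"external load: {load_kg:.0f} kg P yr-1, predicted lake TP: {tp_lake:.1f} ug L-1")

# comparison with a diatom-inferred baseline, mirroring the study design
diatom_baseline, present_day = 38.0, 65.0        # ug TP L-1, illustrative values
print("inferred enrichment (export model vs diatom model):",
      round(present_day - tp_lake, 1), "vs", round(present_day - diatom_baseline, 1))
```

The retention formula is one common Vollenweider-type choice; the point is simply that the export coefficient route works from catchment sources outward, whereas the diatom transfer function works from the in-lake biological response inward, which is why the two baselines can diverge.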
Abstract:
The ground surface net solar radiation is the energy that drives physical and chemical processes at the ground surface. In this paper, multi-spectral data from the Landsat-5 TM, topographic data from a gridded digital elevation model, field measurements, and the atmosphere model LOWTRAN 7 are used to estimate surface net solar radiation over the FIFE site. First, an improved method is presented and used for calculating total surface incoming radiation. Surface albedo is then integrated from surface reflectance factors derived from the remotely sensed Landsat-5 TM data. Finally, surface net solar radiation is calculated by subtracting surface upwelling radiation from the total surface incoming radiation.
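The final step of that calculation reduces to subtracting the reflected (upwelling) shortwave, i.e. albedo times the incoming flux, from the total incoming flux. The sketch below illustrates this per-pixel arithmetic on toy arrays; the direct/diffuse split, cosine-of-incidence slope term and sky-view factor are simplifying assumptions standing in for the paper's improved incoming-radiation method and LOWTRAN 7 atmospheric modelling.

```python
# Sketch of the final step: per-pixel net solar radiation as incoming shortwave
# minus reflected (upwelling) shortwave. The inputs and geometry terms are
# simplified placeholders, not the paper's full scheme.
import numpy as np

def net_solar_radiation(dni_wm2, diffuse_wm2, albedo, cos_incidence, sky_view=1.0):
    """All inputs are 2-D grids (W m-2, except albedo and geometry terms)."""
    incoming = dni_wm2 * np.clip(cos_incidence, 0.0, None) + diffuse_wm2 * sky_view
    upwelling = albedo * incoming                  # reflected shortwave
    return incoming - upwelling                    # = (1 - albedo) * incoming

# toy 3x3 example: direct-normal and diffuse fluxes from an atmospheric model,
# albedo from TM-derived reflectances, incidence angles from a DEM slope/aspect grid
dni = np.full((3, 3), 850.0)
diffuse = np.full((3, 3), 120.0)
albedo = np.array([[0.18, 0.20, 0.22]] * 3)
cos_i = np.array([[1.0, 0.9, 0.7]] * 3)
print(net_solar_radiation(dni, diffuse, albedo, cos_i))
```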
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, including conventional buildings and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstract levels of these models, and their links to the literature for intelligent buildings. The advantages and limitations of the applied mathematical models are identified and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings and led to the use of models which offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models offer a promising means of accommodating these complications, as intelligent buildings now need integrated technologies which involve solving complex, multi-objective and integrated decision problems.
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their applications to conventional buildings and intelligent buildings. It concluded that mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited for modelling intelligent buildings. Despite this progress, the likely future development of intelligent buildings, extrapolated from current trends, exposes some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by working through an example of an intelligent building system together with the mathematical models developed for it, this review illustrates the influence of mathematical models as a potential aid in developing intelligent buildings, and perhaps even more advanced buildings, in the future.
Abstract:
This work analyzes the use of linear discriminant models, multi-layer perceptron neural networks and wavelet networks for corporate financial distress prediction. Although simple and easy to interpret, linear models require statistical assumptions that may be unrealistic. Neural networks are able to discriminate patterns that are not linearly separable, but the large number of parameters involved in a neural model often causes generalization problems. Wavelet networks are classification models that implement nonlinear discriminant surfaces as the superposition of dilated and translated versions of a single "mother wavelet" function. In this paper, an algorithm is proposed to select dilation and translation parameters that yield a wavelet network classifier with good parsimony characteristics. The models are compared in a case study involving failed and continuing British firms in the period 1997-2000. Problems associated with over-parameterized neural networks are illustrated and the Optimal Brain Damage pruning technique is employed to obtain a parsimonious neural model. The results, supported by a re-sampling study, show that both neural and wavelet networks may be valid alternatives to classical linear discriminant models.
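To make the wavelet-network idea concrete, the sketch below implements a simple radial variant: hidden units are dilated and translated copies of a single Mexican-hat mother wavelet, with output weights obtained by ridge-regularised least squares. The centre and dilation choices here are crude heuristics, not the parsimony-oriented selection algorithm proposed in the paper, and the data are synthetic.

```python
# Hedged sketch of a wavelet-network classifier: hidden units are dilated and
# translated copies of one "mother wavelet" (a radial Mexican hat), and the
# output weights are fitted by ridge-regularised least squares.
import numpy as np

def mexican_hat(r):
    return (1.0 - r ** 2) * np.exp(-0.5 * r ** 2)   # mother wavelet psi(r)

class WaveletNet:
    def __init__(self, n_units=10, ridge=1e-2, seed=0):
        self.n_units, self.ridge, self.seed = n_units, ridge, seed

    def _hidden(self, X):
        # distance of each sample to each translation centre, scaled by dilation
        R = np.linalg.norm(X[:, None, :] - self.centres[None, :, :], axis=2)
        return mexican_hat(R / self.dilations)

    def fit(self, X, y):                            # y in {-1, +1}
        rng = np.random.default_rng(self.seed)
        idx = rng.choice(len(X), self.n_units, replace=False)
        self.centres = X[idx]                       # translations: sampled training points
        self.dilations = X.std() * np.ones(self.n_units)   # crude common dilation
        H = self._hidden(X)
        A = H.T @ H + self.ridge * np.eye(self.n_units)
        self.w = np.linalg.solve(A, H.T @ y)        # output weights
        return self

    def predict(self, X):
        return np.sign(self._hidden(X) @ self.w)

# toy usage with two synthetic "financial ratio" clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
y = np.r_[-np.ones(100), np.ones(100)]
model = WaveletNet(n_units=12).fit(X, y)
print("training accuracy:", (model.predict(X) == y).mean())
```

The parsimony issue the paper addresses shows up here in the choice of `n_units`, centres and dilations: too many units reproduces the over-parameterisation problem noted for neural networks, which is why a principled selection algorithm matters.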
Abstract:
A neural network enhanced proportional, integral and derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro-PID controller is structured around plant model identification and PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel, which approximates the plant dynamics around operating points, plus an error agent that accommodates the errors induced by linear submodel inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to detect the error agent, with the weights updated on the basis of the error between the plant output and the output from the linear submodel. The controller design procedure is based on the equivalent model, so the error agent naturally functions within the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum phase behaviours. Two simulation studies are provided to demonstrate the effectiveness of the controller design procedure.
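The identification half of such a scheme can be sketched compactly: a recursive least-squares estimator with a forgetting factor tracks a linear ARX submodel, while a small neural network is trained online on the residual between the plant output and the submodel output, playing the role of the error agent. The plant, network size and learning rate below are illustrative assumptions, and the generalised minimum-variance PID tuning step is not shown.

```python
# Sketch: RLS identification of a linear ARX submodel plus an online-trained
# neural "error agent" fitted to the residual. Illustrative plant and settings.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_theta = 2000, 4      # ARX regressors: y(k-1), y(k-2), u(k-1), u(k-2)

# recursive least squares with forgetting factor
theta = np.zeros(n_theta)
P = np.eye(n_theta) * 1e3
lam = 0.99

# tiny one-hidden-layer network for the error agent
W1, b1 = rng.normal(0, 0.3, (6, n_theta)), np.zeros(6)
W2, b2 = rng.normal(0, 0.3, 6), 0.0
lr = 0.01

def plant(y1, y2, u1, u2):
    # illustrative plant: stable linear core plus a mild nonlinearity and noise
    return 1.2 * y1 - 0.4 * y2 + 0.5 * u1 + 0.2 * u2 + 0.1 * np.tanh(y1 * u1) + 0.02 * rng.normal()

y = np.zeros(n_steps)
u = rng.uniform(-1, 1, n_steps)                 # persistently exciting input
for k in range(2, n_steps):
    phi = np.array([y[k - 1], y[k - 2], u[k - 1], u[k - 2]])
    y[k] = plant(y[k - 1], y[k - 2], u[k - 1], u[k - 2])
    # --- RLS update of the linear submodel ---
    y_lin = phi @ theta
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y[k] - y_lin)
    P = (P - np.outer(K, phi @ P)) / lam
    # --- error agent: network learns the residual y - y_lin online ---
    h = np.tanh(W1 @ phi + b1)
    e_hat = W2 @ h + b2
    err = (y[k] - y_lin) - e_hat
    W2 += lr * err * h                          # one gradient step on squared error
    b2 += lr * err
    dh = lr * err * W2 * (1 - h ** 2)
    W1 += np.outer(dh, phi)
    b1 += dh

print("identified ARX parameters:", np.round(theta, 3))
```

In the full scheme, the identified submodel and the error agent together form the equivalent model that the PID tuning law is derived from; here the sketch stops at the identification stage.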
Abstract:
A self-tuning proportional, integral and derivative control scheme based on genetic algorithms (GAs) is proposed and applied to the control of a real industrial plant. This paper explores the improvement of the parameter estimator, an essential part of an adaptive controller, through the hybridization of recursive least-squares algorithms with GAs, and examines the feasibility of applying GAs to the control of industrial processes. Both the simulation results and the experiments on a real plant show that the proposed scheme can be applied effectively.
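As a hedged illustration of GA-based tuning (simpler than the paper's hybrid GA/recursive-least-squares estimator), the sketch below uses a small genetic algorithm with truncation selection, uniform crossover and Gaussian mutation to search PID gains that minimise the integral squared error of a simulated first-order plant's step response. The plant, gain bounds and GA settings are assumptions made for the example.

```python
# Illustrative GA searching PID gains (Kp, Ki, Kd) on a simulated step response.
# Shows only the GA machinery: selection, crossover, mutation.
import numpy as np

rng = np.random.default_rng(2)

def step_response_cost(gains, n_steps=200, dt=0.05):
    kp, ki, kd = gains
    tau, y, integ, prev_err, cost = 1.0, 0.0, 0.0, 1.0, 0.0   # first-order plant
    for _ in range(n_steps):
        err = 1.0 - y                          # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau               # Euler step of dy/dt = (-y + u)/tau
        prev_err = err
        if abs(y) > 100.0:                     # penalise unstable gain sets
            return 1e6
        cost += err ** 2 * dt                  # integral squared error
    return cost

pop_size, n_gen, n_genes = 30, 40, 3
pop = rng.uniform(0.0, 5.0, (pop_size, n_genes))       # random initial gain sets
for gen in range(n_gen):
    fitness = np.array([step_response_cost(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation selection
    children = []
    for _ in range(pop_size - len(elite)):
        a, b = elite[rng.integers(len(elite), size=2)]
        mask = rng.random(n_genes) < 0.5               # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.1, n_genes)  # mutation
        children.append(np.clip(child, 0.0, 10.0))
    pop = np.vstack([elite, children])

best = pop[np.argmin([step_response_cost(ind) for ind in pop])]
print("best (Kp, Ki, Kd):", np.round(best, 2))
```

In the paper's scheme the GA operates inside the parameter estimator rather than directly on the controller gains, but the evolutionary loop (evaluate, select, recombine, mutate) is the same.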
Abstract:
Background and purpose: Phytocannabinoids in Cannabis sativa have diverse pharmacological targets extending beyond cannabinoid receptors, and several exert notable anticonvulsant effects. For the first time, we investigated the anticonvulsant profile of the phytocannabinoid cannabidivarin (CBDV) in vitro and in in vivo seizure models. Experimental approach: The effect of CBDV (1-100 μM) on epileptiform local field potentials (LFPs) induced in rat hippocampal brain slices by 4-AP application or Mg2+-free conditions was assessed by in vitro multi-electrode array recordings. Additionally, the anticonvulsant profile of CBDV (50-200 mg kg-1) in vivo was investigated in four rodent seizure models: maximal electroshock (mES) and audiogenic seizures in mice, and pentylenetetrazole (PTZ)- and pilocarpine-induced seizures in rats. CBDV effects in combination with commonly used antiepileptic drugs were investigated in the rat seizure models. Finally, the motor side effect profile of CBDV was investigated using static beam and grip strength assays. Key results: CBDV significantly attenuated status epilepticus-like epileptiform LFPs induced by 4-AP and Mg2+-free conditions. CBDV had significant anticonvulsant effects in mES (≥100 mg kg-1), audiogenic (≥50 mg kg-1) and PTZ-induced seizures (≥100 mg kg-1). CBDV alone had no effect against pilocarpine-induced seizures, but significantly attenuated these seizures when administered with valproate or phenobarbital at 200 mg kg-1 CBDV. CBDV had no effect on motor function. Conclusions and implications: These results indicate that CBDV is an effective anticonvulsant across a broad range of seizure models, does not significantly affect normal motor function, and therefore merits further investigation in chronic epilepsy models to justify human trials.