37 results for Worth
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In our state of centralised control of the curriculum and high-stakes testing, an examination subject's assessment objectives have become high profile. Some of the anomalous effects of this profile are shown in the teaching, question-setting, and marking of English literature. Glimpses of earlier times are revealed, all three secondary school key stages are considered, examination performances are discussed, and the views of beginning teachers about teaching to the test are sought.
Abstract:
Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrives in streams, it needs to be analyzed in real time and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with the new incoming data, have been developed. The majority of those algorithms focus on improving the predictive performance and assume that a model update is always desired as soon as possible and as frequently as possible. In this study we consider a potential model update as an investment decision, which, as in the financial markets, should be taken only if a certain return on investment is expected. We introduce and motivate a new research problem for data streams: cost-sensitive adaptation. We propose a reference framework for analyzing adaptation strategies in terms of costs and benefits. Our framework allows us to characterize and decompose the costs of model updates, and to assess and interpret the gains in performance due to model adaptation for a given learning algorithm on a given prediction task. Our proof-of-concept experiment demonstrates how the framework can aid in analyzing and managing adaptation decisions in the chemical industry.
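To make the investment framing concrete, here is a minimal sketch of such an adaptation decision in Python. It assumes a fixed update cost and a simple estimate of the benefit of switching models; all names and numbers are illustrative assumptions, not the paper's actual framework.

```python
# Minimal sketch of a cost-sensitive adaptation decision: treat a model
# update as an investment, taken only if the expected benefit exceeds
# the cost of updating. Names and numbers are illustrative only.

def expected_gain(errors_current, errors_candidate, value_per_unit_error):
    """Monetary benefit of switching models, estimated from recent
    prediction errors of the current and the updated (candidate) model
    on a common validation window."""
    errors_saved = sum(errors_current) - sum(errors_candidate)
    return errors_saved * value_per_unit_error

def should_update(errors_current, errors_candidate,
                  value_per_unit_error, update_cost):
    """Adapt only if the expected return on investment is positive."""
    gain = expected_gain(errors_current, errors_candidate, value_per_unit_error)
    return gain > update_cost

# Example: the candidate model roughly halves the error on the window,
# each unit of error costs 30, and retraining plus deployment costs 40.
old_err = [1.0, 0.8, 1.2, 0.9]
new_err = [0.5, 0.4, 0.6, 0.5]
print(should_update(old_err, new_err, value_per_unit_error=30, update_cost=40))
# True: the expected gain (57.0) exceeds the update cost.
```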
Abstract:
In this paper, a review is undertaken of the major models currently in use for describing water quality in freshwater river systems. The number of existing models is large because the various studies of water quality in rivers around the world have often resulted in the construction of new 'bespoke' models designed for the particular situation of that study. However, it is worth considering models that are already available, since an existing model, suitable for the purposes of the study, will save a great deal of work and may already have been established within regulatory and legal frameworks. The models chosen here are SIMCAT, TOMCAT, QUAL2E, QUASAR, MIKE-11 and ISIS, and the potential of each model is examined in relation to the issue of simulating dissolved oxygen (DO) in lowland rivers. These models have been developed for particular purposes and this review shows that no one model can provide all of the functionality required. Furthermore, all of the models contain assumptions and limitations that need to be understood if meaningful interpretations of the model simulations are to be made. The work is concluded with the view that it is unfair to set one model against another in terms of broad applicability, but that a model of intermediate complexity, such as QUASAR, is generally well suited to simulate DO in river systems.
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation less than, say, 1 m high, yet typically most of a floodplain may be covered in such vegetation. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present whether the method is useful, but it is worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help to distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m-wide embankment within a raster grid model with a 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5 m-wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms, using the sub-grid-scale LiDAR heights within finite elements at the waterline.
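As an illustration of the spatially varying friction idea, the sketch below maps a vegetation height raster onto Manning's n values cell by cell. The height thresholds and roughness values are placeholder assumptions, not calibrated figures from this work.

```python
import numpy as np

# Illustrative sketch: derive a spatially varying Manning's n friction
# map from a LiDAR-derived vegetation height raster, instead of
# calibrating one global floodplain value. Thresholds and n values
# below are placeholder assumptions only.

def manning_n_from_vegetation(veg_height):
    """Return a Manning's n array matching a vegetation height array (m)."""
    n = np.full(veg_height.shape, 0.03)   # bare ground / very short grass
    n[veg_height > 0.1] = 0.05            # short vegetation (grass, crops)
    n[veg_height > 1.0] = 0.08            # hedges, shrubs
    n[veg_height > 5.0] = 0.12            # trees
    return n

veg = np.array([[0.05, 0.4], [2.0, 8.0]])  # heights in metres
print(manning_n_from_vegetation(veg))
# [[0.03 0.05]
#  [0.08 0.12]]
```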
Abstract:
Flow and turbulence above urban terrain are more complex than above rural terrain, due to the different momentum and heat transfer characteristics that are affected by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested using observations of flow from a sonic anemometer located at 190.3 m height in London, U.K., using about 6500 h of data. Turbulence statistics (dimensionless wind speed and temperature, standard deviations, and correlation coefficients for momentum and heat transfer) were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35 respectively, similar to canonical values. Second, analysis of urban mixed-layer formulations during daytime convective conditions over London was undertaken, showing that atmospheric turbulence at high altitude over large cities may not differ greatly from that over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness sublayer depth.
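For concreteness, the local similarity quantities discussed above can be computed from sonic anemometer time series along the following lines. This is a sketch using standard eddy-covariance definitions, not the study's actual processing chain, and the demonstration data are synthetic.

```python
import numpy as np

g = 9.81      # gravitational acceleration (m s^-2)
kappa = 0.4   # von Karman constant

def local_similarity(u, v, w, T, z):
    """Return u_*, sigma_i/u_* (i = u, v, w) and z/Lambda from sonic
    anemometer series of wind components (m/s) and temperature (K)."""
    up, vp, wp, Tp = (x - np.mean(x) for x in (u, v, w, T))
    uw, vw = np.mean(up * wp), np.mean(vp * wp)
    u_star = (uw**2 + vw**2) ** 0.25                      # local friction velocity
    wT = np.mean(wp * Tp)                                 # kinematic heat flux
    Lambda = -u_star**3 * np.mean(T) / (kappa * g * wT)   # local Obukhov length
    sigma_ratios = np.array([up.std(), vp.std(), wp.std()]) / u_star
    return u_star, sigma_ratios, z / Lambda

# Synthetic demonstration: one hour of 10 Hz data with built-in
# momentum and heat fluxes (all values arbitrary).
rng = np.random.default_rng(0)
n = 36000
u = 5.0 + rng.normal(0, 1.0, n)
w = -0.2 * (u - 5.0) + rng.normal(0, 0.4, n)   # correlated with u
v = rng.normal(0, 0.8, n)
T = 288.0 - 0.5 * w + rng.normal(0, 0.2, n)    # correlated with w
print(local_similarity(u, v, w, T, z=190.3))
```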
Abstract:
Rationale The hyperphagic effect of ∆9-tetrahydrocannabinol (∆9THC) in humans and rodents is well known. However, no studies have investigated the importance of ∆9THC composition or any influence that other, non-∆9THC cannabinoids present in Cannabis sativa may have. We therefore compared the effects of purified ∆9THC, synthetic ∆9THC (dronabinol), and ∆9THC botanical drug substance (∆9THC-BDS), a ∆9THC-rich standardized extract comparable in composition to recreationally used cannabis. Methods Adult male rats were orally dosed with purified ∆9THC, synthetic ∆9THC, or ∆9THC-BDS, matched for ∆9THC content (0.34–2.68 mg/kg). Prior to dosing, subjects were satiated, and food intake was recorded following ∆9THC administration. Data were then analyzed in terms of hourly intake and meal patterns. Results All three ∆9THC substances tested induced significant hyperphagic effects at doses ≥0.67 mg/kg. These effects included increased intake during hour one, a shorter latency to the onset of feeding, and a greater duration of, and consumption in, the first meal. However, while some differences in vehicle control intakes were observed, there were significant, albeit subtle, differences in the pattern of effects between the purified ∆9THC and ∆9THC-BDS. Conclusion All ∆9THC compounds displayed classical ∆9THC effects on feeding, significantly increasing short-term intake whilst decreasing latency to the first meal. We propose that the subtle adjustments to the meal patterns seen between the purified ∆9THC and ∆9THC-BDS are due to the non-∆9THC cannabinoids present in ∆9THC-BDS. These compounds and other non-cannabinoids have an emerging and diverse pharmacology and can modulate ∆9THC-induced hyperphagia, making them worth further investigation for their therapeutic potential.
Abstract:
A modelling study was carried out into pea-barley intercropping in northern Europe. The two objectives were (a) to compare pea-barley intercropping to sole cropping in terms of grain and nitrogen yield amounts and stability, and (b) to explore options for managing pea-barley intercropping systems in order to maximize the biomass produced and the grain and nitrogen yields according to the available resources, such as light, water and nitrogen. The study consisted of simulations taking into account soil and weather variability among three sites located in northern European countries (Denmark, United Kingdom and France), and using 10 years of weather records. A preliminary stage evaluated the STICS intercrop model's ability to predict grain and nitrogen yields of the two species, using a 2-year dataset from trials conducted at the three sites. The work was carried out in two phases: (a) the model was run to investigate the potential of intercrops as compared to sole crops, and (b) the model was run to explore options for managing pea-barley intercropping, asking the following three questions: (i) in order to increase light capture, would it be worth delaying the sowing date of one species? (ii) how should the sowing density and seed proportion of each species in the intercrop be managed to improve total grain yield and N use efficiency? (iii) how can the use of nitrogen resources be optimized by choosing the most suitable preceding crop and/or the most appropriate soil? It was found that (1) intercropping made better use of environmental resources as regards yield amount and stability than sole cropping, with a noticeable site effect, (2) pea growth in intercrops was strongly linked to soil moisture, and barley yield was determined by nitrogen uptake and light interception due to its height relative to pea, (3) sowing barley before pea led to a relative grain yield reduction averaged over all three sites, but the sowing strategy must be adapted to the location, being dependent on temperature and thus latitude, (4) density and species proportions had a small effect on total grain yield, underlining the interspecific compensation in the use of environmental growth resources which led to similar total grain yields whatever the pea-barley design, and (5) long-term strategies including mineralization management through organic residue supply and rotation management were very valuable, always favoring intercrop total grain yield and N accumulation.
Abstract:
The possibility that parents of one sex may preferentially invest in offspring of a certain sex raises profound evolutionary questions about the relative worth of sons and daughters to their mothers and fathers. Post-fledging brood division, in which each parent feeds a different subset of offspring, has been well documented in birds. However, a lack of empirical evidence that this may be based on offspring sex, combined with the theoretical difficulty of explaining such an interaction, has led researchers to consider a gender bias in post-fledging brood division highly unlikely. Here we show that in the toc-toc, Foudia sechellarum, post-fledging brood division is extreme and determined by sex; where brood composition allows, male parents exclusively provision male fledglings, whereas female parents provision female fledglings. This is the first study to provide unambiguous evidence, based on molecular sexing, that sex-biased post-fledging brood division can occur in birds. Male and female parents provisioned at the same rate, and neither offspring nor parent survival appeared to be affected by the sex of the parent or offspring, respectively. The current hypotheses predicting advantages for brood division and preferential care for one specific type of offspring are discussed in the light of our results.
Abstract:
Pulsed Phase Thermography (PPT) has been proven effective for depth retrieval of flat-bottomed holes in different materials such as plastics and aluminum. In PPT, amplitude and phase delay signatures are available following data acquisition (carried out in a similar way as in classical Pulsed Thermography), by applying a transformation algorithm such as the Fourier Transform (FT) to thermal profiles. The authors have recently presented an extended review of PPT theory, including a new inversion technique for depth retrieval that correlates the depth with the blind frequency f_b (the frequency at which a defect produces enough phase contrast to be detected). An automatic defect depth retrieval algorithm has also been proposed, evidencing PPT's capabilities as a practical inversion technique. In addition, the use of normalized parameters to account for defect size variation, as well as depth retrieval from composites of complex shape (GFRP and CFRP), are currently under investigation. In this paper, steel plates containing flat-bottomed holes at different depths (from 1 to 4.5 mm) are tested by quantitative PPT. Least squares regression results show excellent agreement between depth and the inverse square root of the blind frequency, which can be used for depth inversion. Experimental results on steel plates with simulated corrosion are presented as well. It is worth noting that results are improved by performing PPT on reconstructed (synthetic) rather than on raw thermal data.
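The depth inversion referred to here is commonly expressed in the PPT literature through the thermal diffusion length at the blind frequency; a sketch of that relation is below, where C is an empirical constant fitted by least squares regression and α is the thermal diffusivity of the inspected material (the exact fitted values in the paper may differ):

```latex
% Depth inversion from the blind frequency f_b: the retrievable defect
% depth z scales with the thermal diffusion length \mu evaluated at f_b.
% C is fitted by least squares regression; \alpha is the thermal
% diffusivity of the inspected material.
\[
  \mu = \sqrt{\frac{\alpha}{\pi f_b}}, \qquad
  z = C\,\mu = C\sqrt{\frac{\alpha}{\pi f_b}} \;\propto\; \frac{1}{\sqrt{f_b}}
\]
```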
Abstract:
Building services are worth about 2% of GDP and are essential to the effective and efficient operation of buildings. It is increasingly recognised that the value of a building is related to the way it supports the client organisation's ongoing business operations. Building services are central to the functional performance of buildings and provide the necessary conditions for the health, well-being, safety and security of the occupants. They frequently comprise several technologically distinct sub-systems, and their design and construction requires the involvement of numerous disciplines and trades. Designers and contractors working on the same project are frequently employed by different companies. Materials and equipment are supplied by a diverse range of manufacturers. Facilities managers are responsible for the operation of building services in use. The coordination between these participants is crucially important for achieving optimum performance, but too often it is neglected, leaving room for serious faults; effective integration is therefore essential. Modern technology offers increasing opportunities for integrated personal-control systems for lighting, ventilation and security, as well as for interoperability between systems. Opportunities for a new mode of systems integration are provided by the emergence of PFI/PPP procurement frameworks. This paper attempts to establish how systems integration can be achieved in the process of designing, constructing and operating building services. The essence of the paper, therefore, is to envisage the emergent organisational responses to the realisation of building services as an interactive systems network.
Abstract:
This report summarises a workshop convened by the UK Food Standards Agency (FSA) on 11 September 2006 to review the results of three FSA-funded studies and other recent research on the effects of the dietary n-6:n-3 fatty acid ratio on cardiovascular health. The objective of this workshop was to reach a clear conclusion on whether or not it was worth funding any further research in this area. On the basis of this review of the experimental evidence, and on theoretical grounds, it was concluded that the n-6:n-3 fatty acid ratio is not a useful concept and that it distracts attention away from increasing absolute intakes of long-chain n-3 fatty acids, which have been shown to have beneficial effects on cardiovascular health. Other markers of fatty acid intake that relate more closely to physiological function may be more useful.