53 results for Traditional joints of wood
in CentAUR: Central Archive University of Reading - UK
Abstract:
The term microfibril angle (MFA) in wood science refers to the angle between the direction of the helical windings of cellulose microfibrils in the secondary cell wall of fibres and tracheids and the long axis of the cell. Technologically, it is usually applied to the orientation of cellulose microfibrils in the S2 layer, which makes up the greatest proportion of the wall thickness, since it is this layer that most affects the physical properties of wood. This review describes the organisation of the cellulose component of the secondary wall of fibres and tracheids and the various methods that have been used for the measurement of MFA. It considers the variation of MFA within the tree and the biological reason for the large differences found between juvenile (or core) wood and mature (or outer) wood. The ability of the tree to vary MFA in response to environmental stress, particularly in reaction wood, is also described. Differences in MFA have a profound effect on the properties of wood, in particular its stiffness. The large MFA in juvenile wood confers low stiffness and gives the sapling the flexibility it needs to survive high winds without breaking. It also means, however, that timber containing a high proportion of juvenile wood is unsuitable for use as high-grade structural timber. This fact has taken on increasing importance in view of the trend in forestry towards short-rotation cropping of fast-grown species. These trees at harvest may contain 50% or more of timber with low stiffness and therefore low economic value. Although they are presently grown mainly for pulp, pressure for increased timber production means that ways will be sought to improve the quality of their timber by reducing juvenile-wood MFA. The mechanism by which the orientation of microfibril deposition is controlled is still a matter of debate. However, the application of molecular techniques is likely to enable modification of this process. The extent to which these techniques should be used to improve timber quality by reducing MFA in juvenile wood is, however, uncertain, since care must be taken to avoid compromising the safety of the tree.
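The link between large MFA and low stiffness can be illustrated with a deliberately simplified calculation. The Python sketch below scales a reference axial modulus by cos^4 of the microfibril angle, a laminate-theory style approximation; the angles and the scaling rule are illustrative assumptions, not values taken from the review.

    import math

    def axial_stiffness(e_parallel, mfa_degrees):
        # Toy scaling: the contribution of the S2 microfibrils to axial stiffness
        # falls off roughly with cos^4 of the microfibril angle (laminate-theory
        # simplification; real cell walls are more complicated).
        theta = math.radians(mfa_degrees)
        return e_parallel * math.cos(theta) ** 4

    # Illustrative MFA values only: juvenile (core) wood vs mature (outer) wood
    for label, mfa in [("juvenile wood", 35.0), ("mature wood", 10.0)]:
        print(f"{label}: MFA {mfa:4.1f} deg -> relative stiffness {axial_stiffness(1.0, mfa):.2f}")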
Abstract:
The wood mouse is a common and abundant species in agricultural landscapes and is a focal species in pesticide risk assessment. Empirical studies on the ecology of the wood mouse have provided sufficient information for the species to be modelled mechanistically. An individual-based model was constructed to explicitly represent the locations and movement patterns of individual mice. This, together with the schedule of pesticide application, allows prediction of the risk to the population from pesticide exposure. The model included life-history traits of wood mice as well as typical landscape dynamics in agricultural farmland in the UK. The model obtains a good fit to the available population data and is fit for risk assessment purposes. It can help identify spatio-temporal situations with the largest potential risk of exposure and enables extrapolation from individual-level endpoints to population-level effects. The largest risk of exposure to pesticides was found when good crop growth in the “sink” fields coincided with high “source” population densities in the hedgerows.
Keywords: Population dynamics, Pesticides, Ecological risk assessment, Habitat choice, Agent-based model, NetLogo
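To make the individual-based idea concrete, here is a minimal, hypothetical Python sketch (not the published NetLogo model): mice start in hedgerow "source" habitat, move into a cropped "sink" field as crop cover improves, and are counted as exposed if they are in the field on the spray day. All rules and parameters are invented for illustration.

    import random

    class Mouse:
        def __init__(self, cell):
            self.cell = cell          # "hedgerow" (source) or "field" (sink)
            self.exposed = False

    def daily_step(mice, crop_cover, spray_today):
        # One daily time step: habitat choice first, then possible exposure.
        for m in mice:
            if m.cell == "hedgerow" and random.random() < crop_cover:
                m.cell = "field"      # better crop cover makes the field more attractive
            elif m.cell == "field" and random.random() < 0.1:
                m.cell = "hedgerow"   # small chance of returning to the hedgerow
            if spray_today and m.cell == "field":
                m.exposed = True      # exposure only in the treated field on the spray day

    random.seed(1)
    mice = [Mouse("hedgerow") for _ in range(200)]   # high "source" density at the start
    for day in range(30):
        cover = min(1.0, day / 30)                   # crop growth in the "sink" field
        daily_step(mice, cover, spray_today=(day == 20))
    print(sum(m.exposed for m in mice), "of", len(mice), "mice exposed")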
Abstract:
A review is given of the mechanics of cutting, ranging from the slicing of thin floppy offcuts (where there is negligible elasticity and no permanent deformation of the offcut) to the machining of ductile metals (where there is severe permanent distortion of the offcut/chip). Materials scientists employ the former conditions to determine the fracture toughness of ‘soft’ solids such as biological materials and foodstuffs. In contrast, traditional analyses of metalcutting are based on plasticity and friction only, and do not incorporate toughness. The machining theories are inadequate in a number of ways but a recent paper has shown that when ductile work of fracture is included many, if not all, of the shortcomings are removed. Support for the new analysis is given by examination of FEM simulations of metalcutting which reveal that a ‘separation criterion’ has to be employed at the tool tip. Some consideration shows that the separation criteria are versions of void-initiation-growth-and-coalescence models employed in ductile fracture mechanics. The new analysis shows that cutting forces for ductile materials depend upon the fracture toughness as well as plasticity and friction, and reveals a simple way of determining both toughness and flow stress from cutting experiments. Examples are given for a wide range of materials including metals, polymers and wood, and comparison is made with the same properties independently determined using conventional testpieces. Because cutting can be steady state, a new way is presented for simultaneously measuring toughness and flow stress at controlled speeds and strain rates.
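One common reading of "a simple way of determining both toughness and flow stress from cutting experiments" is a straight-line fit of cutting force per unit width against uncut chip thickness, with the slope carrying the plasticity/friction work and the intercept behaving like a fracture toughness. The sketch below shows only that generic regression step on made-up numbers; it is not the paper's analysis.

    import numpy as np

    # Hypothetical measurements: uncut chip thickness (mm) vs cutting force per unit width (N/mm)
    thickness = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
    force_per_width = np.array([18.0, 31.0, 45.0, 58.0, 71.0])

    # Assumed model: F/w = k*t + R, where the slope k reflects plasticity and
    # friction work per unit volume and the intercept R a specific work of fracture.
    slope, intercept = np.polyfit(thickness, force_per_width, 1)
    print(f"slope (plasticity/friction term): {slope:.0f} N/mm^2")
    print(f"intercept (toughness-like term): {intercept:.1f} N/mm ~= {intercept * 1000:.0f} J/m^2")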
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1-in-5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may place an increased onus on the model developer to produce a valid model.
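The two families of performance measure contrasted here can be summarised in a few lines. The sketch below computes an areal pattern-match score over wet/dry pixels (overlap divided by union of the wet areas) and an r.m.s. difference of waterline elevations; the arrays are placeholders, not SAR, LIDAR or LISFLOOD-FP output.

    import numpy as np

    def areal_fit(observed_wet, modelled_wet):
        # Pattern-matching measure: wet pixels common to both extents divided by
        # wet pixels in either extent (values near 1 indicate good overlap).
        inter = np.logical_and(observed_wet, modelled_wet).sum()
        union = np.logical_or(observed_wet, modelled_wet).sum()
        return inter / union

    def waterline_rmse(observed_z, modelled_z):
        # Height-based measure: r.m.s. elevation difference (m) at corresponding
        # points along the observed and modelled waterlines.
        d = np.asarray(observed_z) - np.asarray(modelled_z)
        return np.sqrt(np.mean(d ** 2))

    # Toy wet/dry grids and waterline heights, purely for illustration
    obs = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 1]], dtype=bool)
    mod = np.array([[0, 1, 1], [1, 1, 1], [0, 0, 0]], dtype=bool)
    print("areal fit:", round(areal_fit(obs, mod), 2))
    print("waterline r.m.s. (m):", round(waterline_rmse([10.2, 10.4, 10.5], [10.1, 10.6, 10.4]), 2))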
Abstract:
In the UK, the recycling of sewage sludge to land is expected to double by 2006, but the security of this route is threatened by environmental concerns and health scares. Strategic investment is needed to ensure sustainable and secure sludge recycling outlets. At present, the security of this landbank for sludge recycling is determined by legislation relating to nutrient rather than potentially toxic element (PTE) applications to land - especially the environmental risk linked to soil phosphorus (P) saturation. We believe that not all land has an equal risk of contributing nutrients derived from land applications to receiving waters. We are currently investigating whether it is possible to minimise nutrient loss by applying sludge to land outside Critical Source Areas (CSAs), regardless of soil P Index status. Research is underway to develop a predictive, spatially sensitive, semi-distributed model of critical thresholds for sludge application that goes beyond traditional 'end-of-pipe' or 'edge-of-field' modelling to include hydrological flow paths and delivery mechanisms to receiving waters from non-point sources at the catchment scale.
Abstract:
Given the non-monotonic form of the radiocarbon calibration curve, the precision of single C-14 dates on the calendar timescale will always be limited. One way around this limitation is through comparison of time-series, which should exhibit the same irregular patterning as the calibration curve. This approach can be employed most directly in the case of wood samples with many years of growth present (but not able to be dated by dendrochronology), where the tree-ring series of unknown date can be compared against the similarly constructed C-14 calibration curve built from known-age wood. This process of curve-fitting has come to be called "wiggle-matching." In this paper, we look at the requirements for getting good precision by this method: sequence length, sampling frequency, and measurement precision. We also look at three case studies: one a piece of wood that has been independently dated by dendrochronology, and two of unknown age relating to archaeological activity at Silchester, UK (Roman) and at Miletos, Anatolia (associated with the volcanic eruption at Thera).
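Wiggle-matching can be thought of as sliding the floating, ring-counted C-14 series along the calibration curve and scoring the misfit at each candidate calendar position. The sketch below uses a chi-square score and an invented calibration curve purely to show the mechanics; real work would use the internationally agreed calibration curve and purpose-built software.

    import numpy as np

    def wiggle_match(cal_years, cal_c14, ring_offsets, sample_c14, sample_err):
        # Slide the floating series along the calibration curve and return the
        # chi-square misfit for every candidate calendar year of the first ring.
        scores = {}
        for start in cal_years:
            years = start + ring_offsets              # ring spacings are known exactly
            if years.max() > cal_years.max():
                continue
            curve = np.interp(years, cal_years, cal_c14)
            scores[start] = np.sum(((sample_c14 - curve) / sample_err) ** 2)
        return scores

    # Invented calibration curve and floating tree-ring series, purely for illustration
    cal_years = np.arange(0, 400)
    cal_c14 = 2500 - 0.8 * cal_years + 30 * np.sin(cal_years / 25)
    ring_offsets = np.array([0, 10, 20, 30, 40])
    true_start = 150
    sample = np.interp(true_start + ring_offsets, cal_years, cal_c14)
    sample = sample + np.random.default_rng(0).normal(0.0, 5.0, sample.size)
    scores = wiggle_match(cal_years, cal_c14, ring_offsets, sample, sample_err=5.0)
    print("best-fitting start year:", min(scores, key=scores.get))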
Abstract:
Structure is an important physical feature of the soil that is associated with water movement, the soil atmosphere, microorganism activity and nutrient uptake. A soil without any obvious organisation of its components is known as apedal, and this state can have marked effects on several soil processes. Accurate maps of topsoil and subsoil structure are desirable for a wide range of models that aim to predict erosion, solute transport, or flow of water through the soil. Such maps would also be useful to precision farmers when deciding how to apply nutrients and pesticides in a site-specific way, and to target subsoiling and soil structure stabilization procedures. Typically, soil structure is inferred from bulk density or penetrometer resistance measurements and, more recently, from soil resistivity and conductivity surveys. Measuring the former is both time-consuming and costly, whereas observations by the latter methods can be made automatically and swiftly using a vehicle-mounted penetrometer or resistivity and conductivity sensors. The results of each of these methods, however, are affected by other soil properties, in particular moisture content at the time of sampling, texture, and the presence of stones. Traditional methods of observing soil structure identify the type of ped and its degree of development. Methods of ranking such observations from good to poor for different soil textures have been developed. Indicator variograms can be computed for each category or rank of structure, and these can be summed to give the sum of indicator variograms (SIV). Observations of the topsoil and subsoil structure were made at four field sites where the soil had developed on different parent materials. The observations were ranked by four methods, and indicator variograms and the sum of indicator variograms were computed and modelled for each ranking method. The individual indicators were then kriged with the parameters of the appropriate indicator variogram model to map the probability of encountering soil with the structure represented by that indicator. The model parameters of the SIVs for each ranking system were used with the data to krige the soil structure classes, and the results are compared with those for the individual indicators. The relations between maps of soil structure and selected wavebands from aerial photographs are examined as a basis for planning surveys of soil structure.
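The indicator-variogram step described here can be sketched in a few lines: each structure class becomes a 0/1 indicator, an experimental variogram is computed per class, and the per-class variograms are summed to give the SIV. The one-dimensional transect, lag binning and class boundaries below are simplified assumptions for illustration only.

    import numpy as np

    def indicator_variogram(x, classes, category, lags, tol=0.5):
        # Experimental variogram of the 0/1 indicator for one structure class,
        # computed along a one-dimensional transect with coordinates x (m).
        ind = (classes == category).astype(float)
        gamma = []
        for h in lags:
            sq = [(ind[i] - ind[j]) ** 2
                  for i in range(len(x)) for j in range(i + 1, len(x))
                  if abs(abs(x[j] - x[i]) - h) <= tol]
            gamma.append(0.5 * np.mean(sq) if sq else np.nan)
        return np.array(gamma)

    # Toy transect: sample coordinates and ranked structure classes 1 (good) to 3 (poor)
    x = np.arange(0.0, 50.0, 2.0)
    classes = np.where(x < 20, 1, np.where(x < 36, 2, 3))
    lags = np.array([2.0, 4.0, 8.0, 16.0])

    # Sum of indicator variograms (SIV) over the three structure classes
    siv = sum(indicator_variogram(x, classes, c, lags) for c in (1, 2, 3))
    print("SIV at lags", lags, "=", np.round(siv, 3))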
Abstract:
This article explores conflicts over a series of ruins located within Zimbabwe's flagship National Park. The relics have long been regarded as sacred places by local African communities evicted from their vicinity, and have come to be seen as their ethnic heritage. Local intellectuals' promotion of this heritage was an important aspect of a defensive mobilization of cultural difference on the part of a marginalized minority group. I explore both indigenous and colonial ideas about the ruins, the different social movements with which they have been associated and the changing social life they have given the stone relics. Although African and European ideas sometimes came into violent confrontation - as in the context of colonial-era evictions - there were also mutual influences in emergent ideas about tribe, heritage and history. The article engages with Pierre Nora's notion of 'sites of memory', which has usefully drawn attention to the way in which ideas of the past are rooted and reproduced in representations of particular places. But it criticizes Nora's tendency to romanticize pre-modern 'memory', suppress narrative and depoliticize traditional connections with the past. Thus, the article highlights the historicity of traditional means of relating to the past, including the often bitter and divisive politics of traditional ritual, myth, kinship, descent and 'being first'. It also emphasizes the entanglement of modern and traditional ideas, inadequately captured by Nora's implied opposition between history and memory.
Abstract:
The community pharmacy service medicines use review (MUR) was introduced in 2005 ‘to improve patient knowledge, concordance and use of medicines’ through a private patient–pharmacist consultation. The MUR represents a fundamental change in community pharmacy service provision. While traditionally pharmacists act as dispensers of medicines and providers of medicines advice, with patients as recipients, the MUR has pharmacists providing consultation-type activities and patients as active participants. The MUR facilitates a two-way discussion about medicines use. Traditional patient–pharmacist behaviours transform into a new set of behaviours involving the booking of appointments, consultation processes and form completion, and the physical environment of the patient–pharmacist interaction moves from the traditional setting of the dispensary and medicines counter to a private consultation room. Thus, the new service challenges traditional identities and behaviours of the patient and the pharmacist as well as the environment in which the interaction takes place. In 2008, the UK government concluded that there is at present too much emphasis on the quantity of MURs rather than on their quality.[1] A number of plans to remedy the perceived imbalance included a suggestion to reward ‘health outcomes’ achieved, with calls for a more focussed and scientific approach to the evaluation of pharmacy services using outcomes research. Specifically, the UK government set out the principal research areas for the evaluation of pharmacy services to include ‘patient and public perceptions and satisfaction’ as well as ‘impact on care and outcomes’. A limited number of ‘patient satisfaction with pharmacy services’ questionnaires are available, of varying quality, measuring dimensions relating to pharmacists’ technical competence, behavioural impressions and general satisfaction. For example, an often-cited paper by Larson[2] uses two factors to measure satisfaction, namely ‘friendly explanation’ and ‘managing therapy’; the factors are highly interrelated and the questions somewhat awkwardly phrased, but more importantly, we believe the questionnaire excludes some specific domains unique to the MUR. By conducting patient interviews with recent MUR recipients, we have been working to identify relevant concepts and develop a conceptual framework to inform item development for a Patient Reported Outcome Measure questionnaire bespoke to the MUR. We note with interest the recent launch of a multidisciplinary audit template by the Royal Pharmaceutical Society of Great Britain (RPSGB) in an attempt to review the effectiveness of MURs and improve their quality.[3] This template includes an MUR ‘patient survey’. We will discuss this ‘patient survey’ in light of our work and existing patient satisfaction with pharmacy questionnaires, outlining a new conceptual framework as a basis for measuring patient satisfaction with the MUR. Ethical approval for the study was obtained from the NHS Surrey Research Ethics Committee on 2 June 2008.
References
1. Department of Health (2008). Pharmacy in England: Building on Strengths – Delivering the Future. London: HMSO. www.official-documents.gov.uk/document/cm73/7341/7341.pdf (accessed 29 September 2009).
2. Larson LN et al. Patient satisfaction with pharmaceutical care: update of a validated instrument. J Am Pharm Assoc 2002; 42: 44–50.
3. Royal Pharmaceutical Society of Great Britain (2009). Pharmacy Medicines Use Review – Patient Audit. London: RPSGB. http://qi4pd.org.uk/index.php/Medicines-Use-Review-Patient-Audit.html (accessed 29 September 2009).
Abstract:
Foot and mouth disease (FMD) is a major threat, not only to countries whose economies rely on agricultural exports, but also to industrialised countries that maintain a healthy domestic livestock industry by eliminating major infectious diseases from their livestock populations. Traditional methods of controlling diseases such as FMD require the rapid detection and slaughter of infected animals, and any susceptible animals with which they may have been in contact, either directly or indirectly. During the 2001 epidemic of FMD in the United Kingdom (UK), this approach was supplemented by a culling policy driven by unvalidated predictive models. The epidemic and its control resulted in the death of approximately ten million animals, public disgust with the magnitude of the slaughter, and political resolve to adopt alternative options, notably including vaccination, to control any future epidemics. The UK experience provides a salutary warning of how models can be abused in the interests of scientific opportunism.