20 results for Horizontal impact testing
Abstract:
The use of plant fibre reinforced composites has increased continuously in recent years. Their low density, greater environmental friendliness and reduced cost have proved particularly attractive for low-tech applications, e.g. in the building, automotive and leisure industries. However, a major limitation to the use of these materials in structural components is their unsatisfactory impact performance. An intermediate approach, the production of glass/plant fibre hybrid laminates, has also been explored, in an attempt to obtain materials with sufficient impact properties whilst retaining reduced cost and a substantial environmental gain. A survey is given of aspects crucial to the use of glass/plant fibre hybrid laminates in structural components: the performance of hybrids when subjected to impact testing, and the effect of laminate configuration, manufacturing procedure and fibre treatment on the impact properties of the composite. Finally, indications are provided for a suitable selection of plant fibres with minimal extraction damage and sufficient toughness for introduction into an impact-resistant glass/plant fibre hybrid laminate.
Abstract:
Results are presented from a matrix of coupled model integrations, using atmosphere resolutions of 135 and 90 km, and ocean resolutions of 1° and 1/3°, to study the impact of resolution on simulated climate. The mean state of the tropical Pacific is found to be improved in the models with a higher ocean resolution. Such an improved mean state arises from the development of tropical instability waves, which are poorly resolved at low resolution; these waves reduce the equatorial cold tongue bias. The improved ocean state also allows for a better simulation of the atmospheric Walker circulation. Several sensitivity studies have been performed to further understand the processes involved in the different component models. Significantly decreasing the horizontal momentum dissipation in the coupled model with the lower-resolution ocean has benefits for the mean tropical Pacific climate, but decreases model stability. Increasing the momentum dissipation in the coupled model with the higher-resolution ocean degrades the simulation toward that of the lower-resolution ocean. These results suggest that enhanced ocean model resolution can have important benefits for the climatology of both the atmosphere and ocean components of the coupled model, and that some of these benefits may be achievable at lower ocean resolution, if the model formulation allows.
Abstract:
We give an overview of the development of "horizontal" European Committee for Standardisation (CEN) standards for characterising soils, sludges and biowaste in the context of environmental legislation in the European Union (EU). We discuss the various steps in the development of a horizontal standard (i.e. assessment of the possibility of such a standard, review of existing normative documents, pre-normative testing and validation) and related problems. We also provide a synopsis of European and international standards covered by the so-called Project HORIZONTAL.
Abstract:
A primary objective of agri-environment schemes is the conservation of biodiversity; in addition to increasing the value of farmland for wildlife, these schemes also aim to restore natural ecosystem functioning. The management of scheme options can influence their value for delivering ecosystem services by modifying the composition of floral and faunal communities. This study examines the impact of an agri-environment scheme prescription on ecosystem functioning by testing the hypothesis that vegetation management influences decomposition rates in grassy arable field margins. The effects of two vegetation management practices in arable field margins - cutting and soil disturbance (scarification) - on litter decomposition were compared using a litterbag experimental approach in early April 2006. Bags had either small mesh designed to restrict access to soil macrofauna, or large mesh that would allow macrofauna to enter. Bags were positioned on the soil surface or inserted into the soil in cut and scarified margins, retrieved after 44, 103 and 250 days, and the amount of litter mass remaining was calculated. Litter loss from the litterbags with large mesh was greater than from the small mesh bags, providing evidence that soil macrofauna accelerate rates of litter decomposition. In the large mesh bags, the proportion of litter remaining in bags above and belowground in the cut plots was similar, while in the scarified plots, there was significantly more litter left in the aboveground bags than in the belowground bags. This loss of balance between decomposition rates above and belowground in scarified margins may have implications for the development and maintenance of grassy arable field margins by influencing nutrient availability for plant communities.
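Litterbag mass-loss data of this kind are often summarised by fitting the single-exponential decay model of Olson (1963); the sketch below shows that standard calculation in Python, using placeholder numbers rather than the study's data:

```python
import numpy as np

# Retrieval times reported above; the mass fractions are placeholders,
# not values from the study.
days = np.array([44.0, 103.0, 250.0])
fraction_remaining = np.array([0.80, 0.55, 0.30])

# Single-exponential model: m(t)/m0 = exp(-k t), so a straight-line fit
# of ln(fraction) against time recovers the decay constant k.
k = -np.polyfit(days, np.log(fraction_remaining), 1)[0]
print(f"k = {k:.4f} per day; half-life = {np.log(2) / k:.0f} days")
```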
Abstract:
In this work, IR thermography is used as a non-destructive tool for impact damage characterisation on thermoplastic E-glass/polypropylene composites for automotive applications. The aim of these experiments was to compare impact resistance and to characterise damage patterns of different laminates, in order to provide indications for their use in components. Two E-glass/polypropylene composites were characterised: commingled Twintex® (with three different weave structures: directional, balanced and 3-D) and random-reinforced GMT. Directional and balanced Twintex were also coupled in a number of hybrid configurations with GMT to evaluate the possible use of GMT/Twintex hybrids in high-energy-absorption components. The laminates were impacted using a falling weight tower, with impact energies ranging from 15 J to penetration. Using IR thermography during cool-down following a long pulse (3 s), impact-damaged areas were characterised and the influence of weave structure on damage patterns was studied. IR thermography offered good accuracy for laminates with thickness not exceeding 3.5 mm: this appears to be a limit for the direct use of the method on components, where more refined signal treatment would probably be needed for impact damage characterisation.
Abstract:
Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on the radiation budget. Horizontal inhomogeneity is accounted for using the “Tripleclouds” scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using “exponential-random” overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found in areas of marine stratocumulus; the effect of vertical overlap is fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep tropical convection. The combined effect of the two parameterisations is found to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m−2, with shifts of up to 10 W m−2 in areas of marine stratocumulus. The effects of the uncertainty in our parameterisations on the radiation budget are also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests an insensitivity of the radiation budget to the exact nature of the global decorrelation height distribution derived in Part One.
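The exponential-random overlap rule used above has a simple closed form (in the formulation commonly attributed to Hogan and Illingworth, 2000): total cover is a blend of maximum and random overlap, weighted by a parameter that decays exponentially with layer separation. A minimal sketch in Python, with illustrative names and values:

```python
import numpy as np

def combined_cover(c1, c2, dz, z_decorr):
    """Combined cover of two vertically adjacent cloudy layers under
    exponential-random overlap; z_decorr is the decorrelation height."""
    alpha = np.exp(-dz / z_decorr)   # overlap parameter: 1 = maximum, 0 = random
    c_max = np.maximum(c1, c2)       # maximum-overlap combined cover
    c_rand = c1 + c2 - c1 * c2       # random-overlap combined cover
    return alpha * c_max + (1.0 - alpha) * c_rand

# Two 50%-cover layers separated by 1 km, with a 2 km decorrelation height:
print(combined_cover(0.5, 0.5, dz=1000.0, z_decorr=2000.0))  # ~0.60
```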
Abstract:
The role of the academic in the built environment seems, in general, to be neither well understood nor clearly articulated. While this problem is not unique to our field, there are plenty of examples in a wide range of academic disciplines where the academic role has been fully articulated. Built environment academics, however, have tended not to look beyond their own literature and their own vocational context in trying to give meaning to their academic work. The purpose of this keynote presentation is to explore the context of academic work generally, and the connections between education, research and practice in the built environment specifically. By drawing on ideas from the sociology of the professions, the role of universities, and the fundamentals of social science research, a case is made that helps to explain the kinds of problems that routinely obstruct academic progress in our field. This discussion reveals that, while there are likely to be great weaknesses in much of what is published and taught in the built environment, it is not too great a stretch to provide a more robust understanding and a good basis for developing our field in a way that would enable us collectively to make a major contribution to theory-building and theory-testing, and to make a good stab at tackling some of the problems facing society at large. There is no reason to disregard the fundamental academic disciplines that underpin our knowledge of the built environment. If we contextualise our work in these more fundamental disciplines, there is every reason to think that we can have a much greater impact than we have experienced to date.
Abstract:
A key strategy for improving the skill of quantitative precipitation predictions, as well as forecasts of hazardous weather such as severe thunderstorms and flash floods, is to exploit observations of convective activity (e.g. from radar). In this paper, a convection-permitting ensemble prediction system (EPS) aimed at addressing the problems of forecasting localized weather events with relatively short predictability time scales, based on a 1.5 km grid-length version of the Met Office Unified Model, is presented. Particular attention is given to the impact of using predicted observations of radar-derived precipitation intensity in the ensemble transform Kalman filter (ETKF) used within the EPS. Our initial results, based on the use of a 24-member ensemble of forecasts for two summer case studies, show that the convective-scale EPS produces fairly reliable forecasts of temperature, horizontal winds and relative humidity at 1 h lead time, as evident from inspection of rank histograms. On the other hand, the rank histograms also suggest that the EPS generates too much spread for forecasts of (i) surface pressure and (ii) surface precipitation intensity. This may indicate that, for (i), the surface pressure observation error standard deviation used to generate the rank histograms is too large, and that, for (ii), the excess spread may be the result of non-Gaussian precipitation observation errors. However, further investigations are needed to better understand these findings. Finally, the inclusion of predicted observations of precipitation from radar in the 24-member EPS considered in this paper does not appear to improve the 1-h lead time forecast skill.
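A rank histogram of the kind inspected above is computed by ranking each verifying observation against its ensemble members; a flat histogram indicates reliable spread, while a dome (observations falling mostly in central ranks) indicates over-spread. A minimal sketch with hypothetical names and toy data, not output from the Met Office system:

```python
import numpy as np

def rank_histogram(obs, ens):
    """obs: (n_cases,) observations; ens: (n_cases, n_members) forecasts.
    The rank of each observation is the number of members below it;
    ties are ignored here for simplicity."""
    ranks = (ens < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

# Toy check with a statistically consistent 24-member ensemble (matching
# the ensemble size used in the paper): the histogram comes out ~flat.
rng = np.random.default_rng(42)
obs = rng.standard_normal(10_000)
ens = rng.standard_normal((10_000, 24))
print(rank_histogram(obs, ens))
```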
Children playing branded video games: The impact of interactivity on product placement effectiveness
Abstract:
This study extends product placement research by testing the impact of interactivity on product placement effectiveness. The results suggest that when children cannot interact with the placements in video games, perceptual fluency is the underlying mechanism leading to positive affect. Therefore, the effects are only evident in a stimulus-based choice where the same stimulus is provided as a cue. However, when children have the opportunity to interact with the placements in video games, they may be influenced by conceptual fluency. Thus, placements are still effective in a memory-based choice where no stimulus is provided as a cue.
Keywords: Perceptual fluency; Conceptual fluency; Video games; Interactivity; Children; Product placement
Abstract:
This paper considers the effect of GARCH errors on the tests proposed by Perron (1997) for a unit root in the presence of a structural break. We assess the impact of degeneracy and integratedness of the conditional variance individually and find that, apart from in the limit, the testing procedure is insensitive to the degree of degeneracy but does exhibit an increasing over-sizing as the process becomes more integrated. When we consider the GARCH specifications that we are likely to encounter in empirical research, we find that the Perron tests are reasonably robust to the presence of GARCH and do not suffer from severe over- or under-rejection of a correct null hypothesis.
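As a sketch of the kind of data-generating process under study, the snippet below simulates a unit root series whose innovations follow a GARCH(1,1) process; the empirical size of a Perron-type test could then be estimated by applying the test to many such replications. Parameter values and names are illustrative assumptions, not the paper's design:

```python
import numpy as np

def unit_root_garch(T=250, omega=0.05, alpha=0.30, beta=0.65, seed=None):
    """Simulate y_t = y_{t-1} + e_t with e_t ~ GARCH(1,1):
    h_t = omega + alpha*e_{t-1}**2 + beta*h_{t-1}.
    alpha + beta close to 1 gives the near-integrated conditional
    variance for which over-sizing was reported."""
    rng = np.random.default_rng(seed)
    e = np.empty(T)
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(T):
        e[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * e[t] ** 2 + beta * h
    return np.cumsum(e)                # a true unit root process (the null)

y = unit_root_garch(seed=1)            # one replication of the null DGP
```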
Abstract:
The performance of flood inundation models is often assessed using satellite-observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite-observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran’s I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data, it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
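The subsampling procedure described above can be sketched as: compute Moran's I for random subsamples of the shoreline elevation points, and retain only those showing no appreciable spatial autocorrelation. This Python sketch uses inverse-distance weights and a simple threshold as a crude stand-in for the formal significance test; all names and parameters are illustrative:

```python
import numpy as np

def morans_i(values, coords, bandwidth):
    """Moran's I with inverse-distance weights truncated at `bandwidth`.
    Values near the expectation -1/(n-1) suggest no spatial dependency."""
    n = len(values)
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d = np.where(d > 0.0, d, np.inf)          # exclude self-pairs
    w = np.where(d <= bandwidth, 1.0 / d, 0.0)
    return (n / w.sum()) * (w * np.outer(z, z)).sum() / (z @ z)

def uncorrelated_subsamples(values, coords, n_sub, n_draws, bandwidth,
                            thresh=0.1, seed=0):
    """Draw random subsamples and keep those whose Moran's I is near zero."""
    rng = np.random.default_rng(seed)
    keep = []
    for _ in range(n_draws):
        idx = rng.choice(len(values), size=n_sub, replace=False)
        if abs(morans_i(values[idx], coords[idx], bandwidth)) < thresh:
            keep.append(idx)
    return keep   # calibrate against each subsample and average the results
```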
Abstract:
The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to base decisions regarding which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed at assessing the functional impact of motor manifestations in prHD and early HD individuals.
Abstract:
During April and May 2010 the ash cloud from the eruption of the Icelandic volcano Eyjafjallajökull caused widespread disruption to aviation over northern Europe. The location and impact of the eruption meant that a wealth of observations of the ash cloud was obtained, which can be used to assess the modelling of the long-range transport of ash in the troposphere. The UK FAAM (Facility for Airborne Atmospheric Measurements) BAe-146-301 research aircraft overflew the ash cloud on a number of days during May. The aircraft carries a downward-looking lidar which detected the ash layer through the backscatter of the laser light. In this study ash concentrations derived from the lidar are compared with simulations of the ash cloud made with NAME (Numerical Atmospheric-dispersion Modelling Environment), a general-purpose atmospheric transport and dispersion model. The simulated ash clouds are compared to the lidar data to determine how well NAME simulates the horizontal and vertical structure of the ash clouds. Comparison between the ash concentrations derived from the lidar and those from NAME is used to estimate the fraction of the ash emitted in the eruption that is transported over long distances, relative to the total emission of tephra. In making these comparisons, possible position errors in the simulated ash clouds are identified and accounted for. The ash layers seen by the lidar in this study were thin, with typical depths of 550–750 m. The vertical structure of the ash cloud simulated by NAME was generally consistent with the observed ash layers, although the layers in the simulated ash clouds that are identified with observed ash layers are about twice the depth of the observed layers. The structure of the simulated ash clouds was sensitive to the profile of ash emissions that was assumed. In terms of horizontal and vertical structure, the best results were obtained by assuming that the emission occurred at the top of the eruption plume, consistent with the observed structure of eruption plumes. However, early in the period, when the intensity of the eruption was low, assuming that the emission of ash was uniform with height gave better guidance on the horizontal and vertical structure of the ash cloud. Comparison of the lidar concentrations with those from NAME shows that 2–5% of the total mass erupted by the volcano remained in the ash cloud over the United Kingdom.
Abstract:
Anticoagulant rodenticides have been known for over half a century as an effective and safe method of rodent control. However, anticoagulant resistance, first discovered in 1958, poses a serious problem for their long-term use. Laboratory tests provide the main method for identifying the different types of anticoagulant resistance, quantifying the magnitude of their effect and choosing the best pest control strategy. The most important of these are the lethal feeding period (LFP) and blood clotting response (BCR) tests. These tests can now be used to quantify the likely effect of resistance on treatment outcome by providing an estimate of the ‘resistance factor’. In 2004 the gene responsible for anticoagulant resistance (VKORC1) was identified and sequenced. As a result, a new molecular resistance-testing methodology has been developed, and a number of resistance mutations have been identified, particularly in Norway rats and house mice. Three mutations of the VKORC1 gene in Norway rats have been identified to date that confer a degree of resistance to bromadiolone and difenacoum sufficient to affect treatment outcome in the field.