26 results for assessments
in the Cambridge University Engineering Department Publications Database
Abstract:
Active vibration control (AVC) is a relatively new technology for the mitigation of annoying human-induced vibrations in floors, but recent technological developments have demonstrated its great potential in this field. Despite this, when a floor is found to have problematic vibrations after construction, the unfamiliar technology of AVC is usually avoided in favour of more common techniques, such as Tuned Mass Dampers (TMDs), which have a proven track record of successful application, particularly for footbridges and staircases. This study investigates the advantages and disadvantages of AVC, compared with TMDs, for mitigating pedestrian-induced floor vibrations in offices. Simulations are performed using the results from a finite element model of a typical office layout that has a high vibration response level. The vibration problems on this floor are then alleviated using both AVC and TMDs, and the results of each mitigation configuration are compared. The results of this study will enable building owners and structural engineers to make more informed decisions regarding suitable technologies for reducing floor vibrations.
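The study itself compares AVC and TMDs on a finite element model of a full office floor. As a rough indication of the kind of response comparison involved, the sketch below treats a single floor mode as a single-degree-of-freedom system and adds a classically tuned TMD; active control is not modelled, the Den Hartog tuning rule stands in for whatever tuning the study used, and all parameter values (modal mass, frequency, damping, mass ratio) are illustrative assumptions.

```python
# Minimal sketch (not the paper's FE model): frequency response of one floor mode
# idealised as an SDOF system, with and without a tuned mass damper (TMD).
# All numerical values are illustrative assumptions.
import numpy as np

m1, f1, zeta1 = 20e3, 6.0, 0.02            # assumed modal mass (kg), frequency (Hz), damping ratio
k1 = m1 * (2 * np.pi * f1) ** 2
c1 = 2 * zeta1 * np.sqrt(k1 * m1)

mu = 0.02                                   # assumed TMD mass ratio
m2 = mu * m1
f2 = f1 / (1 + mu)                          # Den Hartog tuning rule (stand-in assumption)
zeta2 = np.sqrt(3 * mu / (8 * (1 + mu) ** 3))
k2 = m2 * (2 * np.pi * f2) ** 2
c2 = 2 * zeta2 * np.sqrt(k2 * m2)

freqs = np.linspace(3.0, 9.0, 601)          # Hz, spanning the assumed floor frequency

def receptance(with_tmd):
    """|displacement of the floor mass per unit harmonic force| across freqs."""
    out = []
    for f in freqs:
        w = 2 * np.pi * f
        if with_tmd:
            M = np.diag([m1, m2])
            C = np.array([[c1 + c2, -c2], [-c2, c2]])
            K = np.array([[k1 + k2, -k2], [-k2, k2]])
            X = np.linalg.solve(K - w**2 * M + 1j * w * C, np.array([1.0, 0.0]))
            out.append(abs(X[0]))
        else:
            out.append(abs(1.0 / (k1 - w**2 * m1 + 1j * w * c1)))
    return np.array(out)

print("peak response reduction factor with TMD:",
      receptance(False).max() / receptance(True).max())
```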
Abstract:
The current procedures for post-earthquake safety and structural assessment are performed manually by a skilled triage team of structural engineers/certified inspectors. These procedures, and particularly the physical measurement of damage properties, are time-consuming and qualitative in nature. This paper proposes a novel method that automatically detects spalled regions on the surface of reinforced concrete columns and measures their properties in image data. Spalling is accepted as an important indicator of significant damage to structural elements during an earthquake. In this method, the region of spalling is first isolated by means of a local entropy-based thresholding algorithm. The exposure of longitudinal reinforcement (the depth of spalling into the column) and the length of spalling along the column are then measured using a novel global adaptive thresholding algorithm in conjunction with template matching and morphological operations. The method was tested on a database of damaged RC column images collected after the 2010 Haiti earthquake, and comparison of the results with manual measurements indicates the validity of the method.
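As a rough illustration of the isolation step, the sketch below computes a local-entropy map of a column image and thresholds it to obtain a candidate spalled region, then measures the region's extent along the column. This is not the authors' implementation: the Otsu threshold stands in for the thresholding details in the paper, and the file name, structuring-element size, and image scale are assumptions.

```python
# Minimal sketch of entropy-based isolation of a spalled region (assumed inputs).
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.filters.rank import entropy
from skimage.filters import threshold_otsu
from skimage.morphology import disk, remove_small_objects

img = img_as_ubyte(color.rgb2gray(io.imread("column.jpg")))  # hypothetical RGB input image
ent = entropy(img, disk(9))                   # local entropy: spalled texture is "busier"
mask = ent > threshold_otsu(ent)              # threshold the entropy map (Otsu as a stand-in)
mask = remove_small_objects(mask, min_size=500)  # morphological clean-up of small false positives

rows = np.any(mask, axis=1)                   # image rows containing spalling
pixels_per_mm = 2.0                           # assumed image scale
# assumes the vertical image axis is aligned with the column axis
spall_length_mm = rows.sum() / pixels_per_mm
print(f"estimated spalling length along the column: {spall_length_mm:.0f} mm")
```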
Abstract:
Biopolymers are generally considered an eco-friendly alternative to petrochemical polymers because of the renewable feedstock used to produce them and their biodegradability. However, the farming practices used to grow these feedstocks often carry significant environmental burdens, and the production energy can be higher than for petrochemical polymers. Life cycle assessments (LCAs) comparing biopolymers with various petrochemical polymers are available in the literature; however, the results can be very disparate. This review has therefore been undertaken, focusing on three biodegradable biopolymers, poly(lactic acid) (PLA), poly(hydroxyalkanoates) (PHAs) and starch-based polymers, in an attempt to determine the environmental impact of each in comparison with petrochemical polymers. Reasons are explored for the discrepancies between these published LCAs. The majority of studies considered only the consumption of non-renewable energy and global warming potential, and often found these biopolymers to be superior to petrochemically derived polymers. In contrast, studies that considered other environmental impact categories, as well as those that were regional or product specific, often found that this conclusion could not be drawn. Despite some unfavorable results for these biopolymers, the immature nature of these technologies needs to be taken into account, as future optimization and improvements in process efficiencies are expected. © 2013 Elsevier B.V. All rights reserved.
Abstract:
INTRODUCTION: Recent studies in other European countries suggest that the prevalence of congenital cryptorchidism continues to increase. This study aimed to explore the prevalence and natural history of congenital cryptorchidism in a UK centre. METHODS: Between October 2001 and July 2008, 784 male infants were born in the prospective Cambridge Baby Growth Study. Of these, 742 were examined by trained research nurses at birth, and testicular position was assessed using standard techniques. Follow-up assessments were completed at ages 3, 12, 18 and 24 months in 615, 462, 393 and 326 infants, respectively. RESULTS: The prevalence of cryptorchidism at birth was 5.9% (95% CI 4.4% to 7.9%). Congenital cryptorchidism was associated with earlier gestational age (p<0.001), lower birth weight (p<0.001), shorter birth length (p<0.001) and shorter penile length at birth (p<0.0001) compared with other infants, but normal size after age 3 months. The prevalence of cryptorchidism declined to 2.4% at 3 months but unexpectedly rose again to 6.7% at 12 months as a result of new cases. The cumulative incidence of "acquired cryptorchidism" by age 24 months was 7.0%, and these cases had shorter penile length during infancy than other infants (p = 0.003). CONCLUSIONS: The prevalence of congenital cryptorchidism was higher than earlier estimates in UK populations. Furthermore, this study describes for the first time acquired cryptorchidism, or "ascending testis", as a common entity in male infants, possibly associated with reduced early postnatal androgen activity.
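For readers wanting to see the arithmetic behind the headline figure, the sketch below computes a prevalence estimate with a Wilson 95% interval. The assumed count of 44 affected infants among the 742 examined is chosen to reproduce the reported 5.9% (95% CI 4.4% to 7.9%); the paper's exact counts and interval method may differ.

```python
# Minimal sketch of a prevalence estimate with a Wilson 95% confidence interval.
# The counts are assumptions chosen to match the reported figures.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

k, n = 44, 742                      # assumed: 44 cryptorchid infants of 742 examined
lo, hi = wilson_ci(k, n)
print(f"prevalence {k/n:.1%}, 95% CI {lo:.1%} to {hi:.1%}")   # ~5.9%, 4.4% to 7.9%
```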
Abstract:
Contaminant behaviour in soils and fractured rock is very complex, not least because of the heterogeneity of the subsurface environment. For non-aqueous phase liquids (NAPLs), the liquid density contrast and interfacial tension between the contaminant and the interstitial fluid add to this complexity, increasing the difficulty of predicting NAPL behaviour in the subsurface. This paper outlines the need for physical model tests that can improve fundamental understanding of NAPL behaviour in the subsurface, enhance risk assessments of NAPL-contaminated sites, reduce uncertainty associated with NAPL source remediation and improve current technologies for NAPL plume remediation. Four case histories are presented to illustrate physical modelling approaches that have addressed problems associated with NAPL transport, remediation and source zone characterization. © 2006 Taylor & Francis Group, London.
Abstract:
If a product is being designed to be genuinely inclusive, then the designers need to be able to assess the level of exclusion of the product they are working on and to identify possible areas of improvement. To be of practical use, the assessments need to be quick, consistent and repeatable. The aim of this workshop is to invite attendees to participate in the evaluation of a number of everyday objects using an assessment technique being considered by the workshop organisers. The objectives of the workshop include evaluating the effectiveness of the assessment method, evaluating the accessibility of the products being assessed and suggesting revisions to the assessment scales being used. The assessment technique is to be based on the ONS capability measures [1]. This source recognises fourteen capability scales, of which seven are particularly pertinent to product evaluation, namely: motion, dexterity, reach and stretch, vision, hearing, communication, and intellectual functioning. Each of these scales ranges from 0 (fully able) through 1 (minimal impairment) to 10 (severe impairment). The attendees will be asked to rate the products on these scales. Clearly, the assessed accessibility of a product depends on the assumptions made about the context of use, so the attendees will be asked to note clearly the assumptions they are making about the context in which the product is being assessed. For instance, with a hot water bottle, assumptions have to be made about the availability of hot water, and these can affect the overall accessibility rating. The workshop organisers will not specify the context of use, as the aim is to identify how assessors would use the assessment method in the real world. The objects being assessed will include items such as remote controls, pill bottles, food packaging, hot water bottles and mobile telephones, and the attendees will be encouraged to assess two or more products in detail. Helpers will be on hand to assist and observe the assessments. The assessments will be collated and compared, and feedback about the assessment method will be sought from the attendees. Drawing on a preliminary review of the assessment results, initial conclusions will be presented at the end of the workshop; more detailed analyses will be made available in subsequent proceedings. It is intended that the workshop will provide attendees with an opportunity to perform hands-on assessments of a number of everyday products and to identify features which are inclusive and those which are not. It is also intended to encourage an appreciation of the capabilities to be considered when evaluating accessibility.
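As one possible way of recording such ratings consistently across attendees, the sketch below stores one 0-to-10 score per capability scale together with the context-of-use assumptions the attendee noted. The summary rule used here (report the most demanding scale) is an assumption of the sketch, not part of the workshop brief, and the example scores are purely illustrative.

```python
# Minimal sketch of a record for one product assessment on the seven ONS-derived
# capability scales (scores 0..10 as described in the abstract). Example values
# and the "most demanding scale" summary are assumptions for illustration.
from dataclasses import dataclass, field

SCALES = ["motion", "dexterity", "reach and stretch", "vision",
          "hearing", "communication", "intellectual functioning"]

@dataclass
class ProductAssessment:
    product: str
    context_assumptions: list[str]
    scores: dict[str, int] = field(default_factory=dict)  # scale name -> 0..10

    def most_demanding(self):
        """Return the scale with the highest score and its value (assumed summary rule)."""
        scale = max(self.scores, key=self.scores.get)
        return scale, self.scores[scale]

hot_water_bottle = ProductAssessment(
    product="hot water bottle",
    context_assumptions=["hot water already available", "used at home"],
    scores={s: 0 for s in SCALES} | {"dexterity": 6, "motion": 3},
)
print(hot_water_bottle.most_demanding())   # ('dexterity', 6)
```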
Abstract:
PD6493:1991 fracture assessments have been performed for a range of large-scale fracture mechanics tests conducted at TWI in the past. These tests cover several material groups, including pressure vessel steels, pipeline steels, stainless steels and aluminium alloys, and include both parent material and weldments. Ninety-two wide plate and pressure vessel tests have been assessed following the Level 1, 2 and 3 PD6493:1991 procedures. In total, over 400 assessments have been performed, examining many features of the fracture assessment procedure, including toughness input, proof testing, residual stress assumptions and stress state (tension, bending and biaxial). In all cases the large-scale tests have been assessed as one would assess actual structures, i.e. based on lower-bound toughness values obtained from small-scale fracture toughness specimens.
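For orientation, the sketch below shows the shape of a failure assessment diagram (FAD) check of the kind performed at Level 2, using the strip-yield curve commonly quoted for that level. The standard itself should be consulted for the exact definitions of Kr and Sr, residual stress treatment and cut-offs; all input values here are illustrative assumptions, not data from the TWI tests.

```python
# Minimal sketch of a Level 2-style FAD check (strip-yield curve assumed).
# All inputs are illustrative assumptions.
from math import pi, log, cos

def level2_kr_limit(Sr):
    """Allowable Kr on the strip-yield FAD curve for a given Sr (valid for Sr < 1)."""
    return Sr * (8 / pi**2 * log(1 / cos(pi * Sr / 2))) ** -0.5

# hypothetical assessment point for a wide-plate test
K_applied, K_mat = 55.0, 110.0        # MPa*sqrt(m): applied SIF and fracture toughness
sigma_ref, sigma_flow = 240.0, 400.0  # MPa: reference (net-section) stress and flow stress

Kr, Sr = K_applied / K_mat, sigma_ref / sigma_flow
limit = level2_kr_limit(Sr)
print(f"Kr={Kr:.2f}, Sr={Sr:.2f}, FAD limit={limit:.2f}, acceptable={Kr <= limit}")
```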
Abstract:
Purpose - The purpose of this paper is to explore the impact of configuration on supply network capability. It was believed that a configuration perspective might provide new insights on the capability and performance of supply networks, a gap in the literature, and provide a basis for the development of tools to aid their analysis and design. Design/methodology/approach - The methodology involved the development of a configuration definition and mapping approach extending established strategic and firm level constructs to the network operational level. The resulting tools were tested and refined in a series of case studies across a range of sectors and value chain models. Supply network capability assessments, from the perspective of the focal firm, were then compared with their configuration profiles. Findings - The configuration mapping tools were found to give new insights into the structure of supply networks and allow comparisons to be made across sectors and business models through the use of consistent and quantitative methods and common presentation. They provide the foundations for linking configuration to capability and performance, and contribute to supply network design and development by highlighting the intrinsic capabilities associated with different configurations. Research limitations/implications - Although multiple case networks have been investigated, the configuration exemplars remain suggestive models. The research suggests that a re-evaluation of operational process excellence models is needed, where the link between process maturity and performance may require a configuration context. Practical implications - Advantages of particular configurations have been identified with implications for supply network development and industrial policy. Originality/value - The paper seeks to develop established strategic management configuration concepts to the analysis and design of supply networks by providing a robust operational definition of supply network configuration and novel tools for their mapping and assessment. © Emerald Group Publishing Limited.
Abstract:
The Bayesian perspective of designing for the consequences of hazard is discussed. Structural engineers should be educated in Bayesian theory and its underlying philosophy, and about the centrality of the predictive distribution to the prediction problem. The primary contribution that Bayesianism can make to the debate about extreme possibilities is its clarification of the language of, and thinking about, risk. Frequentist methodologies are the wrong approach to the decisions that engineers need to make: decisions that involve assessments of abstract future possibilities based on incomplete and abstract information.
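As a minimal illustration of a predictive distribution, the object the abstract places at the centre of the prediction problem, the sketch below computes a Beta-Binomial posterior predictive for a yearly exceedance probability. The model choice and all numbers are assumptions for illustration and do not come from the paper.

```python
# Minimal sketch of a posterior predictive distribution (Beta-Binomial, assumed model).
from math import comb, lgamma, exp

a, b = 1.0, 9.0          # assumed Beta(a, b) prior on a yearly exceedance probability
k_obs, n_obs = 2, 50     # assumed record: 2 exceedances observed in 50 years
a_post, b_post = a + k_obs, b + (n_obs - k_obs)   # conjugate posterior parameters

def log_beta(x, y):
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def predictive(k, n):
    """Posterior predictive P(k exceedances in the next n years)."""
    return comb(n, k) * exp(log_beta(k + a_post, n - k + b_post) - log_beta(a_post, b_post))

print("P(at least one exceedance in the next 10 years):", 1 - predictive(0, 10))
```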
Abstract:
While tools have been developed to assist firms' decision making when bringing known products and components into the supply chain, fewer tools are available to guide the acquisition of earlier-stage technologies, which is a riskier proposition owing to higher technological and market uncertainties. Through a synthesis of the literature on technology sourcing, open innovation, alliances, mergers and acquisitions, outsourcing, and technology and knowledge transfer, and through consultation with industry, this paper identifies critical issues that decision makers should consider before making an early-stage technology acquisition. Sixteen questions emerge to guide decision making, comprising internal, technology and partner assessments. These questions allow a firm to disentangle the complexity of early-stage technology acquisitions and select the most appropriate targets.
Abstract:
Air pockets, one kind of concrete surface defect, are often created on formed concrete surfaces during concrete construction. Their existence undermines the desired appearance and visual uniformity of architectural concrete. Therefore, measuring the impact of air pockets on the concrete surface is vital in assessing the quality of architectural concrete. Traditionally, such measurements are based mainly on in-situ manual inspections, the results of which are subjective and heavily dependent on the inspectors' own criteria and experience. Inspectors may make different assessments even when inspecting the same concrete surface. In addition, the need for experienced inspectors costs owners or general contractors more in inspection fees. To alleviate these problems, this paper presents a methodology that can measure air pockets quantitatively and automatically. To achieve this goal, a high-contrast, scaled image of a concrete surface is acquired from a fixed distance range, and a spot filter is then used to accurately detect air pockets with the help of an image pyramid. The properties of the air pockets (their number, size and occupation area) are subsequently calculated and used to quantify the impact of air pockets on the architectural concrete surface. The methodology is implemented in a C++-based prototype and tested on a database of concrete surface images. Comparisons with manual tests validated its measuring accuracy. As a result, the methodology presented in this paper can increase the reliability of concrete surface quality assessment.
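As a rough sketch of the detection and quantification steps, the example below runs a multi-scale Laplacian-of-Gaussian blob detector over a greyscale surface image and reports the count, mean size and occupation fraction of the detected spots. The LoG detector stands in for the spot filter and image pyramid described in the abstract, and the file name, detector settings and image scale are assumptions.

```python
# Minimal sketch, not the paper's algorithm: counting dark spots (candidate air
# pockets) on a scaled concrete surface image and estimating their area fraction.
import numpy as np
from skimage import io, color
from skimage.feature import blob_log

gray = color.rgb2gray(io.imread("concrete_surface.jpg"))   # hypothetical input image
blobs = blob_log(1.0 - gray, min_sigma=2, max_sigma=15,
                 num_sigma=8, threshold=0.1)               # dark pockets appear as bright spots
radii_px = blobs[:, 2] * np.sqrt(2)                        # approximate radius of each blob (pixels)

pixels_per_mm = 4.0                                        # assumed image scale
count = len(blobs)
areas_mm2 = np.pi * (radii_px / pixels_per_mm) ** 2
occupation = areas_mm2.sum() / (gray.size / pixels_per_mm**2)
print(f"{count} air pockets, mean size {areas_mm2.mean():.1f} mm^2, "
      f"occupation {occupation:.2%}")
```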
Abstract:
Manual inspection is required to determine the condition of damaged buildings after an earthquake. The lack of available inspectors, combined with the large volume of inspection work, makes such inspection subjective and time-consuming, and the required inspections take weeks to complete, which has adverse economic and societal impacts on the affected population. This paper proposes an automated framework for rapid post-earthquake building evaluation. Under the framework, the visible damage (cracks and buckling) inflicted on concrete columns is first detected. The damage properties are then measured in relation to the column's dimensions and orientation, so that the column's load-bearing capacity can be approximated as a damage index. The column damage index, supplemented with other building information (e.g. structural type and column arrangement), is then used to query fragility curves of similar buildings, constructed from analyses of existing and ongoing experimental data. The query estimates the probability of the building being in different damage states. The framework is expected to automate the collection of building damage data, to provide a quantitative assessment of the building damage state, and to estimate the vulnerability of the building to collapse in the event of an aftershock. Videos and manual assessments of structures after the 2010 Haiti earthquake are used to test parts of the framework.
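As a minimal sketch of the fragility-query step, the example below maps a measured damage index to probabilities of discrete damage states using lognormal fragility curves for one building class. The lognormal parameterisation and all numbers are illustrative assumptions; the framework itself derives its curves from existing and ongoing experimental data.

```python
# Minimal sketch of querying fragility curves with a damage index (assumed curves).
from math import log, erf, sqrt

def lognormal_cdf(x, median, beta):
    return 0.5 * (1 + erf(log(x / median) / (beta * sqrt(2))))

# hypothetical curves: P(damage state >= ds | damage index) for one building class,
# parameterised by (median, log-standard deviation)
FRAGILITY = {"slight": (0.10, 0.5), "moderate": (0.25, 0.5),
             "severe": (0.45, 0.5), "collapse": (0.70, 0.5)}
STATES = ["slight", "moderate", "severe", "collapse"]

def damage_state_probabilities(damage_index):
    """Probability of being exactly in each damage state, given the damage index."""
    exceed = {ds: lognormal_cdf(damage_index, *FRAGILITY[ds]) for ds in STATES}
    probs = {"none": 1.0 - exceed["slight"]}
    for i, ds in enumerate(STATES):
        upper = exceed[STATES[i + 1]] if i + 1 < len(STATES) else 0.0
        probs[ds] = exceed[ds] - upper      # P(>= ds) - P(>= next state)
    return probs

print(damage_state_probabilities(0.35))     # e.g. collapse probability ~8% for this index
```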
Abstract:
Half of the world's annual production of steel is used in constructing buildings and infrastructure. Producing this steel causes significant amounts of carbon dioxide to be released into the atmosphere. Climate change experts recommend that this amount be halved by 2050; however, steel demand is predicted to have doubled by this date. As process efficiency improvements will not reach the required 75% reduction in emissions per unit of steel output (halving total emissions while output doubles requires per-unit emissions to fall to a quarter of current levels), new methods must be examined to deliver service using less steel production. To apply such methods successfully to construction, it must first be known where steel is currently used within the industry. This information is not available, so a methodology is proposed to estimate it from known data. Results are presented for steel flows by product for ten construction sectors, for both the UK and the world, in 2006. An estimate of steel use within a 'typical' building is also published for the first time. Industrial buildings and utility infrastructure are identified as the largest end uses of steel, while superstructure is confirmed as the main use of steel in a building. The results highlight discrepancies in previous steel estimates and life-cycle assessments, and will inform future research on lowering demand for steel and hence reducing carbon emissions. © 2012 Elsevier B.V. All rights reserved.