911 results for Photography in traffic engineering.
Abstract:
Post-earthquake structural safety evaluations are currently performed manually by a team of certified inspectors and/or structural engineers. This process is time-consuming and costly, keeping owners and occupants from returning to their businesses and homes. Automating these evaluations would enable faster, and potentially more consistent, relief and response processes. A key step toward such automation is the detection of exposed reinforcing steel. This paper presents a novel method of detecting exposed reinforcement in concrete columns for the purpose of advancing post-earthquake structural and safety evaluation of buildings. Under this method, a binary image of the reinforcing area is first isolated using a state-of-the-art adaptive thresholding technique. Next, the ribbed regions of the reinforcement are detected by binary template matching. Finally, vertical and horizontal profiling are applied to the processed image to filter out spurious pixels and to account for the size of the reinforcing bars relative to the structural element containing them. The final result is a combined binary image, overlaid on the original image, that discloses only the regions containing rebar. The method is tested on a set of images from the January 2010 earthquake in Haiti. Preliminary test results indicate that most exposed reinforcement is properly detected in images of moderately-to-severely damaged concrete columns.
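As a rough illustration of the pipeline just described, the sketch below chains together OpenCV stand-ins for the three stages: adaptive thresholding, binary template matching on a rib template, and vertical/horizontal profiling. The block size, matching threshold, profiling fractions, and the rib template itself are illustrative assumptions, not the paper's calibrated values.

```python
# Hypothetical sketch of the three-stage detection pipeline; all
# numeric parameters are illustrative, not the paper's values.
import cv2
import numpy as np

def detect_exposed_rebar(image_path, rib_template_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(rib_template_path, cv2.IMREAD_GRAYSCALE)

    # 1. Adaptive thresholding isolates candidate reinforcing regions.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)

    # 2. Binary template matching locates the ribbed texture of the bars.
    score = cv2.matchTemplate(binary, template, cv2.TM_CCOEFF_NORMED)
    h, w = template.shape
    mask = np.zeros_like(binary)
    for y, x in zip(*np.where(score > 0.6)):   # illustrative threshold
        mask[y:y + h, x:x + w] = 255

    # 3. Vertical and horizontal profiling: suppress rows and columns
    #    whose foreground count is small relative to the image extent,
    #    filtering stray pixels and relating bar size to member size.
    keep_cols = (mask > 0).sum(axis=0) >= 0.05 * mask.shape[0]
    keep_rows = (mask > 0).sum(axis=1) >= 0.05 * mask.shape[1]
    mask[:, ~keep_cols] = 0
    mask[~keep_rows, :] = 0

    # Combined binary image: rebar regions only, ready to be overlaid.
    return cv2.bitwise_and(binary, mask)
```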
Abstract:
The world is at the threshold of emerging technologies, where new systems in construction, materials, and civil and architectural design are poised to make the world better from a structural and construction perspective. Exciting developments, too numerous to name individually, take place every year, affecting design considerations and construction practices. This edited book brings together modern methods and advances in structural engineering and construction, fulfilling the mission of the ISEC Conferences, which is to enhance communication and understanding between structural and construction engineers for the successful design and construction of engineering projects. The articles in this book are those accepted for publication and presentation at the 6th International Structural Engineering and Construction Conference in Zurich. The 6th ISEC Conference in Zurich, Switzerland, follows the overwhelming reception and success of the previous ISEC conferences in Las Vegas, USA in 2009; Melbourne, Australia in 2007; Shunan, Japan in 2005; Rome, Italy in 2003; and Honolulu, USA in 2001. Many topics are covered in this book, ranging from legal affairs and contracting to innovations and risk analysis in infrastructure projects, analysis and design of structural systems, materials, architecture, and construction. The articles here are a lasting testimony to the excellent research being undertaken around the world, and they provide a platform for the exchange of ideas, research efforts, and networking in the structural engineering and construction communities. We congratulate and thank the authors for these articles, which were selected after intensive peer review, and our gratitude extends to all reviewers and members of the International Technical Committee. It is their combined contributions that have made this book a reality.
Abstract:
Pile jacking is becoming a valuable installation method due to the low noise and vibration involved in the procedure. Cyclic jacking may be used in an attempt to decrease the required installation force. Small-scale models of jacked piles were tested in sand and silt in a 10 m beam centrifuge. Two different piles were tested, one smooth and one rough, and each was installed in two ways: by monotonic jacking and by cyclic jacking. The cyclic installation involves displacement reversals at a given depth for a fixed number of cycles; the depth of reversal and the amplitude of the cycles vary between tests. The data show that the base resistance increases during cyclic jacking due to soil compaction at the pile toe. On the other hand, the shaft load decreases with the number of cycles applied, due to densification of the soil next to the pile shaft. Cyclic jacking may therefore be used in unplugged tubular piles to decrease the required installation load. © 2013 Taylor & Francis Group, London.
Abstract:
Optimisation is now an enabling technology in innovation. Multi-objective and multi-disciplinary design tools are essential in the engineering design process and have been applied extensively and successfully in aerospace and turbomachinery applications. These approaches give insight into the design space and identify the trade-offs between competing performance measures while satisfying a number of constraints at the same time. It is anticipated here that the same benefits can be obtained for the design of micro-scale combustors. In this paper, a multi-disciplinary automated design optimisation system was developed for this purpose, comprising a commercial computational fluid dynamics package and a multi-objective variant of the Tabu Search optimisation algorithm. The main objectives of this study are to optimise the principal micro-scale combustor design characteristics and to satisfy manufacturability considerations from the very beginning of the design process. Hydrogen-air combustion together with 14 geometrical and 2 operational parameters is used to describe and model the design problem. Two illustrative test cases are presented, in which the most important device operational requirements are optimised and the efficiency of the developed optimisation system is demonstrated. The identification, assessment, and suitability of the optimum design configurations are discussed in detail. Copyright © 2012 by ASME.
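The abstract does not reproduce the optimiser itself, so the sketch below shows only the general shape of a multi-objective Tabu Search loop with a Pareto archive, under stated assumptions: evaluate() stands in for the CFD evaluation of a design vector, the move selection is deliberately simplified (a real MOTS variant ranks the neighbourhood by dominance), and all tuning values are illustrative.

```python
# Minimal multi-objective Tabu Search sketch; evaluate() is a stand-in
# for the CFD package, and all parameters are illustrative.
import random
from collections import deque

def dominates(a, b):
    # Pareto dominance for minimisation: a dominates b.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mo_tabu_search(evaluate, x0, bounds, iters=200, tenure=3, step=0.05):
    # tenure must stay below len(bounds) or the neighbourhood empties.
    tabu = deque(maxlen=tenure)               # recently perturbed variables
    current = tuple(x0)
    archive = [(current, evaluate(current))]  # non-dominated designs found
    for _ in range(iters):
        # Neighbourhood: perturb each non-tabu variable up and down.
        moves = []
        for i, (lo, hi) in enumerate(bounds):
            if i in tabu:
                continue
            for d in (-step, step):
                x = list(current)
                x[i] = min(hi, max(lo, x[i] + d * (hi - lo)))
                moves.append((i, tuple(x)))
        if not moves:
            break
        i, current = random.choice(moves)     # simplified move selection
        tabu.append(i)
        f = evaluate(current)
        # Keep the archive non-dominated.
        if not any(dominates(g, f) for _, g in archive):
            archive = [(x, g) for x, g in archive if not dominates(f, g)]
            archive.append((current, f))
    return archive

# Toy usage with two analytic objectives in place of CFD results.
pareto = mo_tabu_search(lambda x: (x[0] ** 2, (x[0] - 1) ** 2 + x[1] ** 2),
                        x0=(0.5, 0.5), bounds=[(0, 1), (0, 1)], tenure=1)
```

In the paper's setting, the design vector would carry the 14 geometrical and 2 operational parameters, and each evaluate() call would be a full CFD run, which is why the neighbourhood size and iteration budget matter.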
Abstract:
Large concrete structures need to be inspected in order to assess their current physical and functional state, to predict future conditions, to support investment planning and decision making, and to allocate limited maintenance and rehabilitation resources. Current procedures for the condition and safety assessment of large concrete structures are performed manually, leading to subjective and unreliable results, costly and time-consuming data collection, and safety issues. To address these limitations, automated machine-vision-based inspection procedures have increasingly been proposed by the research community. This paper presents current achievements and open challenges in vision-based inspection of large concrete structures. First, the general concept of Building Information Modeling is introduced. Then, vision-based 3D reconstruction and as-built spatial modeling of concrete civil infrastructure are presented. Following that, the focus turns to structural member recognition and to concrete damage detection and assessment, exemplified for concrete columns. Although some challenges are still under investigation, it can be concluded that vision-based inspection methods have improved significantly over the last 10 years; as-built spatial modeling as well as damage detection and assessment of large concrete structures now have the potential to be fully automated.
Abstract:
This edited volume presents the proceedings of the 20th CIRP LCE Conference, which cover various areas in life cycle engineering such as life cycle design, end-of-life management, manufacturing processes, manufacturing systems, methods and ...
Abstract:
This paper reports for the first time the transient expression of a reporter gene, lacZ, in the unicellular green alga Haematococcus pluvialis. Using the micro-particle bombardment method, transient expression of lacZ was detected in motile cells in the exponential phase bombarded under rupture-disc pressures of 3103 kPa and 4137 kPa. Transient expression of the lacZ gene could not be observed in non-motile cells of this alga under the same transformation conditions. No lacZ background was found in either the motile or the non-motile cells. The study suggests a promising potential of the SV40 promoter and the lacZ reporter gene in the genetic engineering of unicellular green algae.
Abstract:
Seepage control in karstic rock masses is one of the most important problems in domestic hydroelectric engineering, mining engineering, and traffic engineering. At present, permeability assessment and leakage analysis of multi-layer karstic rock masses are mainly qualitative and seldom quantitative. This report presents quantitative analyses of the permeability coefficient and the seepage amount, providing a theoretical basis for the study of seepage behaviour and for seepage control treatment of karstic rocks. Based on field measurements in the horizontal grouting galleries of the seepage control curtains on the left bank of the Shuibuya Hydropower Project on the Qingjiang River, a hydraulic model is established, and the computed results provide a scientific basis for optimization of the grouting curtain works. The following issues are addressed:

(1) Based on in-situ measurements of fissures and karstic cavities in the grouting galleries, the characteristics of the karstic rock mass are analyzed and a stochastic structural model of the rock mass is set up, providing the basis for calculating its permeability and leakage amount.

(2) From the distribution of the measured joints in the grouting galleries and the results of the stochastic structural model between galleries, a formula for the permeability tensor of the fissure system is derived and implemented as a program in the Visual Basic language (a generic stand-in for such a computation is sketched after this abstract). The computed results support zoning of the fissured rock masses, calculation of the seepage amount, and optimization of the seepage control curtains.

(3) Fractal theory is used to quantify the roughness of the conduit walls and the sinuosity of the karstic conduits. It is proposed that the roughness coefficient of karstic caves be expressed through two fractal dimensions, Ds and Dr, representing respectively the extension sinuosity of the caves and the roughness of the conduit walls. The existing formula for the seepage amount of karstic conduits is revised and programmed accordingly. The seepage amount of the rock masses in the measured grouting galleries is estimated for the case in which no seepage control measures are taken before reservoir impoundment; the results support design and construction optimization of the seepage curtains of the Shuibuya project.

This report is part of the subject "Karstic hydrogeology and the structural model and seepage hydraulics of karstic rock masses", a sub-program of "Study on seepage hydraulics of multi-layer karstic rock masses and its application in seepage control curtain engineering", financially supported by the Hubei Provincial key science and technology programme.
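The report's own permeability-tensor formula is not reproduced in the abstract, so the sketch below implements the classical parallel-plate (Snow) model often used for this purpose: each joint set with aperture b, spacing s, and unit normal n contributes (g b^3 / 12 nu s)(I - n n^T) to the hydraulic conductivity tensor. It is a stand-in for the standard approach, not the authors' Visual Basic program, and the example values are illustrative.

```python
# Illustrative Snow-model conductivity tensor for a fractured rock mass;
# a stand-in for the report's formula, which the abstract does not give.
import numpy as np

G = 9.81      # gravitational acceleration, m/s^2
NU = 1.0e-6   # kinematic viscosity of water, m^2/s

def conductivity_tensor(joint_sets):
    """joint_sets: iterable of (aperture b [m], spacing s [m], normal n)."""
    K = np.zeros((3, 3))
    for b, s, n in joint_sets:
        n = np.asarray(n, dtype=float)
        n /= np.linalg.norm(n)
        # Each set conducts in its own plane, hence the (I - n n^T) term.
        K += (G * b ** 3) / (12.0 * NU * s) * (np.eye(3) - np.outer(n, n))
    return K  # hydraulic conductivity tensor, m/s

# Example: two orthogonal joint sets, 0.5 mm apertures at 1 m spacing.
K = conductivity_tensor([(5e-4, 1.0, (1, 0, 0)),
                         (5e-4, 1.0, (0, 0, 1))])
```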
Abstract:
Act2 is a highly concurrent programming language designed to exploit the processing power available from parallel computer architectures. The language supports advanced concepts in software engineering, providing high-level constructs suitable for implementing artificial-intelligence applications. Act2 is based on the Actor model of computation, consisting of virtual computational agents that communicate by message passing. Act2 serves as a framework in which to integrate an actor language, a description and reasoning system, and a problem-solving and resource management system. This document describes issues in Act2's design and the implementation of an interpreter for the language.
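Act2 source code is not shown in the abstract; the minimal Python sketch below illustrates only the Actor pattern it describes: agents with private state and a mailbox, interacting solely through asynchronous messages. The class and message names are invented for illustration.

```python
# Minimal illustration of the Actor model: private state, a mailbox,
# and communication only by message passing (names are illustrative).
import queue
import threading

class Actor:
    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):            # the only way to interact
        self.mailbox.put(message)

    def _run(self):
        while True:                     # process one message at a time
            self.receive(self.mailbox.get())

    def receive(self, message):         # behaviour: defined per actor
        raise NotImplementedError

class Counter(Actor):
    def __init__(self):
        self.count = 0                  # private state, never shared
        super().__init__()

    def receive(self, message):
        kind, reply_to = message
        if kind == "inc":
            self.count += 1
        elif kind == "get":
            reply_to.put(self.count)    # reply via a channel, not a return

# All interaction goes through messages.
c = Counter()
c.send(("inc", None))
c.send(("inc", None))
out = queue.Queue()
c.send(("get", out))
print(out.get())  # 2
```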
Abstract:
Lee, M.H., Model-Based Reasoning: A Principled Approach for Software Engineering, Software - Concepts and Tools, 19(4), pp. 179-189, 2000.
Abstract:
The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers worldwide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. The Web's problems have many symptoms: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration and ignore the potential for distributing computation. All of these, however, can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and related costs. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and its models must be tailored to represent the best trade-off for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
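As a hypothetical illustration of this framework idea, the sketch below fixes an abstract interface and swaps two concrete resource-management models in behind it, one conservative and one probabilistic. All names are invented for illustration and are not the project's actual Resource Management Interface.

```python
# Hypothetical sketch of a family of models behind one abstract
# interface; not the project's actual RMI.
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Abstract interface that every resource-management model implements."""
    @abstractmethod
    def admit(self, cost, deadline):
        """Decide whether a task with this cost can meet its deadline."""

class ConservativeManager(ResourceManager):
    # Guarantees deadlines: admits only when worst-case load still fits.
    def __init__(self, capacity):
        self.capacity, self.load = capacity, 0.0

    def admit(self, cost, deadline):
        rate = cost / deadline
        if self.load + rate <= self.capacity:
            self.load += rate
            return True
        return False

class ProbabilisticManager(ResourceManager):
    # Trades certainty for utilisation: overbooks on expected demand.
    def __init__(self, capacity, overbook=1.2):
        self.capacity, self.overbook, self.load = capacity, overbook, 0.0

    def admit(self, cost, deadline):
        rate = cost / deadline
        if self.load + rate <= self.capacity * self.overbook:
            self.load += rate
            return True
        return False

# The surrounding system depends only on the abstract interface, so the
# model can be chosen per setting, as the framework description requires.
def schedule(manager: ResourceManager, tasks):
    return [t for t in tasks if manager.admit(*t)]
```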
Abstract:
Recent measurement-based studies reveal that most Internet connections are short in terms of the amount of traffic they carry (mice), while a small fraction of connections carry a large portion of the traffic (elephants). A careful study of the TCP protocol shows that, without help from an Active Queue Management (AQM) policy, short connections tend to lose to long connections in their competition for bandwidth, because short connections never gain detailed knowledge of the network state and are therefore doomed to be less competitive under the conservative TCP congestion control algorithm. Inspired by the Differentiated Services (Diffserv) architecture, we propose to give preferential treatment to short connections inside the bottleneck queue, so that short connections experience a lower packet drop rate than long connections. This is done by employing the RIO (RED with In and Out) queue management policy, which uses different drop functions for different classes of traffic. Our simulation results show that: (1) in a highly loaded network, preferential treatment is necessary to provide short TCP connections with better response time and fairness without hurting the performance of long TCP connections; (2) the proposed scheme still delivers packets in FIFO order at each link, so it maintains the statistical multiplexing gain and does not reorder packets; (3) choosing a smaller default initial timeout value for TCP can help enhance the performance of short TCP flows, though not as effectively as our scheme and at the risk of congestion collapse; (4) in the worst case, our proposal works as well as a regular RED scheme in terms of response time and goodput.
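To make the mechanism concrete, here is a small sketch of the RIO drop decision in the spirit described above: packets of short connections ("in") are dropped against a gentler RED curve keyed to the average queue occupancy of in packets, while packets of long connections ("out") face an earlier, more aggressive curve keyed to the total average queue. The threshold values are illustrative, not the paper's simulation settings.

```python
# Sketch of a RIO (RED with In and Out) drop decision; thresholds and
# probabilities are illustrative, not the paper's settings.
import random

def red_drop_prob(avg, min_th, max_th, max_p):
    # Classic RED: drop probability grows linearly between thresholds.
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

def rio_drop(packet_is_short, avg_in, avg_total):
    if packet_is_short:
        # "In" profile: later onset, small maximum drop probability.
        p = red_drop_prob(avg_in, min_th=40, max_th=70, max_p=0.02)
    else:
        # "Out" profile: drops earlier and harder, keyed to the total
        # queue, so long flows absorb congestion first.
        p = red_drop_prob(avg_total, min_th=10, max_th=40, max_p=0.10)
    return random.random() < p
```

Because the decision is made at enqueue time and surviving packets join a single FIFO queue, the scheme preserves packet order and the statistical multiplexing gain, consistent with point (2) above.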