915 results for Engineering design--Data processing
Abstract:
A previous study sponsored by the Smart Work Zone Deployment Initiative, “Feasibility of Visualization and Simulation Applications to Improve Work Zone Safety and Mobility,” demonstrated the feasibility of combining readily available, inexpensive software programs, such as SketchUp and Google Earth, with standard two-dimensional civil engineering design programs, such as MicroStation, to create animations of construction work zones. The animations reflect changes in work zone configurations as the project progresses, representing an opportunity to visually present complex information to drivers, construction workers, agency personnel, and the general public. The purpose of this study is to continue the work from the previous study to determine the added value and resource demands created by including more complex data, specifically traffic volume, movement, and vehicle type. This report describes the changes that were made to the simulation, including incorporating additional data and converting the simulation from a desktop application to a web application.
Abstract:
Expanded abstract: The Iowa Department of Transportation (IA DOT) is finalizing research to streamline field inventory and inspection of culverts by Maintenance and Construction staff while maximizing the use of tablet technologies. The project began in 2011 to develop new best practices for field staff to assist in the inventory, inspection, and maintenance of assets along the roadway. The team has spent the past year working through the complexities of identifying the most appropriate tablet hardware for field data collection. A small-scale deployment of tablets occurred in spring 2013 to collect several safety-related assets (culverts, signs, guardrail, and incidents). Data can be collected in disconnected or connected modes, and an associated desktop environment allows the data to be viewed and queried after being synced into the master database. A deployment plan and related workflow processes are under development; these will eventually feed information into IA DOT's larger asset management system and make the information available for decision making. The team is also working with the IA DOT Design Office on Computer Aided Drafting (CAD) data processing and with the IA DOT Construction Office on a new digital as-built plan process to leverage the complete data life cycle, so information can be developed once and reused by Maintenance staff farther along in the process.
Abstract:
This work proposes a parallel architecture for a motion estimation algorithm. Image processing is well known to require a huge amount of computation, mainly at the low-level processing stage, where algorithms deal with large numbers of pixel data. One approach to motion estimation is to detect correspondences between two images. Because of its regular processing scheme, a parallel implementation of the correspondence problem is a suitable way to reduce computation time. This work introduces a parallel, real-time implementation of these low-level tasks, carried out from the moment the current image is acquired by the camera until the pairs of matched points are detected.
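The abstract does not give the algorithmic details, but the correspondence task being parallelized can be illustrated with a standard block-matching search; this is an assumption for illustration, not necessarily the method of the paper. A minimal serial Python sketch:

```python
# Minimal sketch of correspondence-based motion estimation (illustrative only;
# function and parameter names are assumptions, not taken from the paper).
import numpy as np

def match_block(prev_img, curr_img, y, x, block=8, search=4):
    """Find the displacement of the block at (y, x) in prev_img that best matches
    curr_img within a +/- search window, using the sum of absolute differences (SAD)."""
    ref = prev_img[y:y + block, x:x + block]
    best, best_dydx = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr_img.shape[0] or xx + block > curr_img.shape[1]:
                continue
            cand = curr_img[yy:yy + block, xx:xx + block]
            sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
            if sad < best:
                best, best_dydx = sad, (dy, dx)
    return best_dydx  # estimated motion vector for this block

# Each block's search is independent of the others, which is why the task
# parallelizes naturally at the pixel/block level.
prev_img = np.random.randint(0, 255, (64, 64), dtype=np.uint8)
curr_img = np.roll(prev_img, shift=(1, 2), axis=(0, 1))
print(match_block(prev_img, curr_img, 16, 16))  # expected (1, 2)
```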
Abstract:
The Iowa Department of Transportation (DOT) is responsible for approximately 4,100 bridges and structures that are part of the state’s primary highway system, which includes the Interstate, US, and Iowa highway routes. A pilot study was conducted for six bridges in two Iowa river basins—the Cedar River Basin and the South Skunk River Basin—to develop a methodology to evaluate their vulnerability to climate change and extreme weather. The six bridges had been either closed or severely stressed by record streamflow within the past seven years. An innovative methodology was developed to generate streamflow scenarios from climate change projections: appropriate rainfall projection data were selected and fed into a streamflow model that generated continuous peak annual streamflow series for 1960 through 2100, which were then used as input to PeakFQ to estimate return intervals for floods. The methodology evaluated the plausibility of the rainfall projections and the credibility of the streamflow simulation while remaining consistent with U.S. Geological Survey (USGS) protocol for estimating flood return intervals. The results were conveyed in an innovative graph that combined historical and scenario-based design metrics for use in bridge vulnerability analysis and engineering design. The pilot results indicated that the annual peak streamflow response to climate change will likely be basin-size dependent; that four of the six pilot study bridges would be exposed to more frequent extreme streamflow and would be overtopped more often; that the proposed design for replacing the Interstate 35 bridges over the South Skunk River south of Ames, Iowa is resilient to climate change; and that some Iowa DOT bridge design policies could be reviewed to consider incorporating climate change information.
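As a rough illustration of the final step of the methodology, the sketch below fits a simplified log-Pearson Type III distribution to an annual peak series to estimate return-interval discharges. It is only a conceptual stand-in for PeakFQ, which implements the full USGS Bulletin 17B/17C procedure, and the input data here are synthetic.

```python
# Simplified illustration of estimating flood return-interval discharges from an
# annual peak streamflow series with a log-Pearson Type III fit. This is a sketch
# of the idea only; it does not reproduce the Bulletin 17B/17C procedure used by PeakFQ.
import numpy as np
from scipy import stats

def lp3_quantiles(annual_peaks_cfs, return_periods=(2, 10, 50, 100, 500)):
    logs = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
    mean, std = logs.mean(), logs.std(ddof=1)
    skew = stats.skew(logs, bias=False)          # station skew only, no regional weighting
    quantiles = {}
    for T in return_periods:
        p = 1.0 - 1.0 / T                        # non-exceedance probability
        k = stats.pearson3.ppf(p, skew)          # standardized frequency factor
        quantiles[T] = 10 ** (mean + k * std)    # discharge of the T-year flood
    return quantiles

# Example with synthetic annual peaks (cfs); a real analysis would use the
# simulated 1960-2100 series described above.
peaks = np.random.lognormal(mean=9.0, sigma=0.5, size=80)
print(lp3_quantiles(peaks))
```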
Abstract:
Three pavement design software packages were compared with regard to how they differ in determining design input parameters and how those parameters influence pavement thickness. StreetPave designs the concrete pavement thickness based on the PCA method and the equivalent asphalt pavement thickness. The WinPAS software designs both concrete and asphalt pavements following the AASHTO 1993 design method. The APAI software designs asphalt pavements based on the pre-mechanistic/empirical AASHTO methodology. First, four critical design input parameters were identified: traffic, subgrade strength, reliability, and design life. A sensitivity analysis of these four design input parameters was performed using the three pavement design software packages to identify which input parameters require the most attention during pavement design. Based on the current pavement design procedures and the sensitivity analysis results, a prototype pavement design and sensitivity analysis (PD&SA) software package was developed to retrieve the pavement thickness design value for a given condition and to allow a user to perform a pavement design sensitivity analysis. The prototype PD&SA software is a computer program that stores pavement design results in a database, designed so the user can input design data from a variety of design programs and query design results for given conditions; it was developed to demonstrate the concept of retrieving pavement design results from the database for a design sensitivity analysis. This final report does not include the prototype software, which will be validated and tested during the next phase.
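The report does not specify the prototype's internals; the sketch below only illustrates the underlying concept of storing thickness results keyed by design inputs and querying them for a one-at-a-time sensitivity analysis. The schema, column names, and values are invented for illustration and are not the actual PD&SA design or program outputs.

```python
# Conceptual sketch of a pavement design results database queried for sensitivity analysis.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE design_results (
    software TEXT, traffic_esals REAL, subgrade_cbr REAL,
    reliability REAL, design_life_yr INTEGER, thickness_in REAL)""")

# Results would normally be imported from runs of StreetPave, WinPAS, APAI, etc.;
# the numbers below are placeholders, not outputs of those programs.
rows = [
    ("StreetPave", 1e6, 5, 90, 20, 8.0),
    ("StreetPave", 5e6, 5, 90, 20, 9.5),
    ("StreetPave", 1e7, 5, 90, 20, 10.5),
]
conn.executemany("INSERT INTO design_results VALUES (?,?,?,?,?,?)", rows)

# Sensitivity query: how does thickness vary with traffic while other inputs stay fixed?
for traffic, thickness in conn.execute(
        """SELECT traffic_esals, thickness_in FROM design_results
           WHERE software='StreetPave' AND subgrade_cbr=5
             AND reliability=90 AND design_life_yr=20
           ORDER BY traffic_esals"""):
    print(f"traffic={traffic:.0e} ESALs -> thickness={thickness} in")
```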
Abstract:
Background: Combining different sources of information to improve the available biological knowledge is currently a challenge in bioinformatics. One of the most powerful approaches for integrating heterogeneous data types is kernel-based methods. Kernel-based data integration consists of two basic steps: first, a suitable kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of dimensionality reduction. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any of the datasets. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify the samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give a better understanding of the biological knowledge.
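A minimal sketch of the two-step scheme described above, using RBF kernels and an unweighted kernel sum as illustrative assumptions (the abstract does not prescribe these choices):

```python
# Sketch of kernel-based data integration followed by kernel PCA:
# one kernel per data source, a combined kernel, then dimensionality reduction.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
n = 50
X_expr = rng.normal(size=(n, 200))   # e.g., expression-like data for n samples
X_clin = rng.normal(size=(n, 10))    # e.g., clinical-like data for the same samples

# Step 1: one kernel per data set (RBF chosen here for illustration)
K1 = rbf_kernel(X_expr, gamma=1.0 / X_expr.shape[1])
K2 = rbf_kernel(X_clin, gamma=1.0 / X_clin.shape[1])

# Step 2: combine the kernels and reduce dimensionality with kernel PCA
K = K1 + K2
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
print(embedding.shape)   # (50, 2): samples projected onto the first two components
```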
Abstract:
DnaSP is a software package for a comprehensive analysis of DNA polymorphism data. Version 5 implements a number of new features and analytical methods allowing extensive DNA polymorphism analyses on large datasets. Among other features, the newly implemented methods allow for: (i) analyses on multiple data files; (ii) haplotype phasing; (iii) analyses on insertion/deletion polymorphism data; (iv) visualizing sliding window results integrated with available genome annotations in the UCSC browser.
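As a small illustration of the kind of sliding-window statistic mentioned in (iv), the sketch below computes nucleotide diversity (pi) in windows across a toy alignment. It is generic Python, not DnaSP code.

```python
# Sliding-window nucleotide diversity over aligned sequences (toy example).
from itertools import combinations

def nucleotide_diversity(window_seqs):
    """Average pairwise difference per site in a set of aligned sequences."""
    pairs = list(combinations(window_seqs, 2))
    if not pairs:
        return 0.0
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(window_seqs[0]))

def sliding_window_pi(seqs, window=100, step=25):
    length = len(seqs[0])
    for start in range(0, length - window + 1, step):
        yield start, nucleotide_diversity([s[start:start + window] for s in seqs])

# Toy alignment of three sequences
seqs = ["ACGT" * 50, "ACGA" * 50, "ACGT" * 50]
for start, pi in sliding_window_pi(seqs, window=40, step=20):
    print(start, round(pi, 4))
```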
Abstract:
Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements and illustrate a proposed data flow.
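The proposed architecture itself is not reproduced here; the toy sketch below only illustrates the general idea of a pipelined, concurrent data system in which acquisition, star data processing, and packing overlap through queued stages. Stage names and data fields are invented for illustration.

```python
# Toy pipelined data system: independent stages connected by bounded queues so
# that acquisition, processing, and packing run concurrently.
import queue, threading

def acquire(out_q, n_frames=5):
    for i in range(n_frames):
        out_q.put({"frame": i, "pixels": list(range(8))})   # stand-in for detector data
    out_q.put(None)                                          # end-of-stream marker

def process(in_q, out_q):
    while (item := in_q.get()) is not None:
        item["centroid"] = sum(item["pixels"]) / len(item["pixels"])  # toy "star data processing"
        out_q.put(item)
    out_q.put(None)

def pack(in_q):
    while (item := in_q.get()) is not None:
        print(f"frame {item['frame']} packed, centroid={item['centroid']}")

q1, q2 = queue.Queue(maxsize=2), queue.Queue(maxsize=2)
stages = [threading.Thread(target=acquire, args=(q1,)),
          threading.Thread(target=process, args=(q1, q2)),
          threading.Thread(target=pack, args=(q2,))]
for t in stages: t.start()
for t in stages: t.join()
```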
Abstract:
Statistics has become an indispensable tool in biomedical research. Thanks in particular to computer science, researchers have easy access to elementary "classical" procedures. These are often of a "confirmatory" nature: their aim is to test hypotheses formulated prior to experimentation (for example, the efficacy of a treatment). However, doctors often use them in situations more complex than intended, to discover interesting data structures and to formulate hypotheses. This inverse process may lead to misuse, which increases the number of "statistically proven" results in medical publications. The help of a professional statistician thus becomes necessary. Moreover, good, simple "exploratory" techniques are now available. In addition, medical data contain quite a high percentage of outliers (data that deviate from the majority). With classical methods it is often very difficult (even for a statistician!) to detect them, and the reliability of the results becomes questionable. New, reliable ("robust") procedures have been the subject of research for the past two decades. Their practical introduction is one of the activities of the Statistics and Data Processing Department of the University of Social and Preventive Medicine, Lausanne.
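A small numerical illustration of why robust procedures matter with outlier-prone data: a single extreme value inflates the mean and standard deviation, so the classical z-score makes the outlier look unremarkable, while the median and MAD expose it. The values below are invented.

```python
# Classical vs. robust location/scale estimates on data with one outlier.
import numpy as np

data = np.array([4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 40.0])   # last value is an outlier

mean, sd = data.mean(), data.std(ddof=1)
median = np.median(data)
mad = 1.4826 * np.median(np.abs(data - median))   # scaled to match sd for normal data

print(f"mean={mean:.2f} sd={sd:.2f}  -> z-score of 40.0: {(40.0 - mean) / sd:.2f}")
print(f"median={median:.2f} MAD={mad:.2f} -> robust score of 40.0: {(40.0 - median) / mad:.2f}")
# The classical z-score (roughly 2.5) hides the outlier; the robust score (roughly 160) exposes it.
```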
Abstract:
This paper analyzes the possibilities of integrating cost information and engineering design. Special emphasis is placed on the potential of using the activity-based costing (ABC) method to formulate cost information for the needs of design engineers. The paper suggests that ABC is more useful than traditional job-order costing, but on the negative side ABC models easily become too complicated, i.e. expensive to build and maintain, and difficult to use. For engineering design, the most suitable elements of ABC are recognizing the activities of the company, constructing activity chains, identifying resources, activity and cost drivers, and calculating accurate product costs. ABC systems that include numerous cost drivers can become complex; therefore, a comprehensive ABC-based cost information system for the use of design engineers should be considered critically. Combining the suitable ideas of ABC with engineering-oriented thinking could give competent results.
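A minimal sketch of the ABC calculation the paper builds on: activity costs are traced to products through cost drivers and driver rates. The activities, rates, and quantities below are invented for illustration.

```python
# Activity-based costing: compute driver rates per activity, then trace activity
# costs to products according to the driver quantities each product consumes.
activities = {                        # activity -> (total cost, total driver volume)
    "machining":      (120_000, 4_000),   # driver: machine hours
    "assembly":       ( 60_000, 3_000),   # driver: labor hours
    "design_changes": ( 30_000,   150),   # driver: engineering change orders
}
driver_rates = {a: cost / volume for a, (cost, volume) in activities.items()}

product_usage = {                     # product -> driver quantities consumed per unit
    "pump_A": {"machining": 2.0, "assembly": 1.5, "design_changes": 0.05},
    "pump_B": {"machining": 0.8, "assembly": 3.0, "design_changes": 0.30},
}

for product, usage in product_usage.items():
    cost = sum(driver_rates[a] * qty for a, qty in usage.items())
    print(f"{product}: activity-based unit cost = {cost:.2f}")
```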
Abstract:
It is often assumed that total head losses in a sand filter are due solely to the filtration media and that analytical solutions, such as the Ergun equation, are available to compute them. However, total head losses are also due to auxiliary elements (inlet and outlet pipes and filter nozzles), which produce undesirable head losses because they increase energy requirements without contributing to the filtration process. In this study, ANSYS Fluent version 6.3, a commercial computational fluid dynamics (CFD) software program, was used to compute head losses in different parts of a sand filter. Six numerical filter models of varying complexity were used to understand the hydraulic behavior of the various filter elements and their contribution to total head losses. The simulation results show that 84.6% of the total head losses were caused by the sand bed and 15.4% by the auxiliary elements (4.4% in the inlet and outlet pipes, and 11.0% in the perforated plate and nozzles). Simulation results from the different models show the important role of the nozzles in the hydraulic behavior of the sand filter. The ratio of the flow area through the nozzles to the flow area through the perforated plate is an important design parameter for reducing total head losses: a reduced ratio, caused by nozzle clogging, would disproportionately increase the total head losses in the sand filter.
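For reference, the analytical approach mentioned at the start can be sketched as follows. The Ergun equation covers only the packed sand bed itself, which is precisely why CFD is needed to account for the auxiliary elements; the parameter values below are plausible but invented.

```python
# Head loss across a packed sand bed from the Ergun equation (water at ~20 C).
def ergun_head_loss(velocity, bed_depth, d_particle, porosity,
                    rho=998.0, mu=1.0e-3, g=9.81):
    """Head loss (m of water) across a packed bed of depth bed_depth (m),
    for superficial velocity (m/s), grain diameter (m), and bed porosity (-)."""
    eps = porosity
    viscous = 150 * mu * (1 - eps) ** 2 * velocity / (eps ** 3 * d_particle ** 2)
    inertial = 1.75 * (1 - eps) * rho * velocity ** 2 / (eps ** 3 * d_particle)
    dp_per_length = viscous + inertial          # Pa per metre of bed
    return dp_per_length * bed_depth / (rho * g)

# e.g., 0.3 m sand bed, 0.65 mm grains, 40% porosity, 0.015 m/s superficial velocity
print(f"{ergun_head_loss(0.015, 0.3, 0.65e-3, 0.40):.3f} m of head loss")
```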