926 results for Box girder bridges Design and construction Evaluation Data processing
Abstract:
Building on previous research, the goal of this project was to identify significant influencing factors for the Iowa Department of Transportation (DOT) to consider in future updates of its Instructional Memorandum (I.M.) 3.213, which provides guidelines for determining the need for traffic barriers (guardrail and bridge rail) at secondary roadway bridges, specifically factors that might be significant for the bridge rail rating system component of I.M. 3.213. A literature review was conducted of policies and guidelines in other states and, specifically, of studies related to traffic barrier safety countermeasures at bridges in several states. In addition, a safety impact study was conducted to evaluate crash characteristics unrelated to driver behavior on secondary road structures in Iowa, using road data, structure data, and crash data from 2004 to 2013. Statistical models (negative binomial regression) were used to determine which factors were significant in terms of crash volume and crash severity. The study found that crashes are somewhat more frequent on or at bridges possessing certain characteristics: traffic volume greater than 400 vehicles per day (vpd) (paved) or greater than 50 vpd (unpaved), bridge length greater than 150 ft (paved) or greater than 35 ft (unpaved), bridge width narrower than the approach (paved) or narrower than 20 ft (unpaved), and bridge age greater than 25 years (both paved and unpaved). No specific roadway or bridge characteristic was found to contribute to more serious crashes. The study also confirmed previous research findings that crashes involving bridges on secondary roads are rare, low-severity events. Although the findings support the appropriate use of bridge rails, the study concludes that prescriptive guidelines for bridge rail use on secondary roads may not be necessary, given the limited crash expectancy and the lack of differences in crash expectancy among the various combinations of explanatory characteristics.
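As an illustration of the modeling approach mentioned above, here is a minimal sketch of a negative binomial crash-frequency model in Python with statsmodels. The file name and column names are hypothetical placeholders for the road and structure attributes described in the abstract, not the study's actual data or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per secondary-road bridge, with a crash count
# and candidate explanatory characteristics like those in the abstract.
df = pd.read_csv("bridge_crashes.csv")

# Negative binomial regression of crash frequency on bridge/road attributes.
model = smf.negativebinomial(
    "crash_count ~ aadt + bridge_length_ft + width_vs_approach_ft + bridge_age_yr",
    data=df,
)
result = model.fit()
print(result.summary())  # significant coefficients flag influential factors
```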
Abstract:
The design number of gyrations (Ndesign) introduced by the Strategic Highway Research Program (SHRP) for the Superior Performing Asphalt Pavement (Superpave) mix design method has been commonly used in flexible pavement design throughout the US since 1996. Ndesign, also known as the compaction effort, is used to simulate field compaction during construction and has been reported to produce air void levels at which pavements do not reach ultimate density within the initial 2 to 3 years post-construction, potentially having an adverse impact on long-term performance. Other state transportation agencies have conducted studies validating Ndesign for their specific regions, which resulted in modifications of the gyration effort for the various traffic levels. Validating this relationship for Iowa asphalt mix designs will lead to better correlations between mix design target voids, field voids, and performance. This study comprehensively analyzed current Ndesign levels against existing mixes and pavements and developed initial asphalt mix design recommendations that identify an optimum Ndesign using performance test data.
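For context, a minimal sketch (ours, not the study's procedure) of the standard Superpave densification relationship underlying such an analysis: percent of maximum theoretical density (%Gmm) grows roughly linearly with the logarithm of gyration count, so a candidate Ndesign can be read off as the number of gyrations at which a mix reaches 96% Gmm, i.e., the conventional 4% design air voids. All data values below are hypothetical.

```python
import numpy as np

# Hypothetical gyratory compaction data for one mix: gyration counts and
# measured percent of maximum theoretical density (%Gmm) at each.
gyrations = np.array([10, 25, 50, 75, 100, 125, 160])
pct_gmm = np.array([89.5, 91.8, 93.6, 94.6, 95.3, 95.8, 96.4])

# Densification is approximately linear in log10(gyrations).
slope, intercept = np.polyfit(np.log10(gyrations), pct_gmm, 1)

# Gyrations needed to reach 96% Gmm (4% air voids), a conventional target.
n_design = 10 ** ((96.0 - intercept) / slope)
print(round(n_design))  # candidate Ndesign for this (hypothetical) mix
```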
Abstract:
Portland cement concrete (PCC) pavement undergoes repeated environmental load-related deflection resulting from temperature and moisture variations across the pavement depth. This phenomenon, referred to as PCC pavement curling and warping, has been known and studied since the mid-1920s. Slab curvature can be further magnified under repeated traffic loads and may ultimately lead to fatigue failures, including top-down and bottom-up transverse, longitudinal, and corner cracking. It is therefore important to measure the “true” degree of curling and warping in PCC pavements, not only for quality control (QC) and quality assurance (QA) purposes, but also to achieve a better understanding of its relationship to long-term pavement performance. In order to better understand the curling and warping behavior of PCC pavements in Iowa and provide recommendations to mitigate curling and warping deflections, field investigations were performed at six existing sites during the late fall of 2015. These sites included PCC pavements with various ages, slab shapes, mix design aspects, and environmental conditions during construction. A stationary light detection and ranging (LiDAR) device was used to scan the slab surfaces. The degree of curling and warping along the longitudinal, transverse, and diagonal directions was calculated for the selected slabs based on the point clouds acquired using LiDAR. The results and findings are correlated to variations in pavement performance, mix design, pavement design, and construction details at each site. Recommendations regarding how to minimize curling and warping are provided based on a literature review and this field study. Some examples of using point cloud data to build three-dimensional (3D) models of the overall curvature of the slab shape are presented to show the feasibility of using this 3D analysis method for curling and warping analysis.
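To make the point-cloud computation concrete, here is a minimal sketch under our own assumptions (a least-squares quadratic surface fit and a corner-to-corner chord reference, neither of which the report specifies) of estimating a curling profile along a slab diagonal from LiDAR returns.

```python
import numpy as np

def diagonal_curl_profile(points, n=50):
    """Estimate slab deflection along a diagonal from (N, 3) LiDAR x, y, z points."""
    x, y, z = points.T
    # Least-squares quadratic surface: z ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    c, *_ = np.linalg.lstsq(A, z, rcond=None)
    # Sample the fitted surface along the slab diagonal.
    t = np.linspace(0.0, 1.0, n)
    xd = x.min() + t * (x.max() - x.min())
    yd = y.min() + t * (y.max() - y.min())
    Ad = np.column_stack([np.ones_like(xd), xd, yd, xd**2, xd * yd, yd**2])
    zd = Ad @ c
    # Deflection relative to the chord joining the two corners: positive
    # values mean the fitted surface lies above the corner-to-corner chord.
    chord = zd[0] + t * (zd[-1] - zd[0])
    return zd - chord
```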
Abstract:
The SD card (Secure Digital memory card) is a widely used portable storage medium. Current research on SD cards focuses mainly on SD card controllers based on FPGAs (Field Programmable Gate Arrays). Most rely on an API (Application Programming Interface), the AHB (Advanced High-performance Bus), and similar interfaces, and are dedicated to ultra-high-speed communication between the SD card and the host system. SD card controllers play a vital role in high-speed cameras and other specialized fields. The FPGA-based file system and SD 2.0 IP (Intellectual Property core) presented here not only achieves a good transmission rate but also provides systematic file management, while remaining portable and practical. The design contributes three main innovations. First, the combination and integration of the file system and the SD card controller make the overall system highly integrated and practical. The popular SD 2.0 protocol is implemented for the communication channel, and a pure digital logic design in VHDL (Very High Speed Integrated Circuit Hardware Description Language) integrates the hardware-layer SD card controller with a FAT32 file system. Second, the file management mechanism makes file processing more convenient, especially for batch processing of small files: it eases the pressure on the host system from frequently accessing and processing them, thereby enhancing overall efficiency. Third, the all-digital design ensures high performance. For transmission security, a CRC (Cyclic Redundancy Check) algorithm protects data transfers; each module is designed without platform-specific macro cells for better portability; and custom instructions and interfaces make the IP easy to use. The design was tested on multiple platforms, using both Xilinx and Altera FPGA development environments, with timing simulation and debugging of each module. Test results show that the FPGA-based file system IP supports SD, TF, and microSD cards under the 2.0 protocol in SD bus mode and successfully implements systematic management of stored files. Read and write rates on a Kingston Class 10 card are approximately 24.27 MB/s and 16.94 MB/s, respectively.
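As a concrete illustration of the CRC protection mentioned above, here is a minimal Python sketch (ours; the thesis implements this in VHDL) of CRC7, the 7-bit check with polynomial x^7 + x^3 + 1 that the SD specification uses for command frames; the SD data path uses a separate CRC16.

```python
def crc7(data: bytes) -> int:
    """Bitwise CRC7 (polynomial x^7 + x^3 + 1) as used for SD command frames."""
    crc = 0
    for byte in data:
        d = byte
        for _ in range(8):
            crc = (crc << 1) & 0xFF
            if (d ^ crc) & 0x80:  # feedback = next message bit XOR register MSB
                crc ^= 0x09       # apply the x^3 + 1 taps
            d = (d << 1) & 0xFF
    return crc & 0x7F

# CMD0 (GO_IDLE_STATE) with a zero argument: the final frame byte is
# (CRC7 << 1) | end-bit, which gives the well-known value 0x95.
cmd0 = bytes([0x40, 0x00, 0x00, 0x00, 0x00])
assert (crc7(cmd0) << 1) | 1 == 0x95
```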
Abstract:
The PhD project addresses the potential of using concentrating solar power (CSP) plants as a viable alternative energy producing system in Libya. Exergetic, energetic, economic, and environmental analyses are carried out for a particular type of CSP plant. Although the study addresses a specific configuration, a 50 MW parabolic trough CSP plant, it is sufficiently general to be applied to other configurations. The novelty of the study, in addition to modeling and analyzing the selected configuration, lies in the use of a state-of-the-art exergetic analysis combined with Life Cycle Assessment (LCA). The modeling and simulation of the plant, carried out in Chapter Three, is divided into two parts: the power cycle and the solar field. The computer model developed for the analysis of the plant is based on algebraic equations describing the power cycle and the solar field. The model was solved using the Engineering Equation Solver (EES) software and is designed to define the properties at each state point of the plant and then, sequentially, to determine the energy, efficiency, and irreversibility of each component. The developed model can be used in the preliminary design of CSP plants and, in particular, for the configuration of the solar field based on existing commercial plants. Moreover, it can analyze the energetic, economic, and environmental feasibility of using CSP plants in different regions of the world, which is illustrated for the Libyan region in this study. The overall feasibility scenario is completed through an hourly analysis on an annual basis in Chapter Four. This analysis, which includes both the economic and energetic components using the “greenius” software, allows the comparison of different systems and, eventually, a particular selection. The analysis also examined the impact of project financing and incentives on the cost of energy. The main technological finding of this analysis is higher performance and a lower levelized cost of electricity (LCE) for Libya as compared to Southern Europe (Spain). Therefore, Libya has the potential of becoming attractive for the establishment of CSP plants in its territory and, in this way, of facilitating the target of several European initiatives that aim to import electricity generated by renewable sources from North African and Middle East countries. A brief review of the current cost of energy and of the potential for cost reduction in parabolic trough CSP plants is also presented. Exergetic and environmental life cycle assessment analyses are conducted for the selected plant in Chapter Five; the objectives are (1) to assess the environmental impact and cost, in terms of exergy, of the life cycle of the plant; (2) to find the points of weakness in terms of irreversibility of the process; and (3) to verify whether solar power plants can reduce the environmental impact and cost of electricity generation in comparison with fossil fuel plants, in particular a Natural Gas Combined Cycle (NGCC) plant and an oil-fired thermal power plant. The analysis also includes a thermoeconomic analysis using the specific exergy costing (SPECO) method to evaluate the cost caused by exergy destruction. The main technological findings are that the most important impact contribution lies with the solar field, at 79%, and that the materials with the highest impact are steel (47%), molten salt (25%), and synthetic oil (21%).
The “Human Health” damage category presents the highest impact (69%), followed by the “Resource” damage category (24%). In addition, the highest exergy demand is linked to the steel (47%), and there is a considerable exergetic demand related to the molten salt and synthetic oil, with values of 25% and 19%, respectively. Finally, in the comparison with fossil fuel power plants (NGCC and oil), the CSP plant presents the lowest environmental impact, while the worst environmental performance is reported for the oil power plant, followed by the NGCC plant. The solar field presents the largest cost rate, while the boiler is the component with the highest cost rate among the power cycle components. Thermal storage allows CSP plants to overcome solar irradiation transients, to respond to electricity demand independent of weather conditions, and to extend electricity production beyond the availability of daylight. A numerical analysis of the thermal transient response of a thermocline storage tank is carried out for the charging phase. The system of equations describing the numerical model is solved using time-implicit, space-backward finite differences, encoded within the Matlab environment. The analysis yielded the following findings: the predictions agree well with the experiments for the time evolution of the thermocline region, particularly for the regions away from the top inlet. The deviations observed near the inlet are most likely due to the high level of turbulence resulting from localized mixing in this region; a simple analytical model taking this increased turbulence level into consideration was developed and leads to some improvement of the predictions. This approach requires practically no additional computational effort and relates the effective thermal diffusivity to the mean effective velocity of the fluid at each height of the system. Altogether, the study indicates that the selected parabolic trough CSP plant has the edge over competing technologies for locations where direct normal irradiance (DNI) is high and where land usage is not an issue, such as the shoreline of Libya.
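To illustrate the discretization described above, here is a minimal one-dimensional sketch (ours, in Python rather than the thesis's Matlab; all parameter values are placeholders) of a thermocline charging model: advection-diffusion of the fluid temperature solved with time-implicit, space-backward (upwind) finite differences.

```python
import numpy as np

def charge_thermocline(n=200, L=3.0, u=1e-4, alpha=1e-6,
                       T_cold=290.0, T_hot=660.0, dt=5.0, steps=2000):
    """1D advection-diffusion dT/dt + u dT/dz = alpha d2T/dz2, implicit upwind."""
    dz = L / (n - 1)
    a = u * dt / dz          # advection (Courant) number
    d = alpha * dt / dz**2   # diffusion number
    T = np.full(n, T_cold)   # tank initially at the cold temperature
    # Tridiagonal operator: (1 + a + 2d) T_i - (a + d) T_{i-1} - d T_{i+1} = T_i^old
    A = np.zeros((n, n))
    A[0, 0] = 1.0                     # Dirichlet inlet: hot fluid enters at z = 0
    for i in range(1, n - 1):
        A[i, i - 1] = -(a + d)
        A[i, i] = 1.0 + a + 2.0 * d
        A[i, i + 1] = -d
    A[-1, -2], A[-1, -1] = -1.0, 1.0  # zero-gradient outlet condition
    for _ in range(steps):
        b = T.copy()
        b[0], b[-1] = T_hot, 0.0
        T = np.linalg.solve(A, b)     # implicit step (unconditionally stable)
    return T  # axial temperature profile showing the thermocline front
```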
Abstract:
Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area-efficient, low-power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis analyzes various floating point unit (FPU) components with respect to Rigel and presents a candidate FPU design that balances performance, area, and power and is suitable for massively parallel architectures like Rigel.
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
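To make the connectivity definitions above concrete, here is a tiny sketch (ours, using networkx purely for illustration; the thesis concerns approximation algorithms, not this computation) checking how fault-tolerant a vertex pair is. By Menger's theorem, the minimum u-v edge cut equals the maximum number of edge-disjoint u-v paths.

```python
import networkx as nx

# Toy network: three internally vertex-disjoint paths between u and v.
G = nx.Graph()
G.add_edges_from([
    ("u", "a"), ("a", "v"),
    ("u", "b"), ("b", "v"),
    ("u", "c"), ("c", "v"),
])

# u and v are k-edge-connected iff edge_connectivity(G, u, v) >= k, i.e.,
# any k - 1 edge failures leave them connected; similarly for vertices.
print(nx.edge_connectivity(G, "u", "v"))  # 3: survives any 2 edge failures
print(nx.node_connectivity(G, "u", "v"))  # 3: survives any 2 node failures
```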
Documentation control process of the Brazilian Multipurpose Reactor - conceptual design and basic design
Abstract:
Language provides an interesting lens through which to look at state-building processes because of its cross-cutting nature. For example, in addition to its symbolic value and appeal, a national language has other roles in the process, including: (a) becoming the primary medium of communication, which permits the nation to function efficiently in its political and economic life, (b) promoting social cohesion, allowing the nation to develop a common culture, and (c) forming a primordial basis for self-determination. Moreover, because of its cross-cutting nature, language interventions are rarely isolated activities. Languages are adopted by speakers, taking root in and spreading between communities, because they are legitimated by legislation and then reproduced through institutions like the education and military systems. Pádraig Ó Riagáin (1997) makes a case for this, observing that “Language policy is formulated, implemented, and accomplishes its results within a complex interrelated set of economic, social, and political processes which include, inter alia, the operation of other non-language state policies” (p. 45). In the Turkish case, language's foundational role in the formation of the nation-state, together with its linkages to human rights issues, raises interesting questions about how socio-cultural practices become reproduced through the formation of institutional infrastructure. This dissertation is a country-level case study of Turkey's nation-state building process viewed through the lens of its language and education policy development, with a focus on the early years of the Republic between 1927 and 1970. The project examines how different groups self-identified or were identified (as the case may be) in official Turkish statistical publications (e.g., the Turkish annual statistical yearbooks and the population censuses) during the period when language and ethnicity data were made publicly available. The overarching questions this dissertation explores include: 1. What were the geo-political conditions surrounding the development of, and influencing, the Turkish government's language and education policies? 2. Are there any observable patterns in the geo-spatial distribution of language, literacy, and education participation rates over time? In what ways are these traditionally linked variables (language, literacy, education participation) problematic? 3. What do changes in population identifiers, e.g., language and ethnicity, suggest about the government's approach towards nation-state building through the construction of a civic Turkish identity and institution building? Archival secondary source data were digitized, aggregated by categories relevant to this project at national and provincial levels and over time (primarily between 1927 and 2000), re-aggregated into values that could be compared longitudinally, and then layered onto administrative maps. This dissertation contributes to the existing body of social policy literature by taking an interdisciplinary approach to the larger socio-economic contexts in which language and education policies are produced.
Abstract:
INTRODUCTION: In common with much of the developed world, Scotland has a severe and well established problem with overweight and obesity in childhood, with recent figures demonstrating that 31% of Scottish children aged 2-15 years were overweight (including obese) in 2014. This problem is more pronounced in socioeconomically disadvantaged groups and in older children across all economic groups (Scottish Health Survey, 2014). Children who are overweight or obese are at increased risk of a number of adverse health outcomes in the short term and throughout their life course (Lobstein and Jackson-Leach, 2006). The Scottish Government tasked all Scottish Health Boards with developing and delivering child healthy weight interventions for clinically overweight or obese children in an attempt to address this health problem. It is therefore imperative to deliver high quality, affordable, appropriately targeted interventions which can make a sustained impact on children’s lifestyles, setting them up for life as healthy weight adults. This research aimed to inform the design, readiness for application, and Health Board suitability of an effective primary school-based curricular child healthy weight intervention. METHODS: The process involved in conceptualising a child healthy weight intervention, developing the intervention, planning for implementation, and subsequent evaluation was guided by the PRECEDE-PROCEED Model (Green and Kreuter, 2005) and the Intervention Mapping protocol (Lloyd et al. 2011). RESULTS: The outputs from each stage of the development process were used to formulate a child healthy weight intervention conceptual model and then develop plans for delivery and evaluation. DISCUSSION: The Fit for School conceptual model developed through this process has the potential to theoretically modify energy balance related behaviours associated with unhealthy weight gain in childhood. It also has the potential to be delivered at Health Board scale within current organisational restrictions.
Abstract:
Ropivacaine (RVC) is an enantiomerically pure local anesthetic (LA) widely used in surgical procedures, which presents physico-chemical and therapeutic properties similar to those of bupivacaine (BPV) but is associated with lower systemic toxicity. This study focuses on the development and pharmacological evaluation of an inclusion complex of RVC in 2-hydroxypropyl-beta-cyclodextrin (HP-beta-CD). Phase-solubility diagrams allowed the determination of the association constant between RVC and HP-beta-CD (9.46 M-1) and showed an increase in RVC solubility upon complexation. Release kinetics revealed a decrease in the RVC release rate, and reduced hemolytic effects after complexation were observed (onset at 3.7 mM and 11.2 mM for RVC and RVC:HP-beta-CD, respectively). Differential scanning calorimetry (DSC), scanning electron microscopy (SEM), and X-ray analysis showed the formation and the morphology of the complex. Nuclear magnetic resonance (NMR) and Job plot experiments afforded data regarding the inclusion complex stoichiometry (1:1) and topology. Sciatic nerve blockade studies showed that RVC:HP-beta-CD reduced the latency without increasing the duration of motor blockade, while prolonging the duration and intensity of the sensory blockade (p < 0.001) induced by the LA in mice. These results identify the RVC:HP-beta-CD complex as an effective novel approach to enhance the pharmacological effects of RVC, presenting it as a promising new anesthetic formulation.
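For readers unfamiliar with the phase-solubility treatment, an association constant like the one quoted above is conventionally obtained from the Higuchi-Connors relation for an A_L-type (1:1) diagram, K = slope / (S0 (1 - slope)). The short sketch below is ours, with hypothetical slope and intrinsic-solubility values chosen only to show a calculation of the same order as the reported 9.46 M-1.

```python
def association_constant(slope: float, s0: float) -> float:
    """1:1 association constant (M^-1) from a linear (A_L-type) phase-solubility plot.

    slope: slope of drug solubility vs. cyclodextrin concentration (dimensionless)
    s0: intrinsic solubility of the drug, in M
    """
    return slope / (s0 * (1.0 - slope))

# Hypothetical inputs for illustration only (not the paper's measured values).
print(association_constant(slope=0.17, s0=0.0217))  # ~9.4 M^-1
```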
Abstract:
Preserving the cultural heritage of the performing arts raises difficult and sensitive issues, as each performance is unique by nature and the juxtaposition between the performers and the audience cannot be easily recorded. In this paper, we report on an experimental research project to preserve another aspect of the performing arts—the history of their rehearsals. We have specifically designed non-intrusive video recording and on-site documentation techniques to make this process transparent to the creative crew, and have developed a complete workflow to publish the recorded video data and their corresponding meta-data online as Open Data using state-of-the-art audio and video processing to maximize non-linear navigation and hypervideo linking. The resulting open archive is made publicly available to researchers and amateurs alike and offers a unique account of the inner workings of the worlds of theater and opera.
Abstract:
Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded, international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes. Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia, and harmful algal bloom studies, as well as the management of living marine resources. In order to prepare for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Subpolar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects motivated a planning meeting to discuss the requirements for and applications of a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array.
Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the needed resolution to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models. With an endurance of four years for a Biogeochemical-Argo float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000 float array. The lifetime cost for a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000. A global Biogeochemical-Argo system would thus cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, with ~$6,250,000 each for the EU and for Austral/Asia and Canada. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and start for such a system.
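The cost arithmetic above is easy to verify; a quick back-of-envelope sketch (ours) using the abstract's figures, where the split interpretation (US half, with the EU and Austral/Asia plus Canada each covering a quarter) is our reading of the stated shares:

```python
# All figures are taken from the abstract.
floats_in_array = 1000
endurance_years = 4
lifetime_cost_per_float = 100_000  # USD: capital, calibration, data management

replacements_per_year = floats_in_array / endurance_years
annual_cost = replacements_per_year * lifetime_cost_per_float
print(replacements_per_year)   # 250.0 floats per year
print(annual_cost)             # 25000000.0 USD per year

us_share = annual_cost / 2     # US provides half under the present Argo paradigm
print(us_share, us_share / 2)  # 12500000.0 and 6250000.0 for the other partners
```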