918 results for Maximum design load


Relevance:

80.00%

Publisher:

Abstract:

A number of investigators have studied the application of oscillatory energy to a metal undergoing plastic deformation. Their results have shown that oscillatory stresses reduce both the stress required to initiate plastic deformation and the friction forces between the tool and workpiece. The first two sections of this thesis discuss, historically and technically, the development of oscillatory energy techniques as an aid to metal forming, with particular reference to wire drawing. The remainder of the thesis discusses the research undertaken to study the effect of applying longitudinal oscillations to wire drawing. Oscillations were supplied by an electro-hydraulic vibrator at frequencies in the range 25 to 500 c/s, and drawing tests were performed at drawing speeds up to 50 ft/min on a 2000 lbf bull-block. Equipment was designed to measure the drawing force, drawing torque, amplitude of die and drum oscillation, and drawing speed. Reasons are given for selecting mild steel, pure and hard aluminium, stainless steel and hard copper as the materials to be drawn, and the experimental procedure and calibration of the measuring equipment are described. Results show that when oscillatory stresses are applied at frequencies within the range investigated: (a) there is no reduction in the maximum drawing load; (b) with a sodium stearate lubricant there is a negligible reduction in the coefficient of friction between the die and wire; (c) pure aluminium does not absorb sufficient oscillatory energy to ease the movement of dislocations; (d) hard aluminium is not softened by oscillatory energy accelerating the diffusion process; (e) hard copper is not cyclically softened. A vibration analysis of the bull-block and wire showed that oscillatory drawing in this frequency range is a mechanical process of straining and unstraining the drawn wire, and is dependent upon the stiffness of the material being drawn and of the drawing machine.
Directions for further work are suggested.
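The stiffness-dependent straining/unstraining mechanism described in the abstract can be illustrated with a minimal sketch (all values hypothetical): if the drawn wire behaves as a linear spring of stiffness k, a superimposed sinusoidal displacement raises the peak load but leaves the mean drawing load unchanged, consistent with finding (a).

```python
import math

# Illustrative spring model of oscillatory drawing (all values hypothetical):
# the drawn wire acts as a linear spring, so a superimposed sinusoidal
# displacement strains and unstrains it about the steady drawing load.
def drawing_load(mean_load_n, stiffness_n_per_m, amplitude_m, freq_hz, t_s):
    return mean_load_n + stiffness_n_per_m * amplitude_m * math.sin(
        2 * math.pi * freq_hz * t_s)

# Sample one second of drawing with a 50 Hz oscillation superimposed.
loads = [drawing_load(1000.0, 5e4, 1e-3, 50.0, t / 1000) for t in range(1000)]
mean_load = sum(loads) / len(loads)   # unchanged by the oscillation
peak_load = max(loads)                # raised by k * amplitude = 50 N
```

The oscillation merely cycles the elastic strain in the wire; the time-averaged load, which governs drawing, stays at its steady value.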

Relevance:

80.00%

Publisher:

Abstract:

This thesis is a study of performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities for CEP performance management. The methodologies used to benchmark CEP systems in many performance studies focus on scaling the injected load without considering the impact of the functional capabilities of CEP systems. This thesis proposes an approach for evaluating the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems: CEPBen. The CEPBen benchmark platform explores the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating their performance. Studies of factors and new metrics are carried out with the CEPBen platform on Esper. Different measurement points for response time in CEP performance management are discussed, and the response time of a targeted event is proposed as a quality-of-service metric to be used alongside the traditional response time. Maximum query load is proposed as a capacity indicator with respect to query complexity, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a factor that influences CEP system performance.
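The distinction between the two response-time measurement points can be sketched as follows (the event schema and timestamps are hypothetical, not CEPBen's actual API): the traditional response time averages latency over every processed event, while the targeted-event response time considers only the events that fire the registered query.

```python
# Hypothetical event log: (arrival_ts, detection_ts, fires_pattern), in seconds.
events = [
    (0.00, 0.05, False),
    (0.10, 0.12, True),   # a targeted event: it completes a pattern match
    (0.20, 0.26, False),
    (0.30, 0.31, True),
]

# Traditional response time: mean processing latency over all events.
traditional_rt = sum(d - a for a, d, _ in events) / len(events)

# Targeted-event response time: latency of only those events that fire
# the registered query, i.e. the delay a subscriber actually observes.
fired = [d - a for a, d, hits in events if hits]
targeted_rt = sum(fired) / len(fired)
```

Here the two metrics diverge (0.035 s vs 0.015 s), which is why the thesis argues they should be reported together.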

Relevance:

80.00%

Publisher:

Abstract:

Widespread damage to roofing materials (such as tiles and shingles) on low-rise buildings, even in weaker hurricanes, has raised concerns regarding design load provisions and construction practices. The building codes currently used for designing low-rise building roofs are mainly based on test results from building models that generally do not simulate the architectural features of roofing materials, features that may significantly influence the wind-induced pressures. Full-scale experiments were conducted under high winds to investigate the effects of the architectural details of high-profile roof tiles and asphalt shingles on the net pressures that are often responsible for damage to these roofing materials. Effects on the vulnerability of roofing materials were also studied. Roof models with bare, tiled, and shingled decks were tested. Pressures acting on both the top and bottom surfaces of the roofing materials were measured to understand their effects on the net uplift loading. The area-averaged peak pressure coefficients obtained from the bare, tiled, and shingled roof decks were compared. In addition, a set of wind tunnel tests on a tiled roof deck model was conducted to verify the effects of the tiles' cavity internal pressure. Both the full-scale and the wind tunnel results showed that the underside pressure on a roof tile can either aggravate or alleviate wind uplift, depending on the tile's orientation on the roof with respect to the wind angle of attack. For shingles, the underside pressure can aggravate wind uplift if the shingle is located near the center of the roof deck. Bare-deck modeling to estimate design wind uplift on shingled decks may be acceptable for most locations, but not for field (roof interior) locations, where it could underestimate the uplift on shingles by 30-60%.
In addition, some initial quantification of the effects of roofing materials on wind uplift was performed by studying the wind uplift load ratio for tiled versus bare deck and shingled versus bare deck. Vulnerability curves, with and without considering the effects of tiles' cavity internal pressure, showed significant differences. Aerodynamic load provisions for low-rise buildings' roofs and their vulnerability can thus be more accurately evaluated by considering the effects of the roofing materials.
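The aggravation/alleviation mechanism can be sketched with illustrative pressure coefficients (the values below are hypothetical, not measurements from this study): the net uplift on a roofing element is the difference between the top-surface pressure and the underside (cavity) pressure.

```python
# Net pressure coefficient on a roofing element: top-surface pressure minus
# the underside (cavity) pressure acting from below. Negative = uplift.
def net_cp(cp_top, cp_underside):
    return cp_top - cp_underside

# The underside pressure can aggravate uplift (positive cavity pressure under
# a suction peak) or alleviate it, depending on tile orientation to the wind:
aggravated = net_cp(cp_top=-2.5, cp_underside=+0.4)   # net -2.9: more uplift
alleviated = net_cp(cp_top=-2.5, cp_underside=-0.8)   # net -1.7: less uplift
```

This is why bare-deck measurements, which see only the top-surface pressure, can misestimate the load on the installed roofing material.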

Relevance:

80.00%

Publisher:

Abstract:

A water quality model was developed to analyze the impact of hydrological events on mercury contamination of the Upper East Fork Poplar Creek, Tennessee. The model simulates surface and subsurface hydrology and transport (MIKE SHE and MIKE 11) coupled with the reactive transport of sediments and mercury (ECO Lab). The model was used to simulate the distribution of mercury contamination in the water and sediments as a function of daily hydrological events. Results from the model show a high correlation between suspended solids and mercury in the water, due to the affinity of mercury for suspended organics. The governing parameters for the distribution of total suspended solids and mercury contamination were the critical stream velocity for particle resuspension, the rates of resuspension and production of particles, the settling velocity, the soil-water partition coefficient, and the desorption rate of mercury in the water. Flow and load duration curves at the watershed outlet were used to calibrate the model and to determine the impact of hydrological events on the total maximum daily load at Station 17. The results confirmed the strong link between hydrology and mercury transport.
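The load duration curves used for calibration can be sketched as follows (the daily flows and concentrations are hypothetical): a daily load is the product of discharge and concentration, and the duration curve ranks those loads against their exceedance probability.

```python
# Hypothetical daily records: discharge (m^3/s) and mercury concentration (ng/L).
flows = [0.5, 1.2, 3.4, 0.8, 2.1, 5.6, 0.9, 1.5]
conc  = [40,  55,  90,  45,  70,  120, 50,  60]

# Daily mercury load in mg/day:
# (m^3/s)(ng/L) * 86400 s/day * 1000 L/m^3 * 1e-6 mg/ng
loads = [q * c * 86400 * 1000 * 1e-6 for q, c in zip(flows, conc)]

# Load duration curve: rank loads in descending order and assign each an
# exceedance probability using the Weibull plotting position i/(n+1).
ranked = sorted(loads, reverse=True)
duration_curve = [(i / (len(ranked) + 1), load)
                  for i, load in enumerate(ranked, 1)]
```

Comparing such a curve against an allowable-load line is the standard way to see under which flow regimes a TMDL is exceeded.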

Relevance:

80.00%

Publisher:

Abstract:

To better address stream impairments due to excess nitrogen and phosphorus and to accomplish the goals of the Clean Water Act, the U.S. Environmental Protection Agency (EPA) is requiring states to develop numeric nutrient criteria. An assessment of nutrient concentrations in streams on the Delmarva Peninsula showed that nutrient levels are mostly higher than numeric criteria derived by EPA for the Eastern Coastal Plain, indicating widespread water quality degradation. Here, various approaches were used to derive numeric nutrient criteria from a set of 52 streams sampled across Delmarva. Results of the percentile and y-intercept methods were similar to those obtained elsewhere. Downstream protection values show that if numeric nutrient criteria were implemented for Delmarva streams they would be protective of the Choptank River Estuary, meeting the goals of the Chesapeake Bay Total Maximum Daily Load (TMDL).
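The percentile method referred to above can be sketched as follows (the concentration sample and the choice of the 25th percentile are illustrative only): a candidate numeric criterion is read directly off the empirical distribution of observed nutrient concentrations.

```python
# Hypothetical total-nitrogen concentrations (mg/L) from surveyed streams.
tn = [0.4, 0.7, 0.9, 1.1, 1.3, 1.6, 2.0, 2.4, 3.1, 4.0]

def percentile(data, p):
    """Linear-interpolation percentile of a sample (p in percent)."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Percentile method: a lower percentile of the full population is sometimes
# used as a candidate criterion when undisturbed reference sites are scarce.
criterion = percentile(tn, 25)
```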

Relevance:

80.00%

Publisher:

Abstract:

Nonpoint source (NPS) pollution from agriculture is the leading source of water quality impairment in U.S. rivers and streams, and a major contributor to impairment of lakes, wetlands, estuaries and coastal waters (U.S. EPA 2016). Using data from a survey of farmers in Maryland, this dissertation examines the effects of a cost sharing policy designed to encourage adoption of conservation practices that reduce NPS pollution in the Chesapeake Bay watershed. This watershed is the site of the largest Total Maximum Daily Load (TMDL) implemented to date, making it an important setting in the U.S. for water quality policy. I study two main questions related to the reduction of NPS pollution from agriculture. First, I examine the issue of additionality of cost sharing payments by estimating the direct effect of cover crop cost sharing on the acres of cover crops, and the indirect effect of cover crop cost sharing on the acres of two other practices: conservation tillage and contour/strip cropping. A two-stage simultaneous equation approach is used to correct for voluntary self-selection into cost sharing programs and to account for substitution effects among conservation practices. Quasi-random Halton sequences are employed to solve the system of equations for conservation practice acreage and to minimize the computational burden involved. By considering patterns of agronomic complementarity or substitution among conservation practices (Blum et al., 1997; USDA SARE, 2012), this analysis estimates the water quality impacts of the crowding-in or crowding-out of private investment in conservation due to public incentive payments. Second, I connect the econometric behavioral results with model parameters from the EPA's Chesapeake Bay Program to conduct a policy simulation of water quality effects. I expand the econometric model to also consider the potential loss of vegetative cover due to cropland incentive payments, or slippage (Lichtenberg and Smith-Ramirez, 2011).
Econometric results are linked with the Chesapeake Bay Program watershed model to estimate the change in abatement levels and costs for nitrogen, phosphorus and sediment under various behavioral scenarios. Finally, I use inverse sampling weights to derive statewide abatement quantities and costs for each of these pollutants, comparing these with TMDL targets for agriculture in Maryland.
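Halton draws of the kind mentioned above are straightforward to generate; a minimal sketch (the bases and number of draws are chosen purely for illustration):

```python
def halton(index, base):
    """Return the index-th element of the Halton sequence for a given base."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base                  # next radical-inverse digit weight
        r += f * (index % base)
        index //= base
    return r

# Quasi-random 2-D draws (coprime bases 2 and 3), the kind of low-discrepancy
# points used to simulate error terms when solving simultaneous equations.
draws = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
```

Because Halton points fill the unit interval far more evenly than pseudo-random draws, far fewer of them are needed for a given simulation accuracy, which is the computational advantage the dissertation exploits.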

Relevance:

80.00%

Publisher:

Abstract:

Early water resources modeling efforts were aimed mostly at representing hydrologic processes, but the need for interdisciplinary studies has led to increasing complexity and integration of environmental, social, and economic functions. The gradual shift from merely employing engineering-based simulation models to applying more holistic frameworks is an indicator of promising changes in the traditional paradigm for the application of water resources models, supporting more sustainable management decisions. This dissertation contributes to the application of a quantitative-qualitative framework for sustainable water resources management using system dynamics simulation, as well as environmental systems analysis techniques, to provide insights for water quality management in the Great Lakes basin. The traditional linear thinking paradigm lacks the mental and organizational framework for sustainable development trajectories, and may lead to quick-fix solutions that fail to address the key drivers of water resources problems. To facilitate holistic analysis of water resources systems, systems thinking seeks to understand interactions among the subsystems. System dynamics provides a suitable framework for operationalizing systems thinking and applying it to water resources problems by offering useful qualitative tools such as causal loop diagrams (CLD), stock-and-flow diagrams (SFD), and system archetypes. The approach provides a high-level quantitative-qualitative modeling framework for "big-picture" understanding of water resources systems, stakeholder participation, policy analysis, and strategic decision making. While quantitative modeling using extensive computer simulations and optimization is still very important and needed for policy screening, qualitative system dynamics models can improve understanding of general trends and of the root causes of problems, and thus promote sustainable water resources decision making.
Within the system dynamics framework, a growth and underinvestment (G&U) system archetype governing Lake Allegan's eutrophication problem was hypothesized to explain the system's problematic behavior and identify policy leverage points for mitigation. A system dynamics simulation model was developed to characterize the lake's recovery from its hypereutrophic state and to assess a number of proposed total maximum daily load (TMDL) reduction policies, including phosphorus load reductions from point sources (PS) and non-point sources (NPS). It was shown that, for a TMDL plan to be effective, it should be considered a component of a continuous sustainability process that accounts for the dynamic feedback relationships between socio-economic growth, land use change, and environmental conditions. Furthermore, a high-level simulation-optimization framework was developed to guide watershed-scale BMP implementation in the Kalamazoo watershed. Agricultural BMPs should be given priority in the watershed in order to facilitate cost-efficient attainment of Lake Allegan's total phosphorus (TP) concentration target. However, without adequate support policies, agricultural BMP implementation may adversely affect agricultural producers. Results from a case study of the Maumee River basin show that coordinated BMP implementation across upstream and downstream watersheds can significantly improve the cost efficiency of TP load abatement.
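The stock-and-flow logic underlying such a lake model can be sketched with a single stock and hypothetical rates (this is not the dissertation's calibrated model): lake phosphorus mass accumulates the external load and drains by first-order settling/flushing, so its steady state is the load divided by the loss rate.

```python
# Minimal stock-and-flow sketch: lake total phosphorus (TP) mass is the stock,
# external load (PS + NPS) the inflow, first-order settling/flushing the
# outflow. All rates and initial values are hypothetical.
def simulate_tp(load_kg_yr, loss_rate_per_yr=0.5, tp0_kg=2000.0,
                years=40.0, dt=0.1):
    tp = tp0_kg
    for _ in range(round(years / dt)):
        tp += (load_kg_yr - loss_rate_per_yr * tp) * dt   # Euler integration
    return tp

baseline = simulate_tp(load_kg_yr=1500.0)  # settles near 1500 / 0.5 = 3000 kg
reduced  = simulate_tp(load_kg_yr=600.0)   # TMDL cut: settles near 1200 kg
```

Even this toy version shows the feedback-driven point the abstract makes: the lake's long-run state is set by the balance of inflow and outflow, so a one-off cleanup without sustained load reduction cannot hold the target.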

Relevance:

80.00%

Publisher:

Abstract:

This research is part of continued efforts to correlate the hydrology of East Fork Poplar Creek (EFPC) and Bear Creek (BC) with the long-term distribution of mercury within the overland, subsurface, and river sub-domains. The main objective of this study was to add a sedimentation module (ECO Lab) capable of simulating the reactive-transport mercury exchange mechanisms within sediments and porewater throughout the watershed. The enhanced model was then applied to a Total Maximum Daily Load (TMDL) mercury analysis for EFPC. That application used historical precipitation, groundwater levels, river discharges, and mercury concentration data retrieved from government databases and input to the model. The model was streamlined to reduce computational time and executed to predict flow discharges, total mercury concentrations, and flow duration and mercury mass rate curves at key monitoring stations under various hydrological and environmental conditions and scenarios. The computational results provided insight into the relationship between discharges and mercury mass rate curves at various stations throughout EFPC, which is important for understanding and supporting the management of mercury contamination and remediation efforts within EFPC.

Relevance:

50.00%

Publisher:

Abstract:

In this article, a minimum-weight design of carbon/epoxy laminates is carried out using genetic algorithms. New failure envelopes, developed by combining two commonly used phenomenological failure criteria, Maximum Stress (MS) and Tsai-Wu (TW), are used to obtain the minimum weight of the laminate. These are the most conservative failure envelope (MCFE) and the least conservative failure envelope (LCFE). Uniaxial and biaxial loading conditions are considered, and the optimal laminate weights obtained with the MCFE and LCFE are compared. The MCFE can be used for the design of critical load-carrying composites, while the LCFE could be used for the design of composite structures where weight reduction is much more important than safety, such as unmanned air vehicles.
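The combination rule behind the two envelopes can be sketched as follows. The strength values are hypothetical, and the Tsai-Wu expression below is deliberately simplified (equal tensile/compressive strengths, interaction term omitted); the sketch only illustrates how two criteria are combined into most- and least-conservative envelopes.

```python
def max_stress_ok(s1, s2, X=1500.0, Y=40.0):
    """Maximum Stress criterion (in-plane, hypothetical strengths in MPa)."""
    return abs(s1) <= X and abs(s2) <= Y

def tsai_wu_ok(s1, s2, X=1500.0, Y=40.0):
    """Simplified quadratic Tsai-Wu-style criterion (illustrative only)."""
    return (s1 / X) ** 2 + (s2 / Y) ** 2 <= 1.0

def mcfe_ok(s1, s2):
    # Most conservative envelope: safe only where BOTH criteria are satisfied.
    return max_stress_ok(s1, s2) and tsai_wu_ok(s1, s2)

def lcfe_ok(s1, s2):
    # Least conservative envelope: safe where EITHER criterion is satisfied.
    return max_stress_ok(s1, s2) or lcfe_helper(s1, s2)

def lcfe_helper(s1, s2):
    return tsai_wu_ok(s1, s2)
```

A stress state that passes Maximum Stress but fails the quadratic criterion is accepted by the LCFE and rejected by the MCFE, which is exactly why the MCFE yields heavier (safer) optimal laminates.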

Relevance:

50.00%

Publisher:

Abstract:

Purpose: To evaluate the survival rate, success rate, load to fracture, and finite element analysis (FEA) of maxillary central incisors and canines restored using ceramic veneers with varying preparation designs.

Methods and Materials: Thirty human maxillary central incisors and 30 canines were allocated to four groups (n=15) based on the preparation design and type of tooth: Gr1 = central incisor with a conservative preparation; Gr2 = central incisor with a conventional preparation with palatal chamfer; Gr3 = canine with a conservative preparation; Gr4 = canine with a conventional preparation with palatal chamfer. Ceramic veneers (lithium disilicate) were fabricated and adhesively cemented (Variolink Veneer). The specimens were subjected to 4 x 10^6 mechanical cycles and evaluated every 500,000 cycles to detect failures. Specimens that survived were subjected to a load-to-fracture test. Two-dimensional models were built (Rhinoceros 4.0) and evaluated (MSC.Patran 2005r2 and MSC.Marc 2005r2) on the basis of their maximum principal stress (MPS) values. Survival rates were analyzed using the Kaplan-Meier test (alpha = 0.05) and load-to-fracture values using the Student t-test (alpha = 0.05).

Results: All groups showed 100% survival rates. The Student t-test did not show any difference between the groups for load to fracture. FEA showed higher MPS values in the specimens restored using veneers with the conventional preparation design with palatal chamfer.

Conclusion: Preparation design did not affect the fracture load of canines and central incisors, but veneers with the conventional preparation design with palatal chamfer tended to generate higher MPS values.
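The load-to-fracture comparison can be sketched with a pooled two-sample Student t-statistic computed by hand (the fracture loads below are hypothetical, not the study's data):

```python
import math

def student_t(sample_a, sample_b):
    """Two-sample Student t-statistic with pooled variance (equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))  # pooled SD
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

# Hypothetical fracture loads (N) for two preparation designs.
conservative = [410.0, 395.0, 430.0, 402.0]
chamfer      = [405.0, 388.0, 420.0, 398.0]
t = student_t(conservative, chamfer)
```

A small |t| relative to the critical value at alpha = 0.05 leads to the study's conclusion of no detectable difference in fracture load between preparation designs.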

Relevance:

50.00%

Publisher:

Abstract:

Since privatisation, the maintenance of DNO LV feeder maximum demand information has gradually lapsed in some utility areas, and it is postulated that this lack of knowledge about 11kV and LV electrical networks is resulting in a less economical and less energy-efficient network as a whole. In an attempt to quantify the negative impact, this paper examines ten postulated new-connection scenarios against a set of real LV load readings, in order to find the difference in design solutions when LV load readings were and were not known. The load profiles of the substations were examined to explore the utilisation profile. It was found that in 70% of the scenarios explored there were significant cost differences, averaging 1000%, between schemes designed with and without load readings. Over-designing a system, and therefore operating more underutilised transformers, is less financially beneficial and less energy efficient. The paper concludes that new-connection design is improved in terms of cost when based on known LV load information, which enhances the case for regular maximum feeder demand information and/or metering of LV feeders. © 2013 IEEE.
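The design difference can be sketched with a toy sizing rule (the ratings, demands, and doubling assumption are all hypothetical): without measured readings a designer must assume a conservative peak, which drives selection of a larger, underutilised transformer.

```python
# Hypothetical sizing: pick the smallest standard rating covering the design
# peak. Standard kVA ratings and demand figures are illustrative only.
def transformer_size(peak_kva, standard_sizes=(200, 315, 500, 800, 1000)):
    return min(s for s in standard_sizes if s >= peak_kva)

measured_peak = 260.0          # from actual LV feeder load readings
assumed_peak  = 260.0 * 2.0    # conservative assumption when no readings exist

with_readings    = transformer_size(measured_peak)   # smaller unit suffices
without_readings = transformer_size(assumed_peak)    # over-designed unit

utilisation = measured_peak / without_readings       # heavily underutilised
```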

Relevance:

40.00%

Publisher:

Abstract:

Falling represents a health risk for lower-limb amputees fitted with an osseointegrated fixation, mainly because of the potential damage to the fixation. The purpose of this study was to characterise a real forward fall, which occurred inadvertently to a transfemoral amputee fitted with an osseointegrated fixation while attending a gait measurement session, in order to assess the load applied on the residuum. The objective was to analyse the load applied on the fixation with an emphasis on the sequence of events and on the pattern and magnitude of the forces and moments. The load was measured directly at 200 Hz using a six-channel transducer, and complementary video footage was also studied. The fall was divided into four phases: loading (240 ms), descent (620 ms), impact (365 ms) and recovery (2495 ms). The main impact forces and moments occurred 870 ms and 915 ms after heel contact, and corresponded to 133 %BW and 17 %BWm, or 1.2 and 11.2 times the maximum forces and moments applied during the participant's previous steps, respectively. This study provides key information to engineers and clinicians facing the challenge of designing equipment, rehabilitation and exercise programs to safely restore the locomotion of lower-limb amputees.
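The normalisation behind the reported %BW and %BWm figures can be sketched as follows (the participant mass and raw transducer values below are hypothetical):

```python
G = 9.81  # gravitational acceleration, m/s^2

def to_percent_bw(force_n, body_mass_kg):
    """Normalise a transducer force to percent body weight (%BW)."""
    return 100.0 * force_n / (body_mass_kg * G)

def to_percent_bwm(moment_nm, body_mass_kg):
    """Normalise a moment to percent body weight metre (%BWm)."""
    return 100.0 * moment_nm / (body_mass_kg * G)

# For a hypothetical 75 kg participant, a 133 %BW impact force corresponds
# to roughly 75 * 9.81 * 1.33 ~= 979 N on the fixation.
impact_pct = to_percent_bw(978.6, 75.0)
```

Normalising by body weight lets the fall loads be compared directly across participants and against the loads of normal walking steps.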

Relevance:

40.00%

Publisher:

Abstract:

This paper discusses the statistical analyses used to derive bridge live load models for Hong Kong from 10 years of weigh-in-motion (WIM) data. The statistical concepts required and the terminology adopted in the development of bridge live load models are introduced. The paper includes studies of representative vehicles drawn from the large body of WIM data in Hong Kong. Load-affecting parameters such as gross vehicle weights, axle weights, axle spacings and average daily numbers of trucks are first analyzed by various stochastic processes in order to obtain the mathematical distributions of these parameters. As a prerequisite to determining accurate bridge design loadings in Hong Kong, this study not only takes advantage of code formulation methods used internationally but also presents a new method for modelling the collected WIM data using a statistical approach.
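The distribution-fitting step can be sketched as follows (the gross vehicle weight sample is hypothetical): a normal distribution is fitted to the GVW data by the method of moments and then used to estimate exceedance probabilities for design.

```python
import math

# Hypothetical gross vehicle weights (tonnes) from WIM records.
gvw = [12.1, 18.4, 22.7, 30.2, 25.5, 16.8, 28.9, 21.3, 33.6, 19.5]

# Method-of-moments fit of a normal distribution to the GVW sample.
n = len(gvw)
mu = sum(gvw) / n
sigma = math.sqrt(sum((x - mu) ** 2 for x in gvw) / (n - 1))

def exceedance_prob(w):
    """P(GVW > w) under the fitted normal, via the error function."""
    z = (w - mu) / (sigma * math.sqrt(2))
    return 0.5 * (1 - math.erf(z))
```

In practice heavy-vehicle weights are often better described by mixtures or extreme-value distributions; the normal fit here only illustrates the moment-matching step.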

Relevance:

40.00%

Publisher:

Abstract:

Fire safety of light gauge steel frame (LSF) stud walls is important in the design of buildings. LSF walls are increasingly used in the building industry, and are usually made of cold-formed, thin-walled steel studs fire-protected by two layers of plasterboard on both sides. Many experimental and numerical studies have been undertaken to investigate the fire performance of load-bearing LSF walls under standard fire conditions. However, the standard time-temperature curve does not represent the fire load present in typical residential and commercial buildings, which contain a considerable amount of thermoplastic materials, and real building fires are unlikely to follow a standard time-temperature curve. Only limited research has investigated the fire performance of load-bearing LSF walls under realistic design fire conditions. In this research, therefore, finite element thermal models of traditional LSF wall panels without cavity insulation and of the new LSF composite wall panels were developed to simulate their fire performance under recently developed realistic design fire curves. Suitable thermal properties were proposed for plasterboards and insulations based on laboratory tests and a literature review. The developed models were validated by comparing their thermal performance results with available results from realistic design fire tests, and were later used in parametric studies. This paper presents the details of the developed finite element thermal models of load-bearing LSF wall panels under realistic design fire time-temperature curves and the results. It shows that finite element thermal models can be used to predict the fire performance of load-bearing LSF walls with varying configurations of insulations and plasterboards under realistic design fires. Failure times of load-bearing LSF walls were also predicted based on the results of the finite element thermal analyses.
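A full finite element thermal model is beyond a short example, but the underlying transient conduction calculation can be sketched with a 1-D explicit finite-difference scheme through a single plasterboard layer (the material properties and boundary temperatures are illustrative, not values from this research):

```python
# 1-D explicit finite-difference conduction through one plasterboard layer,
# fire side on the left held at a fixed temperature, ambient side on the
# right. Properties are illustrative, not test data.
def wall_temperatures(t_fire, t_ambient=20.0, thickness=0.016, nodes=9,
                      alpha=3.0e-7, duration=600.0, dt=0.05):
    """alpha: thermal diffusivity (m^2/s); returns nodal temperatures (C)."""
    dx = thickness / (nodes - 1)
    r = alpha * dt / dx ** 2        # Fourier number; must be <= 0.5 for stability
    assert r <= 0.5
    T = [t_ambient] * nodes
    for _ in range(int(duration / dt)):
        T[0] = t_fire                # fixed fire-side boundary
        new = T[:]                   # ambient-side node stays fixed
        for i in range(1, nodes - 1):
            new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
        T = new
    return T

profile = wall_temperatures(t_fire=600.0)
```

Real models add temperature-dependent properties (plasterboard dehydration, insulation degradation) and radiative/convective boundary conditions driven by the design fire curve, but the marching scheme above is the core of the thermal analysis.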