99 results for compression reinforcement


Relevance:

20.00%

Publisher:

Abstract:

Fire safety design of building structures has received greater attention in recent times due to the continuing loss of property and lives during fires. However, the fire performance of light gauge cold-formed steel structures is not well understood despite their increased usage in buildings. Cold-formed steel compression members are susceptible to various buckling modes such as local and distortional buckling, and their ultimate strength behaviour is governed by these buckling modes. Therefore, a research project based on experimental and numerical studies was undertaken to investigate the distortional buckling behaviour of light gauge cold-formed steel compression members under simulated fire conditions. Lipped channel sections with and without additional lips were selected in three thicknesses (0.6, 0.8 and 0.95 mm) and in both low and high strength steels (G250 and G550). More than 150 compression tests were undertaken first at ambient and then at elevated temperatures. Finite element models of the tested compression members were then developed, including the degradation of mechanical properties with increasing temperature. Comparison of finite element analysis and experimental results showed that the developed models were capable of simulating the distortional buckling and strength behaviour at ambient and elevated temperatures up to 800 °C. The validated model was used to determine the effects of mechanical properties, geometric imperfections and residual stresses on the distortional buckling behaviour and strength of cold-formed steel columns. This paper presents the details of the numerical study and its results, and demonstrates the importance of using accurate elevated-temperature mechanical properties in order to obtain reliable strength characteristics of cold-formed steel columns under fire conditions.

Relevance:

20.00%

Publisher:

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling): that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
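The density bound stated above can be checked numerically. Below is a minimal sketch in Python, where `choose(n, <=d)` denotes the partial binomial sum C(n, 0) + ... + C(n, d); the function names are ours, chosen for illustration:

```python
from math import comb

def partial_binomial_sum(n: int, d: int) -> int:
    """choose(n, <=d): the sum of C(n, i) for i = 0..d."""
    return sum(comb(n, i) for i in range(d + 1))

def one_inclusion_density_bound(n: int, d: int) -> float:
    """The bound n * choose(n-1, <=d-1) / choose(n, <=d) from the abstract."""
    return n * partial_binomial_sum(n - 1, d - 1) / partial_binomial_sum(n, d)

# The bound is strictly smaller than the earlier d = VC(F) density bound
# of Haussler et al. for every n > d >= 1.
for n in range(2, 30):
    for d in range(1, n):
        assert one_inclusion_density_bound(n, d) < d
```

For example, with n = 10 and d = 3 the bound evaluates to 460/176 ≈ 2.61, strictly below the classical bound of 3.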

Relevance:

20.00%

Publisher:

Abstract:

H. Simon and B. Szörényi have found an error in the proof of Theorem 52 of “Shifting: One-inclusion mistake bounds and sample compression”, Rubinstein et al. (2009). In this note we provide a corrected proof of a slightly weakened version of this theorem. Our new bound on the density of one-inclusion hypergraphs is again in terms of the capacity of the multilabel concept class. Simon and Szörényi have recently proved an alternate result in Simon and Szörényi (2009).

Relevance:

20.00%

Publisher:

Abstract:

We provide an algorithm that achieves the optimal regret rate in an unknown weakly communicating Markov Decision Process (MDP). The algorithm proceeds in episodes where, in each episode, it picks a policy using regularization based on the span of the optimal bias vector. For an MDP with S states and A actions whose optimal bias vector has span bounded by H, we show a regret bound of Õ(HS√(AT)). We also relate the span to various diameter-like quantities associated with the MDP, demonstrating how our results improve on previous regret bounds.
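To illustrate how a span-based bound can improve on diameter-based ones, here is a minimal numerical sketch; the MDP sizes are invented for illustration, only the leading Õ(·) terms are kept, and the diameter-based comparison bound Õ(DS√(AT)) is the form of earlier results such as UCRL2:

```python
import math

def span_regret(H: float, S: int, A: int, T: int) -> float:
    """Leading term of the span-based regret bound O~(H * S * sqrt(A * T))."""
    return H * S * math.sqrt(A * T)

def diameter_regret(D: float, S: int, A: int, T: int) -> float:
    """Leading term of an earlier diameter-based bound O~(D * S * sqrt(A * T))."""
    return D * S * math.sqrt(A * T)

# Hypothetical MDP in which the span H of the optimal bias vector is much
# smaller than the diameter D (H <= D whenever the diameter is finite).
H, D, S, A, T = 5.0, 50.0, 20, 4, 10**6
assert span_regret(H, S, A, T) <= diameter_regret(D, S, A, T)
print(span_regret(H, S, A, T))  # 200000.0, versus 2000000.0 for the diameter bound
```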

Relevance:

20.00%

Publisher:

Abstract:

Mechanical damage such as bruising, collision and impact during food processing diminishes the quality and quantity of production as well as the efficiency of operations. Studying the mechanical characteristics of food materials will help enhance current industrial practices. The mechanical properties of fruits and vegetables describe how these materials behave under loading in real industrial operations, and optimizing and designing more efficient equipment requires accurate and precise information on tissue behaviour. Finite element (FE) modelling of food industrial processes is an effective method for studying the interrelation of variables during mechanical operations. In this study, an empirical investigation of the mechanical properties of pumpkin peel was carried out as part of the FE modelling and simulation of the mechanical peeling stage of tough-skinned vegetables. A compression test was conducted on the Jap variety of pumpkin, and the stress-strain curve, bio-yield point and toughness of the pumpkin skin were calculated. The energy required to reach the bio-yield point was 493.75, 507.71 and 451.71 N·mm for loading speeds of 1.25, 10 and 20 mm/min, respectively. The average force at the bio-yield point for pumpkin peel was 310 N.
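The energy values reported above correspond to the area under the force-displacement curve up to the bio-yield point. A minimal sketch of that calculation using trapezoidal integration; the sample data points are invented for illustration and are not measurements from this study:

```python
def energy_to_bioyield(forces_N, displacements_mm):
    """Area under the force-displacement curve (in N·mm) by the trapezoidal rule."""
    energy = 0.0
    for i in range(1, len(forces_N)):
        width = displacements_mm[i] - displacements_mm[i - 1]
        energy += 0.5 * (forces_N[i] + forces_N[i - 1]) * width
    return energy

# Hypothetical loading curve ending near the ~310 N average bio-yield force
# reported above; displacements in mm, forces in N.
forces = [0.0, 120.0, 220.0, 280.0, 310.0]
displacements = [0.0, 0.5, 1.0, 1.5, 2.0]
print(energy_to_bioyield(forces, displacements))  # 387.5 N·mm for this invented curve
```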

Relevance:

20.00%

Publisher:

Abstract:

This study undertook a physico-chemical characterisation of particle emissions from a single compression ignition engine operated at one test mode with 3 biodiesel fuels made from 3 different feedstocks (soy, tallow and canola) at 4 different blend percentages (20%, 40%, 60% and 80%) to gain insights into their particle-related health effects. Particle physical properties were inferred by measuring particle number size distributions both with and without heating within a thermodenuder (TD), and also by measuring emission factors for particulate matter (PM) with an aerodynamic diameter less than 10 μm (PM10). The chemical properties of particulates were investigated by measuring particle and vapour phase polycyclic aromatic hydrocarbon (PAH) and reactive oxygen species (ROS) concentrations. The particle number size distributions showed strong dependency on feedstock and blend percentage, with some fuel types showing increased particle number emissions whilst others showed reductions. In addition, the median particle diameter decreased as the blend percentage was increased. Particle and vapour phase PAHs were generally reduced with biodiesel, with the results being relatively independent of the blend percentage. The ROS concentrations increased monotonically with biodiesel blend percentage, but did not exhibit strong feedstock variability. Furthermore, the ROS concentrations correlated quite well with the organic volume percentage of particles, a quantity which increased with increasing blend percentage. At higher blend percentages, the particle surface area was significantly reduced, but the particles were internally mixed with a greater organic volume percentage (containing ROS), which has implications for using surface area as a regulatory metric for diesel particulate matter (DPM) emissions.

Relevance:

20.00%

Publisher:

Abstract:

The compressed gas industry and government agencies worldwide utilize "adiabatic compression" testing for qualifying high-pressure valves, regulators, and other related flow control equipment for gaseous oxygen service. This test methodology is known by various terms, of which adiabatic compression testing, gaseous fluid impact testing, pneumatic impact testing, and BAM testing are the most common. The methodology is described in greater detail throughout this document, but in summary it consists of pressurizing a test article (valve, regulator, etc.) with gaseous oxygen within 15 to 20 milliseconds (ms). Because the driven gas and the driving gas are rapidly compressed to the final test pressure at the inlet of the test article, they are rapidly heated by the sudden increase in pressure to temperatures (thermal energies) sufficient to sometimes ignite the nonmetallic materials (seals and seats) used within the test article. In general, the more rapid the compression process, the more "adiabatic" the pressure surge is presumed to be, and the more closely it has been argued to approximate an isentropic process. Adiabatic compression is widely considered the most efficient ignition mechanism for directly kindling a nonmetallic material in gaseous oxygen, and it has been implicated in many fire investigations. Because many nonmetallic materials are easily ignited by this heating mechanism, many industry standards prescribe this testing. However, the results between various laboratories conducting the testing have not always been consistent. Research into the test method indicated that the thermal profile achieved (i.e., the temperature/time history of the gas) during adiabatic compression testing as required by the prevailing industry standards has not been fully modeled or empirically verified, although attempts have been made.
This research evaluated the following questions: 1) Can the rapid compression process required by the industry standards be modelled thermodynamically and fluid-dynamically so that predictions of the thermal profiles can be made? 2) Can the thermal profiles produced by the rapid compression process be measured, in order to validate the thermodynamic and fluid dynamic models and to estimate the severity of the test? 3) Can controlling parameters be recommended so that new guidelines may be established for the industry standards, resolving inconsistencies between the various test laboratories conducting tests according to the present standards?
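The severity of the heating mechanism described above is often first estimated with the ideal-gas isentropic compression relation T2 = T1·(P2/P1)^((γ−1)/γ). A minimal sketch; the pressures and initial temperature are illustrative assumptions, and this idealized relation is an upper-bound estimate rather than the validated thermal model sought by this research:

```python
def isentropic_final_temperature(T1_K: float, P1: float, P2: float,
                                 gamma: float = 1.4) -> float:
    """Ideal-gas temperature after reversible adiabatic (isentropic) compression:
    T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma)."""
    return T1_K * (P2 / P1) ** ((gamma - 1.0) / gamma)

# Oxygen is diatomic, so gamma ~= 1.4; compress from 1 bar at room temperature
# (293 K) to a hypothetical 250 bar test pressure.
T2 = isentropic_final_temperature(293.0, 1.0, 250.0)
print(round(T2))  # roughly 1400 K, far above the ignition temperature of many nonmetallic seals
```

Real pressure surges are neither fully adiabatic nor reversible, which is one reason the empirically achieved thermal profiles need to be measured rather than assumed.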

Relevance:

20.00%

Publisher:

Abstract:

A 4-cylinder Ford 2701C test engine was used in this study to explore the impact of ethanol fumigation on gaseous and particle emission concentrations. The fumigation technique delivered vaporised ethanol into the intake manifold of the engine, using an injector, a pump and pressure regulator, a heat exchanger for vaporising ethanol, and a separate fuel tank and lines. Gaseous (nitric oxide (NO), carbon monoxide (CO) and hydrocarbon (HC)) and particulate (particle mass (PM2.5) and particle number) emissions testing was conducted at intermediate speed (1700 rpm) using 4 load settings, with ethanol substitution percentages ranging from 10 to 40% (by energy). With ethanol fumigation, NO and PM2.5 emissions were reduced, whereas CO and HC emissions increased considerably and particle number emissions increased at most test settings. Ethanol fumigation reduced the excess air factor for the engine, which led to increased emissions of CO and HC but decreased emissions of NO. PM2.5 emissions were reduced because ethanol has a very low "sooting" tendency, owing to its higher hydrogen-to-carbon ratio and its lack of aromatics, which are known soot precursors. The use of a diesel oxidation catalyst (as an after-treatment device) is recommended to achieve a reduction in the four pollutants that are currently regulated for compression ignition engines. The increase in particle number emissions with ethanol fumigation was due to the formation of volatile (organic) particles; consequently, using a diesel oxidation catalyst will also assist in reducing particle number emissions.
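The "substitution percentage by energy" used above can be computed from the fuel mass flow rates and lower heating values. A minimal sketch; the flow rates are invented, and the heating values (about 26.9 MJ/kg for ethanol and 42.5 MJ/kg for diesel) are typical literature figures rather than values from this study:

```python
def ethanol_energy_substitution(m_dot_ethanol_kg_h: float,
                                m_dot_diesel_kg_h: float,
                                lhv_ethanol_MJ_kg: float = 26.9,
                                lhv_diesel_MJ_kg: float = 42.5) -> float:
    """Ethanol substitution (% by energy) from mass flow rates and lower heating values."""
    e_eth = m_dot_ethanol_kg_h * lhv_ethanol_MJ_kg
    e_dsl = m_dot_diesel_kg_h * lhv_diesel_MJ_kg
    return 100.0 * e_eth / (e_eth + e_dsl)

# Hypothetical flow rates: because ethanol's heating value is lower, a given
# mass of ethanol supplies less energy than the same mass of diesel.
print(round(ethanol_energy_substitution(2.0, 4.0), 1))  # 24.0 (% by energy)
```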