944 results for Load factor design
Abstract:
A parametric study of cold-formed steel sections with web openings subjected to web crippling under the end-one-flange (EOF) loading condition is undertaken, using finite element analysis, to investigate the effects of web holes and cross-section sizes. The holes are located either centred above the bearing plates or with a horizontal clear distance to the near edge of the bearing plates. It was demonstrated that the main factors influencing the web crippling strength are the ratio of the hole depth to the depth of the web, the ratio of the length of the bearing plates to the flat depth of the web, and the location of the holes, defined as the distance of the hole from the edge of the bearing plate divided by the flat depth of the web. In this study, design recommendations in the form of web crippling strength reduction factor equations are proposed, which are conservative when compared with the experimental and finite element results.
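As a purely illustrative sketch of how such reduction factor equations are typically parameterised, the function below evaluates a generic linear form in the three governing ratios identified above. The functional form and the coefficients are hypothetical placeholders, not the equations proposed in the study.

```python
def web_crippling_reduction_factor(a, h, N, x,
                                   alpha=0.95, beta=0.50, gamma=0.15, delta=0.08):
    """Generic strength reduction factor R for a web with an opening.

    a : hole depth, h : (flat) web depth, N : bearing plate length,
    x : clear distance from the hole edge to the bearing plate edge.
    alpha..delta are illustrative coefficients only; real values would be
    fitted to the test and finite element data, as done in the study.
    """
    R = alpha - beta * (a / h) + gamma * (N / h) + delta * (x / h)
    return max(0.0, min(1.0, R))  # a reduction factor cannot exceed 1

# Example: 40 mm hole in a 100 mm web, 50 mm bearing plate, hole 60 mm away
print(web_crippling_reduction_factor(a=40, h=100, N=50, x=60))
```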
Abstract:
Composites are fast becoming a cost-effective option when considering the design of engineering structures in a broad range of applications. If the strength-to-weight benefits of these material systems can be exploited and the challenges in developing lower-cost manufacturing methods overcome, then advanced composite systems will play a bigger role in the diverse range of sectors outside the aerospace industry, where they have been used for decades.
This paper presents physical testing results that showcase the advantages of GRP (Glass Reinforced Plastics), such as the ability to endure loading with minimal deformation. The testing comprises a cross-comparison of GRP grating versus a GRP-encapsulated foam core. The resulting data will then be coupled with design optimisation (utilising model simulation) to propose layup alterations that meet the specified load classifications.
Abstract:
The new engine plant of General Motors (GM) in Joinville-SC, inaugurated on February 27th, 2013, incorporates the most advanced automotive technology processes and broad compliance with environmental standards and energy efficiency. The initiatives implemented in this industrial plant include processes with 100% recycling of industrial waste (landfill free) and pioneering systems in energy efficiency and environmental protection, qualifying the plant for the global certification of Leadership in Energy and Environmental Design (LEED). This industrial project reveals the strategic importance of the region and of Brazil in the growth of GM worldwide, becoming a reference for studies and project evaluations of "green" factories in the automotive sector. The present study performs exploratory research based on scientific publications, assessing the direct and indirect impacts on the business outcome resulting from the implementation of service-oriented industrial sustainability in its operations, referred to in this article as the "Green Factory". We conclude that the adopted technologies, focused on sustainability, study, and development, represent a new step for the design of new plants and future expansions of the company in the region, combining low operating cost, low environmental impact, and conservation of natural resources.
Abstract:
The process of food drying is frequently applied to preserve a product for a longer time. Owing to their high water content, fruit and vegetables are easily perishable through biochemical processes within the product, improper storage, and inadequate transport facilities. To avoid such losses, direct drying is used, which is the oldest method of long-term preservation. However, this method is outdated and cannot meet today's challenges. In the present work, a new batch dryer with a diagonal airflow channel along the length of the drying chamber and without baffles was developed. Notwithstanding the undeniable benefit of using baffles, they increase construction costs and also lead to a higher pressure drop, so that more energy is consumed in the drying process. To achieve spatially uniform drying without baffles, the food trays were placed diagonally along the length of the dryer. The primary aim of the diagonal channel was to direct the incoming warm air uniformly over the entire product. The airflow simulation was carried out with ANSYS Fluent on the ANSYS Workbench platform. Two different drying chamber geometries, diagonal and non-diagonal, were modelled, and the results showed a uniform air distribution for the diagonal airflow design. A series of experiments was carried out to evaluate the design, with potato slices serving as the drying material. The statistical results show a good correlation coefficient for the airflow distribution (87.09%) between the average predicted and the average measured flow velocity. To evaluate the effect of the uniform air distribution on quality changes, the colour of the product was determined contact-free and on-line along the entire length of the drying chamber. For this purpose, an imaging box consisting of a camera and illumination was developed. Spatial differences in this quality parameter were chosen as the criterion for assessing the uniformity of drying in the drying chamber. A decisive factor for a food batch dryer is its energy consumption, for which thermodynamic analyses of the dryer were carried out. The energy efficiency of the system under the chosen drying conditions was calculated as 50.16%. The average energy used, in the form of electricity, to produce 1 kg of dried potatoes was calculated as less than 16.24 MJ/kg, and less than 4.78 MJ per kg of water evaporated, at a temperature of 65°C and a slice thickness of 5 mm. The energy and exergy analyses for the diagonal batch dryer were also compared with those of other batch dryers. The choice of drying temperature, mass flow rate of the drying air, dryer capacity, and heater type are the important parameters for evaluating the energy use of batch dryers. The development of the diagonal batch dryer is a useful and effective way to increase drying homogeneity. The design allows the entire product in the drying chamber to be exposed to uniform air conditions, instead of routing the air from one tray to the next.
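The reported energy figures can be cross-checked with a short calculation. The sketch below computes the specific energy consumption per kilogram of product and per kilogram of evaporated water, and estimates the energy efficiency as the ratio of the latent heat of the evaporated water to the electrical input. The latent heat value is an assumption on our part (roughly 2.35 MJ/kg near 65°C), not a figure from the abstract.

```python
H_FG_65C = 2.35  # MJ/kg, approximate latent heat of vaporization of water near 65 degC

def dryer_energy_metrics(e_in_mj, m_product_kg, m_water_kg):
    """Specific energy consumption and a simple first-law efficiency estimate."""
    sec_product = e_in_mj / m_product_kg   # MJ per kg of dried product
    sec_water = e_in_mj / m_water_kg       # MJ per kg of evaporated water
    efficiency = m_water_kg * H_FG_65C / e_in_mj
    return sec_product, sec_water, efficiency

# Plausibility check against the abstract: 4.78 MJ per kg of water implies
# an efficiency of about 2.35 / 4.78 = 49%, close to the reported 50.16%.
print(dryer_energy_metrics(e_in_mj=16.24, m_product_kg=1.0, m_water_kg=16.24 / 4.78))
```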
Abstract:
The aim of the present study was to map how graphic designers around the world perceive Scandinavian graphic design. 53 participants from industrialized countries took part in the study, which used a combination of e-mail interviews and questionnaires. The results of this study indicate that, regardless of continent, Scandinavian graphic design is perceived as simple and functional. The layout was perceived as grid-based, with plenty of white space and few graphic elements. Monochrome colours such as black, white, and grey, without gradients or shadows, were perceived as typical of Scandinavian graphic design, followed by earth and pastel colours. Sans-serifs were most strongly associated with Scandinavian graphic design. Motifs were considered in this study to be used sparingly, but when used they depict nature or geometric shapes. Photos and illustrations were considered to be used roughly equally, with illustrations slightly preferred. The perceived influence of Scandinavian graphic design varied among the participants. Participants who thought the influence was large attributed this to its advocacy of simplicity and function, the interweaving of design fields, and/or the promotion of sustainable graphic design. Participants who considered the influence low thought that too little publicity was a contributing factor. Most participants in this survey considered Scandinavian graphic design to be simpler compared with what they could see in their home countries. Furthermore, African, Asian, and South American participants thought the colours had lower chroma.
Abstract:
The dynamic interaction of vehicles and bridges results in live loads being induced into bridges that are greater than the vehicle's static weight. To limit this dynamic effect, the Iowa Department of Transportation (DOT) currently requires that permitted trucks slow to five miles per hour and straddle the roadway centerline when crossing bridges. However, this practice has other negative consequences, such as the potential for crashes, impracticality for bridges with high traffic volumes, and higher fuel consumption. The main objective of this work was to provide information and guidance on the allowable speeds for permitted vehicles and loads on bridges. A field test program was implemented on five bridges (i.e., two steel girder bridges, two pre-stressed concrete girder bridges, and one concrete slab bridge) to investigate the dynamic response of bridges due to vehicle loadings. The important factors taken into account during the field tests included vehicle speed, entrance conditions, vehicle characteristics (i.e., empty dump truck, full dump truck, and semi-truck), and bridge geometric characteristics (i.e., long span and short span). Three entrance conditions were used: as-is, Level 1, and Level 2; the latter two simulated rough entrance conditions with a fabricated ramp placed 10 feet from the joint between the bridge end and the approach slab, and directly next to the joint, respectively. The researchers analyzed and utilized the field data to derive the dynamic impact factors (DIFs) for all gauges installed on each bridge under the different loading scenarios.
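One common way to derive a dynamic impact factor from gauge records is to compare the peak response under a moving vehicle with the peak quasi-static response from a crawl-speed pass of the same vehicle. The sketch below assumes this definition, which may differ from the exact formulation used in the report, and the example records are synthetic.

```python
import numpy as np

def dynamic_impact_factor(dynamic_record, crawl_record):
    """DIF = peak dynamic response / peak quasi-static (crawl-speed) response.

    Both inputs are gauge time histories (e.g., strain) for the same
    vehicle and lane position; only the crossing speed differs.
    """
    return np.max(np.abs(dynamic_record)) / np.max(np.abs(crawl_record))

# Example with synthetic records: a 10% dynamic amplification gives DIF ~ 1.10
t = np.linspace(0, 1, 500)
static = np.sin(np.pi * t)                       # smooth crawl-speed response
dynamic = static * (1 + 0.10 * np.sin(40 * t))   # superimposed vibration
print(round(dynamic_impact_factor(dynamic, static), 2))
```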
Abstract:
This study had three objectives: (1) to develop a comprehensive truck simulation that executes rapidly, has a modular program construction to allow variation of vehicle characteristics, and is able to realistically predict vehicle motion and the tire-road surface interaction forces; (2) to develop a model of doweled portland cement concrete pavement that can be used to determine slab deflection and stress at predetermined nodes, and that allows for the variation of traditional thickness design factors; and (3) to implement these two models on a workstation with suitable menu-driven modules so that both existing and proposed pavements can be evaluated with respect to design life, given specific characteristics of the heavy vehicles that will be using the facility. This report summarizes the work performed during the first year of the study. Briefly, the following has been accomplished: a two-dimensional model of a typical 3-S2 tractor-trailer combination was created; a finite element structural analysis program, ANSYS, was used to model the pavement; and computer runs have been performed varying the parameters defining both vehicle and road elements. The resulting time-specific displacements for each node are plotted, and the displacement basin is generated for defined vehicles. Relative damage to the pavement can then be estimated. A damage function resulting from load replications must be assumed that will be reflected by further pavement deterioration. Comparison with actual damage on Interstate 80 will eventually allow verification of these procedures.
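The report leaves the damage function as an assumption. A commonly assumed form, shown here purely for illustration, is the AASHTO-style fourth-power law combined with Miner's rule, under which relative damage grows steeply with axle load.

```python
def relative_axle_damage(axle_load_kn, ref_load_kn=80.0, exponent=4.0):
    """Relative damage of one axle pass vs. a reference axle (fourth-power law).

    ref_load_kn = 80 kN approximates the standard 18-kip single axle; the
    exponent 4.0 is the classical AASHTO value, used here as an assumption,
    not necessarily the damage function adopted in the study.
    """
    return (axle_load_kn / ref_load_kn) ** exponent

# Miner's rule: total damage is the sum over all axle passes
axle_loads = [60, 80, 100, 120]  # kN, hypothetical traffic sample
print(sum(relative_axle_damage(p) for p in axle_loads))
```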
Abstract:
Highway bridges are of great value to a country because, in the event of a natural disaster, they may serve as lifelines. Since they are vulnerable under significant seismic loads, different methods can be considered to design resistant highway bridges and rehabilitate existing ones. In this study, base isolation has been considered as one efficient method in this regard, which in some cases significantly reduces the seismic load effects on the structure. By reducing the ductility demand on the structure without a notable increase of strength, the structure is designed to remain elastic under seismic loads. The problem associated with isolated bridges, especially with elastomeric bearings, can be their excessive displacements under service and seismic loads. This can defeat the purpose of using elastomeric bearings for small- to medium-span typical bridges, where expansion joints and clearances may result in a significant increase of initial and maintenance cost. Thus, supplementing the structure with dampers that provide some stiffness can serve as a solution, which in turn, however, may increase the structure's base shear. The main objective of this thesis is to provide a simplified method for the evaluation of optimal parameters for dampers in isolated bridges. Firstly, through a parametric study, some directions are given for the use of simple isolation devices such as elastomeric bearings to rehabilitate existing bridges of high importance. Parameters such as the geometry of the bridge, code provisions, and the type of soil on which the structure is constructed were introduced to a typical two-span bridge. It is concluded that the stiffness of the substructure, the soil type, and special provisions in the code can determine the suitability of base isolation for retrofitting bridges. Secondly, based on the elastic response coefficient of isolated bridges, a simplified design method of dampers for seismically isolated regular highway bridges is presented. By setting objectives for the reduction of displacement and the variation of base shear, the required stiffness and damping of a hysteretic damper can be determined. By modelling a typical two-span bridge, numerical analyses followed to verify the effectiveness of the method. The method was used to identify equivalent linear parameters and, subsequently, nonlinear parameters of the hysteretic damper for various designated scenarios of displacement and base shear requirements. Comparison of the results of the nonlinear numerical model without and with the damper showed that the method is sufficiently accurate. Finally, an innovative and simple hysteretic steel damper was designed. Five specimens were fabricated from two steel grades and were tested together with a real-scale elastomeric isolator in the structural laboratory of the Université de Sherbrooke. The test procedure was to characterize the specimens by cyclic displacement-controlled tests and subsequently to test them by the real-time dynamic substructuring (RTDS) method. The test results were then used to establish a numerical model of the system, which went through nonlinear time history analyses under several earthquakes. The outcomes of the experimental and numerical studies showed an acceptable conformity with the simplified method.
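Equivalent linear parameters of a hysteretic device are conventionally extracted from a measured force-displacement loop as an effective stiffness and an equivalent viscous damping ratio. The sketch below implements these standard characterization formulas, which may differ in detail from the thesis's own procedure.

```python
import numpy as np

def equivalent_linear_params(disp, force):
    """Effective stiffness and equivalent viscous damping from one full
    force-displacement cycle (arrays ordered around the loop).

    k_eff = (F_max - F_min) / (d_max - d_min)
    xi_eq = E_loop / (2 * pi * k_eff * d_max**2), E_loop = enclosed loop area
    """
    k_eff = (force.max() - force.min()) / (disp.max() - disp.min())
    # enclosed loop area via the shoelace formula
    e_loop = 0.5 * abs(np.dot(disp, np.roll(force, -1))
                       - np.dot(force, np.roll(disp, -1)))
    d_max = np.abs(disp).max()
    xi_eq = e_loop / (2 * np.pi * k_eff * d_max**2)
    return k_eff, xi_eq

# Example: an idealized hysteresis loop traced as a closed polygon
d = np.array([-0.05, 0.05, 0.05, -0.05])      # m
f = np.array([-120e3, 180e3, 120e3, -180e3])  # N
print(equivalent_linear_params(d, f))  # ~3.6e6 N/m, ~10.6% damping
```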
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements which change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance which deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved with periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, which is a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
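To make the cost-benefit trade-off concrete, here is a minimal sketch of one such decision policy: remap when the time lost to imbalance since the last remap exceeds the one-off cost of remapping. This is a generic illustration of weighing remap cost against benefit, not the specific policy implemented in CAPTools.

```python
def should_remap(step_times, n_procs, remap_cost):
    """Decide whether accumulated load imbalance justifies a remap.

    step_times : per-step lists of processor busy times (seconds),
                 recorded since the last remap
    remap_cost : estimated one-off cost of redistributing the load (seconds)

    Each step runs at the speed of the slowest processor; the achievable
    balanced time per step is approximated by the mean busy time.
    """
    lost = sum(max(ts) - sum(ts) / n_procs for ts in step_times)
    return lost > remap_cost

# Example: 4 processors, growing imbalance over 3 steps, remap costs 1.5 s
history = [[1.0, 1.0, 1.1, 0.9], [1.0, 0.8, 1.5, 0.7], [1.0, 0.6, 2.0, 0.4]]
print(should_remap(history, n_procs=4, remap_cost=1.5))
```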
Abstract:
On the national scene, the soybean crop occupies a prominent position in cultivated area and production volume, being grown largely under the no-tillage system. Owing to the intense traffic of machines and implements over its surface, this system has caused soil compaction problems, which in turn cause crop yield losses. To minimize this effect, seeder-drills open the furrow with either a shank or a double-disc mechanism. The use of the shank has become commonplace because it breaks up the compacted surface layer; however, it requires greater energy and may cause excessive tillage in areas without high levels of compaction. Thus, this study aimed to evaluate the effects of furrow-opener mechanisms and soil compaction levels on the traction required by a seeder-drill and on the growth and yield of soybean in a clayey Oxisol, over two growing seasons. The experimental design consisted of randomized blocks with split plots, the main plots comprising four levels of soil compaction (N0: no-tillage without additional compaction; N1, N2, and N3: no-tillage subjected to compaction through two, four, and six tractor passes, respectively), corresponding to soil bulk densities of 1.16, 1.20, 1.22, and 1.26 g cm-3, and the subplots comprising two furrow-opener mechanisms (shank and double disc), with four replicates. To evaluate the average, maximum, and specific traction force required by the seeder-drill, a load cell with a capacity of 50 kN and a sensitivity of 2 mV V-1 was coupled between the tractor and the seeder-drill, with the data stored in a Campbell Scientific CR800 datalogger. In addition, bulk density, soil mechanical resistance to penetration, sowing depth, furrow depth and width, mobilized soil area, emergence speed index, emergence, final plant stand, stem diameter, plant height, average number of seeds per pod, weight of 1,000 seeds, number of pods per plant, and crop yield were evaluated. Data were subjected to analysis of variance; the furrow-opener means were compared by Tukey's test (p≤0.05), while for the soil compaction factor polynomial regression analysis was adopted, with models selected by the criteria of highest R² and significance (p≤0.05) of the equation parameters. Regardless of the growing season, penetration resistance increased with soil compaction level down to a depth of around 0.20 m, and bulk density influenced the sowing quality parameters but did not affect crop yield. In the first season, there was higher yield with the use of the shank type. In the second season, the shank demanded a greater energy requirement as bulk density increased, with the opposite observed for the double disc. The locking of the sowing lines allowed better performance of the shank in breaking the compacted layer.
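For context on the instrumentation, a full-bridge load cell of this kind outputs a signal proportional to its excitation voltage. The sketch below converts a logged mV/V reading into force using the stated 50 kN capacity and 2 mV/V rated sensitivity, assuming a linear response; the study's own calibration may differ.

```python
def load_cell_force_kn(reading_mv_per_v, sensitivity_mv_per_v=2.0, capacity_kn=50.0):
    """Convert a ratiometric load cell reading (mV/V) to force (kN).

    At rated capacity (50 kN) the cell outputs its rated sensitivity
    (2 mV/V); intermediate readings are assumed to scale linearly.
    """
    return reading_mv_per_v / sensitivity_mv_per_v * capacity_kn

# Example: a hypothetical logged reading of 0.84 mV/V corresponds to 21 kN
print(load_cell_force_kn(0.84))
```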
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
Abstract:
Advances in concrete technology over recent decades have led to high-performance concretes with ever higher strengths. The fatigue verification, however, has hardly been developed further and still involves very coarse approaches to accounting for the material resistance of concrete. For a fundamental further development of this verification, the necessary knowledge of the mechanisms of concrete fatigue is still lacking. The aim of this work was therefore to obtain fundamental insights into the fatigue behaviour of high-strength concretes under different cyclic loadings and thereby to contribute to a better understanding of the mechanisms of concrete fatigue. In the present work, the fatigue behaviour of a high-strength concrete under compressive cyclic loading was investigated by means of the strain and stiffness developments. The influences of the relative maximum stress level, the loading frequency, and the waveform were considered. In addition, based on approaches documented in the literature, comparative tests under monotonically increasing loading and under sustained loading were carried out. The strain and stiffness developments are clearly influenced by the investigated loading parameters of the fatigue loading. Characteristic relationships between the influence on individual parameters of the strain and stiffness development and the influence on the numbers of cycles to failure were identified. From the strains and stiffnesses at the phase transitions, indications of loading-type-dependent microstructural states could be derived. The comparative evaluation of the strain behaviour under monotonically increasing loading, fatigue loading, and sustained loading showed that the fatigue behaviour of concrete cannot be adequately described by analogy with other loading types. The results of the investigations were transferred into a conceptual model suitable for assessing the material phenomena under cyclic loading. The hypothesis was put forward that micro-scale changes in the microstructure, differently pronounced depending on the loading, develop and influence the initiation and propagation of microcracks. The detailed investigation of the strain and stiffness developments led to new and deeper insights and should be pursued further in the future, supplemented by considerations of microstructural states.
Abstract:
We describe a new geometry for electrostatic actuators to be used in sensitive laser interferometers, suited for prototype and table-top experiments related to gravitational wave detection with mirrors of 100 g or less. The arrangement consists of two plates at the sides of the mirror (test mass), and therefore does not reduce its clear aperture as a conventional electrostatic drive (ESD) would do. Using the sample case of the AEI-10 m prototype interferometer, we investigate the actuation range and the influence of the relative misalignment of the ESD plates with respect to the test mass. We find that in the case of the AEI-10 m prototype interferometer, this new kind of ESD could provide a range of 0.28 µm when operated at a voltage of 1 kV. In addition, the geometry presented is shown to provide a reduction factor of about 100 in the magnitude of the actuator motion coupling to the test mass displacement. We show that it is therefore possible, in the specific case of the AEI-10 m interferometer, to mount the ESD actuators directly on the optical table without spoiling the seismic isolation performance of the triple-stage suspension of the main test masses.
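Since the force of an electrostatic drive scales with the square of the applied voltage (F ∝ dC/dx · V²), the quoted range can be extrapolated to other operating voltages. The sketch below assumes this quadratic scaling and a force-limited, constant-stiffness actuation range; it is an illustration, not a result from the paper.

```python
def esd_range_um(voltage_kv, ref_range_um=0.28, ref_voltage_kv=1.0):
    """Extrapolate ESD actuation range, assuming range scales with V^2.

    The reference point (0.28 um at 1 kV) is the value quoted for the
    AEI-10 m prototype; the quadratic scaling follows from F ~ dC/dx * V^2
    and is an assumption of this sketch.
    """
    return ref_range_um * (voltage_kv / ref_voltage_kv) ** 2

print(esd_range_um(0.5))  # ~0.07 um at 500 V
```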
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which could lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new error correction techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error reduction technique (proportional rectangular) resulted in 15% and 30% more accurate load estimates when compared to the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
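For reference, the ratio estimator named above scales the gauged total annual flow by the flow-weighted mean concentration of the sampled days. The sketch below shows this basic form; it omits the Beale bias-correction factor that load estimation studies commonly apply, and the example data are invented.

```python
import numpy as np

def ratio_estimator_load(conc_mg_l, flow_m3_s, total_annual_flow_m3):
    """Annual load (kg) from sparse concentration samples via the ratio estimator.

    conc_mg_l, flow_m3_s : concentration and flow on the sampled days
    total_annual_flow_m3 : total flow volume for the year (continuous gauging)
    """
    conc = np.asarray(conc_mg_l)
    flow = np.asarray(flow_m3_s)
    fwmc = np.sum(conc * flow) / np.sum(flow)   # flow-weighted mean conc, mg/L
    return fwmc * total_annual_flow_m3 * 1e-3   # mg/L * m^3 = g, then to kg

# Example: 12 monthly nitrate-N samples and a gauged annual volume of 5e8 m^3
conc = [8.1, 9.3, 11.0, 12.5, 10.2, 6.4, 4.1, 3.2, 3.8, 5.5, 6.9, 7.8]
flow = [4.0, 5.5, 9.0, 12.0, 8.0, 4.5, 2.0, 1.5, 1.8, 2.5, 3.0, 3.8]
print(ratio_estimator_load(conc, flow, 5e8))
```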