913 results for Variable design parameters
Abstract:
In this paper, a set of design parameters, such as the slopes of the upstream and downstream faces of the dam, the radius of the upper arch, the width of the dam at the crest and the height of the vertical upper part of the dam, are given as functions of the characteristics of the valley where the dam is situated, such as its geometry and its geotechnical properties. These relations have been obtained by regression of the design parameters of an arch-gravity dam of minimum concrete volume, placed in a large number of valleys with different characteristics and properties. Elasticities of these design parameters are also discussed.
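To make the regression idea concrete, here is a minimal Python sketch of fitting one design parameter against valley descriptors and computing its elasticity; the variables, functional form and data below are illustrative assumptions, not the paper's actual model:

# Illustrative sketch only: fitting one dam design parameter (e.g. the
# upstream face slope) as a function of valley characteristics. All names,
# numbers and the functional form are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: one row per valley (width, height, rock modulus)
valley = rng.uniform([50, 30, 5], [400, 200, 50], size=(200, 3))
# Hypothetical "optimal" slope per valley, from the volume minimisation
slope = 0.4 + 0.002 * valley[:, 0] - 0.001 * valley[:, 1] + rng.normal(0, 0.02, 200)

# Least-squares regression: slope ~ a0 + a1*width + a2*height + a3*E_rock
X = np.column_stack([np.ones(len(valley)), valley])
coef, *_ = np.linalg.lstsq(X, slope, rcond=None)
print("regression coefficients:", coef)

# Elasticity of the slope with respect to valley width at the mean point:
# e = (dP/dw) * (w_mean / P_mean)
w_mean, p_mean = valley[:, 0].mean(), slope.mean()
print("width elasticity:", coef[1] * w_mean / p_mean)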
Abstract:
The traditional approach to road accident analysis has been based on the use of palliative tools, such as the identification and management of black spots (accident-concentration road sections), or preventive tools, such as road safety audits and inspections. This thesis presents an approach complementary to these tools, from a new perspective: the consideration of road sections where no accidents have occurred; these are the so-called White Road Sections.
The aim of this thesis is to show that certain road design parameters and traffic characteristics, for roads with otherwise similar general characteristics, influence whether or not accidents occur, in addition to risk exposure as the main factor and to other factors. White Road Sections, defined as road sections of a representative length where no fatal or serious-injury accidents have occurred over a long period of time, should not be a product of the randomness of accidents; on the contrary, they might be the consequence of a confluence of specific parameters of road geometry, traffic volume and heavy-vehicle traffic volume. For this research, the toll motorway network and the single-carriageway network of the Spanish National Road Network were considered, a total of 17,000 kilometers, together with the fatal and serious-injury accidents recorded on these networks in the period 2006-2010 (a total of 10,000 accidents). The network analysed represents 65% of the total length of the National Road Network and carries 33% of its traffic volume; in 2013, these networks accounted for 47% of the injury accidents and 60% of the fatalities on the National Road Network. During the research, a database of 250,130 records and more than 3.5 million data items was developed for the toll motorways, and one of 935,042 records and more than 14 million data items for the single carriageways of the National Road Network. Both toll motorways and single carriageways were classified according to their traffic characteristics, so that the analysis compares roads with similar risk exposure. For each road type, a reference length for a White Road Section was defined as the 95th percentile of the lengths of all road sections without fatal or serious-injury accidents during 2006-2010. For toll motorways, this reference length was set at 14.5 kilometers; for single carriageways, at 7.75 kilometers. A detailed database was developed for each road type, including the variable "existence of White Road Section" as well as traffic variables (average daily traffic volume, heavy-vehicle average daily traffic and percentage of heavy vehicles in the total traffic volume), average speed and geometry variables (number of lanes, lane width, shoulder widths, carriageway width, platform width, radius, superelevation, slope and visibility); additional variables, such as the number of injury accidents, the number of fatalities or serious injuries, risk and fatality rates and risk exposure, were also included. The analysis of the presence of White Road Sections in the toll motorway network showed statistically significant differences between the average values of the traffic and geometric design variables in White Road Sections and those in other road sections. In addition, a binary logistic regression model that partially explains the presence of White Road Sections was calibrated, for traffic volumes below 10,000 vehicles per day and for traffic volumes between 10,000 and 15,000 vehicles per day. For the first group, the most influential variables for the presence of White Road Sections were average speed, lane width, left-shoulder width and percentage of heavy vehicles. For the second group, the most influential variables were average speed, carriageway width and percentage of heavy vehicles. For single carriageways, the various analyses performed did not yield a model that classifies White Road Sections satisfactorily.
Even so, it can be stated that the average values of the traffic volume, radius, visibility, superelevation and slope variables show significant differences in White Road Sections compared with other sections, and that these differences vary with traffic volume. The results obtained should be considered the conclusion of a preliminary analysis, since other parameters, relating not only to road design but also to traffic, the environment, the human factor and the vehicle, could influence the phenomenon under study; they were not considered because the information was not available. Along the same line, the circumstances surrounding each trip, including its type and motivation, are an interesting source of information for which no data are available; such information would improve accident analysis in general and this research in particular. Finally, the research has some limitations that should be examined in greater depth in the future, opening up new lines of research of interest.
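As an illustration of the kind of binary logistic model calibrated in the thesis, the sketch below fits such a model on synthetic data; the variable names follow the abstract, but all values and coefficients are placeholders, not the thesis results:

# Minimal sketch of a binary logistic model for the presence of a White
# Road Section. The data here are synthetic placeholders, not the thesis
# dataset, and the coefficients carry no real-world meaning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Hypothetical explanatory variables for motorway sections (AADT < 10,000):
mean_speed = rng.normal(110, 8, n)        # km/h
lane_width = rng.normal(3.6, 0.15, n)     # m
left_shoulder = rng.normal(1.0, 0.3, n)   # m
pct_heavy = rng.uniform(5, 25, n)         # % heavy vehicles

# Synthetic response: probability of being a White Road Section
logit = (-20 + 0.1 * mean_speed + 2.0 * lane_width
         + 1.0 * left_shoulder - 0.05 * pct_heavy)
is_white = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([mean_speed, lane_width, left_shoulder, pct_heavy])
model = LogisticRegression(max_iter=1000).fit(X, is_white)
print("coefficients:", model.coef_, "intercept:", model.intercept_)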
Abstract:
Wave energy conversion differs essentially from other renewable energies in that the dependence between device design and the energy resource is stronger. Dimensioning is therefore considered a key stage in any design project for Wave Energy Converters (WECs). Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account to achieve good performance. The paper proposes an automatic dimensioning methodology to be applied at the initial design stages, and describes the elements needed to carry out the study: an optimization design algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. A parametric analysis is then included, considering different combinations of the key parameters previously introduced. A variety of study cases are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for facing the WEC design stage are given.
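To illustrate the dimensioning-by-optimization idea, the sketch below maximises the mean absorbed power of a crude one-body heaving point absorber over a single dimension; the hydrodynamic model, the resonance treatment and all numbers are rough assumptions, not the paper's WEC or PTO models:

# Illustrative sketch: choose a device scale maximising mean absorbed power
# for a given wave climate, under a crude heaving-cylinder model. Everything
# here is an assumption for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar

rho, g = 1025.0, 9.81          # sea water density, gravity

def mean_power(radius, Hs=2.0, Te=8.0):
    """Crude power estimate for a heaving cylinder of given radius (m)."""
    omega = 2 * np.pi / Te
    area = np.pi * radius**2
    stiffness = rho * g * area                 # hydrostatic stiffness
    mass = rho * area * radius * 1.5           # displaced + added mass (guess)
    # Resonance mismatch penalises designs far from the wave frequency
    omega_n = np.sqrt(stiffness / mass)
    mismatch = 1 / (1 + ((omega - omega_n) / (0.3 * omega))**2)
    excitation = rho * g * area * (Hs / 2)     # rough excitation amplitude
    return 0.25 * excitation * (Hs / 2) * omega * mismatch

# Maximise power (minimise its negative) over an allowed radius range
res = minimize_scalar(lambda r: -mean_power(r), bounds=(1.0, 15.0), method="bounded")
print(f"best radius: {res.x:.2f} m, mean power: {-res.fun/1e3:.1f} kW")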
Abstract:
The design of liquid-retaining structures involves many decisions to be made by the designer based on rules of thumb, heuristics, judgement, codes of practice and previous experience. Structural design problems are often ill-structured, and there is a need to develop programming environments that can incorporate engineering judgement along with algorithmic tools. Recent developments in artificial intelligence have made it possible to develop an expert system that can provide expert advice to the user in the selection of design criteria and design parameters. This paper introduces the development of an expert system for the design of liquid-retaining structures using a blackboard architecture. An expert system shell, Visual Rule Studio, is employed to facilitate the development of this prototype system. It is a coupled system combining symbolic processing with traditional numerical processing. The expert system developed is based on the British Standard Code of Practice BS8007. Explanations are provided to assist inexperienced designers and civil engineering students in learning how to design liquid-retaining structures effectively and sustainably. The use of this expert system in disseminating heuristic knowledge and experience to practitioners and engineering students is discussed.
Abstract:
Process optimisation and optimal control of batch and continuous drum granulation processes are studied in this paper. The main focus of the current research has been: (i) construction of optimisation- and control-relevant population balance models through the incorporation of moisture content, drum rotation rate and bed depth into the coalescence kernels; (ii) investigation of optimal operational conditions using constrained optimisation techniques; (iii) development of optimal control algorithms based on discretized population balance equations; and (iv) comprehensive simulation studies on optimal control of both batch and continuous granulation processes. The objective of steady state optimisation is to minimise the recycle rate with minimum cost for continuous processes. It has been identified that the drum rotation rate, bed depth (material charge), and moisture content of solids are practical decision (design) parameters for system optimisation. The objective for the optimal control of batch granulation processes is to maximise the mass of product-sized particles with minimum time and binder consumption. The objective for the optimal control of the continuous process is to drive the process from one steady state to another in a minimum time with minimum binder consumption, which is also known as the state-driving problem. It has been known for some time that the binder spray rate is the most effective control (manipulative) variable. Although other possible manipulative variables, such as feed flow-rate and additional powder flow-rate, have been investigated in the complete research project, only the single input problem with the binder spray rate as the manipulative variable is addressed in the paper to demonstrate the methodology. It can be shown from simulation results that the proposed models are suitable for control and optimisation studies, and the optimisation algorithms connected with either steady state or dynamic models are successful for the determination of optimal operational conditions and dynamic trajectories with good convergence properties.
Abstract:
The literature relating to sieve plate liquid extraction columns and relevant hydrodynamic phenomena has been surveyed. Mass transfer characteristics during drop formation, rise and coalescence, and related models, were also reviewed. Important design parameters, i.e. flooding, dispersed phase hold-up, drop size distribution, mean drop size, coalescence/flocculation zone height beneath a plate and jetting phenomena, were investigated under non-mass transfer and mass transfer conditions in a 0.45 m diameter, 2.3 m high sieve plate column. This column had provision for four different plate designs, and variable plate spacing and downcomer heights; the system used was Clairsol '350' (dispersed) - acetone - deionised water (continuous), with either direction of mass transfer. Drop size distributions were best described by the functions proposed by Gal-or, and then Mugele-Evans. Using data from this study and the literature, correlations were developed for dispersed phase hold-up, mean drop size in the preferred jetting regime and in the non-jetting regime, and coalescence zone height. A method to calculate the theoretical overall mass transfer coefficient allowing for the range of drop sizes encountered in the column gave the best fit to experimental data. This applied the drop size distribution diagram to estimate the volume percentages of stagnant, circulating and oscillating drops in the drop population. The overall coefficient Kcal was then calculated as the fractional sum of the predicted individual single-drop coefficients and their proportions in the drop population, as written below. In a comparison between the experimental and calculated overall mass transfer coefficients for cases in which all the drops were in the oscillating regime (i.e. the 6.35 mm hole size plate), and for transfer from the dispersed (d) to the continuous (c) phase, the film coefficient kd predicted from the Rose-Kintner correlation, together with kc from that of Garner-Tayeban, gave the best representation. Droplets from the 3.175 mm hole size plate were of a size to be mainly circulating and oscillating; a combination of kd from the Kronig-Brink (circulating) and Rose-Kintner (oscillating) correlations with the respective kc gave the best agreement. The optimum operating conditions for the SPC were identified and a procedure proposed for design from basic single-drop data.
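The fractional-sum construction of Kcal can be written compactly as follows (the notation is assumed here, since the abstract does not give the formula explicitly):

K_{\mathrm{cal}} \;=\; \sum_{i} \phi_i \, K_i , \qquad \sum_{i} \phi_i = 1 , \qquad i \in \{\text{stagnant},\ \text{circulating},\ \text{oscillating}\}

where \phi_i is the volume fraction of drops of type i estimated from the drop size distribution diagram, and K_i is the predicted single-drop overall coefficient for that regime.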
Abstract:
The purpose of this thesis was to identify the optimal design parameters for a jet nozzle which obtains a local maximum shear stress while maximizing the average shear stress on the floor of a fluid-filled system. This research examined how geometric parameters of a jet nozzle, such as the nozzle's angle, height, and orifice, influence the shear stress created on the bottom surface of a tank. Simulations were run using a Computational Fluid Dynamics (CFD) software package to determine shear stress values for a parameterized geometric domain including the jet nozzle. A response surface was created based on the shear stress values obtained from 112 simulated designs. Multi-objective optimization software then used the response surface to generate designs with the best combination of parameters to achieve maximum shear stress and maximum average shear stress. The optimal configuration of parameters achieved larger shear stress values than a commercially available design.
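The response-surface step can be sketched as follows: fit a quadratic surface to sampled shear-stress values over the nozzle parameters, then search the surface for the best design. The objective function below is a synthetic stand-in for the CFD runs, and the parameter ranges are assumptions:

# Sketch of a quadratic response surface over nozzle parameters
# (angle, height, orifice diameter), fitted to synthetic "CFD" samples.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)

def cfd_surrogate(angle, height, orifice):
    """Placeholder for a CFD shear-stress evaluation (synthetic)."""
    return (np.sin(np.radians(angle)) * orifice / height**0.5
            + rng.normal(0, 0.01))

# 112 sampled designs, as in the study
designs = rng.uniform([15, 5, 1], [75, 50, 10], size=(112, 3))
tau = np.array([cfd_surrogate(*d) for d in designs])

# Quadratic response surface: tau ~ 1 + x_i + x_i * x_j
def features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(3), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(features(designs), tau, rcond=None)

# Dense grid search on the fitted surface for the maximum shear stress
grid = np.array(np.meshgrid(np.linspace(15, 75, 30), np.linspace(5, 50, 30),
                            np.linspace(1, 10, 30))).reshape(3, -1).T
best = grid[np.argmax(features(grid) @ beta)]
print("predicted best (angle, height, orifice):", best)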
Abstract:
The objective of this work is to build a rapid, automated numerical design method that makes optimal robot design possible. Two classes of optimal robot design problems are specifically addressed: (1) when the objective is to optimize a pre-designed robot, and (2) when the goal is to design an optimal robot from scratch. In the first case, some of the critical dimensions or specific measures to optimize (design parameters) are varied within an established range; the stress is then calculated as a function of the design parameter(s), and the value of the design parameter(s) that optimizes a pre-determined performance index provides the optimum design, as sketched below. In the second case, the work focuses on the development of an automated procedure for the optimal design of robotic systems, integrating the Pro/Engineer© and MatLab© software packages to draw the robot parts, optimize them, and then re-draw the optimal system parts.
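A minimal sketch of case (1), with a textbook cantilever stress formula standing in for the Pro/Engineer analysis and a hypothetical stress-plus-mass performance index:

# Sketch of case (1): vary one critical dimension of a robot link within an
# allowed range and pick the value minimising a performance index. The
# stress formula is a textbook cantilever stand-in, not the thesis model.
from scipy.optimize import minimize_scalar

F, L = 200.0, 0.5          # payload force (N) and link length (m)

def peak_stress(width, thickness=0.01):
    """Max bending stress of a rectangular cantilever cross-section (Pa)."""
    I = width * thickness**3 / 12          # second moment of area
    return F * L * (thickness / 2) / I

def performance_index(width):
    # Trade stress against mass (heavier links cost actuation effort);
    # the weighting is a hypothetical choice.
    mass_penalty = 1e9 * width
    return peak_stress(width) + mass_penalty

res = minimize_scalar(performance_index, bounds=(0.01, 0.2), method="bounded")
print(f"optimal link width: {res.x*1000:.1f} mm")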
Abstract:
Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This need is particularly felt by scientific applications that must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution to deploy a Platform as a Service (PaaS) over a set of multi-site data centers, applying open source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment to obtain evaluations with different types of TCP sample connections, to demonstrate the functionality of the proposed solution and to obtain throughput measurements in relation to relevant design parameters.
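As a rough sketch of the kind of TCP throughput measurement mentioned above, the following standard-library probe transfers a fixed payload between two hosts and reports the rate; it is illustrative only, not the testbed's actual tooling:

# Minimal TCP throughput probe between two VMs. Run with "server" on one
# host and "client <ip>" on another. Plain standard-library sockets.
import socket, sys, time

PORT, CHUNK, TOTAL = 5001, 64 * 1024, 256 * 1024 * 1024  # 256 MiB transfer

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            print(f"received {received / 2**20:.0f} MiB")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as s:
        start, sent = time.perf_counter(), 0
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    elapsed = time.perf_counter() - start
    print(f"throughput: {sent * 8 / elapsed / 1e6:.1f} Mbit/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])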
Abstract:
The absence of a rapid, low-cost and highly sensitive biodetection platform has hindered the implementation of next-generation, inexpensive, early-stage clinical or home-based point-of-care diagnostics. Label-free optical biosensing with high sensitivity, throughput, compactness and low cost plays an important role in resolving these diagnostic challenges and pushes the detection limit down to the single molecule. Optical nanostructures, specifically resonant waveguide grating (RWG) and nano-ribbon-cavity-based biodetection, are promising in this context. The main element of this dissertation is the design, fabrication and characterization of RWG sensors for different spectral regions (e.g. visible, near infrared) for use in label-free optical biosensing, and the exploration of different RWG parameters to maximize sensitivity and increase detection accuracy. The design and fabrication of a waveguide-embedded resonant nano-cavity are also studied. Multi-parametric analyses were performed using a customized optical simulator to understand the operating principle of these sensors and, more importantly, the relationship between the physical design parameters and the sensor sensitivities. Silicon nitride (SixNy) is a useful waveguide material because of its wide transparency across the whole infrared, visible and part of the UV spectrum, and its comparatively higher refractive index than the glass substrate. SixNy-based RWGs on glass substrates are designed and fabricated applying both electron-beam lithography and low-cost nano-imprint lithography techniques. A chromium-hard-mask-aided nano-fabrication technique is developed for making very high aspect ratio optical nanostructures on glass substrates. An aspect ratio of 10 for very narrow (~60 nm wide) grating lines is achieved, the highest presented so far. The fabricated RWG sensors are characterized for both bulk (183.3 nm/RIU) and surface sensitivity (0.21 nm/nm-layer), and then used for the successful detection of Immunoglobulin-G (IgG) antibodies and antigen (~1 μg/ml), both in buffer and in serum. Widely used optical biosensors, such as surface plasmon resonance and optical microcavities, are limited in separating the bulk response from surface binding events, which is crucial for ultralow-level biosensing under thermal or other perturbations. An RWG-based dual-resonance approach is proposed and verified by controlled experiments for separating the bulk and surface responses. The dual-resonance approach gives a sensitivity ratio of 9.4, whereas the competing polarization-based approach offers only 2.5. The improved performance of the dual-resonance approach would help reduce the probability of false readings in precise bio-assay experiments where thermal variations are probable, as in portable diagnostics.
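The dual-resonance separation amounts to solving a small linear system: each resonance shifts according to its own bulk and surface sensitivities, so two measured shifts determine the two underlying changes. In the sketch below, the second resonance's sensitivities and the measured shifts are assumed values for illustration (only the single-resonance values 183.3 nm/RIU and 0.21 nm/nm-layer are quoted in the abstract):

# Sketch of the dual-resonance idea: two resonances with different bulk and
# surface sensitivities let one solve for the bulk refractive-index change
# and the adlayer thickness change from two measured wavelength shifts.
import numpy as np

# Rows: resonance 1 and 2; columns: (bulk nm/RIU, surface nm/nm-layer)
S = np.array([[183.3, 0.21],
              [ 95.0, 0.12]])   # second resonance: assumed values

measured_shifts = np.array([0.95, 0.50])   # nm, hypothetical experiment

delta_n_bulk, delta_t_surface = np.linalg.solve(S, measured_shifts)
print(f"bulk index change: {delta_n_bulk:.2e} RIU")
print(f"adlayer growth:    {delta_t_surface:.2f} nm")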
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and it facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and the higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improving this work in the future.
Abstract:
On February 6, 1994, a large debris flow developed because of intense rains in an 800-m-high mountain range called Serra do Cubatao, the local name for the Serra do Mar, located along the coast of the state of Sao Paulo, Brazil. It affected the Presidente Bernardes Refinery, owned by Petrobras, in Cubatao. The damages amounted to about US $40 million, covering muck cleaning, repairs and a 3-week interruption of operations. This prompted Petrobras to commission studies, carried out by the authors, to develop protection works, which were built at a cost of approximately US $12 million. The paper describes the studies conducted on debris flow mechanics. A new criterion to define the rainfall intensities that trigger debris flows is presented, as well as a correlation of slipped area with soil porosity and rain intensity. Also presented are (a) the actual grain size distribution of a deposited material, determined by laboratory tests and a large-scale field test, and (b) the size distribution of large boulders along the river bed. Based on theory, empirical experience and back-analysis of the events, the main parameters, such as the front velocity, the peak discharge and the volume of the transported sediments, were determined on a rational basis for the design of the protection works. Finally, the paper describes the set of protection works built, emphasizing their concept and function; they also included some low-cost innovative works.
Abstract:
Transmission and switching in digital telecommunication networks require the distribution of precise time signals among the nodes. Commercial systems usually adopt a master-slave (MS) clock distribution strategy, building slave nodes with phase-locked loop (PLL) circuits. PLLs are responsible for synchronizing their local oscillations with signals from master nodes, providing reliable clocks in all nodes. The dynamics of a PLL is described by an ordinary nonlinear differential equation of order one plus the order of its internal linear low-pass filter. Second-order loops are commonly used because their synchronous state is asymptotically stable and their lock-in range and design parameters are expressed by a linear equivalent system [Gardner FM. Phaselock techniques. New York: John Wiley & Sons; 1979]. In spite of being simple and robust, second-order PLLs frequently present double-frequency terms in the phase detector (PD) output, and it is very difficult to adapt a first-order filter to cut off these components [Piqueira JRC, Monteiro LHA. Considering second-harmonic terms in the operation of the phase detector for second-order phase-locked loop. IEEE Trans Circuits Syst 2003;50(6):805-9; Piqueira JRC, Monteiro LHA. All-pole phase-locked loops: calculating lock-in range by using Evan's root-locus. Int J Control 2006;79(7):822-9]. Consequently, higher-order filters are used, resulting in nonlinear loops of order greater than 2. Such systems, due to their high order and nonlinear terms, can, depending on parameter combinations, present undesirable behaviors resulting from bifurcations, such as error oscillation and chaos, which decrease synchronization ranges. In this work, we consider a second-order Sallen-Key loop filter [van Valkenburg ME. Analog filter design. New York: Holt, Rinehart & Winston; 1982], implying a third-order PLL. The resulting lock-in range of the third-order PLL is determined by two bifurcation conditions: a saddle-node and a Hopf.
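To illustrate the loop-order statement above (loop order = one plus the filter order), the following baseband simulation of a second-order PLL, i.e. a loop with a first-order lag filter, tracks a frequency step; all parameter values are arbitrary illustrative choices, not taken from the cited references:

# Baseband simulation of a second-order PLL (first-order lag filter).
# State: phase error phi and filter output v. Parameters are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

K, tau = 20.0, 0.05        # loop gain, filter time constant
d_omega = 5.0              # frequency step applied to the input (rad/s)

def pll(t, y):
    phi, v = y
    dphi = d_omega - K * v                 # VCO tracks the filtered PD output
    dv = (np.sin(phi) - v) / tau           # first-order low-pass of sin(phi)
    return [dphi, dv]

sol = solve_ivp(pll, (0, 2), [0.0, 0.0], max_step=1e-3)
print(f"steady-state phase error: {sol.y[0, -1]:.3f} rad "
      f"(expected asin(d_omega/K) = {np.arcsin(d_omega / K):.3f})")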
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well-known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed-loop system is mean-square stable and minimizes the stationary expected value of the mean-square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequality (LMI) approach, which can be extended to consider convex polytopic uncertainties in the parameters of the possible modes of operation of the system and in the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters has been considered in the literature. Finally, we illustrate the results with an example.
Briefing: Factored material properties and limit state loads - unlikely extreme or impossible pretense
Abstract:
In the limit state design (LSD) method, each design criterion is formally stated and assessed using a performance function. The performance function defines the relationship between the design parameters and the design criterion. In practice, LSD involves factoring up loads and factoring down calculated strengths and material parameters. This provides a convenient way to carry out routine probability-based design. The factors are statistically calculated to produce a design with an acceptably low probability of failure. Hence the ultimate load and the design material properties are mathematical concepts that have no physical interpretation; they may be physically impossible. Similarly, the appropriate analysis model is also defined by the performance function and may not describe the real behaviour at the perceived physical equivalent limit condition. These points must be understood to avoid confusion in the discussion and application of partial factor LSD methods.
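A brief numerical illustration of this point, with arbitrary values for the characteristic load Q_k, the characteristic resistance R_k and the partial factors:

E_d = \gamma_f \, Q_k = 1.5 \times 100\,\mathrm{kN} = 150\,\mathrm{kN},
\qquad
R_d = \frac{R_k}{\gamma_m} = \frac{240\,\mathrm{kN}}{1.4} \approx 171\,\mathrm{kN},
\qquad
E_d \le R_d .

The check is satisfied, yet neither 150 kN nor 171 kN corresponds to a load or strength expected to occur physically, which is precisely the point made above.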