963 results for Reliability level
Abstract:
An approach for the analysis of uncertainty propagation in reliability-based design optimization of composite laminate structures is presented. Using the Uniform Design Method (UDM), a set of design points is generated over a domain centered on the mean reference values of the random variables. A methodology based on the inverse optimal design of composite structures to achieve a specified reliability level is proposed, and the corresponding maximum load is obtained as a function of the ply angle. Using the generated UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on an evolutionary learning process. A Monte Carlo simulation using the developed ANN is then performed to simulate the behavior of the critical Tsai number, the structural reliability index, and their relative sensitivities as functions of the laminate ply angle. The results are generated for uniformly distributed random variables on a domain centered on the mean values. The statistical analysis of the results enables the study of the variability of the reliability index and of its sensitivity with respect to the ply angle. Numerical examples showing the utility of the approach for the robust design of angle-ply laminates are presented.
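A minimal sketch of the Monte Carlo step described above, assuming a trained ANN surrogate of the critical Tsai number (the callable `ann_tsai` below is a hypothetical placeholder) and uniformly distributed inputs centred on their mean values; the reliability index is computed here from the first two moments of the safety margin, and failure is assumed when the Tsai number falls below 1, which are common conventions rather than necessarily the paper's exact definitions.

```python
import numpy as np

def monte_carlo_reliability(ann_tsai, means, half_widths, n_samples=100_000, seed=0):
    """Propagate uniform input uncertainty through an ANN surrogate of the
    critical Tsai number and estimate a second-moment reliability index.

    ann_tsai    : callable mapping an (n, d) input array to (n,) Tsai numbers
    means       : (d,) mean values of the random variables
    half_widths : (d,) half-widths of the uniform variation around the means
    """
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    half_widths = np.asarray(half_widths, dtype=float)
    # Uniform samples on the domain centred at the mean values.
    x = means + rng.uniform(-1.0, 1.0, size=(n_samples, len(means))) * half_widths
    tsai = ann_tsai(x)                         # one batched surrogate evaluation
    margin = tsai - 1.0                        # assumed failure criterion: Tsai number < 1
    p_fail = np.mean(margin < 0.0)
    beta = margin.mean() / margin.std(ddof=1)  # mean / standard deviation of the margin
    return p_fail, beta
```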
Abstract:
This paper presents a chance-constrained linear programming formulation for the operation of a multipurpose reservoir. The release policy is defined by a chance constraint requiring that the probability of the irrigation release in any period equalling or exceeding the irrigation demand is at least a specified value P (called the reliability level). The model determines the maximum annual hydropower produced while meeting the irrigation demand at the specified reliability level. The model accounts for the variation in the reservoir water level elevation and for the operating range within which the turbine operates. A linear approximation of the nonlinear power production function is assumed, and the solution is obtained within a specified tolerance limit. The inflow into the reservoir is considered random. The chance constraint is converted into its deterministic equivalent using a linear decision rule and the inflow probability distribution. The application of the model is demonstrated through a case study.
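For illustration, a hedged sketch of how a chance constraint of this kind is typically converted to its deterministic equivalent; the notation below (release R_t, demand D_t, random inflow I_t with cumulative distribution F_{I_t}, decision variable b_t) and the simplified linear decision rule R_t = I_t - b_t are generic assumptions for the example, not necessarily the paper's formulation.

```latex
\Pr\left[R_t \ge D_t\right] \ge P,
\qquad R_t = I_t - b_t \quad\text{(linear decision rule)}
\;\Longrightarrow\;
\Pr\left[I_t \ge D_t + b_t\right] \ge P
\;\Longleftrightarrow\;
D_t + b_t \le F_{I_t}^{-1}(1 - P)
```

The stochastic constraint is thus replaced by the linear deterministic constraint b_t <= F_{I_t}^{-1}(1 - P) - D_t, which a linear programming solver can handle directly.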
Abstract:
Master's dissertation, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at the mean values of the random variables, aimed at studying the variability over the design space. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from the optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN, a Monte Carlo simulation procedure is implemented and the variability of the structural response is studied through global sensitivity analysis (GSA). The GSA is based on first-order Sobol indices and relative sensitivities, and an algorithm for obtaining the Sobol indices is proposed. The most important sources of uncertainty are identified.
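As an illustration of how first-order Sobol indices are commonly estimated by Monte Carlo simulation through a surrogate, here is a minimal pick-and-freeze sketch; `model` (the trained ANN) and `sampler` (the uniform input distribution) are hypothetical placeholders, and the Saltelli-type estimator used below is a standard choice, not necessarily the specific GSA algorithm proposed in the paper.

```python
import numpy as np

def first_order_sobol(model, sampler, n=10_000, seed=0):
    """Estimate first-order Sobol indices S_i of a scalar response.

    model   : callable mapping an (n, d) input array to an (n,) response
    sampler : callable(n, rng) returning an (n, d) array of independent inputs
    """
    rng = np.random.default_rng(seed)
    a = sampler(n, rng)                  # two independent sample matrices A and B
    b = sampler(n, rng)
    f_a, f_b = model(a), model(b)
    var_y = np.var(np.concatenate([f_a, f_b]), ddof=1)
    s = np.empty(a.shape[1])
    for i in range(a.shape[1]):
        ab_i = a.copy()
        ab_i[:, i] = b[:, i]             # matrix A with column i taken from B
        f_ab = model(ab_i)
        # Saltelli-type estimator of V_i = Var(E[Y | x_i])
        s[i] = np.mean(f_b * (f_ab - f_a)) / var_y
    return s
```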
Abstract:
Abstract taken from the publication. With the financial support of the MIDE department of UNED.
Abstract:
Among industrial waste producers, the plants that manufacture porcelain tiles for the construction industry and the oil industry, during its exploration and production stage, play an important role. Much research has been carried out by both academia and the productive sector, sometimes reintroducing these wastes into the production line that generated them, sometimes using them in areas unrelated to their generation, as in the production of concrete and mortar for construction, but each waste has been studied in isolation. This research studies the combined incorporation, into a clay matrix, of oil-well drill cuttings and of the polishing residue generated in the final finishing stage of porcelain tile production, for the manufacture of red pottery, specifically bricks, ceramic blocks and roof tiles. The clay comes from the municipality of São Gonçalo, RN; the drilling waste is from the Natal basin, in Rio Grande do Norte; and the polishing residue comes from a porcelain tile producer in the State of Paraíba. A mixture of a plastic clay and a non-plastic clay, 50% of each, was used, and formulations were established by adding the two residues to this clay matrix. In the formulations, each residue was incorporated at contents from a minimum of 2.5% to a maximum of 12.5%, in steps of 2.5%, with the total waste content limited to 15%. It should be noted that the porcelain tile polishing residue is a class IIa (non-inert) waste. The materials were characterized by XRF, XRD, TG, DTA, laser granulometry and the plasticity index. The technological properties of water absorption, apparent porosity, linear firing shrinkage, flexural strength and bulk density were evaluated after sintering the pieces at 850 °C, 950 °C and 1050 °C, with firing times of 3 h, 3 h 30 min and 3 h 50 min, respectively, a heating rate of 10 °C/min and a soaking time of 30 minutes, for all formulations. To better understand the influence of each residue and of the temperature on the evaluated properties, factorial planning and its response surfaces were used to interpret the results. It was found that the temperature has no statistical significance, at the 95% reliability level, on the flexural strength, and that it decreases the water absorption and the porosity but increases the shrinkage and the bulk density. The results showed the feasibility of the intended incorporation, provided that the temperature is adjusted to each product and formulation, and that the temperatures of 850 °C and 950 °C were the ones suited to the largest number of formulations.
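A hedged sketch of the kind of factorial analysis described above, using ordinary least squares with interaction terms to judge factor significance at the 95% level; the file name and column names are illustrative placeholders, not the actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical results table: residue contents (wt%), firing temperature (°C)
# and measured flexural strength (MPa). Column names are placeholders.
df = pd.read_csv("firing_results.csv")  # columns: drill_res, polish_res, temp, strength

# Factorial model with all two- and three-way interactions between the factors.
model = smf.ols("strength ~ drill_res * polish_res * temp", data=df).fit()

# A factor whose p-value exceeds 0.05 is not significant at the 95% level.
print(model.summary())
```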
Abstract:
Problems such as voltage rise at the end of a feeder, demand-supply unbalance under fault conditions, power quality degradation, increased power losses, and reduced reliability levels may occur if Distributed Generators (DGs) are not properly allocated. For this reason, researchers have employed several solution techniques to solve the problem of optimal allocation of DGs. This work focuses on the ancillary service of reactive power support provided by DGs. The main objective is to price this service by determining the costs that a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). The LOC is determined for different DG allocation alternatives obtained from a multi-objective optimization process that minimizes the losses in the lines of the system and the costs of active power generation from the DGs, and maximizes the static voltage stability margin of the system. The effectiveness of the proposed methodology in improving the stated goals was demonstrated using the IEEE 34-bus distribution test feeder with two DGs considered for allocation. © 2011 IEEE.
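A hedged sketch of the loss-of-opportunity idea described above: when the reactive output requested from a DG pushes it against its apparent-power capability limit, active power, and hence revenue, must be curtailed. The circular capability curve, the function name and the numbers below are illustrative assumptions, not the paper's pricing model.

```python
import math

def loss_of_opportunity_cost(s_rating_mva, q_required_mvar, p_scheduled_mw,
                             energy_price_per_mwh, hours=1.0):
    """Opportunity cost a DG incurs when reactive support forces it to curtail
    active power, assuming a simple circular capability curve P^2 + Q^2 <= S^2."""
    # Maximum active power compatible with the requested reactive output.
    p_available = math.sqrt(max(s_rating_mva**2 - q_required_mvar**2, 0.0))
    curtailed_mw = max(p_scheduled_mw - p_available, 0.0)
    return curtailed_mw * hours * energy_price_per_mwh

# Example: a 5 MVA unit scheduled at 4.8 MW is asked to supply 2.5 Mvar for one hour
# at an assumed energy price of 60 $/MWh.
print(loss_of_opportunity_cost(5.0, 2.5, 4.8, 60.0))
```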
Abstract:
There are many distributed applications that require a reliable multicast service, including distributed databases, distributed operating systems, distributed interactive simulation systems, and applications for the distribution of software, publications or news. Although the application domain of such distributed systems was originally confined to a single subnetwork (for example, a Local Area Network), it later became necessary to extend their applicability to internetworks. The traditional approach to the reliable multicast problem in internetworks has been based mainly on two points: (1) providing many service guarantees in one and the same protocol (for example, reliability, atomicity and ordering), and some of them at different levels, without taking into account that many multicast applications that require reliability do not need other guarantees; and (2) extending solutions adopted in the unicast environment to the multicast environment without considering their distinctive characteristics. Hence, the multicast reliability problem has been tackled with end-to-end protocols (transport protocols) using error recovery schemes that are centralized (retransmissions are made from a single point, normally the source) and global (the requested packets are retransmitted to the whole group). In general, these approaches have resulted in protocols that are inefficient in execution time, have scalability problems, do not make optimal use of network resources and are not suitable for delay-sensitive applications. In this thesis, the multicast reliability problem is investigated in internetworks operating in datagram mode and a new way of approaching the problem is presented: it is better to solve the multicast reliability problem at the network level and to separate reliability from other service guarantees, which can be provided by a higher-level protocol or by the application itself. Following this new approach, a reliable multicast protocol operating at the network level (called RMNP) has been designed. The most representative characteristics of the RMNP are the following: (1) it follows a sender-oriented approach, which allows a very high level of reliability to be achieved; (2) it uses an error recovery scheme that is distributed (retransmissions are made from certain intermediate routers that are always closer to the members than the source itself) and of restricted scope (the reach of the retransmissions is restricted to a certain number of members), which makes it possible to optimize the mean distribution delay and reduce the overhead introduced by retransmissions; and (3) it incorporates control packet aggregation and filtering functions in certain routers, which avoid implosion problems and reduce the traffic flowing towards the source. In order to evaluate the behaviour of the designed protocol, simulation tests were performed; the main conclusions are that the RMNP scales correctly with group size, makes optimal use of network resources and is suitable for delay-sensitive applications.
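To make the recovery scheme concrete, here is a toy sketch, under assumed names and data structures rather than RMNP's actual design, of a router-level agent that caches data packets, retransmits locally to its own subtree (restricted-scope recovery), and forwards one aggregated loss report upstream instead of one per member (implosion avoidance).

```python
from dataclasses import dataclass, field

@dataclass
class RouterRecoveryAgent:
    """Toy model of an intermediate-router recovery agent: serve loss reports
    from a local cache when possible, otherwise aggregate them upstream."""
    cache: dict = field(default_factory=dict)           # seq -> payload
    pending_upstream: set = field(default_factory=set)  # seqs this router also misses

    def on_data(self, seq, payload):
        self.cache[seq] = payload

    def on_member_report(self, missing_seqs):
        """Handle a loss report from a downstream member; return the (possibly
        empty) aggregated report to forward towards the source."""
        to_forward = set()
        for seq in missing_seqs:
            if seq in self.cache:
                self.retransmit_locally(seq)             # served close to the members
            else:
                to_forward.add(seq)                      # cannot be served here
        new = to_forward - self.pending_upstream         # filter already-reported seqs
        self.pending_upstream |= new
        return sorted(new)

    def retransmit_locally(self, seq):
        print(f"retransmit {seq} to the local subtree only")
```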
Abstract:
Background: Among the chronic complications of diabetes, the most prominent is neuropathy. It reduces quality of life, since it leads the patient to difficulties such as pain, paraesthesia, ulceration, and even impaired walking. Objectives: To determine the prevalence of distal symmetric neuropathy in patients belonging to the diabetes clubs of district 01D01 and its relationship with their lifestyles. Materials and Methods: This is a cross-sectional study with a sample of 162 randomly selected patients, based on a 30% prevalence of neuropathy, a 95% confidence level and a 5% inference error. Two validated questionnaires (NSS+NDS) were applied. For the analysis we used SPSS 15. The demographic variables were analysed with descriptive statistics. The relationship between the dependent and independent variables was assessed through the prevalence ratio, with a 95% confidence interval, the chi-squared test and the p-value. Results: The prevalence of diabetic neuropathy is 54.9%: 73.7% in men and 49.2% in women. The protective factors are: receiving instructions on foot care, 66.7% (PR 2 and p-value 0.033), and good blood glucose control, 95.9% (PR 2.3 and p-value 0.001). Conclusions: Distal symmetric diabetic neuropathy is more frequent in men than in women. In addition, good blood glucose control and receiving instructions on foot care were found to reduce the risk of neuropathy.
Abstract:
This paper discusses the major obstacles to the adoption of low-cost level crossing warning devices (LCLCWDs) in Australia and reviews those trialed in Australia and internationally. The argument for the use of LCLCWDs is that, for a given investment, more passive level crossings can be treated, thereby increasing the safety benefits across the rail network. This approach, in theory, reduces risk across the network by using a combination of low-cost and conventional level crossing interventions, similar to what is done in the road environment. The paper concludes that, in order to determine whether this approach can produce better safety outcomes than the current approach of incrementally upgrading level crossings with conventional interventions, it is necessary to perform rigorous risk assessments and cost-benefit analyses of LCLCWDs. Further research is also needed to determine how best to differentiate less reliable LCLCWDs from conventional warning devices through the use of different warning signs and signals. The paper presents a strategy for progressing the research and development of LCLCWDs and details how the Cooperative Research Centre (CRC) for Rail Innovation is pursuing this strategy through its current and future affordable level crossing projects.
Abstract:
Level II reliability theory provides an approximate method whereby the reliability of a complex engineering structure with multiple strength and loading variables may be estimated. The technique has previously been applied to both civil and offshore structures with considerable success. The aim of the present work is to assess the applicability of the method to aircraft structures, and to this end landing gear design is considered in detail. It is found that the technique yields useful information regarding the structural reliability and, furthermore, that it enables the critical design parameters to be identified.
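As a minimal illustration of a Level II (first-order second-moment) calculation, the sketch below assumes independent, roughly normal strength and load variables and a linear limit state; it is a generic example of the class of method discussed above, not the landing-gear model from the paper.

```python
import math

def level_ii_beta(mu_strength, sigma_strength, mu_load, sigma_load):
    """Second-moment (Level II) reliability index for the linear limit state
    M = R - S with independent strength R and load S."""
    mu_m = mu_strength - mu_load
    sigma_m = math.sqrt(sigma_strength**2 + sigma_load**2)
    beta = mu_m / sigma_m
    # Nominal failure probability if the margin M is approximately normal.
    p_f = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
    return beta, p_f

# Example: strength 100 +/- 10 kN against load 60 +/- 15 kN.
print(level_ii_beta(100.0, 10.0, 60.0, 15.0))
```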
Abstract:
China Computer Federation (中国计算机学会)