984 results for Reliability level
Abstract:
An approach for the analysis of uncertainty propagation in reliability-based design optimization of composite laminate structures is presented. Using the Uniform Design Method (UDM), a set of design points is generated over a domain centered on the mean reference values of the random variables. A methodology based on inverse optimal design of composite structures to achieve a specified reliability level is proposed, and the corresponding maximum load is obtained as a function of ply angle. Using the generated UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on an evolutionary learning process. A Monte Carlo simulation using the developed ANN is then performed to simulate the behavior of the critical Tsai number, the structural reliability index, and their relative sensitivities as functions of the ply angle of the laminates. The results are generated for uniformly distributed random variables on a domain centered on the mean values. The statistical analysis of the results enables the study of the variability of the reliability index and of its sensitivity relative to the ply angle. Numerical examples showing the utility of the approach for the robust design of angle-ply laminates are presented.
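The core loop described above — sample the random variables, evaluate a surrogate of the critical Tsai number, and convert the estimated failure probability into a reliability index — can be sketched as follows. This is an illustrative toy, not the paper's method: the one-variable `toy_tsai` lambda and its uniform sampler are hypothetical stand-ins for the trained ANN and the laminate's random variables.

```python
from statistics import NormalDist
import random

def reliability_index(surrogate, sampler, n=100_000, seed=0):
    """Monte Carlo estimate of the failure probability Pf on a surrogate
    model, converted to the reliability index beta = -Phi^{-1}(Pf)."""
    rng = random.Random(seed)
    # A piece "fails" when its Tsai number drops below 1.
    failures = sum(surrogate(sampler(rng)) < 1.0 for _ in range(n))
    pf = failures / n
    return pf, -NormalDist().inv_cdf(pf)

# Hypothetical one-variable surrogate: failure (Tsai number < 1) occurs
# whenever the uniform variable exceeds 1.0, so Pf should be near 1/6.
toy_tsai = lambda x: 2.0 - x
toy_sampler = lambda rng: rng.uniform(0.0, 1.2)

pf, beta = reliability_index(toy_tsai, toy_sampler)
```

With 100 000 samples the estimate is tight enough that `pf` lands near 1/6 and `beta` near 0.97.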
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at the mean values of the random variables, aimed at studying the variability across this space. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from the optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN, a Monte Carlo simulation procedure is implemented and the variability of the structural response is studied through global sensitivity analysis (GSA). The GSA is based on first-order Sobol indices and relative sensitivities. An appropriate GSA algorithm aimed at obtaining the Sobol indices is proposed. The most important sources of uncertainty are identified.
Abstract:
This final-degree-project report is a study, with a basic implementation, of the current state of the art in existing wireless technologies/protocols, focused in particular on ZigBee running on the Texas Instruments CC2430 platform, for an industrial application with a low data-transfer rate that is not highly complex but requires great versatility and reliability.
Abstract:
TEIXEIRA, José João Lopes. Department of Agricultural Engineering, Centre for Agrarian Sciences, Federal University of Ceará, August 2011. Hydrosedimentology and water availability of the catchment of the Poilão Dam, Cape Verde. Advisor: José Carlos de Araújo. Examiners: George Leite Mamede, Pedro Henrique Augusto Medeiros. The Cape Verde archipelago, off the west coast of Africa, is influenced by the Sahara desert, which gives it a climate of very low rainfall distributed irregularly in space and time. Rainfall is highly concentrated, generating large runoff volumes to the sea. Increasing water availability requires not only the construction and maintenance of infrastructure to capture and conserve rainwater, but also efficient management of these resources. The capture, storage and mobilization of surface water through dam construction is currently one of the strategic axes of Cape Verde state policy. Studies of the hydrological and sedimentological behaviour of a reservoir and of its contributing catchment are basic prerequisites for properly sizing, managing and monitoring such infrastructure. In this context, the present study aimed to systematize hydrological and sedimentological information on the catchment of the Poilão Dam (BP) and to present a long-term operating proposal. The study area occupies 28 km² upstream within the Ribeira Seca catchment (BHRS) on Santiago Island. The catchment elevation ranges from 99 m, at the dam site, to 1394 m. The study used and systematized the 1973-2010 rainfall series, instantaneous discharge records for the period 1984-2000, and agro-climatic records for the study area (1981-2004). Gaps in both the runoff and the suspended sediment discharge records were filled using the rating-curve method. To estimate sediment yield in the catchment, the Universal Soil Loss Equation (USLE) and the sediment delivery ratio (SDR) were applied. The sediment retention (trap) efficiency of the reservoir was estimated by Brune's method, and the sediment distribution by the empirical area-reduction method described by Borland and Miller and revised by Lara. The VYELAS computational code, developed by Araújo and based on Campos' approach, was used to generate and simulate yield-versus-reliability curves. The reduction in withdrawal yield over the period 2006-2026 caused by reservoir silting was also evaluated. It was concluded that mean annual precipitation is 323 mm, 73% of which is concentrated in August and September; the contributing catchment has a curve number (CN) of 76, an initial abstraction (Ia) of 26 mm, a runoff coefficient of 19% and an annual inflow of 1.7 hm³ (cv = 0.73); the water availability at 85% reliability is estimated at 0.548 hm³/year, not the 0.671 hm³/year indicated in the original design. With a sediment discharge estimated at 22,185 m³/year, it is concluded that, up to 2026, the reservoir capacity will shrink at a rate of 1.8% per year due to silting, causing a 41% reduction in the initial water availability. By then, losses to evaporation and spillage will be of the order of 81% of the reservoir inflow. Based on these results, an operating proposal for the BP was presented.
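The silting projection reported above amounts to subtracting the annually trapped sediment volume from the storage. A sketch, with illustrative numbers only: the 1.23 hm³ initial capacity below is a hypothetical value, chosen simply so that the abstract's deposition of 22,185 m³/yr corresponds to roughly 1.8% per year.

```python
def projected_capacity(c0_m3, deposited_m3_per_year, years):
    """Remaining reservoir storage after `years` of constant annual
    sediment deposition (inflow times the Brune trap efficiency)."""
    return max(c0_m3 - deposited_m3_per_year * years, 0)

C0 = 1_230_000                                 # hypothetical capacity, m3
c_2026 = projected_capacity(C0, 22_185, 20)    # 20 years of deposition
annual_rate = 22_185 / C0                      # fraction lost per year
```

Under these assumptions the reservoir loses about 36% of its storage over the 20-year horizon.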
Abstract:
Abstract taken from the publication. With financial support from the MIDE department of UNED.
Abstract:
Among industries, those producing porcelain tiles for the construction sector, and the oil industry during its exploration and production phase, play an important role in the generation of waste. Much research has been carried out by both academia and the productive sector, sometimes reintroducing these wastes into the same production line that generated them, sometimes using them in areas unrelated to their generation, such as the production of concrete and mortar for construction, but always one waste at a time. This research studies the combined incorporation of oil-well drill cuttings and porcelain-polishing residue, the latter generated in the final finishing stage of that product, into a clay matrix for the production of red ceramics, specifically bricks, ceramic blocks and roof tiles. The clay comes from the municipality of São Gonçalo, RN; the drilling waste is from the Natal basin, in Rio Grande do Norte; and the polishing residue comes from a porcelain tile producer in the State of Paraíba. A mixture of a plastic clay and a non-plastic clay, 50% each, was used, and formulations were defined by adding the two residues to this clay matrix. In the formulations, each residue was incorporated at percentages from a minimum of 2.5% to a maximum of 12.5%, in steps of 2.5%, such that the total waste content did not exceed 15%. It should be noted that the porcelain polishing residue is class IIa (non-inert). The materials were characterized by XRF, XRD, TG, DTA, laser granulometry and the plasticity index.
The technological properties of water absorption, apparent porosity, linear firing shrinkage, flexural strength and bulk density were evaluated after sintering the pieces at 850 °C, 950 °C and 1050 °C, with firing times of 3 h, 3 h 30 min and 3 h 50 min, respectively, a heating rate of 10 °C/min for all formulations, and a 30-minute hold. To better understand the influence of each residue and of temperature on the evaluated properties, factorial design and its response surfaces were used to interpret the results. It was found that temperature has no statistical significance, at a 95% reliability level, for flexural strength, and that it decreases water absorption and porosity but increases shrinkage and bulk density. The results showed the feasibility of the intended incorporation, provided the temperature is adjusted to each product and formulation; the temperatures of 850 °C and 950 °C were the ones suited to the largest number of formulations.
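The water-related properties evaluated above are conventionally computed from Archimedes weighings of the fired pieces. A sketch of the standard relations; the gram values in the example are hypothetical.

```python
def fired_piece_properties(m_dry, m_wet, m_immersed):
    """Archimedes relations for a fired ceramic test piece (masses in g;
    water density taken as 1 g/cm3, so m_wet - m_immersed is the
    exterior volume in cm3)."""
    volume = m_wet - m_immersed
    water_absorption = 100.0 * (m_wet - m_dry) / m_dry     # %
    apparent_porosity = 100.0 * (m_wet - m_dry) / volume   # %
    bulk_density = m_dry / volume                          # g/cm3
    return water_absorption, apparent_porosity, bulk_density

# Hypothetical weighings of one test piece
wa, ap, rho = fired_piece_properties(m_dry=20.0, m_wet=22.0, m_immersed=12.0)
```

For these example masses the relations give 10% absorption, 20% apparent porosity and a bulk density of 2.0 g/cm³.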
Abstract:
Problems such as voltage rise at the end of a feeder, supply-demand imbalance under fault conditions, power quality decline, increased power losses, and reduced reliability levels may occur if Distributed Generators (DGs) are not properly allocated. For this reason, researchers have employed several solution techniques to solve the problem of optimal allocation of DGs. This work focuses on the ancillary service of reactive power support provided by DGs. The main objective is to price this service by determining the costs a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). The LOC is determined for different DG allocation alternatives resulting from a multi-objective optimization process that aims to minimize the losses in the lines of the system and the costs of active power generation from DGs, and to maximize the static voltage stability margin of the system. The effectiveness of the proposed methodology in improving the stated goals was demonstrated using the IEEE 34-bus distribution test feeder with two DGs considered for allocation. © 2011 IEEE.
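The LOC idea above can be illustrated with a simplified circular capability curve: delivering reactive power Q within the apparent-power limit S caps the active power at sqrt(S² - Q²), and the curtailed megawatts are priced at the market rate. This is only a sketch under that simplifying assumption, not the paper's multi-objective formulation, and all numbers are hypothetical.

```python
import math

def loss_of_opportunity_cost(s_mva, q_mvar, price_per_mwh, hours=1.0):
    """Active-power sales a DG gives up while supplying reactive power Q
    inside its apparent-power limit S (circular capability curve)."""
    p_cap = math.sqrt(s_mva ** 2 - q_mvar ** 2)  # max active power given Q
    curtailed_mw = s_mva - p_cap                 # lost active-power sales
    return curtailed_mw * price_per_mwh * hours

# Hypothetical 5 MVA unit asked to supply 3 Mvar at $50/MWh
loc = loss_of_opportunity_cost(s_mva=5.0, q_mvar=3.0, price_per_mwh=50.0)
```

Here the 3 Mvar request caps active output at 4 MW, so one MW of sales is lost each hour.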
Abstract:
There are many distributed applications that require a reliable multicast service, including distributed databases, distributed operating systems, distributed interactive simulation systems, and applications for the distribution of software, publications or news. Although the application domain of such distributed systems was originally confined to a single subnetwork (for example, a Local Area Network), it later became necessary to extend their applicability to internetworks. The traditional approach to the reliable multicast problem in internetworks has been based mainly on two points: (1) providing many service guarantees in one and the same protocol (for example reliability, atomicity and ordering), some of them at different levels, without taking into account that many multicast applications requiring reliability do not need the other guarantees; and (2) extending solutions adopted in the unicast environment to the multicast environment without considering their distinctive characteristics. Hence, the attempted solutions to the multicast reliability problem have been end-to-end protocols (transport protocols) with error recovery schemes that are centralized (retransmissions are made from a single point, normally the source) and global (the requested packets are retransmitted to the whole group). In general, these approaches have produced protocols that are inefficient in execution time, have scaling problems, do not make optimal use of network resources, and are not suitable for delay-sensitive applications.
This thesis investigates the reliable multicast problem in internetworks operating in datagram mode and presents a novel way of approaching it: it is better to solve the multicast reliability problem at the network level and to separate reliability from other service guarantees, which can be supplied by a higher-level protocol or by the application itself. Following this new approach, a reliable multicast protocol operating at the network level (called RMNP) has been designed.
The most representative characteristics of the RMNP are the following: (1) it takes a sender-oriented approach, which makes a very high reliability level possible; (2) it uses an error recovery scheme that is distributed (retransmissions are made from certain intermediate routers that are always closer to the members than the source itself) and of restricted scope (the reach of the retransmissions is confined to a certain number of members), which makes it possible to optimize the mean distribution delay and reduce the overhead introduced by retransmissions; (3) certain routers incorporate aggregation and filtering functions for control packets, which avoid implosion problems and reduce the traffic flowing towards the source. To evaluate the behaviour of the designed protocol, simulation tests were performed; the main conclusions are that the RMNP scales correctly with group size, makes optimal use of network resources, and is suitable for delay-sensitive applications.
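Two of the mechanisms described in this abstract — restricted-scope retransmission from caching routers, and aggregation of control packets — can be caricatured in a few lines. This is a toy sketch of the two ideas, not the actual RMNP state machine; all names are hypothetical.

```python
class CachingRouter:
    """Toy intermediate router: it caches forwarded packets so it can
    (1) retransmit only to the members that asked (restricted scope) and
    (2) forward a single aggregated NACK per sequence number upstream."""

    def __init__(self):
        self.cache = {}    # seq -> payload
        self.pending = {}  # seq -> set of members awaiting recovery

    def forward(self, seq, payload):
        self.cache[seq] = payload

    def on_nack(self, seq, member):
        if seq in self.cache:
            # Local recovery: retransmit to this member only,
            # never to the whole group and never from the source.
            return ("retransmit", member, self.cache[seq])
        first = seq not in self.pending
        self.pending.setdefault(seq, set()).add(member)
        # Aggregation: only the first NACK for seq travels to the source.
        return ("nack_upstream", seq) if first else ("suppressed", seq)

r = CachingRouter()
r.forward(1, b"data")
a1 = r.on_nack(1, "m1")   # served from the local cache
a2 = r.on_nack(2, "m1")   # not cached: forwarded upstream once
a3 = r.on_nack(2, "m2")   # duplicate NACK suppressed (no implosion)
```

The suppression step is what prevents the implosion problem: however many members miss packet 2, the source sees one control packet.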
Abstract:
Background: Of the chronic complications of diabetes, the most prominent is neuropathy. It reduces quality of life, since it leads patients to difficulties such as pain, paresthesias, ulcerations, and even impaired walking. Objectives: To determine the prevalence of distal symmetric neuropathy in patients belonging to the diabetic clubs of district 01D01 and its relationship with their lifestyles. Materials and methods: This is a cross-sectional study with a sample of 162 randomly selected patients, sized on the basis of a 30% neuropathy prevalence, a 95% confidence level and a 5% inference error. Two validated questionnaires (NSS+NDS) were applied. For the analysis we used SPSS 15. Demographic variables were analysed with descriptive statistics. The relationship between the dependent and independent variables was evaluated through the prevalence ratio, with a 95% confidence interval, the chi-squared statistic and the p-value. Results: The prevalence of diabetic neuropathy is 54.9%: 73.7% in men and 49.2% in women. The protective factors are receiving instructions on foot care, 66.7% (PR 2, p = 0.033), and good blood glucose control, 95.9% (PR 2.3, p = 0.001). Conclusions: Distal symmetric diabetic neuropathy occurs more in men than in women. It was also found that good blood glucose control and receiving instructions on foot care reduce the risk of neuropathy.
Abstract:
OBJECTIVE: To assess the intraobserver reliability of the information about the history of diagnosis and treatment of hypertension. METHODS: A multidimensional health questionnaire, filled out by the interviewees themselves, was applied twice, with an interval of 2 weeks, in July 1999, to 192 employees of the University of the State of Rio de Janeiro (UERJ), stratified by sex, age, and educational level. The intraobserver reliability of the answers was estimated by the kappa statistic and by the coefficient of intraclass correlation (CICC). RESULTS: The overall kappa (k) statistic was 0.75 (95% CI=0.73-0.77). Reliability was higher among females (k=0.88; 95% CI=0.85-0.91) than among males (k=0.62; 95% CI=0.59-0.65). The reliability was higher among individuals 40 years of age or older (k=0.79; 95% CI=0.73-0.84) than among those aged 18 to 39 years (k=0.52; 95% CI=0.45-0.57). Finally, the kappa statistic was higher among individuals with a university education (k=0.86; 95% CI=0.81-0.91) than among those with a high school education (k=0.61; 95% CI=0.53-0.70) or a middle school education (k=0.68; 95% CI=0.64-0.72). The coefficient of intraclass correlation estimated from the intraobserver agreement regarding age at the time of the diagnosis of hypertension was 0.74. Perfect agreement between the two answers (k=1.00) was observed for the 22 interviewees who reported a prior prescription of antihypertensive medication. CONCLUSION: In the population studied, estimates of the reliability of the history of medical diagnosis of hypertension and its treatment ranged from substantial to almost perfect.
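For reference, the kappa statistic used above measures test-retest agreement corrected for chance agreement. A minimal sketch, with hypothetical yes/no answers from ten respondents interviewed twice:

```python
from collections import Counter

def cohens_kappa(first, second):
    """Cohen's kappa for test-retest (intraobserver) agreement between
    two sets of categorical answers from the same respondents."""
    n = len(first)
    po = sum(a == b for a, b in zip(first, second)) / n   # observed agreement
    c1, c2 = Counter(first), Counter(second)
    # Chance agreement from the marginal category frequencies
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical answers at two interviews, two weeks apart
t1 = ["yes", "yes", "no", "no", "no", "yes", "no", "no", "yes", "no"]
t2 = ["yes", "no",  "no", "no", "no", "yes", "no", "yes", "yes", "no"]
k = cohens_kappa(t1, t2)
```

For these example answers, observed agreement is 0.8, chance agreement is 0.52, and kappa comes out to 7/12 ≈ 0.58 (moderate agreement on the usual scale).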
Abstract:
Introduction: Carbon monoxide (CO) poisoning is one of the most common causes of fatal poisoning. Symptoms of CO poisoning are nonspecific, and documentation of elevated carboxyhemoglobin (HbCO) levels in an arterial blood sample is the only standard way of confirming suspected exposure. The treatment of CO poisoning requires normobaric or hyperbaric oxygen therapy, according to the symptoms and HbCO levels. A new device, the Rad-57 pulse CO-oximeter, allows noninvasive transcutaneous measurement of the blood carboxyhemoglobin level (SpCO) by measuring light-wavelength absorptions. Methods: Prospective cohort study with a sample of patients admitted between October 2008 - March 2009 and October 2009 - March 2010 to the emergency services (ES) of a Swiss regional hospital and a Swiss university hospital (Burn Center). In cases of suspected CO poisoning, three successive noninvasive measurements were performed simultaneously with one arterial blood HbCO test. A control group included patients admitted to the ES for other complaints (cardiac insufficiency, respiratory distress, acute renal failure) that nevertheless required arterial blood testing. Informed consent was obtained from all patients. The primary endpoint was to assess the agreement between the measurements made by the Rad-57 (SpCO) and the blood levels (HbCO). Results: 50 patients were enrolled, among whom 32 were admitted for suspected CO poisoning. Baseline demographic and clinical characteristics of the patients are presented in table 1. The median age was 37.7 years ± 11.8, 56% being male. Median laboratory carboxyhemoglobin levels (HbCO) were 4.25% (95% CI 0.6-28.5) for intoxicated patients and 1.8% (95% CI 1.0-5.3) for control patients. Only five patients presented with HbCO levels >= 15%. The results show relatively fair correlations between the SpCO levels obtained by the Rad-57 and the standard HbCO, without any false-negative results. However, the Rad-57 tends to under-estimate the SpCO value for intoxicated patients with HbCO levels >10% (fig. 1). Conclusion: Noninvasive transcutaneous measurement of the blood carboxyhemoglobin level is easy to use. The correlation seems correct for low to moderate levels (<15%). For higher values, we observe a trend of the Rad-57 to under-estimate the HbCO levels. Apart from this potential limitation and a few cases of false-negative results described in the literature, the Rad-57 may be useful for the initial triage and diagnosis of CO poisoning.
Abstract:
A reliability approach to tunnel support design is presented in this paper. The aim of the work is the incorporation of classical Level II techniques to the current design method based on the study of the ground-support interaction diagram.
Abstract:
In tunnel construction, as in every engineering work, decisions are usually made with incomplete data. Nevertheless, consciously or not, the builder weighs the risks (even if subjectively) so as to be able to offer a cost. The objective of this paper is to recall the existence of a methodology for treating the uncertainties in the data, so that their effect on the output of the computational model used can be seen, and the failure probability or the safety margin of a structure can then be estimated. This scheme makes it possible to include subjective knowledge of the statistical properties of the random variables and, using a numerical model whose degree of complexity is appropriate to the problem at hand, to make rationally based decisions. As will be shown, the method makes it possible to quantify the relative importance of the random variables and, in addition, it can be used, under certain conditions, to solve the inverse problem. It is therefore a method very well suited both to the design and to the control phases of tunnel construction.
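As a one-line example of the kind of second-moment (Level II) calculation this methodology relies on: for a linear limit state g = R - S with independent Gaussian resistance R and load effect S, the reliability index and failure probability follow directly from the first two moments. The numbers below are hypothetical, with arbitrary units.

```python
from statistics import NormalDist

def beta_and_pf(mu_r, sigma_r, mu_s, sigma_s):
    """Level II (second-moment) reliability for the linear limit state
    g = R - S with independent Gaussian resistance R and load effect S."""
    beta = (mu_r - mu_s) / (sigma_r ** 2 + sigma_s ** 2) ** 0.5
    return beta, NormalDist().cdf(-beta)  # Pf = Phi(-beta)

# Hypothetical support capacity vs. ground load
beta, pf = beta_and_pf(mu_r=10.0, sigma_r=1.5, mu_s=6.0, sigma_s=2.0)
```

For these moments, beta = 1.6 and the failure probability is about 5.5%; the same relation, read backwards, is what allows the inverse problem (choosing a design for a target beta) to be posed.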
Abstract:
Pushover methods are used as an everyday tool in engineering practice, and some of them have been included in regulatory codes. Recently, several efforts have been made to look at them from a probabilistic viewpoint. In this paper the authors present a Level 2 approach based on a probabilistic definition of the characteristic points defining the response spectra, as well as a probabilistic definition of the elasto-plastic pushover curve representing the structural behavior. Comparisons with Monte Carlo simulations help to assess the accuracy of the proposed approach.