12 results for Reliability level
at Universidad Politécnica de Madrid
Abstract:
There are many distributed applications that require a reliable multicast service, including: distributed databases, distributed operating systems, distributed interactive simulation systems and applications for the distribution of software, publications or news. Although the application domain of such distributed systems was originally confined to a single subnetwork (for example, a Local Area Network), it later became necessary to extend their applicability to internetworks. The traditional approach to the reliable multicast problem in internetworks has been based mainly on the following two points: (1) providing many service guarantees in one and the same protocol (for example, reliability, atomicity and ordering), and in some cases different levels of each guarantee, without taking into account that many multicast applications that require reliability do not need other guarantees; and (2) extending solutions adopted in the unicast environment to the multicast environment without taking their distinctive characteristics into account. Hence, the attempted solutions to the multicast reliability problem were end-to-end protocols (transport protocols) using error recovery schemes that are centralized (retransmissions are made from a single point, normally the source) and global (the requested packets are retransmitted to the whole group). In general, these approaches have resulted in protocols that are inefficient in execution time, have scaling problems, do not make optimum use of network resources and are not suitable for delay-sensitive applications. This thesis investigates the multicast reliability problem in internetworks operating in datagram mode and presents a new way of approaching the problem: it is better to solve the multicast reliability problem at network level and to separate reliability from other service guarantees, which can be supplied by a higher-level protocol or by the application itself. A reliable multicast protocol that operates at network level (called RMNP) has been designed on the basis of this new approach. The most representative characteristics of the RMNP are as follows: (1) it takes a transmitter-oriented approach, which provides a very high reliability level; (2) it uses an error recovery scheme that is distributed (retransmissions are made from certain intermediate routers that are always closer to the members than the source itself) and of restricted scope (the scope of the retransmissions is confined to a given number of members); this scheme makes it possible to optimize the mean distribution delay and reduce the overhead caused by retransmissions; (3) certain routers include control packet aggregation and filtering functions that prevent implosion problems and reduce the traffic flowing towards the source. Simulation tests have been performed in order to evaluate the behaviour of the protocol. The main conclusions are that the RMNP scales correctly with group size, makes optimum use of network resources and is suitable for delay-sensitive applications.
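As a rough illustration of point (3), the sketch below shows how an intermediate router might absorb duplicate retransmission requests (NACKs) and forward only one towards the source, while remembering which downstream links asked, so the retransmission can be resent with restricted scope. Class and method names (Nack, AggregatingRouter) are illustrative assumptions, not taken from the RMNP specification.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Nack:
    """Retransmission request for one packet of one multicast session."""
    session_id: int
    seq_num: int

class AggregatingRouter:
    """Hypothetical intermediate router that aggregates and filters NACKs.

    Only the first NACK for a given (session, sequence) pair is forwarded
    upstream; later duplicates are absorbed, which limits implosion at the
    source. Requesting links are remembered so the retransmission is sent
    only to the members that asked for it (restricted scope).
    """
    def __init__(self, forward_upstream):
        self.forward_upstream = forward_upstream   # callable towards the source
        self.pending = defaultdict(set)            # Nack -> set of requesting links

    def handle_nack(self, nack: Nack, downstream_link: str) -> None:
        first_request = not self.pending[nack]
        self.pending[nack].add(downstream_link)
        if first_request:
            self.forward_upstream(nack)            # one aggregated NACK upstream

    def handle_retransmission(self, nack: Nack, packet: bytes, send) -> None:
        for link in self.pending.pop(nack, ()):    # restricted-scope resend
            send(link, packet)
```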
Abstract:
A reliability approach to tunnel support design is presented in this paper. The aim of the work is to incorporate classical Level II techniques into the current design method based on the study of the ground-support interaction diagram.
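For context, Level II (second-moment) methods characterize each variable by its mean and standard deviation and summarize safety in a reliability index. A minimal sketch for a linear margin M = R - S with independent normal resistance R and load effect S follows; this simplified margin and the numbers are illustrative assumptions, not the paper's actual ground-support model.

```python
from math import erf, sqrt

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Cornell reliability index for M = R - S with independent normals."""
    return (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)

def failure_probability(beta):
    """P(M < 0) = Phi(-beta), via the standard normal CDF."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

# Illustrative numbers only: support capacity vs ground load (kPa).
beta = reliability_index(mu_r=1200.0, sigma_r=150.0, mu_s=800.0, sigma_s=120.0)
print(beta, failure_probability(beta))
```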
Abstract:
In tunnel construction, as in every engineering work, decisions must usually be made with incomplete data. Nevertheless, consciously or not, the builder weighs the risks (even if this is done subjectively) so that a cost can be offered. The objective of this paper is to recall the existence of a methodology to treat the uncertainties in the data, so that it is possible to see their effect on the output of the computational model used and then to estimate the failure probability or the safety margin of a structure. In this scheme it is possible to include subjective knowledge of the statistical properties of the random variables and, using a numerical model with a degree of complexity appropriate to the problem at hand, to make rationally based decisions. As will be shown, with this method it is possible to quantify the relative importance of the random variables and, in addition, it can be used, under certain conditions, to solve the inverse problem. It is therefore a method very well suited both to the design phase and to the control phase of tunnel construction.
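A common way to propagate such data uncertainty through a computational model is plain Monte Carlo sampling. The sketch below estimates a failure probability and ranks the inputs by a crude sensitivity measure; the limit-state function and the distributions are hypothetical stand-ins for whatever model and subjective statistics the problem at hand provides.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative random inputs (subjectively assessed means/spreads go here).
cohesion = rng.normal(30.0, 6.0, N)       # kPa
friction = rng.normal(0.6, 0.08, N)       # tan(phi)
load     = rng.normal(50.0, 10.0, N)      # kPa

# Hypothetical limit state: capacity minus demand (failure when g < 0).
g = cohesion + 40.0 * friction - load

p_f = np.mean(g < 0.0)
print("estimated failure probability:", p_f)

# Crude importance ranking: correlation of each input with the margin.
for name, x in [("cohesion", cohesion), ("friction", friction), ("load", load)]:
    print(name, np.corrcoef(x, g)[0, 1])
```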
Abstract:
Pushover methods are used as an everyday tool in engineering practice, and some of them have been included in regulatory codes. Recently, several efforts have been made to look at them from a probabilistic viewpoint. In this paper the authors present a Level 2 approach based on a probabilistic definition of the characteristic points defining the response spectra, as well as a probabilistic definition of the elasto-plastic pushover curve representing the structural behavior. Comparisons with Monte Carlo simulations help to assess the accuracy of the proposed approach.
Abstract:
Laser shock processing (LSP) is being increasingly applied as an effective technology for the improvement of the surface properties of metallic materials in different types of components, as a means of enhancing their corrosion and fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic alloy pieces, allowing an improved mechanical behaviour, explicitly the life improvement of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results accomplished by the authors in the line of practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Specifically, follow-on experimental results on the residual stress profiles and the associated surface property modifications successfully reached in typical materials (especially Al and Ti alloys) under different LSP irradiation conditions are presented, along with a correlated practical analysis of the protective character of the residual stress profiles obtained under different irradiation strategies and an evaluation of the corresponding induced properties, such as specific volume reduction at the surface, microhardness and wear resistance. Additional remarks on the advantages of the LSP technique over the traditional “shot peening” technique as regards the depth of the induced compressive residual stress fields are also made throughout the paper.
Abstract:
ATM, SDH or satellite were used in the last century as the contribution network of broadcasters. However, the attractive price of IP networks has been changing the infrastructure of these networks over the last decade. Nowadays, IP networks are widely used, but their characteristics do not offer the level of performance required to carry high-quality video under certain circumstances. Data transmission is always subject to errors on the line. In the case of streaming, correction is attempted at the destination, while for file transfer, retransmissions are conducted and a reliable copy of the file is obtained. In the latter case, reception time is penalized because of the low priority that this type of traffic usually has on the networks. While in streaming the image quality is adapted to the line speed, and line errors result in a decrease of quality at the destination, in file copying the difference between coding speed and line speed, together with transmission errors, is reflected in an increase of transmission time. The way news or audiovisual programs are transferred from a remote office to the production centre depends on the time window and the type of line available; in many cases, it must be done in real time (streaming), with the resulting image degradation. The main purpose of this work is workflow optimization and image quality maximization; for that reason, a transmission model for multimedia files adapted to JPEG2000 is described, based on combining the advantages of file transmission with those of streaming transmission while putting aside the disadvantages of both models. The method is based on two patents and consists of the safe transfer of the headers and of the data considered vital for reproduction. The rest of the data is sent by streaming, making it possible to carry out recovery operations and error concealment. Using this model, image quality is maximized for the available time window. In this paper, we first give a brief overview of broadcasters' requirements and the solutions offered by IP networks. We then focus on a different solution for video file transfer. We take the example of a broadcast centre with mobile units (unidirectional video link) and regional headends (bidirectional link), and we present a video file transfer method that satisfies the broadcasters' requirements.
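The hybrid idea can be sketched schematically: ship the data considered vital for reproduction over a reliable channel, and stream the remainder with error concealment at the receiver. The function names, the fixed prefix length and the byte-level split below are illustrative assumptions only, not the patented, JPEG2000-specific method described in the paper.

```python
def split_for_transfer(data: bytes, vital_len: int):
    """Split a media file into a vital prefix and a streamable remainder."""
    return data[:vital_len], data[vital_len:]

def send_reliable(chunk: bytes) -> bytes:
    """Placeholder for a TCP-like transfer with retransmissions (always complete)."""
    return chunk

def send_streaming(chunk: bytes, loss_mask) -> bytes:
    """Placeholder for lossy streaming; lost bytes are concealed (here: zeroed)."""
    return bytes(b if ok else 0 for b, ok in zip(chunk, loss_mask))

file_data = bytes(range(256)) * 4          # stand-in for a JPEG2000 codestream
vital, rest = split_for_transfer(file_data, vital_len=64)
received = send_reliable(vital) + send_streaming(rest, [True] * len(rest))
assert received[:64] == file_data[:64]     # the vital part is always intact
```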
Abstract:
Laser shock processing (LSP) is increasingly applied as an effective technology for the improvement of the mechanical properties of metallic materials in different types of components, as a means of enhancing their fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic components, allowing an improved mechanical behaviour, explicitly the life improvement of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results accomplished by the authors in the line of practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Specifically, experimental results on the residual stress profiles and the associated mechanical property modifications successfully reached in typical materials under different LSP irradiation conditions are presented. In this case, the specific behavior of AISI 316L, a material widely used in high-reliability components (especially in nuclear and biomedical applications), is analyzed, and the effect of possible “in-service” thermal conditions on the relaxation of the LSP effects is specifically characterized.
Abstract:
Laser shock processing (LSP) is being increasingly applied as an effective technology for the improvement of the mechanical and surface properties of metallic materials in different types of components, as a means of enhancing their corrosion and fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic alloy pieces, allowing an improved mechanical behaviour, explicitly the life improvement of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results accomplished by the authors in the line of practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Specifically, follow-on experimental results on the residual stress profiles and the associated surface property modifications successfully reached in typical materials (especially Al and Ti alloys characteristic of high-reliability components in the aerospace, nuclear and biomedical sectors) under different LSP irradiation conditions are presented, along with a correlated practical analysis of the protective character of the residual stress profiles obtained under different irradiation strategies. Additional remarks on the advantages of the LSP technique over the traditional “shot peening” technique as regards the depth of the induced compressive residual stress fields are also made throughout the paper.
Abstract:
The aim of the study was to evaluate the inter-operator reliability of the OPTA Client System, which is used by the OPTA Sportsdata Company to collect live football match statistics. Two groups of experienced operators were required to analyze a Spanish league match independently. Results showed that team events coded by independent operators reached a very good agreement (kappa values were 0.92 and 0.94), and the average difference in event time was 0.06±0.04 s. The reliability of goalkeeper actions was also high (kappa values were 0.92 and 0.86). The high intra-class correlation coefficients (ranging from 0.88 to 1.00) and low standardized typical errors (varying from 0.00 to 0.37) of the different match actions and indicators of individual outfield players showed a high level of inter-operator reliability as well. These results suggest that the OPTA Client System can reliably be used by well-trained operators to collect live football match statistics.
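For reference, the agreement statistic reported here (Cohen's kappa) corrects the observed agreement between two coders for the agreement expected by chance. The sketch below is a minimal implementation with made-up event codes, not the study's actual analysis pipeline.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels of the same events."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Illustrative event codes from two independent operators.
a = ["pass", "shot", "pass", "tackle", "pass", "shot"]
b = ["pass", "shot", "pass", "pass",   "pass", "shot"]
print(cohens_kappa(a, b))  # 0.7 for this toy example
```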
Abstract:
Laser shock processing (LSP) is increasingly applied as an effective technology for the improvement of the mechanical properties of metallic materials in different types of components, as a means of enhancing their fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic components, allowing an improved mechanical behaviour, explicitly the life improvement of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results accomplished by the authors in the line of practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Specifically, experimental results on the residual stress profiles and the associated mechanical property modifications successfully reached in typical materials under different LSP irradiation conditions are presented. In this case, the specific behavior of AISI 316L, a material widely used in high-reliability components (especially in nuclear and biomedical applications), is analyzed, and the effect of possible “in-service” thermal conditions on the relaxation of the LSP effects is specifically characterized.
Abstract:
Due to the huge increase in digital data volumes in recent years, a new parallel computing paradigm has arisen to process big data in an efficient way. Many of these systems, called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of these systems is the idea of sending the computation to where the data resides, trying to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, such scenarios are not realistic: most of the environments where these frameworks are deployed are characterized by the presence of failures. Consequently, these frameworks incorporate fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and the providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, through methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability.
In order to achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and, finally, (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been evaluated on the Hadoop YARN architecture, which is nowadays the reference framework in the data-intensive computing systems community. The thesis demonstrates how all of our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
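As a rough illustration of contribution (i), failure detectors in such frameworks are commonly built on heartbeats and timeouts. The sketch below shows a minimal timeout-based detector; the class, method names and parameters are assumptions for illustration, not the formalization proposed in the thesis.

```python
import time

class HeartbeatFailureDetector:
    """Minimal timeout-based failure detector (eventually-perfect style).

    A node is suspected when no heartbeat has arrived within `timeout`
    seconds; a late heartbeat lifts the suspicion. Illustrative only.
    """
    def __init__(self, timeout: float = 10.0):
        self.timeout = timeout
        self.last_heartbeat: dict[str, float] = {}

    def heartbeat(self, node_id: str) -> None:
        self.last_heartbeat[node_id] = time.monotonic()

    def suspected(self, node_id: str) -> bool:
        last = self.last_heartbeat.get(node_id)
        return last is None or time.monotonic() - last > self.timeout

# Usage: a monitor would call heartbeat() on every received message and
# periodically check suspected() before (re)allocating containers.
fd = HeartbeatFailureDetector(timeout=5.0)
fd.heartbeat("worker-1")
print(fd.suspected("worker-1"), fd.suspected("worker-2"))  # False True
```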
Abstract:
Buses are often perceived as a slow, low-comfort and low-reliability transport system, hence their negative and poor image. In the framework of the 3iBS project (2012), several examples of innovative and/or effective solutions regarding the Level of Service (LoS) were analysed, aiming to provide operators, practitioners and policy makers with a set of Good Practice Guidelines to strengthen the competitiveness of the bus in the urban environment. The identification of the key indicators regarding vehicles, infrastructure and operation was made possible by the analysis of a set of case studies, among them Barcelona (Spain), Cagliari (Italy), London (United Kingdom), and Paris and Nantes (France). A cross-comparison between the case studies was carried out to contrast the level of achievement of the different criteria considered. The information provided on Regulatory, Financial and Technical issues allows the identification of a number of specific factors influencing the implementation of a high-quality transport scheme, and sets the basis for the elaboration of a set of Guidelines for the implementation of an intelligent, innovative and integrated bus system, including the main barriers to be tackled.