960 results for Accelerated failure time model
Abstract:
A nested ice flow model was developed for eastern Dronning Maud Land to assist with the dating and interpretation of the EDML deep ice core. The model consists of a high-resolution higher-order ice dynamic flow model nested into a comprehensive 3-D thermomechanical model of the whole Antarctic ice sheet. As the drill site is in a flank position, the calculations specifically take into account the effects of horizontal advection, since deeper ice in the core originated further inland. First, the regional velocity field and ice sheet geometry are obtained from a forward experiment over the last eight glacial cycles. The result is subsequently employed in a Lagrangian backtracing algorithm to provide particle paths back to their time and place of deposition. The procedure directly yields the depth-age distribution, surface conditions at particle origin, and a suite of relevant parameters such as initial annual layer thickness. This paper discusses the method and the main results of the experiment, including the ice core chronology, the non-climatic corrections needed to extract the climatic part of the signal, and the thinning function. The focus is on the upper 89% of the ice core (approx. 170 kyr), as the dating below that is increasingly less robust owing to the unknown value of the geothermal heat flux. It is found that the temperature biases resulting from variations of surface elevation are up to half the magnitude of the climatic changes themselves.
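The backtracing step can be illustrated with a minimal sketch: assuming a steady, purely vertical Nye-type velocity field (a strong simplification of the nested 3-D model described above, with hypothetical values for ice thickness and accumulation rate), a particle at a given depth is stepped backwards in time until it reaches the surface, which yields its age at deposition.

```python
def backtrace_age(z0, w_down, dt=1.0, max_steps=10_000_000):
    """Trace a particle at depth z0 (m) backwards in time through a
    vertical velocity field w_down(z) (m ice/yr, positive downward)
    until it reaches the surface; returns its age (yr) at deposition."""
    z, age = z0, 0.0
    for _ in range(max_steps):
        if z <= 0.0:
            return age
        z -= w_down(z) * dt   # reversing time undoes the submergence
        age += dt
    raise RuntimeError("particle never reached the surface")

# Hypothetical parameters: 2000 m of ice, 5 cm/yr accumulation.
H, a = 2000.0, 0.05
w = lambda z: a * (1.0 - z / H)   # Nye approximation: velocity decays linearly with depth
age_1000m = backtrace_age(1000.0, w)   # analytic value: (H/a)*ln(2), about 27.7 kyr
```

In the real model the velocity field is 3-D and time-dependent, so the same loop also accumulates horizontal displacement, giving the place of deposition along with the age.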
Abstract:
Calcareous nannoplankton assemblages and benthic δ18O isotopes of Pliocene deep-sea sediments of ODP Site 1172 (east of Tasmania) have been studied to improve our knowledge of Southern Ocean paleoceanography. Our study site is located just north of the Subtropical Front (STF), an ideal setting to monitor migrations of the STF during our study period, between 3.45 and 2.45 Ma. The assemblage identified at ODP Site 1172 has been interpreted as characteristic of the transitional-zone water mass located south of the STF, based on: (i) the low abundances (< 1%) of subtropical taxa, (ii) relatively high percentages of Coccolithus pelagicus, a subpolar species, (iii) abundances of 2-10% of Calcidiscus leptoporus, a species that frequently inhabits the zone south of the STF, and (iv) the high abundances of small Noelaerhabdaceae, which at present dominate the zone south of the STF. Across our interval the calcareous nannoplankton manifest glacial-interglacial variability. We have identified cold events, characterized by high abundances of C. pelagicus, which coincide with glacial periods, except during G7. After 3.1 Ma cold events are more frequent, in concordance with global cooling trends. Around 2.75 Ma, the interglacial stage G7 is characterized by anomalously low temperatures, most likely linked to the definitive closure of the Central American Seaway (CAS), an event believed to have had global consequences. A gradual increase of very small Reticulofenestra across our section marks a significant trend in the small Noelaerhabdaceae species group and has been linked to a generally enhanced mixing of the water column, in agreement with previous studies. It is suggested that a rapid decline of small Gephyrocapsa after isotopic stage G7 might be related to the cooling observed at our study site after the closure of the CAS.
Abstract:
The environment of ebb-tidal deltas between barrier island systems is characterized by a complex morphology with ebb- and flood-dominated channels, shoals and swash bars connecting the ebb-tidal delta platform to the adjacent island. These morphological features reveal characteristic surface sediment grain-size distributions and continuously adapt to the prevailing hydrodynamic forcing. The mixed-energy tidal inlet Otzumer Balje, between the East Frisian barrier islands of Langeoog and Spiekeroog in the southern North Sea, has been chosen here as a model study area for identifying the relevant hydrodynamic drivers of morphology and sedimentology. We compare the effect of high-energy, wave-dominated storm conditions with that of mid-term, tide-dominated fair-weather conditions on tidal inlet morphology and sedimentology with a process-based numerical model. A multi-fractional approach with five grain-size fractions between 150 and 450 µm allows for the simulation of the corresponding surface sediment grain-size distributions. Net sediment fluxes for distinct conditions are identified: during storm conditions, bed-load sediment transport is generally directed onshore on the shallower ebb-tidal delta shoals, whereas fine-grained suspended sediment bypasses the tidal inlet via wave-driven currents. During fair weather, sediment transport is mainly confined to the inlet throat and the marginal flood channels. We show how the observed sediment grain-size distribution and the morphological response at mixed-energy tidal inlets result from both less frequent, wave-dominated storm conditions and mid-term, tide-dominated fair-weather conditions.
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety-Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model with the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability of the timing behavior and of resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior, and provides network-protocol independence by defining a network interface and protocol-specific modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations. Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
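Two of the central ideas of this middleware, separating resource allocation from execution into two phases and serializing through a fixed message layout produced ahead of time, can be sketched language-independently. The sketch below is purely illustrative and in Python (the thesis targets Java/RMI); `LAYOUT`, `RemoteRef` and the echo transport are invented names. All resources are reserved at bind time, so the invocation phase reuses a preallocated buffer of known size:

```python
import struct

# A fixed, ahead-of-time message layout: method id, int argument,
# double argument. In the thesis a serialization compiler would
# generate this layout; here a struct format string plays that role.
LAYOUT = struct.Struct(">Hqd")

class RemoteRef:
    """Sketch of a two-phase remote reference: phase 1 (construction)
    allocates every resource; phase 2 (invoke) only packs and sends."""

    def __init__(self, transport):
        self.transport = transport
        self.buf = bytearray(LAYOUT.size)   # allocated once, reused

    def invoke(self, method_id, n, x):
        # packing reuses the preallocated buffer; message size is fixed,
        # so transmission time can be bounded at analysis time
        LAYOUT.pack_into(self.buf, 0, method_id, n, x)
        return self.transport(bytes(self.buf))

# A trivial loopback transport standing in for the network layer.
def echo_transport(payload):
    return LAYOUT.unpack(payload)

ref = RemoteRef(echo_transport)
```

Because the layout is fixed at build time, both the memory footprint and the message size are known before the system runs, which is what makes the communication schedulable.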
Abstract:
In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, reducing the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arclength-type continuation methods [Carrera, 1994], which make it possible to capture the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need for small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, the numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
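A 1-D sketch of the IMPL-EX idea (illustrative only; the softening law, parameter values and equal-time-step extrapolation below are assumptions, not the formulation of the cited works): the internal variable driving damage is extrapolated explicitly from its two previous implicitly updated values, so the stress evaluation within the step uses a frozen, already-known damage value, and the implicit update is deferred to the end of the step.

```python
import math

def damage(r, r0=1.0, A=0.5):
    """Exponential softening law: d = 0 up to the threshold r0,
    then grows asymptotically towards 1 (illustrative choice)."""
    if r <= r0:
        return 0.0
    return 1.0 - (r0 / r) * math.exp(-A * (r - r0) / r0)

def impl_ex_step(eps, r_n, r_nm1, E=1.0):
    """One IMPL-EX step of a 1-D damage model with equal time steps.
    Explicit phase: the extrapolated r fixes the damage used in the
    stress, so the step's tangent stays non-negative. Implicit phase:
    the standard update of r is stored for the next extrapolation."""
    r_tilde = max(r_n, 2.0 * r_n - r_nm1)   # explicit extrapolation
    sigma = (1.0 - damage(r_tilde)) * E * eps
    r_next = max(r_n, abs(eps))             # implicit update (closed-form in 1-D)
    return sigma, r_next

# Ramp loading: damage grows and the stress softens past the peak,
# with the one-step lag in damage typical of IMPL-EX.
r_n = r_nm1 = 1.0
history = []
for k in range(1, 31):
    eps = 0.1 * k
    sigma, r_new = impl_ex_step(eps, r_n, r_nm1)
    history.append(sigma)
    r_nm1, r_n = r_n, r_new
```

The extrapolation introduces the integration error mentioned above, which is why small steps, or a bound on the increment of the internal variable, are needed.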
Abstract:
This paper presents some fundamental properties of independent and-parallelism and extends its applicability by enlarging the class of goals eligible for parallel execution. A simple model of (independent) and-parallel execution is proposed, and issues of correctness and efficiency are discussed in the light of this model. Two conditions, "strict" and "non-strict" independence, are defined and then proved sufficient to ensure the correctness and efficiency of parallel execution: if goals which meet these conditions are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. Also, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD-resolution), while the actual execution time is reduced. Finally, in case of failure of any of the goals, no slowdown will occur. For strict independence the results are shown to hold independently of whether the parallel goals execute in the same environment or in separate environments. In addition, a formal basis is given for the automatic compile-time generation of independent and-parallelism: compile-time conditions to efficiently check goal independence at run-time are proposed and proved sufficient. Also, rules are given for constructing simpler conditions if information regarding the binding context of the goals to be executed in parallel is available to the compiler.
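The strict-independence condition can be illustrated schematically (this is a toy check, not the paper's compile-time algorithm): goals may run in independent and-parallel if, once the current bindings are applied, they share no unbound variables.

```python
def strictly_independent(goal_vars, ground):
    """goal_vars: one set of variables per goal; ground: the variables
    currently bound to ground terms. Goals are strictly independent if
    their remaining free variables are pairwise disjoint."""
    free = [vs - ground for vs in goal_vars]
    return all(free[i].isdisjoint(free[j])
               for i in range(len(free))
               for j in range(i + 1, len(free)))

# p(X, Y) and q(Y, Z) share Y: parallel execution is only safe under
# strict independence once Y is bound to a ground term.
goals = [{"X", "Y"}, {"Y", "Z"}]
```

A run-time check of this kind is what the compile-time conditions mentioned above are designed to simplify or eliminate.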
Abstract:
Specific tests are needed to assess the reliability of high-luminosity AlInGaP LEDs for outdoor applications. In this paper, tests have been carried out to build a model involving three parameters: temperature, humidity and current. This temperature-humidity-current accelerated model is proposed to evaluate the reliability of this type of LED. Degradation and catastrophic failure mechanisms have been analyzed. Finally, we analyze the effect of the series resistance on the degradation of the luminous power.
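A three-parameter model of this kind is commonly written as a Peck/Eyring-style acceleration factor: an Arrhenius term for temperature and power laws for relative humidity and drive current. The sketch below uses that generic textbook form; the functional form and every parameter value (activation energy, exponents) are illustrative assumptions, not the fitted values from the paper.

```python
import math

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def acceleration_factor(T_stress, T_use, rh_stress, rh_use,
                        i_stress, i_use, Ea=0.7, n=2.7, m=1.0):
    """Acceleration factor between stress and use conditions.
    Temperatures in kelvin, RH in %, current in mA; the activation
    energy Ea (eV) and exponents n, m are illustrative placeholders."""
    return ((rh_stress / rh_use) ** n
            * (i_stress / i_use) ** m
            * math.exp((Ea / K_B) * (1.0 / T_use - 1.0 / T_stress)))

# 85 degC / 85 % RH / double drive current vs. 40 degC / 60 % RH use:
af = acceleration_factor(358.15, 313.15, 85.0, 60.0, 40.0, 20.0)
```

Fitting Ea, n and m requires test cells at several combinations of the three stresses, which is what motivates the specific test plan proposed in the paper.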
Abstract:
The purpose of this work is twofold: first, to develop a process to automatically create parametric models of the aorta that can adapt to any possible intraoperative deformation of the vessel; second, to provide the tools needed to perform this deformation in real time by means of a non-rigid registration method. This dynamically deformable model will later be used in a VR-based surgery guidance system for aortic catheterization procedures, showing the vessel changes in real time.
Abstract:
A quantitative temperature-accelerated life test on sixty GaInP/GaInAs/Ge triple-junction commercial concentrator solar cells is being carried out. The final objective of this experiment is to evaluate the reliability, warranty period and failure mechanisms of high-concentration solar cells in a moderate period of time. The degradation is accelerated by subjecting the solar cells to temperatures markedly higher than the nominal working temperature under a concentrator. Three experiments at three different temperatures are necessary in order to obtain the acceleration factor, which relates the time at the stress level to the time at nominal working conditions. However, up to now only the test at the highest temperature has finished. Therefore, we cannot yet provide complete reliability information, but we have analyzed the life data and the failure mode of the solar cells inside the climatic chamber at the highest temperature. All the failures have been catastrophic: the solar cells have turned into short circuits. We have fitted the failure distribution to a two-parameter Weibull function; the failures are of the wear-out type. We have observed that the busbar and the surrounding fingers are completely deteriorated.
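The two-parameter Weibull fit can be sketched with the classic maximum-likelihood fixed-point iteration for complete (uncensored) failure data. This is a generic estimator, not the paper's analysis, and the synthetic failure times below are invented; a fitted shape parameter beta > 1 is what identifies the failures as wear-out type.

```python
import math

def weibull_mle(times, iters=500):
    """MLE of the two-parameter Weibull (shape beta, scale eta) for
    complete failure data, via a damped fixed-point iteration on beta."""
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / len(logs)
    beta = 1.0
    for _ in range(iters):
        tb = [t ** beta for t in times]
        # MLE condition: 1/beta = weighted mean of logs - plain mean
        g = 1.0 / (sum(x * l for x, l in zip(tb, logs)) / sum(tb) - mean_log)
        beta = 0.5 * (beta + g)             # damping for robustness
    eta = (sum(t ** beta for t in times) / len(times)) ** (1.0 / beta)
    return beta, eta

# Synthetic wear-out data: quantiles of a Weibull(beta=3, eta=100 h).
sample = [100.0 * (-math.log(1.0 - (i + 0.5) / 20.0)) ** (1.0 / 3.0)
          for i in range(20)]
beta_hat, eta_hat = weibull_mle(sample)     # beta_hat > 1 => wear-out
```

With the acceleration factor from the remaining two temperature tests, the scale parameter eta at stress conditions would then be translated to nominal working conditions.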
Abstract:
Steam Generator Tube Rupture (SGTR) sequences in Pressurized Water Reactors are known to be among the most demanding transients for the operating crew. SGTR sequences are a special kind of transient, as they can lead to radiological releases without core damage or containment failure, since they constitute a direct path from the reactor coolant system to the environment. The first methodology used to perform the Deterministic Safety Analysis (DSA) of an SGTR did not credit operator action for the first 30 min of the transient, assuming that the operating crew was able to stop the primary-to-secondary leakage within that period of time. However, the real SGTR accidents that have occurred in the USA and elsewhere demonstrated that operators usually take more than 30 min to stop the leakage in actual sequences. Several methodologies were proposed to overcome that fact, considering operator actions from the beginning of the transient, as is done in Probabilistic Safety Analysis. This paper presents the results of comparing different assumptions regarding the single failure criterion and the operator actions taken from the most common methodologies included in the different Deterministic Safety Analyses. A single failure assumption that has not been analysed previously in the literature is also proposed and analysed. The comparison is done with a TRACE model of a Westinghouse three-loop PWR (Almaraz NPP) with best-estimate assumptions but including deterministic hypotheses such as the single failure criterion or loss of offsite power. The behaviour of the reactor is quite diverse depending on the assumptions made regarding the operator actions. On the other hand, although the hypotheses include strong conservatisms, such as the single failure criterion, all the results are quite far from the regulatory limits.
In addition, some improvements to the Emergency Operating Procedures to minimize the offsite release from the damaged SG in the event of an SGTR are outlined, taking into account the offsite dose sensitivity results.