958 results for failure time model


Relevance:

80.00%

Publisher:

Abstract:

A new methodology for characterizing active faults as independent seismogenic sources, in combination with seismogenic area sources, is developed in this work for inclusion in Poissonian probabilistic seismic hazard assessments. The methodology is based on distributing the seismic moment rate recorded in a region among the potentially active seismic sources it contains (independently modeled active faults and an area-source model), with special emphasis on regions of moderate seismicity and slow-moving faults. An application of the methodology is carried out in southeastern Spain, incorporating 106 independent seismic sources into the computations: 95 of fault type (catalogued as active faults in the Quaternary Active Fault Database, QAFI) and 11 of area-source type. The seismic hazard is also estimated with the classical area-source method, and the results of the two approaches are compared, analyzing the influence of including faults as independent sources on the hazard estimates. Finally, an application of the proposed methodology with a non-Poissonian time model is presented. This application focuses on the Carboneras fault and shows the repercussion that this change of time model can have on the final hazard estimates.
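The moment-rate budgeting that underpins this kind of partition can be illustrated with the standard relation between a fault's geometry, slip rate, and seismic moment rate (Brune's formula, Ṁ₀ = μ·A·ṡ). The sketch below is only a schematic of the idea: the fault data, the regional budget, and the simple subtraction used for the background area source are hypothetical, not the thesis' actual partition scheme.

```python
# Illustrative sketch (not the thesis' exact scheme): each fault's seismic
# moment rate follows Brune's relation, Mdot0 = mu * A * s, and the regional
# budget is split between the faults and the background area source.
MU = 3.0e10  # shear modulus of the crust, Pa (typical value)

def moment_rate(area_km2: float, slip_rate_mm_yr: float) -> float:
    """Seismic moment rate in N*m/yr from fault area and slip rate."""
    area_m2 = area_km2 * 1e6
    slip_m_yr = slip_rate_mm_yr * 1e-3
    return MU * area_m2 * slip_m_yr

# Hypothetical faults: (name, area in km^2, slip rate in mm/yr).
faults = [("Fault A", 350.0, 0.1), ("Fault B", 120.0, 0.05)]
regional_rate = 1.5e15  # total moment rate recorded in the region, N*m/yr (assumed)

fault_rates = {name: moment_rate(a, s) for name, a, s in faults}
background = regional_rate - sum(fault_rates.values())  # left to the area source
for name, rate in fault_rates.items():
    print(f"{name}: {rate:.2e} N*m/yr ({100 * rate / regional_rate:.1f}% of budget)")
print(f"Area source (background): {background:.2e} N*m/yr")
```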

Relevance:

80.00%

Publisher:

Abstract:

We explore the role of business services in knowledge accumulation and growth, and the determinants of knowledge diffusion, including the role of distance. A continuous-time model is estimated for several European countries, Japan, and the US. Policy simulations illustrate the benefits for EU growth of the deepening of the single market, the reduction of regulatory barriers, and the accumulation of technology and human capital. Our results support the basic insights of the Lisbon Agenda: economic growth in Europe is enhanced to the extent that trade in services increases, technology accumulation and diffusion increase, regulation becomes both less intensive and more uniform across countries, and human capital accumulation increases in all countries.

Relevance:

80.00%

Publisher:

Abstract:

Measurements of Fe(II) and H2O2 were carried out in the Atlantic sector of the Southern Ocean during EisenEx, an iron enrichment experiment. Iron was added on three separate occasions, approximately every 8 days, as a ferrous sulfate (FeSO4) solution. Vertical profiles of Fe(II) showed maxima consistent with the plume of the iron infusion, while H2O2 profiles revealed corresponding minima, reflecting the oxidation of Fe(II) by H2O2. Detectable Fe(II) concentrations persisted for up to 8 days after an iron infusion. H2O2 concentrations increased at the depth of the chlorophyll maximum when iron concentrations returned to pre-infusion levels (<80 pM), possibly due to biological production related to iron reductase activity. In this work, Fe(II) and dissolved iron were themselves used as tracers for subsequent iron infusions when no further SF6 was added. EisenEx was subject to periods of weak and strong mixing. Slow mixing after the second infusion allowed significant concentrations of Fe(II) and Fe to persist for several days. During this time, dissolved and total iron in the infusion plume behaved almost conservatively, as the plume was trapped between a relict mixed layer and a new rain-induced mixed layer. Using dissolved iron, a vertical diffusion coefficient of Kz = 6.7±0.7 cm²/s was obtained for this 2-day period. During a subsequent surface survey of the iron-enriched patch, elevated levels of Fe(II) were found in surface waters, presumably from Fe(II) dissolved in the rainwater falling at the time. Model results suggest that the reaction between uncomplexed Fe(III) and superoxide (O2−) was a significant source of Fe(II) during EisenEx and helped to maintain high levels of Fe(II) in the water column. This phenomenon may occur in iron enrichment experiments when two conditions are met: (i) when Fe is added to a system already saturated with regard to organic complexation, and (ii) when mixing processes are slow, reducing the dispersion of iron into under-saturated waters.
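A vertical diffusion coefficient of this kind can be estimated from how fast a tracer plume spreads vertically: for a Gaussian plume, the depth variance grows as σ²(t) = σ₀² + 2·Kz·t, so Kz ≈ Δσ²/(2Δt). The sketch below illustrates that estimator on synthetic profiles; it is not the authors' actual calculation, and all profile values are hypothetical.

```python
# Minimal sketch of estimating a vertical diffusivity from plume spreading,
# assuming a Gaussian tracer distribution whose variance grows as
# sigma^2(t) = sigma0^2 + 2*Kz*t. Profile values below are hypothetical.
import numpy as np

def plume_variance(depth_m: np.ndarray, conc: np.ndarray) -> float:
    """Concentration-weighted variance (m^2) of a vertical tracer profile."""
    mean = np.average(depth_m, weights=conc)
    return np.average((depth_m - mean) ** 2, weights=conc)

depth = np.arange(0.0, 100.0, 5.0)                       # m
day0 = np.exp(-((depth - 40.0) ** 2) / (2 * 8.0 ** 2))   # initial plume
day2 = np.exp(-((depth - 40.0) ** 2) / (2 * 12.0 ** 2))  # plume two days later

dt = 2 * 86400.0  # s
kz_m2_s = (plume_variance(depth, day2) - plume_variance(depth, day0)) / (2 * dt)
print(f"Kz ≈ {kz_m2_s * 1e4:.1f} cm^2/s")  # 1 m^2/s = 1e4 cm^2/s
```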

Relevance:

80.00%

Publisher:

Abstract:

The present time.--Model prison.--Downing Street.--The new Downing Street.--Stump-orator.--Parliaments.--Hudson's statue.--Jesuitism

Relevance:

80.00%

Publisher:

Abstract:

The application of a neural network algorithm for increasing the accuracy of navigation systems is shown. In navigation systems where a pair of sensors is used in the same device in different positions and disturbances act equally on both sensors, a trained neural network can improve the accuracy of the system. The neural algorithm is used to determine the interconnection between the sensor errors in the two channels, avoiding the unobservability of the navigation system. Representing the thermal error of two-component navigation sensors by a time model whose coefficients depend only on the parameters of the device and its orientation relative to the disturbance vector makes it possible to predict thermal error changes by measuring the current temperature, once the model parameters have been identified for the given position. These properties of the thermal model are used for training the neural network and compensating the errors of the navigation system in non-stationary thermal fields.
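The core idea, learning the interconnection between the two error channels and using it for compensation, can be sketched with a toy calibration run. The paper trains a neural network; here a linear least-squares fit stands in for it, and all sensor data are synthetic.

```python
# Minimal sketch: learn the relation between the thermal errors of two sensor
# channels from a calibration run, then use one channel to predict (and
# compensate) the error of the other. A least-squares fit stands in for the
# paper's neural network; the drift model and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
temp = np.linspace(20.0, 60.0, 200)                            # deg C, heating run
err_a = 0.02 * (temp - 20.0) + rng.normal(0, 0.01, temp.size)  # channel A drift
err_b = 0.55 * err_a + 0.005 + rng.normal(0, 0.01, temp.size)  # coupled channel B

# Fit err_b ≈ w * err_a + b (the "interconnection" between the channels).
X = np.column_stack([err_a, np.ones_like(err_a)])
(w, b), *_ = np.linalg.lstsq(X, err_b, rcond=None)

# Online compensation: subtract the predicted error from channel B.
residual = err_b - (w * err_a + b)
print(f"w={w:.3f}, b={b:.4f}, RMS before={err_b.std():.4f}, after={residual.std():.4f}")
```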

Relevance:

80.00%

Publisher:

Abstract:

Climatic changes cause alterations in the circulation patterns of the world oceans. The highly saline Mediterranean Outflow Water (MOW), formed within the Mediterranean Sea, crosses the Strait of Gibraltar westward and then turns north-westward, hugging the Iberian slope at 600-1500 m water depth. The circulation pattern and current speed of the MOW are strongly influenced by climatically induced variations and thus control sedimentation processes along the southern and western Iberian continental slope. The sedimentation characteristics of the investigated area are therefore suitable for reconstructing temporal hydrodynamic changes of the MOW. Detailed investigations of the silt-sized grain distribution, physical properties, and hydroacoustic data were performed to recalculate paleo-current velocities and to understand the sedimentation history in the Gulf of Cadiz and on the Portuguese continental slope. A time model based on δ18O data and 14C dating of planktic foraminifera allowed the stratigraphic classification of the core material and thus the dating of the current-induced sediment layers recording the variations in paleo-current intensity. The evaluation and interpretation of the gathered data sets enabled us to reconstruct lateral and temporal sedimentation patterns of the MOW for the Holocene and the late Pleistocene, back to the Last Glacial Maximum (LGM).
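The "time model" here is an age-depth relation: dated horizons anchor the core, and intermediate depths are dated by interpolation. A minimal sketch of the simplest (linear) form of such a model, with hypothetical tie points:

```python
# Sketch of a simple age-depth ("time") model: sediment ages at dated horizons
# (e.g., from 14C on planktic foraminifera) are interpolated linearly to date
# every core depth. The tie points below are hypothetical.
import numpy as np

tie_depth_cm = np.array([0.0, 55.0, 120.0, 210.0])  # dated horizons
tie_age_ka = np.array([0.0, 4.2, 11.7, 21.0])       # calibrated ages, ka BP

def age_at(depth_cm: float) -> float:
    """Linear interpolation between dated tie points."""
    return float(np.interp(depth_cm, tie_depth_cm, tie_age_ka))

for d in (30.0, 100.0, 180.0):
    print(f"depth {d:5.1f} cm -> age {age_at(d):4.1f} ka BP")
```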

Relevance:

80.00%

Publisher:

Abstract:

Motivated by new and innovative rental business models, this paper develops a novel discrete-time model of a rental operation with random loss of inventory due to customer use. The inventory level is chosen before the start of a finite rental season, and customers not immediately served are lost. Our analysis framework uses stochastic comparisons of sample paths to derive structural results that hold in good generality for demands, rental durations, and rental unit lifetimes. Considering different "recirculation" rules, i.e., which rental unit to choose to meet each demand, we prove the concavity of the expected profit function and identify the optimal recirculation rule. A numerical study clarifies when considering rental unit loss and recirculation rules matters most for the inventory decision: accounting for rental unit loss can increase the expected profit by 7% for a single season and becomes even more important as the time horizon lengthens. We also observe that the optimal inventory level in response to increasing loss probability is non-monotonic. Finally, we show that choosing the optimal recirculation rule over another simple policy allows more rental units to be profitably added, and the profit-maximizing service level increases by up to 6 percentage points.
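The flavor of such a model can be conveyed by a small Monte Carlo sketch that compares two simple recirculation rules. The binomial demand, fixed rental length, and wear-dependent loss probability below are illustrative assumptions, not the paper's model, and all parameters are hypothetical.

```python
# Monte Carlo sketch of a finite rental season with random loss of units,
# comparing two recirculation rules: serve each demand with the least-used
# idle unit ("fresh") or the most-used one ("worn"). Assumed, simplified
# dynamics: binomial demand, fixed rental length, wear-dependent loss.
import random

def run_season(n_units, periods, p_demand, rent_len, rule, rng):
    idle = [0] * n_units               # use counts of idle units
    busy = []                          # (return_period, use_count) pairs
    served = 0
    for t in range(periods):
        # Process returns; loss probability grows with accumulated use.
        back = [u for (tr, u) in busy
                if tr == t and rng.random() > min(0.9, 0.02 + 0.02 * u)]
        busy = [(tr, u) for (tr, u) in busy if tr != t]
        idle.extend(back)
        # Demands this period; unmet demand is lost.
        demand = sum(rng.random() < p_demand for _ in range(8))
        for _ in range(demand):
            if not idle:
                break
            idle.sort()
            u = idle.pop(0 if rule == "fresh" else -1)
            busy.append((t + rent_len, u + 1))
            served += 1
    return served

rng = random.Random(42)
for rule in ("fresh", "worn"):
    mean = sum(run_season(12, 60, 0.4, 3, rule, rng) for _ in range(500)) / 500
    print(f"{rule}-first: {mean:.1f} rentals served per season on average")
```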

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents quantitative studies of T cell and dendritic cell (DC) behaviour in mouse lymph nodes (LNs) in the naive state and following immunisation. These processes are of importance and interest in basic immunology, and better understanding could improve both diagnostic capacity and therapeutic manipulations, potentially helping to produce more effective vaccines or to develop treatments for autoimmune diseases. The problem is also interesting conceptually, as it is relevant to other fields where the 3D movement of objects is tracked with a discrete scanning interval. A general immunology introduction is presented in chapter 1. In chapter 2, I apply quantitative methods to multi-photon imaging data to measure how T cells and DCs are spatially arranged in LNs. This has been studied before, to describe differences between the naive and immunised states and as an indicator of the magnitude of the immune response in LNs, but previous analyses have been largely descriptive. The quantitative analysis shows that some of the previous conclusions may have been premature. In chapter 3, I use Bayesian state-space models to test hypotheses about the mode of T cell search for DCs. A two-state mode of movement, in which T cells are classified as either interacting with a DC or freely migrating, is supported over a model in which T cells would home in on DCs at a distance, for example through the action of chemokines. In chapter 4, I study whether T cell migration is linked to the geometric structure of the fibroblast reticular cell (FRC) network. I find support for the hypothesis that the movement is constrained to the FRC network over an alternative 'random walk with persistence time' model in which cells would move randomly, with a short-term persistence driven by a hypothetical T cell intrinsic 'clock'. I also present unexpected results on the FRC network geometry. Finally, a quantitative method is presented for addressing some measurement biases inherent to multi-photon imaging. In all three chapters, novel findings are made, and the methods developed have the potential for further use on important problems in the field. In chapter 5, I present a summary and synthesis of the results from chapters 3-4 and a more speculative discussion of these results and potential future directions.
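The two-state idea can be illustrated very crudely: label each step of a tracked cell as "interacting" (slow, confined) or "migrating" (fast) by thresholding its speed. The thesis uses Bayesian state-space models rather than a threshold; this sketch only conveys the classification, and the track data and threshold are synthetic assumptions.

```python
# Crude stand-in for the two-state classification of T cell tracks: label
# each step "interacting" or "migrating" by speed thresholding. The thesis
# fits Bayesian state-space models instead; data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
dt = 20.0  # s between frames (a typical two-photon scan interval)
# Synthetic 3D track: 30 slow steps (arrest on a DC) then 30 fast steps.
steps = np.vstack([rng.normal(0, 0.5, (30, 3)), rng.normal(0, 3.0, (30, 3))])
track = np.cumsum(steps, axis=0)  # positions in micrometres

speeds = np.linalg.norm(np.diff(track, axis=0), axis=1) / dt * 60  # um/min
state = np.where(speeds < 4.0, "interacting", "migrating")  # threshold assumed
print(f"fraction interacting: {(state == 'interacting').mean():.2f}")
```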

Relevance:

80.00%

Publisher:

Abstract:

Nowadays, organizations tend to develop themselves in order to increase their efficiency and effectiveness. In this context, this study proposes a Quality Costs (CQ) model for the maintenance and sustainment of Portuguese Air Force (FA) weapon systems, contributing to the continuous improvement of its Airworthiness and Quality Management System (SGQA). The study evaluates the use of the Prevention, Appraisal and Failure (PAF) model for calculating CQ in the SGQA, how Information Systems (SI) can contribute to this calculation, and which SGQA structure should integrate and operationalize the model. The investigation follows hypothetical-deductive reasoning, applying a qualitative strategy to a case study of the Epsilon TB-30 aircraft. After an initial theoretical framing, the raised hypotheses are tested through document analysis and interviews with personnel in key functions within this scope. The study shows that the PAF model can be used to calculate the CQ of the SGQA; however, the SI and the system processes need to be adapted to operationalize it. Finally, an implementation plan for the CQ model is proposed, along with recommendations for its future development.

Relevance:

80.00%

Publisher:

Abstract:

Doctorate in Mathematics

Relevance:

50.00%

Publisher:

Abstract:

An effective prognostics program will provide ample lead time for maintenance engineers to schedule a repair and to acquire replacement components before catastrophic failures occur. This paper presents a technique for accurate assessment of the remnant life of machines based on a health state probability estimation technique. For a comparative study of the proposed model with the proportional hazards model (PHM), experimental bearing failure data from an accelerated bearing test rig were used. The results show that the proposed prognostic model, based on health state probability estimation, provides more accurate predictions than the commonly used PHM in the bearing failure case study.
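The gist of health-state-based prognostics can be shown in a few lines: a classifier yields probabilities over discrete degradation states, and the remnant life estimate is the probability-weighted mean of each state's expected remaining life. The state definitions and numbers below are hypothetical, not the paper's bearing data.

```python
# Minimal sketch of health-state-based remnant life estimation: weight each
# state's expected remaining life by the estimated state probability.
# State names and all numbers are hypothetical.
state_probs = {"healthy": 0.10, "early wear": 0.60,
               "severe wear": 0.25, "near failure": 0.05}
mean_rul_hours = {"healthy": 900.0, "early wear": 400.0,
                  "severe wear": 120.0, "near failure": 15.0}

expected_rul = sum(p * mean_rul_hours[s] for s, p in state_probs.items())
print(f"Expected remnant life: {expected_rul:.0f} h")  # 360.75 h here
```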

Relevance:

50.00%

Publisher:

Abstract:

We study the scaling behaviors of a time-dependent fiber-bundle model with local load sharing. Upon approaching the complete failure of the bundle, the breaking rate of fibers diverges according to r(t) ∝ (T_f − t)^(−ξ), where T_f is the lifetime of the bundle and ξ ≈ 1.0 is a universal scaling exponent. The average lifetime of the bundle ⟨T_f⟩ scales with the system size as N^(−δ), where δ depends on the distribution of the individual fibers as well as on the breakdown rule. [S1063-651X(99)13902-3]
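A minimal simulation makes the ingredients of such a model concrete: a load-dependent breaking rate (here the common power-law rule r_i = σ_i^ρ, an assumed form) and nearest-neighbour load redistribution on a 1D chain. This sketch is illustrative, not the paper's exact model.

```python
# Sketch of a time-dependent fiber bundle with local load sharing: each
# intact fiber i breaks at rate load_i**rho (assumed breakdown rule), and a
# broken fiber's load is shared by its nearest intact neighbours on a 1D
# chain. Gillespie-style stepping gives the failure time of the bundle.
import random

def bundle_lifetime(n=100, rho=2.0, seed=0):
    rng = random.Random(seed)
    load = [1.0] * n            # initial load per fiber (total load n)
    intact = set(range(n))
    t = 0.0
    while intact:
        rates = {i: load[i] ** rho for i in intact}
        total = sum(rates.values())
        t += rng.expovariate(total)          # waiting time to the next break
        # Pick the breaking fiber with probability proportional to its rate.
        x, acc = rng.random() * total, 0.0
        for i, r in rates.items():
            acc += r
            if acc >= x:
                break
        intact.remove(i)
        # Local load sharing: split fiber i's load between the nearest intact
        # neighbours to its left and right (open chain ends lose load).
        left = max((j for j in intact if j < i), default=None)
        right = min((j for j in intact if j > i), default=None)
        for nb in (left, right):
            if nb is not None:
                load[nb] += load[i] / ((left is not None) + (right is not None))
        load[i] = 0.0
    return t

print(f"bundle lifetime: {bundle_lifetime():.4f}")
```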

Relevance:

50.00%

Publisher:

Abstract:

This dissertation deals with aspects of sequential data assimilation (in particular, ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved and does not add further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. The advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble splits into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed, and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. The accuracy of the medium-term forecasts is found to increase when the RAW filter is used.
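The RAW filter itself is compact enough to sketch: in a leapfrog scheme, the usual Robert-Asselin displacement d is split between time levels n and n+1 with weights α and α−1 (Williams 2009), which preserves the three-level mean while damping the computational mode; α = 1 recovers the classic RA filter. The test problem and parameter values below are illustrative assumptions.

```python
# Sketch of leapfrog time stepping with the Robert-Asselin-Williams (RAW)
# filter: the RA displacement d is applied with weight alpha at level n and
# alpha-1 at level n+1. alpha = 1 gives the classic RA filter; alpha ≈ 0.53
# is the commonly cited choice. The test ODE and parameters are assumptions.
import numpy as np

def leapfrog_raw(f, x0, dt, n_steps, nu=0.2, alpha=0.53):
    x_prev = x0
    x_curr = x0 + dt * f(x0)          # first step: forward Euler start-up
    out = [x_prev, x_curr]
    for _ in range(n_steps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_prev = x_curr + alpha * d          # filtered level n
        x_curr = x_next + (alpha - 1.0) * d  # slightly adjusted level n+1
        out.append(x_curr)
    return np.array(out)

# Test problem dx/dt = i*omega*x (pure oscillation; exact |x| stays 1).
omega = 1.0
sol = leapfrog_raw(lambda x: 1j * omega * x, 1.0 + 0j, dt=0.1, n_steps=500)
print(f"final amplitude |x| = {abs(sol[-1]):.4f} (exact solution keeps 1.0)")
```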

Relevance:

50.00%

Publisher:

Abstract:

In many network applications, the traffic is of burst type. Often, the transient response of the network to such traffic is the result of a series of interdependent events whose occurrence is not trivial to predict. Previous efforts on IEEE 802.15.4 networks often followed top-down approaches to model those sequences of events; that is, by making top-level models of the whole network, they tried to track the transient response of the network to burst packet arrivals. The problem with such approaches was that they were unable to give station-level views of the network response and were usually complex. In this paper, we propose a non-stationary analytical model for the IEEE 802.15.4 slotted CSMA/CA medium access control (MAC) protocol under a burst traffic arrival assumption and without the optional acknowledgements. We develop a station-level stochastic time-domain method from which the network-level metrics are extracted. Our bottom-up approach makes it possible to find station-level details such as delay, collision, and failure distributions. Moreover, network-level metrics such as the average packet loss or transmission success rate can be extracted from the model. Compared to previous models, our model is proven to be of lower memory and computational complexity order and also supports contention window sizes greater than one. We have carried out extensive comparative simulations to show the high accuracy of our model.
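For reference, the slotted CSMA/CA backoff procedure that such models describe can be sketched as a small per-station simulation. Modelling the channel-busy probability per CCA as a constant is a strong simplification (precisely the kind of stationarity assumption the paper's non-stationary model avoids); the parameter names and defaults follow IEEE 802.15.4.

```python
# Sketch of the IEEE 802.15.4 slotted CSMA/CA backoff procedure for one
# station. A constant channel-busy probability per CCA is an assumed
# simplification; macMinBE/macMaxBE/macMaxCSMABackoffs are standard defaults.
import random

def csma_ca_attempt(p_busy, rng, macMinBE=3, macMaxBE=5, macMaxCSMABackoffs=4):
    """Return (success, delay_in_backoff_periods) for one packet."""
    nb, be, delay = 0, macMinBE, 0
    while True:
        delay += rng.randint(0, 2 ** be - 1)   # random backoff
        cw = 2                                 # two consecutive clear CCAs needed
        while cw > 0:
            delay += 1                         # each CCA takes one backoff period
            if rng.random() < p_busy:          # channel found busy
                break
            cw -= 1
        if cw == 0:
            return True, delay                 # channel clear twice: transmit
        nb += 1
        be = min(be + 1, macMaxBE)
        if nb > macMaxCSMABackoffs:
            return False, delay                # channel access failure

rng = random.Random(7)
trials = [csma_ca_attempt(p_busy=0.4, rng=rng) for _ in range(10_000)]
success = sum(ok for ok, _ in trials) / len(trials)
print(f"access success rate: {success:.3f}")
```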