963 results for failure time model
Abstract:
We explore the role of business services in knowledge accumulation and growth and the determinants of knowledge diffusion, including the role of distance. A continuous-time model is estimated for several European countries, Japan, and the US. Policy simulations illustrate the benefits for EU growth of the deepening of the single market, the reduction of regulatory barriers, and the accumulation of technology and human capital. Our results support the basic insights of the Lisbon Agenda. Economic growth in Europe is enhanced to the extent that: trade in services increases, technology accumulation and diffusion increase, regulation becomes both less intensive and more uniform across countries, and human capital accumulation increases in all countries.
Abstract:
Measurements of Fe(II) and H2O2 were carried out in the Atlantic sector of the Southern Ocean during EisenEx, an iron enrichment experiment. Iron was added on three separate occasions, approximately every 8 days, as a ferrous sulfate (FeSO4) solution. Vertical profiles of Fe(II) showed maxima consistent with the plume of the iron infusion, while H2O2 profiles revealed corresponding minima, showing the effect of oxidation of Fe(II) by H2O2; detectable Fe(II) concentrations persisted for up to 8 days after an iron infusion. H2O2 concentrations increased at the depth of the chlorophyll maximum when iron concentrations returned to pre-infusion levels (<80 pM), possibly due to biological production related to iron reductase activity. In this work, Fe(II) and dissolved iron were themselves used as tracers for subsequent iron infusions when no further SF6 was added. EisenEx was subject to periods of weak and strong mixing. Slow mixing after the second infusion allowed significant concentrations of Fe(II) and Fe to persist for several days. During this time, dissolved and total iron in the infusion plume behaved almost conservatively as it was trapped between a relict mixed layer and a new rain-induced mixed layer. Using dissolved iron, a vertical diffusion coefficient of Kz = 6.7 ± 0.7 cm^2/s was obtained for this 2-day period. During a subsequent surface survey of the iron-enriched patch, elevated levels of Fe(II) were found in surface waters, presumably from Fe(II) dissolved in the rainwater that was falling at this time. Model results suggest that the reaction between uncomplexed Fe(III) and superoxide (O2-) was a significant source of Fe(II) during EisenEx and helped to maintain high levels of Fe(II) in the water column.
This phenomenon may occur in iron enrichment experiments when two conditions are met: (i) when Fe is added to a system already saturated with regard to organic complexation, and (ii) when mixing processes are slow, thereby reducing the dispersion of iron into under-saturated waters.
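As an illustration of the kind of estimate quoted above, a vertical diffusivity can be recovered from the broadening of a tracer profile between two surveys: under one-dimensional Fickian diffusion the plume's second moment grows as sigma^2(t) = sigma0^2 + 2·Kz·t. The sketch below applies this to a synthetic Gaussian profile; all numbers are illustrative and are not the EisenEx data.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the EisenEx measurements): under
# 1-D Fickian diffusion the second moment of a tracer plume grows linearly,
# sigma^2(t) = sigma0^2 + 2 * Kz * t, so Kz can be estimated from the
# variance of the concentration profile at two times.

def profile_variance(z, c):
    """Concentration-weighted variance (second moment) of a vertical profile."""
    z, c = np.asarray(z, float), np.asarray(c, float)
    zbar = np.sum(z * c) / np.sum(c)           # centre of mass of the plume
    return np.sum((z - zbar) ** 2 * c) / np.sum(c)

def estimate_kz(z, c_early, c_late, dt_seconds):
    """Kz = (sigma_late^2 - sigma_early^2) / (2 * dt), in units of z^2 per second."""
    dvar = profile_variance(z, c_late) - profile_variance(z, c_early)
    return dvar / (2.0 * dt_seconds)

# Synthetic Gaussian plume spreading with a true Kz of 1 cm^2/s = 1e-4 m^2/s
z = np.linspace(0.0, 100.0, 2001)              # depth grid, metres
kz_true, dt = 1e-4, 2 * 24 * 3600.0            # m^2/s, two days
var_early = 25.0
var_late = var_early + 2 * kz_true * dt
c_early = np.exp(-(z - 50.0) ** 2 / (2 * var_early))
c_late = np.exp(-(z - 50.0) ** 2 / (2 * var_late))
kz_est = estimate_kz(z, c_early, c_late, dt)
print(round(kz_est * 1e4, 2), "cm^2/s")        # recovers ~1.0
```

In practice the profiles are noisy and advection is not negligible, so such estimates carry uncertainties like the ±0.7 cm^2/s quoted above.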
Abstract:
The present time.--Model prison.--Downing Street.--The new Downing Street.--Stump-orator.--Parliaments.--Hudson's statue.--Jesuitism
Abstract:
The application of neural network algorithms for increasing the accuracy of navigation systems is shown. In navigation systems where a pair of sensors is used in the same device in different positions and the disturbances act equally on both sensors, a trained neural network can be advantageous for increasing the accuracy of the system. The neural algorithm was used to determine the interconnection between the sensor errors in the two channels, avoiding the unobservability of the navigation system. Representing the thermal error of two-component navigation sensors by a time model, whose coefficients depend only on the parameters of the device and its orientation relative to the disturbance vector, makes it possible to predict changes in thermal error by measuring the current temperature, once the model parameters have been identified for the given position. These properties of the thermal model are used for training the neural network and compensating the errors of the navigation system in non-stationary thermal fields.
Abstract:
Climatic changes cause alterations in the circulation patterns of the world oceans. The highly saline Mediterranean Outflow Water (MOW), formed within the Mediterranean Sea, crosses the Strait of Gibraltar in a westerly direction, then turns north-westward and follows the Iberian slope at 600-1500 m water depth. The circulation pattern and current speed of the MOW are strongly influenced by climatically induced variations and thus control sedimentation processes along the South and West Iberian continental slope. Sedimentation characteristics of the investigated area are therefore suitable for reconstructing temporal hydrodynamic changes of the MOW. Detailed investigations of the silt-sized grain distribution, physical properties, and hydroacoustic data were performed to recalculate paleo-current velocities and to understand the sedimentation history in the Gulf of Cadiz and on the Portuguese continental slope. A time model based on δ18O data and 14C dating of planktic foraminifera allowed the stratigraphic classification of the core material and thus the dating of the current-induced sediment layers showing the variations of paleo-current intensities. The evaluation and interpretation of the gathered data sets enabled us to reconstruct lateral and temporal sedimentation patterns of the MOW for the Holocene and the late Pleistocene, back to the Last Glacial Maximum (LGM).
Abstract:
Motivated by new and innovative rental business models, this paper develops a novel discrete-time model of a rental operation with random loss of inventory due to customer use. The inventory level is chosen before the start of a finite rental season, and customers not immediately served are lost. Our analysis framework uses stochastic comparisons of sample paths to derive structural results that hold under good generality for demands, rental durations, and rental unit lifetimes. Considering different "recirculation" rules, i.e., which rental unit to choose to meet each demand, we prove the concavity of the expected profit function and identify the optimal recirculation rule. A numerical study clarifies when considering rental unit loss and recirculation rules matters most for the inventory decision: Accounting for rental unit loss can increase the expected profit by 7% for a single season and becomes even more important as the time horizon lengthens. We also observe that the optimal inventory level in response to increasing loss probability is non-monotonic. Finally, we show that choosing the optimal recirculation rule over another simple policy allows more rental units to be profitably added, and the profit-maximizing service level increases by up to 6 percentage points.
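A toy Monte-Carlo version of this setting makes the effect of unit loss concrete. The simulation below is a hypothetical, heavily simplified variant (one-period rentals, Bernoulli demand and loss), not the paper's model; all parameter names and values are illustrative.

```python
import random

# Hypothetical, heavily simplified rental season: in each period one customer
# arrives with probability p_demand; if a unit is available it is rented for
# that period and returned, except that with probability q_loss the customer
# loses/damages it. Customers who find no unit are lost sales.

def expected_profit(n_units, periods, p_demand, q_loss, rent, unit_cost,
                    trials=5000, seed=1):
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        stock = n_units
        season = -unit_cost * n_units          # up-front inventory cost
        for _ in range(periods):
            if stock > 0 and random.random() < p_demand:
                season += rent                 # serve the customer
                if random.random() < q_loss:
                    stock -= 1                 # unit lost through use
        total += season
    return total / trials

no_loss = expected_profit(3, 50, 0.6, 0.0, 1.0, 5.0)
with_loss = expected_profit(3, 50, 0.6, 0.1, 1.0, 5.0)
print(round(no_loss, 1), round(with_loss, 1))  # profit shrinks as q_loss grows
```

Even this crude sketch shows why ignoring unit loss biases the inventory decision: the same stocking level earns visibly less once units can be lost mid-season.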
Abstract:
This thesis presents quantitative studies of T cell and dendritic cell (DC) behaviour in mouse lymph nodes (LNs) in the naive state and following immunisation. These processes are of importance and interest in basic immunology, and better understanding could improve both diagnostic capacity and therapeutic manipulations, potentially helping to produce more effective vaccines or to develop treatments for autoimmune diseases. The problem is also interesting conceptually, as it is relevant to other fields where the 3D movement of objects is tracked with a discrete scanning interval. A general immunology introduction is presented in chapter 1. In chapter 2, I apply quantitative methods to multi-photon imaging data to measure how T cells and DCs are spatially arranged in LNs. This has been previously studied to describe differences between the naive and immunised states and as an indicator of the magnitude of the immune response in LNs, but previous analyses have been generally descriptive. The quantitative analysis shows that some of the previous conclusions may have been premature. In chapter 3, I use Bayesian state-space models to test some hypotheses about the mode of T cell search for DCs. A two-state mode of movement, where T cells can be classified as either interacting with a DC or freely migrating, is supported over a model where T cells would home in on DCs at a distance through, for example, the action of chemokines. In chapter 4, I study whether T cell migration is linked to the geometric structure of the fibroblast reticular cell (FRC) network. I find support for the hypothesis that the movement is constrained to the FRC network over an alternative 'random walk with persistence time' model where cells would move randomly, with a short-term persistence driven by a hypothetical T cell intrinsic 'clock'. I also present unexpected results on the FRC network geometry.
Finally, a quantitative method is presented for addressing some measurement biases inherent to multi-photon imaging. In all three chapters, novel findings are made, and the methods developed have the potential for further use to address important problems in the field. In chapter 5, I present a summary and synthesis of results from chapters 3-4 and a more speculative discussion of these results and potential future directions.
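The 'random walk with persistence time' null model mentioned above can be sketched as a simple simulation: a cell moves at constant speed and re-draws a uniformly random 3-D direction whenever an exponentially distributed persistence clock expires. This is a generic sketch under assumed parameter values, not the thesis' fitted model.

```python
import math, random

# Generic sketch (assumed parameter values, not the thesis' fitted model) of a
# 'random walk with persistence time': constant speed, with a new random 3-D
# direction drawn each time an exponential persistence clock expires.

def random_unit():
    """Uniformly distributed direction on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(theta), r * math.sin(theta), z)

def persistent_walk(speed, persistence, dt, n_steps, seed=0):
    random.seed(seed)
    pos = (0.0, 0.0, 0.0)
    direction = random_unit()
    clock = random.expovariate(1.0 / persistence)
    path = []
    for _ in range(n_steps):
        clock -= dt
        if clock <= 0.0:                       # persistence expired: turn
            direction = random_unit()
            clock = random.expovariate(1.0 / persistence)
        pos = tuple(p + speed * d * dt for p, d in zip(pos, direction))
        path.append(pos)
    return path

# e.g. speed in um/min, persistence in min, track sampled every 0.25 min,
# mimicking the discrete scanning interval of multi-photon imaging
track = persistent_walk(speed=10.0, persistence=2.0, dt=0.25, n_steps=400)
print(len(track), track[-1])
```

Simulated tracks from such a null model can be compared against observed cell tracks sampled at the same scanning interval.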
Abstract:
Nowadays, organizations tend to develop themselves in order to increase their efficiency and effectiveness. In this context, this study proposes a Quality Costs (CQ) model for the maintenance and sustainment of Portuguese Air Force (FA) weapon systems, contributing to the continuous improvement of its Airworthiness and Quality Management System (SGQA). The study evaluates the use of the Prevention, Appraisal and Failure (PAF) model for calculating the CQ within the SGQA, how Information Systems (SI) can contribute to this calculation, and which structure of the system should integrate and operationalize this model. The investigation follows hypothetical-deductive reasoning, using a qualitative strategy applied to a case study of the Epsilon TB-30 weapon system. After presenting a theoretical framework, the identified hypotheses are tested through document analysis and interviews with personnel in key functions within this scope. The study shows that it is possible to use the PAF model to calculate the CQ of the SGQA; however, the SI and the system processes must be adapted to operationalize it. Finally, an implementation plan for the CQ model is proposed, and some recommendations are made for its further development.
Abstract:
Doctorate in Mathematics
Abstract:
The first paper sheds light on the informational content of high-frequency data and daily data. I assess the economic value of the two model families by comparing their performance in forecasting asset volatility through the Value at Risk metric. In running the comparison, this paper introduces two key assumptions: jumps in prices and a leverage effect in volatility dynamics. Findings suggest that high-frequency data models do not exhibit a superior performance over daily data models. In the second paper, building on Majewski et al. (2015), I propose an affine discrete-time model, labeled VARG-J, which is characterized by a multifactor volatility specification. In the VARG-J model, volatility experiences periods of extreme movements through a jump factor modeled as an Autoregressive Gamma Zero process. The estimation under the historical measure is done by quasi-maximum likelihood and the Extended Kalman Filter. This strategy allows filtering out both volatility factors by introducing a measurement equation that relates the realized volatility to the latent volatility. The risk premia parameters are calibrated using call options written on the S&P 500 Index. The results clearly illustrate the important contribution of the jump factor to the pricing performance of options and the economic significance of the volatility jump risk premia. In the third paper, I analyze whether there is empirical evidence of contagion at the bank level, measuring the direction and size of contagion transmission between European markets. In order to understand and quantify contagion transmission in the banking market, I estimate the econometric model of Aït-Sahalia et al. (2015), in which contagion is defined as the within- and between-country transmission of shocks, and asset returns are modeled directly as a Hawkes jump-diffusion process. The empirical analysis indicates clear evidence of contagion from Greece to European countries as well as self-contagion in all countries.
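For context, the realized volatility that enters such a measurement equation is typically computed as the sum of squared intraday log returns. The sketch below does this on a synthetic one-day price path; it is a generic illustration, not the estimator used in the papers.

```python
import numpy as np

# Generic illustration (synthetic path, not the papers' data): daily realized
# variance as the sum of squared intraday log returns, the quantity that a
# measurement equation can relate to latent volatility.

rng = np.random.default_rng(42)

def realized_variance(intraday_prices):
    r = np.diff(np.log(intraday_prices))       # intraday log returns
    return float(np.sum(r ** 2))

# One synthetic trading day: 390 one-minute returns with per-minute vol s
s = 5e-4
steps = rng.normal(0.0, s, 390)
prices = 100.0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))
rv = realized_variance(prices)
print(rv)   # close to 390 * s**2 = 9.75e-05
```

In jump-robust variants the raw sum of squares is replaced by estimators less sensitive to large single returns, which is exactly where the price-jump assumption matters.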
Abstract:
This work summarizes a wide variety of research activities carried out with the main objective of increasing the efficiency and reducing the fuel consumption of Gasoline Direct Injection (GDI) engines, especially under high loads. For this purpose, two main innovative technologies have been studied, Water Injection and Low-Pressure Exhaust Gas Recirculation, which help to reduce the temperature of the gases inside the combustion chamber and thus mitigate knock, this being one of the main limiting factors for the efficiency of modern downsized engines that operate at high specific power. A prototype Port Water Injection system was developed and extensive experimental work was carried out, initially to identify the benefits and limitations of this technology. This led to the subsequent development and testing of a combustion controller, implemented in a Rapid Control Prototyping environment, capable of managing water injection to achieve knock mitigation and a more efficient combustion phase. Regarding Low-Pressure Exhaust Gas Recirculation, a commercial engine already equipped with this technology was used to carry out experimental work in a similar fashion to that for water injection. Another prototype water injection system was mounted on this second engine in order to test both technologies, first separately to compare them under equal conditions, and then together in search of a possible synergy. Additionally, based on experimental data from several engines tested during this study, including both GDI and GCI engines, a real-time model (or virtual sensor) for the estimation of the maximum in-cylinder pressure has been developed and validated. This parameter is of vital importance for determining the rate at which damage occurs to engine components, and therefore for extracting the maximum performance without inducing permanent damage.
Abstract:
In the face of the current economic and financial environment, predicting corporate bankruptcy is arguably a phenomenon of increasing interest to investors, creditors, borrowing firms, and governments alike. Within the strand of literature focused on bankruptcy forecasting we can find diverse types of research employing a wide variety of techniques, but only a few researchers have used survival analysis for the examination of this issue. We propose a model for the prediction of corporate bankruptcy based on survival analysis, a technique which stands on its own merits. In this research, the hazard rate is the probability of "bankruptcy" at time t, conditional upon having survived until time t. Many hazard models are applied in a context where the passage of time naturally affects the hazard rate. The model employed in this paper uses the survival time or the hazard risk as the dependent variable, treating the unsuccessful companies as censored observations.
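The conditional-probability definition of the hazard rate above has a simple empirical counterpart in discrete time: the number of bankruptcies at time t divided by the number of firms still at risk at t. The sketch below uses made-up data and the common convention that firms with no observed failure are right-censored; it is an illustration of the concept, not the paper's model.

```python
import numpy as np

# Illustrative discrete-time empirical hazard with right-censoring:
# h(t) = (failures observed at t) / (firms still at risk at t).

def discrete_hazard(times, event):
    """times: observed time for each firm; event: 1 = bankruptcy, 0 = censored."""
    times = np.asarray(times)
    event = np.asarray(event)
    hazards = {}
    for t in sorted(set(times[event == 1])):
        at_risk = int(np.sum(times >= t))              # survived to at least t
        failures = int(np.sum((times == t) & (event == 1)))
        hazards[int(t)] = failures / at_risk
    return hazards

times = [2, 3, 3, 4, 5, 5]   # observed times for 6 hypothetical firms
event = [1, 1, 1, 0, 0, 0]   # 1 = bankruptcy observed, 0 = right-censored
haz = discrete_hazard(times, event)
print(haz)   # hazard 1/6 at t=2 and 2/5 at t=3
```

Regression-type hazard models then let this conditional probability depend on firm covariates rather than on time alone.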
Abstract:
First published online: December 16, 2014.
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved and does not incur additional computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square-root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble separates into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble for different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model.
The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of medium-term forecasts is increased by using the RAW filter.
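As background to the ensemble methods discussed above, a single analysis step of a stochastic ensemble Kalman filter (the perturbed-observations variant, deliberately different from the deterministic square-root filters and the EnKBF studied in the dissertation) can be sketched as follows; dimensions and values are illustrative.

```python
import numpy as np

# Generic sketch (illustrative dimensions/values): one analysis step of a
# stochastic ensemble Kalman filter with perturbed observations.

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R):
    """X: (n, M) ensemble; y: (p,) observation; H: (p, n); R: (p, p)."""
    n, M = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (M - 1)                           # sample background covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # perturbing the observations keeps the analysis spread consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=M).T
    return X + K @ (Y - H @ X)

# Toy example: two-state system, only the first component is observed
M = 500
X = rng.normal(0.0, 1.0, size=(2, M))                # prior ensemble ~ N(0, I)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
y = np.array([2.0])
Xa = enkf_analysis(X, y, H, R)
# with prior var 1 and obs var 0.25 the gain is ~0.8, so the analysis mean
# of the observed component moves most of the way toward y = 2
print(X.mean(axis=1).round(2), Xa.mean(axis=1).round(2))
```

Deterministic square-root filters replace the observation perturbation with an exact transform of the anomalies, which is where the ensemble-clustering phenomenon discussed in the second part can arise.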