856 results for Time-to-collision
Abstract:
The stochastic version of Pontryagin's maximum principle is applied to determine an optimal maintenance policy for equipment subject to random deterioration. The deterioration of the equipment with age is modelled as a random process. Next, the model is generalized to include random catastrophic failure of the equipment. The optimal maintenance policy is derived for two special probability distributions of time to failure of the equipment, namely the exponential and Weibull distributions. Both the salvage value and the deterioration rate of the equipment are treated as state variables and the maintenance as a control variable. The result is illustrated by an example.
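To make the trade-off concrete, here is a minimal numerical sketch, not the abstract's Pontryagin derivation: it scans constant maintenance rates for a machine with an exponential time to failure, where maintenance is costly but slows the decay of salvage value. All parameters (R, c, d0, k, lam, r, T, S0) and the dynamics are illustrative assumptions.

import numpy as np

R, c = 10.0, 6.0          # revenue rate and maintenance cost rate (assumed)
d0, k = 0.30, 0.20        # base salvage decay rate, maintenance effectiveness
lam, r = 0.10, 0.05       # exponential failure rate, discount rate
T, S0 = 10.0, 50.0        # planning horizon and initial salvage value

t = np.linspace(0.0, T, 1001)
dt = t[1] - t[0]

def value(u):
    # discounted net revenue while the machine survives (survival prob e^{-lam*t})
    running = np.sum(np.exp(-(r + lam) * t) * (R - c * u)) * dt
    # discounted salvage value, decaying at rate (d0 - k*u)
    salvage = np.exp(-(r + lam) * T) * S0 * np.exp(-(d0 - k * u) * T)
    return running + salvage

best_u = max(np.linspace(0.0, 1.0, 101), key=value)
print("best constant maintenance rate u = %.2f (value %.2f)" % (best_u, value(best_u)))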
Abstract:
The BeiDou system is the first global navigation satellite system in which all satellites transmit triple-frequency signals that can provide the positioning, navigation, and timing independently. A benefit of triple-frequency signals is that more useful combinations can be formed, including some extrawide-lane combinations whose ambiguities can generally be instantaneously fixed without distance restriction, although the narrow-lane ambiguity resolution (NL AR) still depends on the interreceiver distance or requires a long time to achieve. In this paper, we synthetically study decimeter and centimeter kinematic positioning using BeiDou triple-frequency signals. It starts with AR of two extrawide-lane signals based on the ionosphere-free or ionosphere-reduced geometry-free model. For decimeter positioning, one can immediately use two ambiguity-fixed extrawide-lane observations without pursuing NL AR. To achieve higher accuracy, NL AR is the necessary next step. Despite the fact that long-baseline NL AR is still challenging, some NL ambiguities can indeed be fixed with high reliability. Partial AR for NL signals is acceptable, because as long as some ambiguities for NL signals are fixed, positioning accuracy will be certainly improved. With accumulation of observations, more and more NL ambiguities are fixed and the positioning accuracy continues to improve. An efficient Kalman-filtering system is established to implement the whole process. The formulated system is flexible, since additional constraints can be easily applied to enhance the model's strength. Numerical results from a set of real triple-frequency BeiDou data on a 50 km baseline show that decimeter positioning is achievable instantaneously. With only five data epochs, 84% of NL ambiguities can be fixed so that the real-time kinematic accuracies are 4.5, 2.5, and 16 cm for north, east, and height components (respectively), while with 10 data epochs more than 90% of NL ambiguities are fixed, and the real-time kinematic solutions are improved to centimeter level for all three coordinate components.
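As a concrete illustration of why extra-wide-lane combinations are easy to fix, the sketch below computes the wavelengths of triple-frequency linear combinations from the published BeiDou-2 B1/B2/B3 carrier frequencies. The (i, j, k) coefficient sets shown are common extra-wide-lane and wide-lane examples and are assumptions, not necessarily the paper's exact choices; the several-metre extra-wide-lane wavelength is what makes instantaneous ambiguity resolution possible.

# Wavelength of a combination: lambda = c / (i*f1 + j*f2 + k*f3)
C = 299_792_458.0                                  # speed of light, m/s
f1, f2, f3 = 1561.098e6, 1207.140e6, 1268.520e6    # B1, B2, B3 carriers, Hz

def wavelength(i, j, k):
    return C / (i * f1 + j * f2 + k * f3)

for name, coeffs in [("extra-wide-lane (0,-1, 1)", (0, -1, 1)),
                     ("wide-lane       (1, 0,-1)", (1, 0, -1)),
                     ("wide-lane       (1,-1, 0)", (1, -1, 0))]:
    print("%s  ->  %.3f m" % (name, wavelength(*coeffs)))
# prints roughly 4.884 m, 1.025 m, and 0.847 m respectively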
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together and conclude on some normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. The serial dependency behaves differently in bull and bear markets, however: it is strongly positive in rising markets, whereas in bear markets it is closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. Results also suggest that volatility is non-stationary from time to time. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we can conclude that volatility is not easily estimated, even from high-frequency data. It is neither very well behaved in terms of stability nor in terms of dependency over time. Based on these observations, we would recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
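The abstract does not spell out the thesis's classification rule, so the following is only a hedged sketch of the statistic's flavour: regimes are labelled by the sign of a moving-average return on synthetic data, and the Trend Switch Probability is estimated as the empirical frequency with which the regime flips from one day to the next. The window length and labelling rule are assumptions.

import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0003, 0.01, 2500)      # synthetic daily returns

window = 60
ma = np.convolve(returns, np.ones(window) / window, mode="valid")
regime = np.where(ma > 0, 1, -1)              # 1 = rising, -1 = falling

switches = regime[1:] != regime[:-1]
print("estimated trend switch probability: %.4f" % switches.mean())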
Abstract:
An analysis is performed to study the unsteady laminar incompressible boundary-layer flow of an electrically conducting fluid in a cone due to a point sink with an applied magnetic field. The unsteadiness in the flow is considered for two types of motion, viz. the motion arising due to the free stream velocity varying continuously with time and the transient motion occurring due to an impulsive change either in the strength of the point sink or in the wall temperature. The partial differential equations governing the flow have been solved numerically using an implicit finite-difference scheme in combination with the quasilinearization technique. The magnetic field increases the skin friction but reduces heat transfer. The heat transfer and temperature field are strongly influenced by the viscous dissipation and Prandtl number. The velocity field is more affected at the early stage of the transient motion, caused by an impulsive change in the strength of the point sink, as compared to the temperature field. When the transient motion is caused by a sudden change in the wall temperature, both skin friction and heat transfer take more time to reach a new steady state. The transient nature of the flow and heat transfer is active for a short time in the case of suction and for a long time in the case of injection. The viscous dissipation prolongs the transient behavior of the flow.
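The quasilinearization technique mentioned above amounts to a Newton linearization of the nonlinear terms, after which each iteration solves a linear implicit finite-difference system. The sketch below applies it to the model two-point problem u'' = u^2, u(0) = 0, u(1) = 1, as a stand-in for the coupled MHD boundary-layer equations actually solved in the paper.

import numpy as np

N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u = x.copy()                                   # initial guess satisfying the BCs

for it in range(20):
    # linearized equation: u_new'' - 2*u_old*u_new = -u_old^2
    A = np.zeros((N, N))
    b = np.zeros(N)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = 0.0, 1.0                     # boundary conditions
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - 2.0 * u[i]     # Newton-linearized u^2 term
        b[i] = -u[i] ** 2
    u_new = np.linalg.solve(A, b)
    diff = np.max(np.abs(u_new - u))
    u = u_new
    if diff < 1e-10:                           # quadratic convergence in practice
        break
print("converged in %d iterations, u(0.5) = %.6f" % (it + 1, u[N // 2]))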
Abstract:
Uveal melanoma (UM) is the second most common primary intraocular cancer worldwide. It is a relatively rare cancer, but still the second most common type of primary malignant melanoma in humans. UM is a slowly growing tumor, and gives rise to distant metastasis, mainly to the liver via the bloodstream. About 40% of patients with UM die of metastatic disease within 10 years of diagnosis, irrespective of the type of treatment. During the last decade, two main lines of research have aimed to achieve enhanced understanding of the metastasis process and accurate prognosis of patients with UM. One emphasizes the characteristics of tumor cells, particularly their nucleoli and markers of proliferation, and the other the characteristics of tumor blood vessels. Of several morphometric measurements, the mean diameter of the ten largest nucleoli (MLN) has become the most widely applied. A large MLN has consistently been associated with a high likelihood of dying from UM. Blood vessels are of paramount importance in metastasis of UM. Different extravascular matrix patterns, such as loops and networks, can be seen in UM; their presence is associated with death from metastatic melanoma. However, the density of microvessels is also of prognostic importance. This study was undertaken to help in understanding some histopathological factors that might contribute to the development of metastasis in UM patients. Factors that could be related to tumor progression to metastatic disease, namely nucleolar size (MLN), microvascular density (MVD), cell proliferation, and the insulin-like growth factor 1 receptor (IGF-1R), were investigated. The primary aim of this thesis was to study the relationship between prognostic factors such as tumor cell nucleolar size, proliferation, extravascular matrix patterns, and dissemination of UM, and to assess to what extent there is a relationship to metastasis. The secondary goal was to develop a multivariate model which includes MLN and cell proliferation in addition to MVD, and which would fit population-based, melanoma-related survival data better than previous models. I studied 167 patients with UM who developed metastasis, in some cases a very long time after removal of the eye; metastatic disease was the main cause of death, as documented in the Finnish Cancer Registry and on death certificates. Using an independent population-based data set, it was confirmed that MLN and extravascular matrix loops and networks were unrelated, independent predictors of survival in UM. It was also found that multivariate models including MVD in addition to MLN fitted survival data significantly better than models which excluded MVD. This supports the idea that both the characteristics of the blood vessels and of the cells are important, and a future direction would be to examine whether the gene expression profile is associated more with MVD or with MLN. The former relates to the host response to the tumor and may not be as tightly associated with the gene expression profile, yet it is most likely involved in the process of hematogenous metastasis. Because fresh tumor material is needed for reliable genetic analysis, such analysis could not be performed. Although noninvasive detection of certain extravascular matrix patterns in managing patients with UM is now technically possible, this study and tumor genetics suggest that such noninvasive methods will not fully capture the process of clinical metastasis.
Progress in resection and biopsy techniques is likely in the near future to provide fresh material for the ophthalmic pathologist to correlate angiographic data, histopathological characteristics such as MLN, and genetic data. When cell proliferation in UM was assessed by Ki-67 immunoreactivity, this study supported the theory that tumors containing epithelioid cells grow faster and have a poorer prognosis. The cell proliferation index fitted the survival data best when combined with MVD, MLN, and the presence of epithelioid cells. Analogous to the finding that high MVD in primary UM is associated with a shorter time to metastasis than low MVD, high MVD in hepatic metastases tends to be associated with shorter survival after diagnosis of metastasis. Because the liver is the main organ for metastasis from UM, growth factors largely produced in the liver (hepatocyte growth factor, epidermal growth factor, and insulin-like growth factor 1, IGF-1), together with their receptors, may have a role in the homing and survival of metastatic cells. Therefore, the association between immunoreactivity for IGF-1R in primary UM and metastatic death was studied. It was found that immunoreactivity for IGF-1R did not independently predict metastasis from primary UM in my series.
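As a hedged sketch of the kind of multivariate survival modelling described above, the following fits a Cox proportional hazards model with MLN, MVD, Ki-67 proliferation index, and presence of epithelioid cells as covariates, using the lifelines library on synthetic data. The column names and numbers are purely illustrative, not the thesis's dataset.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 167
df = pd.DataFrame({
    "mln_um":      rng.normal(6.0, 2.0, n),    # mean of 10 largest nucleoli (assumed scale)
    "mvd":         rng.normal(40.0, 15.0, n),  # microvessel density (assumed scale)
    "ki67_pct":    rng.normal(5.0, 3.0, n),    # proliferation index (assumed scale)
    "epithelioid": rng.integers(0, 2, n),      # epithelioid cells present (0/1)
})
# synthetic survival times loosely tied to a linear risk score
risk = 0.25 * df["mln_um"] + 0.02 * df["mvd"] + 0.1 * df["ki67_pct"]
df["years"] = rng.exponential(1.0 / np.exp(0.1 * (risk - risk.mean())), n) * 5
df["death_from_melanoma"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="death_from_melanoma")
cph.print_summary()   # hazard ratios per covariate; nested models can be compared by log-likelihood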
Abstract:
A better understanding of vacuum arcs is desirable in many of today's 'big science' projects including linear colliders, fusion devices, and satellite systems. For the Compact Linear Collider (CLIC) design, radio-frequency (RF) breakdowns occurring in accelerating cavities influence efficiency optimisation and cost reduction issues. Studying vacuum arcs both theoretically and experimentally under well-defined and reproducible direct-current (DC) conditions is the first step towards exploring RF breakdowns. In this thesis, we have studied Cu DC vacuum arcs with a combination of experiments, a particle-in-cell (PIC) model of the arc plasma, and molecular dynamics (MD) simulations of the subsequent surface damage mechanism. We have also developed the 2D Arc-PIC code and the physics model incorporated in it, especially for the purpose of modelling the plasma initiation in vacuum arcs. Assuming the presence of a field emitter at the cathode initially, we have identified the conditions for plasma formation and have studied the transition from the field emission stage to a fully developed arc. The 'footing' of the plasma is the cathode spot that supplies the arc continuously with particles; the high-density core of the plasma is located above this cathode spot. Our results have shown that once an arc plasma is initiated, and as long as energy is available, the arc is self-maintaining due to the plasma sheath that ensures enhanced field emission and sputtering. The plasma model can already give an estimate of how the time-to-breakdown changes with the neutral evaporation rate, which is yet to be determined by atomistic simulations. Due to the non-linearity of the problem, we have also performed a code-to-code comparison. The reproducibility of plasma behaviour and time-to-breakdown with independent codes increased confidence in the results presented here. Our MD simulations identified high-flux, high-energy ion bombardment as a possible mechanism forming the early-stage surface damage in vacuum arcs. In this mechanism, sputtering occurs mostly in clusters, as a consequence of overlapping heat spikes. Different-sized experimental and simulated craters were found to be self-similar, with a crater depth-to-width ratio of about 0.23 (simulation) to 0.26 (experiment). Experiments, which we carried out to investigate the energy dependence of DC breakdown properties, point to an intrinsic connection between DC and RF scaling laws and suggest the possibility of accumulative effects influencing the field enhancement factor.
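Plasma initiation here starts from a field emitter on the cathode; as a hedged aside, the sketch below evaluates the standard elementary Fowler-Nordheim field-emission current density for copper, with beta the field enhancement factor mentioned at the end of the abstract. The constants and the 4.5 eV Cu work function are textbook values; the chosen fields and beta values are merely illustrative, not the thesis's fitted numbers.

import math

A = 1.54e-6      # A eV / V^2 (elementary Fowler-Nordheim prefactor)
B = 6.83e9       # eV^-1.5 V / m (exponent constant)
phi = 4.5        # Cu work function, eV

def fn_current_density(E_macro, beta):
    """Field-emission current density (A/m^2) for macroscopic field E_macro (V/m)."""
    F = beta * E_macro                         # local field at the emitter tip
    return (A * F**2 / phi) * math.exp(-B * phi**1.5 / F)

for beta in (30, 60, 100):
    print("beta = %3d  ->  J = %.3e A/m^2" % (beta, fn_current_density(200e6, beta)))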
Abstract:
We consider a discrete-time queue with finite capacity and i.i.d. and Markov-modulated arrivals. Efficient algorithms are developed to calculate the moments and the distributions of the first time to overflow and of the regeneration length. The results are extended to the multiserver queue. Some illustrative numerical examples are provided.
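As a hedged illustration of the i.i.d. case, the sketch below builds the transition matrix of a discrete-time single-server queue on the queue-length states, makes the overflow level absorbing, and reads off the first-passage (time-to-overflow) distribution by repeated multiplication. The late-arrival, one-departure-per-slot model and the batch-arrival pmf are assumptions, not necessarily the paper's exact model.

import numpy as np

K = 10                                        # buffer capacity
arrivals = {0: 0.30, 1: 0.40, 2: 0.30}        # i.i.d. batch-arrival pmf (assumed)

P = np.zeros((K + 2, K + 2))                  # states 0..K, state K+1 = overflow (absorbing)
P[K + 1, K + 1] = 1.0
for q in range(K + 1):
    for a, pa in arrivals.items():
        nxt = max(q - 1, 0) + a               # one departure per slot, then arrivals
        P[q, nxt if nxt <= K else K + 1] += pa

dist = np.zeros(K + 2)
dist[0] = 1.0                                 # start from an empty queue
absorbed, mean_t = 0.0, 0.0
for t in range(1, 2001):
    dist = dist @ P
    mean_t += t * (dist[K + 1] - absorbed)    # mass first absorbed at slot t
    absorbed = dist[K + 1]
print("P(overflow within 2000 slots) = %.4f, mean time-to-overflow ~ %.1f slots"
      % (absorbed, mean_t / absorbed))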
Abstract:
We recast the reconstruction problem of diffuse optical tomography (DOT) in a pseudo-dynamical framework and develop a method to recover the optical parameters using particle filters, i.e., stochastic filters based on Monte Carlo simulations. In particular, we have implemented two such filters, viz., the bootstrap (BS) filter and the Gaussian-sum (GS) filter and employed them to recover optical absorption coefficient distribution from both numerically simulated and experimentally generated photon fluence data. Using either indicator functions or compactly supported continuous kernels to represent the unknown property distribution within the inhomogeneous inclusions, we have drastically reduced the number of parameters to be recovered and thus brought the overall computation time to within reasonable limits. Even though the GS filter outperformed the BS filter in terms of accuracy of reconstruction, both gave fairly accurate recovery of the height, radius, and location of the inclusions. Since the present filtering algorithms do not use derivatives, we could demonstrate accurate contrast recovery even in the middle of the object where the usual deterministic algorithms perform poorly owing to the poor sensitivity of measurement of the parameters. Consistent with the fact that the DOT recovery, being ill posed, admits multiple solutions, both the filters gave solutions that were verified to be admissible by the closeness of the data computed through them to the data used in the filtering step (either numerically simulated or experimentally generated). (C) 2011 Optical Society of America
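The bootstrap filter at the core of the method follows the standard predict-weight-resample pattern; below is a self-contained skeleton with a scalar toy forward map standing in for the DOT photon-transport solver. The map g, the noise levels, and the pseudo-dynamics step size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
g = lambda x: x + 0.5 * np.sin(x)             # toy "forward model" (stand-in for DOT solver)
true_x = 1.3
y = g(true_x) + rng.normal(0, 0.05, 50)       # noisy measurements

N = 500
particles = rng.uniform(0.0, 3.0, N)          # prior over the unknown parameter
for yk in y:
    particles += rng.normal(0.0, 0.02, N)                 # pseudo-dynamics (prediction)
    w = np.exp(-0.5 * ((yk - g(particles)) / 0.05) ** 2)  # weight by data misfit
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]          # bootstrap resampling
print("posterior mean %.3f (true %.3f)" % (particles.mean(), true_x))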
Abstract:
Power semiconductor devices have finite turn-on and turn-off delays that may not be perfectly matched. In a leg of a voltage source converter, the simultaneous turn-on of one device and turn-off of the complementary device will cause a DC bus shoot-through if the turn-off delay is larger than the turn-on delay. To avoid this situation, it is common practice to blank the two complementary devices in a leg for a small duration of time while switching, which is called dead time. This paper proposes a logic circuit for the digital implementation required to control the complementary devices of a leg independently while preventing cross conduction of the devices in the leg and providing an accurate and stable dead time. The implementation is based on the concept of finite state machines. The circuit can also block improper PWM pulses to the semiconductor switches and filter out narrow pulses and notches below a threshold time width, since such narrow pulses do not contribute significantly to the average pole voltage but lead to increased switching loss. The proposed dead-time logic has been implemented in a CPLD, in a protection and delay card for 3-phase power converters.
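As a behavioural sketch of such a finite-state-machine dead-time generator (in Python, standing in for the paper's CPLD logic; the states, tick counts, and pulse-swallowing rule are assumptions), both gate signals are held off for DT clock ticks on every commanded edge, which also swallows pulses narrower than the dead time.

DT = 4                                        # dead time in clock ticks (assumed)

def dead_time_fsm(pwm_samples):
    state, count, out = "TOP_ON", 0, []
    for cmd in pwm_samples:                   # cmd: 1 = top requested, 0 = bottom
        if state == "TOP_ON":
            if cmd == 0: state, count = "DEAD_TO_BOT", DT
        elif state == "BOT_ON":
            if cmd == 1: state, count = "DEAD_TO_TOP", DT
        elif state == "DEAD_TO_BOT":
            if cmd == 1: state, count = "DEAD_TO_TOP", DT   # command flipped: restart blanking
            else:
                count -= 1
                if count == 0: state = "BOT_ON"
        elif state == "DEAD_TO_TOP":
            if cmd == 0: state, count = "DEAD_TO_BOT", DT
            else:
                count -= 1
                if count == 0: state = "TOP_ON"
        out.append((state == "TOP_ON", state == "BOT_ON"))  # (top gate, bottom gate)
    return out

# a 2-tick pulse (narrower than DT) never turns the top device on:
print(dead_time_fsm([0] * 6 + [1] * 2 + [0] * 6))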
Abstract:
As power systems grow in size and interconnections, their complexity increases. Rising costs due to inflation and increased environmental concerns have made transmission as well as generation systems be operated closer to design limits. Hence power system voltage stability and voltage control are emerging as major problems in the day-to-day operation of stressed power systems. For secure operation and control of power systems under normal and contingency conditions, it is essential to provide solutions in real time to the operator in the energy control center (ECC). Artificial neural networks (ANNs) are emerging as an artificial intelligence tool which gives fast, approximate, yet acceptable solutions in real time, as they mostly use parallel processing for computation. The solutions thus obtained can be used as a guide by the operator in the ECC for power system control. This paper deals with the development of an ANN architecture that provides solutions for monitoring and control of voltage stability in the day-to-day operation of power systems.
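As a hedged sketch of this kind of ANN mapping, the code below trains a small multilayer perceptron to map an operating point (synthetic per-unit bus loadings) to a toy voltage stability index, mimicking the offline-training / fast-online-evaluation pattern described. The data, index formula, and network size are illustrative assumptions, not a real power-system model.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
loads = rng.uniform(0.5, 1.5, (2000, 6))            # per-unit bus loadings (synthetic)
index = 1.0 - 0.4 * loads.mean(axis=1) - 0.2 * loads.max(axis=1)   # toy stability proxy

ann = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
ann.fit(loads[:1500], index[:1500])                 # offline training
print("test R^2 = %.3f" % ann.score(loads[1500:], index[1500:]))   # fast online evaluation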
Abstract:
Urea-based molecular constructs are shown for the first time to be nonlinear optically (NLO) active in solution. We demonstrate self-assembly triggered large amplification and specific anion recognition driven attenuation of the NLO activity. This orthogonal modulation along with an excellent nonlinearity-transparency trade-off makes them attractive NLO probes for studies related to weak self-assembly and anion transportation by second harmonic microscopy.
Abstract:
In wireless sensor networks (WSNs) the communication traffic is often time- and space-correlated, where multiple nodes in proximity start transmitting at the same time. Such a situation is known as spatially correlated contention. Random access methods to resolve such contention suffer from a high collision rate, whereas traditional distributed TDMA scheduling techniques primarily try to improve the network capacity by reducing the schedule length. Usually, the situation of spatially correlated contention persists only for a short duration, and therefore generating an optimal or sub-optimal schedule is not very useful. On the other hand, if the algorithm takes a very long time to schedule, it will not only introduce additional delay in the data transfer but also consume more energy. To efficiently handle spatially correlated contention in WSNs, we present a distributed TDMA slot scheduling algorithm, called the DTSS algorithm. The DTSS algorithm is designed with the primary objective of reducing the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The algorithm uses randomized TDMA channel access as the mechanism to transmit protocol messages, which bounds the message delay and therefore reduces the time required to get a feasible schedule. The DTSS algorithm supports unicast, multicast, and broadcast scheduling simultaneously, without any modification in the protocol. The protocol has been simulated using the Castalia simulator to evaluate its runtime performance. Simulation results show that our protocol is able to considerably reduce the time required for scheduling.
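To illustrate the schedule-length bound mentioned above, here is a hedged, centralized sketch of slot assignment on an interference graph: a greedy pass gives every node a slot unused by its interference neighbours and needs at most (maximum degree + 1) slots. The actual DTSS algorithm achieves its bound distributedly, exchanging protocol messages over randomized TDMA access, which this sketch does not model.

def greedy_slots(interference):            # dict: node -> set of interfering neighbours
    slots = {}
    # visit high-degree nodes first, give each the smallest free slot
    for node in sorted(interference, key=lambda n: -len(interference[n])):
        taken = {slots[nb] for nb in interference[node] if nb in slots}
        slots[node] = next(s for s in range(len(interference)) if s not in taken)
    return slots

g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_slots(g))                     # e.g. {2: 0, 0: 1, 1: 2, 3: 1, 4: 0}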
Abstract:
The objective of this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution, based on the existing low-spatial-resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta, where MODIS-derived inundation extent has been estimated at a 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA). This is particularly effective for understanding the inundation variability, interpolating in time, or filling in missing values. Two innovative methods are developed (linear regression and matrix inversion), both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data. The downscaled fields show the expected space-time behaviors from MODIS. A 20-yr dataset of the inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to variables other than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water and Ocean Topography (SWOT) mission to develop and test the instrument simulator, as well as to select the calibration/validation sites (with high space-time inundation variability). In addition, once SWOT observations are available, the downscaling methodology will be calibrated on them in order to downscale the GIEMS datasets and to extend the SWOT benefits back in time to 1993.
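The following is a hedged, synthetic-data sketch of the PCA-plus-regression downscaling idea described above: learn the principal components of high-resolution training maps, fit a linear map from the coarse field to the component scores, and reconstruct fine-scale maps from new coarse observations. The toy fields stand in for MODIS (fine) and GIEMS (coarse); nothing here reproduces the paper's actual calibration.

import numpy as np

rng = np.random.default_rng(4)
T, fine, block = 120, 32, 8               # time steps; fine grid side; coarsening factor
base = rng.random((3, fine * fine))       # three fixed spatial patterns
X_fine = rng.normal(0, 1, (T, 3)) @ base  # synthetic high-res "MODIS" maps
# block-average down to the coarse "GIEMS" grid
X_coarse = X_fine.reshape(T, fine // block, block, fine // block, block).mean((2, 4)).reshape(T, -1)

mean = X_fine.mean(0)
_, _, Vt = np.linalg.svd(X_fine - mean, full_matrices=False)   # PCA via SVD
pcs = Vt[:3]
scores = (X_fine - mean) @ pcs.T

Xc = X_coarse - X_coarse.mean(0)
W, *_ = np.linalg.lstsq(Xc, scores, rcond=None)  # coarse field -> PC scores
recon = Xc @ W @ pcs + mean                      # downscaled fine-scale fields
print("relative reconstruction error: %.3e"
      % (np.linalg.norm(recon - X_fine) / np.linalg.norm(X_fine)))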
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis, plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference, and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets, and library method calls, which happen to need excessive analysis time to be analyzed fully. The base analysis is hence configured to skip such a difficult construct when it is encountered by dropping all information that has been tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on information that is being tracked, without requiring full analysis of these constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second extension is based on using manually constructed backward-direction summary functions of library methods. We have implemented our approach, and applied it on a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
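As a toy of the backward propagation idea only (nothing like the paper's interprocedural analysis with virtual calls and library summaries), the sketch below walks straight-line assignments in reverse from a dereference of p, carrying the set of variables that must all be null for the dereference to be unsafe: an assignment substitutes into the condition, and a fresh allocation proves the dereference safe.

def backward_unsafe_condition(stmts, cond):
    # cond: set of variables whose nullness is necessary for the dereference to be unsafe
    for lhs, rhs in reversed(stmts):
        if lhs in cond:
            if rhs == "new":          # freshly allocated object: cannot be null,
                return None           # so the dereference is proven safe
            cond = (cond - {lhs}) | {rhs}   # substitute rhs for lhs in the formula
    return cond                       # necessary condition at program entry

prog = [("q", "param"),               # q = param
        ("p", "q")]                   # p = q, then p.f is dereferenced
print(backward_unsafe_condition(prog, {"p"}))   # {'param'}: unsafe only if param == null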
Abstract:
In WSNs the communication traffic is often time- and space-correlated, where multiple nodes in proximity start transmitting simultaneously. Such a situation is known as spatially correlated contention. The random access method to resolve such contention suffers from a high collision rate, whereas traditional distributed TDMA scheduling techniques primarily try to improve the network capacity by reducing the schedule length. Usually, the situation of spatially correlated contention persists only for a short duration, and therefore generating an optimal or suboptimal schedule is not very useful. Additionally, if an algorithm takes a very long time to schedule, it will not only introduce additional delay in the data transfer but also consume more energy. In this paper, we present a distributed TDMA slot scheduling (DTSS) algorithm, which considerably reduces the time required to perform scheduling while restricting the schedule length to the maximum degree of the interference graph. The DTSS algorithm supports unicast, multicast, and broadcast scheduling simultaneously, without any modification in the protocol. We have analyzed the protocol for average-case performance and also simulated it using the Castalia simulator to evaluate its runtime performance. Both analytical and simulation results show that our protocol is able to considerably reduce the time required for scheduling.