17 results for Time-to-collision

at Indian Institute of Science - Bangalore - India


Relevance: 100.00%

Abstract:

A new method for selecting the time-to-go (t(go)) for the Generalized Vector Explicit Guidance (GENEX) law is proposed in this paper. t(go) is known to be an important parameter in the control and cost function of the GENEX guidance law. A formulation is developed to find the optimal value of t(go) that minimizes the performance cost. Mechanizing GENEX with this optimal t(go) reduces the lateral acceleration demand and consequently increases the range of the interceptor. The new formulation for computing t(go) is in closed form and can therefore be implemented onboard. It is applied in the terminal phase of a surface-to-air interception for an angle-constrained engagement. Simulation results justify the use of the optimal t(go).

Relevance: 100.00%

Abstract:

In this paper, sliding-mode-control-based impact time guidance laws are proposed. Even for large heading angle errors and negative initial closing speeds, the desired impact time is achieved by enforcing a sliding mode on a switching surface designed using the concepts of collision course and estimated time-to-go. Unlike existing guidance laws, the proposed strategy achieves the desired impact time even when the estimated interception time is greater than the desired impact time. Simulation results are also presented.
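
The abstract hinges on an estimated time-to-go that remains meaningful under large heading angle errors. A minimal sketch of the quantities involved, assuming a proportional-navigation-style time-to-go estimate that is common in the impact-time guidance literature (the function names, the navigation gain N, and the correction formula are illustrative assumptions, not taken from the paper):

```python
import math

def estimated_time_to_go(R, V, sigma, N=3.0):
    """Time-to-go estimate on a proportional-navigation course.

    R: range to target, V: interceptor speed, sigma: heading angle
    error (rad), N: navigation gain.  The correction term accounts
    for the longer curved path flown when sigma != 0.
    """
    return (R / V) * (1.0 + sigma ** 2 / (2.0 * (2.0 * N - 1.0)))

def impact_time_error(t, t_go_hat, t_d):
    """Switching-surface quantity: desired impact time minus the
    predicted interception time (current time + estimated time-to-go)."""
    return t_d - (t + t_go_hat)
```

With zero heading error the estimate reduces to the straight-line value R/V; the guidance law would then drive `impact_time_error` to zero along the sliding surface.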

Relevance: 90.00%

Abstract:

The unsteady magnetohydrodynamic viscous flow and heat transfer of Newtonian fluids induced by an impulsively stretched plane surface in two lateral directions are studied using an analytic technique, namely the homotopy method. The analytic series solution presented here is highly accurate and uniformly valid for all time in the entire region. The effects of the stretching ratio and the magnetic field on the surface shear stresses and heat transfer are studied. The surface shear stresses in the x- and y-directions and the surface heat transfer are enhanced by increasing the stretching ratio for a fixed value of the magnetic parameter. For a fixed stretching ratio, the surface shear stresses increase with the magnetic parameter, but the heat transfer decreases. The Nusselt number takes a longer time to reach the steady state than the skin friction coefficients. There is a smooth transition from the initial unsteady state to the steady state.

Relevance: 90.00%

Abstract:

The stochastic version of Pontryagin's maximum principle is applied to determine an optimal maintenance policy for equipment subject to random deterioration. The deterioration of the equipment with age is modelled as a random process. The model is then generalized to include random catastrophic failure of the equipment. The optimal maintenance policy is derived for two special probability distributions of the time to failure, namely the exponential and Weibull distributions. Both the salvage value and the deterioration rate of the equipment are treated as state variables and the maintenance as a control variable. The result is illustrated by an example.
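
The policy is derived for exponential and Weibull time-to-failure distributions. A minimal sketch of their mean-time-to-failure formulas, for orientation only (function names are illustrative; the paper's actual state equations and control law are not reproduced here):

```python
import math

def mttf_exponential(rate):
    # Mean time to failure for T ~ Exponential(rate): E[T] = 1/rate.
    return 1.0 / rate

def mttf_weibull(scale, shape):
    # Mean time to failure for T ~ Weibull(scale, shape):
    # E[T] = scale * Gamma(1 + 1/shape).
    return scale * math.gamma(1.0 + 1.0 / shape)
```

For shape = 1 the Weibull reduces to the exponential case, so the two formulas agree there; shape > 1 models wear-out (increasing failure rate), which is the typical regime for deteriorating equipment.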

Relevance: 90.00%

Abstract:

An analysis is performed to study the unsteady laminar incompressible boundary-layer flow of an electrically conducting fluid in a cone due to a point sink with an applied magnetic field. The unsteadiness in the flow is considered for two types of motion, viz. the motion arising due to the free stream velocity varying continuously with time and the transient motion occurring due to an impulsive change either in the strength of the point sink or in the wall temperature. The partial differential equations governing the flow have been solved numerically using an implicit finite-difference scheme in combination with the quasilinearization technique. The magnetic field increases the skin friction but reduces heat transfer. The heat transfer and temperature field are strongly influenced by the viscous dissipation and Prandtl number. The velocity field is more affected at the early stage of the transient motion, caused by an impulsive change in the strength of the point sink, as compared to the temperature field. When the transient motion is caused by a sudden change in the wall temperature, both skin friction and heat transfer take more time to reach a new steady state. The transient nature of the flow and heat transfer is active for a short time in the case of suction and for a long time in the case of injection. The viscous dissipation prolongs the transient behavior of the flow.

Relevance: 90.00%

Abstract:

We consider a discrete-time queue with finite capacity and i.i.d. and Markov-modulated arrivals. Efficient algorithms are developed to calculate the moments and distributions of the first time to overflow and of the regeneration length. The results are extended to the multiserver queue. Some illustrative numerical examples are provided.
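
To illustrate the kind of quantity being computed, here is a brute-force sketch that propagates the queue-length distribution of a finite-capacity discrete-time queue with i.i.d. arrivals and at most one departure per slot, treating overflow as an absorbing event. The service discipline and function names are assumptions for illustration; the paper's algorithms are more efficient and also cover Markov-modulated arrivals:

```python
def first_overflow_pmf(K, arrival_pmf, horizon, q0=0):
    """P(first overflow at slot n), n = 1..horizon, for a discrete-time
    queue with capacity K, i.i.d. arrivals with pmf {count: prob}, and
    one departure per slot if any customer is present.  Assumption:
    arrivals are added first, then one departure is taken; overflow
    occurs when the resulting queue length would exceed K."""
    dist = {q0: 1.0}            # distribution over non-overflowed states
    pmf = []
    for _ in range(horizon):
        nxt, p_over = {}, 0.0
        for q, pq in dist.items():
            for a, pa in arrival_pmf.items():
                nq = q + a - (1 if q + a > 0 else 0)
                if nq > K:
                    p_over += pq * pa        # absorbed: first overflow now
                else:
                    nxt[nq] = nxt.get(nq, 0.0) + pq * pa
        pmf.append(p_over)
        dist = nxt
    return pmf
```

The returned values are the per-slot first-passage probabilities; their running sum is the distribution function of the first time to overflow, from which moments can be estimated.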

Relevance: 90.00%

Abstract:

We recast the reconstruction problem of diffuse optical tomography (DOT) in a pseudo-dynamical framework and develop a method to recover the optical parameters using particle filters, i.e., stochastic filters based on Monte Carlo simulations. In particular, we have implemented two such filters, viz., the bootstrap (BS) filter and the Gaussian-sum (GS) filter and employed them to recover optical absorption coefficient distribution from both numerically simulated and experimentally generated photon fluence data. Using either indicator functions or compactly supported continuous kernels to represent the unknown property distribution within the inhomogeneous inclusions, we have drastically reduced the number of parameters to be recovered and thus brought the overall computation time to within reasonable limits. Even though the GS filter outperformed the BS filter in terms of accuracy of reconstruction, both gave fairly accurate recovery of the height, radius, and location of the inclusions. Since the present filtering algorithms do not use derivatives, we could demonstrate accurate contrast recovery even in the middle of the object where the usual deterministic algorithms perform poorly owing to the poor sensitivity of measurement of the parameters. Consistent with the fact that the DOT recovery, being ill posed, admits multiple solutions, both the filters gave solutions that were verified to be admissible by the closeness of the data computed through them to the data used in the filtering step (either numerically simulated or experimentally generated). (C) 2011 Optical Society of America
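
A generic bootstrap (BS) particle filter, the simpler of the two filters mentioned, can be sketched on a scalar toy problem. The model, parameters, and names below are illustrative assumptions, not the DOT forward model or pseudo-dynamical system used in the paper:

```python
import math
import random

def bootstrap_filter(measurements, n_particles=500, proc_sd=0.1,
                     meas_sd=0.5, seed=0):
    """Bootstrap particle filter for a scalar random-walk state
    x_k = x_{k-1} + w_k observed through y_k = x_k + v_k, with
    Gaussian process and measurement noise.  Returns the
    posterior-mean estimate after each measurement."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in measurements:
        # Predict: propagate each particle through the process model.
        particles = [x + rng.gauss(0.0, proc_sd) for x in particles]
        # Update: weight each particle by the measurement likelihood.
        w = [math.exp(-0.5 * ((y - x) / meas_sd) ** 2) for x in particles]
        s = sum(w)
        w = [wi / s for wi in w]
        estimates.append(sum(wi * x for wi, x in zip(w, particles)))
        # Resample: draw a new equally weighted particle set.
        particles = rng.choices(particles, weights=w, k=n_particles)
    return estimates
```

Note that, as the abstract emphasizes, no derivatives of the forward map appear anywhere in the loop; only forward evaluations are needed to weight the particles.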

Relevance: 90.00%

Abstract:

Power semiconductor devices have finite turn-on and turn-off delays that may not be perfectly matched. In a leg of a voltage source converter, the simultaneous turn-on of one device and turn-off of the complementary device will cause a DC bus shoot-through if the turn-off delay is larger than the turn-on delay. To avoid this, it is common practice to blank both complementary devices in a leg for a small duration while switching, called the dead time. This paper proposes a logic circuit, suitable for digital implementation, that controls the complementary devices of a leg independently while preventing cross conduction in the leg and providing an accurate and stable dead time. The implementation is based on the concept of finite state machines. The circuit can also block improper PWM pulses to the semiconductor switches and filter out narrow pulses and notches below a threshold time width, since such narrow pulses contribute little to the average pole voltage but increase switching loss. The proposed dead-time logic has been implemented in a CPLD as part of a protection and delay card for three-phase power converters.
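
The dead-time insertion described above can be sketched as a small finite state machine. The states, tick-based timing, and the simplification that new commands are ignored during the dead interval are illustrative choices, not the paper's CPLD design:

```python
def dead_time_fsm(cmd, dead_ticks=2):
    """Insert dead time between the complementary gate signals of one
    converter leg.  cmd: sequence of desired states per clock tick
    (True = top device on).  Returns a list of (top, bot) gate outputs.
    During the dead interval (dead_ticks ticks) both devices are held
    off, so they can never conduct simultaneously."""
    state, timer, target = 'BOT_ON', 0, 'BOT_ON'
    out = []
    for want_top in cmd:
        if state == 'TOP_ON' and not want_top:
            state, timer, target = 'DEAD', dead_ticks, 'BOT_ON'
        elif state == 'BOT_ON' and want_top:
            state, timer, target = 'DEAD', dead_ticks, 'TOP_ON'
        elif state == 'DEAD':
            # Command changes are ignored until the dead time elapses.
            timer -= 1
            if timer <= 0:
                state = target
        out.append((state == 'TOP_ON', state == 'BOT_ON'))
    return out
```

A narrow-pulse filter of the kind the abstract mentions would sit in front of this machine, suppressing command changes shorter than a threshold number of ticks so they never trigger a dead-time cycle.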

Relevance: 90.00%

Abstract:

As power systems grow in size and interconnection, their complexity increases. Rising costs due to inflation and increased environmental concerns have forced transmission as well as generation systems to be operated closer to their design limits. Hence, voltage stability and voltage control are emerging as major problems in the day-to-day operation of stressed power systems. For secure operation and control of power systems under normal and contingency conditions, it is essential to provide solutions in real time to the operator in the energy control center (ECC). Artificial neural networks (ANNs) are emerging as an artificial-intelligence tool that gives fast, approximate, yet acceptable solutions in real time, as they mostly rely on parallel processing for computation. The solutions thus obtained can be used as a guide by the ECC operator for power system control. This paper deals with the development of an ANN architecture that provides solutions for monitoring and control of voltage stability in the day-to-day operation of power systems.

Relevance: 90.00%

Abstract:

Urea-based molecular constructs are shown for the first time to be nonlinear optically (NLO) active in solution. We demonstrate self-assembly triggered large amplification and specific anion recognition driven attenuation of the NLO activity. This orthogonal modulation along with an excellent nonlinearity-transparency trade-off makes them attractive NLO probes for studies related to weak self-assembly and anion transportation by second harmonic microscopy.

Relevance: 90.00%

Abstract:

In wireless sensor networks (WSNs), communication traffic is often time- and space-correlated, with multiple nodes in close proximity starting to transmit at the same time. Such a situation is known as spatially correlated contention. Random access methods to resolve such contention suffer from a high collision rate, whereas traditional distributed TDMA scheduling techniques primarily try to improve network capacity by reducing the schedule length. Usually, spatially correlated contention persists only for a short duration, so generating an optimal or near-optimal schedule is not very useful. On the other hand, if the algorithm takes a very long time to schedule, it not only introduces additional delay in the data transfer but also consumes more energy. To efficiently handle spatially correlated contention in WSNs, we present a distributed TDMA slot scheduling algorithm, called the DTSS algorithm. DTSS is designed with the primary objective of reducing the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The algorithm uses randomized TDMA channel access as the mechanism to transmit protocol messages, which bounds the message delay and therefore reduces the time required to obtain a feasible schedule. DTSS supports unicast, multicast, and broadcast scheduling simultaneously, without any modification of the protocol. The protocol has been simulated using the Castalia simulator to evaluate its run-time performance. Simulation results show that it considerably reduces the time required to schedule.
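
The schedule-length bound stated above is reminiscent of greedy graph coloring, which needs at most (max degree + 1) colors on any graph. A centralized sketch of that slot-assignment constraint follows; DTSS itself is distributed and message-driven, so this is only an illustration of the invariant, not the algorithm:

```python
def tdma_slots(interference):
    """Greedy TDMA slot assignment on an interference graph given as
    {node: set_of_neighbours}.  Each node takes the smallest slot not
    used by an already-scheduled interfering neighbour, so at most
    (max degree + 1) slots are ever needed and no two interfering
    nodes share a slot."""
    slot = {}
    for node in sorted(interference):      # deterministic visiting order
        used = {slot[n] for n in interference[node] if n in slot}
        s = 0
        while s in used:
            s += 1
        slot[node] = s
    return slot
```

In the distributed setting, each node would compute its own slot after hearing its neighbours' choices over the randomized TDMA control channel, which is where the algorithm's speed advantage comes from.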

Relevance: 90.00%

Abstract:

The objective of this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution, based on the existing low-spatial-resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta, where MODIS-derived inundation extent has been estimated at 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA), which is particularly effective for understanding the inundation variability, interpolating in time, and filling in missing values. Two innovative methods are developed (linear regression and matrix inversion), both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data, and the downscaled fields show the expected space-time behavior. A 20-yr dataset of inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to variables other than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water and Ocean Topography (SWOT) mission to develop and test the instrument simulator and to select calibration/validation sites with high space-time inundation variability. In addition, once SWOT observations are available, the downscaling methodology will be calibrated on them in order to downscale the GIEMS dataset and extend the SWOT benefits back in time to 1993.
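
Of the two methods, the linear-regression one can be caricatured as an independent per-pixel regression of the high-resolution inundation fraction on the coarse aggregate. The data layout and function names below are illustrative assumptions, not the paper's PCA-based GIEMS/MODIS processing chain:

```python
def fit_downscaling(lowres, highres):
    """Per-pixel linear regression of high-resolution inundation on
    the coarse value: h_i(t) ~ a_i * L(t) + b_i.
    lowres: list of coarse values L(t); highres: list of high-res
    fields (one list of pixel values per time step).
    Returns per-pixel slopes a and intercepts b."""
    n = len(lowres)
    mL = sum(lowres) / n
    varL = sum((L - mL) ** 2 for L in lowres)
    a, b = [], []
    for i in range(len(highres[0])):
        hi = [field[i] for field in highres]
        mh = sum(hi) / n
        cov = sum((L - mL) * (h - mh) for L, h in zip(lowres, hi))
        ai = cov / varL
        a.append(ai)
        b.append(mh - ai * mL)
    return a, b

def downscale(L, a, b):
    # Predict the high-resolution field for a new coarse value L.
    return [ai * L + bi for ai, bi in zip(a, b)]
```

In the paper the regression operates on the leading PCA components of the space-time field rather than on raw pixels, which is what makes the method robust to noise and missing values.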

Relevance: 90.00%

Abstract:

The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis, plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference, and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain ``difficult'' constructs in real programs, e.g., virtual calls with too many candidate targets, and library method calls, which happen to need excessive analysis time to be analyzed fully. The base analysis is hence configured to skip such a difficult construct when it is encountered by dropping all information that has been tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on information that is being tracked, without requiring full analysis of these constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second extension is based on using manually constructed backward-direction summary functions of library methods. We have implemented our approach, and applied it on a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
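
The base analysis can be caricatured as a backward propagation of a "which variable must be null at entry" condition, with tracked information dropped at difficult constructs. The statement encoding and semantics below are heavily simplified illustrations, not the paper's formula language:

```python
def unsafe_condition(stmts, deref_var):
    """Backward dataflow sketch over a straight-line program.
    Starting from a dereference of deref_var at the end, propagate the
    set of variables whose nullness at entry makes the dereference
    unsafe.  Returns None if the dereference is provably safe, or a
    (possibly empty) set; an empty set means all tracked information
    was dropped at a 'hard' construct, i.e. conservatively unsafe."""
    may_be = {deref_var}
    for op in reversed(stmts):
        kind = op[0]
        if kind == 'new' and op[1] in may_be:
            return None            # freshly allocated: provably non-null
        elif kind == 'assign':     # ('assign', x, y): x null iff y null
            dst, src = op[1], op[2]
            if dst in may_be:
                may_be = (may_be - {dst}) | {src}
        elif kind == 'hard':       # e.g. an unanalysed virtual/library call
            return set()           # drop everything we track
    return may_be
```

The paper's two extensions correspond to handling the `'hard'` case less pessimistically: transmitting formulas along def-use edges and substituting hand-written backward summaries for library calls, instead of discarding the tracked condition outright.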