Abstract:
In the Himalayas, a large area is covered by glaciers and seasonal snow, and changes in their extent can influence the availability of water in the Himalayan rivers. In this paper, changes in glacial extent, glacial mass balance and seasonal snow cover are discussed. Field- and satellite-based investigations suggest that most Himalayan glaciers are retreating, though the rate of retreat varies from glacier to glacier, ranging from a few meters to almost 50 meters per year, depending upon numerous glacial, terrain and meteorological parameters. Retreat was estimated for 1868 glaciers in eleven basins distributed across the Indian Himalaya from 1962 to 2001/02. Estimates show an overall reduction in glacier area from 6332 to 5329 sq km, an overall deglaciation of 16 per cent. The snow line at the end of the ablation season on the Chhota Shigri glacier suggests a change in altitude from 4900 to 5200 m from the late 1970s to the present. Seasonal snow cover monitoring of the Himalaya has shown large amounts of snow cover depletion in the early part of winter, i.e. from October to December. For many basins located at lower altitude and south of the Pir Panjal range, snow ablation was observed throughout the winter season. In addition, the average stream runoff of the Baspa basin during the month of December shows an increase of 75 per cent. This combination of glacial retreat, negative mass balance, early melting of seasonal snow cover and wintertime increase in stream runoff suggests an influence of climate change on the Himalayan cryosphere.
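The 16 per cent deglaciation figure follows directly from the two area estimates quoted in the abstract; a quick check of the arithmetic:

```python
# Check of the deglaciation figure quoted in the abstract: glacier area
# shrinking from 6332 to 5329 sq km between 1962 and 2001/02.
area_1962 = 6332.0   # sq km, from the abstract
area_2002 = 5329.0   # sq km, from the abstract

loss_fraction = (area_1962 - area_2002) / area_1962
print(f"Overall deglaciation: {loss_fraction:.1%}")  # 15.8%, i.e. about 16 per cent
```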
Abstract:
Ergonomic design of products demands accurate human dimensions, i.e., anthropometric data. Manual measurement of live subjects has several limitations: it is time-consuming, requires the presence of subjects for every new measurement, involves physical contact, etc. Hence the data currently available are limited, and anthropometric data related to facial features are particularly difficult to obtain. In this paper, we discuss a methodology to automatically detect facial features and landmarks from scanned human head models. Segmentation of the face into meaningful patches corresponding to facial features is achieved by watershed algorithms and mathematical morphology tools. Many important physiognomic landmarks are then identified heuristically.
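A minimal sketch of the watershed idea, not the authors' pipeline: treating a (here entirely synthetic) curvature map of a face patch as a height field, seed markers placed in the basins flood outward until they meet on ridges, partitioning the surface into feature patches. The data and marker placement below are invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Toy "curvature map" of a face-scan patch: low values in feature basins
# (say, the two eye regions), high values on the ridges between them.
h = np.full((7, 13), 200, dtype=np.uint8)
h[2:5, 2:5] = 50     # basin 1
h[2:5, 8:11] = 60    # basin 2

# Seed one marker inside each basin; 0 means "to be flooded".
markers = np.zeros(h.shape, dtype=np.int16)
markers[3, 3] = 1
markers[3, 9] = 2

# Flood from the markers; every pixel ends up labelled with the basin
# whose flood reached it first.
labels = ndimage.watershed_ift(h, markers)
```

Real head scans are triangle meshes rather than images, so the watershed there runs on mesh curvature, but the flooding principle is the same.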
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost, in terms of the amount of computation and the amount of storage utilized in getting the outputs, is. The absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means that we have at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error-bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created by nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to any or all of the three foregoing factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and do get results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
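The distinction between relative error and the 0.005 per cent instrument floor can be made concrete with a few lines of code; all numbers here are illustrative, not from the talk:

```python
# Relative error |x - x~| / |x|, the quantity the abstract argues is the
# meaningful measure of output quality (as opposed to absolute error).
def relative_error(true_value, computed_value):
    return abs(true_value - computed_value) / abs(true_value)

# The abstract's hypothesis: no measuring instrument does better than a
# 0.005 per cent relative error bound.
INSTRUMENT_FLOOR = 0.005 / 100  # 0.005 per cent, as a fraction

measured = 1000.0                             # some instrument reading
worst_case = measured * (1 + INSTRUMENT_FLOOR)  # reading at the bound

print(relative_error(measured, worst_case))   # about 5e-05, the stated bound
```

Note that the same absolute error of 0.05 would be negligible against a true value of 10^6 but catastrophic against a true value of 0.1, which is why the abstract insists on the relative measure.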
Abstract:
We address the problem of estimating the instantaneous frequency (IF) of a real-valued, constant-amplitude, time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation with a low-order polynomial over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimal solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum-MSE IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD) based IF estimators at different signal-to-noise ratios (SNRs).
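The core observation behind zero-crossing IF estimation can be sketched in a few lines: consecutive zero-crossings of a sinusoid are half a period apart, so their spacing gives a local frequency estimate. This is only the constant-frequency special case (the paper's MMSE algorithm with adaptive windows is much more involved); the sampling rate and tone below are arbitrary:

```python
import numpy as np

fs = 10_000.0                       # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
f0 = 50.0                           # constant 50 Hz test tone
x = np.sin(2 * np.pi * f0 * t)

# Locate sign changes, then linearly interpolate the crossing instants.
idx = np.where(np.diff(np.signbit(x)))[0]
tc = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

# Consecutive crossings are half a period apart: f = 1 / (2 * spacing).
if_est = 1.0 / (2.0 * np.diff(tc))  # one IF estimate per half-period
print(if_est.mean())                # close to 50 Hz
```

For a time-varying IF, each spacing still yields a local estimate, and the paper's contribution is choosing how many such estimates to pool (the window length) without knowing the IF's derivatives in advance.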
Abstract:
In this paper we develop methods to compute maps from differential equations. We take two examples: the harmonic oscillator and Duffing's equation. First we convert these equations to a canonical form; this is slightly nontrivial for Duffing's equation. Then we show a method to extend these differential equations. In the second case, symbolic algebra needs to be used. Once the extensions are accomplished, various maps are generated. Poincaré sections are seen as a special case of such generated maps. Other applications are also discussed.
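A standard way to turn a flow into a map, of which the Poincaré section is the special case mentioned above: integrate the forced Duffing equation numerically and sample the state once per forcing period. This stroboscopic sketch is not the paper's symbolic-algebra construction, and the parameter values are illustrative only:

```python
import numpy as np

# Forced Duffing equation x'' + d*x' + a*x + b*x**3 = g*cos(w*t),
# written as a first-order system s = (x, v).
d, a, b, g, w = 0.2, -1.0, 1.0, 0.3, 1.0

def f(t, s):
    x, v = s
    return np.array([v, -d * v - a * x - b * x**3 + g * np.cos(w * t)])

def rk4_step(t, s, h):
    k1 = f(t, s)
    k2 = f(t + h / 2, s + h / 2 * k1)
    k3 = f(t + h / 2, s + h / 2 * k2)
    k4 = f(t + h, s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

T = 2 * np.pi / w           # forcing period = stroboscopic sampling interval
n_sub = 200                 # RK4 steps per period
s = np.array([0.5, 0.0])    # initial condition (x, x')
t = 0.0
section = []
for _ in range(50):         # 50 iterations of the map
    for _ in range(n_sub):
        s = rk4_step(t, s, T / n_sub)
        t += T / n_sub
    section.append(s.copy())
section = np.array(section)  # each row is one point of the Poincare section
```

Iterating "integrate one period, record the state" defines a two-dimensional map whose orbit structure (fixed points, periodic orbits, chaotic clouds) mirrors that of the underlying flow.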
Abstract:
Many web sites incorporate dynamic web pages to deliver customized contents to their users. However, dynamic pages result in increased user response times due to their construction overheads. In this paper, we consider mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, we present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster during normal operating situations. A feedback mechanism is used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of our technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved as compared to a pure fragment caching approach.
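The interaction between fragment caching, page pre-generation, and load feedback can be sketched as a toy server loop. All names and the load-threshold rule below are invented for illustration; the paper's actual feedback mechanism tunes pre-generation against measured system load:

```python
fragment_cache = {}   # fragment name -> rendered HTML fragment
page_cache = {}       # page name -> fully pre-generated page

def render_fragment(name):
    # Stand-in for expensive dynamic-content generation; cached per fragment.
    return fragment_cache.setdefault(name, f"<div>{name}</div>")

def build_page(fragments):
    # Assemble a dynamic page from its (possibly cached) fragments.
    return "".join(render_fragment(f) for f in fragments)

def maybe_pregenerate(page, fragments, load):
    # Feedback: only spend excess capacity on pre-generation when the
    # current load is below a (hypothetical) threshold.
    if load < 0.5 and page not in page_cache:
        page_cache[page] = build_page(fragments)

def serve(page, fragments, load):
    if page in page_cache:            # fast path: page already pre-generated
        return page_cache[page]
    html = build_page(fragments)      # slow path: assemble from fragments now
    maybe_pregenerate(page, fragments, load)
    return html
```

Under this scheme a lightly loaded server converts fragment-level hits into whole-page hits, which is where the reported speedups over pure fragment caching come from.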
Abstract:
In this paper we approach the problem of computing the characteristic polynomial of a matrix from the combinatorial viewpoint. We present several combinatorial characterizations of the coefficients of the characteristic polynomial, in terms of walks and closed walks of different kinds in the underlying graph. We develop algorithms based on these characterizations, and show that they tally with well-known algorithms arrived at independently from considerations in linear algebra.
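One concrete instance of the walk-based viewpoint (a sketch, not necessarily one of the paper's algorithms): tr(A^k) counts closed walks of length k in the graph of A, and Newton's identities convert these walk counts into the coefficients of the characteristic polynomial det(xI - A) = x^n + c_1 x^{n-1} + ... + c_n:

```python
import numpy as np

def charpoly_from_walks(A):
    # p[k-1] = tr(A^k) = number of (weighted) closed walks of length k.
    n = A.shape[0]
    p = [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, n + 1)]
    # Newton's identities: k*c_k = -(p_k + c_1*p_{k-1} + ... + c_{k-1}*p_1).
    c = [1.0]  # c_0 = 1
    for k in range(1, n + 1):
        c.append(-sum(c[k - i] * p[i - 1] for i in range(1, k + 1)) / k)
    return c

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # adjacency matrix of a single edge
print(charpoly_from_walks(A))           # coefficients of x^2 - 1
```

This route costs more matrix multiplications than direct linear-algebra methods such as Faddeev-LeVerrier, but it makes the combinatorial meaning of each coefficient explicit.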