987 results for Yosida Approximate
Abstract:
The Hanuman langur is one of the most widely distributed and morphologically variable non-human primates in South Asia. Even though it has been extensively studied, the taxonomic status of this species remains unresolved due to incongruence between various classification schemes. This incongruence, we believe, is largely due to the use of plastic morphological characters such as coat color in classification. Additionally, these classification schemes were largely based on reanalysis of the same set of museum specimens. To bring greater resolution to Hanuman langur taxonomy, we undertook a field survey to study variation in external morphological characters among Hanuman langurs. The primary objective of this study is to ascertain the number of morphologically recognizable units (morphotypes) of the Hanuman langur in peninsular India and to compare our field observations with published classification schemes. We typed five color-independent characters for multiple adults from various populations in South India. We used the presence-absence matrix of these characters to derive the pairwise distances between individuals and used these to construct a neighbor-joining (NJ) tree. The resulting NJ tree retrieved six distinct clusters, which we assigned to different morphotypes. These morphotypes can be identified in the field by using a combination of five diagnostic characters. We determined the approximate distributions of these morphotypes by plotting the sampling locations of each morphotype on a map using GIS software. Our field observations are largely concordant with some of the earliest classification schemes, but are incongruent with recent classification schemes. Based on these results we recommend the classification schemes of Hill (Ceylon Journal of Science, Colombo 21:277-305, 1939) and Pocock (Primates and Carnivora (in part) (pp. 97-163). London: Taylor and Francis, 1939) for future studies on Hanuman langurs.
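The character-matrix-to-tree pipeline described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the character data are made up, and average-linkage clustering from SciPy stands in for the neighbor-joining step (which would need a dedicated NJ implementation, e.g. from a phylogenetics library).

```python
# Minimal sketch: presence-absence matrix -> pairwise distances -> tree.
# Data are hypothetical; average-linkage clustering is an illustrative
# stand-in for the neighbor-joining (NJ) construction used in the study.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

# Rows: individuals; columns: five color-independent binary characters
# (1 = character present, 0 = absent).
characters = np.array([
    [1, 0, 1, 1, 0],  # individual A
    [1, 0, 1, 0, 0],  # individual B
    [0, 1, 0, 1, 1],  # individual C
    [0, 1, 0, 1, 0],  # individual D
])

# Pairwise distance = fraction of characters on which two individuals differ.
dist = pdist(characters, metric="hamming")
print(squareform(dist))

# The study builds an NJ tree from this distance matrix; hierarchical
# clustering is used here only to show the shape of the computation.
tree = linkage(dist, method="average")
```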
Abstract:
An experimental investigation on reverse transition from turbulent to laminar flow in a two-dimensional channel was carried out. The reverse transition occurred when the Reynolds number of an initially turbulent flow was reduced below a certain value by widening the duct in the lateral direction. The experiments were conducted at Reynolds numbers of 625, 865, 980 and 1250 based on half the height of the channel and the average of the mean velocity. At all these Reynolds numbers the initially turbulent mean velocity profiles tend to become parabolic. The longitudinal and vertical velocity fluctuations ($\overline{u^{\prime 2}}$ and $\overline{v^{\prime 2}}$) averaged over the height of the channel decrease exponentially with distance downstream, but $\overline{u^{\prime}v^{\prime}}$ tends to become zero at a reasonably well-defined point. During reverse transition the correlation coefficient $\overline{u^{\prime}v^{\prime}}/\sqrt{\overline{u^{\prime 2}}}\sqrt{\overline{v^{\prime 2}}}$ also decreases as the flow moves downstream, and Lissajous figures taken with the $u^{\prime}$ and $v^{\prime}$ signals confirm this trend. There is approximate similarity between the $\overline{u^{\prime 2}}$ profiles if the value of $\overline{u^{\prime 2}_{\max}}$ and the distance from the wall at which it occurs are taken as the reference scales. The spectrum of $\overline{u^{\prime 2}}$ is almost similar at all stations and the non-dimensional spectrum is exponential in wave-number. All the turbulent quantities, when plotted in appropriate co-ordinates, indicate that there is a definite critical Reynolds number of 1400±50 for reverse transition.
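For readers unfamiliar with the notation, the decaying statistic is the standard correlation coefficient between the two fluctuation signals. A minimal numpy sketch of how these quantities are computed from sampled records (the signals below are synthetic placeholders):

```python
# Turbulence statistics from two simultaneously sampled velocity records.
# u and v here are synthetic placeholders for measured signals.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=10_000)              # placeholder u-velocity record
v = 0.3 * u + rng.normal(size=10_000)    # placeholder v-velocity record

up = u - u.mean()                        # fluctuation u'
vp = v - v.mean()                        # fluctuation v'

uu = np.mean(up ** 2)                    # \overline{u'^2}
vv = np.mean(vp ** 2)                    # \overline{v'^2}
uv = np.mean(up * vp)                    # \overline{u'v'}

# Correlation coefficient observed to decay during reverse transition.
corr = uv / np.sqrt(uu * vv)
print(uu, vv, uv, corr)
```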
Abstract:
Quartz fibre anemometers have been used (as described in subsequent papers) to survey the velocity field of turbulent free convective air flows. This paper discusses the reasons for the choice of this instrument and provides the background information for its use in this way. Some practical points concerning fibre anemometers are mentioned. The rest of the paper is a theoretical study of the response of a fibre to a turbulent flow. An approximate representation of the force on the fibre due to the velocity field and the equation for a bending beam, representing the response to this force, form the basis of a consideration of the mean and fluctuating displacement of the fibre. Emphasis is placed on the behaviour when the spectrum of the turbulence lies largely in frequencies low enough for the fibre to respond effectively instantaneously (as this corresponds to the practical situation). Incomplete correlation of the turbulence along the length of the fibre is taken into account. Brief mention is made of the theory of the higher-frequency (resonant) response in the context of an experimental check on the applicability of the low-frequency theory.
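For reference, the bending-beam response mentioned here is conventionally modelled by the Euler-Bernoulli equation. The exact form used in the paper may differ, but the standard equation for a fibre of density ρ, cross-sectional area A, Young's modulus E and second moment of area I under a distributed fluid load f(x, t) is:

```latex
% Standard Euler-Bernoulli beam equation (assumed form; y is the lateral
% displacement of the fibre and f(x,t) the fluid force per unit length):
\rho A \,\frac{\partial^2 y}{\partial t^2}
  + E I \,\frac{\partial^4 y}{\partial x^4} = f(x, t)
```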
Abstract:
An investigation has been made of the structure of the motion above a heated plate inclined at a small angle (about 10°) to the horizontal. The turbulence is considered in terms of its similarities to and differences from the motion above an exactly horizontal surface. One effect of inclination is, of course, that there is also a mean motion. Accurate data on the mean temperature field and the intensity of the temperature fluctuations have been obtained with platinum resistance thermometers, the signals being processed electronically. More approximate information on the velocity field has been obtained with quartz fibre anemometers. These results have been supplemented qualitatively by simultaneous observations of the temperature and velocity fluctuations and also by smoke experiments. The principal features of the flow inferred from these observations are as follows. The heat transfer and the mean temperature field are not much altered by the inclination, though small, not very systematic, variations may result from the complexities of the velocity field. This supports the view that the mean temperature field is largely governed by the large-scale motions. The temperature fluctuations show a systematic variation with distance from the lower edge and resemble those above a horizontal plate when this distance is large. The large-scale motions of the turbulence start close to the lower edge, but the smaller eddies do not attain full intensity until the air has moved some distance up the plate. The mean velocity receives a sizable contribution from a 'through-flow' between the side-walls. Superimposed on this are developments that show that the momentum transfer processes are complex and certainly not capable of representation by any simple theory such as an eddy viscosity. On the lower part of the plate there is a surprisingly large acceleration, but further up the mixing action of the small eddies has a decelerating effect.
Abstract:
A continuum model based on the critical-state theory of soil mechanics is used to generate stress, density, and velocity profiles, and to compute discharge rates for the flow of granular material in a mass flow bunker. The bin-hopper transition region is idealized as a shock across which all the variables change discontinuously. Comparison with the work of Michalowski (1987) shows that his experimentally determined rupture layer lies between his prediction and that of the present theory. However, it resembles the former more closely. The conventional condition involving a traction-free surface at the hopper exit is abandoned in favour of an exit shock below which the material falls vertically with zero frictional stress. The basic equations, which are not classifiable under any of the standard types, require excessive computational time. This problem is alleviated by the introduction of the Mohr-Coulomb approximation (MCA). The stress, density, and velocity profiles obtained by integration of the MCA converge to asymptotic fields on moving down the hopper. Expressions for these fields are derived by a perturbation method. Computational difficulties are encountered for bunkers with wall angles θw ≥ 15°; these are overcome by altering the initial conditions. Predicted discharge rates lie significantly below the measured values of Nguyen et al. (1980), ranging from 38% at θw = 15° to 59% at θw = 32°. The poor prediction appears to be largely due to the exit condition used here. Paradoxically, incompressible discharge rates lie closer to the measured values. An approximate semi-analytical expression for the discharge rate is obtained, which predicts values within 9% of the exact (numerical) ones in the compressible case, and 11% in the incompressible case. The approximate analysis also suggests that inclusion of density variation decreases the discharge rate. This is borne out by the exact (numerical) results: for the parameter values investigated, the compressible discharge rate is about 10% lower than the incompressible value. A preliminary comparison of the predicted density profiles with the measurements of Fickie et al. (1989) shows that the material within the hopper dilates more strongly than predicted. Surprisingly, just below the exit slot, there is good agreement between theory and experiment.
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extension of the proposed mechanisms to situations where the valuation functions are not known to the central planner is also discussed. Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We however demonstrate via simulation that, if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
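The constraint-sampling step lends itself to a compact sketch. The half-plane family below is a made-up stand-in for the constraints induced by reported type vectors; only the overall shape (sample types, build one linear constraint per sample, solve the resulting LP) follows the abstract.

```python
# Sketch of constraint sampling: relax a continuum of half-plane constraints
# a(theta)^T x <= b(theta) to finitely many sampled ones, then solve an LP.
# The maps a(.), b(.) and the objective are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 5             # number of linear rebate coefficients x
n_samples = 500   # number of sampled type vectors

def constraint(theta):
    """Hypothetical half-plane induced by a reported type vector theta."""
    a = np.cos(theta)                # coefficient vector a(theta)
    b = 1.0 + theta.sum() ** 2       # right-hand side b(theta)
    return a, b

thetas = rng.uniform(0.0, 1.0, size=(n_samples, n))
A_ub = np.empty((n_samples, n))
b_ub = np.empty(n_samples)
for i, th in enumerate(thetas):
    A_ub[i], b_ub[i] = constraint(th)

c = np.ones(n)    # placeholder linear objective (e.g. expected surplus)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-10.0, 10.0)] * n)
print(res.x)
```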
Abstract:
The role of B2O3 addition in the long phosphorescence of SrAl2O4:Eu2+, Dy3+ has been investigated. According to SEM results, B2O3 is not just an inert high-temperature solvent (flux) that accelerates grain growth. B2O3 has a substitutional effect, even at low concentrations, by way of incorporation of BO4 in the corner-shared AlO4 framework of the distorted 'stuffed' tridymite structure of SrAl2O4, which is discernible from the IR and solid-state MAS NMR spectral data. With increasing concentrations, B2O3 reacts with SrAl2O4 to form Sr4Al14O25 together with Sr-borate (SrB2O4) as the glassy phase, as evidenced by XRD and SEM studies. At high B2O3 contents, Sr4Al14O25 converts to SrAl2B2O7 (cubic and hexagonal), SrAl12O19 and Sr-borate (SrB4O7) glass. Sr4Al14O25:Eu2+, Dy3+ has also been independently synthesized to realize the blue-emitting (λem ≈ 490 nm) phosphor. Afterglow decay as well as thermoluminescence studies reveal that Sr4Al14O25:Eu, Dy exhibits phosphorescence as long as that of SrAl2O4:Eu2+, Dy3+. In both cases, long phosphorescence is noticed only when BO4 is present along with Dy3+ and Eu2+. Here Dy3+, because of its higher charge density than Eu2+, prefers to occupy the Sr sites in the neighbourhood of BO4, as the effective charge on borate is more negative than that of AlO4. Thus, Dy3+ forms a substitutional defect complex with borate and acts as an acceptor-type defect center. These defects trap the hole generated by the excitation of Eu2+ ions, and the subsequent thermal release of the hole at room temperature, followed by recombination with an electron, results in the long persistent phosphorescence.
Abstract:
This paper presents a new application of two-dimensional Principal Component Analysis (2DPCA) to the problem of online character recognition in the Tamil script. A novel set of features employing polynomial fits and quartiles, in combination with conventional features, is derived for each sample point of the Tamil character obtained after smoothing and resampling. These are stacked to form a matrix, from which a covariance matrix is constructed. A subset of the eigenvectors of the covariance matrix is employed to obtain the features in the reduced subspace. Each character is modeled as a separate subspace and a modified form of the Mahalanobis distance is derived to classify a given test character. Results indicate that the recognition accuracy using the 2DPCA scheme shows an approximate 3% improvement over the conventional PCA technique.
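A minimal numpy sketch of the 2DPCA step described above, with made-up matrix sizes; in the paper each character sample contributes one matrix of per-point features.

```python
# 2DPCA on a stack of feature matrices: compute the image covariance
# matrix, keep its leading eigenvectors, and project each sample matrix.
import numpy as np

rng = np.random.default_rng(0)
M, m, n, d = 100, 60, 8, 4             # samples, rows, columns, kept dims
A = rng.normal(size=(M, m, n))         # placeholder feature matrices

A_mean = A.mean(axis=0)
G = np.zeros((n, n))                   # image covariance matrix
for Ai in A:
    D = Ai - A_mean
    G += D.T @ D
G /= M

# Leading eigenvectors of G span the reduced subspace.
eigvals, eigvecs = np.linalg.eigh(G)
X = eigvecs[:, np.argsort(eigvals)[::-1][:d]]   # n x d projection

Y = A @ X                              # each sample -> m x d feature matrix
print(Y.shape)                         # (100, 60, 4)
```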
Abstract:
In this paper, expressions for the convolution multiplication properties of the MDCT are derived starting from the equivalent DFT representations. Using these expressions, methods for implementing linear filtering through block convolution in the MDCT domain are presented. The implementation is exact for symmetric filters and approximate for non-symmetric filters in the case of the rectangular-window-based MDCT. For a general MDCT window function, the filtering is done on the windowed segments and hence the convolution is approximate for symmetric as well as non-symmetric filters. This approximation error is shown to be perceptually insignificant for symmetric impulse response filters. Moreover, the inherent 50% overlap between adjacent frames used in MDCT computation reduces this approximation error, similar to the smoothing of other block-processing errors. The presented techniques are useful for compressed-domain processing of audio signals.
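For concreteness, a direct-form MDCT of 50%-overlapped frames (the transform whose convolution properties the paper exploits) can be written as follows; this is the textbook definition, not the paper's filtering scheme itself.

```python
# Direct-form MDCT: a 2N-sample frame maps to N coefficients.
import numpy as np

def mdct(frame, window=None):
    """MDCT of a length-2N frame (textbook definition)."""
    x = np.asarray(frame, dtype=float)
    N = len(x) // 2
    if window is not None:
        x = x * window                # general MDCT window function
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ x

# 50% overlap between adjacent frames, as used in MDCT-based audio coding.
signal = np.random.default_rng(0).normal(size=1024)
N = 128
frames = [signal[i:i + 2 * N] for i in range(0, len(signal) - 2 * N + 1, N)]
coeffs = np.array([mdct(f) for f in frames])
print(coeffs.shape)                   # (number of frames, N)
```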
Abstract:
We investigate the feasibility of developing comprehensive gate delay and slew models which incorporate output load, input edge slew, supply voltage, temperature, global process variations and local process variations all in the same model. We find that standard polynomial models cannot handle such a large heterogeneous set of input variables. We instead use neural networks, which are well known for their ability to approximate any arbitrary continuous function. Our initial experiments with a small subset of standard cell gates of an industrial 65 nm library show promising results, with error in mean less than 1%, error in standard deviation less than 3% and maximum error less than 11% as compared to SPICE, for models covering 0.9-1.1 V of supply, -40°C to 125°C of temperature, load, slew, and global and local process parameters. Enhancing conventional libraries to be voltage and temperature scalable with similar accuracy requires on average 4x more SPICE characterization runs.
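The modeling setup is easy to prototype. The sketch below trains a small neural network on a synthetic delay surface standing in for SPICE data; the feature ranges follow the abstract, while everything else (network size, data, units) is an assumption.

```python
# Sketch: a small neural network mapping (load, slew, Vdd, temperature,
# process parameters) to gate delay.  Training data is synthetic; real
# targets would come from SPICE characterization runs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N = 2000
X = np.column_stack([
    rng.uniform(1, 50, N),            # output load (hypothetical fF range)
    rng.uniform(5, 200, N),           # input edge slew (hypothetical ps range)
    rng.uniform(0.9, 1.1, N),         # supply voltage (V)
    rng.uniform(-40, 125, N),         # temperature (degC)
    rng.normal(0, 1, (N, 4)),         # global + local process parameters
])

# Placeholder delay surface standing in for SPICE results.
delay = (10 + 0.5 * X[:, 0] + 0.2 * X[:, 1] - 30 * (X[:, 2] - 1.0)
         + 0.01 * X[:, 3] + X[:, 4:].sum(axis=1))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, delay)                   # (feature scaling omitted for brevity)
print(model.predict(X[:3]))
```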
Abstract:
A circular array of Piezoelectric Wafer Active Sensors (PWAS) has been employed to detect surface damage, such as corrosion, using Lamb waves. The array consists of a number of small PWASs of 10 mm diameter and 1 mm thickness. The advantage of a circular array is its compact arrangement and large area of coverage for monitoring with a small area of physical access. The growth of corrosion is monitored in a laboratory-scale set-up using the PWAS array, and the nature of the reflected and transmitted Lamb wave patterns due to corrosion is investigated. Wavelet time-frequency maps of the sensor signals are employed, and a damage index is plotted against the damage parameters and the varying frequency of the actuation signal (a windowed sine signal). The variation of the wavelet coefficients with the growth of corrosion is studied. The wavelet coefficients as a function of time give insight into the effect of corrosion on the time-frequency scale. We present a method to eliminate the time-scale effect, which makes it easier to identify the signature of damage in the measured signals. The proposed method is useful in determining the approximate location of the corrosion with respect to the locations of three neighboring sensors in the circular array. A cumulative damage index is computed for varying damage sizes and the results appear promising.
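A wavelet-based damage index of the kind described can be sketched as a comparison of time-frequency maps of a baseline and a damaged-state signal. The signals and the index definition below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: CWT maps of baseline vs. damaged signals and a residual-energy
# damage index.  Signals are synthetic; the index definition is assumed.
import numpy as np
import pywt

t = np.linspace(0.0, 1e-3, 2000)
# A windowed tone burst standing in for the actuation/response signal.
tone = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 2e-4) / 5e-5) ** 2)
baseline = tone
damaged = tone + 0.3 * np.roll(tone, 400)   # extra "corrosion" reflection

scales = np.arange(1, 64)
cb, _ = pywt.cwt(baseline, scales, "morl")  # time-frequency map, baseline
cd, _ = pywt.cwt(damaged, scales, "morl")   # time-frequency map, damaged

# Damage index: normalized energy of the residual wavelet map.
di = np.sum((np.abs(cd) - np.abs(cb)) ** 2) / np.sum(np.abs(cb) ** 2)
print(di)
```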
Abstract:
In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a certain measure appropriate for the social network. This problem is relevant in many settings, such as the analysis of co-authorship networks, diffusion of information, viral marketing, etc. However, in most situations, this problem turns out to be NP-hard. The existing approaches for solving this problem are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm based on the Shapley value for efficiently computing an approximate solution to this problem. Our proposed algorithm does not use the sub-modularity of the underlying objective function and hence is a general approach. We demonstrate the efficacy of the algorithm using a co-authorship data set from e-print arXiv (www.arxiv.org), having 8361 authors.
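The Shapley-value idea can be illustrated with a short Monte Carlo estimator. The value function below (coverage of a node set and its neighbours) and the permutation-sampling estimator are generic stand-ins; the paper's measure and algorithm may differ in detail.

```python
# Shapley-value-based node ranking via random permutation sampling.
# v(S) here is a simple coverage measure; it is an illustrative stand-in.
import random
import networkx as nx

def coverage(G, S):
    """v(S): number of nodes in S or adjacent to S."""
    covered = set(S)
    for u in S:
        covered.update(G.neighbors(u))
    return len(covered)

def shapley_estimates(G, n_perm=200, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    phi = {u: 0.0 for u in nodes}
    for _ in range(n_perm):
        rng.shuffle(nodes)
        S, prev = set(), 0
        for u in nodes:              # average marginal contribution of u
            S.add(u)
            val = coverage(G, S)
            phi[u] += (val - prev) / n_perm
            prev = val
    return phi

G = nx.karate_club_graph()           # small example network
phi = shapley_estimates(G)
top_k = sorted(phi, key=phi.get, reverse=True)[:5]
print(top_k)                         # top-5 nodes by estimated Shapley value
```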
Abstract:
We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes $O(m^{1/2})$ amortized time per arc and our second algorithm takes $O(n^{2.5}/m)$ amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our $O(m^{1/2})$ bound improves the best previous bound by a factor of $\log n$ and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our $O(n^{2.5}/m)$ bound improves the best previously published bound by a factor of $n^{1/4}$ and a recent bound obtained independently of our work by a factor of $\log n$. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.
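To make the problem concrete, here is a compact sketch of a simple incremental topological-order maintainer in the spirit of the earlier two-way-search algorithms the paper improves upon (roughly the Pearce-Kelly scheme). It is emphatically not the $O(m^{1/2})$ or $O(n^{2.5}/m)$ algorithm of the paper.

```python
# Simple incremental topological ordering with cycle detection on arc
# insertion (Pearce-Kelly style); illustrative baseline only.
class OnlineTopo:
    def __init__(self, n):
        self.succ = [[] for _ in range(n)]
        self.pred = [[] for _ in range(n)]
        self.ord = list(range(n))      # ord[v] = position of v in the order

    def add_arc(self, u, v):
        """Add arc u->v; return False if it would create a cycle."""
        if u == v:
            return False
        if self.ord[u] > self.ord[v]:  # order violated: search and repair
            fwd, bwd = [], []
            if not self._fwd(v, self.ord[u], fwd, set()):
                return False           # search reached u: arc closes a cycle
            self._bwd(u, self.ord[v], bwd, set())
            self._reorder(bwd, fwd)
        self.succ[u].append(v)
        self.pred[v].append(u)
        return True

    def _fwd(self, w, ub, out, seen):  # forward search below position ub
        seen.add(w)
        out.append(w)
        for x in self.succ[w]:
            if self.ord[x] == ub:      # only u sits at position ub
                return False
            if x not in seen and self.ord[x] < ub:
                if not self._fwd(x, ub, out, seen):
                    return False
        return True

    def _bwd(self, w, lb, out, seen):  # backward search above position lb
        seen.add(w)
        out.append(w)
        for x in self.pred[w]:
            if x not in seen and self.ord[x] > lb:
                self._bwd(x, lb, out, seen)

    def _reorder(self, bwd, fwd):
        # Backward set must precede forward set; reuse the affected slots.
        nodes = (sorted(bwd, key=lambda w: self.ord[w])
                 + sorted(fwd, key=lambda w: self.ord[w]))
        slots = sorted(self.ord[w] for w in nodes)
        for w, i in zip(nodes, slots):
            self.ord[w] = i

t = OnlineTopo(3)
print(t.add_arc(0, 1), t.add_arc(1, 2), t.add_arc(2, 0))  # True True False
```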
Abstract:
The primary objective of the paper is to make use of a statistical digital human model to better understand the nature of the reach probability of points in the task space. The concept of a task-dependent boundary manikin is introduced to geometrically characterize the extreme individuals in a given population who would accomplish the task. For a given point of interest and task, the map of the acceptable variation in anthropometric parameters is superimposed on the distribution of the same parameters in the given population to identify the extreme individuals. To illustrate the concept, the task-space mapping is done for the reach probability of human arms. Unlike the boundary manikins, which are completely defined by the population, the dimensions of these manikins vary with the task, say, a point to be reached, as in the present case. Hence they are referred to here as task-dependent boundary manikins. Simulations with these manikins would help designers visualize how differently the extreme individuals would perform the task. The reach probability at the points of a 3D grid in the operational space is computed; for objects overlaid in this grid, approximate probabilities are derived from the grid for rendering them with colors indicating the reach probability. The method may also help in providing a rational basis for the selection of personnel for a given task.