894 results for Scaling Criteria
Abstract:
In this paper, we outline a systematic procedure for scaling analysis of momentum and heat transfer in laser-melted pools. With suitable choices of non-dimensionalising parameters, the governing equations coupled with appropriate boundary conditions are first scaled, and the relative significance of the various terms appearing in them is accordingly analysed. The analysis is then utilised to predict the orders of magnitude of some important quantities, such as the velocity scale at the top surface, the velocity boundary-layer thickness, the maximum temperature rise in the pool, the fully developed pool depth, and the time required for initiation of melting. Using the scaling predictions, the influence of various processing parameters on the system variables can be well recognised, which enables a deeper insight into the physical problem of interest. Moreover, some of the quantities predicted from the scaling analysis can be utilised for optimised selection of appropriate grid sizes and time-steps for full numerical simulation of the process. The scaling predictions are finally assessed by comparison with experimental and numerical results quoted in the literature, and excellent qualitative agreement is observed.
Abstract:
‘Best’ solutions for the shock-structure problem are obtained by solving the Boltzmann equation for a rigid-sphere gas by applying minimum-error criteria to the Mott-Smith ansatz. The use of two such criteria, minimizing respectively the local and total errors, as well as independent computations of the remaining error, establishes the high accuracy of the solutions, although it is shown that the Mott-Smith distribution is not an exact solution of the Boltzmann equation even at infinite Mach number. The minimum local error method is found to be particularly simple and efficient. Adopting the present solutions as the standard of comparison, it is found that the widely used $v_x^2$-moment solutions can be as much as a third in error, but that results based on Rosen's method provide good approximations. Finally, it is shown that if the Maxwell mean free path on the hot side of the shock is chosen as the scaling length, the value of the density-slope shock thickness is relatively insensitive to the intermolecular potential. A comparison is made on this basis of the present results with experiment, and very satisfactory quantitative agreement is obtained.
Abstract:
Experiments on reverse transition were conducted in two-dimensional accelerated incompressible turbulent boundary layers. Mean velocity profiles, longitudinal velocity fluctuations $\tilde{u}^{\prime}(=(\overline{u^{\prime 2}})^{\frac{1}{2}})$ and the wall shear stress $(\tau_w)$ were measured. The mean velocity profiles show that the wall region adjusts itself to laminar conditions earlier than the outer region. During the reverse-transition process, increases in the shape parameter $(H)$ are accompanied by a decrease in the skin-friction coefficient $(C_f)$. Profiles of the turbulent intensity $(\overline{u^{\prime 2}})$ exhibit near similarity in the turbulence decay region. The breakdown of the law of the wall is characterized by the parameter \[ \Delta_p (=\nu[dP/dx]/\rho U^{*3}) = - 0.02, \] where $U^*$ is the friction velocity. Downstream of this region, the decay of $\tilde{u}^{\prime}$ fluctuations occurred when the momentum-thickness Reynolds number $(R)$ decreased roughly below 400.
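The law-of-the-wall breakdown parameter quoted in this abstract is simple to evaluate directly. A minimal sketch; all numerical values below are invented for illustration and are not taken from the experiments:

```python
# Sketch: evaluating the law-of-the-wall breakdown parameter
# Delta_p = nu * (dP/dx) / (rho * U*^3) from the abstract.
# All numerical values are illustrative, not from the paper.

nu = 1.5e-5      # kinematic viscosity of air, m^2/s
rho = 1.2        # density, kg/m^3
dP_dx = -8.0     # favourable streamwise pressure gradient, Pa/m
tau_w = 0.05     # wall shear stress, Pa

u_star = (tau_w / rho) ** 0.5               # friction velocity U*
delta_p = nu * dP_dx / (rho * u_star ** 3)  # breakdown parameter

# Relaminarisation is indicated once delta_p reaches about -0.02.
relaminarising = delta_p <= -0.02
```

A favourable (negative) pressure gradient makes `delta_p` negative; the abstract's criterion flags breakdown of the log law at `delta_p = -0.02`.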
Abstract:
We study the statistical properties of spatially averaged global injected-power fluctuations for Taylor-Couette flow of a wormlike micellar gel formed by the surfactant cetyltrimethylammonium tosylate. At sufficiently high Weissenberg numbers the shear rate, and hence the injected power p(t), at a constant applied stress shows large irregular fluctuations in time. The nature of the probability distribution function (PDF) of p(t) and the power-law decay of its power spectrum are very similar to those observed in recent studies of elastic turbulence for polymer solutions. Remarkably, these non-Gaussian PDFs can be well described by a universal, large-deviation functional form given by the generalized Gumbel distribution observed in the context of spatially averaged global measures in diverse classes of highly correlated systems. We show by in situ rheology and polarized light scattering experiments that in the elastic turbulent regime the flow is spatially smooth but random in time, in agreement with a recent hypothesis for elastic turbulence.
Abstract:
For the successful performance of a granular filter medium, existing design guidelines, which are based on the particle-size distribution (PSD) characteristics of the base soil and filter medium, require two contradictory conditions to be satisfied, viz., soil retention and permeability. In spite of the wide applicability of these guidelines, it is well recognized that (i) they are applicable only to the particular range of soils tested in the laboratory, (ii) the design procedures do not include performance-based selection criteria, and (iii) there are no means to establish the sensitivity of the important variables influencing performance. In the present work, analytical solutions are developed to obtain a factor of safety with respect to the soil-retention and permeability criteria for a base soil–filter medium system subjected to a soil-boiling condition. The proposed analytical solutions take into consideration relevant geotechnical properties such as void ratio, permeability, dry unit weight, effective friction angle, shape and size of soil particles, seepage discharge, and the existing hydraulic gradient. The solutions are validated through example applications and experimental results, and it is shown that they can be used successfully in the selection as well as the design of granular filters and can be applied to all types of base soils.
Abstract:
Multiple Clock Domain (MCD) processors provide an attractive solution to the increasingly challenging problems of clock distribution and power dissipation. They allow a chip to be partitioned into different clock domains, with each domain's frequency (voltage) independently configured. This flexibility adds new dimensions to the Dynamic Voltage and Frequency Scaling (DVFS) problem, while providing better scope for saving energy and meeting performance demands. In this paper, we propose a compiler-directed approach for MCD-DVFS. We build a formal Petri-net-based program performance model, parameterized by settings of microarchitectural components and resource configurations, and integrate it with our compiler passes for frequency selection. Our model estimates the performance impact of a frequency setting, unlike the existing best techniques, which rely on weaker indicators of domain performance such as queue occupancies (used by online methods) and slack manifestation for a particular frequency setting (software-based methods). We evaluate our method with subsets of the SPECFP2000, Mediabench and MiBench benchmarks. Our mean energy savings is 60.39% (versus 33.91% for the best software technique) in a memory-constrained system for cache-miss-dominated benchmarks, and we meet the performance demands. Our ED^2 improves by 22.11% (versus 18.34%) for the other benchmarks. For a CPU with restricted frequency settings, our energy consumption is within 4.69% of the optimal.
Abstract:
Energy consumption has become a major constraint in providing increased functionality for devices with small form factors. Dynamic voltage and frequency scaling has been identified as an effective approach for reducing the energy consumption of embedded systems. Earlier works on dynamic voltage scaling focused mainly on performing voltage scaling when the CPU is waiting for the memory subsystem, or concentrated chiefly on loop nests and/or subroutine calls having a sufficient number of dynamic instructions. This paper concentrates on coarser program regions and, for the first time, uses program phase behavior for performing dynamic voltage scaling. Program phases are annotated at compile time with mode-switch instructions. Further, we relate the Dynamic Voltage Scaling Problem to the Multiple Choice Knapsack Problem, and use well-known heuristics to solve it efficiently. Also, we develop a simple integer linear program formulation for this problem. Experimental evaluation on a set of media applications reveals that our heuristic method obtains a 38% reduction in energy consumption on average, with a performance degradation of 1%, and up to a 45% reduction in energy with a performance degradation of 5%. Further, the energy consumed by the heuristic solution is within 1% of the optimal solution obtained from the ILP approach.
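The mapping to the Multiple Choice Knapsack Problem can be made concrete: each program phase contributes one group, each voltage/frequency level one item with an (energy, delay) pair, and exactly one item per group must be chosen within a total delay budget. A toy exact dynamic program under invented numbers; the paper's heuristics and ILP formulation are not reproduced here:

```python
# Sketch: per-phase V/f selection as a Multiple Choice Knapsack Problem.
# The regions, levels and (energy, delay) numbers are made up for
# illustration; they are not from the paper's benchmarks.

# For each region: a list of (energy, delay) pairs, one per V/f level.
regions = [
    [(10, 2), (7, 3), (5, 5)],
    [(8, 1), (6, 2), (3, 4)],
]
delay_budget = 6

# Exact DP over regions: pick exactly one level per region,
# minimising total energy subject to the total-delay budget.
INF = float("inf")
best = {0: 0}                      # total delay -> min energy so far
for levels in regions:
    nxt = {}
    for d, e in best.items():
        for le, ld in levels:
            nd = d + ld
            if nd <= delay_budget:
                nxt[nd] = min(nxt.get(nd, INF), e + le)
    best = nxt

min_energy = min(best.values())    # -> 13 with the numbers above
```

With these numbers the optimum picks the slowest level of one region and a mid level of the other; the MCKP heuristics the abstract mentions approximate this choice much faster than exhaustive search.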
Abstract:
In this study, we analyse simultaneous measurements (at 50 Hz) of velocity at several heights and of shear stress at the surface, made during the Utah field campaign, for the presence of ranges of scales where distinct scale-to-scale interactions between velocity and shear stress can be identified. We find that our results are similar to those obtained in a previous study [Venugopal et al., 2003], in which wind-tunnel measurements of velocity and shear stress were analysed (contrary to the claim in V2003 that the scaling relations might depend on the Reynolds number). We use a wavelet-based scale-to-scale cross-correlation to detect three ranges of scales of interaction between velocity and shear stress, namely: (a) the inertial subrange, where the correlation is negligible; (b) the energy-production range, where the correlation follows a logarithmic law; and (c) scales larger than the boundary-layer height, where the correlation reaches a plateau.
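A scale-to-scale cross-correlation of the kind described can be sketched with a plain Haar decomposition. The signals below are synthetic stand-ins, not the Utah or wind-tunnel data, and the paper's actual wavelet choice may differ:

```python
import numpy as np

# Sketch: Haar-wavelet scale-to-scale cross-correlation between two
# signals, in the spirit of the analysis described in the abstract.

rng = np.random.default_rng(0)
n = 1024
u = rng.standard_normal(n)                      # stand-in for velocity
tau = 0.5 * u + 0.5 * rng.standard_normal(n)    # stand-in for shear stress

def haar_details(x, level):
    """Haar-like detail signal of x at dyadic scale 2**level."""
    step = 2 ** level
    x = x[: (len(x) // step) * step].reshape(-1, step)
    half = step // 2
    return x[:, :half].mean(axis=1) - x[:, half:].mean(axis=1)

correlations = []
for level in range(1, 6):
    du, dt = haar_details(u, level), haar_details(tau, level)
    correlations.append(np.corrcoef(du, dt)[0, 1])
# correlations[k] measures the velocity/stress coupling at scale 2**(k+1)
```

Plotting such a correlation against scale is what reveals the three regimes (negligible, logarithmic, plateau) the abstract identifies.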
Abstract:
A generalized power-tracking algorithm that minimizes the power consumption of digital circuits by dynamic control of the supply voltage and the body bias is proposed. A direct power-monitoring scheme is proposed that does not need any replica and hence can sense the total power consumed by the load circuit across process, voltage, and temperature corners. Design details and the performance of the power monitor and tracking algorithm are examined using a simulation framework developed in the UMC 90-nm CMOS triple-well process. The proposed algorithm with the direct power monitor achieves power savings of 42.2% for an activity of 0.02 and 22.4% for an activity of 0.04. Experimental results from a test chip fabricated in the AMS 350-nm process show power savings of 46.3% and 65% for the load circuit operating in the super-threshold and near sub-threshold regions, respectively. The measured resolution of the power monitor is around 0.25 mV, and it has a power overhead of 2.2% of die power. Issues with loop convergence and design tradeoffs for the power monitor are also discussed.
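The abstract does not spell out the tracking loop itself, but a generic hill-climbing tracker over supply voltage and body bias conveys the idea. The quadratic power model below is a toy stand-in for the on-chip power monitor, not the paper's circuit:

```python
# Sketch: a hill-climbing power-tracking loop over supply voltage (vdd)
# and body bias (vbb), in the spirit of the algorithm the abstract
# describes. measured_power is an invented convex stand-in for the
# direct power monitor.

def measured_power(vdd, vbb):
    # Toy model: a single power minimum near (0.7 V, 0.2 V).
    return 2.0 * (vdd - 0.7) ** 2 + 1.5 * (vbb - 0.2) ** 2 + 0.3

def track(vdd=1.2, vbb=0.0, step=0.05, iters=200):
    """Greedily perturb (vdd, vbb) toward lower monitored power."""
    for _ in range(iters):
        best = (vdd, vbb)
        for dv, db in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if measured_power(vdd + dv, vbb + db) < measured_power(*best):
                best = (vdd + dv, vbb + db)
        if best == (vdd, vbb):      # no neighbour improves: converged
            break
        vdd, vbb = best
    return vdd, vbb

vdd_opt, vbb_opt = track()
```

Because the monitor senses total power directly, such a loop needs no replica circuit; convergence behaviour, as the abstract notes, is the main design concern.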
Abstract:
We present an irreversible watermarking approach, robust to affine-transform attacks, for camera, biomedical and satellite images stored as monochrome bitmap images. The approach is based on image normalisation, in which both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria. The normalisation procedure is invariant to affine-transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. Here, a direct-sequence code-division multiple-access approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various types of attacks, such as Gaussian noise, shearing, scaling, rotation, flipping, affine transforms, signal processing and JPEG compression. Performance analysis results are measured using image-processing metrics.
Abstract:
The problem of developing L2-stability criteria for feedback systems with a single time-varying gain, where the criteria impose average variation constraints on the gain, is treated. A unified approach is presented that facilitates the development of such average-variation criteria for both linear and nonlinear systems. The stability criteria derived here are shown to be more general than existing results.
Abstract:
The data obtained in the earlier parts of this series for the donor- and acceptor-end parameters of N–H...O and O–H...O hydrogen bonds have been utilised to obtain a qualitative working criterion to classify the hydrogen bonds into three categories: “very good” (VG), “moderately good” (MG) and “weak” (W). The general distribution curves for all four parameters are found to be nearly Gaussian. Assuming that VG hydrogen bonds lie between 0 and ±1σ, MG hydrogen bonds between ±1σ and ±2σ, and W hydrogen bonds beyond ±2σ (where σ is the standard deviation), suitable cut-off limits for classifying the hydrogen bonds into the three categories have been derived. These limits are used to obtain the VG and MG ranges for the four parameters: l and θ at the donor end, and the two corresponding parameters at the acceptor end. The qualitative strength of a hydrogen bond is decided by the cumulative application of the criteria to all four parameters. The criterion has been further applied to some practical examples in conformational studies, such as the α-helix, and can be used for locating hydrogen atoms so as to form good hydrogen bonds. An empirical approach to the energy of hydrogen bonds in the three categories has also been presented.