975 results for Average Nusselt number
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta seconds of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown and only its probability distribution is known. We show that the mapping has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
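As an illustration of the selection mechanism described in this abstract, the following Python sketch simulates one round of a generic timer-based scheme; the decreasing metric-to-timer mapping used here is an arbitrary placeholder, not the optimal mapping derived in the paper, and the metric distribution is assumed uniform.

import random

def timer_selection_round(num_nodes, delta, t_max=1.0):
    """Simulate one selection round of a generic timer scheme.

    Each node draws a metric in [0, 1) and maps it to a timer with a
    decreasing function, so the best (highest-metric) node fires first.
    Selection fails if the runner-up fires within `delta` seconds.
    """
    metrics = [random.random() for _ in range(num_nodes)]
    # Placeholder decreasing metric-to-timer mapping (not the optimal one).
    timers = sorted(t_max * (1.0 - m) for m in metrics)
    if num_nodes == 1:
        return True
    return timers[1] - timers[0] >= delta

def success_probability(num_nodes, delta, rounds=100_000):
    wins = sum(timer_selection_round(num_nodes, delta) for _ in range(rounds))
    return wins / rounds

if __name__ == "__main__":
    print(success_probability(num_nodes=10, delta=0.05))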
Abstract:
The distributed, low-feedback, timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta seconds of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases in which the probability distribution of the number of nodes is either known a priori or is unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions of the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
Abstract:
Hydrophobic/superhydrophobic metallic surfaces prepared via chemical treatment are encountered in many industrial scenarios involving the impingement of spray droplets. The effectiveness of such surfaces is understood through the analysis of droplet impact experiments. In the present study, three target surfaces with aluminum (Al-6061) as the base material were prepared: acid-etched, Octadecyl Trichloro Silane (OTS) coated, and acid-etched plus OTS-coated. Experiments on the impact of inertia-dominated water drops on these chemically modified aluminum surfaces were carried out with the objective of highlighting the effect of chemical treatment of the target surfaces on key sub-processes occurring in the drop impact phenomenon. High-speed videos of the entire drop impact dynamics were captured at three Weber number (We) conditions representative of the high-We (We > 200) regime. During the early stages of drop spreading, the impact resulted in the ejection of secondary droplets from the spreading drop front on the etched surfaces, resembling prompt splash on rough surfaces, whereas no such splashing was observable on the untreated aluminum surface. Prominent development of undulations (fingers) was observed at the rim of the drop spreading on the etched surfaces; between the etched surfaces, the OTS-coated surface showed a more subdued development of fingers than the uncoated one. The impacted drops showed intense receding on OTS-coated surfaces, whereas on the etched surface a highly irregular receding, with drop liquid sticking to the surface, was observed. Quantitative analyses were performed to reveal the effect of target surface characteristics on drop impact parameters such as the temporal variation of the spread factor of the drop lamella, the temporal variation of average finger length during the spreading phase, maximum drop spreading, time taken to attain maximum spreading, sensitivity of maximum spreading to We, number of fingers at maximum spreading, and average receding velocity of the drop lamella. Existing models for maximum drop spreading showed reasonably good agreement with the experimental measurements on the target surfaces except the acid-etched surface. (C) 2014 Elsevier B.V. All rights reserved.
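For reference, the dimensionless quantities used in the quantitative analysis above follow their standard definitions (assumed here, since the abstract does not state them explicitly):

\mathrm{We} = \frac{\rho V^2 D_0}{\sigma}, \qquad \beta(t) = \frac{d(t)}{D_0}, \qquad \beta_{\max} = \frac{d_{\max}}{D_0},

where \rho is the liquid density, V the impact velocity, D_0 the pre-impact drop diameter, \sigma the surface tension, and d(t) the instantaneous diameter of the spreading lamella.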
Abstract:
Clustering techniques which can handle incomplete data have become increasingly important due to varied applications in marketing research, medical diagnosis and survey data analysis. Existing techniques cope with missing values either by data modification/imputation or by partial distance computation, both of which can be unreliable depending on the number of available features. In this paper, we propose a novel approach for clustering data with missing values, which performs the task by Symmetric Non-Negative Matrix Factorization (SNMF) of a complete pair-wise similarity matrix computed from the given incomplete data. To accomplish this, we define a novel similarity measure based on the Average Overlap similarity metric which can effectively handle missing values without modifying the data. Further, the similarity measure is more reliable than partial distances and inherently possesses the properties required to perform SNMF. The experimental evaluation on real-world datasets demonstrates that the proposed approach is efficient, scalable and shows significantly better performance than the existing techniques.
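A minimal sketch of the factorization step, assuming the standard damped multiplicative update for symmetric NMF; the toy similarity matrix below is a placeholder and is not computed with the Average Overlap based measure defined in the paper.

import numpy as np

def symmetric_nmf(S, k, iters=200, eps=1e-9, seed=0):
    """Factor a symmetric nonnegative similarity matrix S into H @ H.T.

    Rows of H act as soft cluster memberships; argmax per row gives a
    hard clustering. Uses the damped multiplicative update (beta = 0.5).
    """
    rng = np.random.default_rng(seed)
    H = rng.random((S.shape[0], k))
    for _ in range(iters):
        numer = S @ H
        denom = H @ (H.T @ H) + eps
        H *= 0.5 + 0.5 * numer / denom    # update keeps H nonnegative
    return H

# Toy usage on a placeholder 3 x 3 similarity matrix.
S = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
print(symmetric_nmf(S, k=2).argmax(axis=1))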
Abstract:
Nanoparticle deposition behavior observed at the Darcy scale represents an average of the processes occurring at the pore scale. Hence, the effect of various pore-scale parameters on nanoparticle deposition can be understood by studying nanoparticle transport at pore scale and upscaling the results to the Darcy scale. In this work, correlation equations for the deposition rate coefficients of nanoparticles in a cylindrical pore are developed as a function of nine pore-scale parameters: the pore radius, nanoparticle radius, mean flow velocity, solution ionic strength, viscosity, temperature, solution dielectric constant, and nanoparticle and collector surface potentials. Based on dominant processes, the pore space is divided into three different regions, namely, bulk, diffusion, and potential regions. Advection-diffusion equations for nanoparticle transport are prescribed for the bulk and diffusion regions, while the interaction between the diffusion and potential regions is included as a boundary condition. This interaction is modeled as a first-order reversible kinetic adsorption. The expressions for the mass transfer rate coefficients between the diffusion and the potential regions are derived in terms of the interaction energy profile. Among other effects, we account for nanoparticle-collector interaction forces on nanoparticle deposition. The resulting equations are solved numerically for a range of values of pore-scale parameters. The nanoparticle concentration profile obtained for the cylindrical pore is averaged over a moving averaging volume within the pore in order to get the 1-D concentration field. The latter is fitted to the 1-D advection-dispersion equation with an equilibrium or kinetic adsorption model to determine the values of the average deposition rate coefficients. In this study, pore-scale simulations are performed for three values of Peclet number, Pe = 0.05, 5, and 50. We find that under unfavorable conditions, the nanoparticle deposition at pore scale is best described by an equilibrium model at low Peclet numbers (Pe = 0.05) and by a kinetic model at high Peclet numbers (Pe = 50). But, at an intermediate Pe (e.g., near Pe = 5), both equilibrium and kinetic models fit the 1-D concentration field. Correlation equations for the pore-averaged nanoparticle deposition rate coefficients under unfavorable conditions are derived by performing a multiple-linear regression analysis between the estimated deposition rate coefficients for a single pore and various pore-scale parameters. The correlation equations, which follow a power law relation with nine pore-scale parameters, are found to be consistent with the column-scale and pore-scale experimental results, and qualitatively agree with the colloid filtration theory. These equations can be incorporated into pore network models to study the effect of pore-scale parameters on nanoparticle deposition at larger length scales such as Darcy scale.
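For concreteness, the 1-D fitting model mentioned above has the standard advection-dispersion form with a first-order reversible kinetic deposition term (written here in common textbook notation, which is an assumption about the exact symbols used in the paper):

\frac{\partial C}{\partial t} + \frac{\rho_b}{\theta}\frac{\partial s}{\partial t} = D\frac{\partial^2 C}{\partial x^2} - v\frac{\partial C}{\partial x}, \qquad \frac{\rho_b}{\theta}\frac{\partial s}{\partial t} = k_{\mathrm{att}}\, C - \frac{\rho_b}{\theta}\, k_{\mathrm{det}}\, s,

where C is the aqueous nanoparticle concentration, s the deposited mass per unit mass of solid, v the average pore velocity, D the dispersion coefficient, \rho_b the bulk density, \theta the porosity, and k_att and k_det the deposition and release rate coefficients; the equilibrium model corresponds to the fast-exchange limit s = K_d C.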
Abstract:
In this paper, a theory is developed to calculate the average strain field in materials with randomly distributed inclusions. Much previous research investigating average field behavior was based upon Mori and Tanaka's idea. Since those studies were restricted to materials with uniform distributions of inclusions, they did not need detailed statistical information about the random microstructure and could replace the ensemble average with the volume average. To study more general materials with randomly distributed inclusions, the number density function is introduced in formulating the average field equation in this research. Both uniform and nonuniform distributions of inclusions are taken into account in detail.
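Schematically, the role of the number density function can be seen from the definition of the ensemble average (a generic statement of the averaging step, not the paper's specific derivation):

\langle \boldsymbol{\varepsilon} \rangle(\mathbf{x}) = \int \boldsymbol{\varepsilon}(\mathbf{x};\alpha)\, p(\alpha)\, d\alpha,

where \alpha ranges over configurations of inclusion centers with probability density p(\alpha), which is constructed from the number density function n(\mathbf{x}'). Only when the distribution is statistically uniform (constant n) does this reduce to the volume average used in Mori-Tanaka type analyses.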
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Streamflow values show definite seasonal patterns in their month-to-month correlation structure. The structure also seems to vary as a function of the type of stream (coastal versus mountain or humid versus arid region). The standard autoregressive moving average (ARMA) time series model is incapable of reproducing this correlation structure. ... A periodic ARMA time series model is one in which an ARMA model is fitted to each month or season but the parameters of the model are constrained to be periodic according to a Fourier series. This constraint greatly reduces the number of parameters but still leaves the flexibility for matching the seasonally varying correlograms.
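As an illustration of the model class described in this extract, the lowest-order periodic AR case with Fourier-constrained parameters can be written as follows (the notation is assumed, not quoted from the paper):

x_{\nu,\tau} = \phi_1(\tau)\, x_{\nu,\tau-1} + \varepsilon_{\nu,\tau}, \qquad \phi_1(\tau) = a_0 + \sum_{k=1}^{K}\left[a_k \cos\frac{2\pi k \tau}{12} + b_k \sin\frac{2\pi k \tau}{12}\right],

where \nu indexes the year and \tau = 1, ..., 12 the month; truncating the Fourier series at small K yields far fewer parameters than fitting twelve separate coefficients while still letting the correlation structure vary with the season.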
Abstract:
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, namely higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method is also capable of rapidly and effectively computing the model equations of fiber Raman amplifiers and semiconductor lasers. (c) 2006 Optical Society of America
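The abstract does not give the step-adjustment rule itself; the Python sketch below only illustrates the generic idea of error-controlled automatic step adjustment for a section-by-section marching solver (step halving/doubling against a tolerance). The function names and the step-doubling error estimate are assumptions, not the authors' scheme.

def march_adaptive(step_fn, y0, z_end, h0=1.0, tol=1e-6):
    """Advance y from z = 0 to z_end, adjusting the section length h.

    step_fn(y, z, h) advances the solution over one section of length h.
    The local error is estimated by comparing one full step with two
    half steps, and h is grown or shrunk to keep that error near tol.
    """
    y, z, h = y0, 0.0, h0
    while z < z_end:
        h = min(h, z_end - z)
        coarse = step_fn(y, z, h)
        fine = step_fn(step_fn(y, z, h / 2), z + h / 2, h / 2)
        err = abs(fine - coarse)
        if err <= tol:
            y, z = fine, z + h          # accept the section
            if err < tol / 4:
                h *= 2                  # enlarge the next section
        else:
            h /= 2                      # reject and retry with a shorter section
    return y

# Toy usage: exponential gain dy/dz = 0.5 * y handled with a first-order step.
print(march_adaptive(lambda y, z, h: y * (1 + 0.5 * h), 1.0, 10.0))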
Abstract:
Based on ray theory and Longuet-Higgins's linear model of sea waves, the joint distribution of wave envelope and apparent wave number vector is established. From the joint distribution, we define a new concept, namely the outer wave number spectrum, to describe the outer characteristics of ocean waves. The analytical form of the outer wave number spectrum and the probability distributions of the apparent wave number vector and its components are then derived. The outer wave number spectrum is compared with the inner wave number spectrum for the average status of wind-wave development corresponding to a peakedness factor P = 3. Discussions of the similarities and differences between the outer wave number spectrum and the inner one are also presented in the paper. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The purpose of this preliminary study is to identify signs of fatigue in specific muscle groups that in turn directly influence accuracy in professional darts. Electromyography (EMG) sensors are employed to monitor the electrical activity produced by skeletal muscles of the trunk and upper limb during the throw. It is noted that the Flexor Pollicis Brevis muscle, which controls the critical release action during the throw, shows signs of fatigue. This is accompanied by an inherent increase in mean integral EMG amplitude for a number of other throw-related muscles, indicating an attempt to maintain a constant applied throwing force. A strong correlation is shown to exist between average score and the decrease in mean integral EMG amplitude for the Flexor Pollicis Brevis.
Abstract:
Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) is developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, and show the strong connection between MOQA and parallel computing, reversible computing and data entropy analysis.
Abstract:
This work considers the static calculation of a program's average-case time. The number of systems that currently tackle this research problem is quite small due to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and is individually discussed in this work, only one of them forms the basis of this research. That particular system is known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labeling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in this language. Furthermore, the theory that backs MOQA is generalised and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Also, some of the MOQA applications and extensions suggested in other works are logically examined here. For example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses that take place during the course of this research reveal some of MOQA's strengths and weaknesses. This thesis aims to be pragmatic when evaluating the current MOQA theory, the advancements set forth in the following work and the benefits of MOQA when compared to similar systems. Succinctly, this work's significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA's accomplishments and a serious deliberation of the opportunities available to MOQA in the future.
Abstract:
Flow-responsive passive samplers offer considerable potential for nutrient monitoring in catchments, bridging the gap between the intermittency of grab sampling and the high cost of automated monitoring systems. A commercially available passive sampler was evaluated in a number of river systems encapsulating a gradient in storm response, combinations of diffuse and point source pressures, and levels of phosphorus and nitrogen concentrations. Phosphorus and nitrogen are sequestered to a resin matrix in a permeable cartridge positioned in line with streamflow. A salt tracer dissolves in proportion to advective flow through the cartridge. Multiple deployments of different cartridge types were undertaken and the recovery of P and N compared with the flow-weighted mean concentration (FWMC) from high-resolution bank-side analysers at each site. Results from the passive samplers were variable and largely underestimated the FWMC derived from the bank-side analysers. Laboratory tests using ambient river samples indicated good replication of advective throughflow using pumped water, although this appeared not to be a good analogue of river conditions where flow divergence was possible. Laboratory tests also showed good nutrient retention but poor elution, and these issues appeared to combine to limit the samplers' utility in ambient river systems at the small catchment scale.
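For clarity, the flow-weighted mean concentration used as the benchmark above follows its usual definition (assumed here, since the abstract does not state it):

\mathrm{FWMC} = \frac{\sum_i C_i\, Q_i\, \Delta t_i}{\sum_i Q_i\, \Delta t_i},

where C_i and Q_i are the concentration and discharge of the i-th high-resolution measurement and \Delta t_i the interval it represents.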
Abstract:
Multiuser diversity gain has been investigated extensively in the literature in terms of system capacity. In practice, however, the design of multiuser systems with nonzero error rates requires a relationship between the error rates and the number of users within a cell. Considering best-user scheduling, where the user with the best channel condition is scheduled to transmit in each scheduling interval, our focus is on the uplink. We assume that each user communicates with the base station through a single-input multiple-output channel. We derive a closed-form expression for the average BER, and analyze how the average BER goes to zero asymptotically as the number of users increases for a given SNR. Note that the analysis of average BER even in SISO multiuser diversity systems has not previously been done with respect to the number of users for a given SNR. Our analysis can be applied to multiuser diversity systems with any number of antennas.
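A minimal Monte Carlo sketch of the scheduling effect described above, assuming BPSK modulation, i.i.d. Rayleigh fading, maximal-ratio combining across the receive antennas, and selection of the user with the largest post-combining SNR; this is an illustrative simulation, not the closed-form expression derived in the paper.

import numpy as np
from scipy.special import erfc

def avg_ber_best_user(num_users, num_rx, snr_db, trials=200_000, seed=0):
    """Monte Carlo average BER of BPSK with best-user scheduling on a SIMO uplink."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    # Post-MRC SNR gain per user: sum of num_rx exponential (Rayleigh-power) branches.
    gains = rng.exponential(size=(trials, num_users, num_rx)).sum(axis=2)
    best = gains.max(axis=1)               # the user with the best channel transmits
    # Conditional BPSK error probability: Q(sqrt(2*snr*g)) = 0.5*erfc(sqrt(snr*g)).
    return float(np.mean(0.5 * erfc(np.sqrt(snr * best))))

for n in (1, 2, 4, 8, 16):
    print(n, avg_ber_best_user(num_users=n, num_rx=2, snr_db=5))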
Abstract:
We analyze the production of defects during the dynamical crossing of a mean-field phase transition with a real order parameter. When the parameter that brings the system across the critical point changes in time according to a power-law schedule, we recover the predictions dictated by the well-known Kibble-Zurek theory. For a fixed duration of the evolution, we show that the average number of defects can be drastically reduced for a very large but finite system, by optimizing the time dependence of the driving using optimal control techniques. Furthermore, the optimized protocol is robust against small fluctuations.
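For context, the Kibble-Zurek prediction recovered in the power-law-ramp case relates the defect density to the quench time through the equilibrium critical exponents; in its standard form for a linear ramp of duration \tau_Q (stated here as textbook background, not quoted from the paper):

n_{\mathrm{def}} \sim \hat{\xi}^{-d} \sim \tau_Q^{-d\nu/(1+z\nu)},

where \nu and z are the correlation-length and dynamical critical exponents and d the dimensionality; with mean-field exponents (\nu = 1/2, z = 2) this gives n_def \sim \tau_Q^{-d/4}.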