255 results for Set-Valued Functions
Abstract:
Wireless networks transmit information from a source to a destination via multiple hops in order to save energy and, thus, increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model the above optimization problem for randomly fading channels as a decentralized control problem: the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same for a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes) and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy, so we need to optimize jointly over both; we cast this as a large stochastic optimization problem. We then jointly optimize over a set of CSI functions of the local channel states and a corresponding factored class of control policies corresponding to local cooperation between nodes with a local outage constraint. The resulting optimal scheme in this class can again be computed efficiently in a decentralized manner. We demonstrate significant energy savings for both slow and fast fading channels through numerical simulations of randomly distributed networks.
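A minimal sketch of the shortest-path reduction described for the slow-fading case, assuming the induced graph has already been built and its edge weights stand in for per-hop cooperative transmission energies (the node names, costs, and the shortest_path helper below are illustrative, not from the paper):

    import heapq

    def shortest_path(graph, source, destination):
        """Dijkstra over a dict-of-dicts graph: graph[u][v] = per-hop energy cost."""
        dist = {source: 0.0}
        prev = {}
        heap = [(0.0, source)]
        visited = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in visited:
                continue
            visited.add(u)
            if u == destination:
                break
            for v, cost in graph.get(u, {}).items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
        # Reconstruct the route (sequence of hops) from source to destination.
        path, node = [destination], destination
        while node != source:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), dist[destination]

    # Illustrative induced graph: edge costs stand in for cooperative hop energies
    # computed in a decentralized manner from locally available CSI.
    graph = {"S": {"A": 1.2, "B": 2.0}, "A": {"B": 0.4, "D": 2.5}, "B": {"D": 1.1}}
    print(shortest_path(graph, "S", "D"))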
Abstract:
Even though several techniques have been proposed in the literature for achieving multiclass classification using the Support Vector Machine (SVM), the scalability of these approaches to large data sets still needs much exploration. The Core Vector Machine (CVM) is a technique for scaling up a two-class SVM to handle large data sets. In this paper we propose a Multiclass Core Vector Machine (MCVM). Here we formulate the multiclass SVM problem as a Quadratic Programming (QP) problem defining an SVM with vector-valued output. This QP problem is then solved using the CVM technique to achieve scalability to large data sets. Experiments with several large synthetic and real-world data sets show that the proposed MCVM technique gives generalization performance comparable to that of SVM at a much lower computational expense. Further, it is observed that MCVM scales well with the size of the data set.
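For context, CVM-style scalability rests on recasting SVM training as a minimum enclosing ball (MEB) problem solved approximately over a small core set. The sketch below illustrates only that underlying MEB approximation (a Badoiu-Clarkson-style update), not the paper's MCVM formulation; the data and iteration count are assumptions:

    import numpy as np

    def approx_meb_center(points, n_iter=200):
        """Approximate MEB centre: repeatedly step towards the currently
        farthest point with a shrinking step size 1/(k+1)."""
        c = points[0].astype(float).copy()
        for k in range(1, n_iter + 1):
            far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
            c += (far - c) / (k + 1)
        return c

    rng = np.random.default_rng(0)
    pts = rng.normal(size=(1000, 5))          # stand-in for kernel feature points
    c = approx_meb_center(pts)
    print("approx radius:", np.linalg.norm(pts - c, axis=1).max())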
Abstract:
The statistically steady humidity distribution resulting from an interaction of advection, modelled as an uncorrelated random walk of moist parcels on an isentropic surface, and a vapour sink, modelled as immediate condensation whenever the specific humidity exceeds a specified saturation humidity, is explored with theory and simulation. A source supplies moisture at the deep-tropical southern boundary of the domain, and the saturation humidity is specified as a monotonically decreasing function of distance from the boundary. The boundary source balances the interior condensation sink, so that a stationary, spatially inhomogeneous humidity distribution emerges. An exact solution of the Fokker-Planck equation delivers a simple expression for the resulting probability density function (PDF) of the water-vapour field and also of the relative humidity. This solution agrees completely with a numerical simulation of the process, and the humidity PDF exhibits several features of interest, such as bimodality close to the source and unimodality further from the source. The PDFs of specific and relative humidity are broad and non-Gaussian. The domain-averaged relative humidity PDF is bimodal with distinct moist and dry peaks, a feature which we show agrees with middleworld isentropic PDFs derived from the ERA-Interim dataset. Copyright (C) 2011 Royal Meteorological Society
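A minimal numerical sketch of the advection-condensation model described above, assuming a unit domain, a simple exponential saturation profile, and illustrative parameter values (none taken from the paper): parcels random-walk on the isentrope, condense instantly to the local saturation value when supersaturated, and are resaturated at the moist southern boundary.

    import numpy as np

    rng = np.random.default_rng(1)
    n, steps, dy = 20000, 2000, 0.01
    y = rng.uniform(0.0, 1.0, n)                  # parcel positions on the isentrope
    q_sat = lambda y: np.exp(-3.0 * y)            # monotonically decreasing saturation
    q = q_sat(y).copy()                           # start saturated

    for _ in range(steps):
        y += dy * rng.choice([-1.0, 1.0], n)      # uncorrelated random walk
        y = np.clip(y, 0.0, 1.0)
        q = np.minimum(q, q_sat(y))               # immediate condensation sink
        q[y <= 0.0] = q_sat(0.0)                  # boundary moisture source

    rel_hum = q / q_sat(y)
    hist, edges = np.histogram(rel_hum, bins=50, density=True)
    print("bins with highest RH-PDF density start at:", edges[np.argsort(hist)[-2:]])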
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
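An illustrative-only sketch of a single user's rate choice in the model described above, with assumed functional forms: a logarithmic utility, a quadratic disutility of the shortfall from the contracted rate, a quadratic multiplicative congestion-penalty factor, and a payment of the form price x penalty x rate. None of these forms or parameter values are taken from the paper; the sketch only shows how the three elements trade off.

    import numpy as np

    def best_response(price, contracted_rate, congestion, data_limit):
        x = np.linspace(0.0, data_limit, 2001)                 # feasible rates
        U = np.log(1.0 + x)                                    # concave increasing utility
        D = 2.0 * np.maximum(contracted_rate - x, 0.0) ** 2    # convex disutility of shortfall
        C = 1.0 + congestion ** 2                              # convex congestion-penalty factor
        net = U - D - price * C * x                            # assumed payment form
        return x[np.argmax(net)]                               # rate maximising net benefit

    print(best_response(price=0.05, contracted_rate=0.8, congestion=0.6, data_limit=1.5))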
Abstract:
Many downscaling techniques have been developed in the past few years for projection of station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs) to assess the hydrological impacts of climate change. This article compares the performances of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, belonging to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool useful in terms of its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied for downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performances with respect to reproduction of various statistics, such as dry and wet spell length distributions, daily rainfall distribution, and intersite correlations, are examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian Global Climate Model (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
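A sketch of the K-nearest-neighbour analogue step described above: for a future day's large-scale predictor vector, find the K most similar training days and resample their observed rainfall with rank-based weights. The 1/rank weighting, the toy data, and the helper name are assumptions for illustration.

    import numpy as np

    def knn_downscale(predictor, train_predictors, train_rain, k=5, rng=None):
        rng = rng or np.random.default_rng()
        d = np.linalg.norm(train_predictors - predictor, axis=1)
        nearest = np.argsort(d)[:k]                        # K closest training days
        w = 1.0 / np.arange(1, k + 1)                      # rank-based weights (assumed)
        w /= w.sum()
        return train_rain[rng.choice(nearest, p=w)]        # resampled daily rainfall

    rng = np.random.default_rng(2)
    Xtr = rng.normal(size=(1000, 4))                       # toy large-scale predictors
    rain = np.maximum(rng.gamma(2.0, 3.0, 1000) - 4.0, 0)  # toy daily rainfall (mm)
    print(knn_downscale(rng.normal(size=4), Xtr, rain, k=7, rng=rng))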
Thermal Weight Functions and Stress Intensity Factors for Bonded Dissimilar Media Using Body Analogy
Abstract:
In this study, an analytical method is presented for the computation of thermal weight functions in two-dimensional bi-material elastic bodies containing a crack at the interface and subjected to thermal loads, using the body analogy method. The thermal weight functions are derived for two problems of infinite bonded dissimilar media, one with a semi-infinite crack and the other with a finite crack along the interface. The derived thermal weight functions are shown to reduce to the already known expressions of thermal weight functions available in the literature for the respective homogeneous elastic body. Using these thermal weight functions, the stress intensity factors are computed for the above interface crack problems when subjected to an instantaneous heat source.
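For context, the generic weight-function relation underlying such results (a textbook-style illustration, not the paper's bi-material expressions): once the weight function h(x, a) for a crack of length a is known, the stress intensity factor for any thermally induced crack-face stress distribution sigma_T(x) follows from a single integral,

    K = \int_{0}^{a} h(x, a)\, \sigma_{T}(x)\, \mathrm{d}x .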
Abstract:
Innate immunity recognizes and resists various pathogens; however, the mechanisms regulating pathogen versus non-pathogen discrimination are still imprecisely understood. Here, we demonstrate that pathogen-specific activation of TLR2 upon infection with Mycobacterium bovis BCG, in comparison with other pathogenic microbes, including Salmonella typhimurium and Staphylococcus aureus, programs macrophages for robust up-regulation of signaling cohorts of Wnt-beta-catenin signaling. Signaling perturbations or genetic approaches suggest that infection-mediated stimulation of Wnt-beta-catenin is vital for activation of Notch1 signaling. Interestingly, inducible NOS (iNOS) activity is pivotal for TLR2-mediated activation of Wnt-beta-catenin signaling as iNOS(-/-) mice demonstrated compromised ability to trigger activation of Wnt-beta-catenin signaling as well as Notch1-mediated cellular responses. Intriguingly, TLR2-driven integration of iNOS/NO, Wnt-beta-catenin, and Notch1 signaling contributes to its capacity to regulate the battery of genes associated with T(Reg) cell lineage commitment. These findings reveal a role for differential stimulation of TLR2 in deciding the strength of Wnt-beta-catenin signaling, which together with signals from Notch1 contributes toward the modulation of a defined set of effector functions in macrophages and thus establishes a conceptual framework for the development of novel therapeutics.
Abstract:
This paper deals with surface profilometry, where we try to detect a periodic structure hidden in randomness using the matched-filter method of analysing the intensity of light scattered from the surface. From the direct problem of light scattering from a composite rough surface of the above type, we find that the detectability of the periodic structure can be hindered by the randomness, depending on the correlation function of the random part. In our earlier works, we concentrated mainly on the Cauchy-type correlation function for the rough part. In the present work, we show that this technique can determine the periodic structure for different kinds of correlation functions of the roughness, including Cauchy, Gaussian, etc. We study the detection by the matched-filter method as the nature of the correlation function is varied.
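A hedged sketch of the matched-filter idea, applied here directly to a noisy periodic profile rather than to the scattered-light intensity analysed in the paper: correlate the measured signal with the known periodic template and look for a peak. All signal parameters below are illustrative assumptions.

    import numpy as np

    x = np.linspace(0.0, 50.0, 5000)
    template = np.sin(2 * np.pi * x / 5.0)                     # known periodic structure
    rng = np.random.default_rng(3)
    rough = np.convolve(rng.normal(size=x.size), np.ones(40) / 40, mode="same")
    signal = 0.2 * template + rough                            # structure hidden in roughness

    matched = np.correlate(signal - signal.mean(), template, mode="same")
    print("peak matched-filter output:", np.abs(matched).max())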
Abstract:
In the present study, singular fractal functions (SFF) were used to generate stress-strain plots for quasi-brittle materials like concrete and cement mortar, and subsequently the stress-strain plot of cement mortar obtained using SFF was used for modeling the fracture process in concrete. The fracture surface of concrete is rough and irregular. The fracture surface of concrete is affected by the concrete's microstructure, which is influenced by the water-cement ratio, grade of cement and type of aggregate [1-4]. Also, macrostructural properties such as the size and shape of the specimen, the initial notch length and the rate of loading contribute to the shape of the fracture surface of concrete. It is known that concrete is a heterogeneous and quasi-brittle material containing micro-defects, and its mechanical properties strongly relate to the presence of micro-pores and micro-cracks in concrete [1-4]. The damage in concrete is believed to be mainly due to the initiation and development of micro-defects with irregularity and fractal characteristics. However, repeated observations at various magnifications also reveal a variety of additional structures that fall between the 'micro' and the 'macro' and have not yet been described satisfactorily in a systematic manner [1-11,15-17]. The concept of singular fractal functions by Mosolov was used to generate stress-strain plots of cement concrete and cement mortar, and subsequently the stress-strain plot of cement mortar was used in a two-dimensional lattice model [28]. A two-dimensional lattice model was used to study concrete fracture by considering softening of the matrix (cement mortar). The results obtained from simulations with the lattice model show the softening behavior of concrete and agree fairly well with the experimental results. The number of fractured elements is compared with the acoustic emission (AE) hits. The trend in the cumulative fractured beam elements in the lattice fracture simulation reasonably reflected the trend in the recorded AE measurements. In other words, the pattern in which AE hits were distributed around the notch has the same trend as that of the fractured elements around the notch, which supports the lattice model. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
In arriving at the ideal filter transfer function for an active noise control system in a duct, the effect of the auxiliary sources (generally loudspeakers) on the waves generated by the primary source has invariably been neglected in the existing literature, implying a rigid wall or infinite impedance. The present paper presents a fairly general analysis of a linear one-dimensional noise control system by means of block diagrams and transfer functions. It takes into account the passive as well as active role of a terminal primary source, a wall-mounted auxiliary source, the open-duct radiation impedance, and the effects of mean flow and damping. It is proved that the pressure generated by a source against a load impedance can be looked upon as a sum of two pressure waves, one generated by the source against an anechoic termination and the other obtained by reflecting the rearward wave (incident on the source) off the passive source impedance. Application of this concept is illustrated for both types of sources. A concise closed-form expression for the ideal filter transfer function is thus derived and discussed. Finally, the dynamics of an adaptive noise control system is discussed briefly, relating its standing-wave variables and transfer functions with those of the progressive-wave model presented here.
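One common way to write the stated decomposition in plane-wave notation (a hedged illustration; the paper's own symbols and sign conventions may differ): for a duct of characteristic impedance Z_0 and a source of passive internal impedance Z_s,

    p^{+} = p^{+}_{\mathrm{anech}} + R_s\, p^{-}, \qquad R_s = \frac{Z_s - Z_0}{Z_s + Z_0},

where p^{+}_{anech} is the forward wave the source would launch into an anechoic termination and p^{-} is the rearward wave incident on the source.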
Abstract:
Simple algorithms have been developed to generate pairs of minterms forming a given 2-sum and thereby to test the 2-asummability of switching functions. The 2-asummability testing procedure can be easily implemented on a computer. Since 2-asummability is a necessary and sufficient condition for a switching function of up to eight variables to be linearly separable (LS), it can be used for testing LS switching functions of up to eight variables.
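A brute-force sketch consistent with the description above, assuming the standard definition that a function is 2-asummable when no pair of true minterms (repetition allowed) has the same component-wise vector sum as some pair of false minterms; the example function is an assumption:

    from itertools import combinations_with_replacement, product

    def is_2_asummable(f, n):
        true_pts = [p for p in product((0, 1), repeat=n) if f(p)]
        false_pts = [p for p in product((0, 1), repeat=n) if not f(p)]
        def pair_sums(points):
            return {tuple(a + b for a, b in zip(p, q))
                    for p, q in combinations_with_replacement(points, 2)}
        # A shared 2-sum means the function is 2-summable, hence not 2-asummable.
        return not (pair_sums(true_pts) & pair_sums(false_pts))

    # Threshold (linearly separable) example: f = 1 iff x1 + x2 + x3 >= 2.
    f = lambda p: sum(p) >= 2
    print(is_2_asummable(f, 3))   # expected: True for a threshold function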
Abstract:
The radiation resistance of offset series slots has been calculated for microstrip lines using the method proposed by Breithaupt for striplines. A suitable transformation is made to allow for the difference in structure. Curves relating the slot resistance to the microstrip length, width and offset distance have been obtained. Microstrip slot antenna arrays are becoming important in applications where size and weight are of significance. The radiation resistance is a very significant parameter in the design of such arrays. Oliner first calculated the radiation conductance of centered series slots in strip transmission lines, and that analysis was extended by Breithaupt to offset series slots in stripline. The radiation resistance of offset series slots in microstrip lines is calculated in this paper, and data are obtained for different slot lengths, slot widths and offset values. An example of the use of these data in array antenna design is shown.