944 results for Multi-soft sets
Abstract:
The idea of ubiquity and seamless connectivity in networks is gaining importance because of the emergence of mobile devices with added capabilities such as multiple interfaces and greater processing power. The success of ubiquitous applications depends on how effectively the user is provided with seamless connectivity. In a ubiquitous application, seamless connectivity encompasses the smooth migration of a user between networks and the automatic provision of context-based information at all times. In this work, we propose a seamless connectivity scheme for ubiquitous networks in the true sense, providing a user with smooth migration together with information based on his/her context, automatically and without re-registration with the foreign network. The scheme uses Ubi-SubSystems (USS) and Soft-Switches (SS) to maintain the ubiquitous application resources and the users. The scheme has been tested on a ubiquitous touring system with several sets of tourist spots and users.
Abstract:
Effective feature extraction for robust speech recognition is a widely addressed topic, and there is currently much effort to invoke non-stationary signal models instead of the quasi-stationary models underlying standard features such as LPC or MFCC. Joint amplitude modulation and frequency modulation (AM-FM) is a classical non-parametric approach to non-stationary signal modeling, and new feature sets for automatic speech recognition (ASR) have recently been derived from multi-band AM-FM representations of the signal. We consider several of these representations and compare their performance for robust speech recognition in noise using the AURORA-2 database. We show that the proposed FEPSTRUM representation is more effective than the others. We also propose an improvement to FEPSTRUM based on the Teager energy operator (TEO) and show that it can selectively outperform even FEPSTRUM.
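For reference, the discrete Teager energy operator mentioned above is commonly defined as Ψ[x(n)] = x²(n) − x(n−1)·x(n+1). The short Python sketch below illustrates that operator on a synthetic frame; it is not code from the paper.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]  # replicate values at the frame boundaries
    return psi

# Toy usage: TEO of a 1 kHz tone sampled at 8 kHz (roughly constant, as expected for a pure tone)
fs, f = 8000, 1000
t = np.arange(0, 0.02, 1.0 / fs)
frame = np.sin(2 * np.pi * f * t)
print(teager_energy(frame)[:5])
```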
Abstract:
We consider single-source single-sink (ss-ss) multi-hop relay networks with slow-fading links and single-antenna half-duplex relay nodes. While two-hop cooperative relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and layered networks. KPP networks can be viewed as the union of K node-disjoint parallel relaying paths, each of length greater than one. KPP networks are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the DMT of these families of networks completely for K > 3. Layered networks are networks comprising layers of relays, with edges existing only between adjacent layers and more than one relay in each layer. We prove that a linear DMT between the maximum diversity dmax and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks. This is shown to be equal to the optimal DMT if the number of relaying layers is less than 4. For multiple-antenna KPP and layered networks, we provide an achievable DMT, which is significantly better than known lower bounds for half-duplex networks. For arbitrary multi-terminal wireless networks with multiple source-sink pairs, the maximum achievable diversity is shown to be equal to the min-cut between the corresponding source and sink, irrespective of whether the network has half-duplex or full-duplex relays. For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable. Along the way, we derive the optimal DMT of a generalized parallel channel and derive lower bounds for the DMT of triangular channel matrices, which are useful in DMT computations for various protocols. We also give alternative and often simpler proofs of several existing results and show that codes achieving full diversity on a MIMO Rayleigh fading channel achieve full diversity on arbitrary fading channels. All protocols in this paper are explicit and use only amplify-and-forward (AF) relaying. We also construct codes with short block lengths based on cyclic division algebras that achieve the optimal DMT for all the proposed schemes. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks and that simple AF protocols are often sufficient to attain the optimal DMT.
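For context, a linear DMT between the maximum diversity dmax and a maximum multiplexing gain of 1, as claimed above, corresponds to the tradeoff curve (standard form, stated here for reference rather than taken from the paper's derivations):

```latex
d(r) = d_{\max}\,(1 - r), \qquad 0 \le r \le 1,
```

where r is the multiplexing gain and d(r) the diversity order achievable at that gain.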
Abstract:
The poor performance of TCP over multi-hop wireless networks is well known. In this paper we explore to what extent network coding can help to improve the throughput of TCP-controlled bulk transfers over a chain-topology multi-hop wireless network. The nodes use a CSMA/CA mechanism, such as IEEE 802.11's DCF, to perform distributed packet scheduling. The reverse-flowing TCP ACKs are sought to be XOR-ed with forward-flowing TCP data packets. We find that, without any modification to the MAC protocol, the gain from network coding is negligible; the inherent coordination problem of carrier-sensing-based random access in multi-hop wireless networks dominates the performance. We provide a theoretical analysis that yields a throughput bound with network coding. We then propose a distributed modification of the IEEE 802.11 DCF, based on tuning the back-off mechanism using a feedback approach. Simulation studies show that the proposed mechanism, when combined with network coding, improves the performance of a TCP session by more than 100%.
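To make the coding operation concrete, the sketch below (illustrative only, not the implementation used in the paper) shows a relay XOR-ing a forward-flowing TCP data packet with a reverse-flowing TCP ACK so that a single broadcast replaces two transmissions; each neighbour recovers the packet it is missing by XOR-ing the coded packet with the one it already holds. The packet contents are placeholders.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two byte strings, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# The relay has a forward TCP data packet and a reverse TCP ACK queued in opposite directions.
data_pkt = b"TCP DATA seq=1000"
ack_pkt  = b"TCP ACK  ack=2000"

coded = xor_bytes(data_pkt, ack_pkt)          # one broadcast instead of two unicasts

# The downstream node already holds the ACK it generated, so it recovers the data packet;
# the upstream node already holds the data packet it sent, so it recovers the ACK.
recovered_data = xor_bytes(coded, ack_pkt)
recovered_ack  = xor_bytes(coded, data_pkt)
assert recovered_data == data_pkt and recovered_ack == ack_pkt
```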
Abstract:
Learning to rank from relevance judgments is an active research area. Itemwise score regression, pairwise preference satisfaction, and listwise structured learning are the major techniques in use. Listwise structured learning has recently been applied to optimize important non-decomposable ranking criteria such as AUC (area under the ROC curve) and MAP (mean average precision). We propose new, almost-linear-time algorithms to optimize two other criteria widely used to evaluate search systems, MRR (mean reciprocal rank) and NDCG (normalized discounted cumulative gain), in the max-margin structured learning framework. We also demonstrate that, for different ranking criteria, one may need to use different feature maps. Search applications should not be optimized in favor of a single criterion, because they need to cater to a variety of queries; e.g., MRR is best for navigational queries, while NDCG is best for informational queries. A key contribution of this paper is to fold multiple ranking loss functions into a multi-criteria max-margin optimization. The result is a single, robust ranking model that is close to the best accuracy of learners trained on individual criteria. In fact, experiments over the popular LETOR and TREC data sets show that, contrary to conventional wisdom, a test criterion is often not best served by training with the same individual criterion.
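The two criteria targeted here have standard definitions, which the Python sketch below computes for toy rankings; this is only a reference implementation of the evaluation measures, not the max-margin training procedure of the paper.

```python
import math

def mean_reciprocal_rank(ranked_relevance_lists):
    """MRR: average over queries of 1/rank of the first relevant result."""
    rr = []
    for rels in ranked_relevance_lists:          # rels[i] is the relevance of the i-th ranked item
        rank = next((i + 1 for i, r in enumerate(rels) if r > 0), None)
        rr.append(1.0 / rank if rank else 0.0)
    return sum(rr) / len(rr)

def ndcg(rels, k=None):
    """NDCG@k with the common 2**rel - 1 gain and log2 position discount."""
    k = k or len(rels)
    dcg = sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = sorted(rels, reverse=True)
    idcg = sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Toy usage: two queries with binary relevance, and one graded ranking
print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))   # (1/2 + 1/1) / 2 = 0.75
print(ndcg([3, 2, 3, 0, 1, 2], k=6))
```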
Abstract:
Few design methods are available for geocell-supported embankments. Two of the earlier methods are considered in this paper, and a third method is proposed and compared with them. The first method is the slip-line method proposed by earlier researchers. The second method is based on a slope stability analysis proposed earlier by this author, and the new method proposed here is based on finite element analyses. In the first method, plastic bearing failure of the soil is assumed, and the additional resistance due to the geocell layer is calculated using a non-symmetric slip-line field in the soft foundation soil. In the second method, a general-purpose slope stability program is used to design the geocell mattress of required strength for the embankment, using a composite model to represent the shear strength of the geocell layer. In the third method, proposed in this paper, the geocell reinforcement is designed based on plane-strain finite element analysis of the embankment. The geocell layer is modelled as an equivalent composite layer with modified strength and stiffness values. The strength and dimensions of the geocell layer are estimated for the required bearing capacity or permissible deformations. These three design methods are compared through a design example.
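As a hedged illustration of an "equivalent composite layer" of the kind mentioned above, the sketch below uses a relation often cited in the geocell literature (an assumption here; the paper's exact formulation may differ): the membrane hoop stress in a cell of diameter d and modulus M under axial strain εa raises the confining pressure, and that increase is converted into an apparent cohesion through the passive earth-pressure coefficient of the infill soil.

```python
import math

def geocell_apparent_cohesion(M, d, eps_a, phi_deg):
    """Additional apparent cohesion (kPa) of a geocell-soil composite layer.

    M       : geocell material modulus (kN/m)
    d       : equivalent cell diameter (m)
    eps_a   : axial strain in the geocell layer
    phi_deg : friction angle of the infill soil (degrees)
    Illustrative composite-model relation, not necessarily the paper's.
    """
    d_sigma3 = (2.0 * M / d) * (1.0 - math.sqrt(1.0 - eps_a)) / (1.0 - eps_a)  # added confinement
    kp = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2                      # passive coefficient
    return 0.5 * d_sigma3 * math.sqrt(kp)

# e.g. modulus 200 kN/m, cell diameter 0.2 m, 2% axial strain, infill friction angle 30 degrees
print(geocell_apparent_cohesion(M=200.0, d=0.2, eps_a=0.02, phi_deg=30.0))
```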
Abstract:
An integrated reservoir operation model is presented for developing effective operational policies for irrigation water management. In arid and semi-arid climates, owing to dynamic changes in hydroclimatic conditions within a season, a fixed cropping pattern with conventional operating policies may considerably impair the performance of the irrigation system and may affect the economics of the farming community. For optimal allocation of irrigation water within a season, effective mathematical models can guide water managers in decision making and consequently help reduce the adverse effects of water shortage and crop failure. This paper presents a multi-objective integrated reservoir operation model for a multi-crop irrigation system. To solve the multi-objective model, a recent swarm intelligence technique, elitist-mutated multi-objective particle swarm optimisation (EM-MOPSO), is used and applied to a case study in India. The method evolves effective strategies for irrigation crop planning and operating policies for a reservoir system, thereby helping the farming community improve crop benefits and water resource usage in the reservoir command area.
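EM-MOPSO itself is specific to the cited work; as a generic point of reference, the canonical particle swarm update on which such multi-objective variants build is sketched below, with each particle assumed (for illustration only) to encode a candidate release schedule.

```python
import random

def pso_step(position, velocity, pbest, gbest, w=0.5, c1=1.5, c2=1.5):
    """One canonical particle swarm update (illustrative; not the elitist-mutated variant).

    position, velocity, pbest, gbest are equal-length lists of decision variables,
    e.g. reservoir releases per irrigation period."""
    new_x, new_v = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)   # inertia + cognitive + social terms
        new_v.append(v_new)
        new_x.append(x + v_new)
    return new_x, new_v

# Toy usage: a particle encoding releases for three irrigation periods
x, v = [10.0, 12.0, 8.0], [0.0, 0.0, 0.0]
x, v = pso_step(x, v, pbest=[11.0, 12.5, 7.5], gbest=[12.0, 13.0, 7.0])
print(x)
```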
Abstract:
Estimates of creep and shrinkage are critical for computing the loss of prestress with time, which in turn is needed to assess leak tightness and the safety margins available in containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted, using in-house test facilities developed specifically for the present research programme, on 35 and 45 MPa normal concrete and 25 MPa heavy-density concrete. In the extensive creep programme, cylinders are subjected to sustained load levels, typically for several days (until negligible strain increase with time is observed in the creep specimen), to provide total creep strain versus time curves for the two normal-density concrete grades and the one heavy-density concrete grade at different load levels, different ages at loading, and different relative humidities. Shrinkage studies on prism specimens of the same mix grades are also being carried out. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the creep and shrinkage levels in the subsequent experimental data with acceptable accuracy. Short-term macro-scale experiments and analytical model development, to estimate time-dependent deformation under long-term sustained loads, account for the composite rheology through parameters such as characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water) and volume-to-surface ratio, together with the associated uncertainties in these variables; this forms one part of the study. At the same time, it is widely believed that strength, early-age rheology, creep and shrinkage are affected by material properties at the nano-scale that are not well established. In order to understand and improve cement and concrete properties, the nanostructure of the composite and how it relates to the local mechanical properties is being investigated. While the creep and shrinkage results obtained at the macro-scale and their predictions through rheological modelling are satisfactory, the nano- and micro-indentation experimental and analytical studies are presently under way. Computational-mechanics-based models for creep and shrinkage in concrete must account for the numerous parameters that influence the short- and long-term response. A Kelvin-type model with several elements, representing the influence of the various factors that affect the behaviour, is under development. The immediate short-term (elastic) deformation, the effects of relative humidity and temperature, volume-to-surface ratio, water-cement ratio, aggregate-cement ratio, load level and age of concrete at loading are the parameters accounted for in this model. Inputs to this model, such as the pore structure and mechanical properties at the micro/nano scale, have been taken from scanning electron microscopy and micro/nano-indentation of the sample specimens.
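A Kelvin-type model of the kind described above is typically assembled as a chain of spring-dashpot (Kelvin-Voigt) units. The sketch below shows only the basic creep-compliance form of such a chain under constant stress; the element values are placeholders, and the additional factors listed in the abstract (humidity, temperature, volume-to-surface ratio, mix proportions, etc.) are not modelled here.

```python
import math

def kelvin_chain_creep_strain(sigma, t, E_inst, units):
    """Total strain of a Kelvin chain under constant stress sigma at time t (days).

    E_inst : instantaneous (elastic) modulus
    units  : list of (E_i, tau_i) pairs, one per Kelvin-Voigt element;
             each element contributes sigma/E_i * (1 - exp(-t/tau_i)).
    """
    strain = sigma / E_inst                       # immediate elastic response
    for E_i, tau_i in units:
        strain += (sigma / E_i) * (1.0 - math.exp(-t / tau_i))   # delayed (creep) response
    return strain

# Placeholder parameters: 10 MPa sustained stress, 30 GPa instantaneous modulus, three elements
units = [(90e3, 1.0), (60e3, 10.0), (40e3, 100.0)]   # (E_i in MPa, tau_i in days)
print(kelvin_chain_creep_strain(sigma=10.0, t=28.0, E_inst=30e3, units=units))
```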
Abstract:
Land cover (LC) refers to what is actually present on the ground, and it provides insight into the underlying solutions for many issues, from water pollution to sustainable economic development. One of the greatest challenges in modelling LC changes using remotely sensed (RS) data is the scale-resolution mismatch: the spatial resolution is coarser than what is required, and this sub-pixel heterogeneity is important but not readily knowable. Many pixels, moreover, consist of a mixture of multiple classes. The solution to the mixed-pixel problem typically centres on soft classification techniques that estimate the proportion of each class within a pixel; however, the spatial distribution of these class components within the pixel remains unknown. This study investigates Orthogonal Subspace Projection, an unmixing technique, and uses a pixel-swapping algorithm to predict the spatial distribution of LC at sub-pixel resolution. Both algorithms are applied to several simulated and actual satellite images for validation. The accuracy on the simulated images is ~100%, while IRS LISS-III and MODIS data show accuracies of 76.6% and 73.02%, respectively. This demonstrates the relevance of these techniques for applications such as urban/non-urban and forest/non-forest classification studies.
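The pixel-swapping step can be sketched as follows. This is an illustrative re-implementation of the general algorithm, not the code used in the study: sub-pixel labels are swapped within each coarse pixel to maximise a distance-weighted attractiveness to neighbouring sub-pixels, which clusters each class spatially while preserving the per-pixel class proportions delivered by the soft classification.

```python
import math
import numpy as np

def attractiveness(alloc, i, j, radius=1):
    """Distance-weighted count of occupied neighbouring sub-pixels (higher = more clustered)."""
    h, w = alloc.shape
    score = 0.0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            if (di or dj) and 0 <= i + di < h and 0 <= j + dj < w:
                score += alloc[i + di, j + dj] / math.hypot(di, dj)
    return score

def pixel_swap_pass(alloc, scale):
    """One pass: inside each coarse pixel (a scale x scale block of sub-pixels), swap the
    least attractive '1' with the most attractive '0' when that increases clustering.
    Per-pixel class counts are preserved by construction."""
    h, w = alloc.shape
    for bi in range(0, h, scale):
        for bj in range(0, w, scale):
            cells = [(i, j) for i in range(bi, bi + scale) for j in range(bj, bj + scale)]
            ones  = [(attractiveness(alloc, i, j), i, j) for i, j in cells if alloc[i, j] == 1]
            zeros = [(attractiveness(alloc, i, j), i, j) for i, j in cells if alloc[i, j] == 0]
            if ones and zeros and max(zeros)[0] > min(ones)[0]:
                _, i1, j1 = min(ones)    # least attractive occupied sub-pixel
                _, i0, j0 = max(zeros)   # most attractive empty sub-pixel
                alloc[i1, j1], alloc[i0, j0] = 0, 1
    return alloc

# Toy usage: 8 x 8 sub-pixel grid, 4 x 4 sub-pixels per coarse pixel, random initial allocation
rng = np.random.default_rng(0)
alloc = (rng.random((8, 8)) < 0.4).astype(int)
for _ in range(10):
    alloc = pixel_swap_pass(alloc, scale=4)
print(alloc)
```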
Abstract:
This paper is concerned with the dynamic analysis of flexible, non-linear multi-body beam systems. The focus is on problems where the strains within each elastic body (beam) remain small. Based on geometrically non-linear elasticity theory, the non-linear 3-D beam problem splits into either a linear or non-linear 2-D analysis of the beam cross-section and a non-linear 1-D analysis along the beam reference line. The splitting of the three-dimensional beam problem into two- and one-dimensional parts, called dimensional reduction, results in tremendous savings of computational effort relative to the cost of three-dimensional finite element analysis, the only alternative for realistic beams. The analysis of beam-like structures made of laminated composite materials requires a much more complicated methodology. Hence, the analysis procedure based on the Variational Asymptotic Method (VAM), a tool to carry out the dimensional reduction, is used here. The analysis methodology can be viewed as a 3-step procedure. First, the sectional properties of beams made of composite materials are determined either by an asymptotic procedure that involves a 2-D finite element non-linear analysis of the beam cross-section to capture the trapeze effect, or by a strip-like beam analysis starting from Classical Laminated Shell Theory (CLST). Second, the dynamic response of non-linear, flexible multi-body beam systems is simulated within the framework of energy-preserving and energy-decaying time integration schemes that provide unconditional stability for non-linear beam systems. Finally, local 3-D responses in the beams are recovered based on the 1-D responses predicted in the second step. Numerical examples are presented, and results from this analysis are compared with those available in the literature.
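To indicate how the 2-D and 1-D parts fit together, the hedged sketch below shows the sectional constitutive law that a 1-D composite-beam analysis consumes once a cross-sectional analysis (e.g. via VAM) has produced the stiffness matrix; the matrix entries and strain values are placeholders, not results from the paper.

```python
import numpy as np

# Illustrative only: a 4x4 cross-sectional stiffness matrix S couples extension, twist and the
# two bending curvatures, so the 1-D constitutive law reads
#   [F, M_x, M_y, M_z]^T = S @ [gamma_11, kappa_1, kappa_2, kappa_3]^T
S = np.diag([1.2e7, 4.0e4, 9.0e4, 2.5e5])     # placeholder EA, GJ, EI_y, EI_z (SI units)
S[0, 1] = S[1, 0] = 3.0e3                     # placeholder extension-twist coupling from the layup

gamma = np.array([1e-4, 2e-3, 5e-4, -3e-4])   # 1-D generalized strains at a beam section
forces = S @ gamma                             # sectional force and moment resultants
energy_per_length = 0.5 * gamma @ S @ gamma    # strain energy per unit beam length
print(forces, energy_per_length)
```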