959 results for componentwise ultimate bounds


Relevance: 10.00%

Abstract:

Conservation of natural resources through sustainable ecosystem management and development is the key to our secure future. Ecosystem management involves inventorying and monitoring, and applying integrated technologies, methodologies and interdisciplinary approaches for conservation. It is therefore more critical than ever for people to be environmentally literate, and to realise this vision, both ecological and environmental education must become a fundamental part of the education system at all levels. Humankind as a whole needs a clear understanding of environmental concerns and must follow sustainable development practices. The degradation of our environment is linked to continuing problems of pollution, loss of forests, solid waste disposal, and issues of economic productivity and national as well as ecological security. Environmental management has gained momentum in recent years, with initiatives focussing on managing environmental hazards and preventing possible disasters. Environmental issues make better sense when one can understand them in the context of one's own cognitive sphere. Environmental education focusing on real-world contexts and issues often begins close to home, encouraging learners to understand and forge connections with their immediate surroundings. The awareness, knowledge, and skills needed for these local connections and understandings provide a base for moving out into larger systems, broader issues, and a more sophisticated comprehension of causes, connections, and consequences. The Environmental Education Programme at CES, run in collaboration with the Karnataka Environment Research Foundation (KERF) and referred to as 'Know your Ecosystem', focuses on investigating ecosystems within the context of human influences, incorporating an examination of ecology, economics, culture, political structure, and social equity as well as natural processes and systems. The ultimate goal of environmental education is to develop an environmentally literate public. It needs to address the connection between our conception and practice of education and our relationship, as human cultures, to life-sustaining ecological systems. For each environmental issue there are many perspectives and much uncertainty. Environmental education cultivates the ability to recognise uncertainty, envision alternative scenarios, and adapt to changing conditions and information. This knowledge, these skills, and this mindset translate into a citizenry that is better equipped to address common problems and take advantage of opportunities, whether or not environmental concerns are involved.

Relevance: 10.00%

Abstract:

To a reasonable approximation, the secondary structure of RNA is determined by Watson-Crick pairing without pseudo-knots in such a way as to minimise the number of unpaired bases. We show that this minimal number is determined by the maximal conjugacy-invariant pseudo-norm on the free group on two generators, subject to bounds on the generators. This allows us to construct lower bounds on the minimal number of unpaired bases by constructing conjugacy-invariant pseudo-norms. We show that one such construction, based on isometric actions on metric spaces, gives a sharp lower bound. A major goal here is to formulate a purely mathematical question, based on considering orthogonal representations, which we believe is of some interest independent of its biological roots.
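
For readers who want the shape of the bound, here is one way the statement can be formalized; the base-to-generator assignment below is our illustrative assumption, not notation from the paper.

```latex
% Illustrative formalization; the base-to-generator assignment is assumed:
%   A -> a,  U -> a^{-1},  G -> b,  C -> b^{-1},
% so an RNA sequence becomes a word w in the free group F_2 = <a, b>.
% Since a non-crossing Watson-Crick pairing cancels generators against
% their inverses up to conjugation, while each unpaired base contributes
% at most 1, every conjugacy-invariant pseudo-norm \ell on F_2 with
% \ell(a^{\pm 1}), \ell(b^{\pm 1}) \le 1 yields the lower bound
\[
  \mathrm{unpaired}(w) \;\ge\; \ell(w).
\]
```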

Relevance: 10.00%

Abstract:

In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup time of a job is proportional to the total length of the jobs already scheduled (hence 'past-sequence-dependent'). The learning effect reduces the actual processing time of a job because the workers perform the same job or activity repeatedly; hence, the processing time of a job depends on its position in the sequence. We take the total absolute difference in completion times (TADC) as the objective function. This problem is denoted 1/LE, (Spsd)/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). Two parameters, a and b, denote the constant learning index and the normalising index, respectively. We carry out a parametric analysis of b on the 1/LE, (Spsd)/TADC problem for a given value of a, and develop a computational algorithm that obtains the number of optimal sequences and the range of b in which each sequence is optimal, for a given value of a. We derive two bounds: b* for the normalising constant b and a* for the learning index a. We also show that, when a < a* or b > b*, the optimal sequence is obtained by placing the longest job in the first position and the remaining jobs in shortest processing time (SPT) order.
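
The model described above is easy to experiment with. Below is a minimal Python sketch under one plausible reading of the abstract (actual processing time p·r^a for position r, setup time b times the actual work already scheduled); the instance data are made up.

```python
from itertools import permutations

def tadc(seq, a, b):
    """Total absolute difference in completion times for a job sequence.

    Model assumed for illustration, following the abstract's description:
      - the job in position r has actual processing time p * r**a
        (a <= 0 is the learning index),
      - its past-sequence-dependent setup time is b times the total
        actual processing time of the jobs already scheduled.
    """
    t, processed, completions = 0.0, 0.0, []
    for r, p in enumerate(seq, start=1):
        actual = p * r ** a              # position-dependent learning effect
        t += b * processed + actual      # p-s-d setup, then processing
        processed += actual
        completions.append(t)
    return sum(abs(ci - cj) for i, ci in enumerate(completions)
                            for cj in completions[i + 1:])

# Enumerate all sequences of a small instance to inspect which one
# minimizes TADC for the given learning index a and normalising index b.
jobs, a, b = [5.0, 2.0, 3.0, 1.0], -0.8, 0.1
best = min(permutations(jobs), key=lambda s: tadc(s, a, b))
print(best, tadc(best, a, b))
```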

Relevance: 10.00%

Abstract:

A methodology is presented for the synthesis of analog circuits using piecewise linear (PWL) approximations. The function to be synthesized is divided into PWL segments such that each segment can be realized using elementary MOS current-mode programmable-gain circuits. By connecting a number of these elementary current-mode circuits in parallel, a piecewise linear approximation of an arbitrary analog function can be realized within the allowed approximation error bounds. Simulation results show close agreement between the desired function and the synthesized output. The number of PWL segments used for the approximation, and hence the circuit area, is determined by the required accuracy and the smoothness of the function.
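
The segmentation step has a natural software analogue. The sketch below greedily grows each linear segment until the chord violates the error bound; it illustrates the approximation idea only, not the paper's circuit-level procedure.

```python
import numpy as np

def pwl_segments(f, lo, hi, tol, n_grid=2000):
    """Greedily split [lo, hi] into few linear segments such that the
    chord through each segment's endpoints stays within `tol` of f.
    In the hardware flow sketched above, each segment would map to one
    programmable-gain current-mode cell connected in parallel."""
    xs = np.linspace(lo, hi, n_grid)
    ys = f(xs)
    segments, i = [], 0
    while i < n_grid - 1:
        j = n_grid - 1
        while j > i + 1:
            # chord (linear interpolant) over the candidate segment
            chord = np.interp(xs[i:j + 1], [xs[i], xs[j]], [ys[i], ys[j]])
            if np.max(np.abs(chord - ys[i:j + 1])) <= tol:
                break
            j -= 1                       # shrink until the bound holds
        segments.append((xs[i], xs[j]))
        i = j
    return segments

# Example: smoother functions need fewer segments for the same accuracy.
print(len(pwl_segments(np.tanh, -4, 4, tol=1e-2)))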

Relevance: 10.00%

Abstract:

In a typical sensor network scenario, the goal is to monitor a spatio-temporal process through a number of inexpensive sensing nodes, the key parameter being the fidelity at which the process must be estimated at distant locations. We study such a scenario in which multiple encoders transmit their correlated data at finite rates to a distant, common decoder. In particular, we derive inner and outer bounds on the rate region for the random field to be estimated with a given mean distortion.
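
For orientation, inner bounds for this kind of multiterminal estimation problem are typically stated in the Berger-Tung form; the following representative statement is included for illustration and is not the paper's exact result.

```latex
% Representative Berger--Tung-type inner bound (illustrative form only).
% Encoders observe correlated sources Y_1, ..., Y_L; the decoder
% reconstructs the field X as \hat{X} = g(U_1, ..., U_L), where each
% auxiliary U_i satisfies the Markov chain U_i - Y_i - (everything else):
\[
  \sum_{i \in S} R_i \;\ge\; I\!\left( Y_S ;\, U_S \,\middle|\, U_{S^c} \right)
  \quad \text{for every } S \subseteq \{1,\dots,L\},
  \qquad \mathbb{E}\, d\big(X, \hat{X}\big) \le D .
\]
```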

Relevance: 10.00%

Abstract:

We consider the problem of providing mean-delay and average-throughput guarantees in random-access fading wireless channels that use the CSMA/CA algorithm. The problem becomes much more challenging when the scheduling is distributed, as is the case in a typical wireless local area network. We model the CSMA network using a novel queueing-network-based approach, and obtain the optimal throughput per device and the throughput-optimal policy for an M-device network. We provide a simple contention control algorithm that adapts the attempt probability to the network load, and we derive bounds on the packet transmission delay. The only information used is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
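
A minimal sketch of a contention-control rule in the spirit of the abstract is given below; the 1/n back-off rule and the slot model are our illustrative assumptions, not the paper's algorithm.

```python
import random

def attempt_probability(M, delayed_queues=None):
    """Back off as 1/n, where n is the number of backlogged devices
    estimated from (delayed) queue lengths, falling back to the network
    size M when no queue information is available. With n active
    stations, a per-slot attempt probability of 1/n maximizes the
    success probability of a slotted random-access channel."""
    if delayed_queues is None:
        return 1.0 / M
    n_active = max(1, sum(q > 0 for q in delayed_queues))
    return 1.0 / n_active

def simulate_slot(queues, delayed_queues):
    """One contention slot: a transmission succeeds iff exactly one
    backlogged device attempts (collisions waste the slot)."""
    p = attempt_probability(len(queues), delayed_queues)
    attempts = [i for i, q in enumerate(queues)
                if q > 0 and random.random() < p]
    if len(attempts) == 1:
        queues[attempts[0]] -= 1      # successful packet transmission
        return True
    return False                      # idle slot or collision
```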

Relevance: 10.00%

Abstract:

Surfactant-intercalated layered double-hydroxide solid Mg-Al LDH-dodecyl sulfate (DDS) undergoes rapid and facile delamination to its ultimate constituent, single sheets of nanometer thickness and micrometer size, in a nonpolar solvent such as toluene to form stable dispersions. The delaminated nanosheets are electrically neutral because the surfactant chains remain tethered to the inorganic layer even on exfoliation. With increasing volume fraction of the solid, the dispersion transforms from a free-flowing sol to a solidlike gel. Here we have investigated the sol-gel transition in dispersions of the hydrophobically modified Mg-Al LDH-DDS in toluene by rheology, SAXS, and ¹H NMR measurements. The rheo-SAXS measurements show that the sharp rise in the viscosity of the dispersion during gel formation is a consequence of a tactoidal microstructure formed by the stacking of the nanosheets with an intersheet separation of 3.92 nm. The origin and nature of the attractive forces that lead to the formation of the tactoidal structure were obtained from 1D and 2D ¹H NMR measurements that provided direct evidence of the association of the toluene solvent molecules with the terminal methyl of the tethered DDS surfactant chains. Gel formation is a consequence of the attractive dispersive interactions of toluene molecules with the tails of DDS chains anchored to opposing Mg-Al LDH sheets. The toluene solvent molecules function as molecular "glue" holding the nanosheets within the tactoidal microstructure together. Our study shows how rheology, SAXS, and NMR measurements complement each other to provide a molecular-level description of the sol-gel transition in dispersions of a hydrophobically modified layered double hydroxide.

Relevance: 10.00%

Abstract:

In the present paper, the effects of non-isothermal rolling temperature and reduction in thickness, followed by annealing, on the microstructure and mechanical properties of ZM21 magnesium alloy were investigated. The alloy was rolled at four temperatures (250 °C, 300 °C, 350 °C and 400 °C) with reductions of 25%, 50% and 75%. Non-isothermal rolling resulted in grain refinement and introduced shear bands and twins into the matrix alloy. Partial to full recrystallization was observed when the rolling temperature was above the recrystallization temperature. Rolling and subsequent annealing resulted in strain-free equiaxed grains and the complete disappearance of shear bands and twins. The maximum ultimate strength (345 MPa) with good ductility (14%) was observed in the sample rolled at 250 °C with 75% reduction in thickness followed by a short anneal. Recrystallization during warm/hot rolling was sluggish, but the post-roll treatment gives distinct views of dynamic and static recrystallization.

Relevance: 10.00%

Abstract:

In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium depend on each other and on the written data. We consider a simple 1-D combinatorial model of this medium, in which binary data are written sequentially and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct this type of error, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium and derive lower and upper estimates of its capacity. The lower bound is obtained by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under a uniform input distribution. The upper bound follows by showing that the original channel is a stochastic degradation of another, related channel model whose capacity can be computed explicitly.
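
A minimal simulation of the error mechanism makes the data-dependence explicit; reading "the immediately preceding value" as the preceding written bit is our assumption.

```python
import random

def write_with_errors(bits, p):
    """Illustrative simulation of the 1-D model described above: data are
    written sequentially, and each bit may erroneously change to the
    immediately preceding written value with probability p. Note the
    data-dependence: a bit equal to its predecessor can never be in error."""
    out = list(bits)
    for i in range(1, len(bits)):
        if bits[i] != bits[i - 1] and random.random() < p:
            out[i] = bits[i - 1]
    return out

# Example: only positions where the data changes value are vulnerable.
print(write_with_errors([0, 1, 1, 0, 1, 0, 0, 1], p=0.5))
```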

Relevance: 10.00%

Abstract:

Spectral efficiency is a key characteristic of cellular communications systems, as it quantifies how well the scarce spectrum resource is utilized. It is influenced by the scheduling algorithm as well as the signal and interference statistics, which, in turn, depend on the propagation characteristics. In this paper we derive analytical expressions for the short-term and long-term channel-averaged spectral efficiencies of the round robin, greedy Max-SINR, and proportional fair schedulers, which are popular and cover a wide range of system performance and fairness trade-offs. A unified spectral efficiency analysis is developed to highlight the differences among these schedulers. The analysis is different from previous work in the literature in the following aspects: (i) it does not assume the co-channel interferers to be identically distributed, as is typical in realistic cellular layouts, (ii) it avoids the loose spectral efficiency bounds used in the literature, which only considered the worst case and best case locations of identical co-channel interferers, (iii) it explicitly includes the effect of multi-tier interferers in the cellular layout and uses a more accurate model for handling the total co-channel interference, and (iv) it captures the impact of using small modulation constellation sizes, which are typical of cellular standards. The analytical results are verified using extensive Monte Carlo simulations.
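
The three schedulers are easy to compare in a toy Monte Carlo setting. The sketch below uses i.i.d. Rayleigh fading and Shannon rates, so it deliberately ignores the non-identical interferers and small constellations that the paper's analysis handles.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, beta = 8, 10_000, 0.01        # users, slots, PF averaging factor
avg = np.ones(K)                     # proportional-fair throughput averages
totals = {"round robin": 0.0, "max-SINR": 0.0, "proportional fair": 0.0}

for t in range(T):
    sinr = rng.exponential(scale=1.0, size=K)   # i.i.d. Rayleigh-fading SINRs
    r = np.log2(1.0 + sinr)                     # per-user achievable rates
    totals["round robin"] += r[t % K]           # cycle through users
    totals["max-SINR"] += r.max()               # greedy: best instantaneous rate
    k = int(np.argmax(r / avg))                 # proportional-fair metric
    totals["proportional fair"] += r[k]
    served = np.zeros(K)
    served[k] = r[k]
    avg = (1 - beta) * avg + beta * served      # update PF averages

for name, tot in totals.items():
    print(f"{name}: {tot / T:.3f} bit/s/Hz")    # channel-averaged efficiency
```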

Relevance: 10.00%

Abstract:

Homogenization of partial differential equations is a relatively new area with tremendous applications in various branches of the engineering sciences: material science, porous media, the study of vibrations of thin structures, and composite materials, to name a few. Though material scientists and others had a reasonable idea of the homogenization process, it lacked a good mathematical theory until the early seventies. The first proper mathematical procedure was developed in the seventies, and over the last 30 years or so the subject has flourished, both in applications and mathematically. This is not a full survey article, nor do we concentrate on a specialized problem; we do indicate certain specialized problems of our interest, without much detail, but that is not the main theme of the article. I plan to give an introductory presentation aimed at a wider audience. We go through a few examples to understand the homogenization procedure in a general perspective, together with applications. We also present the various mathematical techniques available and, where possible, some details of some of them. A possible definition of homogenization is that it is a process of understanding a heterogeneous (in-homogeneous) medium, in which the heterogeneities are at the microscopic level, as in composite materials, through a homogeneous medium. In other words, one would like to obtain a homogeneous description of a highly oscillating in-homogeneous medium. We also present generalizations to nonlinear problems, porous media, and so on. Finally, we look at the closely related issue of optimal bounds, which is itself an independent area of research.
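
The classical one-dimensional example conveys the idea and is worth recording here.

```latex
% Classical 1-D example: a(y) is 1-periodic with 0 < alpha <= a(y) <= beta.
\[
  -\frac{d}{dx}\!\left( a\!\left(\frac{x}{\varepsilon}\right)
      \frac{du_\varepsilon}{dx} \right) = f
  \ \text{ on } (0,1),
  \qquad u_\varepsilon(0) = u_\varepsilon(1) = 0 .
\]
% As epsilon -> 0, u_epsilon converges to the solution of the homogenized
% problem with a *constant* effective coefficient:
\[
  -\,a^{*}\,\frac{d^{2}u}{dx^{2}} = f,
  \qquad
  a^{*} = \left( \int_{0}^{1} \frac{dy}{a(y)} \right)^{-1},
\]
% the harmonic mean of a (not the arithmetic mean), which is the standard
% first illustration of the homogenization process.
```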

Relevance: 10.00%

Abstract:

A reliable method for estimating the service life of a structural element is a prerequisite for service life design. A new methodology is proposed for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of the reinforcement. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved, using a hybrid method that combines the vertex method of fuzzy set theory with Monte Carlo simulation. It is also shown how to determine bounds on the characteristic value of the failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, bounds on the characteristic value of the failure probability are determined for a reinforced concrete T-beam bridge girder. The service life of the structural element is determined by comparing the upper bound of the characteristic failure probability with the target failure probability. The methodology will be useful for durability-based service life design and for making decisions regarding in-service inspections.
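
A minimal sketch of the hybrid vertex/Monte Carlo idea follows, with a placeholder limit state and made-up distributions; the paper's corrosion model is far more detailed, and the vertex method additionally presumes a monotone response over each alpha-cut box.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def failure_probability(cover_depth, d_coeff, n_mc=100_000):
    """Monte Carlo estimate of P(chloride penetration exceeds threshold)
    for fixed values of the fuzzy variables. The limit state and the
    distributions below are illustrative placeholders only."""
    surface_cl = rng.lognormal(mean=0.0, sigma=0.3, size=n_mc)  # load side
    threshold = rng.normal(loc=8.0, scale=1.0, size=n_mc)       # resistance side
    penetration = surface_cl * np.sqrt(d_coeff * 50.0) * 25.0 / cover_depth
    return float(np.mean(penetration > threshold))

# Vertex method: evaluate P_f at every corner of the alpha-cut box of the
# fuzzy variables, then take min/max as the bounds at that alpha level.
cover_cut = (40.0, 60.0)          # mm, alpha-cut of the fuzzy cover depth
dcoef_cut = (0.5, 2.0)            # alpha-cut of the fuzzy diffusion coefficient
pfs = [failure_probability(c, d)
       for c, d in itertools.product(cover_cut, dcoef_cut)]
print("P_f bounds at this alpha level:", min(pfs), max(pfs))
```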

Relevance: 10.00%

Abstract:

The questions that one should answer in engineering computations (deterministic, probabilistic/randomized, as well as heuristic) are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. Absolutely error-free quantities, as well as the completely errorless computations of a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real input quantities, are exact, the computations that we perform on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis rather than assumption, is not less than 0.005 per cent. Here by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error denotes nothing but error-bounds. Further, in engineering computations it is the relative error, or equivalently the relative error-bounds (not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, whereas in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amount of computation and storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
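
As a small concrete instance of relative error-bounds, first-order propagation through a product is shown below; the 0.005 per cent floor is the hypothesis stated above.

```python
def rel_bound_product(rx, ry):
    """Propagation of relative error-bounds through a product:
    if |dx/x| <= rx and |dy/y| <= ry, then since
    (1+dx)(1+dy) = 1 + dx + dy + dx*dy, we get
    |d(xy)/(xy)| <= rx + ry + rx*ry."""
    return rx + ry + rx * ry

# The hypothesised floor for any measuring instrument: 0.005 per cent.
r_meas = 5e-5
print(rel_bound_product(r_meas, r_meas))  # ~1e-4 for a product of two readings
```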

Relevance: 10.00%

Abstract:

In this paper, the diversity-multiplexing gain tradeoff (DMT) of single-source, single-sink (ss-ss) multihop relay networks with slow-fading links is studied. In particular, the two end-points of the DMT of ss-ss full-duplex networks are determined: the maximum achievable diversity gain is shown to equal the min-cut, and the maximum multiplexing gain to equal the min-cut rank, the latter via an operational connection to a deterministic network. The paper also includes several results that aid in computing the DMT of networks operating under amplify-and-forward (AF) protocols. In particular, it is shown that the colored noise encountered in AF protocols can be treated as white for the purpose of DMT computation; lower bounds on the DMT of lower-triangular channel matrices are derived; and the DMT of parallel MIMO channels is computed. All protocols appearing in the paper are explicit and rely only upon AF relaying. Half-duplex networks and explicit coding schemes are studied in a companion paper.
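
For background, the diversity-multiplexing tradeoff is defined in the sense of Zheng and Tse; in that notation the end-point results above read as follows.

```latex
% DMT in the sense of Zheng--Tse: a scheme operating at rate
% R(SNR) = r log SNR has multiplexing gain r and diversity gain
\[
  d(r) \;=\; -\lim_{\mathrm{SNR}\to\infty}
      \frac{\log P_{\mathrm{out}}\!\left(r \log \mathrm{SNR}\right)}
           {\log \mathrm{SNR}} .
\]
% The end-point results for full-duplex ss-ss networks then read
\[
  d(0) = \text{min-cut},
  \qquad
  r_{\max} = \text{rank of the min-cut} .
\]
```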

Relevance: 10.00%

Abstract:

The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and in ensuring adherence to the power constraint. We derive novel, insightful bounds and approximations that characterize the power control parameter and the throughput in closed form. The results are comprehensive: they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, and they cover uncoded and coded modulation as well as single-node and multi-node systems with selection. The results are appealing as they are provably tight in the asymptotic large-average-power regime, and are designed, and verified, to be accurate even for smaller average powers.
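
Without a closed form, the power control parameter is typically pinned down numerically. The sketch below does this by bisection for a hypothetical threshold policy (truncated channel inversion with a peak cap); both the policy and the numbers are illustrative stand-ins, not the paper's scheme.

```python
import numpy as np

def avg_power(s, gains, power_of):
    """Average transmit power of the adaptation policy when the scalar
    power control parameter equals s (power_of is a stand-in for the
    policy's per-channel-state power allocation)."""
    return float(np.mean(power_of(s, gains)))

def solve_power_parameter(gains, power_of, budget, lo=1e-6, hi=1e6, tol=1e-9):
    """Bisection on the power control parameter so that the policy meets
    the average-power constraint with equality; this is the generic
    numerical route that closed-form characterizations replace. Assumes
    avg_power is increasing in s."""
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if avg_power(mid, gains, power_of) > budget:
            hi = mid                  # policy too aggressive: shrink s
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy policy (hypothetical): truncated channel inversion with a peak cap,
# over Nakagami-m (m = 2) power gains, i.e. Gamma(shape = m) samples.
rng = np.random.default_rng(2)
gains = rng.gamma(shape=2.0, scale=0.5, size=100_000)
power_of = lambda s, g: np.minimum(s / g, 10.0)
s_star = solve_power_parameter(gains, power_of, budget=1.0)
print(s_star, avg_power(s_star, gains, power_of))
```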