975 results for Constrained


Relevance:

10.00%

Publisher:

Abstract:

In wireless ad hoc networks, nodes communicate with faraway destinations using intermediate nodes as relays. Since wireless nodes are energy-constrained, it may not be in the best interest of a node to always accept relay requests. On the other hand, if all nodes decide not to expend energy on relaying, then network throughput will drop dramatically. Both of these extreme scenarios (complete cooperation and complete noncooperation) are inimical to the interests of a user. In this paper, we address the issue of user cooperation in ad hoc networks. We assume that nodes are rational, i.e., their actions are strictly determined by self-interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we are able to determine the optimal share of service that each node should receive. We define this to be the rational Pareto optimal operating point. We then propose a distributed and scalable acceptance algorithm called Generous TIT-FOR-TAT (GTFT). The acceptance algorithm is used by the nodes to decide whether to accept or reject a relay request. We show that GTFT results in a Nash equilibrium and prove that the system converges to the rational Pareto optimal operating point.
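To make the idea concrete, here is a minimal sketch of a GTFT-style acceptance rule: a node accepts relay requests as long as its realized acceptance rate stays within a small generosity margin of its target share. The names `tau` and `epsilon` and the simple rate bookkeeping are illustrative assumptions, not the paper's exact algorithm.

```python
class GTFTNode:
    """Illustrative GTFT-style relay-acceptance rule (a sketch, not the
    paper's exact algorithm): accept while the realized acceptance rate
    stays within a generosity margin of the target share."""

    def __init__(self, tau, epsilon=0.05):
        self.tau = tau          # assumed rational Pareto optimal share
        self.epsilon = epsilon  # generosity margin, avoids mutual defection
        self.requests = 0       # relay requests seen so far
        self.accepted = 0       # relay requests accepted so far

    def on_relay_request(self):
        self.requests += 1
        rate = self.accepted / self.requests
        if rate <= self.tau + self.epsilon:
            self.accepted += 1
            return True
        return False

# Toy run: over many requests the node settles near tau + epsilon.
node = GTFTNode(tau=0.4)
decisions = [node.on_relay_request() for _ in range(10_000)]
print(sum(decisions) / len(decisions))  # ~0.45
```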

Relevance:

10.00%

Publisher:

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is to map the 'wet' areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic image classification algorithms for information extraction have been developed over the years for flood mapping using optical remote sensing images, with most labelling a pixel as a particular class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and for modelling the primary classes for unmixing, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. The conventional Root Mean Square Error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. Since it is not accurate to assume that every pixel in an image contains all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
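As a concrete reference point for the linear mixing model underlying these methods, here is a small sketch of fully constrained linear unmixing for one pixel. It is not any of the three algorithms above; the endmember spectra, band count and the weighted-row trick for the sum-to-one constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(endmembers, pixel, delta=1e3):
    """Fully constrained linear unmixing of one pixel.

    endmembers : (bands, k) matrix, one column per endmember spectrum
    pixel      : (bands,) observed spectrum
    Non-negativity is enforced by NNLS; the sum-to-one constraint is
    approximated by appending a heavily weighted row of ones (a common
    trick; `delta` controls how hard that constraint is)."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    x = np.append(pixel, delta)
    abundances, _ = nnls(E, x)
    return abundances

# Toy example: two synthetic endmembers and a 60/40 mixed pixel.
water = np.array([0.10, 0.08, 0.05, 0.03])
veg = np.array([0.05, 0.30, 0.08, 0.60])
E = np.column_stack([water, veg])
mixed = 0.6 * water + 0.4 * veg
print(unmix_fcls(E, mixed))  # ~[0.6, 0.4]
```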

Relevance:

10.00%

Publisher:

Abstract:

The use of stereochemically constrained amino acids permits the design of short peptides as models for protein secondary structures. Amino acid residues that are restrained to a limited range of backbone torsion angles (φ, ψ) may be used as folding nuclei in the design of helices and β-hairpins. α-Aminoisobutyric acid (Aib) and related Cα,α-dialkylated residues are strong promoters of helix formation, as exemplified by a large body of experimentally determined structures of helical peptides. DPro-Xxx sequences strongly favor type II′ turn conformations, which serve to nucleate registered β-hairpin formation. Appropriately positioned DPro-Xxx segments may be used to nucleate the formation of multistranded antiparallel β-sheet structures. Mixed (α/β) secondary structures can be generated by linking rigid modules of helices and β-hairpins. The approach of using stereochemically constrained residues promotes folding by limiting the local structural space at specific residues. Several aspects of secondary structure design are outlined in this chapter, along with commonly used methods of spectroscopic characterization.

Relevance:

10.00%

Publisher:

Abstract:

The conformational properties of foldamers generated from αγ hybrid peptide sequences have been probed in the model sequence Boc-Aib-Gpn-Aib-Gpn-NHMe. The choice of α-aminoisobutyryl (Aib) and gabapentin (Gpn) residues greatly restricts the sterically accessible conformational space. This model sequence was anticipated to be a short segment of the αγ C12 helix, stabilized by three successive 4→1 hydrogen bonds, corresponding to a backbone-expanded analogue of the α-polypeptide 3₁₀-helix. Unexpectedly, three distinct crystalline polymorphs were characterized in the solid state by X-ray diffraction. In one form, two successive C12 hydrogen bonds were obtained at the N-terminus, while a novel C17 hydrogen-bonded γαγ turn was observed at the C-terminus. In the other two polymorphs, isolated C9 and C7 hydrogen-bonded turns were observed at Gpn(2) and Gpn(4). Isolated C12 and C9 turns were also crystallographically established in the peptides Boc-Aib-Gpn-Aib-OMe and Boc-Gpn-Aib-NHMe, respectively. Selective line broadening of NH resonances and the observation of medium-range NH(i)↔NH(i+2) NOEs established the presence of conformational heterogeneity for the tetrapeptide in CDCl3 solution. The NMR results are consistent with a limited population of the continuous C12 helix conformation. Lengthening of the (αγ)ₙ sequences in the nonapeptides Boc-Aib-Gpn-Aib-Gpn-Aib-Gpn-Aib-Gpn-Xxx (Xxx = Aib, Leu) resulted in the observation of all of the sequential NOEs characteristic of an αγ C12 helix. These results establish that conformational fragility is manifested in short hybrid αγ sequences despite the choice of conformationally constrained residues, while stable helices are formed on chain extension.

Relevance:

10.00%

Publisher:

Abstract:

We revisit four generations within the context of supersymmetry. We compute the perturbativity limits for the fourth generation Yukawa couplings and show that if the masses of the fourth generation lie within reasonable limits of their present experimental lower bounds, it is possible to maintain perturbativity only up to scales around 1000 TeV. Such low scales are ideally suited to incorporate gauge-mediated supersymmetry breaking, where the mediation scale can be as low as 10-20 TeV. The minimal messenger model, however, is highly constrained. While the lack of electroweak symmetry breaking rules out a large part of the parameter space, a small region exists where the fourth generation stau is tachyonic. General gauge mediation, with its broader set of boundary conditions, is better suited to accommodate the fourth generation.
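To illustrate what a perturbativity limit means in practice, here is a toy one-loop running of a single Yukawa coupling: a larger coupling at the low scale blows up (loses perturbativity) at a lower scale. The beta-function coefficient `b` and cutoff `y_max` are placeholder assumptions, not the paper's full computation.

```python
import numpy as np

def perturbativity_scale(y0, mu0=1.0, b=6.0, y_max=3.5,
                         steps=200_000, dlog=1e-4):
    """Toy one-loop running dy/dln(mu) = b*y**3/(16*pi**2), integrated
    by Euler steps in ln(mu).  y0 is the Yukawa at scale mu0 (TeV); b
    is an assumed positive coefficient, not the full MSSM beta function.
    Returns the scale (TeV) at which y exceeds y_max."""
    y, lnmu = y0, np.log(mu0)
    for _ in range(steps):
        y += b * y**3 / (16 * np.pi**2) * dlog
        lnmu += dlog
        if y > y_max:
            return np.exp(lnmu)
    return np.inf

# Larger starting Yukawas hit the bound at lower scales.
for y0 in (1.0, 1.1, 1.2):
    print(y0, perturbativity_scale(y0))
```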

Relevance:

10.00%

Publisher:

Abstract:

In this thesis a manifold learning method is applied to the problem of WLAN positioning and automatic radio map creation. Due to the nature of WLAN signal strength measurements, a signal map created from raw measurements exhibits non-linear distance relations between measurement points. These signal strength vectors reside in a high-dimensional coordinate system. With the help of the so-called Isomap algorithm, the dimensionality of this map can be reduced, making it easier to process. By embedding position-labeled strategic key points, we can automatically adjust the mapping to match the surveyed environment. The environment is thus learned in a semi-supervised way; gathering training points and embedding them in a two-dimensional manifold gives us a rough mapping of the measured environment. After a calibration phase, in which the labeled key points in the training data are used to associate coordinates in the manifold representation with geographical locations, we can perform positioning using the adjusted map. This can be achieved through a traditional supervised learning process, which in our case is a simple nearest-neighbors matching of a sampled signal strength vector. We deployed this system in two locations on the Kumpula campus in Helsinki, Finland. Results indicate that positioning based on the learned radio map can achieve good accuracy, especially in hallways or other areas of the environment where the WLAN signal is constrained by obstacles such as walls.
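A rough sketch of the pipeline described above, using scikit-learn's Isomap and a nearest-neighbors regressor for the calibration step; the array shapes, RSSI statistics and the choice of k-NN for mapping manifold coordinates to positions are illustrative assumptions, not the thesis's exact setup.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsRegressor

# Toy stand-ins for survey data: rows are signal-strength (RSSI) vectors,
# one column per access point; shapes and values are illustrative only.
rng = np.random.default_rng(0)
train_rssi = rng.normal(-70.0, 10.0, size=(200, 8))  # survey measurements
key_rssi = train_rssi[:20]                           # position-labeled key points
key_xy = rng.uniform(0.0, 50.0, size=(20, 2))        # their known coordinates

# 1. Learn a 2-D manifold embedding of the signal-strength space.
iso = Isomap(n_neighbors=10, n_components=2)
iso.fit(train_rssi)

# 2. Calibrate: associate manifold coordinates with geographic locations
#    using the labeled key points (here via a k-NN regressor).
calib = KNeighborsRegressor(n_neighbors=3)
calib.fit(iso.transform(key_rssi), key_xy)

# 3. Position a new sample: embed its RSSI vector, then map to coordinates.
sample = rng.normal(-70.0, 10.0, size=(1, 8))
print(calib.predict(iso.transform(sample)))
```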

Relevance:

10.00%

Publisher:

Abstract:

We present a measurement of the mass of the top quark using data corresponding to an integrated luminosity of 1.9 fb^-1 of p-pbar collisions collected at sqrt(s) = 1.96 TeV with the CDF II detector at Fermilab's Tevatron. This is the first measurement of the top quark mass using top-antitop pair candidate events in the lepton + jets and dilepton decay channels simultaneously. We reconstruct two observables in each channel and use a non-parametric kernel density estimation technique to derive two-dimensional probability density functions from simulated signal and background samples. The observables are the top quark mass and the invariant mass of two jets from the W decay in the lepton + jets channel, and the top quark mass and the scalar sum of transverse energy of the event in the dilepton channel. We perform a simultaneous fit for the top quark mass and the jet energy scale, which is constrained in situ by the hadronic W boson mass. Using 332 lepton + jets candidate events and 144 dilepton candidate events, we measure the top quark mass to be m_top = 171.9 ± 1.7 (stat.+JES) ± 1.1 (syst.) GeV/c^2 = 171.9 ± 2.0 GeV/c^2.
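The kernel density estimation step can be illustrated in a few lines; the simulated event distributions below are toy stand-ins, not CDF samples, and the evaluation shows only how a 2-D density would enter a per-event likelihood.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy stand-ins for simulated lepton + jets signal events: each event
# yields (reconstructed top mass, dijet mass from the W decay); the
# means and widths here are illustrative, not CDF simulation output.
rng = np.random.default_rng(1)
mtop_sim = rng.normal(172.0, 12.0, 5000)  # GeV/c^2
mjj_sim = rng.normal(80.4, 9.0, 5000)     # GeV/c^2

# Non-parametric 2-D probability density estimated from the sample.
kde = gaussian_kde(np.vstack([mtop_sim, mjj_sim]))

# Density at one observed event; a measurement would multiply such
# densities across events and scan the (mtop, JES) hypothesis space.
print(kde([[171.9], [80.0]]))
```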

Relevance:

10.00%

Publisher:

Abstract:

We present results of a signature-based search for new physics using a dijet plus missing transverse energy data sample collected in 2 fb^-1 of p-pbar collisions at sqrt(s) = 1.96 TeV with the CDF II detector at the Fermilab Tevatron. We observe no significant event excess with respect to the standard model prediction and extract a 95% C.L. upper limit on the cross section times acceptance for a potential contribution from a non-standard-model process. Based on this limit, the mass of a first or second generation scalar leptoquark is constrained to be above 187 GeV/c^2.
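For intuition about how a limit of this kind arises, here is a sketch of a classical upper limit for a plain Poisson counting experiment; the observed count, background estimate and luminosity-times-acceptance figure are made-up numbers, and the real analysis also folds in systematic uncertainties and kinematic information.

```python
from scipy.optimize import brentq
from scipy.stats import poisson

def upper_limit(n_obs, bkg, cl=0.95):
    """Classical upper limit on the signal mean s of a Poisson counting
    experiment: the smallest s with P(X <= n_obs | s + bkg) = 1 - cl."""
    f = lambda s: poisson.cdf(n_obs, s + bkg) - (1.0 - cl)
    return brentq(f, 0.0, 100.0 + 10.0 * n_obs)

# Made-up numbers, not the paper's: 4 events seen over 3.5 expected.
s95 = upper_limit(4, 3.5)
lumi_times_acc = 2000.0  # pb^-1 times acceptance, illustrative
print(s95, "events ->", s95 / lumi_times_acc, "pb")
```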

Relevance:

10.00%

Publisher:

Abstract:

A method is developed by which the input leading to the highest possible response of a class of non-linear systems in an interval of time can be determined. The input, if deterministic, is constrained to have a known finite energy (or norm) in the interval under consideration. In the case of random inputs, the energy is constrained to have a known probability distribution function. The approach has applications when a system has to be put to maximum advantage by extracting the largest possible output, or when a system has to be designed to withstand the highest maximum response with only the input energy or the energy distribution known. The method is also useful in arriving at a bound on the highest peak distribution of the response when the excitation is a known random process. As an illustration, the Duffing oscillator has been analysed and some numerical results are presented.
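For the linear special case the norm-constrained worst-case input has a closed form via the Cauchy-Schwarz inequality, which conveys the core idea in its simplest setting (the paper itself treats non-linear systems such as the Duffing oscillator, where no such closed form exists). For impulse response h and input energy E:

```latex
y(T) = \int_0^T h(T-\tau)\,u(\tau)\,d\tau
     \le \left(\int_0^T h^2(T-\tau)\,d\tau\right)^{1/2}
         \left(\int_0^T u^2(\tau)\,d\tau\right)^{1/2}
     = \|h\|\,\sqrt{E},
\qquad\text{with equality iff}\quad
u^*(\tau) = \sqrt{E}\,\frac{h(T-\tau)}{\|h\|}.
```

That is, the extremal input is the time-reversed, energy-normalized impulse response, and the attainable peak response is sqrt(E)·||h||.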

Relevance:

10.00%

Publisher:

Abstract:

Microfinance institutions (MFIs) are constrained by a double bottom line: meeting social obligations (the first bottom line) and attaining financial self-sufficiency (the second bottom line). The proponents of the first bottom line, however, are increasingly concerned that there is a trade-off between these two bottom lines: achieving financial self-sufficiency may lead MFIs to drift away from their original social mission of serving the very poor, commonly known as mission drift in microfinance, which is still a controversial issue. This study addresses the concerns about mission drift in microfinance within a performance analysis framework. Chapter 1 presents the theoretical background, motivation and objectives of the topic. The study then explores the validity of three major, related present-day concerns. Chapter 2 explores the impact of profitability on outreach quality in MFIs, commonly known as mission drift, using a unique panel database that contains 4-9 years of observations from 253 MFIs in 69 countries. Chapter 3 introduces factor analysis, a multivariate tool, into the process of analysing mission drift in microfinance, and the exercise in this chapter demonstrates how the statistical tool of factor analysis can be utilised to examine this conjecture. In order to explore why some MFIs perform better than others, Chapter 4 looks at factors which have an impact on several performance indicators of MFIs (profitability or sustainability, repayment status and cost indicators), based on quality data on 353 institutions in 77 countries. The study also demonstrates whether such mission drift can be avoided while attaining self-sustainability. In Chapter 5 we examine the impact of capital and financing structure on the performance of microfinance institutions, where estimations with instruments are performed using a panel dataset of 782 MFIs in 92 countries for the period 2000-2007. Finally, Chapter 6 concludes the study by summarising the results from the previous chapters and suggesting some directions for future studies.
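As a sketch of the factor-analysis step mentioned for Chapter 3, the snippet below extracts two latent factors from simulated indicator data; the indicator set and the two latent dimensions ("outreach" and "self-sufficiency") are assumptions for illustration, not the study's actual variables.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Toy stand-in for MFI performance indicators: two observed indicators
# load on a latent "outreach" factor and two on a latent
# "self-sufficiency" factor (all names and values are illustrative).
rng = np.random.default_rng(2)
outreach = rng.normal(size=(253, 1))
sustain = rng.normal(size=(253, 1))
X = np.hstack([outreach + 0.1 * rng.normal(size=(253, 2)),
               sustain + 0.1 * rng.normal(size=(253, 2))])

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)
# Loadings show which indicators move together; a mission-drift check
# would then relate the outreach factor scores to profitability.
print(fa.components_.round(2))
```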

Relevance:

10.00%

Publisher:

Abstract:

Recently, the focus of real estate investment has expanded from the building-specific level to the aggregate portfolio level. The portfolio perspective requires investment analysis for real estate which is comparable with that of other asset classes, such as stocks and bonds. Thus, despite its distinctive features, such as heterogeneity, high unit value, illiquidity and the use of valuations to measure performance, real estate should not be considered in isolation. This means that techniques which are widely used for other asset classes can also be applied to real estate. An important part of investment strategies which support decisions on multi-asset portfolios is identifying the fundamentals of movements in property rents and returns, and predicting them on the basis of these fundamentals. The main objective of this thesis is to find the key drivers and the best methods for modelling and forecasting property rents and returns in markets which have experienced structural changes. The Finnish property market, a small European market with structural changes and limited property data, is used as a case study. The findings in the thesis show that it is possible to use modern econometric tools for modelling and forecasting property markets. The thesis consists of an introductory part and four essays. Essays 1 and 3 model Helsinki office rents and returns, and assess the suitability of alternative techniques for forecasting these series. Simple time series techniques are able to account for structural changes in the way markets operate, and thus provide the best forecasting tool. Theory-based econometric models, in particular error correction models, which are constrained by long-run information, are better at explaining past movements in rents and returns than at predicting their future movements. Essay 2 proceeds by examining the key drivers of rent movements for several property types in a number of Finnish property markets. The essay shows that commercial rents in local markets can be modelled using national macroeconomic variables and a panel approach. Finally, Essay 4 investigates whether forecasting models can be improved by accounting for asymmetric responses of office returns to the business cycle. The essay finds that the forecast performance of time series models can be improved by introducing asymmetries, and the improvement is sufficient to justify the extra computational time and effort associated with the application of these techniques.
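A minimal two-step (Engle-Granger style) error correction model of the kind mentioned above can be sketched as follows; the simulated rent and fundamental series, and the single-fundamental specification, are illustrative assumptions rather than the thesis's estimated models.

```python
import numpy as np
import statsmodels.api as sm

# Toy series standing in for log office rents and a demand fundamental
# (e.g. service-sector output); simulated data, not Finnish market data.
rng = np.random.default_rng(3)
T = 120
fundamental = np.cumsum(rng.normal(0.01, 0.02, T))
rent = 0.8 * fundamental + rng.normal(0.0, 0.02, T)

# Step 1: long-run (cointegrating) relation rent_t = a + b*fund_t + e_t.
longrun = sm.OLS(rent, sm.add_constant(fundamental)).fit()
ecm_term = longrun.resid

# Step 2: short-run dynamics with the lagged equilibrium error; a
# significantly negative coefficient on ecm_term pulls rents back
# toward the long-run relation.
d_rent = np.diff(rent)
d_fund = np.diff(fundamental)
X = sm.add_constant(np.column_stack([d_fund, ecm_term[:-1]]))
ecm = sm.OLS(d_rent, X).fit()
print(ecm.params)  # [const, short-run effect, error-correction speed]
```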

Relevance:

10.00%

Publisher:

Abstract:

The optimal design of a multiproduct batch chemical plant is formulated as a multiobjective optimization problem, and the resulting constrained mixed-integer nonlinear program (MINLP) is solved by the non-dominated sorting genetic algorithm approach (NSGA-II). By putting bounds on the objective function values, the constrained MINLP problem can be solved efficiently by NSGA-II to generate a set of feasible non-dominated solutions in the range desired by the decision-maker in a single run of the algorithm. The evolution of the entire set of non-dominated solutions helps the decision-maker to make a better choice of the appropriate design from among several alternatives. The large set of solutions also provides a rich source of excellent initial guesses for solution of the same problem by alternative approaches to achieve any specific target for the objective functions.
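The core ranking step inside NSGA-II, non-dominated sorting of a population by its objective values, can be sketched in a few lines; the two-objective example values are made up for illustration.

```python
import numpy as np

def nondominated(F):
    """Boolean mask of the non-dominated rows of F, where each row holds
    the objective values of one design (all objectives minimized).
    This is the ranking step NSGA-II applies repeatedly to sort a
    population into successive fronts."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # j dominates i if j is <= in every objective and < in at least one.
        dominates = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if np.any(dominates):
            mask[i] = False
    return mask

# Toy two-objective population, e.g. (capital cost, operating cost).
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(nondominated(F))  # [ True  True False  True ]
```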

Relevance:

10.00%

Publisher:

Abstract:

The H.264 video standard achieves high-quality video along with high data compression when compared to other existing video standards. H.264 uses context-based adaptive variable length coding (CAVLC) to code residual data in the Baseline profile. In this paper we describe a novel architecture for a CAVLC decoder, including the coeff-token decoder, level decoder, total-zeros decoder and run-before decoder. A UMC library in 0.13 μm CMOS technology is used to synthesize the proposed design. The proposed design reduces chip area and improves the critical path performance of the CAVLC decoder in comparison with [1]. Macroblock-level (including luma and chroma) pipeline processing for CAVLC is implemented with an average of 141 cycles (including pipeline buffering) per macroblock at a 250 MHz clock frequency. To compare our results with [1], the clock frequency is constrained to 125 MHz. The area required for the proposed architecture is 17586 gates, which is a 22.1% improvement in comparison to [1]. We obtain a throughput of 1.73 × 10^6 macroblocks/second, which is 28% higher than that reported in [1]. The proposed design meets the processing requirement of 1080HD video [5] at 30 frames/second.
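The quoted figures can be sanity-checked with quick arithmetic; the small gap between 250 MHz / 141 cycles and the reported 1.73 × 10^6 macroblocks/s presumably reflects additional overhead not captured by the average cycle count.

```python
# Back-of-the-envelope check of the numbers quoted above.
cycles_per_mb = 141
clock_hz = 250e6
throughput = clock_hz / cycles_per_mb        # ~1.77e6 MB/s vs quoted 1.73e6

# 1080HD requirement: 1920 x 1088 luma pixels in 16 x 16 macroblocks.
mbs_per_frame = (1920 // 16) * (1088 // 16)  # 8160 macroblocks per frame
required = mbs_per_frame * 30                # 244,800 MB/s at 30 fps
print(throughput, required, throughput > required)
```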

Relevance:

10.00%

Publisher:

Abstract:

Utilization bounds for Earliest Deadline First (EDF) and Rate Monotonic (RM) scheduling are known and well understood for uniprocessor systems. In this paper, we derive limits on similar bounds for the multiprocessor case, when the individual processors need not be identical. Tasks are partitioned among the processors, and RM scheduling is assumed to be the policy used on individual processors. A minimum limit on the bounds for a 'greedy' class of algorithms is given and proved, since the actual value of the bound depends on the algorithm that allocates the tasks. We also derive the utilization bound of an algorithm which allocates tasks in decreasing order of utilization factors. Knowledge of such bounds allows us to carry out very fast schedulability tests, although we are constrained by the fact that the tests are sufficient but not necessary to ensure schedulability.
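A sketch of the decreasing-utilization allocation idea for identical processors (the paper's setting is more general, allowing non-identical processors), using the classical Liu-Layland RM bound as the sufficient per-processor admission test:

```python
def rm_bound(n):
    """Liu-Layland RM schedulability bound for n tasks on one processor."""
    return n * (2 ** (1.0 / n) - 1)

def partition_decreasing(utilizations, n_procs):
    """First-fit allocation in decreasing order of utilization factors,
    admitting a task only if the processor's task set stays under the
    RM bound (a sufficient, not necessary, test as noted above).
    Returns per-processor task lists, or None if allocation fails."""
    procs = [[] for _ in range(n_procs)]
    for u in sorted(utilizations, reverse=True):
        for tasks in procs:
            if sum(tasks) + u <= rm_bound(len(tasks) + 1):
                tasks.append(u)
                break
        else:
            return None
    return procs

# Toy task set: fits on two processors under the RM bound.
print(partition_decreasing([0.6, 0.5, 0.2, 0.1], n_procs=2))
# -> [[0.6, 0.2], [0.5, 0.1]]
```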

Relevance:

10.00%

Publisher:

Abstract:

Constellation Constrained (CC) capacity regions of a two-user Gaussian Multiple Access Channel (GMAC) have recently been reported. For such a channel, code pairs based on trellis coded modulation are proposed in this paper with M-PSK and M-PAM alphabet pairs, for arbitrary values of M, to achieve sum rates close to the CC sum capacity of the GMAC. In particular, the structure of the sum alphabets of M-PSK and M-PAM alphabet pairs is exploited to prove that, for certain angles of rotation between the alphabets, Ungerboeck labelling on the trellis of each user maximizes the guaranteed squared Euclidean distance of the sum trellis. Hence, such a labelling scheme can be used systematically to construct trellis code pairs that achieve sum rates close to the CC sum capacity. More importantly, it is shown for the first time that ML decoding complexity at the destination is significantly reduced when M-PAM alphabet pairs are employed, with almost no loss in the sum capacity.
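The role of the rotation angle can be seen directly on the sum alphabet. The snippet below, a toy illustration rather than the paper's trellis construction, counts the distinct sum points and their minimum squared Euclidean distance for QPSK pairs with and without a pi/4 relative rotation: without rotation several user-symbol pairs collide onto the same sum point, while the rotation separates all 16 sums.

```python
import numpy as np

def psk(M, theta=0.0):
    """Unit-energy M-PSK alphabet rotated by theta radians."""
    return np.exp(1j * (2 * np.pi * np.arange(M) / M + theta))

for theta in (0.0, np.pi / 4):
    # Sum alphabet seen by the destination for two QPSK users.
    sums = (psk(4)[:, None] + psk(4, theta)[None, :]).ravel()
    distinct = np.unique(np.round(sums, 8))
    d2 = np.abs(distinct[:, None] - distinct[None, :]) ** 2
    min_d2 = d2[d2 > 1e-12].min()
    print(f"theta={theta:.3f}: {len(distinct)} distinct sum points, "
          f"min squared distance {min_d2:.3f}")
```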