Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to separate fully inundated areas from the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images. Most classification algorithms assign each pixel to the class label with the greatest likelihood. However, these hard classification methods often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve the mixed-pixel problem, advanced image processing techniques are adopted, and linear spectral unmixing is one of the most popular soft classification techniques used for mixed-pixel analysis. The performance of linear spectral unmixing depends on two important issues: the method of selecting endmembers and the method of modelling the endmembers for unmixing. This paper presents an improvement in the adaptive selection of an endmember subset for each pixel in spectral unmixing for reliable flood mapping. Using a fixed set of endmembers to unmix all pixels in an entire image can cause overestimation of the endmember spectra residing in a mixed pixel and hence reduce the performance of spectral unmixing. In contrast, applying an adaptively estimated subset of endmembers to each pixel can decrease the residual error of the unmixing results and provide reliable output. This paper also shows that the proposed method improves the accuracy of conventional linear unmixing methods and is easy to apply. Three different linear spectral unmixing methods were applied to test the improvement in unmixing results. Experiments were conducted on three sets of Landsat-5 TM images from three different flood events in Australia to examine the method under different flooding conditions, and satisfactory flood mapping outcomes were achieved.
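As a rough illustration of the idea, per-pixel adaptive endmember selection could be sketched as follows. This is a minimal sketch, assuming a non-negative least-squares solver and a simple abundance threshold as the selection rule; the threshold, array names and normalisation step are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_adaptive(pixel, endmembers, tol=0.05):
    """Unmix one pixel with an adaptive endmember subset (illustrative).

    pixel      : (n_bands,) reflectance vector
    endmembers : (n_bands, n_endmembers) spectral library
    tol        : abundance threshold below which an endmember is dropped
    """
    # First pass: non-negative least squares over the full endmember set.
    abundances, _ = nnls(endmembers, pixel)

    # Adaptive step: keep only endmembers with non-negligible abundance,
    # then re-solve on that subset to reduce the residual error.
    keep = np.flatnonzero(abundances > tol)
    if keep.size == 0:
        keep = np.array([np.argmax(abundances)])
    sub_abund, residual = nnls(endmembers[:, keep], pixel)

    # Normalise so the retained fractions sum to one (abundance constraint).
    fractions = np.zeros(endmembers.shape[1])
    fractions[keep] = sub_abund / sub_abund.sum()
    return fractions, residual
```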
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label each pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, the two most important issues in spectral unmixing are investigated: the method of selecting endmembers and the method of modelling the primary classes for unmixing. We conduct comparative studies of three typical spectral unmixing algorithms: Partially Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. The conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed fuzzy error matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. Since it is not accurate to assume that every pixel in an image contains all endmember classes, these methods usually cause an overestimation of the fractional abundances in a particular pixel. In this thesis, an adaptive subset of endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
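The conventional root mean square error assessment of estimated fractions amounts to a per-class RMSE over all pixels. A minimal sketch, assuming the estimated and reference fractional abundances are available as arrays (names are illustrative):

```python
import numpy as np

def fraction_rmse(estimated, reference):
    """Per-class RMSE between estimated and reference fractional abundances.

    estimated, reference : (n_pixels, n_classes) arrays of fractions in [0, 1]
    Returns one RMSE value per primary class.
    """
    err = estimated - reference
    return np.sqrt(np.mean(err**2, axis=0))
```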
Abstract:
In this paper, we consider the design and bit-error performance analysis of linear parallel interference cancellers (LPIC) for multicarrier (MC) direct-sequence code division multiple access (DS-CDMA) systems. We propose an LPIC scheme in which we estimate and cancel the multiple access interference (MAI) based on the soft decision outputs on the individual subcarriers, and the interference-cancelled outputs on the different subcarriers are combined to form the final decision statistic. We scale the MAI estimate on each subcarrier by a weight before cancellation. In order to choose these weights optimally, we derive exact closed-form expressions for the bit-error rate (BER) at the output of the different stages of the LPIC, which we minimize to obtain the optimum weights for the different stages. In addition, using an alternate approach involving the characteristic function of the decision variable, we derive BER expressions for the weighted LPIC scheme, the matched filter (MF) detector, the decorrelating detector, and the minimum mean square error (MMSE) detector for the considered multicarrier DS-CDMA system. We show that the proposed BER-optimized weighted LPIC scheme performs better than the MF detector and the conventional LPIC scheme (where the weights are taken to be unity), and close to the decorrelating and MMSE detectors.
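One stage of the weighted cancellation step could be sketched as follows, assuming a synchronous model on a single subcarrier with a known code cross-correlation matrix; the variable names and the single scalar stage weight are illustrative simplifications of the scheme described above.

```python
import numpy as np

def weighted_lpic_stage(y, R, x_prev, w):
    """One stage of weighted linear parallel interference cancellation
    (illustrative sketch for one subcarrier of a synchronous system).

    y      : (K,) matched-filter outputs for K users
    R      : (K, K) code cross-correlation matrix with unit diagonal
    x_prev : (K,) soft outputs from the previous stage (stage 1 uses y)
    w      : cancellation weight for this stage
    """
    # MAI estimate for each user: correlations with all *other* users'
    # soft outputs (subtracting the identity zeroes the diagonal, so a
    # user does not cancel its own signal).
    mai = (R - np.eye(len(y))) @ x_prev
    # Scale the MAI estimate by the stage weight before cancelling.
    return y - w * mai

# In a multicarrier receiver, the cancelled outputs on the different
# subcarriers would then be combined to form the final decision statistic.
```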
Abstract:
A constant switching frequency current-error space-vector-based hysteresis controller for two-level voltage source inverter-fed induction motor (IM) drives is proposed in this study. The proposed controller is capable of driving the IM over the entire speed range, extending to the six-step mode. It uses the parabolic boundary, reported earlier, for vector selection within a sector, but uses simple, fast and self-adaptive sector identification logic for sector change detection over the entire modulation range. The new scheme detects a sector change using the change in direction of the current error along the axes jA, jB and jC. Most previous schemes use an outer boundary for sector change detection, so the current error goes outside the boundary six times per cycle during sector changes, introducing additional fifth and seventh harmonic components in the phase current. This may cause sixth-harmonic torque pulsations in the motor and a spread in the harmonic spectrum of the phase voltage. The proposed scheme detects sector changes quickly and accurately, eliminating the chance of introducing additional fifth and seventh harmonic components in the phase current, and provides a phase-voltage harmonic spectrum that exactly matches that of constant switching frequency voltage-controlled space vector pulse width modulation (VC-SVPWM)-based two-level inverter-fed drives.
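A heavily simplified sketch of the direction-reversal test, assuming the current error is available as a complex space vector sampled each control cycle; the axis definitions and the bare sign test are illustrative assumptions, not the paper's exact logic.

```python
import numpy as np

# jA, jB, jC: axes leading the A, B, C phase axes by 90 degrees
# (illustrative definition).
J_AXES = [1j * np.exp(1j * 2 * np.pi * k / 3) for k in range(3)]

def sector_change(i_err_prev, i_err):
    """Flag a sector change when the current-error component along any of
    the jA, jB, jC axes reverses direction (illustrative sketch).

    i_err_prev, i_err : current-error space vectors as complex numbers
    """
    for axis in J_AXES:
        prev = (i_err_prev * np.conj(axis)).real  # component along the axis
        curr = (i_err * np.conj(axis)).real
        if prev * curr < 0:  # sign reversal => direction change
            return True
    return False
```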
Abstract:
Theoretical expressions for the stresses and displacements have been derived for bending under a ring load of a free shell, a shell embedded in a soft medium, and a shell containing a soft core. Numerical work has been done for typical cases with an Elliott 803 digital computer, and influence lines are drawn therefrom.
Abstract:
Ion transport in a recently demonstrated promising soft-matter solid plastic-polymer electrolyte is discussed here in the context of solvent dynamics and ion association. The plastic-polymer composite electrolytes display liquid-like ionic conductivity in the solid state, compliant mechanical strength (~1 MPa), and wide electrochemical voltage stability (≥5 V). Polyacrylonitrile (PAN) dispersed in lithium perchlorate (LiClO4)-succinonitrile (SN) was chosen as the model system for the study (abbreviated LiClO4-SN:PAN). Systematic observation of various mid-infrared isomer and ion-association bands as a function of temperature and polymer concentration shows an effective increase in trans conformer concentration along with free Li+ ion concentration. This strongly supports the view that the enhancement in LiClO4-SN:PAN ionic conductivity over the neat plastic electrolyte (LiClO4-SN) is due to increases in both charge mobility and charge concentration. The ionic conductivity and infrared spectroscopy studies are supported by Brillouin light scattering. For the LiClO4-SN:PAN composites, a peak at 17 GHz was observed in addition to the normal trans-gauche isomerism (as in neat SN) at 12 GHz. The fast process is attributed to increased dynamics of those SN molecules whose energy barrier for the gauche-to-trans transition has been reduced by changes in temperature and polymer concentration. The observations from the ionic conductivity, spectroscopy, and light scattering studies were further supplemented by temperature-dependent H-1 and Li-7 nuclear magnetic resonance linewidth measurements.
Abstract:
The problem is solved using the Love function and Flügge shell theory. Numerical work has been done with a computer for various values of shell geometry parameters and elastic constants.
Abstract:
We present a measurement of the electric charge of the top quark using $\ppbar$ collisions corresponding to an integrated luminosity of 2.7~fb$^{-1}$ at the CDF II detector. We reconstruct $\ttbar$ events in the lepton+jets final state and use kinematic information to determine which $b$-jet is associated with the leptonically- or hadronically-decaying $t$-quark. Soft lepton taggers are used to determine the $b$-jet flavor. Along with the charge of the $W$ boson decay lepton, this information permits the reconstruction of the top quark's electric charge. Out of 45 reconstructed events with $2.4\pm0.8$ expected background events, 29 are reconstructed as $\ttbar$ with the standard model $+$2/3 charge, whereas 16 are reconstructed as $\ttbar$ with an exotic $-4/3$ charge. This is consistent with the standard model and excludes the exotic scenario at 95\% confidence level. This is the strongest exclusion of the exotic charge scenario and the first to use soft leptons for this purpose.
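As a back-of-the-envelope illustration only (the analysis' actual statistical treatment is more sophisticated than this), one can ask how often a pure exotic-charge sample would yield 29 or more standard-model-like events out of 45; the 50% probability assumed below for an exotic event to be reconstructed as SM-like is purely illustrative.

```python
from scipy.stats import binom

n_events, n_sm_like = 45, 29
p_sm_like_if_exotic = 0.5  # assumed mis-reconstruction rate, illustrative only
# P(X >= 29) for X ~ Binomial(45, 0.5): survival function evaluated at 28.
p_value = binom.sf(n_sm_like - 1, n_events, p_sm_like_if_exotic)
print(f"p-value under the illustrative exotic hypothesis: {p_value:.3f}")
```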
Abstract:
We present a measurement of the top quark pair production cross section in ppbar collisions at sqrt(s)=1.96 TeV using a data sample corresponding to 1.7/fb of integrated luminosity collected with the Collider Detector at Fermilab. We reconstruct ttbar events in the lepton+jets channel. The dominant background is the production of W bosons in association with multiple jets. To suppress this background, we identify electrons from the semileptonic decay of heavy-flavor jets. We measure a production cross section of 7.8 +/- 2.4 (stat) +/- 1.6 (syst) +/- 0.5 (lumi) pb. This is the first measurement of the top pair production cross section with soft electron tags in Run II of the Tevatron.
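For a quick sense of the overall precision, the three quoted uncertainties are independent and can be combined in quadrature (a standard convention; the paper itself quotes them separately):

```python
import math

stat, syst, lumi = 2.4, 1.6, 0.5  # pb, as quoted above
total = math.sqrt(stat**2 + syst**2 + lumi**2)
print(f"7.8 +/- {total:.1f} pb")  # ~= 7.8 +/- 2.9 pb
```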
Abstract:
We present a measurement of the tt̅ production cross section in pp̅ collisions at √s=1.96 TeV using events containing a high transverse momentum electron or muon, three or more jets, and missing transverse energy. Events consistent with tt̅ decay are found by identifying jets containing candidate heavy-flavor semileptonic decays to muons. The measurement uses a CDF run II data sample corresponding to 2 fb-1 of integrated luminosity. Based on 248 candidate events with three or more jets and an expected background of 79.5±5.3 events, we measure a production cross section of 9.1±1.6 pb.
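The quoted cross section follows from the usual counting-experiment relation sigma = (N_obs − N_bkg) / (efficiency × luminosity). A minimal sketch, where the combined efficiency times acceptance is inferred from the quoted numbers for illustration rather than taken from the paper:

```python
n_obs, n_bkg = 248, 79.5
lumi_pb = 2000.0        # 2 fb^-1 expressed in pb^-1
eff_times_acc = 0.0093  # assumed/inferred for illustration, not quoted by CDF
sigma = (n_obs - n_bkg) / (eff_times_acc * lumi_pb)
print(f"sigma ~= {sigma:.1f} pb")  # ~= 9.1 pb
```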
Abstract:
In handling large volumes of data such as chemical notations, serial numbers for books, etc., it is always advisable to provide checking methods which would indicate the presence of errors. The entire new discipline of coding theory is devoted to the study of the construction of codes which provide such error-detecting and error-correcting means. Although these codes are very powerful, they are highly sophisticated from the point of view of practical implementation.
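A classic example of such a check for book serial numbers is the ISBN-10 check digit: a mod-11 weighted sum that detects any single-digit error and any transposition of two digits. A minimal sketch:

```python
def isbn10_check_digit(digits):
    """Check digit for a 9-digit ISBN body.

    The weighted sum of all ten digits must be divisible by 11; with the
    weights 10..1 this catches every single-digit error and every
    transposition of two digits.
    """
    total = sum((10 - i) * d for i, d in enumerate(digits))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

print(isbn10_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5]))  # -> "2"
```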
Abstract:
We present a method to perform in situ microrheological measurements on monolayers of soft materials undergoing viscoelastic transitions under compression. Using a Langmuir trough mounted on the inverted microscope stage of a laser scanning confocal microscope, we track the motion of individual fluorescent quantum dots partly dispersed in monolayers spread at the air-water interface. From the calculated mean square displacement of the probe particles, and by extending a well-established scheme of the generalized Stokes-Einstein relation from the bulk to the interface, we arrive at the viscoelastic modulus of the respective monolayers as a function of surface density. Measurements on monolayers of glassy as well as non-glassy polymers and a standard fatty acid clearly show the sensitivity of our technique to subtle variations in the viscoelastic properties of the highly confined materials under compression. Evidence for possible spatial variations of such viscoelastic properties at a given surface density for the fatty acid monolayer is also provided.
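The first computational step in such particle-tracking microrheology is the time-averaged mean square displacement of each probe trajectory; a minimal sketch (function and array names are illustrative):

```python
import numpy as np

def mean_square_displacement(track, max_lag):
    """Time-averaged MSD of one probe trajectory.

    track   : (n_frames, 2) array of x, y positions of a single quantum dot
    max_lag : largest lag (in frames) to evaluate; must be < n_frames
    """
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = track[lag:] - track[:-lag]  # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

# In the generalized Stokes-Einstein scheme, the viscoelastic modulus is
# then extracted from a transform of this MSD as a function of lag time.
```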
Abstract:
On the one hand, this thesis attempts to develop and empirically test an ethically defensible theorization of the relationship between human resource management (HRM) and competitive advantage. The specific empirical evidence indicates that at least part of HRM's causal influence on employee performance may operate indirectly, through a social architecture and then through psychological empowerment. However, the evidence concerning a potential influence of HRM on organizational performance in particular seems to put in question some of the rhetoric within the HRM research community. On the other hand, the thesis tries to explicate and defend a certain attitude towards the philosophically oriented debates within organization science. This involves suggestions as to how we should understand meaning, reference, truth, justification and knowledge. On this understanding, it is not fruitful to see either the problems of empirical social science, or their solutions, as fundamentally philosophical ones. It is argued that the notorious problems of social science, exemplified in this thesis by research on HRM, can be seen as related to dynamic complexity in combination with both the ethical and the pragmatic difficulty of “laboratory-like experiments”. Solutions … can only be sought by informed trial and error, depending on the perceived familiarity with the object(s) of research. The odds are against anybody who hopes for clearly adequate social scientific answers to more complex questions. Social science is in particular unlikely to arrive at largely accepted knowledge of the kind “if we do this, then that will happen”, or even “if we do this, then that is likely to happen”. One of the problems probably facing most social scientific research communities is to specify and agree upon the “this” and the “that” and to provide convincing evidence of how they are (causally) related. On most more complex questions, the role of social science seems largely to remain that of contributing to a (critical) conversation, rather than arriving at more generally accepted knowledge. This is ultimately what is both argued and, in a sense, demonstrated, using research on the relationship between HRM and organizational performance as an example.