882 results for Branch and bound algorithms
Abstract:
We use a polarizer to investigate quantum-well infrared absorption, and report experimental results as follows. The intrasubband transition was observed in GaAs/AlxGa1-xAs multiple quantum wells (MQWs) when the incident infrared radiation (IR) is polarized parallel to the MQW plane. According to the selection rule, an intrasubband transition is forbidden. Up to now, most studies have only observed the intersubband transition between two states with opposite parity. However, our experiment shows not only the intersubband transitions, but also the intrasubband transitions. We also found that for light doping in the well (4x10^18 cm^-3), the intrasubband transition occurs only in the lowest subband, while for heavy doping (8x10^18 cm^-3) it occurs not only in the lowest subband but also in the first excited one, because of electron subband filling. Further experimental results show a linear dependence of the intrasubband transition frequency on the square root of the well doping density. These data are in good agreement with our numerical results, so we strongly suggest that such a transition can be attributed to plasma oscillation. Conversely, when the incident IR is polarized perpendicular to the MQW plane, intersubband-transition-induced signals appear, while the intrasubband-transition-induced spectra disappear for both light and heavy well dopings. A depolarization blueshift was also taken into account to evaluate the intersubband transition spectra at different well dopings. Furthermore, we performed a deep-level transient spectroscopy (DLTS) measurement to determine the subband energies at different well dopings; good agreement between DLTS, infrared absorption, and numerical calculation was obtained. Two phenomena in our experiment are noteworthy: (1) The polarized absorbance is one order of magnitude higher than the unpolarized spectra; this puzzling result is explained in detail. (2) When the IR, polarized perpendicular to the well plane, normally irradiates the 45-degree beveled edge of the samples, we observe only intersubband transition spectra, while the intrasubband transition signals that would be caused by the in-plane electric-field component are absent. The reason is that such in-plane electric-field components cancel each other out everywhere as the light propagates in the samples. The relaxation times deduced from the spectral widths of bound-to-bound and bound-to-continuum transitions are also discussed, and quantitatively compared to the relaxation time tau deduced from the electron mobility. [S0163-1829(98)01912-2].
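The reported linear dependence of the intrasubband transition frequency on the square root of the well doping density is what a plasma-oscillation picture would predict. As a hedged reference point (the abstract itself gives no formula), the standard plasma-frequency relation, with carrier density n, electron charge e, permittivity ε0εr and effective mass m*, scales as the square root of n:

```latex
% Standard plasma-frequency relation (illustrative; the symbols are not
% defined in the abstract). The sqrt(n) scaling matches the reported
% dependence of the intrasubband transition frequency on the square root
% of the well doping density.
\[
  \omega_p \;=\; \sqrt{\frac{n\,e^{2}}{\varepsilon_0\,\varepsilon_r\,m^{*}}}
  \qquad\Longrightarrow\qquad
  \omega_p \;\propto\; \sqrt{n}
\]
```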
Abstract:
We have developed a novel InP-based, ridge-waveguide photonic integrated circuit (PIC), which consists of a 1.1-μm wavelength Y-branch optical waveguide with low loss and an improved far-field pattern and a 1.3-μm wavelength strained InGaAsP-InP multiple quantum-well superluminescent diode, with the bundle integrated guide (BIG) as the scheme for monolithic integration. Simulations of the BIG and Y-branches, based on the beam propagation method (BPM), show low losses and improved far-field patterns. The amplified spontaneous emission of the device reaches 10 mW at 120 mA with no threshold or saturation. Spectral characteristics of about 30 nm width and less than 1 dB modulation are achieved using the built-in anti-lasing ability of the Y-branch. The beam divergence angles in the horizontal and vertical directions are optimized to as small as 12 degrees x 8 degrees, resulting in good fiber coupling. The compactness, simplicity of fabrication, good superluminescent performance, low transmission loss and estimated low coupling loss prove the BIG and Y-branch method to be a feasible way for integration and make the photonic integrated circuit of Y-branch and superluminescent diode a promising candidate for transmitters and transceivers used in fiber optic gyroscopes.
Abstract:
This paper describes the ground target detection, classification and sensor fusion problems in a distributed fiber seismic sensor network. Compared with the conventional piezoelectric seismic sensors used in unattended ground sensor (UGS) systems, fiber optic sensors have the advantages of high sensitivity and resistance to electromagnetic disturbance. We have developed a fiber seismic sensor network for target detection and classification. However, ground target recognition based on seismic sensors is a very challenging problem because of the non-stationary characteristics of seismic signals and the complicated real-life application environment. To address these difficulties, we study robust feature extraction and classification algorithms adapted to the fiber sensor network. A united multi-feature (UMF) method is used. An adaptive threshold detection algorithm is proposed to minimize the false alarm rate. Three kinds of targets are considered in the system: personnel, wheeled vehicles and tracked vehicles. The classification simulation results show that the SVM classifier outperforms the GMM and BPNN. A sensor fusion method based on D-S evidence theory is discussed to fully utilize the information of the fiber sensor array and improve the overall performance of the system. A field experiment is organized to test the performance of the fiber sensor network and to gather real target signals for classification testing.
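The abstract names D-S (Dempster-Shafer) evidence theory as the fusion method but gives no details. Below is a minimal sketch of Dempster's rule of combination for two sensor nodes over the three target classes mentioned; the mass values and function names are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozensets of hypotheses)
    with Dempster's rule: multiply masses of intersecting focal elements and
    renormalize by 1 - K, where K is the total conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Illustrative masses from two fiber-sensor nodes over the three target
# classes in the abstract (the numbers are made up for this example).
P, W, T = "personnel", "wheeled", "tracked"
sensor1 = {frozenset({P}): 0.6, frozenset({W, T}): 0.3, frozenset({P, W, T}): 0.1}
sensor2 = {frozenset({P}): 0.5, frozenset({W}): 0.2, frozenset({P, W, T}): 0.3}
print(dempster_combine(sensor1, sensor2))
```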
Abstract:
A Function Definition Language (FDL) is presented. Though designed for describing specifications, FDL is also a general-purpose functional programming language. It uses context-free languages as data types, supports pattern-matching definition of functions, offers several function definition forms, and is executable. It is shown that FDL has strong expressiveness, is easy to use, and describes algorithms concisely and naturally. An interpreter of FDL is introduced. Experiments and discussion are included.
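The abstract does not show any FDL syntax, so the sketch below uses Python's structural pattern matching (Python 3.10+) purely to illustrate the style of pattern-matching function definition it describes; it is not FDL code, and the expression encoding is an assumption for the example.

```python
# Illustrative only: Python's match statement standing in for the
# pattern-matching function definitions the abstract describes.
def size(expr):
    """Count the leaves of a small expression tree given as nested tuples,
    e.g. ("add", ("num", 1), ("num", 2))."""
    match expr:
        case ("num", _):
            return 1
        case ("add", left, right) | ("mul", left, right):
            return size(left) + size(right)
        case _:
            raise ValueError(f"unrecognized expression: {expr!r}")

print(size(("add", ("num", 1), ("mul", ("num", 2), ("num", 3)))))  # -> 3
```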
Abstract:
A voltage-controlled tunable two-color infrared detector with photovoltaic (PV) and photoconductive (PC) dual-mode operation at 3-5 μm and 8-14 μm, using GaAs/AlAs/AlGaAs double barrier quantum wells (DBQWs) and bound-to-continuum GaAs/AlGaAs quantum wells, is demonstrated. The photoresponse peak of the photovoltaic GaAs/AlAs/GaAlAs DBQWs is at 5.3 μm, and that of the photoconductive GaAs/GaAlAs quantum wells is at 9.0 μm. When the two-color detector is under zero bias, the spectral response at 5.3 μm is close to saturation and the peak detectivity at 80 K can reach 1.0x10^11 cm·Hz^1/2/W, while the spectral photoresponsivity at 9.0 μm is completely zero. When the external voltage of the two-color detector is changed to 2.0 V, the spectral photoresponsivity at 5.3 μm becomes zero while that at 9.0 μm increases to a value comparable to the 5.3 μm response under zero bias, and the peak detectivity (9.0 μm) at 80 K can reach 1.5x10^10 cm·Hz^1/2/W. Strictly speaking, this is a truly bias-controlled tunable two-color infrared photodetector. We have proposed a model based on the PV and PC dual-mode operation of stacked two-color QWIPs and the effects of tunneling resonance with the narrow energy width of photoexcited electrons in DBQWs, which qualitatively explains the voltage-controlled tunable behavior of the photoresponse of the two-color infrared photodetector. (C) 1996 American Institute of Physics.
Abstract:
Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference-counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
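As a rough software analogy of the data structure described (not the thesis's hardware/OS interface), a sparsely faceted array can be pictured as a global name whose per-node facets are allocated lazily on first touch, with a reference count as the hook for the garbage collection strategy mentioned. The class and field names below are illustrative assumptions.

```python
class SparselyFacetedArray:
    """Software sketch of an SFA: a globally named array whose per-node
    facets are allocated lazily on first touch. Names and fields are
    illustrative, not the hardware interface described in the thesis."""

    def __init__(self, facet_size):
        self.facet_size = facet_size
        self.facets = {}          # node_id -> locally allocated storage
        self.refcount = 0         # hook for reference-counting GC

    def facet(self, node_id):
        # Lazily allocate the facet the first time this node touches the SFA.
        if node_id not in self.facets:
            self.facets[node_id] = [0] * self.facet_size
        return self.facets[node_id]

    def allocated_nodes(self):
        # Only nodes that actually touched the array hold storage.
        return sorted(self.facets)

sfa = SparselyFacetedArray(facet_size=4)
sfa.facet(3)[0] = 42              # node 3 touches the array; its facet appears
print(sfa.allocated_nodes())      # [3] -- no storage on the other nodes
```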
Abstract:
P. Lingras and R. Jensen, 'Survey of Rough and Fuzzy Hybridization,' Proceedings of the 16th International Conference on Fuzzy Systems (FUZZ-IEEE'07), pp. 125-130, 2007.
Abstract:
Danny S. Tuckwell, Matthew J. Nicholson, Christopher S. McSweeney, Michael K. Theodorou and Jayne L. Brookman (2005). The rapid assignment of ruminal fungi to presumptive genera using ITS1 and ITS2 RNA secondary structures to produce group-specific fingerprints. Microbiology, 151(5), pp. 1557-1567. Sponsorship: BBSRC / Stapledon Memorial Trust. RAE2008
Abstract:
The increased diversity of Internet application requirements has spurred recent interest in flexible congestion control mechanisms. Window-based congestion control schemes use increase rules to probe available bandwidth, and decrease rules to back off when congestion is detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and packet loss rate. In this paper, we propose a novel window-based congestion control algorithm called SIMD (Square-Increase/Multiplicative-Decrease). Contrary to previous memory-less controls, SIMD utilizes history information in its control rules. It uses multiplicative decrease, but the increase in window size is proportional to the square of the time elapsed since the detection of the last loss event. Thus, SIMD can efficiently probe available bandwidth. Nevertheless, SIMD is TCP-friendly as well as TCP-compatible under RED, and it has much better convergence behavior than the TCP-friendly AIMD and binomial algorithms proposed recently.
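A minimal sketch of the control rules as described in the abstract: the window grows from its post-loss value in proportion to the square of the elapsed time, and backs off multiplicatively on a loss. The constants alpha and beta are placeholders, not the paper's TCP-friendly parameterization.

```python
def simd_window(w_loss, t, alpha=0.1):
    """Increase rule as described: starting from the post-loss window w_loss,
    the window grows in proportion to the square of the time t (in RTTs)
    elapsed since the last loss event. alpha is a placeholder constant."""
    return w_loss + alpha * t ** 2

def simd_on_loss(w, beta=0.5):
    """Decrease rule: multiplicative back-off on a loss event."""
    return (1.0 - beta) * w

# Toy trace: start from a post-loss window of 10 segments, probe for 5 RTTs,
# then suffer a loss.
w_loss = 10.0
trajectory = [simd_window(w_loss, t) for t in range(6)]
print(trajectory)                 # super-linear (quadratic) growth
print(simd_on_loss(trajectory[-1]))
```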
Abstract:
The increased diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. Contrary to previous memory-less controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) they enable a wider region of TCP-friendliness, and thus more flexibility in trading off among smoothness, aggressiveness, and responsiveness; and (2) they ensure a faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with the time elapsed since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
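As a hedged illustration of the modeling idea shared by the two abstracts above (approximating a nonlinear manifold and its dynamics with piecewise linear models), the sketch below clusters states and fits one linear map per region by plain least squares; it deliberately omits the graphical model and the variational Bayesian structure selection described in the second abstract, and all sizes and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear time series: a noisy 2-D limit cycle.
T = 500
theta = np.cumsum(0.1 + 0.01 * rng.standard_normal(T))
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((T, 2))

# Piecewise-linear dynamics (a simplified stand-in for the paper's model):
# partition the state space into K regions with a few k-means iterations,
# then fit one linear map x_{t+1} ~ A_k x_t + b_k per region.
K = 4
centers = X[rng.choice(T, K, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(0) if np.any(labels == k) else centers[k]
                        for k in range(K)])

models = []
for k in range(K):
    idx = np.where(labels[:-1] == k)[0]        # transitions starting in region k
    Phi = np.c_[X[idx], np.ones(len(idx))]     # design matrix [x_t, 1]
    W, *_ = np.linalg.lstsq(Phi, X[idx + 1], rcond=None)
    models.append(W)                           # stacked A_k^T and b_k

# One-step prediction error of the piecewise-linear model.
pred = np.array([np.append(X[t], 1.0) @ models[labels[t]] for t in range(T - 1)])
print("mean one-step error:", np.abs(pred - X[1:]).mean())
```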
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding; the proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, based on which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
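The thesis builds on the evaluation view of Reed-Solomon codes. The sketch below illustrates only that view, encoding a message by evaluating its polynomial at distinct field points; it is simplified to a prime field GF(p), whereas practical RS codes use GF(2^m), and none of the thesis's decoder architectures is reproduced.

```python
# Simplified illustration of evaluation-based Reed-Solomon encoding over a
# prime field GF(p). The prime and message values are arbitrary examples.
P = 929          # a prime, so the integers mod P form a field

def rs_encode(message, n, p=P):
    """Encode k message symbols as an [n, k] RS codeword: treat the message
    as the coefficients of a degree-(k-1) polynomial and evaluate it at the
    n distinct points 0, 1, ..., n-1 of GF(p)."""
    assert len(message) <= n <= p
    def poly_eval(x):
        acc = 0
        for coeff in reversed(message):      # Horner's rule mod p
            acc = (acc * x + coeff) % p
        return acc
    return [poly_eval(x) for x in range(n)]

codeword = rs_encode([3, 270, 1, 12], n=7)
print(codeword)
# Any k = 4 of the 7 symbols determine the message polynomial by
# interpolation, so the code corrects up to (n - k) // 2 = 1 symbol error.
```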
Abstract:
Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it a smart card, microprocessor, dedicated hardware or personal computer. Attacks based on power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume that a potential attacker has access to, or can control, an identical device to the one under attack, which allows him to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, including how the selection of various attack parameters practically affects the efficiency of the secret key recovery, and an examination of the underlying assumption of profiling attacks: that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning based alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
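As a minimal sketch of the profiling and attack phases of a basic template attack (not the thesis's experimental setup), the code below builds Gaussian templates from simulated Hamming-weight leakage and ranks hypotheses by log-likelihood. The leakage model, noise level and trace counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming_weight(v):
    return bin(v).count("1")

def leak(value, n, sigma=1.0):
    """Simulated power samples: Hamming-weight leakage plus Gaussian noise.
    A toy leakage model, not measurements from a real device."""
    return hamming_weight(value) + sigma * rng.standard_normal(n)

# Profiling phase (on a device the attacker controls): build a Gaussian
# template (mean, variance) of the leakage for each candidate byte value.
templates = {v: None for v in range(256)}
for v in range(256):
    traces = leak(v, n=200)
    templates[v] = (traces.mean(), traces.var() + 1e-9)

# Attack phase (on the target device): score each hypothesis by the Gaussian
# log-likelihood of a few observed samples and keep the best one.
secret = 0x5A
observed = leak(secret, n=10)

def log_likelihood(samples, mean, var):
    return -0.5 * np.sum((samples - mean) ** 2 / var + np.log(2 * np.pi * var))

best = max(range(256), key=lambda v: log_likelihood(observed, *templates[v]))
# Pure Hamming-weight leakage only separates values with different weights,
# so this toy attack recovers the weight class of the secret byte.
print("recovered HW class:", hamming_weight(best), "true HW:", hamming_weight(secret))
```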
Abstract:
In regression analysis of counts, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive and thus underdeveloped. We propose a lognormal and gamma mixed negative binomial (NB) regression model for counts, and present efficient closed-form Bayesian inference; unlike conventional Poisson models, the proposed approach has two free parameters to include two different kinds of random effects, and allows the incorporation of prior information, such as sparsity in the regression coefficients. By placing a gamma distribution prior on the NB dispersion parameter r, and connecting a lognormal distribution prior with the logit of the NB probability parameter p, efficient Gibbs sampling and variational Bayes inference are both developed. The closed-form updates are obtained by exploiting conditional conjugacy via both a compound Poisson representation and a Polya-Gamma distribution-based data augmentation approach. The proposed Bayesian inference can be implemented routinely, while being easily generalizable to more complex settings involving multivariate dependence structures. The algorithms are illustrated using real examples. Copyright 2012 by the author(s)/owner(s).
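A forward (generative) sketch of the model as described, assuming a gamma prior on the dispersion r and Gaussian noise on the logit of p through the regression; the Gibbs sampling and variational Bayes inference with Polya-Gamma augmentation are not reproduced here, and all hyperparameter values and data sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generative sketch of the lognormal and gamma mixed NB regression model.
n, d = 200, 3
X = rng.standard_normal((n, d))
beta = np.array([0.5, -1.0, 0.25])             # regression coefficients (illustrative)

r = rng.gamma(shape=2.0, scale=1.0)            # gamma prior on the NB dispersion r
psi = X @ beta + 0.3 * rng.standard_normal(n)  # Gaussian noise on the logit scale
p = 1.0 / (1.0 + np.exp(-psi))                 # logit(p_i) = x_i' beta + eps_i

# y_i ~ NB(r, p_i), drawn via the gamma-Poisson mixture:
# lambda_i ~ Gamma(r, p_i / (1 - p_i)), y_i ~ Poisson(lambda_i).
lam = rng.gamma(shape=r, scale=p / (1.0 - p))
y = rng.poisson(lam)

print("dispersion r =", round(r, 3), " mean count =", y.mean())
```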