890 results for Branch and Bound algorithm
Abstract:
In many online applications, we need to maintain quantile statistics for a sliding window on a data stream. Sliding windows in their natural form are defined as the most recent N data items. In this paper, we study the problem of estimating quantiles over other types of sliding windows. We present a uniform framework to process quantile queries for time-constrained and filter-based sliding windows. Our algorithm makes one pass over the data stream and maintains an ε-approximate summary. It uses O((1/ε²) log²(εN)) space, where N is the number of data items in the window. We extend this framework to process generalized constrained sliding-window queries and prove that our technique is applicable to flexible window settings. Our performance study indicates that the space required in practice is much less than the theoretical bound and that the algorithm supports high-speed data streams.
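As a point of reference, here is a minimal sketch of the sliding-window quantile interface the abstract describes. This is an exact baseline that stores the full window; the paper's contribution is to replace that buffer with an ε-approximate summary in much less space. Class and parameter names are illustrative, not from the paper:

```python
from collections import deque

class SlidingWindowQuantile:
    """Exact baseline: keeps the most recent N items and answers quantile
    queries by sorting. The paper's one-pass summary replaces this O(N)
    buffer with an epsilon-approximate sketch in O((1/eps^2) log^2(eps*N))
    space."""

    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)

    def insert(self, x):
        self.window.append(x)  # the oldest item expires automatically

    def quantile(self, phi):
        """Return the phi-quantile (0 <= phi <= 1) of the current window."""
        items = sorted(self.window)
        rank = min(int(phi * len(items)), len(items) - 1)
        return items[rank]

sw = SlidingWindowQuantile(window_size=5)
for x in [9, 1, 7, 3, 5, 8, 2]:
    sw.insert(x)
print(sw.quantile(0.5))  # median of the last 5 items [7, 3, 5, 8, 2] -> 5
```

An ε-approximate summary relaxes `quantile` so the returned element's rank may be off by at most εN, which is what makes sublinear space possible.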
Abstract:
Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments, such as phase noise, are the key bottlenecks in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches operate on well-designed sum-product algorithms, which involve iteratively passing probabilistic messages between the channel statistical information and the decoding information. Channel statistical information generally entails high computational complexity because its probabilistic model may involve continuous random variables. The detailed knowledge of channel statistics required by these algorithms makes them an inadequate choice for real-world applications due to power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform separate A Posteriori Probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the considered modulation/coding scheme, as it only exploits the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of slight performance degradation.
With the aim of improving the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the information received from the channel by exchanging it in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver, for separate APP-based synchronization and decoding, is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI) followed by phase estimation on the polarization of interest. This iterative receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operational principle of an M/S-PE block is to improve the phase tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
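The core idea of a decision-directed phase estimator can be illustrated in a few lines: multiply each received sample by the conjugate of the symbol decision to strip the modulation, then take the angle of the average. The toy sketch below substitutes the known symbols for the decoder's APP-based soft decisions and uses a constant phase offset instead of a time-varying phase-noise process; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK symbols on the unit circle.
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 200)))

# Channel: common phase rotation plus additive noise. In the thesis the
# phase varies over time; a constant offset keeps this example minimal.
true_phase = 0.3  # rad
received = symbols * np.exp(1j * true_phase) + 0.05 * (
    rng.standard_normal(200) + 1j * rng.standard_normal(200)
)

# Decision-directed estimation: multiplying by conj(symbol decision)
# removes the data modulation, leaving only the common phase offset.
# Here we use the known symbols to isolate the estimation step.
phase_est = float(np.angle(np.sum(received * np.conj(symbols))))
print(phase_est)  # close to the true offset of 0.3 rad
```

In the actual receiver the hard symbols would be replaced by soft decisions weighted by the decoder's APPs, which is what makes the estimator modulation/coding agnostic.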
Abstract:
Purpose – In 2001, Euronext-Liffe introduced single security futures contracts for the first time. The purpose of this paper is to examine the impact that these single security futures had on the volatility of the underlying stocks. Design/methodology/approach – The Inclan and Tiao algorithm was used to test whether the volatility of the underlying securities changed after universal futures were introduced. Findings – It was found that in the aftermath of the introduction of universal futures the volatility of the underlying securities increased. This increased volatility is not apparent in the control sample, which suggests that single security futures did have some impact on the volatility of the underlying securities. Originality/value – Despite the huge literature examining the effects of a futures listing on the volatility of underlying stock returns, little consensus has emerged. This paper adds to the dialogue by focusing on the effects of single security futures contracts rather than concentrating on the effects of index futures contracts.
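The Inclan and Tiao statistic referenced above is simple to state: with C_k the cumulative sum of squared returns, D_k = C_k/C_T − k/T, and a variance change point is flagged where |D_k| is largest. A minimal sketch on synthetic data (the critical-value test and the iterated refinement of the full ICSS algorithm are omitted):

```python
import numpy as np

def centered_cusum_of_squares(x):
    """D_k statistic of Inclan and Tiao (1994): D_k = C_k/C_T - k/T,
    where C_k is the cumulative sum of squared observations. A large
    |D_k| signals a change in unconditional variance near position k."""
    x = np.asarray(x, dtype=float)
    c = np.cumsum(x ** 2)
    k = np.arange(1, len(x) + 1)
    return c / c[-1] - k / len(x)

rng = np.random.default_rng(1)
# Synthetic return series whose volatility doubles halfway through.
series = np.concatenate([rng.normal(0, 1.0, 500), rng.normal(0, 2.0, 500)])
d = centered_cusum_of_squares(series)
break_point = int(np.argmax(np.abs(d)))
print(break_point)  # near the true variance break at index 500
```

In the paper's setting, the series would be stock returns before and after the futures listing, and |D_k| exceeding a critical value would indicate a volatility shift.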
Abstract:
We review our recent progress on the study of new nonlinear mechanisms of pulse shaping in passively mode-locked fibre lasers. These include a mode-locking regime featuring pulses with a triangular distribution of the intensity, and spectral compression arising from nonlinear pulse propagation. We also report on our recent experimental studies unveiling new families of vector solitons with precessing states of polarization for multipulsing and bound-state soliton operations in a carbon nanotube mode-locked fibre laser with anomalous dispersion cavity. © 2013 IEEE.
Abstract:
This paper considers the problem of survivability analysis for MPLS networks. Survivability indexes are defined that take into account the specific features of MPLS networks, and an algorithm for their estimation is elaborated. The problem of optimizing the MPLS network structure under constraints on the survivability indexes is then considered, and an algorithm for its solution is suggested. Experimental investigations were carried out and their results are presented.
Abstract:
The constant increase of the human population results in more and more people living in regions prone to emergencies. To protect them from possible emergencies, we need effective decision-making support, because decisions must often be taken with little time and incomplete data. One possible method for taking such decisions is presented in this article.
Abstract:
Next-generation integrated wireless local area network (WLAN) and 3G cellular networks aim to take advantage of the roaming ability of a cellular network and the high data rate services of a WLAN. To ensure successful implementation of an integrated network, many issues must be carefully addressed, including network architecture design, resource management, quality-of-service (QoS), call admission control (CAC), and mobility management.
This dissertation focuses on QoS provisioning, CAC, and network architecture design in the integration of WLANs and cellular networks. First, a new scheduling algorithm and a call admission control mechanism in IEEE 802.11 WLAN are presented to support multimedia services with QoS provisioning. The proposed scheduling algorithm makes use of idle system time to reduce the average packet loss of real-time (RT) services. The admission control mechanism provides long-term transmission quality for both RT and non-real-time (NRT) services by ensuring the packet loss ratio for RT services and the throughput for NRT services.
A joint CAC scheme is proposed to efficiently balance traffic load in the integrated environment. A channel searching and replacement algorithm (CSR) is developed to relieve traffic congestion in the cellular network by using idle channels in the WLAN. The CSR is optimized to minimize the system cost in terms of the blocking probability in the interworking environment. Specifically, it is proved that there exists an optimal admission probability for passive handoffs that minimizes the total system cost, and a method for finding this probability is designed based on linear-programming techniques.
Finally, a new integration architecture, Hybrid Coupling with Radio Access System (HCRAS), is proposed to lower the average cost of intersystem communication (IC) and the vertical handoff latency. An analytical model is presented to evaluate the system performance of the HCRAS in terms of the intersystem communication cost function and the handoff cost function. Based on this model, an algorithm is designed to determine the optimal route for each intersystem communication. Additionally, a fast handoff algorithm is developed to reduce the vertical handoff latency.
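The claim that an optimal admission probability for passive handoffs exists can be illustrated with a toy cost model: admitting more handoffs into the WLAN relieves the cellular network but raises WLAN blocking, so a convex total cost has an interior minimum. The cost terms below are illustrative stand-ins, not the dissertation's actual blocking-probability functions:

```python
def total_cost(p, cell_weight=0.8, wlan_weight=0.5):
    """Hypothetical total blocking cost as a function of the admission
    probability p for passive handoffs. Larger p shifts load from the
    cellular network (first term falls) to the WLAN (second term rises),
    so the sum has a unique interior minimum."""
    cellular_blocking = cell_weight * (1 - p) ** 2
    wlan_blocking = wlan_weight * p ** 2
    return cellular_blocking + wlan_blocking

# Grid search over admission probabilities in [0, 1].
probs = [i / 1000 for i in range(1001)]
p_opt = min(probs, key=total_cost)
print(p_opt)  # interior optimum, neither 0 nor 1
```

For this quadratic toy model the optimum can also be found in closed form (p* = 0.8·2 / (0.8·2 + 0.5·2) weighting aside, p* = 1.6/2.6 ≈ 0.615); the dissertation instead derives it from linear-programming techniques over its real cost functions.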
Abstract:
This dissertation poses a set of six questions about one of the Israel Lobby's particular components, a Potential Christian Jewish coalition (PCJc) within American politics that advocates for Israeli sovereignty over "Judea and Samaria" ("the West Bank"). The study addresses: the profiles of the individuals of the PCJc; its policy positions, the issues that have divided it, and what has prevented, and continues to prevent, the coalition from being absorbed into one or more of the more formally organized components of the Israel Lobby; the resources and methods this coalition has used to attempt to influence U.S. policy on (a) the Middle East, and (b) the Arab-Israeli conflict in particular; the successes or failures of this coalition's advocacy and why it has not organized; and what this case reveals about interest group politics and social movements in the United States. This dissertation follows the descriptive-analytic case-study tradition that comprises a detailed analysis of a specific interest group and one policy issue, which conforms to my interest in the potential Christian Jewish coalition that supports a Jewish Judea and Samaria. I have employed participant observation, interviewing, content analysis and documentary research. The findings suggest: The PCJc consists of Christian Zionists and mostly Jews of the center religious denominations. Orthodox Jewish traditions of separation from Christians inhibit like-minded Christians and Jews from organizing. The PCJc opposes an Arab state in Judea and Samaria, and is not absorbed into more formally organized interest groups that support that policy. The PCJc's resources consist of support and funding from conservatives. Methods include use of education, debates and media. Members of the PCJc are successful because they persist in their support for a Jewish Judea and Samaria and meet through other organizations around Judeo-Christian values. 
The PCJc is deterred from advocacy and organization by a mobilization of bias from a subgovernment in Washington, D.C. comprising Congress, the Executive branch and lobby organizations. The study's results raise questions about interest group politics in America and the degree to which the U.S. political system is pluralistic, suggesting that executive power constrains the agenda to "safe" positions it favors.
Abstract:
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond those of conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging instead multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task by engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
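The compressive-sensing principle behind both the optical and acoustic systems — recovering a sparse signal from far fewer multiplexed measurements than its length — can be sketched with a random sensing matrix. The recovery step below is a brute-force search over sparse supports, a generic stand-in for the dissertation's actual reconstruction algorithms, and all sizes are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, m, k = 12, 6, 2           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[[3, 8]] = [1.0, -0.7]  # a 2-sparse signal
A = rng.standard_normal((m, n))  # random multiplexing (sensing) matrix
y = A @ x_true                # 6 numbers encode the 12-dim sparse signal

# Brute-force sparse recovery: try every size-k support and keep the
# least-squares fit with the smallest residual. (Practical systems use
# fast solvers such as matching pursuit or l1 minimization instead.)
best, x_hat = np.inf, None
for support in combinations(range(n), k):
    cols = list(support)
    coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    r = np.linalg.norm(y - A[:, cols] @ coef)
    if r < best:
        best = r
        x_hat = np.zeros(n)
        x_hat[cols] = coef

print(np.round(x_hat[[3, 8]], 3))  # recovers [1.0, -0.7]
```

The true support fits the measurements with zero residual, while any wrong support almost surely cannot, which is why m ≪ n measurements suffice when the signal is sparse.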
Abstract:
Intriguing lattice dynamics has been predicted for aperiodic crystals that contain incommensurate substructures. Here we report inelastic neutron scattering measurements of phonon and magnon dispersions in Sr14Cu24O41, which contains incommensurate one-dimensional (1D) chain and two-dimensional (2D) ladder substructures. Two distinct acoustic phonon-like modes, corresponding to the sliding motion of one sublattice against the other, are observed for atomic motions polarized along the incommensurate axis. In the long wavelength limit, it is found that the sliding mode shows a remarkably small energy gap of 1.7-1.9 meV, indicating very weak interactions between the two incommensurate sublattices. The measurements also reveal a gapped and steep linear magnon dispersion of the ladder sublattice. The high group velocity of this magnon branch and weak coupling with acoustic phonons can explain the large magnon thermal conductivity in Sr14Cu24O41 crystals. In addition, the magnon specific heat is determined from the measured total specific heat and phonon density of states, and exhibits a Schottky anomaly due to gapped magnon modes of the spin chains. These findings offer new insights into the phonon and magnon dynamics and thermal transport properties of incommensurate magnetic crystals that contain low-dimensional substructures.
Abstract:
Free and "bound" long-chain alkenones (C37:2 and C37:3) in oxidized and unoxidized sections of four organic matter-rich Pliocene and Miocene Madeira Abyssal Plain turbidites (one from Ocean Drilling Program site 951B and three from site 952A) were analyzed to determine the effect of severe post-depositional oxidation on the value of Uk'37. The profiles of both alkenones across the redox boundary show a preferential degradation of the C37:3 compared to the C37:2 compound. Because of the high initial Uk'37 values and the way Uk'37 is calculated, this degradation hardly influences the Uk'37 profiles. However, for lower Uk'37 values, the measured selective degradation would increase Uk'37 by up to 0.17 units, equivalent to 5°C. For most of the Uk'37 band-width, much smaller degradation already increases Uk'37 beyond the analytical error (0.017 units). Consequently, for interpreting the Uk'37 record in terms of past sea surface temperatures, selective degradation needs serious consideration.
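The arithmetic behind the "0.17 units ≈ 5°C" equivalence is easy to reproduce. The index is Uk'37 = [C37:2] / ([C37:2] + [C37:3]), and with the widely used linear calibration Uk'37 = 0.033·T + 0.044 (an assumption here; the paper's calibration may differ), 0.17/0.033 ≈ 5.2°C. A sketch of how preferential C37:3 loss biases the index, with made-up concentrations:

```python
def uk37(c37_2, c37_3):
    """Alkenone unsaturation index: Uk'37 = [C37:2] / ([C37:2] + [C37:3])."""
    return c37_2 / (c37_2 + c37_3)

def sst_from_uk37(uk):
    """Invert the common linear culture calibration Uk'37 = 0.033*T + 0.044
    (assumed here for illustration) to get temperature in deg C."""
    return (uk - 0.044) / 0.033

# Preferential degradation: destroy 60% of the tri-unsaturated C37:3
# while leaving C37:2 intact (concentrations are arbitrary units).
fresh = uk37(2.0, 2.0)
oxidized = uk37(2.0, 2.0 * 0.4)
bias_deg_c = sst_from_uk37(oxidized) - sst_from_uk37(fresh)
print(fresh, oxidized, bias_deg_c)  # index rises, so apparent SST rises
```

Because C37:3 appears only in the denominator, any selective loss of it pushes Uk'37 (and hence the inferred sea surface temperature) upward, exactly the direction of bias the abstract warns about.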
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", generating data on the order of hundreds of millions of records daily. By analyzing these data and making predictions, we can draw up better development plans. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. This paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because resource management is the core role of YARN, the paper then examines the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from applying for resources to finishing allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler, and Fair Scheduler. The main work of this paper is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum-resource-utilization algorithm based on it. The paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of the DRF algorithm of YARN. Because a cluster serves multiple users and multiple resources, each user's resource request is multidimensional. The DRF algorithm divides a user's requested resources into the dominant resource and normal resources: for a given user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource share of each user to be equal.
But in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot promote the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm takes "fairness" into consideration, but not as the main principle; maximizing resource utilization is its main principle and goal. Comparing the results of DRF and the new DRF-based algorithm, we found that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
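For reference, the core of the DRF algorithm (from the original Dominant Resource Fairness paper, not this thesis's variant) fits in a short loop: always grant the next task to the user with the smallest dominant share. The capacities and demands below reproduce the paper's standard two-user example:

```python
def drf_allocate(capacity, demands, max_rounds=1000):
    """Grant tasks one at a time to the user with the smallest dominant
    share, stopping when no user's next task fits in the cluster.
    capacity: total amount of each resource; demands: per-task demand
    vector for each user."""
    n_users, n_res = len(demands), len(capacity)
    usage = [[0.0] * n_res for _ in range(n_users)]
    tasks = [0] * n_users
    for _ in range(max_rounds):
        # Dominant share of a user = max over resources of its used share.
        shares = [max(usage[i][r] / capacity[r] for r in range(n_res))
                  for i in range(n_users)]
        progressed = False
        for i in sorted(range(n_users), key=lambda u: shares[u]):
            fits = all(
                sum(usage[j][r] for j in range(n_users)) + demands[i][r]
                <= capacity[r]
                for r in range(n_res))
            if fits:  # grant one task to the least-served user that fits
                for r in range(n_res):
                    usage[i][r] += demands[i][r]
                tasks[i] += 1
                progressed = True
                break
        if not progressed:
            break
    return tasks

# 9 CPUs and 18 GB of memory; user A's tasks need <1 CPU, 4 GB>,
# user B's need <3 CPUs, 1 GB>. DRF equalizes dominant shares at 2/3.
print(drf_allocate([9, 18], [[1, 4], [3, 1]]))  # [3, 2]
```

User A's dominant resource is memory (4/18 per task) and B's is CPU (3/9 per task); granting A three tasks and B two makes both dominant shares 2/3, which is the equal-dominant-share condition the abstract describes.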
Abstract:
The objective of this dissertation is to explore a more accurate and versatile approach to investigating the neutralization of spores subjected to ultrafast heating and biocide-based stresses, and further to explore and understand novel methods of supplying ultrafast heating and biocides through nanostructured energetic materials. A surface heating method was developed to apply accurate (±25 °C), high-heating-rate thermal energy (200-800 °C, ~10³-10⁵ °C/s). Uniform attachment of bacterial spores onto fine wires in liquids was achieved electrophoretically, and the spores could be quantitatively detached into suspension for enumeration. Spore inactivation increased with temperature and heating rate, and fit a sigmoid response. The neutralization mechanisms of peak temperature and heating rate were correlated to DNA damage at ~10⁴ °C/s, and to coat rupture by ultrafast vapor pressurization inside the spores at ~10⁵ °C/s. Humidity was found to act synergistically with rapid heating and chlorine gas on neutralization efficiency. The primary neutralization mechanism of Cl2 with rapid heat is proposed to be chlorine reacting with the spore surface. The stress-kill correlation above provides guidance for exploring new biocidal thermites and for probing their mechanisms. Results show that nano-Al/K2S2O8 released more gas at a lower temperature and generated a higher maximum pressure than the other nano-Al/oxysalt thermites. Given that this thermite formulation generates a similar amount of SO2 as O2, it can be considered a potential candidate for energetic biocidal applications. The reaction mechanisms of persulfate- and other oxysalt-containing thermites can be divided into two groups: the reactive thermites (e.g. Al/K2S2O8), which generate ~10× higher pressures and ~10× shorter burn times, ignite via a solid-gas Al/O2 reaction, while the less reactive thermites (e.g. Al/K2SO4) follow a condensed-phase Al/O reaction mechanism.
These different ignition mechanisms were further re-evaluated by investigating the roles of free and bound oxygen. A constant critical reaction rate for ignition was found, which is independent of ignition temperature, heating rate, and free versus bound oxygen.
Abstract:
In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average, and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24, and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from largest to smallest window size) of 71% for strong fronts and 120% for weak fronts.
Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction by a preliminary smoothing of the data considerably improves the frontal detection efficiency as well as the global quality of the results. Consequently, the use of 4 km data enables frontal detections similar to 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length, and location. This method, using 4 km data, is easily applicable to large regions or at the global scale, with far fewer constraints on data manipulation and processing time relative to 1 km data.
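The role of the preliminary median smoothing is easy to demonstrate on a synthetic SST field: filter first, then look for gradient maxima. This is a bare-bones single-image sketch, not the histogram-based SIED or CMW algorithms; field size, noise level, and kernel size are illustrative:

```python
import numpy as np

def median_smooth(field, k):
    """k x k median filter (border pixels left unfiltered), standing in
    for the preliminary smoothing applied before front detection."""
    out = field.copy()
    r = k // 2
    for i in range(r, field.shape[0] - r):
        for j in range(r, field.shape[1] - r):
            out[i, j] = np.median(field[i - r:i + r + 1, j - r:j + r + 1])
    return out

rng = np.random.default_rng(3)
# Synthetic SST field: a 1 degC front between columns 31 and 32,
# plus sensor noise that the smoothing is meant to suppress.
sst = np.where(np.arange(64)[None, :] < 32, 20.0, 21.0)
sst = sst + rng.normal(0, 0.25, (64, 64))

# Smooth, then locate the column of maximum mean thermal gradient.
gy, gx = np.gradient(median_smooth(sst, 5))
grad_mag = np.hypot(gx, gy)
front_col = int(np.argmax(grad_mag.mean(axis=0)))
print(front_col)  # the gradient maximum sits at the true front location
```

A median filter is a natural choice here because, unlike averaging or Gaussian convolution, it suppresses speckle noise while largely preserving the sharp temperature step that defines the front.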