69 results for "Branch and Bound algorithms"
Abstract:
Linear adaptive channel equalization using the least mean square (LMS) algorithm and the recursive least-squares (RLS) algorithm is proposed for an innovative multi-user (MU) MIMO-OFDM wireless broadband communications system. The proposed equalization method adaptively compensates for the channel impairments caused by frequency selectivity in the propagation environment. Simulations of the proposed adaptive equalizer are conducted using a training-sequence method to determine optimal performance through a comparative analysis. Results show an improvement of 0.15 in BER (at an SNR of 16 dB) when adaptive equalization with the RLS algorithm is used, compared to the case in which no equalization is employed. In general, adaptive equalization using the LMS and RLS algorithms proved significantly beneficial for MU-MIMO-OFDM systems.
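To make the equalizer concrete, here is a minimal NumPy sketch of the LMS tap update that training-sequence equalization of this kind rests on; the filter length, step size, and toy channel are illustrative assumptions, not the paper's configuration.

    import numpy as np

    def lms_equalize(received, training, num_taps=8, mu=0.01):
        """Train a linear equalizer with the LMS update w <- w + mu * e * x."""
        w = np.zeros(num_taps)
        for k in range(num_taps - 1, len(training)):
            x = received[k - num_taps + 1:k + 1][::-1]  # most recent sample first
            e = training[k] - w @ x                     # error vs known training symbol
            w += mu * e * x                             # stochastic-gradient step
        return w

    # Toy usage: BPSK training symbols through a frequency-selective channel.
    rng = np.random.default_rng(0)
    symbols = rng.choice([-1.0, 1.0], size=2000)
    received = np.convolve(symbols, [1.0, 0.4, 0.2])[:len(symbols)]
    received += 0.05 * rng.standard_normal(len(symbols))
    w = lms_equalize(received, symbols)

An RLS equalizer replaces the scalar step size with a recursively updated inverse correlation matrix, trading higher per-sample cost for faster convergence.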
Abstract:
Recently there has been significant interest among researchers and practitioners in the use of Bluetooth as a complementary source of transport data. However, the literature offers only a limited understanding of the Bluetooth MAC Scanner (BMS) based data acquisition process and of the properties of the data being collected. This paper first provides insight into the BMS data acquisition process. Thereafter, it presents interesting findings from analysis of real BMS data from both motorway and arterial networks in Brisbane, Australia. The knowledge gained helps researchers and practitioners understand the BMS data being collected, which is vital to the development of management and control algorithms that use the data.
Abstract:
Purpose: This article reports on a research project that explored social media best practice in the public library sector. Design/methodology/approach: The primary research approach for the project was case study. Two organisations participated in case studies that involved interviews, document analysis, and social media observation. Findings: The two case study organisations use social media effectively to facilitate participatory networks; however, there have been challenges surrounding its implementation in both organisations. These include negotiating the requirements of governing bodies and broader organisational environments, and managing staff reluctance around the implementation. As social media use continues to grow and libraries continue to take up new platforms, social media must be considered another service point of the virtual branch, and indeed of the library service as a whole. This acceptance of social media as core business is critical to the successful implementation of social media based activities. Practical implications: The article provides an empirically grounded discussion of best practice and the conditions that support it. The findings are relevant for information organisations across all sectors and could inform the development of policy and practice in other organisations. The paper contributes to the broader dialogue around best practice in participatory service delivery and social media use in library and information organisations.
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception applications by anticipating situations in which laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. The method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial when conservative decisions are the most appropriate.
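As a sketch of the classification step, the following trains a kNN model on visual quality features and applies a conservative discard policy; the three features, the synthetic data, and scikit-learn itself are assumptions standing in for the authors' implementation.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical training data: rows are visual image-quality feature vectors
    # (e.g. contrast, sharpness, entropy); labels mark scans affected by smoke.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(1.0, 0.2, size=(100, 3)),    # clear-air scans
                   rng.normal(0.4, 0.2, size=(100, 3))])   # smoke-affected scans
    y = np.array([0] * 100 + [1] * 100)

    clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

    # Conservative policy: discard the laser scan whenever smoke is predicted,
    # matching the abstract's point that the method pays off when conservative
    # decisions are the most appropriate.
    if clf.predict(np.array([[0.5, 0.45, 0.6]]))[0] == 1:
        print("scan likely smoke-affected: fall back to other sensors")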
Abstract:
This paper presents large, accurately calibrated and time-synchronised datasets, gathered outdoors in controlled environmental conditions using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. It discusses how the data collection process was designed, the conditions in which the datasets were gathered, and some possible uses of the data, in particular for evaluating the performance of sensors and perception algorithms for UGVs.
Abstract:
Voltage drop at network peak hours is a significant power quality problem in Low Voltage (LV) distribution feeders. Recently, voltage rise due to high penetration of photovoltaic cells (PVs) has been creating a new power quality problem around midday. In this paper, a voltage control strategy is proposed for household-installed PVs to regulate the voltage along the LV feeder. For this purpose, each PV is controlled to exchange reactive power with the grid, and a droop control method is utilized to coordinate the reactive power exchange of the PVs. The proposed method provides decentralized local voltage support, since it is based only on local measurements and does not require any communication with other PVs. The required converter and filter structure and control algorithms are proposed to ensure the dynamic performance of the system. The study focuses on three-phase PVs, and the network is studied separately at peak and off-peak periods. The efficacy of the proposed voltage support concept is verified through numerical and dynamic analyses in MATLAB and PSCAD/EMTDC.
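A minimal sketch of the local droop characteristic described here, in per-unit quantities; the droop gain and reactive-power limit are illustrative assumptions, not the paper's tuned values.

    def droop_reactive_power(v_meas, v_nom=1.0, gain=2.0, q_max=0.3):
        """Local Volt/Var droop: absorb vars when the voltage is high,
        inject vars when it is low; all quantities in per-unit."""
        q = -gain * (v_meas - v_nom)          # proportional droop characteristic
        return max(-q_max, min(q_max, q))     # respect the converter's VA headroom

    # Midday voltage rise from PV export -> the inverter absorbs reactive power.
    print(droop_reactive_power(1.05))   # about -0.10 pu
    # Evening peak-load voltage drop -> the inverter injects reactive power.
    print(droop_reactive_power(0.96))   # about 0.08 pu

Because each inverter reacts only to its own terminal voltage, no communication link is needed, which is the decentralization property the abstract emphasizes.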
Abstract:
This paper presents a new hybrid evolutionary algorithm, based on Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), for daily Volt/Var control in distribution systems that include Distributed Generators (DGs). Owing to the small X/R ratio and radial configuration of distribution systems, DGs have a strong impact on this problem. Since DGs are independent power producers or privately owned, a price-based methodology is proposed as a signal to encourage DG owners to generate active power. The daily Volt/Var control is, in general, a nonlinear optimization problem. Therefore, an efficient hybrid evolutionary method based on PSO and ACO, called HPSO, is proposed to determine the active power values of DGs, the reactive power values of capacitors, and the tap positions of transformers for the next day. The feasibility of the proposed algorithm is demonstrated and compared with methods based on the original PSO, ACO, and GA algorithms on the IEEE 34-bus distribution feeder.
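For orientation, here is a plain PSO loop of the kind HPSO builds on; the ACO layer and the real Volt/Var objective (losses, voltage limits, capacitor and tap constraints) are not reproduced, and the toy objective below is an assumption.

    import numpy as np

    def pso_minimize(cost, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimize a box-constrained objective with standard PSO."""
        rng = np.random.default_rng(2)
        dim = len(lo)
        x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
        v = np.zeros_like(x)                               # particle velocities
        pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[np.argmin(pbest_cost)].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            c = np.array([cost(p) for p in x])
            better = c < pbest_cost
            pbest[better], pbest_cost[better] = x[better], c[better]
            gbest = pbest[np.argmin(pbest_cost)].copy()
        return gbest, pbest_cost.min()

    # Toy stand-in for the Volt/Var objective over four control variables.
    best, best_cost = pso_minimize(lambda z: np.sum((z - 0.5) ** 2),
                                   np.zeros(4), np.ones(4))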
Abstract:
RC4(n, m) is a stream cipher designed by G. Gong et al. that can be seen as a generalization of the famous RC4 stream cipher designed by Ron Rivest. The authors of RC4(n, m) claim that the cipher resists all the attacks that are successful against the original RC4. This paper reveals cryptographic weaknesses of the RC4(n, m) stream cipher. We develop two attacks. The first is based on non-randomness of the internal state and allows the cipher to be distinguished from a truly random one by an algorithm with access to 2^(4·n) bits of the keystream. The second attack exploits the low diffusion of bits in the KSA and PRGA algorithms and recovers all bytes of the secret key. This attack works only if the initial value of the cipher can be manipulated. Apart from the secret key, the cipher uses two other inputs, namely, the initial value and the initial vector. Although these inputs are fixed in the cipher specification, some applications may allow them to be under the attacker's control. Assuming that the attacker can control the initial value, we show a distinguisher for the cipher and a secret key recovery attack that, for an L-bit secret key, recovers it in about (L/n) · 2^n steps. The attack has been implemented on a standard PC and can reconstruct the secret key of RC4(8, 32) in less than a second.
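For reference, here is the textbook byte-oriented RC4 that RC4(n, m) generalizes; the KSA and PRGA named in the abstract are the two loops below. This is the original cipher, not the modified one attacked in the paper.

    def rc4_keystream(key, length):
        """Textbook RC4: key scheduling (KSA) followed by keystream
        generation (PRGA); RC4(n, m) generalizes this to n-bit words."""
        S = list(range(256))
        j = 0
        for i in range(256):                       # KSA: key-dependent shuffle
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        out = []
        for _ in range(length):                    # PRGA: one byte per step
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    print(rc4_keystream(b"Key", 8).hex())          # eb9f7781b734ca72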
Abstract:
We present a text watermarking scheme that embeds a bitstream watermark W_i in a text document P while preserving the meaning, context, and flow of the document. The document is viewed as a set of paragraphs, each paragraph being a set of sentences. The sequence of paragraphs and sentences used to embed watermark bits is permuted using a secret key. Then, English-language sentence transformations are used to modify sentence lengths, thus embedding watermark bits in the Least Significant Bits (LSBs) of the sentences’ cardinalities. The embedding and extracting algorithms are public, while the secrecy and security of the watermark depend on a secret key K. The probability of false positives is extremely small, hence avoiding incidental occurrences of the watermark in random text documents. Majority voting provides security against text addition, deletion, and swapping attacks, further reducing the probability of false positives. The scheme is secure against the general attacks on text watermarks such as reproduction (photocopying, fax), reformatting, synonym substitution, text addition, text deletion, text swapping, paragraph shuffling, and collusion attacks.
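A heavily simplified sketch of the embedding idea: permute the sentence order with a secret key, then force the least significant bit of each selected sentence's word count to the watermark bit. The real scheme uses meaning-preserving sentence transformations; the filler-word insertion here is a stand-in assumption.

    import random

    def embed_bits(sentences, bits, key):
        order = list(range(len(sentences)))
        random.Random(key).shuffle(order)       # key-based permutation
        out = list(sentences)
        for pos, bit in zip(order, bits):
            words = out[pos].split()
            if len(words) % 2 != bit:           # LSB of the sentence cardinality
                words.insert(1, "indeed")       # stand-in for a real transformation
            out[pos] = " ".join(words)
        return out

    def extract_bits(sentences, n_bits, key):
        order = list(range(len(sentences)))
        random.Random(key).shuffle(order)
        return [len(sentences[pos].split()) % 2 for pos in order[:n_bits]]

    marked = embed_bits(["The cat sat on the mat.", "It rained all day."], [0, 1], key=42)
    assert extract_bits(marked, 2, key=42) == [0, 1]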
Abstract:
Motivated by privacy issues associated with dissemination of signed digital certificates, we define a new type of signature scheme called a ‘Universal Designated-Verifier Signature’ (UDVS). A UDVS scheme can function as a standard publicly-verifiable digital signature but has additional functionality which allows any holder of a signature (not necessarily the signer) to designate the signature to any desired designated verifier (using the verifier’s public key). Given the designated signature, the designated verifier can verify that the message was signed by the signer, but is unable to convince anyone else of this fact. We propose an efficient deterministic UDVS scheme constructed using any bilinear group-pair. Our UDVS scheme functions as a standard Boneh-Lynn-Shacham (BLS) signature when no verifier-designation is performed, and is therefore compatible with the key-generation, signing and verifying algorithms of the BLS scheme. We prove that our UDVS scheme is secure in the sense of our unforgeability and privacy notions for UDVS schemes, under the Bilinear Diffie-Hellman (BDH) assumption for the underlying group-pair, in the random-oracle model. We also demonstrate a general constructive equivalence between a class of unforgeable and unconditionally-private UDVS schemes having unique signatures (which includes the deterministic UDVS schemes) and a class of ID-Based Encryption (IBE) schemes which contains the Boneh-Franklin IBE scheme but not the Cocks IBE scheme.
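In LaTeX-style notation, a hedged sketch of the pairing equations behind a BLS-based construction of this kind, as we read the abstract; the symbols (generator g, pairing e, hash H, signer key x_S with public key y_S = g^{x_S}, verifier key x_V with y_V = g^{x_V}) are assumed notation, not necessarily the paper's.

    \sigma       = H(m)^{x_S}                 % standard BLS signature
    e(\sigma, g) = e(H(m), y_S)               % public BLS verification
    \hat\sigma   = e(\sigma, y_V)             % designation to verifier V
    \hat\sigma   = e(H(m), y_S)^{x_V}         % check by the designated verifier

By bilinearity, e(\sigma, y_V) = e(H(m), y_S)^{x_V}; since the designated verifier can compute the right-hand side alone using its secret key x_V, the designated signature cannot convince anyone else, which is the privacy property the abstract describes.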
Abstract:
Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals, goal-directed navigation facilitates finding food, seeking shelter, or migrating; similarly, robots perform goal-directed navigation to find a charging station, get out of the rain, or guide a person to a destination. This similarity in tasks extends to the environment as well: increasingly, mobile robots operate in the same underwater, ground, and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations, and navigation algorithms that seem far removed from what we know of how animals navigate; these navigation systems are shaped by key principles of navigation in ‘real-world’ environments, including dealing with uncertainty in sensing, landmark observation, and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation, the extent to which these principles have been adopted by biomimetic navigation models, and why.
Abstract:
Problem addressed: Wrist-worn accelerometers are associated with greater compliance; however, validated algorithms for predicting activity type from wrist-worn accelerometer data are lacking. This study compared the activity recognition rates of an activity classifier trained on acceleration signals collected at the wrist and at the hip. Methodology: 52 children and adolescents (mean age 13.7 +/- 3.1 years) completed 12 activity trials that were categorized into 7 activity classes: lying down, sitting, standing, walking, running, basketball, and dancing. During each trial, participants wore an ActiGraph GT3X+ tri-axial accelerometer on the right hip and on the non-dominant wrist. Features were extracted from 10-s windows and input into a regularized logistic regression model in R (glmnet, L1 penalty). Results: Classification accuracy for the hip and wrist was 91.0% +/- 3.1% and 88.4% +/- 3.0%, respectively. The hip model exhibited excellent classification accuracy for sitting (91.3%), standing (95.8%), walking (95.8%), and running (96.8%); acceptable accuracy for lying down (88.3%) and basketball (81.9%); and modest accuracy for dance (64.1%). The wrist model exhibited excellent classification accuracy for sitting (93.0%), standing (91.7%), and walking (95.8%); acceptable accuracy for basketball (86.0%); and modest accuracy for running (78.8%), lying down (74.6%), and dance (69.4%). Potential impact: Both the hip and wrist algorithms achieved acceptable classification accuracy, allowing researchers to use either placement for activity recognition.
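A rough Python stand-in for the modelling step (the paper fits the model in R with glmnet); the synthetic features below are assumptions, and scikit-learn's saga solver replaces glmnet's coordinate descent.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical window-level features (e.g. per-axis mean, standard
    # deviation, dominant frequency) for 10-s accelerometer windows.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((600, 12))
    y = rng.integers(0, 7, size=600)         # 7 activity classes

    # L1-regularized (lasso) multinomial logistic regression.
    clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
    clf.fit(X, y)
    print(clf.predict(X[:5]))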
Abstract:
NIST's forthcoming Advanced Hash Standard (AHS) competition to select the SHA-3 hash function requires that each candidate hash function submission have at least one construction supporting the FIPS 198 HMAC application. As part of its evaluation, NIST aims to select either a candidate hash function that is more resistant to known side channel attacks (SCA) when plugged into HMAC, or one that has an alternative MAC mode more resistant to known SCA than the other submitted alternatives. In response, we perform differential power analysis (DPA) on possible smart card implementations of some recently proposed MAC alternatives to the NMAC (a fully analyzed variant of HMAC) and HMAC algorithms, and of NMAC/HMAC versions of some recently proposed hash and compression function modes. We show that the recently proposed BNMAC and KMDP MAC schemes are even weaker than NMAC/HMAC against DPA attacks, whereas multi-lane NMAC, EMD MAC, and the keyed wide-pipe hash offer security similar to NMAC against DPA attacks. Our DPA attacks do not work on the NMAC settings of the MDC-2, Grindahl, and MAME compression functions.
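As a sketch of the attack primitive, first-order DPA amounts to splitting power traces by a predicted key-dependent intermediate bit and testing for a difference of means; the synthetic traces and the leaking sample index below are assumptions for illustration.

    import numpy as np

    def dpa_difference_of_means(traces, selection_bits):
        """Classic first-order DPA statistic: mean trace of the '1' group
        minus mean trace of the '0' group; peaks reveal leakage."""
        traces = np.asarray(traces, dtype=float)
        sel = np.asarray(selection_bits, dtype=bool)
        return traces[sel].mean(axis=0) - traces[~sel].mean(axis=0)

    # Toy usage: a key-dependent bit leaks a small bias at sample 40.
    rng = np.random.default_rng(4)
    bits = rng.integers(0, 2, size=1000).astype(bool)
    traces = rng.standard_normal((1000, 100))
    traces[bits, 40] += 0.5                   # leakage correlated with the bit
    diff = dpa_difference_of_means(traces, bits)
    print(int(np.argmax(np.abs(diff))))       # -> 40, the leaking sample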
Abstract:
A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts, and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second-best experts have a gap between their mean losses. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same questions for the shifting expert problem. First, what are simple and efficient algorithms for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution? Second, how can the performance of such algorithms on easy data be combined efficiently with worst-case robustness? A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
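For concreteness, here is the worst-case-safe Hedge baseline mentioned above (exponential weights over a fixed expert set); the learning rate and toy losses are assumptions, and the shifting-expert and FlipFlop machinery is not reproduced.

    import numpy as np

    def hedge(loss_matrix, eta=0.1):
        """Hedge / exponential weights: play a distribution over experts
        proportional to exp(-eta * cumulative loss)."""
        T, N = loss_matrix.shape
        log_w = np.zeros(N)
        total = 0.0
        for t in range(T):
            p = np.exp(log_w - log_w.max())
            p /= p.sum()                      # current distribution over experts
            total += p @ loss_matrix[t]       # learner's expected loss this round
            log_w -= eta * loss_matrix[t]     # multiplicative-weights update
        return total

    # Toy usage: expert 0 is best on iid losses; Hedge's regret stays small.
    rng = np.random.default_rng(5)
    losses = rng.random((1000, 5))
    losses[:, 0] *= 0.5
    print(hedge(losses) - losses.sum(axis=0).min())   # regret vs best expert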
Abstract:
The proliferation of the web presents an unsolved problem: automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine, using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes: ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages, respectively, and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been demonstrated previously. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering in massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations address the problem of assessing cluster quality where categorical labeling is unavailable and infeasible.
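An illustrative simplification of the tree structure (recursive k-means splits giving branching**depth leaf clusters); this is an assumption-laden sketch of the divide-and-refine idea, not the paper's EM-tree algorithm, which operates over compressed document representations.

    import numpy as np

    def cluster_tree(points, branching=4, depth=2, iters=5, rng=None):
        """Recursively split points m ways with plain k-means at each node."""
        rng = rng if rng is not None else np.random.default_rng(6)
        if depth == 0 or len(points) <= branching:
            return points                      # leaf: a fine-grained cluster
        centers = points[rng.choice(len(points), branching, replace=False)]
        for _ in range(iters):                 # k-means iterations at this node
            assign = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
            for k in range(branching):
                members = points[assign == k]
                if len(members):
                    centers[k] = members.mean(axis=0)
        return [cluster_tree(points[assign == k], branching, depth - 1, iters, rng)
                for k in range(branching)]

    tree = cluster_tree(np.random.default_rng(7).random((2000, 8)))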