607 results for transactional boosting
Abstract:
This work proposes a boosting-based transfer learning approach for head-pose classification from multiple low-resolution views. Head-pose classification performance is adversely affected when the source (training) and target (test) data arise from different distributions (due to changes in face appearance, lighting, etc.). Under such conditions, we employ Xferboost, a LogitBoost-based transfer learning framework that integrates knowledge from a few labeled target samples with the source model to effectively minimize misclassifications on the target data. Experiments confirm that the Xferboost framework can improve classification performance by up to 6% when knowledge is transferred between the CLEAR and FBK four-view head-pose datasets.
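The abstract does not spell out the update rule; as a rough illustration of how boosting can blend a few labeled target samples with abundant source data, here is a minimal TrAdaBoost-style sketch. This is not the authors' Xferboost: the weighting scheme, the stump weak learner, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def transfer_boost(Xs, ys, Xt, yt, n_rounds=20):
    """Blend many labeled source samples (Xs, ys) with a few labeled
    target samples (Xt, yt) via instance reweighting."""
    X, y = np.vstack([Xs, Xt]), np.concatenate([ys, yt])
    ns = len(ys)
    w = np.ones(len(y)) / len(y)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(ns) / n_rounds))
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        miss = (stump.predict(X) != y).astype(float)
        # Weighted error is measured on the target portion only.
        eps = np.clip(np.dot(w[ns:], miss[ns:]) / w[ns:].sum(), 1e-10, 0.499)
        beta_t = eps / (1.0 - eps)
        w[:ns] *= beta_src ** miss[:ns]   # damp misclassified source samples
        w[ns:] *= beta_t ** -miss[ns:]    # boost misclassified target samples
        w /= w.sum()
        learners.append(stump)
        alphas.append(np.log(1.0 / beta_t))
    return learners, alphas
```

The intuition: source samples the current learner keeps getting wrong look off-distribution and are gradually damped, while target errors are emphasized as in ordinary boosting.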
Abstract:
Maintaining metadata consistency is a critical issue in designing a filesystem. Although satisfactory solutions are available for filesystems residing on magnetic disks, these solutions may not give adequate performance for filesystems residing on flash devices. Prabhakaran et al. have designed a metadata consistency mechanism specifically for flash chips, called Transactional Flash [1]. It uses a cyclic commit mechanism to provide transactional abstractions. Although a significant improvement over usual journaling techniques, this mechanism has certain drawbacks, such as a complex protocol and the need to read the whole flash during recovery, which slows down the recovery process. In this paper, we propose adding a thin journaling layer on top of Transactional Flash to simplify the protocol and speed up recovery. The simplified protocol, named Quick Recovery Cyclic Commit (QRCC), uses a journal stored on NOR flash for recovery. Our evaluations on an actual raw flash card show that journal writes add negligible penalty compared to the original Transactional Flash's write performance, while the journal enables quick recovery in case of failures.
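As a toy illustration of how a thin journal can shortcut recovery, the sketch below simulates intent and commit records in front of a transactional store. The record layout and all names are invented for this sketch; QRCC's actual on-NOR format is described in the paper.

```python
import json

class JournaledTxnFlash:
    def __init__(self):
        self.journal = []        # stands in for the small NOR journal
        self.store = {}          # committed page -> data ("the flash")
        self.pending = {}        # txn_id -> {page: data}

    def begin(self, txn_id):
        self.pending[txn_id] = {}
        self.journal.append(json.dumps({"op": "begin", "txn": txn_id}))

    def write(self, txn_id, page, data):
        self.pending[txn_id][page] = data
        self.journal.append(json.dumps({"op": "write", "txn": txn_id,
                                        "page": page}))

    def commit(self, txn_id):
        # A single commit record makes the transaction durable.
        self.journal.append(json.dumps({"op": "commit", "txn": txn_id}))
        self.store.update(self.pending.pop(txn_id))

    def recover(self):
        # Recovery scans only the journal, not the whole flash: any
        # transaction lacking a commit record is discarded.
        ops = [json.loads(r) for r in self.journal]
        committed = {o["txn"] for o in ops if o["op"] == "commit"}
        self.pending = {}
        return committed
```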
Abstract:
Software transactional memory (STM) is a promising programming paradigm for shared-memory multithreaded programs. While STM offers the promise of being less error-prone and more programmer-friendly than traditional lock-based synchronization, it also needs to be competitive in performance in order to be adopted in mainstream software. A major source of performance overhead in STM is transactional aborts. Conflict resolution and aborting a transaction typically happen at the transaction level, which has the advantage of being automatic and application-agnostic. However, it has a substantial disadvantage: the STM declares the entire transaction as conflicting, aborts it, and re-executes it fully, instead of partially re-executing only those parts of the transaction that were affected by the conflict. This "re-execute everything" approach has a significant adverse impact on STM performance. To mitigate the abort overheads, we propose a compiler-aided Selective Reconciliation STM (SR-STM) scheme, wherein certain transactional conflicts can be reconciled by partially re-executing the transaction. Ours is a selective hybrid approach that uses compiler analysis to identify those data accesses which are legal and profitable candidates for reconciliation, applies partial re-execution only to these candidates, and handles other conflicting data accesses by the default STM approach of abort and full re-execution. We describe the compiler analysis and code transformations required to support selective reconciliation. We find that SR-STM is effective in reducing transactional abort overheads, improving performance for a set of five STAMP benchmarks by 12.58% on average and up to 22.34%.
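To make the partial re-execution idea concrete, here is a heavily simplified, single-threaded sketch of commit-time reconciliation. In SR-STM the reconcilable accesses and their re-execution slices come from compiler analysis; here the `reconcile` map is passed in by hand and all names are our stand-ins.

```python
class VersionedVar:
    """Shared variable with a version stamp, as STMs commonly keep."""
    def __init__(self, value):
        self.value, self.version = value, 0

    def store(self, value):
        self.value, self.version = value, self.version + 1

def try_commit(read_log, write_log, reconcile):
    """read_log: {var: version seen at first read};
    write_log: {var: tentative new value};
    reconcile: {var: slice}, where slice(write_log) recomputes only the
    values that depend on var."""
    for var, seen in list(read_log.items()):
        if var.version != seen:                # conflict on this variable
            if var in reconcile:
                reconcile[var](write_log)      # partial re-execution
                read_log[var] = var.version    # re-validate this read
            else:
                return False                   # abort -> full re-execution
    for var, value in write_log.items():
        var.store(value)
    return True

# Example: a counter increment survives a concurrent update via its slice.
counter = VersionedVar(0)
read_log = {counter: counter.version}
write_log = {counter: counter.value + 1}
counter.store(5)                               # a concurrent committer wins
ok = try_commit(read_log, write_log,
                {counter: lambda wl: wl.update({counter: counter.value + 1})})
assert ok and counter.value == 6
```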
Abstract:
We consider ZH and WH production at the Large Hadron Collider, where the Higgs decays to a bb̄ pair. We use jet substructure techniques to reconstruct the Higgs boson and construct angular observables involving the leptonic decay products of the vector bosons. These efficiently discriminate between the tensor structure of the HVV vertex expected in the Standard Model and that arising from possible new physics, as quantified by higher-dimensional operators. This can then be used to examine the CP nature of the Higgs as well as CP-mixing effects in the HZZ and HWW vertices separately.
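As an illustration of the kind of angular observable involved, the sketch below computes cos θ*, the polar angle of the negative lepton in the Z rest frame relative to the Z flight direction. The choice of observable and the numpy implementation are ours; the paper constructs several such observables.

```python
import numpy as np

def boost(p, beta):
    """Lorentz-boost four-vector p = (E, px, py, pz) into the frame
    moving with velocity beta (3-vector, |beta| < 1, c = 1)."""
    b2 = np.dot(beta, beta)
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = np.dot(beta, p[1:])
    E = gamma * (p[0] - bp)
    coef = (gamma - 1.0) * bp / b2 - gamma * p[0]
    return np.array([E, *(p[1:] + coef * beta)])

def cos_theta_star(lep_minus, lep_plus):
    z = lep_minus + lep_plus          # reconstructed Z four-vector (lab)
    beta = z[1:] / z[0]               # velocity of the Z in the lab
    lep_in_z = boost(lep_minus, beta) # lepton in the Z rest frame
    return np.dot(lep_in_z[1:], z[1:]) / (
        np.linalg.norm(lep_in_z[1:]) * np.linalg.norm(z[1:]))
```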
Abstract:
We consider the problem of parameter estimation from real-valued multi-tone signals. Such problems arise frequently in spectral estimation and have more recently gained new importance in finite-rate-of-innovation signal sampling and reconstruction. The annihilating filter is a key tool for parameter estimation in these problems. The standard annihilating filter design has to be modified to yield accurate estimates when dealing with real sinusoids, because the real-valued nature of the sinusoids must be factored into the design. We show that the constraint on the annihilating filter can be relaxed by making use of the Hilbert transform; we refer to this as the Hilbert annihilating filter approach and show that it enables accurate parameter estimation. In the single-tone case, the mean-square error performance improves by 6 dB for signal-to-noise ratios (SNR) greater than 0 dB. We also present experimental results in the multi-tone case, which show a significant improvement (about 6 dB) when the parameters are close to 0 or π; in the mid-frequency range, the improvement is about 2 to 3 dB.
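A minimal sketch of the single-tone case: make the real signal analytic with the Hilbert transform, then apply a length-2 annihilating filter [1, -e^{jω0}] whose root encodes the tone frequency. The scipy/numpy details are our assumptions; the paper's design handles the general multi-tone case.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_tone(x_real):
    x = hilbert(x_real)        # analytic signal: approx. a * exp(j*w0*n)
    # Least-squares solve x[n] = z * x[n-1]; the annihilator is [1, -z].
    z = np.vdot(x[:-1], x[1:]) / np.vdot(x[:-1], x[:-1])
    return np.angle(z)         # estimated w0 in rad/sample

n = np.arange(256)
w0 = 0.3
x = np.cos(w0 * n) + 0.05 * np.random.randn(n.size)
print(estimate_tone(x))        # close to 0.3
```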
Abstract:
The paper discusses the status of the Tiga Reservoir fishery pre- and post-Clupeid transplantation. This was achieved by examining species diversity, abundance, and distribution, together with mitigating factors. It concludes with a verdict on the success of the transplantation exercise.
Abstract:
This project describes how to build a gradient boosting predictive model to predict the number of online sales of a product X, of which only the identification number is known, taking into account advertising campaigns and the product's qualitative and quantitative characteristics. The different techniques used, such as cross-validation and blending, are applied and explained. The goal of the project is to implement the model, explain each technique and tool used in detail, and obtain a valid result for the Kaggle competition named Online Product Sales.
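A short sketch of the two named techniques together, using scikit-learn stand-ins (the competition's actual features and model settings are not reproduced here): out-of-fold predictions from cross-validated base models become the inputs of a second-level blender.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def blend(X, y, X_test, models, n_splits=5):
    """Blending: out-of-fold predictions of the base models become the
    features of a second-level model."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    oof = np.zeros((len(X), len(models)))
    test_preds = np.zeros((len(X_test), len(models)))
    for j, model in enumerate(models):
        for train_idx, valid_idx in kf.split(X):
            model.fit(X[train_idx], y[train_idx])
            oof[valid_idx, j] = model.predict(X[valid_idx])
        model.fit(X, y)                      # refit on all data for test
        test_preds[:, j] = model.predict(X_test)
    blender = Ridge().fit(oof, y)            # second-level model
    return blender.predict(test_preds)

models = [GradientBoostingRegressor(n_estimators=200, max_depth=3),
          GradientBoostingRegressor(n_estimators=500, learning_rate=0.05)]
```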
Abstract:
This paper presents a new online multi-classifier boosting algorithm for learning object appearance models. In many cases the appearance model is multi-modal, which we capture by training and updating multiple strong classifiers. The proposed algorithm jointly learns the classifiers and a soft partitioning of the input space, defining an area of expertise for each classifier. We show how this formulation improves the specificity of the strong classifiers, allowing simultaneous location and pose estimation in a tracking task. The proposed online scheme iteratively adapts the classifiers during tracking. Experiments show that the algorithm successfully learns multi-modal appearance models during a short initial training phase, subsequently updating them for tracking an object under rapid appearance changes.
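A toy sketch of the core idea: several online classifiers share the input space through a soft gating, and each incoming sample mostly updates the classifier currently most responsible for its region. The distance-based gating and `SGDClassifier` are stand-ins of ours, not the paper's online-boosted strong classifiers.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class MultiClassifierPool:
    def __init__(self, n_experts, classes):
        self.experts = [SGDClassifier(loss="log_loss")
                        for _ in range(n_experts)]
        self.centers = None        # soft-partition centers, one per expert
        self.classes = classes

    def _responsibility(self, x):
        d = np.linalg.norm(self.centers - x, axis=1)
        w = np.exp(-d)             # closer center -> higher responsibility
        return w / w.sum()

    def update(self, x, y):
        if self.centers is None:
            self.centers = np.tile(x, (len(self.experts), 1))
        r = self._responsibility(x)
        k = int(np.argmax(r))
        # The winner's area of expertise drifts toward the sample.
        self.centers[k] = 0.9 * self.centers[k] + 0.1 * x
        self.experts[k].partial_fit([x], [y], classes=self.classes)
```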
Abstract:
We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to a set of object images and features to be co-clustered. Co-clustering is performed in a way that maximizes discrimination of object images from non-object images, thus emphasizing discriminative features. This provides a way of obtaining perceptual joint clusters of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers which compete for images by their expertise. Each boosting classifier is an aggregation of weak learners, i.e., simple visual features. The obtained classifiers are useful for object detection tasks that exhibit multi-modality, e.g., multi-category and multi-view object detection. Experiments on a set of pedestrian images and a face dataset demonstrate that the method yields intuitive image clusters with associated features and is much superior to conventional boosting classifiers in object detection tasks.
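A rough sketch of the competitive scheme under our own simplifications: each boosting classifier separates "its" object images from the shared non-object set, and object images are re-assigned each round to the classifier that scores them highest. `AdaBoostClassifier` stands in for the paper's weak-learner aggregations.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def co_cluster(X_obj, X_neg, n_clusters=2, n_rounds=5):
    rng = np.random.default_rng(0)
    assign = rng.integers(n_clusters, size=len(X_obj))
    clfs = [AdaBoostClassifier(n_estimators=50) for _ in range(n_clusters)]
    for _ in range(n_rounds):
        for k, clf in enumerate(clfs):
            members = X_obj[assign == k]
            if len(members) == 0:          # keep an empty cluster alive
                members = X_obj[:1]
            X = np.vstack([members, X_neg])
            y = np.r_[np.ones(len(members)), np.zeros(len(X_neg))]
            clf.fit(X, y)
        # Each image moves to the classifier claiming it most confidently.
        scores = np.column_stack([c.decision_function(X_obj) for c in clfs])
        assign = scores.argmax(axis=1)
    return clfs, assign
```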