921 results for Docker, ARM, Raspberry Pi, single board computer, QEMU, Sabayon Linux, Gentoo Linux
Abstract:
Background: Predicting protein subnuclear localization is a challenging problem. Some previous approaches based on non-sequence information, including Gene Ontology annotations and kernel fusion, have their respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; the other is to develop an ensemble method that improves prediction performance using comprehensive information, represented as a high-dimensional feature vector obtained by 11 feature extraction methods. Methodology/Principal Findings: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It considers only those feature extraction methods based on amino acid classifications and physicochemical properties. To speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset and a new independent dataset. The overall prediction accuracy for 6 localizations on the Lei dataset is 75.2% and that for 9 localizations on the SNL9 dataset is 72.1%, both under leave-one-out cross-validation; the accuracies are 71.7% on the multi-localization dataset and 69.8% on the new independent dataset. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions: Our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method.
It is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.
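Neither the two-stage SVM nor the 11 feature extraction methods are detailed in this abstract. Purely as an illustration of the "automatic search method for the kernel parameter" idea, here is a hypothetical sketch: a grid search over the RBF kernel width driven by leave-one-out cross-validation, with a simple kernel nearest-class-mean classifier standing in for the SVM (all names and the parameter grid are assumptions).

```python
import math

def rbf(x, y, gamma):
    # RBF kernel between two feature vectors
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def classify(x, train, gamma):
    # kernel nearest-class-mean: pick the class with the highest
    # average kernel similarity to x (a stand-in for the SVM)
    scores = {}
    for label, vec in train:
        scores.setdefault(label, []).append(rbf(x, vec, gamma))
    return max(scores, key=lambda lab: sum(scores[lab]) / len(scores[lab]))

def loo_accuracy(data, gamma):
    # leave-one-out cross-validation accuracy for a given kernel width
    correct = sum(
        classify(vec, data[:i] + data[i + 1:], gamma) == label
        for i, (label, vec) in enumerate(data)
    )
    return correct / len(data)

def auto_gamma(data, grid=(0.01, 0.1, 1.0, 10.0)):
    # automatic kernel-parameter search: keep the gamma that
    # maximises leave-one-out accuracy over a hypothetical grid
    return max(grid, key=lambda g: loo_accuracy(data, g))
```

The same search loop would wrap a real SVM in practice; the toy classifier merely keeps the sketch self-contained.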
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requirements is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene-mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA through analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.
Abstract:
A novel multiple regression method (RM) is developed to predict identity-by-descent probabilities at a locus L (IBDL) among individuals without pedigree, given information on surrounding markers and population history. These IBDL probabilities are a function of the increase in linkage disequilibrium (LD) generated by drift in a homogeneous population over generations. Three parameters are sufficient to describe population history: effective population size (Ne), number of generations since foundation (T), and marker allele frequencies among founders (p). The IBDL probabilities are used in a simulation study to map a quantitative trait locus (QTL) via variance component estimation. RM is compared to a coalescent method (CM) in terms of power and robustness of QTL detection. Differences between RM and CM are small but significant. For example, RM is more powerful than CM in dioecious populations, but not in monoecious populations. Moreover, RM is more robust than CM when marker phases are unknown, when there is complete LD among founders, or when Ne is mis-specified, and less robust when p is mis-specified. CM utilises all marker haplotype information, whereas RM utilises the information contained in each individual marker and in all possible marker pairs, but not in higher-order interactions. RM consists of a family of models encompassing four different population structures and two ways of using marker information, which contrasts with the single model that must cater for all possible evolutionary scenarios in CM.
Abstract:
Automatic pain monitoring has the potential to greatly improve patient diagnosis and outcomes by providing a continuous objective measure. One of the most promising approaches is to detect pain automatically via facial expressions. However, current approaches have failed due to their inability to: 1) integrate rigid and non-rigid head motion into a single feature representation, and 2) incorporate salient temporal patterns into the classification stage. In this paper, we tackle the first problem by developing a “histogram of facial action units” representation using Active Appearance Model (AAM) face features, and then utilize a Hidden Conditional Random Field (HCRF) to overcome the second issue. We show that both of these methods improve performance on the task of pain detection at the sequence level compared to current state-of-the-art methods on the UNBC-McMaster Shoulder Pain Archive.
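The abstract does not specify how the “histogram of facial action units” is built. As a rough, hypothetical sketch only, a sequence-level histogram over quantized action-unit intensities might look like this (the bin count and intensity range are assumptions, not values from the paper):

```python
from collections import Counter

def au_histogram(frame_aus, n_units, n_bins=4, max_intensity=5.0):
    # Count how often each (action unit, quantized intensity) pair occurs
    # across the sequence, then normalise by the number of frames.
    hist = Counter()
    for frame in frame_aus:                      # frame = per-unit intensities
        for unit, intensity in enumerate(frame):
            b = min(int(intensity / max_intensity * n_bins), n_bins - 1)
            hist[(unit, b)] += 1
    n_frames = len(frame_aus)
    return [hist[(u, b)] / n_frames
            for u in range(n_units) for b in range(n_bins)]
```

A fixed-length vector like this can then be fed to a sequence-level classifier, which is the role the HCRF plays in the paper.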
Abstract:
We demonstrated for the first time, by ab initio density functional calculations and molecular dynamics simulations, that C0.5(BN)0.5 armchair single-walled nanotubes (NTs) are gapless semiconductors and can form spontaneously via the hybrid connection of graphene/BN nanoribbons (GNRs/BNNRs) at room temperature. The direct synthesis of armchair C0.5(BN)0.5 NTs via the hybrid connection of GNRs/BNNRs is predicted to be both thermodynamically and dynamically stable. These novel armchair C0.5(BN)0.5 NTs possess enhanced conductance similar to that observed in GNRs. Additionally, the zigzag C0.5(BN)0.5 NTs are narrow-band-gap semiconductors, which may have potential applications in light emission. In light of recent experimental progress and the enhanced degree of control in the synthesis of GNRs and BNNRs, our results highlight an interesting avenue for synthesizing a specific type of C0.5(BN)0.5 nanotube (gapless or narrow direct-gap semiconductor), with potentially important applications in BNC-based nanodevices.
Abstract:
The interaction of bare graphene nanoribbons (GNRs) was investigated by ab initio density functional theory calculations with both the local density approximation (LDA) and the generalized gradient approximation (GGA). Remarkably, two bare 8-GNRs with zigzag-shaped edges are predicted to form an (8, 8) armchair single-wall carbon nanotube (SWCNT) without any obvious activation barrier. The formation of a (10, 0) zigzag SWCNT from two bare 10-GNRs with armchair-shaped edges has activation barriers of 0.23 and 0.61 eV using the LDA and the revised PBE exchange-correlation functional, respectively. Our results suggest a possible route to control the growth of specific types of SWCNTs via the interaction of GNRs.
Abstract:
This paper describes an approach to investigating the adoption of Web 2.0 in the classroom using a mixed-methods study. By combining qualitative and quantitative data collection and analysis techniques, we attempt to synergize the results and provide a more valid understanding of Web 2.0 adoption for learning by both teachers and students. This approach is expected to yield a more holistic view of the adoption issues associated with the e-learning 2.0 concept in current higher education than the single-method studies done previously. This paper also presents some early findings on e-learning 2.0 adoption using this research method.
Traffic queue estimation for metered motorway on-ramps through use of loop detector time occupancies
Abstract:
The primary objective of this study is to develop a robust queue estimation algorithm for motorway on-ramps. Real-time queue information is a vital input for dynamic queue management on metered on-ramps. Accurate and reliable queue information enables the on-ramp queue to be managed adaptively to the actual queue size, and thus minimises the adverse impacts of queue flush while increasing the benefit of ramp metering. The proposed algorithm is developed within the Kalman filter framework. The fundamental conservation model is used to predict the system state (queue size) from the flow-in and flow-out measurements. These predictions are then updated with the measurement equation using the time occupancies from mid-link and link-entrance loop detectors. This study also proposes a novel single-point correction method, which resets the estimated system state to eliminate the counting errors that accumulate over time. In the performance evaluation, the proposed algorithm demonstrated accurate and reliable performance and consistently outperformed the benchmark Single Occupancy Kalman filter (SOKF) method. The improvements over SOKF average 62% and 63% in terms of estimation accuracy (MAE) and reliability (RMSE), respectively. The benefit of the algorithm's innovative concepts is well justified by the improved estimation performance in congested ramp traffic conditions, where long queues may significantly compromise the benchmark algorithm's performance.
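The abstract names the ingredients (conservation-model prediction, occupancy-based update, single-point reset) without giving the equations. A minimal scalar Kalman sketch of that recursion might look as follows; the noise parameters are hypothetical, and the conversion from detector time occupancy to a queue-size measurement is assumed to happen before `update()` is called.

```python
class QueueKalman:
    """Scalar Kalman filter for on-ramp queue estimation (a sketch).

    q and r (process/measurement noise variances) are hypothetical
    values, not parameters from the paper.
    """

    def __init__(self, x0=0.0, p0=1.0, q=0.5, r=4.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self, flow_in, flow_out, dt=1.0):
        # conservation model: queue grows by the net inflow over the interval
        self.x += (flow_in - flow_out) * dt
        self.p += self.q

    def update(self, measured_queue):
        # correct the projection with the occupancy-derived measurement
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (measured_queue - self.x)
        self.p *= (1.0 - k)
        self.x = max(self.x, 0.0)               # a queue cannot be negative

    def reset(self, known_queue):
        # single-point correction: clear accumulated counting error
        self.x, self.p = known_queue, 1.0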
Abstract:
The well-known power system stabilizer (PSS) is used to generate supplementary control signals for the excitation system of a generator so as to damp low-frequency oscillations in the power system concerned. To date, various PSS design methods have been proposed, and some of them have been applied in actual power systems to varying degrees. Against this background, small-disturbance eigenvalue analysis and large-disturbance dynamic simulations in the time domain are carried out to evaluate the performance of four different PSS designs: the Conventional PSS (CPSS), Single-Neuron PSS (SNPSS), Adaptive PSS (APSS) and Multi-band PSS (MBPSS). To make the comparisons equitable, the parameters of all four PSSs are determined by the steepest descent method. Finally, an 8-unit 24-bus power system is employed to demonstrate the performance of the four PSSs through well-established eigenvalue analysis as well as numerous digital simulations, and some useful conclusions are drawn.
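The abstract does not state the objective function used for tuning the PSS parameters (presumably some damping-performance criterion). A generic steepest-descent loop with forward-difference gradients, applied here to a stand-in quadratic objective, illustrates the tuning procedure itself:

```python
def steepest_descent(f, x0, lr=0.1, iters=200, h=1e-6):
    # Generic steepest descent with numerical (forward-difference)
    # gradients. In the paper's setting, f would map a PSS parameter
    # vector to a damping objective; that objective is not given here,
    # so the example below minimises a simple quadratic instead.
    x = list(x0)
    for _ in range(iters):
        fx = f(x)
        grad = []
        for i in range(len(x)):
            xh = x[:]
            xh[i] += h
            grad.append((f(xh) - fx) / h)
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x
```

Using one tuning procedure for all four PSS structures is what makes the comparison equitable: differences in damping performance then reflect the controller structures rather than the tuning effort.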
Abstract:
To recognize faces in video, face appearances have been widely modeled as piecewise local linear models that linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions that are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, heteroscedastic PLDA learns a different within-class covariance specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
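The PLDA machinery is beyond a short sketch, but the matching idea — Gaussian local models per identity, with probe frames scored by point-to-model distances that are then fused — can be illustrated with a simplified stand-in: diagonal-covariance Gaussians in place of the learned PLDA models, and a sum-of-nearest-model distances as the fusion rule (both simplifications are assumptions, not the paper's method).

```python
class LocalModel:
    """Diagonal-covariance Gaussian local model: a simplified stand-in
    for the per-model covariances that heteroscedastic PLDA learns."""

    def __init__(self, vectors):
        n, d = len(vectors), len(vectors[0])
        self.mean = [sum(v[i] for v in vectors) / n for i in range(d)]
        self.var = [max(sum((v[i] - self.mean[i]) ** 2 for v in vectors) / n,
                        1e-6) for i in range(d)]

    def distance(self, x):
        # squared Mahalanobis distance under the diagonal covariance
        return sum((xi - m) ** 2 / v
                   for xi, m, v in zip(x, self.mean, self.var))

def match(probe_frames, gallery):
    # gallery: {identity: [LocalModel, ...]}. Fuse point-to-model
    # distances as the sum, over probe frames, of each frame's distance
    # to its nearest local model; return the closest identity.
    def total(models):
        return sum(min(m.distance(f) for m in models) for f in probe_frames)
    return min(gallery, key=lambda ident: total(gallery[ident]))
```

The per-model variances are what make the toy heteroscedastic: each local model carries its own covariance rather than sharing a single global one.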
Abstract:
In an increasingly business technology (BT) dependent world, the impact of the extraordinary changes brought about by the nexus of mobile and cloud technologies, social media and big data is increasingly being felt in the boardroom. As leaders of enterprises of every type and size, board directors can no longer afford to ignore, delegate or avoid BT-related decisions. Competitive, financial and reputational risk increases if boards fail to recognize their role in governing technology as an asset and in removing barriers to improving enterprise business technology governance (EBTG). Directors' awareness of the need for EBTG is increasing. However, industry research shows that board-level willingness to rectify the gap between awareness and action is very low or non-existent. This literature-review-based research identifies barriers to EBTG effectiveness and provides a practical starting point for board analysis. We offer four outcomes that boards might focus on to ensure that the organizations they govern are not left behind by those led by the upcoming breed of technology-savvy leaders. Most extant research looks backward for examples, examining data from before 2010, the time when a tipping point in the personal and business use of multimedia and mobile-internet devices significantly deepened the impacts of the identified nexus technology forces and began rapidly changing the way many businesses engage with their customers, employees and stakeholders. We situate our work amidst these nexus forces, discuss the board's role in EBTG in this context, and modernize current definitions of enterprise technology governance. The primary limitation we faced is the lack of scholarly research relating to EBTG in the rapidly changing digital economy. Although we have used recent (2011-2013) industry surveys, the volume of these surveys and the congruence across them are significant in terms of the levels of increased awareness and the calls for increased board attention and competency in EBTG and strategic information use. Where possible we have used scholarly research to illustrate or discuss industry findings.
Abstract:
The Australian Business Assessment of Computer User Security (ABACUS) survey is a nationwide assessment of the prevalence and nature of computer security incidents experienced by Australian businesses. This report presents the findings of the survey which may be used by businesses in Australia to assess the effectiveness of their information technology security measures.
Abstract:
This paper investigates advanced channel compensation techniques for improving i-vector speaker verification performance in the presence of high intersession variability, using the NIST 2008 and 2010 SRE corpora. The performance of four channel compensation techniques is investigated: (a) weighted maximum margin criterion (WMMC), (b) source-normalized WMMC (SN-WMMC), (c) weighted linear discriminant analysis (WLDA), and (d) source-normalized WLDA (SN-WLDA). We show that, by extracting the discriminatory information between pairs of speakers as well as capturing the source variation information in the development i-vector space, the SN-WLDA-based cosine similarity scoring (CSS) i-vector system provides over 20% improvement in EER for NIST 2008 interview and microphone verification and over 10% improvement in EER for NIST 2008 telephone verification, compared to the SN-LDA-based CSS i-vector system. Further, score-level fusion techniques are analyzed to combine the best channel compensation approaches, providing over 8% improvement in DCF over the best single approach (SN-WLDA) for the NIST 2008 interview/telephone enrolment-verification condition. Finally, we demonstrate that the improvements found in the context of CSS also generalize to state-of-the-art GPLDA, with up to 14% relative improvement in EER for NIST SRE 2010 interview and microphone verification and over 7% relative improvement in EER for NIST SRE 2010 telephone verification.
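The WLDA/SN-WLDA projections themselves are beyond a short sketch, but the two scoring steps the abstract relies on — cosine similarity scoring between i-vectors and score-level fusion across front-ends — are simple enough to illustrate. In this sketch the i-vectors are assumed to have already been channel compensated, and the fusion weights are hypothetical:

```python
import math

def cosine_score(w_test, w_target):
    # cosine similarity scoring (CSS) between two i-vectors, assumed
    # already projected by a channel-compensation transform
    dot = sum(a * b for a, b in zip(w_test, w_target))
    norms = (math.sqrt(sum(a * a for a in w_test))
             * math.sqrt(sum(b * b for b in w_target)))
    return dot / norms

def fuse_scores(scores, weights):
    # score-level fusion: weighted sum of the scores produced by
    # different channel-compensation front-ends (weights hypothetical)
    return sum(w * s for w, s in zip(weights, scores))
```

Verification then reduces to comparing the (fused) score against a threshold tuned on development data.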
Abstract:
Previously, expected satiety (ES) has been measured using software and two-dimensional pictures presented on a computer screen. In this context, ES is an excellent predictor of self-selected portions, when quantified using similar images and similar software. In the present study we sought to establish the veracity of ES as a predictor of behaviours associated with real foods. Participants (N = 30) used computer software to assess their ES and ideal portion of three familiar foods. A real bowl of one food (pasta and sauce) was then presented and participants self-selected an ideal portion size. They then consumed the portion ad libitum. Additional measures of appetite, expected and actual liking, novelty, and reward, were also taken. Importantly, our screen-based measures of expected satiety and ideal portion size were both significantly related to intake (p < .05). By contrast, measures of liking were relatively poor predictors (p > .05). In addition, consistent with previous studies, the majority (90%) of participants engaged in plate cleaning. Of these, 29.6% consumed more when prompted by the experimenter. Together, these findings further validate the use of screen-based measures to explore determinants of portion-size selection and energy intake in humans.