49 results for Density-based Scanning Algorithm

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Publisher:

Abstract:

Integrating evidence from multiple domains is useful in prioritizing disease candidate genes for subsequent testing. We ranked all known human genes (n = 3819) under linkage peaks in the Irish Study of High-Density Schizophrenia Families using three different evidence domains: 1) a meta-analysis of microarray gene expression results using the Stanley Brain collection, 2) a schizophrenia protein-protein interaction network, and 3) a systematic literature search. Each gene was assigned a domain-specific p-value and ranked after evaluating the evidence within each domain. For comparison with this ranking process, a large-scale candidate gene hypothesis was also tested by including genes with Gene Ontology terms related to neurodevelopment. Subsequently, genotypes of 3725 SNPs in 167 genes from a custom Illumina iSelect array were used to evaluate the top-ranked vs. hypothesis-selected genes. Seventy-three genes were both highly ranked and involved in neurodevelopment (category 1), while 42 and 52 genes were exclusive to neurodevelopment (category 2) or highly ranked (category 3), respectively. The most significant associations were observed in the genes PRKG1, PRKCE, and CNTN4, but no individual SNPs were significant after correction for multiple testing. Comparison of the approaches showed an excess of significant tests using the hypothesis-driven neurodevelopment category. Random selection of similarly sized sets of genes from two independent genome-wide association studies (GWAS) of schizophrenia showed that this excess was unlikely to have arisen by chance. In a further meta-analysis of three GWAS datasets, four candidate SNPs reached nominal significance. Although gene ranking using integrated sources of prior information did not enrich for significant results in the current experiment, gene selection using an a priori hypothesis (neurodevelopment) was superior to random selection. As such, further development of gene ranking strategies using more carefully selected sources of information is warranted.
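
The abstract describes assigning each gene a domain-specific p-value and ranking after combining the evidence. A minimal sketch of one standard way to do this (Fisher's method for combining p-values, which is an illustrative choice rather than the study's exact procedure) is shown below; the gene names and p-values are hypothetical placeholders.

    # Minimal sketch: combine domain-specific p-values per gene (Fisher's method)
    # and rank genes. Gene names and p-values are hypothetical placeholders,
    # not data from the study.
    import math
    from scipy.stats import chi2

    domain_pvalues = {
        "GENE_A": [0.002, 0.04, 0.30],   # expression, PPI network, literature
        "GENE_B": [0.10, 0.01, 0.05],
        "GENE_C": [0.50, 0.60, 0.20],
    }

    def fisher_combined_p(pvals):
        # Fisher's statistic: -2 * sum(log p_i), chi-square with 2k degrees of freedom.
        stat = -2.0 * sum(math.log(p) for p in pvals)
        return chi2.sf(stat, df=2 * len(pvals))

    ranking = sorted(domain_pvalues, key=lambda g: fisher_combined_p(domain_pvalues[g]))
    for gene in ranking:
        print(gene, fisher_combined_p(domain_pvalues[gene]))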

Relevance: 100.00%

Publisher:

Abstract:

N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. The motivation for this research is to find a subset of N-gram features that makes a robust indicator of malware. The experiments within this paper represent programs as N-gram density histograms, obtained through dynamic analysis. A Support Vector Machine (SVM) is used as the program classifier to determine the ability of N-grams to correctly determine the presence of malicious software. The preliminary findings show that N-gram sizes of N=3 and N=4 present the best avenues for further analysis.
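
A minimal sketch of the representation and classifier described above: a byte trace is converted to a normalized N-gram (N=3) density histogram and classified with an SVM. The feature hashing, toy traces and labels are illustrative assumptions, not the authors' pipeline or data.

    # Minimal sketch: represent a byte trace as a normalized N-gram density
    # histogram (hashed to a fixed length) and classify with an SVM.
    # The toy traces and labels are illustrative, not real malware data.
    import numpy as np
    from sklearn.svm import SVC

    def ngram_histogram(trace: bytes, n: int = 3, n_bins: int = 256) -> np.ndarray:
        hist = np.zeros(n_bins)
        for i in range(len(trace) - n + 1):
            hist[hash(trace[i:i + n]) % n_bins] += 1
        total = hist.sum()
        return hist / total if total > 0 else hist   # density histogram

    traces = [bytes([i % 251 for i in range(400)]), bytes(400),
              b"\x90" * 400, bytes(range(256)) + bytes(144)]
    labels = [1, 0, 1, 0]                            # 1 = malicious, 0 = benign (toy labels)

    X = np.vstack([ngram_histogram(t) for t in traces])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X))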

Relevance: 100.00%

Publisher:

Abstract:

Mathematical models are useful tools for the simulation, evaluation, optimal operation and control of solar cells and proton exchange membrane fuel cells (PEMFCs). To identify the model parameters of these two types of cells efficiently, a biogeography-based optimization algorithm with mutation strategies (BBO-M) is proposed. BBO-M uses the structure of the biogeography-based optimization (BBO) algorithm, and both a mutation operator motivated by the differential evolution (DE) algorithm and chaos theory are incorporated into the BBO structure to improve the global searching capability of the algorithm. Numerical experiments have been conducted on ten benchmark functions with 50 dimensions, and the results show that BBO-M can produce solutions of high quality and has a fast convergence rate. The proposed BBO-M is then applied to the model parameter estimation of the two types of cells. The experimental results clearly demonstrate the power of the proposed BBO-M in estimating the model parameters of both solar cells and fuel cells.
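
A simplified sketch of the kind of scheme described above: a BBO-style migration step combined with a DE-style mutation. The rates, bounds, elitism rule and sphere test function are illustrative choices, not the settings or benchmarks used in the paper.

    # Simplified sketch of a BBO-style migration step with a DE-style mutation,
    # in the spirit of BBO-M. Rates, bounds and the sphere test function are
    # illustrative, not the paper's configuration.
    import numpy as np

    rng = np.random.default_rng(0)
    pop_size, dim = 20, 10
    lower, upper = -5.0, 5.0
    pop = rng.uniform(lower, upper, (pop_size, dim))

    def fitness(x):                                 # sphere benchmark (minimize)
        return np.sum(x**2)

    for generation in range(100):
        order = np.argsort([fitness(ind) for ind in pop])
        pop = pop[order]                            # best habitats first
        mu = np.linspace(1.0, 0.0, pop_size)        # emigration rates
        lam = 1.0 - mu                              # immigration rates
        new_pop = pop.copy()
        for i in range(pop_size):
            for d in range(dim):
                if rng.random() < lam[i]:           # migration: copy from an emitter
                    j = rng.choice(pop_size, p=mu / mu.sum())
                    new_pop[i, d] = pop[j, d]
            if rng.random() < 0.1:                  # DE/rand/1-style mutation
                a, b, c = rng.choice(pop_size, 3, replace=False)
                new_pop[i] = pop[a] + 0.5 * (pop[b] - pop[c])
            new_pop[i] = np.clip(new_pop[i], lower, upper)
        new_pop[0] = pop[0]                         # simple elitism: keep the current best
        pop = new_pop

    print("best fitness:", min(fitness(ind) for ind in pop))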

Relevance: 100.00%

Publisher:

Abstract:

Mobile malware has been growing in scale and complexity, spurred by the unabated uptake of smartphones worldwide. Android is fast becoming the most popular mobile platform, resulting in a sharp increase in malware targeting the platform. Additionally, Android malware is evolving rapidly to evade detection by traditional signature-based scanning. Despite the detection measures currently in place, timely discovery of new malware is still a critical issue. This calls for novel approaches to mitigate the growing threat of zero-day Android malware. Hence, the authors develop and analyse proactive machine-learning approaches based on Bayesian classification aimed at uncovering unknown Android malware via static analysis. The study, which is based on a large malware sample set covering the majority of existing families, demonstrates detection capabilities with high accuracy. Empirical results and comparative analysis are presented, offering useful insight towards the development of effective static-analytic Bayesian classification-based solutions for detecting unknown Android malware.
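
A minimal sketch of Bayesian classification over binary static features (for example, requested permissions and API calls), in the spirit of the approach described. The feature names, toy apps and labels are hypothetical placeholders, not the study's feature set.

    # Minimal sketch: Bayesian classification of apps from binary static features
    # (e.g. requested permissions / API calls). Feature names, apps and labels
    # are hypothetical placeholders.
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    features = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "exec_call", "crypto_api"]

    # Rows: apps; columns: presence (1) / absence (0) of each static feature.
    X = np.array([
        [1, 1, 1, 1, 0],   # known malware sample (toy)
        [0, 0, 1, 0, 0],   # benign sample (toy)
        [1, 0, 1, 1, 1],
        [0, 1, 1, 0, 0],
    ])
    y = np.array([1, 0, 1, 0])         # 1 = malicious, 0 = benign

    clf = BernoulliNB().fit(X, y)
    unknown_app = np.array([[1, 0, 1, 1, 0]])
    print("P(malicious) =", clf.predict_proba(unknown_app)[0, 1])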

Relevance: 100.00%

Publisher:

Abstract:

Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables, or parameterisation scheme, used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the “Parametric Design Velocity” is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real-valued”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as the software has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD model before and after the parameter perturbation. The implementation procedure includes calculating the geometric movement along the normal direction between two discrete representations of the original and perturbed geometry, respectively. The parametric design velocities can then be directly linked with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
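
A minimal sketch of this chain, assuming matched surface discretizations are available before and after a parameter perturbation: the design velocity is the normal component of the boundary movement divided by the perturbation size, and the objective gradient follows by contracting it with the adjoint surface sensitivities. The arrays, normals and step size below are illustrative; a real implementation would obtain the geometry through the CAD system's API.

    # Minimal sketch: parametric design velocity by finite differences between an
    # original and a perturbed surface discretization, contracted with adjoint
    # surface sensitivities to obtain dJ/dparameter. Arrays and step size are toy values.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pts = 500
    dp = 1e-3                                        # CAD parameter perturbation

    x_orig = rng.random((n_pts, 3))                  # surface points, original model
    normals = np.tile([0.0, 0.0, 1.0], (n_pts, 1))   # outward unit normals (toy)
    x_pert = x_orig + dp * normals * rng.random((n_pts, 1))   # perturbed surface

    # Design velocity: normal boundary movement per unit parameter change.
    design_velocity = np.einsum("ij,ij->i", x_pert - x_orig, normals) / dp

    # Adjoint surface sensitivity (objective change per unit normal movement) and a
    # surface-area weight per point, both assumed available from the flow adjoint.
    adjoint_sens = rng.random(n_pts)
    area_weights = np.full(n_pts, 1.0 / n_pts)

    dJ_dp = np.sum(adjoint_sens * design_velocity * area_weights)
    print("gradient of objective w.r.t. CAD parameter:", dJ_dp)
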
A flow optimisation problem is then presented, in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.

Relevance: 100.00%

Publisher:

Abstract:

The primary objective of this work is the analysis and interpretation of coronal observations of Capella obtained in 1999 September with the High Energy Transmission Grating Spectrometer on the Chandra X-ray Observatory and the Extreme Ultraviolet Explorer (EUVE). He-like lines of O (O VII) are used to derive a density of 1.7 x 10^10 cm^-3 for the coronae of the binary, consistent with the upper limits derived from Fe XXI, Ne IX and Mg XI line ratios. Previous estimates of the electron density based on Fe XXI should be considered as upper limits. We construct emission measure distributions and compare the theoretical and observed spectra to conclude that the coronal material has a temperature distribution that peaks around 4-6 MK, implying that the coronae of Capella were significantly cooler than in previous years. In addition, we present an extended line list with over 100 features in the 5-24 Angstrom wavelength range, and find that the X-ray spectrum is very similar to that of a solar flare observed with SMM. The observed-to-theoretical Fe XVII 15.012-Angstrom line intensity reveals that opacity has no significant effect on the line flux. We derive an upper limit to the optical depth, which we combine with the electron density to derive an upper limit of 3000 km for the size of the Fe XVII emitting region. In the same context, we use the Si IV transition region lines of Capella from HST/Goddard High-Resolution Spectrometer observations to show that opacity can be significant at T = 10^5 K, and derive a path-length of approximately 75 km for the transition region. Both the coronal and transition region observations are consistent with very small emitting regions, which could be explained by small loops over the stellar surfaces.
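
The density quoted above comes from He-like line ratios. A worked sketch of the standard He-like R = f/i density diagnostic, n_e = N_c (R_0 / R - 1), is given below; the R_0 and N_c values are approximate literature values for O VII used purely for illustration, and the measured ratio is hypothetical, so the output is not the paper's result.

    # Worked sketch of the He-like R = f/i density diagnostic,
    #   n_e = N_c * (R_0 / R - 1),
    # where R_0 is the low-density limit of the forbidden/intercombination ratio
    # and N_c the critical density. The O VII values below are approximate
    # literature values for illustration only, not the atomic data used in the paper.
    R0 = 3.95        # approximate low-density limit for O VII
    Nc = 3.4e10      # approximate critical density for O VII [cm^-3]

    R_observed = 1.6                       # hypothetical measured f/i ratio
    n_e = Nc * (R0 / R_observed - 1.0)
    print(f"electron density ~ {n_e:.2e} cm^-3")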

Relevance: 100.00%

Publisher:

Abstract:

This paper presents an Invariant Information Local Sub-map Filter (IILSF) as a technique for consistent Simultaneous Localisation and Mapping (SLAM) in a large environment. It harnesses the benefits of the sub-map technique to improve the consistency and efficiency of Extended Kalman Filter (EKF) based SLAM. The IILSF makes use of invariant information obtained from the estimated locations of features in independent sub-maps, instead of incorporating every observation directly into the global map; the global map is then updated at regular intervals. Applying this technique to the EKF-based SLAM algorithm: (a) reduces the computational complexity of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. Simulation results show that the method was able to accurately fuse local map observations to generate an efficient and consistent global map, in addition to significantly reducing computational cost and data association ambiguities.
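
A minimal 2D sketch of the sub-map idea underlying the abstract: feature estimates accumulated in a local sub-map frame are transformed into the global frame and fused only at intervals, rather than every observation updating the global map directly. The poses, features and fusion step are illustrative, not the IILSF equations.

    # Minimal 2D sketch of sub-map fusion into a global frame at intervals.
    # Poses and features are illustrative toy values.
    import numpy as np

    def to_global(submap_pose, local_feature):
        x, y, theta = submap_pose                     # sub-map origin in the global frame
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        return np.array([x, y]) + R @ local_feature

    global_map = []
    submap_pose = np.array([2.0, 1.0, np.pi / 6])     # where this sub-map was started
    local_features = [np.array([1.0, 0.5]), np.array([0.2, -0.8])]  # estimated locally

    # Periodic fusion step: move the sub-map's feature estimates into the global map.
    for f_local in local_features:
        global_map.append(to_global(submap_pose, f_local))

    print(np.array(global_map))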

Relevance: 100.00%

Publisher:

Abstract:

A technique for optimizing the efficiency of the sub-map method for large-scale simultaneous localization and mapping (SLAM) is proposed. It exploits the benefits of the sub-map technique to improve the accuracy and consistency of extended Kalman filter (EKF)-based SLAM. Error models were developed and used to investigate some of the outstanding issues in employing the sub-map technique in SLAM. Such issues include the size (distance) of an optimal sub-map, the acceptable error effect caused by the process noise covariance on the predictions and estimations made within a sub-map, when to terminate an existing sub-map and start a new one, and the magnitude of the process noise covariance that could produce such an effect. Numerical results obtained from the study and an error-correcting process were used to optimize the accuracy and convergence of the Invariant Information Local Sub-map Filter proposed previously. Applying this technique to the EKF-based SLAM algorithm (a) reduces the computational burden of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. A Monte Carlo analysis of the system is presented as a means of demonstrating the consistency and efficacy of the proposed technique.
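
One of the questions raised above is when to terminate a sub-map. A hedged sketch of one simple rule is shown below: accumulate the effect of the process noise on the pose covariance at each prediction step and open a new sub-map once its trace exceeds a threshold. The noise magnitude, threshold and simplified covariance prediction are toy assumptions, not values derived in the paper.

    # Illustrative sketch of a sub-map termination rule: grow the pose covariance
    # with the process noise at each prediction step and start a new sub-map once
    # its trace exceeds a threshold. All values are toy assumptions.
    import numpy as np

    Q = np.diag([0.01, 0.01, 0.001])      # process noise covariance per step (toy)
    trace_limit = 0.5                     # acceptable accumulated uncertainty (toy)

    P = np.zeros((3, 3))                  # pose covariance at the sub-map origin
    steps_in_submap = 0
    for step in range(200):
        P = P + Q                         # simplified covariance prediction
        steps_in_submap += 1
        if np.trace(P) > trace_limit:
            print(f"close sub-map after {steps_in_submap} steps, start a new one")
            P = np.zeros((3, 3))          # new sub-map starts with a fresh local frame
            steps_in_submap = 0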

Relevance: 100.00%

Publisher:

Abstract:

Clean and renewable energy generation and supply has drawn much attention worldwide in recent years; proton exchange membrane (PEM) fuel cells and solar cells are among the most popular technologies. Accurately modeling PEM fuel cells as well as solar cells is critical in their applications, and this involves the identification and optimization of model parameters. This is, however, challenging due to the highly nonlinear and complex nature of the models. In particular, for PEM fuel cells the model has to be optimized under different operating conditions, making the solution space extremely complex. In this paper, an improved and simplified teaching-learning-based optimization algorithm (STLBO) is proposed to identify and optimize the parameters for these two types of cell models. This is achieved by introducing an elite strategy to improve the quality of the population, and a local search is employed to further enhance the performance of the global best solution. To improve the diversity of the local search, a chaotic map is also introduced. Compared with the basic TLBO, the structure of the proposed algorithm is much simplified and its searching ability is significantly enhanced. The performance of the proposed STLBO is first tested and verified on two low-dimension decomposable problems and twelve large-scale benchmark functions, and then on the parameter identification of PEM fuel cell and solar cell models. Intensive experimental simulations show that the proposed STLBO exhibits excellent performance in terms of accuracy and speed, in comparison with results reported in the literature.
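
A simplified sketch of the ingredients named above: TLBO's teacher and learner phases, an elite-preservation step, and a chaotic (logistic-map) local search around the best solution. The settings, update rules and sphere test function are illustrative simplifications, not the paper's STLBO configuration.

    # Simplified sketch of teaching-learning-based optimization with an elite
    # strategy and a chaotic (logistic-map) local search, in the spirit of STLBO.
    # Settings and the sphere test function are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    pop_size, dim, lower, upper = 20, 10, -5.0, 5.0
    pop = rng.uniform(lower, upper, (pop_size, dim))
    f = lambda x: np.sum(x**2)                      # sphere benchmark (minimize)

    chaos = 0.7                                     # logistic-map state
    for it in range(200):
        fitness = np.array([f(ind) for ind in pop])
        elite = pop[np.argmin(fitness)].copy()      # elite strategy: remember the best

        # Teacher phase: move learners towards the teacher (current best).
        mean = pop.mean(axis=0)
        TF = rng.integers(1, 3)                     # teaching factor in {1, 2}
        candidate = pop + rng.random((pop_size, dim)) * (elite - TF * mean)

        # Learner phase (simplified): learn from a random classmate if it is better.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if f(pop[j]) < f(candidate[i]):
                candidate[i] += rng.random(dim) * (pop[j] - candidate[i])
        candidate = np.clip(candidate, lower, upper)

        # Greedy selection, then re-insert the elite over the current worst.
        better = np.array([f(c) < f(p) for c, p in zip(candidate, pop)])
        pop[better] = candidate[better]
        pop[np.argmax([f(ind) for ind in pop])] = elite

        # Chaotic local search around the current best solution.
        chaos = 4.0 * chaos * (1.0 - chaos)
        trial = np.clip(pop[np.argmin([f(i) for i in pop])] + (chaos - 0.5) * 0.1,
                        lower, upper)
        worst = np.argmax([f(i) for i in pop])
        if f(trial) < f(pop[worst]):
            pop[worst] = trial

    print("best fitness:", min(f(ind) for ind in pop))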

Relevance: 50.00%

Publisher:

Abstract:

This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model, using an extended non-negative sparse coding (NNSC) algorithm that we proposed previously. This algorithm converges to feature basis vectors which exhibit locality and orientation in the spatial and frequency domains. Here, we demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, by exploiting an NIG-based maximum a posteriori (MAP) estimator of an image corrupted by additive Gaussian noise, the noise can be reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. Furthermore, we compare the effectiveness of the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter. The simulation results show that our method outperforms the three denoising approaches mentioned above.
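
The shrinkage idea described above amounts to projecting the noisy data onto a basis, applying a shrinkage nonlinearity to the coefficients, and reconstructing. The sketch below illustrates that pipeline only: a generic soft-threshold stands in for the NIG-derived MAP shrinkage, and an orthonormal DCT basis stands in for the learned NNSC basis, so it is not the authors' method.

    # Minimal sketch of shrinkage-based denoising: analysis, coefficient shrinkage,
    # synthesis. Soft thresholding and the DCT basis are stand-ins for the
    # NIG-derived MAP shrinkage and the learned NNSC basis.
    import numpy as np
    from scipy.fftpack import dct, idct

    rng = np.random.default_rng(3)
    sigma = 0.2
    clean = np.sin(np.linspace(0, 8 * np.pi, 256))          # toy 1-D signal
    noisy = clean + sigma * rng.normal(size=clean.shape)

    coeffs = dct(noisy, norm="ortho")                        # analysis step
    threshold = 2.0 * sigma                                  # noise-adaptive threshold
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
    denoised = idct(shrunk, norm="ortho")                    # synthesis step

    def snr(ref, est):
        return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))

    print(f"SNR noisy: {snr(clean, noisy):.1f} dB, denoised: {snr(clean, denoised):.1f} dB")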

Relevance: 50.00%

Publisher:

Abstract:

Previous research based on theoretical simulations has shown the potential of the wavelet transform to detect damage in a beam by analysing the time-deflection response due to a constant moving load. However, its application to identifying damage from the response of a bridge to a vehicle raises a number of questions. Firstly, it may be difficult to record the difference in the deflection signal between a healthy and a slightly damaged structure to the required level of accuracy and at high scanning frequencies in the field. Secondly, the bridge will have a road profile and will be loaded by a sprung vehicle applying time-varying forces rather than a constant load. As a result, an algorithm that detects damage as a singularity in a plot of wavelet coefficients versus time appears to be very sensitive to noise. This paper addresses these questions by: (a) using the acceleration signal instead of the deflection signal, (b) employing a vehicle-bridge finite element interaction model, and (c) developing a novel wavelet-based approach using the wavelet energy content at each bridge section, which proves to be more sensitive to damage than a wavelet coefficient line plot at a given scale as employed by others.
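
A hedged sketch of the wavelet-energy idea: take the continuous wavelet transform of the acceleration response and sum the squared coefficients over the samples belonging to each bridge section, so that a local change shows up as a change in sectional energy. The signal, wavelet, scales and section division below are toy assumptions, not the vehicle-bridge interaction model or data of the paper.

    # Illustrative sketch: wavelet energy content per bridge section from an
    # acceleration signal. Signal, wavelet, scales and sections are toy choices.
    import numpy as np
    import pywt

    fs = 1000                                        # sampling frequency [Hz]
    t = np.arange(0, 2.0, 1.0 / fs)
    acceleration = np.sin(2 * np.pi * 6 * t)         # toy vehicle-induced response
    acceleration[1200:1210] += 0.3                   # small local disturbance (toy "damage")

    scales = np.arange(1, 64)
    coefs, _ = pywt.cwt(acceleration, scales, "mexh")  # CWT with a Mexican-hat wavelet

    n_sections = 10                                  # divide the crossing into sections
    section_edges = np.linspace(0, len(t), n_sections + 1, dtype=int)
    energy = [np.sum(coefs[:, a:b] ** 2)
              for a, b in zip(section_edges[:-1], section_edges[1:])]

    for k, e in enumerate(energy):
        print(f"section {k}: wavelet energy = {e:.1f}")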

Relevance: 40.00%

Publisher:

Abstract:

A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process at a fixed reduction rate set ‘a priori’, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems in comparison with three other techniques, namely the genetic algorithm with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed new search-space-updating technique is statistically superior to its counterparts.
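
A hedged sketch of the boundary-updating idea described above: after each generation, each variable's bounds are reset from the distribution statistics (here the mean plus or minus a few standard deviations) of the better individuals, instead of shrinking the space at a fixed a-priori rate. The simple GA step, the value of k and the selection fraction are illustrative choices, not the paper's algorithm or settings.

    # Illustrative sketch: dynamic search-space update from distribution statistics
    # of the better individuals, wrapped around a very simple GA. Toy settings.
    import numpy as np

    rng = np.random.default_rng(4)
    pop_size, dim, k = 40, 5, 3.0
    lower = np.full(dim, -10.0)
    upper = np.full(dim, 10.0)
    f = lambda x: np.sum((x - 1.23) ** 2)            # toy objective (minimize)

    pop = rng.uniform(lower, upper, (pop_size, dim))
    for gen in range(50):
        fitness = np.array([f(ind) for ind in pop])
        best_half = pop[np.argsort(fitness)[: pop_size // 2]]

        # Search-space update from distribution statistics of the better individuals.
        mean, std = best_half.mean(axis=0), best_half.std(axis=0)
        lower = np.clip(mean - k * std, -10.0, 10.0)
        upper = np.clip(mean + k * std, -10.0, 10.0)

        # Very simple GA step inside the updated bounds: recombine parents and mutate.
        parents = best_half[rng.integers(len(best_half), size=(pop_size, 2))]
        alpha = rng.random((pop_size, dim))
        pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
        pop += 0.1 * (upper - lower) * rng.normal(size=pop.shape)
        pop = np.clip(pop, lower, upper)

    print("best solution:", pop[np.argmin([f(ind) for ind in pop])])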