917 results for "Experimental evaluation"
Abstract:
In this paper we propose algorithms for combining and ranking answers from distributed heterogeneous data sources in the context of a multi-ontology Question Answering task. Our proposal includes a merging algorithm that aggregates, combines and filters ontology-based search results, and three different ranking algorithms that sort the final answers according to criteria such as popularity, confidence and semantic interpretation of the results. An experimental evaluation on a large-scale corpus indicates improvements in the quality of the search results with respect to a scenario where the merging and ranking algorithms were not applied. These collective methods for merging and ranking make it possible to answer questions whose answers are distributed across ontologies while filtering irrelevant answers, fusing similar answers together, and eliciting the most accurate answer(s) to a question.
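As an illustration only (not the paper's actual algorithms), the merge-then-rank idea described above can be sketched as follows: fuse string-equal answers from the different sources, filter by a confidence threshold, and rank by popularity with confidence as a tiebreaker. All names and thresholds here are hypothetical.

```python
from collections import defaultdict

def merge_and_rank(result_lists, min_confidence=0.3):
    """Fuse equivalent answers from several sources, filter low-confidence
    ones, and rank by popularity (number of supporting sources), breaking
    ties by best confidence. Illustrative sketch only."""
    fused = defaultdict(list)
    for source_results in result_lists:
        for answer, confidence in source_results:
            # crude fusion key: case/whitespace-insensitive answer text
            fused[answer.strip().lower()].append(confidence)
    ranked = []
    for answer, confs in fused.items():
        best = max(confs)
        if best < min_confidence:  # filter irrelevant answers
            continue
        ranked.append((answer, len(confs), best))
    # popularity first, then confidence
    ranked.sort(key=lambda t: (t[1], t[2]), reverse=True)
    return ranked
```

A real merging step would of course use ontology-aware equivalence rather than string matching; the structure of the computation is the point here.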
Abstract:
Because metadata that underlies semantic web applications is gathered from distributed and heterogeneous data sources, it is important to ensure its quality (i.e., reduce duplicates, spelling errors, ambiguities). However, current infrastructures that acquire and integrate semantic data have only marginally addressed the issue of metadata quality. In this paper we present our metadata acquisition infrastructure, ASDI, which pays special attention to ensuring that high quality metadata is derived. Central to the architecture of ASDI is a verification engine that relies on several semantic web tools to check the quality of the derived data. We tested our prototype in the context of building a semantic web portal for our lab, KMi. An experimental evaluation comparing the automatically extracted data against manual annotations indicates that the verification engine enhances the quality of the extracted semantic metadata.
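A toy illustration of one quality check of this kind, duplicate detection via string normalization (an illustrative sketch under assumed representations, not ASDI's actual verification engine):

```python
import re

def normalize(label):
    """Crude normalization used to spot duplicate metadata entries:
    lowercase, collapse whitespace, strip punctuation."""
    collapsed = re.sub(r"\s+", " ", label.strip().lower())
    return re.sub(r"[^a-z0-9 ]", "", collapsed)

def find_duplicates(labels):
    """Return (first_seen, duplicate) pairs whose normalized forms collide."""
    seen = {}
    dups = []
    for lab in labels:
        key = normalize(lab)
        if key in seen:
            dups.append((seen[key], lab))
        else:
            seen[key] = lab
    return dups
```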
Abstract:
This thesis presents detailed research work on diamond materials. Chapter 1 is an overall introduction to the thesis. In Chapter 2, the literature on the physical, chemical, optical, mechanical and other properties of diamond materials is reviewed and summarised. Following this chapter, several advanced diamond growth and characterisation techniques used in the experimental work are introduced. The successful installation and application of a chemical vapour deposition system is then demonstrated in Chapter 4. Diamond growth on a variety of substrates, such as silicon, diamond-like carbon and silica fibres, has been investigated. In Chapter 5, a single crystalline diamond substrate was used to perform femtosecond laser inscription. The results proved the potential feasibility of this technique, which could be used to fabricate future biochemistry microfluidic channels on diamond substrates. In Chapter 6, hydrogen-terminated nanodiamond powder was studied using impedance spectroscopy; its intrinsic electrical properties and thermal stability are presented and analysed in detail. As the first PhD student in the Nanoscience Research Group at Aston, my initial research work focused on the installation and testing of the microwave plasma enhanced chemical vapour deposition (MPECVD) system, which will benefit all future researchers in the group. The fundamentals of the MPECVD system are introduced in detail. After optimisation of the growth parameters, uniform diamond deposition was achieved with good surface coverage and uniformity. Furthermore, one of the most significant contributions of this work is the successful inscription of patterns on diamond substrates with a femtosecond laser system. Previous research on femtosecond laser inscription of diamond produced only simple lines or dots, and few characterisation techniques were used.
In my research work, the femtosecond laser has been successfully used to inscribe patterns on diamond substrates, and full characterisation, e.g. by SEM, Raman, XPS and AFM, has been carried out. After femtosecond laser inscription, the depth of the microfluidic channels on the diamond film was found to be 300~400 nm, with a graphitic layer thickness of 165~190 nm. Another important outcome of this work is the first characterisation of the electrical properties of hydrogen-terminated nanodiamond by impedance spectroscopy. Based on the experimental evaluation and mathematical fitting, the resistance of hydrogen-terminated nanodiamond was reduced to 0.25 MΩ, four orders of magnitude lower than that of untreated nanodiamond. Meanwhile, a theoretical equivalent circuit has been proposed to fit the results. Furthermore, the hydrogen-terminated nanodiamond samples were annealed at different temperatures to study their thermal stability. The XPS and FTIR results indicate that hydrogen-terminated nanodiamond starts to oxidise above 100ºC and that the C-H bonds can survive up to 400ºC. This research work reports the fundamental electrical properties of hydrogen-terminated nanodiamond, which can be used in future physical or chemical applications.
Abstract:
This paper investigates whether AspectJ can be used for efficient profiling of Java programs. Profiling differs from other applications of AOP (e.g. tracing), since it necessitates efficient and often complex interactions with the target program. As such, it was uncertain whether AspectJ could achieve this goal. Therefore, we investigate four common profiling problems (heap usage, object lifetime, wasted time and time-spent) and report on how well AspectJ handles them. For each, we provide an efficient implementation, discuss any trade-offs or limitations and present the results of an experimental evaluation into the costs of using it. Our conclusions are mixed. On the one hand, we find that AspectJ is sufficiently expressive to describe the four profiling problems and reasonably efficient in most cases. On the other hand, we find several limitations with the current AspectJ implementation that severely hamper its suitability for profiling. Copyright © 2006 John Wiley & Sons, Ltd.
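For concreteness, the time-spent profiling problem can be illustrated outside AspectJ with a simple Python decorator playing the role of an around() advice; this is only an analogy to the paper's AspectJ implementations, with hypothetical names throughout.

```python
import functools
import time
from collections import defaultdict

# aggregated per-function statistics, analogous to what a profiling
# aspect would collect at its join points
TIMINGS = defaultdict(float)
CALLS = defaultdict(int)

def time_spent(fn):
    """Advice-like wrapper: measure wall-clock time spent in fn,
    aggregated under the function's name."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TIMINGS[fn.__name__] += time.perf_counter() - start
            CALLS[fn.__name__] += 1
    return wrapper

@time_spent
def work(n):
    return sum(i * i for i in range(n))
```

An AspectJ pointcut achieves the same interception declaratively, without editing the target program; the efficiency question studied in the paper is precisely the cost of that interception.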
Abstract:
This thesis presents a detailed numerical analysis, fabrication method and experimental investigation of 45º tilted fiber gratings (45º-TFGs) and excessively tilted fiber gratings (Ex-TFGs), and their applications in fiber laser and sensing systems. One of the most significant contributions of the work reported in this thesis is that 45º-TFGs with high polarization extinction ratio (PER) have been fabricated in single mode telecom and polarization maintaining (PM) fibers, with spectral responses covering three prominent optical communication wavelength ranges centred at 1060 nm, 1310 nm and 1550 nm. The highest PERs achieved for the 45º-TFGs reach 35-50 dB, matching and even exceeding many commercial in-fiber polarizers. It has been proposed that 45º-TFGs of high PER can be used as ideal in-fiber polarizers in a wide range of fiber systems and applications. In addition, in-depth theoretical models and analysis have been developed, and systematic experimental evaluation has been conducted, producing results in excellent agreement with the theoretical modeling. Another important outcome of the research work is the proposal and demonstration of all fiber Lyot filters (AFLFs), implemented by utilizing two (for a single-stage type) or more (for multi-stage) 45º-TFGs in a PM fiber cavity structure. Detailed theoretical analysis and modelling of such AFLFs have also been carried out, giving design guidance for practical implementation. The unique functional advantages of 45º-TFG based AFLFs have been revealed, showing high-finesse multi-wavelength transmission of single polarization and a wide tuning range. The temperature tuning results show that the AFLFs have 60 times higher thermal sensitivity than normal FBGs, permitting a thermal tuning rate of ~8 nm/10ºC.
By using an intra-cavity AFLF, an all-fiber soliton mode-locked laser with almost total suppression of soliton sidebands, single polarization output and single/multi-wavelength switchable operation has been demonstrated. The final significant contribution is the theoretical analysis and experimental verification of the design, fabrication and sensing application of Ex-TFGs. The Ex-TFG sensitivity model for the surrounding medium refractive index (SRI) has been developed for the first time, and the factors that affect the thermal and SRI sensitivity in relation to the wavelength range, tilt angle and cladding size have been investigated. As a practical SRI sensor, an 81º-TFG UV-inscribed in a fiber with a small (40μm) cladding radius has shown an SRI sensitivity of up to 1180 nm/RIU near an index of 1.345. Finally, to ensure single polarization detection in such an SRI sensor, a hybrid configuration, formed by UV-inscribing a 45º-TFG and an 81º-TFG close together on the same piece of fiber, has been demonstrated as a more advanced SRI sensing system.
Abstract:
In this paper, we investigate the use of manifold learning techniques to enhance the separation properties of standard graph kernels. The idea stems from the observation that when we perform multidimensional scaling on the distance matrices extracted from the kernels, the resulting data tends to be clustered along a curve that wraps around the embedding space, a behavior that suggests that long range distances are not estimated accurately, resulting in an increased curvature of the embedding space. Hence, we propose to use a number of manifold learning techniques to compute a low-dimensional embedding of the graphs in an attempt to unfold the embedding manifold, and increase the class separation. We perform an extensive experimental evaluation on a number of standard graph datasets using the shortest-path (Borgwardt and Kriegel, 2005), graphlet (Shervashidze et al., 2009), random walk (Kashima et al., 2003) and Weisfeiler-Lehman (Shervashidze et al., 2011) kernels. We observe the most significant improvement in the case of the graphlet kernel, which fits with the observation that neglecting the locational information of the substructures leads to a stronger curvature of the embedding manifold. On the other hand, the Weisfeiler-Lehman kernel partially mitigates the locality problem by using the node labels information, and thus does not clearly benefit from the manifold learning. Interestingly, our experiments also show that the unfolding of the space seems to reduce the performance gap between the examined kernels.
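The distance matrices mentioned above are derived from kernel values in the standard way, d(i, j) = sqrt(k(i,i) + k(j,j) - 2k(i,j)); a minimal, generic sketch of that conversion (not tied to any particular graph kernel):

```python
import math

def kernel_to_distances(K):
    """Convert a positive semidefinite kernel matrix K (list of lists)
    into the implied Euclidean distance matrix:
    d(i, j) = sqrt(k(i,i) + k(j,j) - 2*k(i,j)).
    The max(..., 0.0) guards against tiny negative values from rounding."""
    n = len(K)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            D[i][j] = math.sqrt(max(K[i][i] + K[j][j] - 2.0 * K[i][j], 0.0))
    return D
```

Multidimensional scaling or Isomap is then applied to such a distance matrix; the paper's observation is that the long-range entries of D are the poorly estimated ones.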
Abstract:
Kernel methods provide a convenient way to apply a wide range of learning techniques to complex and structured data by shifting the representational problem from one of finding an embedding of the data to that of defining a positive semidefinite kernel. One problem with the most widely used kernels is that they neglect the locational information within the structures, resulting in less discrimination. Correspondence-based kernels, on the other hand, are in general more discriminating, at the cost of sacrificing positive-definiteness due to their inability to guarantee transitivity of the correspondences between multiple graphs. In this paper we generalize a recent structural kernel based on the Jensen-Shannon divergence between quantum walks over the structures by introducing a novel alignment step which rather than permuting the nodes of the structures, aligns the quantum states of their walks. This results in a novel kernel that maintains localization within the structures, but still guarantees positive definiteness. Experimental evaluation validates the effectiveness of the kernel for several structural classification tasks. © 2014 Springer-Verlag Berlin Heidelberg.
Abstract:
Kernel methods provide a way to apply a wide range of learning techniques to complex and structured data by shifting the representational problem from one of finding an embedding of the data to that of defining a positive semidefinite kernel. In this paper, we propose a novel kernel on unattributed graphs where the structure is characterized through the evolution of a continuous-time quantum walk. More precisely, given a pair of graphs, we create a derived structure whose degree of symmetry is maximum when the original graphs are isomorphic. With this new graph to hand, we compute the density operators of the quantum systems representing the evolutions of two suitably defined quantum walks. Finally, we define the kernel between the two original graphs as the quantum Jensen-Shannon divergence between these two density operators. The experimental evaluation shows the effectiveness of the proposed approach. © 2013 Springer-Verlag.
Abstract:
One of the most fundamental problems that we face in the graph domain is that of establishing the similarity, or alternatively the distance, between graphs. In this paper, we address the problem of measuring the similarity between attributed graphs. In particular, we propose a novel way to measure the similarity through the evolution of a continuous-time quantum walk. Given a pair of graphs, we create a derived structure whose degree of symmetry is maximum when the original graphs are isomorphic, and where a subset of the edges is labeled with the similarity between the respective nodes. With this compositional structure to hand, we compute the density operators of the quantum systems representing the evolution of two suitably defined quantum walks. We define the similarity between the two original graphs as the quantum Jensen-Shannon divergence between these two density operators, and then we show how to build a novel kernel on attributed graphs based on the proposed similarity measure. We perform an extensive experimental evaluation on both synthetic and real-world data, which shows the effectiveness of the proposed approach. © 2013 Springer-Verlag.
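As background, when the two density operators commute (for example, when they are diagonal in a common basis), the quantum Jensen-Shannon divergence reduces to the classical Jensen-Shannon divergence between their eigenvalue distributions. A minimal sketch of that special case (illustrative only, not the paper's full quantum-walk computation):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def jsd(p, q):
    """Jensen-Shannon divergence between two distributions p and q:
    JSD(p, q) = H((p + q) / 2) - (H(p) + H(q)) / 2.
    For commuting density operators, the quantum JSD reduces to this
    classical form applied to their eigenvalue spectra."""
    m = [(a + b) / 2.0 for a, b in zip(p, q)]
    return entropy(m) - (entropy(p) + entropy(q)) / 2.0
```

The divergence is 0 for identical distributions and 1 bit for distributions with disjoint support, which is what makes it usable as the basis of a (dis)similarity measure between graphs.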
Abstract:
The quantum Jensen-Shannon divergence kernel [1] was recently introduced in the context of unattributed graphs, where it was shown to outperform several commonly used alternatives. In this paper, we study the separability properties of this kernel and we propose a way to compute a low-dimensional kernel embedding where the separation of the different classes is enhanced. The idea stems from the observation that the multidimensional scaling embeddings of this kernel show a strong horseshoe-shaped distribution, a pattern which is known to arise when long-range distances are not estimated accurately. Here we propose to use Isomap to embed the graphs, using only local distance information, into a new vector space with higher class separability. The experimental evaluation shows the effectiveness of the proposed approach. © 2013 Springer-Verlag.
Abstract:
The convergence of data, audio and video on IP networks is changing the way individuals, groups and organizations communicate. This diversity of communication media presents opportunities for creating synergistic collaborative communications. This form of collaborative communication is, however, not without its challenges. The increasing number of communication service providers, coupled with a combinatorial mix of offered services, varying Quality-of-Service and oscillating pricing, increases the complexity for users of managing and maintaining ‘always best’ priced or best-performing services. Consumers have to manually manage and adapt their communication in line with differences in services across devices, networks and media, while ensuring that usage remains consistent with their intended goals. This dissertation proposes a novel user-centric approach to address this problem. The proposed approach aims to reduce this complexity for the user by (1) providing high-level abstractions and a policy-based methodology for automated selection of communication services guided by high-level user policies, and (2) seamlessly integrating multiple communication service providers and providing an extensible framework to support such integration. The approach was implemented in the Communication Virtual Machine (CVM), a model-driven technology for realizing communication applications. The CVM includes the Network Communication Broker (NCB), the layer responsible for providing a network-independent API to the upper layers of the CVM. The initial prototype of the NCB supported only a single communication framework, which limited the number, quality and types of services available. Experimental evaluation shows that the additional overhead of the approach is minimal compared to the individual communication service frameworks. Additionally, the proposed automated approach outperformed the individual communication service frameworks for cross-framework switching.
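The policy-guided selection in point (1) can be illustrated with a minimal sketch: choose the cheapest provider that meets the minimum quality required by the user's policy. The fields and policy shape below are hypothetical, not the actual CVM/NCB API.

```python
def select_service(providers, policy):
    """Pick the cheapest provider whose quality meets the policy's
    minimum; returns None when no provider qualifies.
    `providers` is a list of dicts with hypothetical 'name',
    'quality' and 'price' fields; `policy` carries 'min_quality'."""
    eligible = [p for p in providers if p["quality"] >= policy["min_quality"]]
    if not eligible:
        return None
    return min(eligible, key=lambda p: p["price"])
```

A real policy language would also weigh device capabilities and media type; the point of the sketch is that the user states a goal and the selection is automated.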
Abstract:
Many systems and applications continuously produce events. These events record the status of the system and trace its behavior. By examining them, system administrators can check for potential problems. If the temporal dynamics of the systems are further investigated, underlying patterns can be discovered, and the uncovered knowledge can be leveraged to predict future system behavior or to mitigate potential risks. Moreover, system administrators can use the temporal patterns to set up event management rules that make the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite recent advances in data mining techniques, their application to system event mining is still at a rudimentary stage. Most existing work still focuses on episode mining or frequent pattern discovery. These methods are unable to provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective. Moreover, they provide little actionable knowledge to help system administrators better manage their systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered helpful for system management: (1) provide concise yet comprehensive summaries of the running status of the systems; (2) make the systems more intelligent and autonomous; (3) effectively detect abnormal system behavior. Thanks to the richness of the event logs, all these directions can be addressed in a data-driven manner; in this way, the robustness of the systems can be enhanced and the goal of autonomous management approached.
This dissertation mainly focuses on the foregoing directions, leveraging temporal mining techniques to facilitate system management. More specifically, three concrete topics are discussed: event summarization, resource demand prediction, and streaming anomaly detection. Besides the theoretical contributions, experimental evaluation is also presented to demonstrate the effectiveness and efficiency of the corresponding solutions.
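As one illustration of the streaming anomaly detection topic, a sliding-window z-score detector flags events that deviate sharply from recent history; this is a generic sketch, not the dissertation's method.

```python
import math
from collections import deque

def streaming_zscore_anomalies(stream, window=20, threshold=3.0):
    """Flag indices of events whose value deviates from the mean of the
    preceding `window` values by more than `threshold` standard
    deviations. Processes the stream in a single pass."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(stream):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            var = sum((v - mean) ** 2 for v in recent) / len(recent)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append(i)
        recent.append(x)
    return anomalies
```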
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of the malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field sensitivity, array sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity in Linux kernel 2.4.32 and the Windows Research Kernel (WRK), with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use the Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks). 
In our system, the Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of them during execution.
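The default-deny validation that KQguard performs at runtime can be sketched as a whitelist membership test; the request representation below is hypothetical, not the thesis's actual KQ specification format.

```python
# (callback, module) pairs learned offline from static/dynamic
# analysis -- hypothetical example values
LEGITIMATE_KQ = {
    ("timer_tick", "kernel"),
    ("net_rx", "e1000"),
}

def validate_kq_request(callback, module, whitelist=LEGITIMATE_KQ):
    """Default-deny check in the spirit of KQguard: a request is
    accepted only if it matches a learned specification; anything
    unknown is rejected."""
    return (callback, module) in whitelist
```

The security property comes from the default-deny stance: a rootkit's injected callback is rejected simply because it was never observed during legitimate-behavior learning.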
Abstract:
Contamination of soil, sediment and groundwater by hydrophobic organic compounds (HOCs) is a matter of growing concern, because groundwater is a valuable and limited resource and because such contamination is difficult to address. This investigation involved an experimental evaluation of the addition of several surfactant solutions to aqueous and soil-water systems contaminated with phenanthrene, a selected HOC. The results are presented in terms of:
* phenanthrene solubilization achieved through surfactant addition
* observed effects of surfactant addition on the mineralization of phenanthrene
* estimation of the relative toxicities of various surfactants using toxicity assays
* literature-reported biodegradability/persistence of selected surfactants
* surfactant sorption/precipitation onto soil and its impacts on the proposed use of surfactant-amended remediation
Surfactants were observed to facilitate the transfer of phenanthrene from the soil-sorbed phase to the aqueous pseudophase; however, surfactant solubilization did not translate into enhanced phenanthrene biodegradation.