47 results for noisy speaker verification


Relevance:

20.00%

Publisher:

Abstract:

Motion analysis of a parallel robot assisted minimally invasive surgery/microsurgery system (PRAMiSS), together with the control structures that enable it to perform milli/micromanipulation under the constraint of moving through a fixed penetration point, the so-called remote centre of motion (RCM), is presented in this article. Two control algorithms are proposed, one suitable for minimally invasive surgery (MIS) with submillimeter accuracy and one for minimally invasive microsurgery (MIMS) with submicrometer accuracy. The RCM constraint is enforced in software, without any mechanical constraint. The control algorithms also apply an orientation constraint that prevents the tip from reorienting relative to the soft tissue as the robot moves. Experiments were conducted to verify the accuracy and effectiveness of the proposed control algorithms for MIS and MIMS operations, and the experimental results demonstrate the accuracy and performance of the proposed position control algorithms.
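A minimal sketch of the geometric idea behind a software-enforced RCM, under simplifying assumptions (this is not the PRAMiSS controller itself): for each commanded tip position, the instrument shaft direction is recomputed so that the tool axis always passes through the fixed penetration point. The function names, step size and coordinates are illustrative.

```python
import numpy as np

def rcm_orientation(tip_target, rcm_point):
    """Unit vector along the instrument shaft so that the tool axis
    passes through the fixed remote centre of motion (RCM)."""
    axis = tip_target - rcm_point
    return axis / np.linalg.norm(axis)

def plan_step(tip_current, tip_target, rcm_point, step=1e-4):
    """One incremental Cartesian step towards the target tip position.

    The commanded pose couples position and orientation: the tip moves
    towards the target while the shaft direction is recomputed at every
    step so it keeps intersecting the RCM point (a software constraint,
    with no mechanical pivot required).
    """
    direction = tip_target - tip_current
    dist = np.linalg.norm(direction)
    if dist < step:
        next_tip = tip_target
    else:
        next_tip = tip_current + step * direction / dist
    return next_tip, rcm_orientation(next_tip, rcm_point)

# Example: tool inserted through a fixed penetration point at the origin.
rcm = np.array([0.0, 0.0, 0.0])
tip = np.array([0.0, 0.0, -0.05])          # tip 50 mm below the port
target = np.array([0.002, 0.001, -0.05])   # 2 mm / 1 mm lateral move
tip, shaft_dir = plan_step(tip, target, rcm)
print(tip, shaft_dir)
```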

Relevance:

20.00%

Publisher:

Abstract:

Thanks to the powerful internet, the voices that speak about architecture increase every day. This overpopulated architectural Speaker's Corner is so noisy that it is difficult even to hear what each speaker is saying. The possibility of saying something has turned out to be more important than the relevance of what is actually said; it is no longer important who says each thing. Our hope is that among all this shouting and screaming we will be able to extract something intelligible, something able to construct a discourse for architecture. Is this the new format of architectural criticism? Transparency is the fundamental value that rules over the transmission of information in the new digital universe, a transparency that intends to minimise all negativity. By definition it is positive, operational, flat and homogeneous. It offers undoubted advantages in terms of speed, accessibility and amount of information. However, theoretical knowledge must, also by definition, include negativity, and negativity slows, blocks and limits. The goal of theoretical criticism is to segregate, separate and differentiate. It searches for a certain truth that is neither transparent nor positive, and it must also operate on, and define, what is false. Simple accumulation of information and communication neither seeks nor achieves a true conclusion. This paper outlines a taxonomy of the different voices speaking about architecture on the internet. Even though these architectural nano-discourses declare explicitly that they do not want to replace traditional criticism, the role they play, particularly in architecture education, is very similar. And their unequivocal pursuit of transparency, based on freedom of speech and information, pulls them away from the capacities of traditional criticism, in terms of format of course, but also for reasons of much deeper importance.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we address the new problem of noisy images that arise during relevance feedback for medical image retrieval. We concentrate on noisy images caused by users mislabeling irrelevant images as relevant ones, and propose a noisy-smoothing relevance feedback (NS-RF) method. In NS-RF, a two-step strategy handles the noisy images. In step 1, a noisy-image elimination algorithm is adopted to identify and eliminate the noisy images. In step 2, to further alleviate the influence of noisy images, a fuzzy membership function is employed to estimate the relevance probabilities of the retained relevant images. After this noise handling, a fuzzy support vector machine, which can weight relevant images by their different relevance probabilities, is adopted to re-rank the images. Experimental results on the IRMA medical image collection demonstrate that the proposed method deals with noisy images effectively.
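A minimal sketch of the second step under simplifying assumptions (not the NS-RF algorithm itself): relevance probabilities are estimated from each relevant image's distance to the centroid of the relevant feedback set, and the fuzzy SVM is approximated by an ordinary SVM with per-sample weights. Feature extraction and the noisy-image elimination step are assumed to have happened already; all names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(relevant_feats):
    """Assign each user-labelled 'relevant' image a membership in [0.1, 1]
    based on its distance to the centroid of the relevant set: images far
    from the centroid (more likely mislabelled/noisy) get lower weight."""
    centroid = relevant_feats.mean(axis=0)
    d = np.linalg.norm(relevant_feats - centroid, axis=1)
    return 1.0 - 0.9 * d / (d.max() + 1e-12)

def rerank(relevant_feats, irrelevant_feats, candidates):
    """Train a weighted SVM on the feedback images and re-rank candidate
    images by their signed distance to the decision boundary."""
    X = np.vstack([relevant_feats, irrelevant_feats])
    y = np.r_[np.ones(len(relevant_feats)), np.zeros(len(irrelevant_feats))]
    w = np.r_[fuzzy_memberships(relevant_feats), np.ones(len(irrelevant_feats))]
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y, sample_weight=w)
    scores = clf.decision_function(candidates)
    return np.argsort(-scores)  # candidate indices, most relevant first
```

The per-sample weights down-weight feedback images that look atypical of the relevant set, so a few remaining mislabelled images pull the decision boundary around far less than they would in an unweighted SVM.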

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

Luke's work addresses the issue of robustly attenuating multi-source noise in surface EEG signals using a novel Adaptive-Multiple-Reference Least-Mean-Squares (AMR-LMS) filter. In practice, the filter successfully removes the electrical interference and movement-generated muscle noise that contaminate the EEG, allowing subjects to maintain maximum mobility throughout signal acquisition and during the use of a brain-computer interface.
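A minimal sketch of multiple-reference adaptive noise cancellation, under simplifying assumptions (this is the textbook LMS update, not the AMR-LMS filter described above): each reference channel drives its own adaptive FIR filter, the combined filter output estimates the noise in the primary EEG channel, and the prediction error is taken as the cleaned signal. The filter order and step size `mu` are illustrative.

```python
import numpy as np

def multi_reference_lms(primary, references, mu=0.01, order=8):
    """Adaptive noise cancellation with several reference channels.

    primary    : 1-D array, EEG channel contaminated with noise
    references : 2-D array (n_refs, n_samples) of noise reference signals
                 (e.g. electrodes capturing mains or muscle interference)
    Returns the cleaned signal (primary minus the adaptively estimated noise).
    """
    n_refs, n = references.shape
    w = np.zeros((n_refs, order))              # one FIR filter per reference
    cleaned = np.zeros(n)
    for t in range(order, n):
        # Most recent 'order' samples of every reference, newest first
        x = references[:, t - order:t][:, ::-1]
        noise_est = np.sum(w * x)              # combined noise estimate
        e = primary[t] - noise_est             # error = cleaned sample
        cleaned[t] = e
        w += 2 * mu * e * x                    # LMS weight update
    return cleaned
```

As with any LMS scheme, the step size must stay small relative to the reference signal power for the adaptation to remain stable; normalised-LMS variants are commonly used to remove that sensitivity.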

Relevance:

20.00%

Publisher:

Abstract:

Invasive species can disrupt the communication systems that native biota use for reproductive interactions. In tropical Australia, invasive cane toads (Rhinella marina) breed in many of the same waterbodies that are used by native frogs, and males of both the invader and the native taxa rely on vocal signals to attract mates. We conducted playback experiments to test the hypothesis that calls of toads may influence the calling behaviour of frogs (Limnodynastes convexiusculus and Litoria rothii). Male L. convexiusculus adjusted their calling rate and the variance in inter-call interval in response to a variety of sounds, including the calls of cane toads as well as those of other native frog species, and other anthropogenic noise, whereas L. rothii did not. Within the stimulus periods of playbacks, male L. convexiusculus called more intensely during long silent gaps than during calling blocks. Thus, males of one frog species reduced their calling rate, possibly to minimise energy expenditure during periods of acoustic interference generated by cane toads. In spite of such modifications, the number of overlapping calls (within stimulus periods) did not differ significantly from that expected by chance. In natural conditions, the calls of cane toads are continuous rather than episodic, leaving fewer gaps of silence that male frogs could exploit. Future work could usefully quantify the magnitude of temporal (e.g. diel and seasonal) and spatial overlap between calling by toads and by frogs and the impact of call-structure shifts on the ability of male frogs to attract receptive females.

Relevance:

20.00%

Publisher:

Abstract:

Database query verification schemes provide correctness guarantees for database queries. Such guarantees are typically required, and advisable, when queries are executed on untrusted servers. The need to verify query results, even when the queries run against one's own database, is something new that has arisen with the advent of cloud services. The traditional model of hosting one's own databases on one's own servers did not require such verification because the hardware and software were both entirely within one's control, and therefore fully trusted. However, with the economic and technological benefits of cloud services beckoning, many are now considering outsourcing both data and the execution of database queries to the cloud, despite the obvious risks. This survey paper provides an overview of the field of database query verification and explores the current state of the art in terms of query execution and the correctness guarantees provided for query results. We also indicate directions for future work in the area.

Relevance:

20.00%

Publisher:

Abstract:

Database query verification schemes attempt to provide authenticity, completeness, and freshness guarantees for queries executed on untrusted cloud servers. A number of such schemes exist in the literature, allowing verification of queries based on matching whole values (such as numbers, dates, etc.) or on keyword matching. However, there is a notable gap in the research with regard to verification schemes for pattern-matching queries. Our contribution is a verification scheme that provides correctness guarantees for pattern-matching queries executed on the cloud. We first describe a trivial scheme and show how it fails to provide completeness guarantees, and then present our scheme, which is based on efficient primitives such as cryptographic hashing and Merkle hash trees combined with suffix arrays. We also provide experimental results from a working prototype to show the practicality of our scheme.
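A minimal sketch of the underlying idea under simplifying assumptions (not the scheme from the paper): the data owner builds a Merkle hash tree over the suffix array of the outsourced text and publishes the root; the server answers a pattern query with the matching suffix-array entries plus their authentication paths; the client checks each path against the root. A real scheme would also return the matched text and the boundary entries needed to prove completeness; here the client is assumed to be able to recompute the leaves, purely to keep the example short.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle(leaves):
    """Return all tree levels: level 0 = leaf hashes, last level = [root]."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                      # duplicate last node if odd
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes from a leaf up to (but excluding) the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(leaf, index, path, root):
    node = leaf
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

# Owner side: authenticate the suffix array of the outsourced text.
text = b"banana$"
sa = sorted(range(len(text)), key=lambda i: text[i:])      # suffix array
leaves = [h(i.to_bytes(4, "big") + text[i:]) for i in sa]  # leaf = pos + suffix
levels = build_merkle(leaves)
root = levels[-1][0]                                        # published digest

# Server side: answer "suffixes starting with 'an'" and prove each match.
matches = [k for k, i in enumerate(sa) if text[i:].startswith(b"an")]
proofs = [(sa[k], auth_path(levels, k)) for k in matches]

# Client side: recompute each claimed leaf and check it against the root.
for (pos, path), k in zip(proofs, matches):
    assert verify(h(pos.to_bytes(4, "big") + text[pos:]), k, path, root)
```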

Relevance:

20.00%

Publisher:

Abstract:

Cyber-physical-social systems (CPSS) allow individuals to share personal information collected not only from cyberspace but also from physical space. This results in large amounts of data accumulating in a user's local storage. However, it is very expensive for users to store large data sets, and local storage also complicates data management. It is therefore attractive to outsource the data to cloud servers, which gives users an easy, cost-effective, and flexible way to manage data. Once their data are outsourced, however, users lose control over them, which poses challenges to the integrity of the outsourced data. Many schemes have been proposed that allow a third-party auditor to verify data integrity using the public keys of users. Most of these schemes rest on a strong assumption, namely that the auditors are honest and reliable, and are therefore vulnerable when auditors are malicious. Moreover, in most of these schemes an auditor needs to manage users' certificates in order to choose the correct public keys for verification. In this paper, we propose a secure certificateless public integrity verification scheme (SCLPV). The SCLPV is the first work that simultaneously supports certificateless public verification and resistance against malicious auditors for verifying the integrity of outsourced data in CPSS. A formal security proof establishes the correctness and security of our scheme, and an elaborate performance analysis demonstrates that the SCLPV is efficient and practical. Compared with the only existing certificateless public verification scheme (CLPV), the SCLPV provides stronger security guarantees by remedying the security vulnerability of the CLPV and resisting malicious auditors. In comparison with the best existing integrity verification scheme that achieves resistance against malicious auditors, the communication cost between the auditor and the cloud server in the SCLPV is independent of the size of the processed data; moreover, the auditor in the SCLPV does not need to manage certificates.

Relevance:

20.00%

Publisher:

Abstract:

On-time completion is an important temporal QoS (Quality of Service) dimension and one of the fundamental requirements for high-confidence workflow systems. In recent years, the workflow temporal verification framework, which generally consists of temporal constraint setting, temporal checkpoint selection, temporal verification, and temporal violation handling, has been the major approach to high temporal QoS assurance in workflow systems. Within this framework, effective temporal checkpoint selection, which aims to detect intermediate temporal violations along workflow execution in a timely manner, plays a critical role; it has therefore been a major topic and has attracted significant effort. In this paper, we present an overview of workflow temporal checkpoint selection for temporal verification. Specifically, we first introduce the throughput-based and response-time-based temporal consistency models for business and scientific cloud workflow systems, respectively. We then present the corresponding benchmark checkpoint selection strategies that satisfy the property of "necessity and sufficiency". We also provide experimental results to demonstrate the effectiveness of our checkpoint selection strategies, and finally point out some possible future issues in this research area.
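A minimal sketch of the flavour of response-time-based checkpoint selection, under simplifying assumptions (this is not the benchmark strategy referred to above): after each completed activity, the time already consumed plus the mean time still required is compared with the overall deadline, and a checkpoint is selected exactly when that runtime consistency condition fails, so checkpoints are taken if and only if an intermediate violation is detected. The activity durations and deadline are illustrative.

```python
def select_checkpoints(activities, deadline):
    """Response-time-based checkpoint selection for a sequential workflow.

    activities : list of (mean_duration, actual_duration) pairs in
                 execution order (all times in the same unit)
    deadline   : overall temporal constraint for the workflow
    """
    total_mean = sum(m for m, _ in activities)
    elapsed, remaining_mean = 0.0, total_mean
    checkpoints = []
    for i, (mean, actual) in enumerate(activities):
        elapsed += actual
        remaining_mean -= mean
        if elapsed + remaining_mean > deadline:   # runtime consistency violated
            checkpoints.append(i)                 # trigger temporal verification / handling here
    return checkpoints

# Example: activity 1 overruns enough to threaten the 10-unit deadline,
# later activities run fast enough for the workflow to recover.
acts = [(2.0, 2.1), (3.0, 5.0), (2.0, 1.5), (2.0, 1.2)]
print(select_checkpoints(acts, deadline=10.0))    # -> [1, 2]
```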

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing has established itself as the latest computing paradigm in recent years. As doing science in the cloud becomes a reality, scientists are now able to access public cloud centers and employ high-performance computing resources to run scientific applications. However, due to the dynamic nature of the cloud environment, the usability of scientific cloud workflow systems can deteriorate significantly without effective service quality assurance strategies. Specifically, workflow temporal verification, as the major approach to workflow temporal QoS (Quality of Service) assurance, plays a critical role in the on-time completion of large-scale scientific workflows. Great effort has been dedicated to workflow temporal verification in recent years, and it is high time to define the key research issues for scientific cloud workflows in order to keep the research on the right track. In this paper, we systematically investigate this problem and present four key research issues based on the introduction of a generic temporal verification framework. State-of-the-art solutions for each research issue and open challenges are also presented. Finally, SwinDeW-V, an ongoing research project on temporal verification within our SwinDeW-C cloud workflow system, is demonstrated.

Relevance:

20.00%

Publisher:

Abstract:

Workflow temporal verification is conducted to guarantee on-time completion, which is one of the most important QoS (Quality of Service) dimensions for business processes running in the cloud. However, as today's business systems often need to handle a large number of concurrent customer requests, conventional response-time-based process monitoring strategies, conducted in a one-by-one fashion, cannot be applied efficiently to a large batch of parallel processes because of the significant time overhead. Similar situations also exist in software companies where multiple software projects are carried out at the same time by software developers. To address this problem, this paper proposes a QoS-aware throughput-based checkpoint selection strategy, built on a novel runtime throughput consistency model, which can dynamically select a small number of checkpoints along the system timeline to facilitate the temporal verification of throughput constraints and achieve the target on-time completion rate. Experimental results demonstrate that our strategy achieves the best efficiency and effectiveness compared with the state-of-the-art and other representative response-time-based checkpoint selection strategies.
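A minimal sketch of a throughput-based checkpoint rule for a batch of parallel instances, under simplifying assumptions (not the strategy proposed in the paper): at each observation time, the cumulative number of finished instances is compared with the throughput needed, accrued linearly, to reach the target on-time completion rate by the deadline, and a checkpoint is selected only when the observed throughput falls short. The progress figures, deadline and target rate are illustrative.

```python
def throughput_checkpoints(completed_by_time, total_instances, deadline,
                           target_rate=0.9):
    """Throughput-based checkpoint selection over a batch of parallel processes.

    completed_by_time : dict {time_point: cumulative finished instances}
    total_instances   : size of the batch
    deadline          : time by which target_rate of the batch must finish
    target_rate       : required on-time completion rate (0.9 = 90%)
    """
    required_total = target_rate * total_instances
    checkpoints = []
    for t in sorted(completed_by_time):
        required_so_far = required_total * min(t / deadline, 1.0)
        if completed_by_time[t] < required_so_far:   # throughput consistency violated
            checkpoints.append(t)                    # verify / handle the violation here
    return checkpoints

# Example: 100 instances, 90% must finish within 50 time units; throughput
# dips below the required pace at t=20 and t=30, then catches up.
progress = {10: 20, 20: 34, 30: 50, 40: 74}
print(throughput_checkpoints(progress, total_instances=100, deadline=50))  # -> [20, 30]
```

Because the rule inspects batch-level progress at a handful of time points rather than monitoring every instance individually, its overhead stays roughly constant as the number of concurrent processes grows.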