192 results for speaker dependencies
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved by the 'best combination performance' rule. As the search complexity for this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as multiple-fingerprint and multiple-handwriting-sample identity verification systems.
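As an illustration of the n-instance, m-sample sequential fusion architecture this abstract describes: a minimal Python sketch. The paper's SER formula is not reproduced in the abstract, so `ser` below is a hypothetical per-stage error estimate standing in for it, used only to order the stages.

```python
# Minimal sketch of n-instance / m-sample sequential decision fusion.
# `ser` is a hypothetical stand-in for the paper's sequential error
# ratio: a per-stage error estimate used only to consult the stage
# most likely to decide correctly first.
from typing import Callable, List

def sequential_fusion(stages: List[Callable[[float], bool]],
                      ser: List[float],
                      samples: List[float],
                      m: int) -> bool:
    order = sorted(range(len(stages)), key=lambda i: ser[i])
    for i in order:
        # within a stage, accept if any of m attempts accepts (fewer false rejects)
        if not any(stages[i](s) for s in samples[:m]):
            return False  # AND across stages (fewer false alarms)
    return True

# toy demo: two score-threshold "classifiers", two attempts allowed each
print(sequential_fusion([lambda s: s > 0.4, lambda s: s > 0.6],
                        ser=[0.1, 0.2], samples=[0.5, 0.7, 0.3], m=2))
```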
Abstract:
Purpose: The challenge of providing housing that sustains its inhabitants socially, economically and environmentally, and is inherently sustainable for the planet as a whole, requires a holistic systems approach that considers the product, the supply chain and the market, as well as the inter-dependencies within and between each of these process points. The purpose of the research is to identify factors that impact the sustainability performance outcomes of residential dwellings and the diffusion of sustainable housing into the mainstream housing market. Design/methodology/approach: This research represents a snapshot in time: a recording of the experiences of seven Australian families who are “early adopters” of leading-edge sustainable homes within a specific sustainable urban development in subtropical Queensland. The research adopts a qualitative approach to compare the goals and expectations of these families with the actual sustainability aspects incorporated into their homes and lifestyles. Findings: The results show that the “product”, a sustainable house, is difficult to define; that sustainability outcomes were strongly influenced by individual concerns and the contextual urban environment; and that economic comparisons with “standard” housing are challenging. Research limitations/implications: This qualitative study is based on seven families (13 individuals) in an Ecovillage in southeast Queensland. Although the findings make a significant contribution to knowledge, they may not be generalisable to the wider population. Originality/value: The experiences of these early adopter families suggest that the housing market and regulators play critical roles, through actions and language, in limiting or enhancing the diffusion of sustainable housing into the market.
Abstract:
In 2004, Prahalad made managers aware of the great economic opportunity that the population at the BoP (Base of the Pyramid) could represent for business in the form of new potential consumers. However, MNCs (Multi-National Corporations) have continued to fail in penetrating low-income markets, arguably because the strategies applied are often the same as those adopted at the top of the pyramid. Even in those few cases where products are re-envisioned, their introduction in contexts of extreme poverty only induces new needs and develops new dependencies. At best, the rearrangement of business models by MNCs has meant the realization of CSR (Corporate Social Responsibility) schemes that have validity from a marketing perspective, but still lack the crucial element of social embeddedness (London & Hart, 2004). Today the challenge is to reach the lowest population tier with reinvented business models based on principles of value co-creation. Starting from a view of the potential consumer at the BoP as a ring of continuity in the value chain process, a resource that can itself produce value, this paper concludes by proposing an alternative innovative approach to operating in developing markets that overturns the roles of MNCs and the BoP. The proposed perspective of a 'reversed' source of innovation and primary target market builds on two fundamental tenets: traditional knowledge is rich and greatly unexploited, and markets at the top of the pyramid are saturated with unnecessary products and practices that have lost contact with the natural environment.
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict positive future markets for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres like the Cloud. One of the challenges is providing management of Cloud resources for SaaS that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS; the first GGA uses a repair-based method while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances in coping with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud, and has developed several types of evolutionary algorithms to address them, contributing to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
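To make the dependency-aware placement formulation concrete, here is a minimal genetic-algorithm sketch. The server capacities, component dependencies, penalty weights and operators are illustrative assumptions, not the thesis's actual fitness function or co-evolutionary design.

```python
# Minimal GA sketch for composite SaaS placement: assign each component
# to a server, penalising separated dependent components and overloaded
# servers. All constants are assumed for illustration.
import random

N_COMPONENTS, N_SERVERS = 8, 3
CAPACITY = [4, 3, 3]                        # hypothetical server capacities
DEPENDS = [(0, 1), (1, 2), (3, 4), (5, 6)]  # hypothetical component dependencies

def fitness(plan):
    # penalise dependent components placed on different servers
    cost = sum(2 for a, b in DEPENDS if plan[a] != plan[b])
    # penalise servers holding more components than their capacity
    for s in range(N_SERVERS):
        cost += max(0, plan.count(s) - CAPACITY[s]) * 5
    return cost  # lower is better

def evolve(pop_size=40, generations=200, mut_rate=0.1):
    pop = [[random.randrange(N_SERVERS) for _ in range(N_COMPONENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]       # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_COMPONENTS)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mut_rate:             # mutation: move one component
                child[random.randrange(N_COMPONENTS)] = random.randrange(N_SERVERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```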
Abstract:
Defence projects are typically undertaken within a multi-project management environment where a common agenda of project managers is to achieve higher project efficiency. This study adopted a multi-faceted qualitative approach to investigate factors contributing to or impeding project efficiency in the Defence sector. Semi-structured interviews were undertaken to identify factors beyond those compiled from the literature survey. This was followed by a three-round Delphi study to examine the perceived critical factors of project efficiency. The results showed that project efficiency in the Defence sector goes beyond its traditional internally focused scope to one that is externally focused. As a result, effort is needed not only on factors related to individual projects but also on those related to project inter-dependencies and external customers. The management of these factors will help to enhance the efficiency of a project within the Defence sector.
Abstract:
Language and Mobility is the latest monograph by Alastair Pennycook. It is part of the series, Critical Language and Literacy Studies. Co-edited by Pennycook, along with Brian Morgan and Ryuko Kubota, the series looks at relations of power in diverse worlds of language and literacy. As the title indicates, Pennycook’s own volume explores the idea of language turning up in ‘unexpected’ places, for example, Cornish in Moonta, South Australia, a century or two after it supposedly died with its last speaker. Why is it, Pennycook asks, that we expect to find a (particular form of a) language in a particular place? This question is generated by a critical project that seeks to leverage the educational potential of everyday moments of language use...
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an indispensable management activity in datacenters, used for application performance, load balancing, and server consolidation. While state-of-the-art live VM migration strategies focus on improving the migration performance of a single VM, little attention has been given to the migration of multiple VMs. Moreover, existing work on live VM migration ignores inter-VM dependencies as well as the underlying network topology and its bandwidth. Different migration sequences and different bandwidth allocations result in different total migration times and total migration downtimes. This paper concentrates on developing a scheduling algorithm for the migration of multiple VMs such that migration performance is maximized. We evaluate our proposed algorithm through simulation. The simulation results show that our proposed algorithm can migrate multiple VMs on any datacenter with minimum total migration time and total migration downtime.
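The paper's scheduling algorithm is not detailed in this abstract. The hypothetical sketch below only illustrates the kind of dependency- and time-aware ordering involved: each VM's dependencies migrate first, and ties are broken by shortest estimated transfer time. VM sizes and link bandwidth are made-up values.

```python
# Hypothetical multiple-VM migration ordering: topological order over
# inter-VM dependencies, shortest estimated migration time first among
# the ready VMs. Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

VMS = {"web": 2.0, "app": 4.0, "db": 8.0}               # memory footprint in GB (assumed)
DEPENDS = {"web": {"app"}, "app": {"db"}, "db": set()}  # web needs app migrated first, etc.
BANDWIDTH = 1.0                                         # GB/s on the migration link (assumed)

def migration_plan():
    ts = TopologicalSorter(DEPENDS)
    ts.prepare()
    order = []
    while ts.is_active():
        # among dependency-free VMs, migrate the smallest (fastest) first
        for vm in sorted(ts.get_ready(), key=lambda v: VMS[v]):
            order.append((vm, VMS[vm] / BANDWIDTH))
            ts.done(vm)
    return order

for vm, t in migration_plan():
    print(f"migrate {vm}: ~{t:.1f}s")
```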
Abstract:
A long query provides more useful hints for searching relevant documents, but it is also likely to introduce noise that hurts retrieval performance. To mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution of query terms by utilizing intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is automatic, achieves salient improvements over various strong baselines, and also reaches performance comparable to a state-of-the-art method based on the user's interactive query term reduction and expansion.
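The AHMM itself is not specified here in enough detail to reproduce. The simplified sketch below only illustrates the general idea of combined query reduction and expansion: reweight query terms by their support in relevance-feedback documents, drop the weakest ones, and add the strongest non-query feedback terms. The thresholds and weighting are arbitrary choices, not the paper's model.

```python
# Simplified query reduction + expansion via relevance-feedback term
# statistics (an illustration only; not the AHMM).
from collections import Counter

def reduce_and_expand(query_terms, feedback_docs, keep=0.5, expand=2):
    stats = Counter(t for doc in feedback_docs for t in doc)
    total = sum(stats.values()) or 1
    weight = {t: stats[t] / total for t in stats}
    # reduction: keep the better-supported half of the original query
    kept = sorted(query_terms, key=lambda t: weight.get(t, 0.0), reverse=True)
    kept = kept[:max(1, int(len(kept) * keep))]
    # expansion: add the top feedback terms not already in the query
    extra = [t for t, _ in stats.most_common() if t not in query_terms][:expand]
    return kept + extra

docs = [["hidden", "markov", "retrieval"], ["markov", "model", "retrieval"]]
print(reduce_and_expand(["noisy", "markov", "retrieval", "query"], docs))
```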
Abstract:
A known limitation of the Probability Ranking Principle (PRP) is that it does not cater for dependence between documents. Recently, the Quantum Probability Ranking Principle (QPRP) has been proposed, which implicitly captures dependencies between documents through “quantum interference”. This paper explores whether this new ranking principle leads to improved performance for subtopic retrieval, where novelty and diversity are required. In a thorough empirical investigation, models based on the PRP, as well as other recently proposed ranking strategies for subtopic retrieval (i.e. Maximal Marginal Relevance (MMR) and Portfolio Theory (PT)), are compared against the QPRP. On the given task, it is shown that the QPRP outperforms these other ranking strategies. Unlike MMR and PT, the QPRP requires no parameter estimation or tuning, making it both simple and effective. This research demonstrates that the application of quantum theory to problems within information retrieval can lead to significant improvements.
Abstract:
In this work, we summarise the development of a ranking principle based on quantum probability theory, called the Quantum Probability Ranking Principle (QPRP), and we also provide an overview of the initial experiments performed employing the QPRP. The main difference between the QPRP and the classic Probability Ranking Principle is that the QPRP implicitly captures the dependencies between documents by means of “quantum interference”. Consequently, the optimal ranking of documents is based not solely on the documents' probability of relevance but also on their interference with the previously ranked documents. Our research shows that the application of quantum theory to problems within information retrieval can lead to consistently better retrieval effectiveness, while still being simple, elegant and tractable.
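A minimal sketch of QPRP-style greedy ranking may help: each step selects the document maximising its relevance probability plus the interference with the already-ranked documents. The phase of the interference term is not directly observable, so it is approximated below from inter-document cosine similarity; treat that estimator, and the toy inputs, as assumptions rather than the authors' exact formulation.

```python
# QPRP-style greedy ranking sketch: score(d) = P(d) + sum over ranked
# docs j of -2*sqrt(P(d)*P(j))*sim(d, j). Similar (redundant) documents
# interfere destructively and are demoted.
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def qprp_rank(probs, vecs):
    """probs: relevance probability per document; vecs: term vectors."""
    remaining, ranked = list(range(len(probs))), []
    while remaining:
        def score(d):
            interference = sum(
                -2 * math.sqrt(probs[d] * probs[j]) * cosine(vecs[d], vecs[j])
                for j in ranked)
            return probs[d] + interference
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

# docs 0 and 1 are near-duplicates, so doc 2 is promoted to rank two
print(qprp_rank([0.9, 0.85, 0.6], [[1, 1, 0], [1, 1, 0], [0, 1, 1]]))
```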
Abstract:
Ranking documents according to the Probability Ranking Principle has been theoretically shown to guarantee optimal retrieval effectiveness in tasks such as ad hoc document retrieval. This ranking strategy assumes independence among document relevance assessments. This assumption, however, often does not hold, for example in scenarios where redundancy in retrieved documents is of major concern, as is the case in the sub-topic retrieval task. In this chapter, we propose a new ranking strategy for sub-topic retrieval that builds upon the interdependent document relevance and topic-oriented models. With respect to the topic-oriented model, we investigate both static and dynamic clustering techniques, aiming to group topically similar documents. Evidence from clusters is then combined with information about document dependencies to form a new document ranking. We compare and contrast the proposed method against state-of-the-art approaches, such as Maximal Marginal Relevance, Portfolio Theory for Information Retrieval, and standard cluster-based diversification strategies. The empirical investigation is performed on the ImageCLEF 2009 Photo Retrieval collection, where images are assessed with respect to sub-topics of a more general query topic. The experimental results show that our approaches outperform the state-of-the-art strategies with respect to a number of diversity measures.
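For reference, the Maximal Marginal Relevance baseline mentioned above follows the standard formula score(d) = λ·rel(d) − (1−λ)·max over ranked j of sim(d, j). A compact sketch, included as a known reference baseline rather than the chapter's own method, with illustrative inputs:

```python
# Standard MMR diversification: greedily trade query relevance against
# redundancy with already-ranked documents via lambda_.
def mmr_rank(rel, sim, lambda_=0.7):
    remaining, ranked = list(range(len(rel))), []
    while remaining:
        def score(d):
            redundancy = max((sim(d, j) for j in ranked), default=0.0)
            return lambda_ * rel[d] - (1 - lambda_) * redundancy
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

# toy demo: docs 0 and 1 are identical, so the novel doc 2 moves up
docs = [[1, 0], [1, 0], [0, 1]]
rel = [0.9, 0.8, 0.5]
sim = lambda i, j: 1.0 if docs[i] == docs[j] else 0.0
print(mmr_rank(rel, sim))
```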
Abstract:
The assumptions underlying the Probability Ranking Principle (PRP) have led to a number of alternative approaches that cater for, or compensate for, the PRP's limitations. All alternatives deviate from the PRP by incorporating dependencies. This results in a re-ranking that promotes or demotes documents depending upon their relationship with the documents that have already been ranked. In this paper, we compare and contrast the behaviour of state-of-the-art ranking strategies and principles. To do so, we tease out analytical relationships between the ranking approaches and we investigate the document kinematics to visualise the effects of the different approaches on document ranking.
Abstract:
Process choreographies describe interactions between different business partners and the dependencies between these interactions. While different proposals have been made for capturing choreographies at an implementation level, it remains unclear how choreographies should be described at a conceptual level. Although the Business Process Modeling Notation (BPMN) is already in use for describing choreographies in terms of interconnected interface behavior models, this paper introduces interaction modeling using BPMN. Such interaction models do not suffer from incompatibility issues and are better suited for human modelers. BPMN extensions are proposed and a mapping from interaction models to interface behavior models is presented.