94 results for speaker dependencies
Abstract:
In this paper we present the process of designing an efficient speech corpus for the first unit selection speech synthesis system for Bulgarian, along with some significant preliminary results regarding the quality of the resulting system. As the initial corpus is a crucial factor for the quality delivered by the Text-to-Speech system, special effort has been devoted to designing a complete and efficient corpus for use in a unit selection TTS system. The targeted domain of the TTS system, and hence of the corpus, is news reports; although this is a restricted domain, it is characterized by an unlimited vocabulary. The paper focuses on issues regarding the design of an optimal corpus for such a framework and the ideas on which our approach was based. A novel multi-stage approach is presented, with special attention given to language- and speaker-dependent issues, as they affect the entire process. The paper concludes with the presentation of our results and the evaluation experiments, which provide clear evidence of the quality level achieved. © 2011 Springer-Verlag.
Abstract:
Product design development has increasingly become a collaborative process. Conflicts often appear in the design process due to interactions among multiple actors. A critical element of collaborative design is therefore the resolution of conflict situations. In this paper, a methodology based on a process model is proposed to support conflict management. This methodology deals mainly with identifying the conflict resolution team and evaluating the impact of the selected solution. The proposed process model enables design process traceability and identification of the data dependency network, which makes it possible to identify the conflict resolution actors as well as to evaluate the impact of the selected solution. Copyright © 2006 IFAC.
Abstract:
This paper presents a method to manage Engineering Changes (EC) during the product development process, which is seen as a complex system. The ability to manage engineering changes efficiently reflects the agility of an enterprise. Although there are unnecessary ECs that should be avoided, many ECs are actually beneficial. The proposed method explores the linkages between product development process features and product specification dependencies. It suggests ways of identifying and managing specification dependencies to support the Engineering Change Management process. Furthermore, the impacts of an EC on the product specifications as well as on the process organization are studied. © 2009 World Scientific Publishing Company.
Abstract:
The authors use simulation to analyse the resource-driven dependencies between concurrent processes used to create customised products in a company. Such processes are uncertain, and each is unique to the design changes it implements; however, they have similar structures. For simulation, a level of abstraction is chosen such that all possible processes are represented by the same activity network, with differences between processes determined by the customisations that they implement. The approach is illustrated through application to a small business that creates customised fashion products. We suggest that similar techniques could be applied to study intertwined design processes in more complex domains. Copyright © 2011 Inderscience Enterprises Ltd.
Abstract:
Sir John Egan’s 1998 report on the construction industry (Construction Task Force 1998) noted its confrontational and adversarial nature. Both the original report and its subsequent endorsement in Accelerating Change (Strategic Forum 2002) called for improved working relationships—so-called ‘integration’—within and between both design and construction aspects. In this paper, we report on our observations of on-site team meetings for a major UK project during its construction phase. We attended a series of team meetings and recorded the patterns of verbal interaction that took place within them. In reporting our findings, we have deliberately used a graphical method for presenting the results, in the expectation that this will make them more readily accessible to designers. Our diagrams of these interaction patterns have already proved to be intuitively and quickly understood, and have generated interest and discussion among both those we observed and others who have seen them. We noted that different patterns of communication occurred in different types of meetings. Specifically, in the problem-solving meeting, there was a richness of interaction that was largely missing from progress meetings and technical meetings. Team members expressed greater satisfaction with this problem-solving meeting where these enriched exchanges took place. By making comparisons between the different patterns, we are also able to explore functional roles and their interactions. From this and other published evidence, we conclude that good teamworking practices depend on a complex interplay of relations and dependencies embedded within the team.
Abstract:
Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations which can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as the lasso. © 2011 IEEE.
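The group-sparsity idea can be illustrated with a minimal group-lasso sketch, solved by proximal gradient descent (ISTA) with block soft-thresholding; the sensing matrix, group layout, and data below are hypothetical, and the paper's actual penalties and solver may differ.

```python
import numpy as np

def group_soft_threshold(x, groups, thresh):
    """Block soft-thresholding: shrink each coefficient group towards zero."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > thresh:
            out[g] = (1 - thresh / norm) * x[g]
    return out

def group_lasso_ista(A, y, groups, lam, n_iter=500):
    """Proximal gradient for 0.5*||y - A x||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = group_soft_threshold(x - step * grad, groups, step * lam)
    return x

# Toy example: 4 groups of 8 coefficients, only one group active.
rng = np.random.default_rng(0)
groups = [np.arange(i, i + 8) for i in range(0, 32, 8)]
x_true = np.zeros(32)
x_true[groups[1]] = rng.normal(size=8)
A = rng.normal(size=(20, 32))                 # sensing / observation matrix
y = A @ x_true + 0.01 * rng.normal(size=20)
x_hat = group_lasso_ista(A, y, groups, lam=0.5)
```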
Abstract:
The Dependency Structure Matrix (DSM) has proved to be a useful tool for system structure elicitation and analysis. However, as with any modelling approach, the insights gained from analysis are limited by the quality and correctness of input information. This paper explores how the quality of data in a DSM can be enhanced by elicitation methods which include comparison of information acquired from different perspectives and levels of abstraction. The approach is based on comparison of dependencies according to their structural importance. It is illustrated through two case studies: creation of a DSM showing the spatial connections between elements in a product, and a DSM capturing information flows in an organisation. We conclude that considering structural criteria can lead to improved data quality in DSM models, although further research is required to fully explore the benefits and limitations of our proposed approach.
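As a rough illustration of cross-checking elicited dependencies against structural information, the sketch below represents a DSM as a boolean matrix and flags indirect (reachability-only) dependencies for review; the element names and matrix are hypothetical, not the case-study data.

```python
import numpy as np

def reachability(dsm):
    """Transitive closure of a boolean DSM (Warshall-style update per intermediate element)."""
    reach = dsm.astype(bool).copy()
    for k in range(dsm.shape[0]):
        reach |= np.outer(reach[:, k], reach[k, :])
    return reach

# Hypothetical 5-element DSM: dsm[i, j] = 1 means element i depends on element j.
elements = ["housing", "shaft", "bearing", "seal", "motor"]
dsm = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
])

# Dependencies implied by chains of direct links but not recorded directly.
indirect = reachability(dsm) & ~dsm.astype(bool)
for i, j in zip(*np.nonzero(indirect)):
    if i != j:
        print(f"{elements[i]} indirectly depends on {elements[j]} -- worth re-checking during elicitation")
```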
Abstract:
Latent variable models for network data extract a summary of the relational structure underlying an observed network. The simplest possible models subdivide nodes of the network into clusters; the probability of a link between any two nodes then depends only on their cluster assignment. Currently available models can be classified by whether clusters are disjoint or are allowed to overlap. These models can explain a "flat" clustering structure. Hierarchical Bayesian models provide a natural approach to capture more complex dependencies. We propose a model in which objects are characterised by a latent feature vector. Each feature is itself partitioned into disjoint groups (subclusters), corresponding to a second layer of hierarchy. In experimental comparisons, the model achieves significantly improved predictive performance on social and biological link prediction tasks. The results indicate that models with a single-layer hierarchy oversimplify real networks.
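A minimal sketch of the flat latent-feature idea that the model extends (omitting the second, subcluster layer of hierarchy): binary feature vectors and a feature-interaction weight matrix give link probabilities through a sigmoid. All values below are synthetic.

```python
import numpy as np

def link_probabilities(Z, W):
    """P(link i->j) = sigmoid(z_i^T W z_j) for binary latent features Z (n x K)."""
    logits = Z @ W @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(1)
n_nodes, n_features = 6, 3
Z = rng.integers(0, 2, size=(n_nodes, n_features))   # latent feature assignments
W = rng.normal(size=(n_features, n_features))         # feature-interaction weights

P = link_probabilities(Z, W)
print(np.round(P, 2))   # predicted link probabilities between all node pairs
```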
Abstract:
Design knowledge can be acquired from various sources and generally requires an integrated representation for its effective and efficient re-use. Though knowledge about products and processes can illustrate the solutions created (know-what) and the courses of actions (know-how) involved in their creation, the reasoning process (know-why) underlying the solutions and actions is still needed for an integrated representation of design knowledge. Design rationale is an effective way of capturing that missing part, since it records the issues addressed, the options considered, and the arguments used when specific design solutions are created and evaluated. Apart from the need for an integrated representation, effective retrieval methods are also of great importance for the re-use of design knowledge, as the knowledge involved in designing complex products can be huge. Developing methods for the retrieval of design rationale is very useful as part of the effective management of design knowledge, for the following reasons. Firstly, design engineers tend to want to consider issues and solutions before looking at solid models or process specifications in detail. Secondly, design rationale is mainly described using text, which often embodies much relevant design knowledge. Last but not least, design rationale is generally captured by identifying elements and their dependencies, i.e. in a structured way which opens the opportunity for going beyond simple keyword-based searching. In this paper, the management of design rationale for the re-use of design knowledge is presented. The retrieval of design rationale records in particular is discussed in detail. As evidenced in the development and evaluation, the methods proposed are useful for the re-use of design knowledge and can be generalised to be used for the retrieval of other kinds of structured design knowledge. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
Design rationale is an effective way of capturing knowledge, since it records the issues addressed, the options considered, and the arguments used when specific decisions are made during the design process. Design rationale is generally captured by identifying elements and their dependencies, i.e. in a structured way. Current retrieval methods focus mainly on either the classification of rationale or on keyword-based searches of records. Keyword-based retrieval is reasonably effective as the information in design rationale records is mainly described using text. However, most current keyword-based retrieval methods discard the implicit structures of these records, resulting either in poor retrieval precision or in isolated pieces of information that are difficult to understand. This ongoing research aims to go beyond keyword-based retrieval by developing methods and tools to facilitate the provision of useful design knowledge in new design projects. Our first step is to understand the structured information derived from the relationships between lumps of text held in different nodes of the design rationale captured via a software tool currently used in industry, and to study how this information can be utilised to improve retrieval performance. Specifically, methods for utilising various kinds of structured information are developed and implemented on a prototype keyword-based retrieval system developed in our earlier work. The implementation and evaluation of these methods show that the structured information can be utilised in a number of ways, such as filtering the results and providing more complete information. This allows the retrieval system to present results that are easy to understand, and which closely match designers' queries. Like design rationale, other representations of design knowledge in essence involve structured information, and thus the methods proposed can be generalised and adapted for the retrieval of other kinds of design knowledge. Copyright © 2002-2012 The Design Society. All rights reserved.
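A toy illustration of combining keyword retrieval with rationale structure, assuming a small set of hypothetical issue/option/argument nodes and typed links: TF-IDF matching finds candidate nodes, and each hit is expanded with its linked nodes so it is returned in context rather than as an isolated fragment. This is a sketch, not the industrial tool or the prototype system described.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical design rationale nodes and their typed links (issue -> options -> argument).
nodes = {
    "I1": "How should the bracket be fastened to the housing?",
    "O1": "Use self-tapping screws into bosses on the housing.",
    "O2": "Bond the bracket with structural adhesive.",
    "A1": "Screws allow disassembly for servicing but add cost.",
}
links = {"I1": ["O1", "O2"], "O1": ["A1"]}

ids = list(nodes)
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(nodes[i] for i in ids)

def retrieve(query, top_k=2):
    """Keyword match on node text, then expand each hit with its linked nodes."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix).ravel()
    hits = [ids[i] for i in scores.argsort()[::-1][:top_k] if scores[i] > 0]
    return {h: links.get(h, []) for h in hits}

print(retrieve("fasten bracket screws"))
```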
Abstract:
MOTIVATION: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct, but often complementary, information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. RESULTS: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI's performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation-chip and protein-protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques, as well as to non-integrative approaches, demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods.
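The dataset-agreement idea can be sketched for two datasets: the joint prior over an item's cluster labels is up-weighted when the labels coincide, controlled by an agreement parameter phi. This simplified illustration uses made-up mixture weights and omits the DMA mixture likelihoods and the inference scheme used in MDI.

```python
import numpy as np

def joint_allocation_prior(pi1, pi2, phi):
    """Normalised joint prior with p(c1=k, c2=l) proportional to pi1[k] * pi2[l] * (1 + phi * [k == l])."""
    P = np.outer(pi1, pi2) * (1.0 + phi * np.eye(len(pi1)))
    return P / P.sum()

pi1 = np.array([0.5, 0.3, 0.2])   # mixture weights for dataset 1
pi2 = np.array([0.4, 0.4, 0.2])   # mixture weights for dataset 2
for phi in (0.0, 5.0):
    P = joint_allocation_prior(pi1, pi2, phi)
    # The trace is the probability that an item lands in the same cluster in both datasets.
    print(f"phi={phi}: P(same cluster in both datasets) = {np.trace(P):.2f}")
```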
Abstract:
This paper presents the development and application of a multi-objective optimization framework for the design of two-dimensional multi-element high-lift airfoils. An innovative and efficient optimization algorithm, namely Multi-Objective Tabu Search (MOTS), has been selected as the core of the framework. The flow field around the multi-element configuration is simulated using the commercial computational fluid dynamics (CFD) suite ANSYS CFX. Element shapes and deployment settings have been considered as design variables in the optimization of the Garteur A310 airfoil, as presented here. A validation and verification process of the CFD simulation for the Garteur airfoil is performed using available wind tunnel data. Two design examples are presented in this study: a single-point optimization aiming at concurrently increasing the lift and drag performance of the test case at a fixed angle of attack, and a multi-point optimization. The latter aims at introducing operational robustness and off-design performance into the design process. Finally, the performance of the MOTS algorithm is assessed by comparison with the leading NSGA-II (Non-dominated Sorting Genetic Algorithm) optimization strategy. An equivalent framework developed by the authors within the industrial sponsor's environment is used for the comparison. To eliminate CFD solver dependencies, three optimum solutions from the Pareto optimal set have been cross-validated. As a result of this study, MOTS has been demonstrated to be an efficient and effective algorithm for aerodynamic optimizations. Copyright © 2012 Tech Science Press.
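A minimal sketch of the Pareto-optimality bookkeeping that multi-objective optimisers such as MOTS and NSGA-II rely on: a non-dominated filter over candidate designs with two objectives to be minimised (here, negative lift coefficient and drag coefficient, with hypothetical values).

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives to be minimised)."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is no worse in every objective and strictly better in one.
            if i != j and np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                keep[i] = False
                break
    return np.nonzero(keep)[0]

# Hypothetical candidate designs: columns are (-lift coefficient, drag coefficient).
candidates = np.array([
    [-2.9, 0.060],
    [-3.1, 0.075],
    [-2.8, 0.055],
    [-3.0, 0.080],
])
print("Pareto-optimal designs:", pareto_front(candidates))
```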
Abstract:
Urbanisation is the great driving force of the twenty-first century. Cities are associated with both productivity and creativity, and the benefits offered by closely connected and high-density living and working contribute to sustainability. At the same time, cities need extensive infrastructure – like water, power, sanitation and transportation systems – to operate effectively. Cities therefore comprise multiple components, forming both static and dynamic systems that are interconnected directly and indirectly on a number of levels, all forming the backdrop for the interaction of people and processes. Bringing together large numbers of people and complex products in rich interactions can lead to vulnerability from hazards, threats and even trends, whether natural hazards, epidemics, political upheaval, demographic changes, economic instability and/or mechanical failures. The key to countering vulnerability is the identification of critical systems and clear understanding of their interactions and dependencies. Critical systems can be assessed methodically to determine the implications of their failure and their interconnectivities with other systems to identify options. The overriding need is to support resilience – defined here as the degree to which a system or systems can continue to function effectively in a changing environment. Cities need to recognise the significance of devising adaptation strategies and processes to address a multitude of uncertainties relating to climate, economy, growth and demography. In this paper we put forward a framework to support cities in understanding the hazards, threats and trends that can make them vulnerable to unexpected changes and unpredictable shocks. The framework draws on an asset model of the city, in which components that contribute to resilience include social capital, economic assets, manufactured assets, and governance. The paper reviews the field, and draws together an overarching framework intended to help cities plan a robust trajectory towards increased resilience through flexibility, resourcefulness and responsiveness. It presents some brief case studies demonstrating the applicability of the proposed framework to a wide variety of circumstances.
Abstract:
Human listeners can identify vowels regardless of speaker size, although the sound waves for an adult and a child speaking the 'same' vowel would differ enormously. The differences are mainly due to the differences in vocal tract length (VTL) and glottal pulse rate (GPR), which are both related to body size. Automatic speech recognition machines are notoriously bad at understanding children if they have been trained on the speech of an adult. In this paper, we propose that the auditory system adapts its analysis of speech sounds, dynamically and automatically, to the GPR and VTL of the speaker on a syllable-to-syllable basis. We illustrate how this rapid adaptation might be performed with the aid of a computational version of the auditory image model, and we propose that an auditory preprocessor of this form would improve the robustness of speech recognisers.
Abstract:
Copulas make it possible to learn marginal distributions separately from the multivariate dependence structure (copula) that links them together into a density function. Vine factorizations ease the learning of high-dimensional copulas by constructing a hierarchy of conditional bivariate copulas. However, to simplify inference, it is common to assume that each of these conditional bivariate copulas is independent of its conditioning variables. In this paper, we relax this assumption by discovering the latent functions that specify the shape of a conditional copula given its conditioning variables. We learn these functions by following a Bayesian approach based on sparse Gaussian processes with expectation propagation for scalable, approximate inference. Experiments on real-world datasets show that, when modeling all conditional dependencies, we obtain better estimates of the underlying copula of the data.
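A simplified sketch of a conditional copula whose strength varies with the conditioning variable: local Kendall's tau estimates are mapped to a Gaussian-copula correlation and smoothed with an off-the-shelf GP regressor (a stand-in for the sparse GPs with expectation propagation used in the paper). The data are synthetic.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

# Synthetic data: the dependence between (x, y) strengthens with the conditioning variable z.
n = 2000
z = rng.uniform(0, 1, n)
rho_true = 0.8 * z                       # correlation of the latent Gaussian copula
eps = rng.normal(size=(n, 2))
x = eps[:, 0]
y = rho_true * eps[:, 0] + np.sqrt(1 - rho_true ** 2) * eps[:, 1]

# Local estimates of Kendall's tau in bins of z, mapped to a Gaussian-copula correlation.
bins = np.linspace(0, 1, 11)
z_mid, rho_hat = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (z >= lo) & (z < hi)
    tau, _ = kendalltau(x[mask], y[mask])
    z_mid.append((lo + hi) / 2)
    rho_hat.append(np.sin(np.pi * tau / 2))   # tau -> rho for the Gaussian copula

# GP regression on arctanh(rho) keeps predictions inside (-1, 1) after the inverse transform.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
gp.fit(np.array(z_mid)[:, None], np.arctanh(np.array(rho_hat)))
rho_pred = np.tanh(gp.predict(np.array([[0.1], [0.5], [0.9]])))
print(np.round(rho_pred, 2))   # estimated copula correlation at three conditioning values
```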