842 results for Discrete Choice Model
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular, holistic framework implemented using Relational Tree (R-Tree) techniques. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques, and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification And Regression Tree (CART) based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations of the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to generalise beyond the training data, since it achieved better accuracy on the test data set. Our qualitative evaluation results show that the FDT model produces synthesised speech that is perceived to be more natural than that of the CART model. In addition, we observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piecewise constant or discrete approximations. We therefore conclude that the FDT approach is a practical one for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
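As a point of reference for the quantitative criteria named in this abstract, the sketch below computes RMSE and Pearson correlation between observed and predicted syllable durations; the array contents and units are illustrative, not the paper's data.

```python
import numpy as np

def rmse(observed, predicted):
    """Root Mean Square Error between observed and predicted durations."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def corr(observed, predicted):
    """Pearson correlation between observed and predicted durations."""
    return np.corrcoef(observed, predicted)[0, 1]

# Illustrative syllable durations in milliseconds (not the paper's data).
obs = [120.0, 95.0, 180.0, 140.0]
pred = [115.0, 102.0, 170.0, 150.0]
print(f"RMSE = {rmse(obs, pred):.2f} ms, Corr = {corr(obs, pred):.3f}")
```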
Abstract:
The recent development of using negative stiffness inclusions to achieve extreme overall stiffness and mechanical damping in composite materials reveals a new avenue for constructing high-performance materials. One source of negative stiffness is a phase-transforming material in the vicinity of its phase transition, as suggested by the Landau theory. To understand the underlying mechanism from a microscopic viewpoint, we theoretically analyze a 2D, nested triangular lattice cell with pre-chosen elements containing negative stiffness to demonstrate anomalies in overall stiffness and damping. Combining this with current knowledge from continuum composite-theory models, such as the Voigt, Reuss, and Hashin-Shtrikman models, we further explore the stability of the system with Lyapunov's indirect method. The evolution of the microstructure in terms of the discrete system is discussed. A potential application of the results presented here is the development of special thin films with unusual in-plane mechanical properties. © 2006 Elsevier B.V. All rights reserved.
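For background on the continuum bounds and the stability test invoked in this abstract, the standard forms are sketched below in generic notation (a two-phase composite with a negative-stiffness phase; the symbols are not the paper's own):

```latex
% Voigt (uniform-strain) and Reuss (uniform-stress) estimates for a
% two-phase composite with phase stiffnesses E_1, E_2, volume fraction f:
E_{\mathrm{Voigt}} = f E_1 + (1 - f) E_2, \qquad
E_{\mathrm{Reuss}} = \left( \frac{f}{E_1} + \frac{1 - f}{E_2} \right)^{-1}.
% With E_1 < 0 (a negative-stiffness phase), E_Reuss can grow without bound
% or change sign, which is the source of the anomalous overall response.
% Lyapunov's indirect method: linearize \dot{u} = g(u) about an equilibrium
% u^*; the equilibrium is asymptotically stable if every eigenvalue \lambda
% of the Jacobian Dg(u^*) satisfies Re(\lambda) < 0.
```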
Abstract:
Recently, we have developed the hierarchical Generative Topographic Mapping (HGTM), an interactive method for visualization of large high-dimensional real-valued data sets. In this paper, we propose a more general visualization system by extending HGTM in three ways, allowing the user to visualize a wider range of data sets and better supporting the model development process. 1) We integrate HGTM with noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM). This enables us to visualize data of an inherently discrete nature, e.g., collections of documents, in a hierarchical manner. 2) We give the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode, the user selects "regions of interest," whereas in the automatic mode, an unsupervised minimum message length (MML)-inspired construction of a mixture of LTMs is employed. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode; such a situation often arises when visualizing large data sets. 3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool for improving our understanding of the visualization plots, since they can highlight the boundaries between data clusters. We illustrate our approach on a toy example and evaluate it on three more complex real data sets. © 2005 IEEE.
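The paper derives LTM-specific formulas; the generic differential-geometric quantity behind them, for a smooth mapping f from an L-dimensional latent space into data space with Jacobian J, is the standard local volume expansion factor (shown here for orientation only):

```latex
\mathrm{MF}(x) \;=\; \sqrt{\det\!\big( J(x)^{\top} J(x) \big)},
\qquad J_{ij}(x) = \frac{\partial f_i(x)}{\partial x_j}.
% Large values of MF(x) mark regions where the latent space is stretched
% in data space, which is how boundaries between clusters are highlighted.
```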
Abstract:
Background: Coronary heart disease (CHD) is a public health priority in the UK. The National Service Framework (NSF) has set standards for the prevention, diagnosis and treatment of CHD, which include the use of cholesterol-lowering agents aimed at achieving targets of blood total cholesterol (TC) < 5.0 mmol/L and low density lipoprotein-cholesterol (LDL-C) < 3.0 mmol/L. In order to achieve these targets cost-effectively, prescribers need to make an informed choice from the range of statins available. Aim: To estimate the average and relative cost effectiveness of atorvastatin, fluvastatin, pravastatin and simvastatin in achieving the NSF LDL-C and TC targets. Design: Model-based economic evaluation. Methods: An economic model was constructed to estimate the number of patients achieving the NSF targets for LDL-C and TC at each dose of statin, and to calculate the average drug cost and incremental drug cost per patient achieving the target levels. The population baseline LDL-C and TC, drug efficacy and drug costs were taken from previously published data. Estimates of the distribution of patients receiving each dose of statin were derived from the UK national DIN-LINK database. Results: The estimated annual drug cost per 1000 patients treated was £289 000 with atorvastatin, £315 000 with simvastatin, £333 000 with pravastatin and £167 000 with fluvastatin. The percentages of patients achieving target were 74.4%, 46.4%, 28.4% and 13.2% for atorvastatin, simvastatin, pravastatin and fluvastatin, respectively. Incremental drug costs per extra patient treated to the LDL-C and TC targets, compared with fluvastatin, were £198 and £226 for atorvastatin, £443 and £567 for simvastatin, and £1089 and £2298 for pravastatin, using 2002 drug costs. Conclusions: As a result of its superior efficacy, atorvastatin generates a favourable cost-effectiveness profile as measured by drug cost per patient treated to the LDL-C and TC targets. For a given drug budget, more patients would achieve the NSF LDL-C and TC targets with atorvastatin than with any of the other statins examined.
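The incremental figures quoted in the Results follow from simple arithmetic on the stated annual costs and success rates; the sketch below reproduces the LDL-C comparison (small discrepancies against the quoted £198, £443 and £1089 are due to rounding of the published inputs).

```python
# Annual drug cost per 1000 patients (GBP, 2002 prices) and percentage of
# patients reaching the NSF LDL-C target, as quoted in the abstract.
cost = {"atorvastatin": 289_000, "simvastatin": 315_000,
        "pravastatin": 333_000, "fluvastatin": 167_000}
ldl_pct = {"atorvastatin": 74.4, "simvastatin": 46.4,
           "pravastatin": 28.4, "fluvastatin": 13.2}

# Incremental drug cost per extra patient treated to target, relative to
# fluvastatin: (cost difference) / (extra patients per 1000 reaching target).
for drug in ("atorvastatin", "simvastatin", "pravastatin"):
    extra = (ldl_pct[drug] - ldl_pct["fluvastatin"]) * 10   # patients per 1000
    icer = (cost[drug] - cost["fluvastatin"]) / extra
    print(f"{drug}: about £{icer:.0f} per extra patient to LDL-C target")
# Prints roughly £199, £446 and £1092, matching the quoted values up to
# rounding of the published inputs.
```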
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important for designing more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules and the high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods, in which we use log-transformations at certain steps of the expectation-maximisation (EM) based parameter learning process to make them tractable for high-dimensional datasets. We demonstrate these proposed variants on both synthetic data and an electrostatic potential dataset of MHC class I. We also propose to extend the latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives a better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem not much addressed in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, using appropriate noise models for each type of data, in order to visualise mixed-type data in a single plot; we call this model the generalised GTM (GGTM). We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model, and call this GGTM with feature saliency (GGTM-FS). We evaluate visualisation quality using metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use KL divergence and nearest-neighbour classification error to determine the separation between classes. We demonstrate the efficacy of these proposed models on both synthetic and real biological datasets, with a main focus on the MHC class I dataset.
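The thesis's exact formulation is not reproduced here, but one standard device consistent with the description of log-transformed EM steps is to evaluate responsibilities in the log domain via the log-sum-exp trick, which avoids numerical underflow when dimensionality is high; a minimal sketch:

```python
import numpy as np
from scipy.special import logsumexp

def log_responsibilities(log_prior, log_likelihood):
    """E-step responsibilities computed entirely in the log domain.

    log_prior:      shape (K,), log mixing weights of K components.
    log_likelihood: shape (N, K), per-point, per-component log-likelihoods.
    """
    log_joint = log_prior[None, :] + log_likelihood          # (N, K)
    log_norm = logsumexp(log_joint, axis=1, keepdims=True)   # (N, 1)
    return np.exp(log_joint - log_norm)                      # responsibilities
```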
Abstract:
Mathematics Subject Classification: 26A33, 45K05, 60J60, 60G50, 65N06, 80-99.
Estimation of productivity in Korean electric power plants: a semiparametric smooth coefficient model
Abstract:
This paper analyzes the impact of load factor, facility type and generator type on the productivity of Korean electric power plants. In order to capture important differences in the effect of load policy on power output, we use a semiparametric smooth coefficient (SPSC) model that captures heterogeneous performance across power plants and over time by allowing the underlying technologies to differ. The SPSC model accommodates both continuous and discrete covariates. Various specification tests are conducted to assess the performance of the SPSC model. Using a unique generator-level panel dataset spanning the period 1995-2006, we find that the impact of load factor, generator type and facility type on power generation varies substantially in magnitude and significance across plant characteristics. The results have strong implications for generation policy in Korea, as outlined in this study.
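A smooth coefficient model has the generic form y_i = x_i'β(z_i) + ε_i, with the coefficients varying smoothly in the covariate z; a minimal kernel-weighted least-squares sketch of a local estimator is given below (the Gaussian kernel and bandwidth are illustrative choices, not the paper's):

```python
import numpy as np

def local_beta(x, y, z, z0, h):
    """Estimate beta(z0) in y = x'beta(z) + e by kernel-weighted least squares.

    x: (n, p) regressors; y: (n,) response; z: (n,) smoothing covariate;
    z0: evaluation point; h: bandwidth (illustrative choice).
    """
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)   # Gaussian kernel weights
    xw = x * w[:, None]
    # Weighted normal equations: (X'WX) beta = X'Wy
    return np.linalg.solve(xw.T @ x, xw.T @ y)
```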
Abstract:
2000 Mathematics Subject Classification: 60J80
Abstract:
This chapter contributes to the anthology on learning to research - researching to learn because it emphasises the need to design curricula that enable living research and ongoing researcher development, rather than curricula that restrict student and staff activities within a marketised approach towards time. In recent decades higher education (HE) has come to be valued for its contribution to the global economy. Referred to as the neo-liberal university, a strong prioritisation has been placed on meeting the needs of industry by providing a better workforce. This perspective emphasises the role of a degree in HE to secure future material affluence, rather than to study as an on-going investment in the self (Molesworth, Nixon & Scullion, 2009: 280). Students are treated primarily as consumers in this model: through their tuition fees they purchase a product, rather than benefit from the transformative potential university education offers for the whole of life. Given that HE is now measured by the numbers of students it attracts, and later places into well-paid jobs, there is intense pressure on time, which has led to the learning experiences of students being broken down into discrete modules. Whilst this provides consistency, students can come to view research processes in a fragmented way within the modular system. Topics are presented chronologically, week by week, and students simply complete a set of tasks to ‘have a degree’, rather than to ‘be learners’ (Molesworth, Nixon & Scullion, 2009: 277) who are living their research in relation to their own past, present and future. The idea of living research in this context is my own adaptation of an approach suggested by C. Wright Mills (1959) in The Sociological Imagination. Mills advises that successful scholars do not split their work from the rest of their lives, but treat scholarship as a choice of how to live, as well as a choice of career. The marketised slant in HE thus creates a tension, firstly, for students who are learning to research. Mills would encourage them to be creative, not instrumental, in their use of time, yet they are journeying through a system that is structured for a swift progression towards a highly paid job, rather than crafted for the reflexive inquiry that transforms their understanding throughout life. Many universities place a strong focus on discrete skills for student employability, but I suggest that embedding the transformative skills emphasised by Mills empowers students and builds the confidence that helps them make connections that aid their employability. Secondly, the marketised approach creates a problem for staff designing the curriculum, if students do not easily make links across time over their years of study and whole programmes. By researching to learn, staff can discover new methods to apply in their design of the curriculum, to help students make important and creative connections across their programmes of study.
Abstract:
We compare spot patterns generated by Turing mechanisms with those generated by replication cascades in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (the feed rate), where degenerate patterns with different numbers of spots coexist at a fixed feed rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected by tuning the feed rate, exploiting hysteresis and directionality effects of the different pattern pathways.
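The paper's specific system is not reproduced here, but the flavour of a one-dimensional reaction-diffusion model with a feed rate as control parameter can be conveyed by the well-known Gray-Scott equations; the sketch below integrates them with explicit Euler steps (all parameter values are illustrative):

```python
import numpy as np

# Gray-Scott model: u_t = Du*u_xx - u*v^2 + F*(1 - u)
#                   v_t = Dv*v_xx + u*v^2 - (F + k)*v
# F is the feed rate, the control parameter governing spot replication.
n, dx, dt = 256, 1.0, 0.2
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # illustrative values
u, v = np.ones(n), np.zeros(n)
v[n // 2 - 5 : n // 2 + 5] = 0.5          # localized seed perturbation
u -= v / 2

def lap(a):
    """1D Laplacian with periodic boundary conditions."""
    return (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx ** 2

for _ in range(20_000):
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1 - u))
    v += dt * (Dv * lap(v) + uvv - (F + k) * v)

# Count grid points inside spots as a crude summary of the final pattern.
print("points with v > 0.2:", int((v > 0.2).sum()))
```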
Abstract:
The statistical mechanics of two coupled vector fields is studied in the tight-binding model that describes the propagation of polarized light in discrete waveguides in the presence of four-wave mixing. The energy and power conservation laws enable the formulation of the equilibrium properties of the polarization state in terms of the Gibbs measure with positive temperature. The transition line T=∞ is established, beyond which discrete vector solitons are created. In the limit of large nonlinearity, an analytical expression for the distribution of Stokes parameters is also obtained; it is found to depend only on the statistical properties of the initial polarization state and not on the strength of the nonlinearity. The evolution of the system to the final equilibrium state is shown to pass through an intermediate stage in which the energy exchange between the waveguides is still negligible. The distribution of the Stokes parameters in this regime has a complex multimodal structure that depends strongly on the nonlinear coupling coefficients and the initial conditions.
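For orientation, the Stokes parameters of the two field components u_n, v_n in waveguide n, and the generic form of a Gibbs measure with the power as a second conserved quantity, are (standard definitions; normalization conventions vary):

```latex
S_0^{(n)} = |u_n|^2 + |v_n|^2, \quad
S_1^{(n)} = |u_n|^2 - |v_n|^2, \quad
S_2^{(n)} = 2\,\mathrm{Re}\!\left(u_n^{*} v_n\right), \quad
S_3^{(n)} = 2\,\mathrm{Im}\!\left(u_n^{*} v_n\right),
% so that (S_0)^2 = (S_1)^2 + (S_2)^2 + (S_3)^2 for fully polarized light.
% Gibbs measure at inverse temperature \beta = 1/T, with a chemical
% potential \mu conjugate to the conserved power N:
\rho \;\propto\; \exp\!\left[-\beta\,(H - \mu N)\right].
```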
Abstract:
Markovian models are widely used to analyse quality-of-service properties of both system designs and deployed systems. Thanks to the emergence of probabilistic model checkers, this analysis can be performed with high accuracy. However, its usefulness is heavily dependent on how well the model captures the actual behaviour of the analysed system. Our work addresses this problem for a class of Markovian models termed discrete-time Markov chains (DTMCs). We propose a new Bayesian technique for learning the state transition probabilities of DTMCs based on observations of the modelled system. Unlike existing approaches, our technique weighs observations based on their age, to account for the fact that older observations are less relevant than more recent ones. A case study from the area of bioinformatics workflows demonstrates the effectiveness of the technique in scenarios where the model parameters change over time.
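The abstract does not state the weighting scheme; a common way to realize age-weighted Bayesian learning of DTMC transition probabilities is to discount Dirichlet pseudo-counts with an exponential forgetting factor before each update. The sketch below is that generic construction, offered as an illustration rather than the paper's estimator.

```python
import numpy as np

class AgingDTMCLearner:
    """Dirichlet-based learning of DTMC rows with exponential forgetting."""

    def __init__(self, n_states, prior=1.0, decay=0.99):
        # counts[i, j]: pseudo-count for transitions i -> j (Dirichlet params)
        self.counts = np.full((n_states, n_states), prior)
        self.decay = decay          # < 1 discounts older observations

    def observe(self, i, j):
        self.counts *= self.decay   # age all previous evidence
        self.counts[i, j] += 1.0    # add the fresh observation

    def transition_matrix(self):
        # Posterior-mean estimate of each row of the DTMC
        return self.counts / self.counts.sum(axis=1, keepdims=True)

learner = AgingDTMCLearner(n_states=2)
for (i, j) in [(0, 1), (1, 0), (0, 1), (0, 0)]:
    learner.observe(i, j)
print(learner.transition_matrix())
```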
Abstract:
We investigate the mobility of nonlinear localized modes in a generalized discrete Ginzburg-Landau-type model, describing a one-dimensional waveguide array in an active Kerr medium with intrinsic, saturable gain and damping. It is shown that exponentially localized, traveling discrete dissipative breather-solitons may exist as stable attractors supported only by intrinsic properties of the medium, i.e., in the absence of any external field or symmetry-breaking perturbations. Through an interplay by the gain and damping effects, the moving soliton may overcome the Peierls-Nabarro barrier, present in the corresponding conservative system, by self-induced time-periodic oscillations of its power (norm) and energy (Hamiltonian), yielding exponential decays to zero with different rates in the forward and backward directions. In certain parameter windows, bistability appears between fast modes with small oscillations and slower, large-oscillation modes. The velocities and the oscillation periods are typically related by lattice commensurability and exhibit period-doubling bifurcations to chaotically "walking" modes under parameter variations. If the model is augmented by intersite Kerr nonlinearity, thereby reducing the Peierls-Nabarro barrier of the conservative system, the existence regime for moving solitons increases considerably, and a richer scenario appears including Hopf bifurcations to incommensurately moving solutions and phase-locking intervals. Stable moving breathers also survive in the presence of weak disorder. © 2014 American Physical Society.
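A representative equation of this model class, not necessarily the exact system studied in the paper, is a discrete nonlinear Schrödinger chain augmented with saturable gain and linear damping:

```latex
i\,\dot{u}_n + C\,(u_{n+1} + u_{n-1} - 2 u_n) + |u_n|^2 u_n
  \;=\; i\,\frac{\gamma\,u_n}{1 + |u_n|^2 / P_{\mathrm{sat}}} \;-\; i\,\Gamma\,u_n .
% C: inter-site coupling; |u_n|^2 u_n: on-site Kerr term; gamma, P_sat:
% gain strength and saturation power; Gamma: linear damping. Setting the
% right-hand side to zero recovers the conservative DNLS lattice, whose
% Peierls-Nabarro barrier the dissipative terms allow moving solitons
% to overcome.
```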
Abstract:
In the Marx-Neumann version of the Neumann model introduced by Morishima, the use of commodities is split between production and consumption, and wages are determined as the cost of necessary consumption. In such a version it may occur that the equilibrium prices of all goods necessary for consumption are zero, so that the equilibrium wage rate becomes zero too. In fact, such a paradoxical case will always arise when the economy is decomposable and the equilibrium is not unique in terms of the growth and interest rate: a zero equilibrium wage rate will appear in all equilibrium solutions where the growth and interest rate are less than maximal. This is another proof of Neumann's genius and intuition, for he arrived at the uniqueness of equilibrium via an assumption that implied that the economy was indecomposable, a condition relaxed later by Kemeny, Morgenstern and Thompson. The same situation occurs in similar models based on Leontief technology, and such versions of the Marx-Neumann model make the roots of the problem more apparent. Their analysis also yields an interesting corollary to Ricardo's corn rate of profit: the real cause of the awkwardness is bad specification of the model, in which luxury commodities are introduced without there being any final demand for them, so that their production becomes a waste of resources. Bad model specification shows up as a consumption coefficient incompatible with the given technology in the more general model with joint production and technological choice, for the paradoxical situation implies that the level of consumption could be raised and/or the intensity of labour diminished without lowering the equilibrium rate of growth and interest. This entails wasteful use of resources and indicates again that the equilibrium conditions are improperly specified. It is shown that the conditions for equilibrium can and should be redefined for the Marx-Neumann model without assuming an indecomposable economy, in a way that ensures the existence of an equilibrium that is unique in terms of the growth and interest rate and has a positive wage rate, so confirming Neumann's intuition. The proposed solution relates closely to findings of Bromek in a paper correcting Morishima's generalization of the wage/profit and consumption/investment frontiers.
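For reference, the classical equilibrium conditions of the von Neumann model, in the form given by Kemeny, Morgenstern and Thompson (input matrix A, output matrix B, intensity vector x ≥ 0, price vector p ≥ 0, growth factor α, interest factor β), are:

```latex
B x \;\ge\; \alpha A x, \qquad
p B \;\le\; \beta\, p A, \qquad
p B x \;>\; 0, \qquad \alpha = \beta .
% Kemeny, Morgenstern and Thompson replaced Neumann's positivity
% assumption A + B > 0 by the weaker condition p B x > 0, which admits
% decomposable economies and hence multiple equilibrium growth factors;
% this is the setting in which the zero-wage paradox discussed above arises.
```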
Abstract:
In this paper we allow firms to choose their prices and quantities simultaneously. Quantities are produced in advance, and their common sales price is determined by the market. Firms offer their “residual capacities” at their announced prices, and the corresponding demand is served to order. If all firms have small capacities, we obtain the Bertrand solution, while if at least one firm has a sufficiently large capacity, the Cournot outcome or a form of price leadership can emerge.