848 results for Native Vegetation Condition, Benchmarking, Bayesian Decision Framework, Regression, Indicators
Abstract:
This paper presents a novel prosody model in the context of computer text-to-speech synthesis applications for tone languages. We have demonstrated its applicability using the Standard Yorùbá (SY) language. Our approach is motivated by the theory that abstract and realised forms of various prosody dimensions should be modelled within a modular and unified framework [Coleman, J.S., 1994. Polysyllabic words in the YorkTalk synthesis system. In: Keating, P.A. (Ed.), Phonological Structure and Forms: Papers in Laboratory Phonology III, Cambridge University Press, Cambridge, pp. 293–324]. We have implemented this framework using the Relational Tree (R-Tree) technique. R-Tree is a sophisticated data structure for representing a multi-dimensional waveform in the form of a tree. The underlying assumption of this research is that it is possible to develop a practical prosody model by using appropriate computational tools and techniques which combine acoustic data with an encoding of the phonological and phonetic knowledge provided by experts. To implement the intonation dimension, fuzzy logic based rules were developed using speech data from native speakers of Yorùbá. The Fuzzy Decision Tree (FDT) and the Classification and Regression Tree (CART) techniques were tested in modelling the duration dimension. For practical reasons, we have selected the FDT for implementing the duration dimension of our prosody model. To establish the effectiveness of our prosody model, we have also developed a Stem-ML prosody model for SY. We have performed both quantitative and qualitative evaluations on our implemented prosody models. The results suggest that, although the R-Tree model does not predict the numerical speech prosody data as accurately as the Stem-ML model, it produces synthetic speech prosody with better intelligibility and naturalness. The R-Tree model is particularly suitable for speech prosody modelling for languages with limited language resources and expertise, e.g. 
African languages. Furthermore, the R-Tree model is easy to implement, interpret and analyse.
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework. This framework is implemented using the Relational Tree (R-Tree) technique. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques, and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we have also developed a Classification and Regression Tree (CART) based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations on the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to extrapolate from the training data, since it achieved a better accuracy for the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than our CART model. In addition, we also observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piecewise or discrete constant approximations. We, therefore, conclude that the FDT approach is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
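The quantitative metrics named above, RMSE and correlation between predicted and observed syllable durations, can be made concrete with a short sketch. The duration values below are invented for illustration; they are not the paper's SY speech data.

```python
import math

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def corr(pred, obs):
    """Pearson correlation between predicted and observed values."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)

# Hypothetical syllable durations in seconds (illustrative only).
observed  = [0.12, 0.20, 0.15, 0.30, 0.22]
predicted = [0.14, 0.18, 0.16, 0.27, 0.24]

err, r = rmse(predicted, observed), corr(predicted, observed)
```

A model with lower RMSE fits the numerical data more closely (as CART does here for training data), while perceptual tests of intelligibility and naturalness can still favour the other model, as the paper reports for FDT.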
Abstract:
An approach to building distributed decision support systems (DSS) is proposed. A framework for a distributed DSS is defined, and questions of problem formulation and solving with artificial intelligent agents in the system core are examined.
Abstract:
The problem of decision-function quality in pattern recognition is considered. An overview of approaches to the solution of this problem is given. Within the Bayesian framework, we suggest an approach based on Bayesian interval estimates of quality on a finite set of events.
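As a deliberately simplified illustration of a Bayesian interval estimate of quality on a finite set of events (not the authors' construction), the error rate of a decision function observed on a finite number of test events can be given a Beta posterior, from which an approximate credible interval follows; the uniform Beta(1, 1) prior and the normal approximation are assumptions:

```python
import math

def bayes_error_interval(errors: int, trials: int, z: float = 1.96):
    """Beta(1, 1)-prior posterior over the error rate, with a
    normal-approximation ~95% credible interval."""
    a, b = errors + 1, trials - errors + 1          # Beta posterior parameters
    mean = a / (a + b)                              # posterior mean
    var = a * b / ((a + b) ** 2 * (a + b + 1))      # posterior variance
    half = z * math.sqrt(var)
    return mean, max(0.0, mean - half), min(1.0, mean + half)

# E.g. 12 misclassifications among 200 test events.
mean, low, high = bayes_error_interval(errors=12, trials=200)
```

With SciPy available, `scipy.stats.beta.ppf` would give exact posterior quantiles instead of the normal approximation.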
Abstract:
The inverse controller is traditionally assumed to be a deterministic function. This paper presents a pedagogical methodology for estimating the stochastic model of the inverse controller. The proposed method is based on Bayes' theorem. Using Bayes' rule to obtain the stochastic model of the inverse controller allows the use of knowledge of uncertainty from both the inverse and the forward model in estimating the optimal control signal. The paper presents the methodology for general nonlinear systems and is demonstrated on nonlinear single-input-single-output (SISO) and multiple-input-multiple-output (MIMO) examples. © 2006 IEEE.
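A minimal linear-Gaussian sketch of this idea follows; the scalar plant y = k·u + noise, the Gaussian inverse-model prior, and all numbers are illustrative assumptions, not the paper's general nonlinear formulation. Bayes' rule weights the inverse-model estimate and the forward-model likelihood by their respective precisions:

```python
def posterior_control(y_des, k, sigma_f, u_prior, sigma_u):
    """Posterior mean and variance over the control u given a desired
    output y_des, for the scalar plant y = k*u + N(0, sigma_f^2) and an
    inverse-model prior u ~ N(u_prior, sigma_u^2)."""
    prec = 1.0 / sigma_u ** 2 + k ** 2 / sigma_f ** 2   # posterior precision
    mean = (u_prior / sigma_u ** 2 + k * y_des / sigma_f ** 2) / prec
    return mean, 1.0 / prec

# A deterministic inverse would give u = y_des / k = 4.0; the posterior
# is pulled toward the uncertain inverse-model prior at 3.5.
u_mean, u_var = posterior_control(y_des=2.0, k=0.5, sigma_f=0.1,
                                  u_prior=3.5, sigma_u=1.0)
```

The point of the construction is visible here: uncertainty in both the inverse model (the prior) and the forward model (the likelihood) shapes the final control estimate, rather than the inverse model alone.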
Abstract:
The real purpose of collecting big data is to identify causality in the hope that this will facilitate credible predictivity. But the search for causality can trap one into infinite regress, and thus one takes refuge in seeking associations between variables in data sets. Regrettably, the mere knowledge of associations does not enable predictivity. Associations need to be embedded within the framework of probability calculus to make coherent predictions. This is so because associations are a feature of probability models, and hence they do not exist outside the framework of a model. Measures of association, like correlation, regression, and mutual information, merely refute a preconceived model. Estimated measures of association do not lead to a probability model; a model is the product of pure thought. This paper discusses these and other fundamentals that are germane to seeking associations in particular, and machine learning in general. ACM Computing Classification System (1998): H.1.2, H.2.4, G.3.
Abstract:
Book review: Heidelberg, Dordrecht, London, and New York, Springer, 2010, 189 pp., £93.55 (hardcover), ISBN 978-3-642-04330-7, e-ISBN 978-3-642-04331-4
Abstract:
The first essay developed a respondent model of Bayesian updating for a double-bound dichotomous choice (DB-DC) contingent valuation methodology. I demonstrated by way of data simulations that current DB-DC identifications of true willingness-to-pay (WTP) may often fail given this respondent Bayesian updating context. Further simulations demonstrated that a simple extension of current DB-DC identifications, derived explicitly from the Bayesian updating behavioral model, can correct for much of the WTP bias. Additional results provided caution against viewing respondents as acting strategically toward the second bid. Finally, an empirical application confirmed the simulation outcomes. The second essay applied a hedonic property value model to a unique water quality (WQ) dataset for a year-round, urban, and coastal housing market in South Florida, and found evidence that various WQ measures affect waterfront housing prices in this setting. However, the results indicated that this relationship is not consistent across any of the six particular WQ variables used, and is furthermore dependent upon the specific descriptive statistic employed to represent the WQ measure in the empirical analysis. These results continue to underscore the need to better understand both the WQ measure and the statistical form of it that homebuyers use in making their purchase decision. The third essay addressed a limitation of existing hurricane evacuation models by developing a dynamic model of hurricane evacuation behavior. A household's evacuation decision was framed as an optimal stopping problem in which, at every potential evacuation time period prior to actual hurricane landfall, the household's optimal choice is either to evacuate or to wait one more time period for a revised hurricane forecast.
A hypothetical two-period model of evacuation and a realistic multi-period model of evacuation that incorporates actual forecast and evacuation cost data for my designated Gulf of Mexico region were developed for the dynamic analysis. Results from the multi-period model were calibrated with existing evacuation timing data from a number of hurricanes. Given the calibrated dynamic framework, a number of policy questions that plausibly affect the timing of household evacuations were analyzed, and a deeper understanding of existing empirical outcomes in regard to the timing of the evacuation decision was achieved.
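The optimal-stopping structure described above can be sketched in a stylized two-period form. All costs, damages, and probabilities below are hypothetical illustrations, not the dissertation's calibrated Gulf of Mexico data:

```python
def evacuate_now(c_early, c_late, damage, p_hit):
    """Return True if evacuating in period 1 is no more costly in
    expectation than waiting for the revised period-2 forecast.

    If the revised forecast predicts a hit (probability p_hit), the
    household pays the cheaper of late evacuation or riding out the
    storm; with probability 1 - p_hit it pays nothing."""
    expected_cost_of_waiting = p_hit * min(c_late, damage)
    return c_early <= expected_cost_of_waiting

# Even a cheap early evacuation may not be worthwhile when the hit
# probability is low enough that waiting is cheaper in expectation.
wait = not evacuate_now(c_early=500, c_late=900, damage=20_000, p_hit=0.4)
```

A multi-period model iterates this comparison backward from landfall, with the forecast probability revised at each period, which is what makes the timing of the evacuation decision an optimal-stopping problem.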
Abstract:
This study investigated the utility of the Story Model for decision making at the jury level by examining the influence of evidence order and deliberation style on story consistency and guilt. Participants were shown a video-taped trial stimulus and then provided case perceptions, including a guilt judgment and a narrative about what occurred during the incident. Participants then deliberated for approximately thirty minutes using either an evidence-driven or verdict-driven deliberation style before again providing case perceptions, including a guilt determination, a narrative about what happened during the incident, and an evidence recognition test. Multi-level regression analyses revealed that evidence order, deliberation style, and sample interacted to influence both story consistency measures and guilt. Among students, participants in the verdict-driven deliberation condition formed more consistent pro-prosecution stories when the prosecution presented their case in story-order, while participants in the evidence-driven deliberation condition formed more consistent pro-prosecution stories when the defense's case was presented in story-order. Findings were the opposite among community members, with participants in the verdict-driven deliberation condition forming more consistent pro-prosecution stories when the defense's case was presented in story-order, and participants in the evidence-driven deliberation condition forming more consistent pro-prosecution stories when the prosecution's case was presented in story-order. Additionally, several story consistency measures influenced guilt decisions. Thus, there is some support for the hypothesis that story consistency mediates the influence of evidence order and deliberation style on guilt decisions.
Abstract:
We analyzed the dynamics of freshwater marsh vegetation of Taylor Slough in eastern Everglades National Park for the 1979 to 2003 period, focusing on cover of individual plant species and on cover and composition of marsh communities in areas potentially influenced by a canal pump station ("S332") and its successor station ("S332D"). Vegetation change analysis incorporated the hydrologic record at these sites for three intervals: pre-S332 (1961–1980), S332 (1980–1999), and post-S332 (1999–2002). During the S332 and post-S332 intervals, water level in Taylor Slough was affected by operations of S332 and S332D. To relate vegetation change to plot-level hydrological conditions in Taylor Slough, we developed a weighted averaging regression and calibration model (WA) using data from the marl prairies of Everglades National Park and Big Cypress National Preserve. We examined vegetation pattern along five transects. Transects 1–3 were established in 1979 south of the water delivery structures, and were influenced by their operations. Transects 4 and 5 were established in 1997, the latter west of these structures and possibly under their influence. Transect 4 was established in the northern drainage basin of Taylor Slough, beyond the likely zones of influence of S332 and S332D. The composition of all three southern transects changed similarly after 1979. Where muhly grass (Muhlenbergia capillaris var. filipes) was once dominant, sawgrass (Cladium jamaicense) replaced it, while where sawgrass initially predominated, hydric species such as spikerush (Eleocharis cellulosa Torr.) overtook it. Most of the changes in species dominance in Transects 1–3 occurred after 1992, were mostly in place by 1995–1996, and continued through 1999, indicating how rapidly vegetation in seasonal Everglades marshes can respond to hydrological modifications. During the post-S332 period, these long-term trends began reversing.
In the two northern transects, total cover and dominance of both muhly grass and sawgrass increased from 1997 to 2003. Thus, during the 1990s, vegetation composition south of S332 became more like that of long-hydroperiod marshes, but afterward it partially returned to its 1979 condition, i.e., a community characteristic of less prolonged flooding. In contrast, the vegetation change along the two northern transects since 1997 showed little relationship to hydrologic status.
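A weighted-averaging (WA) regression and calibration model of the kind used above can be sketched briefly. In WA regression, each species' optimum is the abundance-weighted mean of an environmental variable across sites; in WA calibration, a site's inferred value is the abundance-weighted mean of the optima of the species present. The abundance matrix and hydroperiod values below are invented for illustration, not the Everglades data:

```python
def wa_optima(abundance, env):
    """WA regression: abundance[site][species] and env[site] ->
    abundance-weighted optimum for each species."""
    n_species = len(abundance[0])
    optima = []
    for s in range(n_species):
        num = sum(row[s] * e for row, e in zip(abundance, env))
        den = sum(row[s] for row in abundance)
        optima.append(num / den)
    return optima

def wa_calibrate(site_abundance, optima):
    """WA calibration: infer a site's environmental value from the
    abundance-weighted mean of the species optima."""
    return (sum(a * o for a, o in zip(site_abundance, optima))
            / sum(site_abundance))

abund = [[5, 1], [2, 4], [0, 6]]   # two species at three sites (made up)
hydro = [100, 200, 300]            # hydroperiod in days (made up)
opt = wa_optima(abund, hydro)      # per-species hydroperiod optima
inferred = wa_calibrate([3, 3], opt)
```

A species concentrated at long-hydroperiod sites receives a high optimum, so a shift in community composition toward such species raises a plot's inferred hydroperiod, which is how vegetation change is related to hydrologic change.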
Abstract:
This report examines the interaction between hydrology and vegetation over a 10-year period, between 2001/02 and 2012, within six permanent tree island plots (two plots per tree island) located on three tree islands along a hydrologic and productivity gradient and established in 2001/02. We hypothesize that: (H1) hydrologic differences within plots between census dates will result in marked differences in a) tree and sapling densities, b) tree basal area, and c) forest structure, i.e., canopy volume and height; and (H2) tree island growth, development, and succession are dependent on hydrologic fluxes, particularly during periods of prolonged drought or below-average hydroperiods.
Abstract:
Advances in three related areas, state-space modeling, sequential Bayesian learning, and decision analysis, are addressed, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analytical and computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas/problems and the key idea of emulation in those areas. Chapter 2 discusses the sequential analysis of latent threshold models with the use of emulating models that allow for analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, or the synthetic model, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network that relies on emulating the whole, dependent network model by independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews those advances and makes concluding remarks.
Abstract:
In Western industrialized countries, it is well established that legally competent individuals may choose a surrogate healthcare decision-maker to represent their interests should they lose the capacity to do so themselves. There are few limitations on who they may select to fulfill this function. However, many jurisdictions restrict or prohibit the patient's attending physician or another provider involved with an individual's care from serving in this role. Several authors have previously suggested that respect for the autonomy of patients requires that there be few (if any) constraints on whomever they may appoint as a proxy. In this essay we revisit this topic by first providing a survey of current state laws governing this activity. We then analyze the clinical and ethical circumstances in which potential difficulties could arise. We take a more nuanced and circumspect view of prior suggestions that patients should have virtually unfettered liberty to choose their healthcare proxies. We suggest a strategy to balance patients' freedom to choose their surrogates with the fiduciary duty of the state as regulator of medical practice. We identify six domains of possible concern with such relationships and suggest straightforward methods of mitigating their potential negative effects that could plausibly be incorporated into physician practice.