906 results for R15 - Econometric and Input Output Models
Abstract:
In this paper, we propose a speech recognition engine using a hybrid model of a Hidden Markov Model (HMM) and a Gaussian Mixture Model (GMM). Both models are trained independently, and their respective likelihood values are combined and passed to a decision logic that outputs a net likelihood. The hybrid model has been compared with the HMM alone. Training and testing have been carried out on a database of 20 Hindi words spoken by 80 different speakers. The recognition rate achieved by the standard HMM is 83.5%, which increases to 85% with the hybrid HMM-GMM approach.
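As a rough illustration of the decision logic described above (not the authors' implementation), the sketch below combines per-word log-likelihoods from independently trained HMM and GMM recognizers; the weighting parameter alpha and the example scores are assumptions.

    # Hypothetical combination of independently computed HMM and GMM scores.
    def hybrid_decision(hmm_loglik, gmm_loglik, alpha=0.5):
        """Return the word with the highest weighted net log-likelihood.
        hmm_loglik, gmm_loglik: dicts mapping word -> log-likelihood."""
        net = {w: alpha * hmm_loglik[w] + (1 - alpha) * gmm_loglik[w]
               for w in hmm_loglik}
        return max(net, key=net.get)

    # Made-up scores for three words from the vocabulary
    hmm = {"ek": -120.4, "do": -118.9, "teen": -125.1}
    gmm = {"ek": -98.7, "do": -101.2, "teen": -97.5}
    print(hybrid_decision(hmm, gmm))   # -> "ek" for these illustrative scores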
Abstract:
We show theoretically and experimentally a mechanism behind the emergence of wide or bimodal protein distributions in biochemical networks with nonlinear input-output characteristics (the dose-response curve) and variability in protein abundance. Large cell-to-cell variation in the nonlinear dose-response characteristics can be beneficial to facilitate two distinct groups of response levels as opposed to a graded response. Under the circumstances that we quantify mathematically, the two distinct responses can coexist within a cellular population, leading to the emergence of a bimodal protein distribution. Using flow cytometry, we demonstrate the appearance of wide distributions in the hypoxia-inducible factor-mediated response network in HCT116 cells. With the help of our theoretical framework, we perform a novel calculation of the magnitude of cell-to-cell heterogeneity in the dose-response obtained experimentally.
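The mechanism can be illustrated numerically (a toy sketch under assumed parameters, not the paper's model): when the per-cell dose-response is steep and its threshold varies from cell to cell, a single intermediate dose splits the population into low and high responders.

    import numpy as np

    rng = np.random.default_rng(0)
    dose = 1.0                                           # one intermediate dose
    K = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)  # cell-to-cell thresholds
    n = 10                                               # steep (nonlinear) response
    response = dose**n / (K**n + dose**n)                # Hill-type dose-response per cell

    # The population splits into low and high responders rather than a graded middle:
    print(f"low responders:  {np.mean(response < 0.2):.2f}")
    print(f"high responders: {np.mean(response > 0.8):.2f}")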
Abstract:
We report a distributed multifunctional fiber sensing network based on weak fiber Bragg gratings (WFBGs) and a long-period fiber grating (LPG) assisted OTDR system. The WFBGs are used for temperature, strain, and vibration monitoring at key positions, and the LPG acts as a linear filter that converts the wavelength shift of the WFBGs caused by environmental changes into a power change. Simulation results show that it is possible to integrate more than 4472 WFBGs in the system when the reflectivity of the WFBGs is less than 10^-5. In addition, the backward Rayleigh scattering along the whole fiber can also be detected, which makes distributed bend sensing possible. As an experimental demonstration, we used three WFBGs UV-inscribed at 50-m intervals near the end of a 2.6-km-long fiber; these gratings were subjected to temperature, strain, and vibration sensing, respectively. The ratio of the output and input light intensities is used for temperature and strain sensing, and the results show strain and temperature sensitivities of 4.2 × 10^-4 /με and 5.9 × 10^-3 /°C, respectively. Detection of both single and multiple vibrations over a broad frequency band up to 500 Hz is also achieved. Distributed bend sensing, which can be realized simultaneously in this system, is also proposed.
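For a sense of scale, the quoted sensitivities can be inverted to recover the measurand from a measured change in the output/input intensity ratio; the linear demodulation form below is an assumption made for illustration, not the authors' calibration procedure.

    STRAIN_SENS = 4.2e-4    # ratio change per microstrain (from the abstract)
    TEMP_SENS = 5.9e-3      # ratio change per degree Celsius (from the abstract)

    def strain_from_ratio_change(delta_ratio):
        return delta_ratio / STRAIN_SENS            # microstrain

    def temperature_from_ratio_change(delta_ratio):
        return delta_ratio / TEMP_SENS              # degrees Celsius

    print(strain_from_ratio_change(0.021))          # ~50 microstrain
    print(temperature_from_ratio_change(0.059))     # ~10 degC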
Abstract:
2000 Mathematics Subject Classification: 62H12, 62P99
Abstract:
A brief introduction to the theory of differential inclusions, viability theory, and selections of set-valued mappings is presented. As an application, the implicit scheme of the Leontief dynamic input-output model is considered.
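For orientation, the dynamic Leontief model balances output against intermediate use, capital formation, and final demand, x_t = A x_t + B(x_{t+1} - x_t) + c_t. The sketch below steps this relation forward under the simplifying assumption that the capital matrix B is invertible; when B is singular the relation only defines a set-valued map, which is where differential inclusions and selections enter. All matrices are illustrative.

    import numpy as np

    A = np.array([[0.2, 0.3], [0.1, 0.4]])   # input (technical) coefficients, made up
    B = np.array([[0.5, 0.1], [0.2, 0.6]])   # capital coefficients, made up
    c = np.array([10.0, 20.0])               # final demand, held constant here
    x = np.array([100.0, 120.0])             # initial sectoral outputs

    for _ in range(5):
        # x_t = A x_t + B (x_{t+1} - x_t) + c_t  =>  B x_{t+1} = (I - A + B) x_t - c_t
        x = np.linalg.solve(B, (np.eye(2) - A + B) @ x - c)
        print(np.round(x, 2))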
Abstract:
Small errors can prove catastrophic. Our purpose is to remark that a very small cause that escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former will produce an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test, and the actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it places those inputs as close as possible to the actual switching points and guarantees that the device will meet the input-output specifications. Prediction becomes impossible by classical analytical analysis bounded by Newton and Euclid. We have found that nonlinear dynamic behavior is the natural state of all circuits and devices, and opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays a set of linear limits is established around every aspect of digital or analog circuits, outside of which devices are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as revealed by our Ph.D. research. In practice, for standard linear informational methodologies this chaotic data is usually undesirable, and we are educated to be interested in obtaining a more regular stream of output data. This Ph.D. research explored the possibility of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detection instrument able to put together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system-status determination.
Abstract:
Geochemical and geophysical approaches have been used to investigate the freshwater and saltwater dynamics in the coastal Biscayne Aquifer and Biscayne Bay. Stable isotopes of oxygen and hydrogen and concentrations of Sr2+ and Ca2+ were combined in two geochemical mixing models to provide estimates of the various freshwater inputs (precipitation, canal water, and groundwater) to Biscayne Bay and the coastal canal system in South Florida. Shallow geophysical electromagnetic and direct current resistivity surveys were used to image the geometry and stratification of the saltwater mixing zone in the near-coastal (less than 1 km inland) Biscayne Aquifer. The combined stable isotope and trace metal models suggest a canal input-precipitation-groundwater ratio of 38%–52%–10% in the wet season and 37%–58%–5% in the dry season, with an error of 25%, of which most (20%) was attributed to the isotope regression model and the remaining 5% to the Sr2+/Ca2+ mixing model. These models suggest rainfall is the dominant source of freshwater to Biscayne Bay. For a bay-wide water budget that includes saltwater and freshwater mixing, fresh groundwater accounts for less than 2% of the total input. A similar Sr2+/Ca2+ tracer model indicates precipitation is the dominant source in 9 out of 10 canals that discharge into Biscayne Bay. The two-component mixing model converged for 100% of the freshwater canal samples in this study, with 63% of the water in the canals coming from precipitation and 37% from groundwater inputs (±4%). There was a seasonal shift from 63% precipitation input in the dry season to 55% precipitation input in the wet season. The three end-member mixing model converged for only 60% of the saline canal samples, possibly due to non-conservative behavior of Sr2+ and Ca2+ in saline groundwater discharging into the canal system. Electromagnetic and direct current resistivity surveys were successful at locating and estimating the geometry and depth of the freshwater/saltwater interface in the Biscayne Aquifer at two near-coastal sites. A saltwater interface that deepened as the survey moved inland was detected, with a maximum interpreted depth to the interface of 15 meters approximately 0.33 km inland from the shoreline.
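The mixing calculations referred to above follow the standard conservative-tracer form; the sketch below shows the generic two-component and three end-member versions with invented end-member values (not the study's data).

    import numpy as np

    def two_endmember_fraction(c_sample, c_a, c_b):
        """Fraction of end-member A in a sample, from one conservative tracer."""
        return (c_sample - c_b) / (c_a - c_b)

    # e.g. precipitation (A) vs. groundwater (B) from a Sr2+-like tracer
    print(two_endmember_fraction(c_sample=2.0, c_a=0.5, c_b=4.0))   # ~0.57

    def three_endmember_fractions(sample, endmembers):
        """Solve sample = endmembers @ f with sum(f) = 1, using two tracers.
        sample: length-2 tracer vector; endmembers: 2x3 array, one column per
        source (e.g. precipitation, canal water, groundwater)."""
        A = np.vstack([endmembers, np.ones(3)])
        b = np.append(sample, 1.0)
        return np.linalg.solve(A, b)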
Abstract:
The purpose of this study is to explore the accuracy of the Input-Output model in quantifying the impacts of the 2007 economic crisis on a local tourism industry and economy. Though the model has been used in tourism impact analysis, its estimation accuracy is rarely verified empirically. The Metro Orlando area in Florida is investigated as an empirical study, and the negative change in visitor expenditure between 2007 and 2008 is taken as the direct shock. The total impacts are assessed in terms of output and employment and are compared with the actual data. This study finds surprisingly large discrepancies between the estimated and actual results, and the Input-Output model appears to overestimate the negative impacts. By investigating the local economic activities during the study period, this study makes some exploratory efforts to explain these discrepancies. Theoretical and practical implications are then suggested.
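For readers unfamiliar with the mechanics, the input-output impact assessment being tested propagates a final-demand shock through the Leontief inverse; the three-sector coefficients and the shock vector below are purely illustrative.

    import numpy as np

    A = np.array([[0.10, 0.25, 0.05],          # technical (input) coefficients
                  [0.20, 0.05, 0.15],
                  [0.05, 0.10, 0.10]])
    delta_final_demand = np.array([-50.0, -10.0, 0.0])  # e.g. drop in visitor spending

    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    delta_output = leontief_inverse @ delta_final_demand
    print(np.round(delta_output, 1))            # total (direct + indirect) output change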
Abstract:
An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically-estimated spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), which is produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), using observed station data and elevationally-adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available or for simulating hydrology under past or future climates.
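The elevational correction described above amounts to adjusting each coarse grid-cell value with a locally and seasonally varying lapse rate; a minimal sketch follows, with an assumed lapse-rate value standing in for the monthly PRISM-derived estimates.

    def lapse_rate_correction(value_rcm, elev_rcm_cell, elev_target, lapse_rate):
        """Adjust an RCM grid-cell value (e.g. monthly temperature) to a target
        elevation using a local topographic lapse rate (units per meter)."""
        return value_rcm + lapse_rate * (elev_target - elev_rcm_cell)

    # Illustrative January temperature correction with an assumed -6.5e-3 degC/m rate
    print(lapse_rate_correction(value_rcm=4.0, elev_rcm_cell=1200.0,
                                elev_target=2100.0, lapse_rate=-6.5e-3))   # ~ -1.85 degC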
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for downscaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or a Bayesian variable selection method, calculates the "median" feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
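A compressed sketch of this idea (a stand-in, not the thesis's code: a sequential loop replaces the parallel workers, sklearn's lasso replaces whichever selector is used, and the penalty level is arbitrary):

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    def message_like(subsets, n_features, alpha=0.1):
        # 1) feature selection on each subset (run in parallel in practice)
        inclusion = np.zeros((len(subsets), n_features))
        for i, (X, y) in enumerate(subsets):
            inclusion[i] = Lasso(alpha=alpha).fit(X, y).coef_ != 0
        # 2) keep features whose median inclusion indicator across subsets is 1
        selected = np.where(np.median(inclusion, axis=0) >= 0.5)[0]
        if selected.size == 0:
            return selected, np.array([])
        # 3) refit on the selected features per subset and average the estimates
        coefs = np.array([LinearRegression().fit(X[:, selected], y).coef_
                          for X, y in subsets])
        return selected, coefs.mean(axis=0)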
While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
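A rough sketch of the decorrelation step as I read it (the exact scaling and ridge term r are assumptions here, and a loop stands in for the distributed workers): the data are premultiplied by a whitening-type matrix built from the full X X^T before each worker fits its own block of features.

    import numpy as np
    from scipy.linalg import sqrtm
    from sklearn.linear_model import Lasso

    def deco_like(X, y, feature_blocks, r=1.0, alpha=0.1):
        n, p = X.shape
        # decorrelate: premultiply by sqrt(p) * (X X^T + r I)^(-1/2)
        F = np.sqrt(p) * np.real(sqrtm(np.linalg.inv(X @ X.T + r * np.eye(n))))
        X_tilde, y_tilde = F @ X, F @ y
        coef = np.zeros(p)
        for block in feature_blocks:           # one distributed worker per block
            coef[block] = Lasso(alpha=alpha).fit(X_tilde[:, block], y_tilde).coef_
        return coef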
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), by leveraging both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation for simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
Abstract:
The distribution, abundance, behaviour, and morphology of marine species are affected by spatial variability in the wave environment. Maps of wave metrics (e.g. significant wave height Hs, peak energy wave period Tp, and benthic wave orbital velocity URMS) are therefore useful for predictive ecological models of marine species and ecosystems. A number of techniques are available to generate maps of wave metrics, with varying levels of complexity in terms of input data requirements, operator knowledge, and computation time. Relatively simple "fetch-based" models are generated using geographic information system (GIS) layers of bathymetry and dominant wind speed and direction. More complex, but computationally expensive, "process-based" models are generated using numerical models such as the Simulating Waves Nearshore (SWAN) model. We generated maps of wave metrics based on both fetch-based and process-based models and asked whether predictive performance in models of benthic marine habitats differed. Predictive models of seagrass distribution for Moreton Bay, Southeast Queensland, and Lizard Island, Great Barrier Reef, Australia, were generated using maps based on each type of wave model. For Lizard Island, performance of the process-based wave maps was significantly better for describing the presence of seagrass, based on Hs, Tp, and URMS. Conversely, for the predictive model of seagrass in Moreton Bay, based on benthic light availability and Hs, there was no difference in performance using the maps of the different wave metrics. For predictive models where wave metrics are the dominant factor determining ecological processes, it is recommended that process-based models be used. Our results suggest that for models where wave metrics provide secondarily useful information, either fetch- or process-based models may be equally useful.
Abstract:
The real-time optimization of large-scale systems is a difficult problem due to the need for complex models involving uncertain parameters and the high computational cost of solving such problems by a decentralized approach. Extremum-seeking control (ESC) is a model-free real-time optimization technique that can estimate unknown parameters and optimize nonlinear time-varying systems using only a measurement of the cost function to be minimized. In this thesis, we develop a distributed version of extremum-seeking control which allows large-scale systems to be optimized without models and with minimal computing power. First, we develop a continuous-time distributed extremum-seeking controller. It has three main components: consensus, parameter estimation, and optimization. The consensus provides each local controller with an estimate of the cost to be minimized, allowing the controllers to coordinate their actions. Using this cost estimate, parameters for a local input-output model are estimated, and the cost is minimized by following a gradient descent based on the estimated gradient. Next, a similar distributed extremum-seeking controller is developed in discrete time. Finally, we consider an interesting application of distributed ESC: formation control of high-altitude balloons for high-speed wireless internet. These balloons must be steered into a favourable formation in which they are spread out over the Earth and provide coverage to the entire planet. Distributed ESC is applied to this problem and is shown to be effective for a system of 1200 balloons subjected to realistic wind currents. The approach does not require a wind model and uses a cost function based on a Voronoi partition of the sphere. Distributed ESC is able to steer balloons from a few initial launch sites into a formation which provides coverage to the entire Earth and can maintain a similar formation as the balloons move with the wind around the Earth.
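A toy sketch of the extremum-seeking idea referred to above (single agent, scalar input, all gains invented for illustration): perturb the input with a sinusoidal dither, demodulate the measured cost to estimate the gradient direction, and descend it using cost measurements only.

    import math

    def measured_cost(u):          # unknown to the controller; minimum at u = 3
        return (u - 3.0) ** 2 + 1.0

    u, a, omega, k, dt = 0.0, 0.5, 20.0, 0.02, 0.005
    for i in range(100_000):       # 500 s of simulated time, forward Euler
        t = i * dt
        y = measured_cost(u + a * math.sin(omega * t))
        u -= dt * (2.0 * k / a) * y * math.sin(omega * t)   # averages to ~ -2k (u - 3)
    print(round(u, 2))             # settles near the unknown minimizer u = 3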
Abstract:
Ground-source heat pump (GSHP) systems represent one of the most promising techniques for heating and cooling in buildings. These systems use the ground as a heat source/sink, allowing better efficiency thanks to the small variation of the ground temperature across the seasons. The ground-source heat exchanger (GSHE) thus becomes a key component for optimizing the overall performance of the system. Moreover, the short-term response related to the dynamic behaviour of the GSHE is a crucial aspect, especially from a regulation perspective in on/off-controlled GSHP systems. In this context, a novel numerical GSHE model has been developed at the Instituto de Ingeniería Energética, Universitat Politècnica de València. Based on the decoupling of the short-term and long-term responses of the GSHE, the novel model allows the use of faster and more precise models on both sides. In particular, the short-term model considered is the B2G model, developed and validated in previous research conducted at the Instituto de Ingeniería Energética. For the long-term response, the g-function model was selected, since it is a previously validated and widely used model and presents some interesting features that are useful for its combination with the B2G model. The aim of the present paper is to describe the procedure of combining these two models in order to obtain a single complete GSHE model for both short- and long-term simulation. The resulting model is then validated against experimental data from a real GSHP installation.