930 results for Input-output data


Relevance:

90.00%

Publisher:

Abstract:

This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
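As an illustration of the kind of XML document the abstract describes, the sketch below serialises a Gaussian uncertainty description in an UncertML-like dialect; the namespace URI and element names are assumptions made for illustration, not the actual UncertML schema.

```python
# Sketch: serialising a Gaussian uncertainty description in an UncertML-like XML
# dialect. Namespace URI and element names are illustrative assumptions only.
import xml.etree.ElementTree as ET

UN = "http://www.uncertml.org/2.0"  # assumed namespace URI
ET.register_namespace("un", UN)

def gaussian_xml(mean: float, variance: float) -> str:
    """Build a hypothetical <un:NormalDistribution> element."""
    dist = ET.Element(f"{{{UN}}}NormalDistribution")
    ET.SubElement(dist, f"{{{UN}}}mean").text = str(mean)
    ET.SubElement(dist, f"{{{UN}}}variance").text = str(variance)
    return ET.tostring(dist, encoding="unicode")

# e.g. a sensor temperature reading of 21.3 degC with variance 0.25
print(gaussian_xml(21.3, 0.25))
```

Such a fragment could, in principle, be embedded in an O&M observation or passed as WPS input/output, which is the integration pattern the thesis describes.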

Relevance:

90.00%

Publisher:

Abstract:

The need for low bit-rate speech coding is the result of growing demand on the available radio bandwidth for mobile communications, both for military purposes and for the public sector. To meet this growing demand, the available bandwidth must be utilized in the most economic way to accommodate more services. Two low bit-rate speech coders were built and tested in this project. The two coders combine predictive coding with delta modulation, a combination that enables them to meet the low bit-rate and good speech-quality requirements simultaneously. To enhance their efficiency, the predictor coefficients and the quantizer step size are updated periodically in each coder. This enables the coders to track changes in the characteristics of the speech signal over time and changes in the dynamic range of the speech waveform. However, the two coders differ in how they update their predictor coefficients. One updates the coefficients once every one hundred sampling periods and extracts them from the input speech samples; it is referred to in this project as the Forward Adaptive Coder. Since the coefficients are extracted from the input speech samples, they must be transmitted to the receiver to reconstruct the transmitted speech, thus adding to the transmission bit rate. The other updates its coefficients every sampling period, based on information from the output data; this coder is known as the Backward Adaptive Coder. Results of subjective tests showed both coders to be reasonably robust to quantization noise. Both were graded quite good, with the Forward Adaptive coder performing slightly better than its Backward counterpart, but at a slightly higher transmission bit rate for the same speech quality. The coders yielded acceptable speech quality at 9.6 kbps for the Forward Adaptive coder and 8 kbps for the Backward Adaptive coder.
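A minimal sketch of the backward-adaptive idea follows: the quantizer step size is adapted from the transmitted bit stream alone, so the decoder can track it without side information. The adaptation rule and parameters below are generic illustrations, not the coders built in this project, and the predictor itself is omitted.

```python
# Minimal sketch of backward-adaptive delta modulation: the step size grows after a
# run of identical bits (slope overload) and shrinks otherwise. Parameters illustrative.
import numpy as np

def adm_encode(x, step=0.05, grow=1.5, shrink=0.66, run=3):
    bits, est, history = [], 0.0, []
    for sample in x:
        bit = 1 if sample >= est else 0
        bits.append(bit)
        history.append(bit)
        if len(history) >= run and len(set(history[-run:])) == 1:
            step *= grow          # consecutive identical bits: track faster
        else:
            step *= shrink        # alternating bits: reduce granular noise
        step = min(max(step, 1e-4), 1.0)
        est += step if bit else -step
    return np.array(bits)

# Example: encode 5 ms of a 200 Hz tone sampled at 8 kHz
t = np.arange(0, 0.005, 1 / 8000)
print(adm_encode(np.sin(2 * np.pi * 200 * t)))
```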

Relevance:

90.00%

Publisher:

Abstract:

Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during column transients, with the aid of a computer-based online data-logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and the backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimising the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those measured experimentally, and very good agreement was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled both as multi-loop decentralised SISO (Single Input Single Output) control and as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The Rotor speed-Raffinate concentration and Solvent flowrate-Extract concentration loops showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interaction, time delays, and input-output variable constraints.
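For the pairing analysis mentioned above, the relative gain array is computed from the steady-state gain matrix as Λ = G ⊙ (G⁻¹)ᵀ. The sketch below shows only that calculation on hypothetical gains; the thesis's identified gain values are not reproduced here.

```python
# Relative gain array (RGA) for a 2x2 process: Lambda = G * inv(G).T, elementwise.
# Gain values are hypothetical placeholders, not the column's identified gains.
import numpy as np

G = np.array([[ 1.8, -0.6],   # agitator speed  -> (raffinate, extract) concentration
              [ 0.4,  2.1]])  # solvent flowrate -> (raffinate, extract) concentration

rga = G * np.linalg.inv(G).T
print(rga)               # diagonal elements near 1 favour the diagonal pairing
print(rga.sum(axis=1))   # each row of an RGA sums to 1
```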

Relevance:

90.00%

Publisher:

Abstract:

DEA literature continues apace but software has lagged behind. This session uses suitably selected data to present newly developed software which includes many of the most recent DEA models. The software enables the user to address a variety of issues not frequently found in existing DEA software, such as:
- Assessments under a variety of possible assumptions of returns to scale, including NIRS and NDRS;
- Scale elasticity computations;
- Numerous input/output variables and a truly unlimited number of assessment units (DMUs);
- Panel data analysis;
- Analysis of categorical data (multiple categories);
- Malmquist Index and its decompositions;
- Computation of super-efficiency;
- Automated removal of super-efficient outliers under user-specified criteria;
- Graphical presentation of results;
- Integrated statistical tests.
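None of the listed models is reproduced here, but as a hedged illustration of the basic building block such software solves repeatedly, the input-oriented CCR envelopment LP for a single DMU can be written with scipy as below; the data are made up and the returns-to-scale variants, Malmquist computations and outlier handling from the list are omitted.

```python
# Input-oriented CCR envelopment model for one DMU, solved as an LP:
#   min theta  s.t.  X @ lam <= theta * x_k,  Y @ lam >= y_k,  lam >= 0.
# Toy data only: 2 inputs, 1 output, 3 DMUs (columns).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0],
              [5.0, 2.0, 6.0]])
Y = np.array([[3.0, 5.0, 6.0]])

def ccr_efficiency(k):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # decision variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, [k]], X]                 # X @ lam - theta * x_k <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]         # -Y @ lam <= -y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                              # efficiency score theta in (0, 1]

print([round(ccr_efficiency(k), 3) for k in range(X.shape[1])])
```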

Relevance:

90.00%

Publisher:

Abstract:

Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g. carbon emissions) increase as well, making the performance evaluation questionable. In addition, traditional environmental efficiency has typically been measured with crisp input and output data (desirable and undesirable). However, the input and output data in real-world evaluation problems, such as CO2 emissions, are often imprecise or ambiguous. This paper proposes a DEA-based framework where the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application to energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
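As a hedged sketch of how fuzzy data can be handled at "different levels of certainty", the α-cut of a triangular fuzzy number reduces it to an interval; asymmetry simply means the left and right spreads differ. The numbers below are illustrative, not the paper's data, and the full fuzzy DEA model is not reproduced.

```python
# Alpha-cut of a triangular fuzzy number (a, b, c): the interval
# [a + alpha*(b - a), c - alpha*(c - b)]. Asymmetric spreads (b - a != c - b)
# correspond to the asymmetrical fuzzy inputs/outputs discussed above.
def alpha_cut(a, b, c, alpha):
    return (a + alpha * (b - a), c - alpha * (c - b))

# e.g. an imprecise CO2 emission figure: "about 100, somewhere between 90 and 125"
for alpha in (0.0, 0.5, 1.0):
    print(alpha, alpha_cut(90.0, 100.0, 125.0, alpha))
```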

Relevance:

90.00%

Publisher:

Abstract:

Small errors proved catastrophic. Our purpose is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former will produce an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that there exists a pair of test conditions that define a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device tested, while the actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it puts those inputs as close as possible to the actual switching critical points and guarantees that the device will meet its input-output specifications. Prediction becomes impossible by classical analytical methods bounded by Newton and Euclid. We have found that nonlinear dynamic behaviour is the natural state of all circuits and devices, and that opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays there is a set of linear limits established around every aspect of a digital or analog circuit, outside of which devices are considered bad after failing the test. A deterministic chaos circuit is a fact, not a possibility, as has been revealed by our Ph.D. research. In practice, for standard linear informational methodologies, this chaotic data product is usually undesirable, and we are educated to be interested in obtaining a more regular stream of output data. This Ph.D. research explored the possibility of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detector instrument able to put together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system-status determination.
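The sensitivity described above is easy to reproduce in a minimal sketch (not taken from the dissertation): two trajectories of the logistic map that start 10⁻⁹ apart diverge to completely different values within a few dozen iterations.

```python
# Sensitive dependence on initial conditions: two logistic-map trajectories
# started 1e-9 apart become uncorrelated after a few dozen iterations.
def logistic(x0, r=3.9, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic(0.400000000)
b = logistic(0.400000001)
for i in (0, 10, 25, 50):
    print(i, abs(a[i] - b[i]))   # the gap grows from 1e-9 to order 1
```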

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this study is to explore the accuracy of the Input-Output model in quantifying the impacts of the 2007 economic crisis on a local tourism industry and economy. Though the model has been used in tourism impact analysis, its estimation accuracy is rarely verified empirically. The Metro Orlando area in Florida is investigated as an empirical case, and the negative change in visitor expenditure between 2007 and 2008 is taken as the direct shock. The total impacts are assessed in terms of output and employment and are compared with the actual data. This study finds surprisingly large discrepancies between the estimated and actual results, and the Input-Output model appears to overestimate the negative impacts. By investigating the local economic activities during the study period, this study makes some exploratory efforts to explain these discrepancies. Theoretical and practical implications are then suggested.
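The mechanics behind such an Input-Output impact estimate is the Leontief inverse: the total output change equals (I − A)⁻¹ applied to the direct demand shock. The coefficient matrix and shock below are hypothetical and only illustrate the calculation whose accuracy the study evaluates.

```python
# Input-Output impact calculation: total_impact = (I - A)^-1 @ direct_shock,
# where A is the matrix of technical coefficients. Hypothetical 3-sector example.
import numpy as np

A = np.array([[0.10, 0.15, 0.05],   # tourism, retail, other (illustrative values only)
              [0.20, 0.10, 0.10],
              [0.05, 0.25, 0.15]])
direct_shock = np.array([-100.0, 0.0, 0.0])   # e.g. a drop in visitor expenditure

leontief_inverse = np.linalg.inv(np.eye(3) - A)
total_impact = leontief_inverse @ direct_shock
print(total_impact, total_impact.sum())   # total (direct + indirect) output change
```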

Relevance:

90.00%

Publisher:

Abstract:

The primary purpose of this thesis was to present a theoretical large-signal analysis to study the power gain and efficiency of a microwave power amplifier for LS-band communications using software simulation. Power gain, efficiency, reliability, and stability are important characteristics in the power amplifier design process. These characteristics affect advanced wireless systems, which require low-cost device amplification without sacrificing system performance. Large-signal modeling and input and output matching components are used in this thesis. Motorola's Electro Thermal LDMOS model is a new transistor model that includes self-heating effects and is capable of small- and large-signal simulations. It allows most of the design considerations to focus on stability, power gain, bandwidth, and DC requirements. The matching technique allows the gain to be maximized at a specific target frequency. Calculations for the microwave power amplifier design were performed in Matlab, and simulations were carried out in Microwave Office, the simulation software used in this thesis. The study demonstrated that Motorola's Electro Thermal LDMOS transistor is a viable solution for common-source amplifier applications in high-power base stations. The MET-LDMOS met the stability requirements for the specified frequency range without a stability-improvement model. The power gain of the amplifier circuit was improved through proper microwave matching design using input/output matching techniques. The gain and efficiency of the amplifier improved by approximately 4 dB and 7.27%, respectively. The gain value is roughly 0.89 dB higher than the maximum gain specified in the MRF21010 data sheet. This work can lead to efficient modeling and development of high-power LDMOS transistor implementations in commercial and industrial applications.
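The abstract does not give the S-parameters used, but the unconditional-stability check it refers to is commonly expressed through Rollett's K factor; the sketch below shows that standard computation on made-up values, not the thesis's measured data.

```python
# Rollett stability factor for a two-port from S-parameters:
#   K = (1 - |S11|^2 - |S22|^2 + |D|^2) / (2 |S12 * S21|),  D = S11*S22 - S12*S21.
# K > 1 together with |D| < 1 implies unconditional stability. Values below are made up.
import numpy as np

S11 = 0.62 * np.exp(1j * np.deg2rad(-150))
S12 = 0.05 * np.exp(1j * np.deg2rad(30))
S21 = 4.10 * np.exp(1j * np.deg2rad(95))
S22 = 0.48 * np.exp(1j * np.deg2rad(-70))

D = S11 * S22 - S12 * S21
K = (1 - abs(S11) ** 2 - abs(S22) ** 2 + abs(D) ** 2) / (2 * abs(S12 * S21))
print(f"K = {K:.2f}, |D| = {abs(D):.2f}")   # unconditionally stable if K > 1 and |D| < 1
```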

Relevance:

90.00%

Publisher:

Abstract:

An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically-estimated spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), which is produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), using observed station data and elevationally-adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available or for simulating hydrology under past or future climates.
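A minimal sketch of the elevation correction described above follows; the linear form (model value plus monthly lapse rate times the elevation difference) is consistent with the text, but the function, variable names and lapse-rate values are assumptions, not the authors' code or the PRISM-derived rates.

```python
# Topographic lapse-rate downscaling: shift a coarse climate-model temperature by the
# month-specific local lapse rate times the elevation difference. Values are illustrative.
import numpy as np

# assumed monthly temperature lapse rates (degC per km), e.g. from PRISM-style regressions
lapse_rate_km = np.array([-5.2, -5.5, -6.0, -6.5, -6.8, -7.0,
                          -6.9, -6.7, -6.3, -5.9, -5.4, -5.1])

def downscale_temperature(t_model_c, z_model_m, z_target_m, month):
    """Elevationally correct a model temperature (month is 1-12)."""
    dz_km = (z_target_m - z_model_m) / 1000.0
    return t_model_c + lapse_rate_km[month - 1] * dz_km

# e.g. a NARR grid cell at 1,200 m applied to a basin point at 2,400 m in July
print(downscale_temperature(18.0, 1200.0, 2400.0, month=7))
```

The corrected series would then be fed to the hydrologic model (SWAT in the study) in place of the raw NARR or NARCCAP values.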

Relevance:

90.00%

Publisher:

Abstract:


In order to predict the compressive strength of geopolymers prepared from alumina-silica natural products, based on the effects of the Al2O3/SiO2, Na2O/Al2O3, Na2O/H2O, and Na/[Na+K] ratios, more than 50 data records were gathered from the literature. The data were used to train and test a multilayer artificial neural network (ANN). A multilayer feedforward network was therefore designed with the chemical compositions of the alumina silicate and alkali activators as inputs and compressive strength as output. In this study, feedforward networks with various numbers of hidden layers and neurons were tested to select the optimum network architecture. The developed three-layer neural network simulator model, which used the feedforward backpropagation architecture, demonstrated its ability to learn the given input/output patterns. Cross-validation data were used to show the validity and high prediction accuracy of the network. This leads to the optimum chemical composition, from which the best paste can be made from activated alumina-silica natural products using alkaline hydroxide and alkaline silicate. The research results are in agreement with the mechanism of geopolymerization.
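A hedged sketch of this kind of feedforward network is given below; the layer size, the scikit-learn implementation and the synthetic data are assumptions standing in for the paper's backpropagation simulator and literature dataset.

```python
# Feedforward ANN mapping four oxide/alkali ratios to compressive strength.
# Synthetic data and scikit-learn's MLPRegressor stand in for the paper's simulator.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: Al2O3/SiO2, Na2O/Al2O3, Na2O/H2O, Na/[Na+K]  (placeholder ranges)
X = rng.uniform(low=[0.2, 0.6, 0.2, 0.5], high=[0.5, 1.2, 0.5, 1.0], size=(80, 4))
# synthetic compressive strength (MPa) with noise, only to make the example runnable
y = 40 * X[:, 0] + 15 * X[:, 1] - 30 * X[:, 2] + 10 * X[:, 3] + rng.normal(0, 1, 80)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```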


Read More: http://ascelibrary.org/doi/abs/10.1061/(ASCE)MT.1943-5533.0000829

Relevance:

90.00%

Publisher:

Abstract:

In the current study, we compared technical efficiency of smallholder rice farmers with and without credit in northern Ghana using data from a farm household survey. We fitted a stochastic frontier production function to input and output data to measure technical efficiency. We addressed self-selection into credit participation using propensity score matching and found that the mean efficiency did not differ between credit users and non-users. Credit-participating households had an efficiency of 63.0 percent compared to 61.7 percent for non-participants. The results indicate significant inefficiencies in production and thus a high scope for improving farmers’ technical efficiency through better use of available resources at the current level of technology. Apart from labour and capital, all the conventional farm inputs had a significant effect on rice production. The determinants of efficiency included the respondent’s age, sex, educational status, distance to the nearest market, herd ownership, access to irrigation and specialisation in rice production. From a policy perspective, we recommend that the credit should be channelled to farmers who demonstrate the need for it and show the commitment to improve their production through external financing. Such a screening mechanism will ensure that the credit goes to the right farmers who need it to improve their technical efficiency.
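A minimal sketch of the propensity-score matching step described above is given below; the covariates, data and nearest-neighbour matching rule are hypothetical placeholders (a real analysis would also check balance and common support, and the stochastic frontier estimation is omitted).

```python
# Propensity score matching sketch: model the probability of credit participation from
# household covariates, then pair each participant with the nearest non-participant.
# Column names and data are hypothetical, not the survey's actual variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(20, 70, 300),
    "education_years": rng.integers(0, 12, 300),
    "distance_to_market_km": rng.uniform(1, 30, 300),
    "credit": rng.integers(0, 2, 300),
    "efficiency": rng.uniform(0.4, 0.9, 300),
})

covariates = df[["age", "education_years", "distance_to_market_km"]]
ps = LogisticRegression(max_iter=1000).fit(covariates, df["credit"]).predict_proba(covariates)[:, 1]

treated, control = df[df.credit == 1], df[df.credit == 0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[df.credit == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[df.credit == 1].reshape(-1, 1))
matched = control.iloc[idx.ravel()]

# matched mean difference in technical efficiency between credit users and non-users
print(treated.efficiency.mean() - matched.efficiency.mean())
```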

Relevance:

90.00%

Publisher:

Abstract:

Utilizing the framework of effective surface quasi-geostrophic (eSQG) theory, we explored the potential of reconstructing the 3D upper-ocean circulation structures, including the balanced vertical velocity (w) field, from high-resolution sea surface height (SSH) data of the planned SWOT satellite mission. Specifically, we utilized the 1/30°, submesoscale-resolving OFES model output and passed it through the SWOT simulator, which generates along-swath SSH data with the expected measurement errors. Focusing on the Kuroshio Extension region in the North Pacific, where regional Rossby numbers range from 0.22 to 0.32, we found that the eSQG dynamics constitutes an effective framework for reconstructing the 3D upper-ocean circulation field. Using the modeled SSH data as input, the eSQG-reconstructed relative vorticity (ζ) and w fields reach correlations of 0.7–0.9 and 0.6–0.7, respectively, in the upper 1,000 m of the ocean when compared to the original model output. Degradation due to the SWOT sampling and measurement errors in the input SSH data is found to be moderate: 5–25% for the 3D ζ field and 15–35% for the 3D w field. This degradation ratio tends to decrease in regions where the regional eddy variability (or Rossby number) increases.
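The surface step of such a reconstruction is geostrophy: relative vorticity follows from SSH as ζ = (g/f) ∇²η. The sketch below shows only that step on a synthetic SSH field; the eSQG extrapolation to depth, the w inversion and the SWOT error treatment are not reproduced.

```python
# Surface relative vorticity from SSH via geostrophy: zeta = (g / f) * laplacian(eta).
# Synthetic SSH on a uniform grid; periodic wrap at the edges is adequate for a sketch.
import numpy as np

g, f = 9.81, 8.0e-5            # gravity and a mid-latitude Coriolis parameter (SI units)
dx = dy = 2.0e3                # 2 km grid spacing
x = np.arange(256) * dx
y = np.arange(256) * dy
X, Y = np.meshgrid(x, y)
eta = 0.3 * np.sin(2 * np.pi * X / 150e3) * np.cos(2 * np.pi * Y / 200e3)   # SSH (m)

lap = ((np.roll(eta, -1, 0) - 2 * eta + np.roll(eta, 1, 0)) / dy ** 2 +
       (np.roll(eta, -1, 1) - 2 * eta + np.roll(eta, 1, 1)) / dx ** 2)
zeta = (g / f) * lap
print("max |zeta| / f =", np.abs(zeta).max() / f)   # a rough Rossby number for this field
```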

Relevance:

90.00%

Publisher:

Abstract:

One of the great challenges of HPC (High Performance Computing) is optimising the Input/Output (I/O) subsystem. Ken Batcher sums this up in the following sentence: "A supercomputer is a device for turning compute-bound problems into I/O-bound problems". In other words, the bottleneck no longer lies so much in processing the data as in their availability. Moreover, this problem will be exacerbated by the arrival of Exascale computing and the popularisation of Big Data applications. In this context, this thesis contributes to improving the performance and usability of the I/O subsystem of supercomputing systems. Two main contributions are proposed: i) an I/O interface developed for the Chapel language that improves programmer productivity when coding I/O operations; and ii) an optimised implementation of genetic sequence data storage. In more detail, the first contribution studies and analyses different I/O optimisations in Chapel while providing users with a simple interface for parallel and distributed access to the data contained in files. We thereby contribute both to increasing developer productivity and to making the implementation as efficient as possible. The second contribution also falls within the scope of I/O problems, but in this case it focuses on improving the storage of genetic sequence data, including its compression, and on allowing existing applications to make efficient use of those data, enabling efficient retrieval both sequentially and at random. Additionally, we propose a parallel implementation based on Chapel.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we aim to contribute to the new field of research that intends to bring up to date the tools and statistics currently used to capture the reality created by Global Value Chains (GVCs) in international trade and Foreign Direct Investment (FDI). Specifically, we make use of the most recent data published by the World Input-Output Database to suggest indicators that measure the participation and net gains of countries from being part of GVCs, and we use those indicators in a pooled-regression model to estimate determinants of FDI stocks in Organisation for Economic Co-operation and Development (OECD) member countries. We conclude that one of the proposed measures proves to be statistically significant in explaining the bilateral stock of FDI in OECD countries: the higher the transnational income generated between two given countries by GVCs, taken as a proxy for the participation of those countries in GVCs, the higher one can expect the FDI entering those countries to be. The regression also shows the negative impact of the global financial crisis that started in 2009 on the world's bilateral FDI stocks and, additionally, the particular and significant role played by the People's Republic of China in determining these stocks.
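A hedged sketch of the kind of pooled regression described is shown below; the variable names, the crisis and China dummies, and the synthetic data are placeholders, and the paper's actual WIOD-derived indicator and specification are not reproduced.

```python
# Pooled OLS sketch: bilateral FDI stock regressed on a GVC-participation proxy,
# a post-2009 crisis dummy and a China dummy. All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "gvc_income": rng.lognormal(2.0, 0.8, n),     # proxy for participation in GVCs
    "post_2009": rng.integers(0, 2, n),           # global-financial-crisis dummy
    "partner_is_china": rng.integers(0, 2, n),
})
df["fdi_stock"] = (0.8 * df.gvc_income - 5.0 * df.post_2009
                   + 6.0 * df.partner_is_china + rng.normal(0, 2, n))

X = sm.add_constant(df[["gvc_income", "post_2009", "partner_is_china"]])
print(sm.OLS(df["fdi_stock"], X).fit().params)    # estimated coefficients
```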