828 results for Input-output data


Relevance:

90.00%

Publisher:

Abstract:

Small errors can prove catastrophic. Our purpose here is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance: small differences in the initial conditions produce very great ones in the final phenomena, and a small error in the former will produce an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test; the actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it places those inputs as close as possible to the actual switching critical points and guarantees that the device will meet its input-output specifications.

Prediction becomes impossible by classical analytical analysis bounded by Newton and Euclid. We have found that nonlinear dynamics is the natural state of all circuits and devices, and opportunities exist for effective error detection in a nonlinear dynamics and chaos environment.

Today a set of linear limits is established around every aspect of a digital or analog circuit, outside of which devices are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as our Ph.D. research has verified. In practice, for standard linear informational methodologies this chaotic data product is usually undesirable, and we are educated to be interested in obtaining a more regular stream of output data.

This Ph.D. research explored the possibility of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detector instrument able to bring together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system-status determination.
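The sensitive dependence described in the opening lines is easy to demonstrate numerically. Below is a minimal, illustrative sketch (not taken from the thesis) using the logistic map in its chaotic regime: two trajectories that start 1e-9 apart diverge to order-one separation within a few dozen iterations.

```python
# Minimal illustration of sensitive dependence on initial conditions
# using the logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic
# regime (r = 4).

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # the "small error in the former"

for n in (0, 10, 20, 30, 40, 50):
    print(f"n={n:2d}  |a-b| = {abs(a[n] - b[n]):.3e}")
```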

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this study is to explore the accuracy of the Input-Output model in quantifying the impacts of the 2007 economic crisis on a local tourism industry and economy. Though the model has been used in tourism impact analysis, its estimation accuracy is rarely verified empirically. The Metro Orlando area in Florida is investigated as an empirical case, and the negative change in visitor expenditure between 2007 and 2008 is taken as the direct shock. The total impacts are assessed in terms of output and employment and compared with the actual data. This study finds surprisingly large discrepancies between the estimated and actual results, with the Input-Output model appearing to overestimate the negative impacts. By investigating the local economic activities during the study period, the study makes some exploratory efforts to explain these discrepancies. Theoretical and practical implications are then suggested.
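For reference, the impact estimates discussed above follow the standard Leontief demand-driven model, in which total impacts are the Leontief inverse applied to the direct shock, Δx = (I − A)⁻¹ Δf. A minimal numeric sketch with an invented two-sector coefficient matrix (not the Orlando data):

```python
import numpy as np

# Toy two-sector Leontief model: total impact of a direct demand shock
# is (I - A)^{-1} @ delta_f, where A is the technical-coefficient matrix.
# Numbers are illustrative, not taken from the Orlando study.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])          # input requirements per unit of output
delta_f = np.array([-100.0, 0.0])   # direct shock: fall in final demand

L = np.linalg.inv(np.eye(2) - A)    # Leontief inverse (total requirements)
delta_x = L @ delta_f               # total output impact, direct + indirect

print("Leontief inverse:\n", L)
print("Total output impact:", delta_x)
```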


Relevance:

90.00%

Publisher:

Abstract:

The primary purpose of this thesis was to present a theoretical large-signal analysis, using software simulation, of the power gain and efficiency of a microwave power amplifier for LS-band communications. Power gain, efficiency, reliability, and stability are important characteristics in the power amplifier design process. These characteristics matter for advanced wireless systems, which require low-cost device amplification without sacrificing system performance. Large-signal modeling and input and output matching components are used in this thesis. Motorola's Electro Thermal LDMOS model is a new transistor model that includes self-heating effects and is capable of both small- and large-signal simulation. It allows most of the design effort to concentrate on stability, power gain, bandwidth, and DC requirements. The matching technique allows the gain to be maximized at a specific target frequency. Calculations and simulations for the microwave power amplifier design were performed using Matlab and Microwave Office, respectively; Microwave Office is the simulation software used in this thesis. The study demonstrated that Motorola's Electro Thermal LDMOS transistor is a viable solution for common-source amplifier applications in high-power base stations. The MET-LDMOS met the stability requirements for the specified frequency range without a stability-improvement network. The power gain of the amplifier circuit was improved through proper microwave matching design using input/output matching techniques. The gain and efficiency of the amplifier improved by approximately 4 dB and 7.27%, respectively, and the gain value is roughly 0.89 dB higher than the maximum gain specified in the MRF21010 data sheet. This work can lead to efficient modeling and development of high-power LDMOS transistor implementations in commercial and industrial applications.
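For context on the stability and gain figures quoted above: amplifier designers typically check Rollett's stability factor K and, when K > 1, the maximum available gain (MAG) from the device's S-parameters. The sketch below computes both; the S-parameter values are invented placeholders, not MRF21010 data.

```python
import math

# Two-port stability and gain figures used in amplifier design:
# Rollett stability factor K and maximum available gain (MAG).
# S-parameters below are made-up placeholders, not MRF21010 data.
S11, S12 = complex(0.6, -0.4), complex(0.02, 0.01)
S21, S22 = complex(4.0, 1.5),  complex(0.5, -0.3)

delta = S11 * S22 - S12 * S21
K = (1 - abs(S11)**2 - abs(S22)**2 + abs(delta)**2) / (2 * abs(S12 * S21))

print(f"K = {K:.3f}  (unconditionally stable if K > 1 and |delta| < 1)")
if K > 1:
    mag = abs(S21) / abs(S12) * (K - math.sqrt(K**2 - 1))
    print(f"MAG = {10 * math.log10(mag):.2f} dB")
```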

Relevance:

90.00%

Publisher:

Abstract:

An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use the output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically estimated, spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), with observed station data and with elevationally adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available, and for simulating hydrology under past or future climates.
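A minimal sketch of the elevation correction this abstract describes follows: a locally estimated monthly lapse rate is applied over the difference between the climate model's grid-cell elevation and the high-resolution target elevation. The numbers are illustrative placeholders.

```python
# Sketch of a topographic lapse-rate correction, assuming the setup above:
# an empirically estimated (spatially and seasonally variable) lapse rate
# is applied over the elevation difference between the climate model's
# grid-cell elevation and the finer-resolution target elevation.

def lapse_rate_correct(t_model, z_model, z_target, lapse_rate):
    """Adjust a model temperature to a target elevation.

    t_model   : temperature at the RCM grid cell (deg C)
    z_model   : RCM grid-cell elevation (m)
    z_target  : high-resolution target elevation (m)
    lapse_rate: local monthly lapse rate (deg C per m, typically negative)
    """
    return t_model + lapse_rate * (z_target - z_model)

# January example: a NARR-like cell at 1,200 m downscaled to a ridge at 2,350 m
t_corrected = lapse_rate_correct(t_model=-2.0, z_model=1200.0,
                                 z_target=2350.0, lapse_rate=-0.0065)
print(f"corrected January temperature: {t_corrected:.1f} deg C")
```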

Relevance:

90.00%

Publisher:

Abstract:


In order to predict the compressive strength of geopolymers prepared from alumina-silica natural products, based on the effects of the Al2O3/SiO2, Na2O/Al2O3, Na2O/H2O, and Na/[Na+K] ratios, more than 50 data points were gathered from the literature. The data were used to train and test a multilayer artificial neural network (ANN): a multilayer feedforward network was designed with the chemical compositions of the aluminosilicate and alkali activators as inputs and compressive strength as the output. In this study, feedforward networks with various numbers of hidden layers and neurons were tested to select the optimum network architecture. The developed three-layer neural network model, using the feedforward backpropagation architecture, demonstrated its ability to learn the given input/output patterns. Cross-validation data were used to show the validity and high prediction accuracy of the network. This makes it possible to identify the optimum chemical composition, and hence the best paste that can be made from activated alumina-silica natural products using alkaline hydroxide and alkaline silicate. The research results are in agreement with the mechanism of geopolymerization.


Read More: http://ascelibrary.org/doi/abs/10.1061/(ASCE)MT.1943-5533.0000829
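As a rough illustration of the modeling setup described above (a feedforward backpropagation network mapping the four composition ratios to compressive strength), here is a minimal scikit-learn sketch; the synthetic data stand in for the ~50 literature samples used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Minimal stand-in for the paper's ANN: a feedforward network mapping the
# four composition ratios (Al2O3/SiO2, Na2O/Al2O3, Na2O/H2O, Na/[Na+K])
# to compressive strength. Random placeholder data substitute for the
# literature samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(50, 4))        # composition ratios
y = 20 + 40 * X[:, 0] + rng.normal(0, 2, 50)   # synthetic strength (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("held-out R^2:", round(net.score(X_te, y_te), 3))
```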

Relevance:

90.00%

Publisher:

Abstract:

In the current study, we compared the technical efficiency of smallholder rice farmers with and without credit in northern Ghana using data from a farm household survey. We fitted a stochastic frontier production function to input and output data to measure technical efficiency. We addressed self-selection into credit participation using propensity score matching and found that mean efficiency did not differ between credit users and non-users: credit-participating households had an efficiency of 63.0 percent, compared to 61.7 percent for non-participants. The results indicate significant inefficiencies in production and thus considerable scope for improving farmers’ technical efficiency through better use of available resources at the current level of technology. Apart from labour and capital, all the conventional farm inputs had a significant effect on rice production. The determinants of efficiency included the respondent’s age, sex, educational status, distance to the nearest market, herd ownership, access to irrigation and specialisation in rice production. From a policy perspective, we recommend that credit be channelled to farmers who demonstrate the need for it and show a commitment to improving their production through external financing. Such a screening mechanism will ensure that credit goes to the farmers who need it to improve their technical efficiency.
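The efficiency scores above come from a stochastic frontier. As a simplified stand-in, the sketch below uses corrected OLS (COLS) on a log Cobb-Douglas production function, a cruder but related frontier technique, with synthetic data in place of the Ghana survey.

```python
import numpy as np

# Simplified stand-in for a stochastic frontier: corrected OLS (COLS) on a
# log Cobb-Douglas production function. Efficiency of farm i is
# exp(e_i - max(e)), so the best-practice farm scores 1. Data are synthetic.
rng = np.random.default_rng(1)
n = 200
land, labour = rng.lognormal(0, 0.3, n), rng.lognormal(0, 0.3, n)
ineff = rng.exponential(0.2, n)                       # inefficiency term
log_y = (1.0 + 0.6 * np.log(land) + 0.3 * np.log(labour)
         - ineff + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), np.log(land), np.log(labour)])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)      # OLS in logs
resid = log_y - X @ beta
efficiency = np.exp(resid - resid.max())              # COLS frontier shift

print("mean technical efficiency:", round(efficiency.mean(), 3))
```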

Relevance:

90.00%

Publisher:

Abstract:

Utilizing the framework of effective surface quasi-geostrophic (eSQG) theory, we explored the potential of reconstructing the 3D upper-ocean circulation structures, including the balanced vertical velocity (w) field, from high-resolution sea surface height (SSH) data of the planned SWOT satellite mission. Specifically, we took the 1/30°, submesoscale-resolving OFES model output and passed it through the SWOT simulator, which generates along-swath SSH data with the expected measurement errors. Focusing on the Kuroshio Extension region in the North Pacific, where regional Rossby numbers range from 0.22 to 0.32, we found that the eSQG dynamics constitutes an effective framework for reconstructing the 3D upper-ocean circulation field. Using the modeled SSH data as input, the eSQG-reconstructed relative vorticity (ζ) and w fields reach correlations of 0.7–0.9 and 0.6–0.7, respectively, in the upper 1,000 m of the ocean when compared to the original model output. Degradation due to the SWOT sampling and measurement errors in the input SSH data is moderate: 5–25% for the 3D ζ field and 15–35% for the 3D w field. This degradation ratio tends to decrease in regions where the regional eddy variability (or Rossby number) increases.
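A minimal spectral sketch of the eSQG reconstruction step follows: SSH sets the surface streamfunction (ψ = gη/f), each Fourier mode decays downward as exp((N/f)|k|z), and relative vorticity is the Laplacian of ψ. The grid, stratification, and SSH field below are illustrative placeholders, not OFES or SWOT-simulator output.

```python
import numpy as np

# eSQG-style reconstruction: surface streamfunction from SSH, exponential
# downward decay of each Fourier mode, vorticity from the spectral
# Laplacian (zeta_hat = -|k|^2 * psi_hat). All fields are placeholders.
g, f, N = 9.81, 1.0e-4, 5.0e-3     # gravity, Coriolis, effective buoyancy freq.
L, n = 500e3, 128                  # 500 km domain, 128 x 128 grid
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k)
K = np.hypot(KX, KY)               # isotropic wavenumber magnitude

rng = np.random.default_rng(2)
eta = rng.normal(0, 0.1, (n, n))   # placeholder SSH anomaly field (m)

psi_s_hat = np.fft.fft2(g * eta / f)            # surface streamfunction
z = -500.0                                      # reconstruct at 500 m depth
psi_z_hat = psi_s_hat * np.exp((N / f) * K * z) # downward decay of each mode
zeta_z = np.real(np.fft.ifft2(-K**2 * psi_z_hat))

print("zeta std at 500 m depth:", zeta_z.std())
```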

Relevance:

90.00%

Publisher:

Abstract:

One of the great challenges in HPC (High Performance Computing) is optimizing the Input/Output (I/O) subsystem. Ken Batcher sums up this fact in the phrase: "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." In other words, the bottleneck no longer lies so much in processing the data as in their availability. Moreover, this problem will be exacerbated by the arrival of Exascale and the popularization of Big Data applications. In this context, this thesis contributes to improving the performance and ease of use of the I/O subsystem of supercomputing systems. Two main contributions are proposed: i) an I/O interface developed for the Chapel language that improves programmer productivity when coding I/O operations; and ii) an optimized implementation of the storage of genetic sequence data. In more detail, the first contribution studies and analyzes several I/O optimizations in Chapel while providing users with a simple interface for parallel and distributed access to the data contained in files; we thus contribute both to increasing developer productivity and to making the implementation as efficient as possible. The second contribution also falls within the scope of I/O problems, but in this case focuses on improving the storage of genetic sequence data, including its compression, and on enabling efficient use of those data by existing applications, allowing efficient retrieval both sequentially and randomly. Additionally, we propose a parallel implementation based on Chapel.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we aim to contribute to a new field of research that seeks to update the tools and statistics currently used to capture the present reality of Global Value Chains (GVC) in international trade and Foreign Direct Investment (FDI). Specifically, we use the most recent data published by the World Input-Output Database to propose indicators measuring countries' participation in, and net gains from, GVC, and we use those indicators in a pooled-regression model to estimate the determinants of FDI stocks in Organization for Economic Co-operation and Development (OECD) member countries. We conclude that one of the proposed measures is statistically significant in explaining the bilateral stock of FDI in OECD countries: the higher the transnational income generated between two given countries by GVC, taken as a proxy for those countries' participation in GVC, the higher the FDI entering those countries can be expected to be. The regression also shows the negative impact of the global financial crisis that started in 2009 on the world's bilateral FDI stocks and, additionally, the particular and significant role played by the People's Republic of China in determining these stocks.
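The specification described above amounts to a pooled regression of bilateral FDI on a GVC-participation proxy plus controls. A minimal statsmodels sketch follows; the variable names and the synthetic panel are placeholders, not the WIOD/OECD data used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal pooled-OLS sketch of the kind of specification described above:
# bilateral FDI stock regressed on a GVC-participation proxy, a post-crisis
# dummy, and a China indicator. All data below are synthetic placeholders.
rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "gvc_income": rng.lognormal(0, 1, n),    # GVC-participation proxy
    "crisis":     rng.integers(0, 2, n),     # 1 for post-crisis years
    "china":      rng.integers(0, 2, n),     # 1 if partner is China
})
df["log_fdi"] = (0.5 * np.log(df["gvc_income"]) - 0.3 * df["crisis"]
                 + 0.4 * df["china"] + rng.normal(0, 0.5, n))

model = smf.ols("log_fdi ~ np.log(gvc_income) + crisis + china", data=df).fit()
print(model.summary().tables[1])
```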

Relevance:

90.00%

Publisher:

Abstract:

This study mainly aims to provide an inter-industry analysis through the subdivision of various industries in flow of funds (FOF) accounts. Combined with Financial Statement Analysis data from 2004 and 2005, the Korean FOF accounts are reconstructed to form "from-whom-to-whom" FOF tables, which comprise 115 institutional sectors and correspond to the tables and techniques of input–output (I–O) analysis. First, power-of-dispersion indices are obtained by applying the I–O analysis method. Most service and IT industries, construction, and the light industries within manufacturing fall into the first-quadrant group, whereas the heavy and chemical industries fall into the fourth quadrant, since their power indices in the asset-oriented system are comparatively smaller than those of other institutional sectors. Second, the investments and savings induced by the central bank are calculated for monetary policy evaluation. Industries are bifurcated into two groups to compare their features: the first group comprises industries whose power of dispersion in the asset-oriented system is greater than 1, while for the second group the index is less than 1. We found that the net induced investments (NII)-to-total-liabilities ratios of the first group are about half those of the second group, since the former's induced savings are markedly greater than the latter's.
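The power-of-dispersion index used above is the standard Rasmussen backward-linkage measure from I–O analysis: the column sums of the Leontief inverse normalised by their average, with values above 1 marking sectors that pull strongly on the rest of the system. A toy computation (the 3x3 coefficient matrix stands in for the 115-sector FOF tables):

```python
import numpy as np

# Power-of-dispersion (Rasmussen backward-linkage) indices: column sums of
# the Leontief inverse divided by their average. Coefficients are invented.
A = np.array([[0.15, 0.25, 0.05],
              [0.20, 0.10, 0.30],
              [0.10, 0.05, 0.20]])

Linv = np.linalg.inv(np.eye(3) - A)          # Leontief inverse
col_sums = Linv.sum(axis=0)                  # total backward pull per sector
power_of_dispersion = col_sums / col_sums.mean()
print("power of dispersion:", np.round(power_of_dispersion, 3))
```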

Relevance:

90.00%

Publisher:

Abstract:

Purpose: Custom cranio-orbital implants have been shown to achieve better performance than their hand-shaped counterparts by restoring skull anatomy more accurately and by reducing surgery time. Designing a custom implant involves reconstructing a model of the patient's skull from their computed tomography (CT) scan; the healthy side of the skull model, contralateral to the damaged region, can then be used to design an implant plan. Designing implants for areas of thin bone, such as the orbits, is challenging due to the poor CT resolution of such bone structures, which makes preoperative design time-intensive since thin bone structures in CT data must be segmented manually. The objective of this thesis was to research methods to accurately and efficiently design cranio-orbital implant plans, with a focus on the orbits, and to develop software that integrates these methods. Methods: The software consists of modules that use image and surface restoration approaches to enhance both the quality of the CT data and the reconstructed model. It enables users to input CT data and use the provided tools to output a skull model with restored anatomy, which can then be used to design the implant plan. The software was built on 3D Slicer, an open-source medical visualization platform, and was tested on CT data from thirteen patients. Results: The average time to create a skull model with restored anatomy using our software was 0.33 ± 0.04 hours (SD), compared with 3 to 6 hours for the manual segmentation method. To assess the structural accuracy of the reconstructed models, CT data from the thirteen patients were used to compare the models created with our software against those created manually; when the skull models were registered together, the difference between each pair was 0.4 ± 0.16 mm (SD). Conclusions: We have developed software for designing custom cranio-orbital implant plans, with a focus on thin bone structures. The method described decreases design time and achieves accuracy similar to the manual method.

Relevance:

80.00%

Publisher:

Abstract:

Indonesia’s construction industry is important to the national economy, yet its competitiveness is considered low due to the limited success of its development strategy and policy. A new approach, known as the cluster approach, is being used to formulate strategy and policy in order to develop a stronger and more competitive industry. This paper discusses the layout of the Indonesian construction cluster and its competitiveness. An archival analysis research approach was used to identify the construction cluster, based on the input-output (I/O) tables for the years 1995 and 2000 published by the Indonesian Central Bureau of Statistics. The results suggest that the Indonesian construction cluster consists of the industries directly involved in construction as its core, with the other related and supporting industries as the balance. The anatomy of the Indonesian construction cluster permits structural changes to happen within it; these changes depend on the policies that regulate the cluster’s constituents.

Relevance:

80.00%

Publisher:

Abstract:

Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
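A toy illustration of the unification idea (with an invented program, addresses, and pins, not the paper's actual method) follows: memory-mapped I/O statements are tied to the hardware pins their addresses reach, and a simple taint propagation then traces information flow across both software and hardware boundaries.

```python
# Toy information-flow trace across software and hardware: I/O statements
# are unified with the pins their memory-mapped addresses reach, then
# taint is propagated along the program's dataflow. Everything is invented
# for illustration.
PIN_MAP = {0x4000_0000: "UART_RX", 0x4000_0004: "UART_TX"}

# (dest, sources) pairs: dest variable/address <- source variables/addresses
program = [
    ("key",       [0x4000_0000]),   # read secret from the UART_RX pin
    ("scrambled", ["key"]),         # internal computation on the secret
    (0x4000_0004, ["scrambled"]),   # write result out on the UART_TX pin
]

tainted = {0x4000_0000}             # information enters at UART_RX
for dest, sources in program:
    if any(s in tainted for s in sources):
        tainted.add(dest)

leaks = [PIN_MAP[t] for t in tainted if t in PIN_MAP and t != 0x4000_0000]
print("input UART_RX flows to output pins:", leaks)
```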

Relevance:

80.00%

Publisher:

Abstract:

Economics education research studies conducted in the UK, USA and Australia to investigate the effects of learning inputs on academic performance have been dominated by the input-output model (Shanahan and Meyer, 2001). In the Student Experience of Learning framework, however, the link between learning inputs and outputs is mediated by students' learning approaches, which in turn are influenced by their perceptions of the learning contexts (Evans, Kirby, & Fabrigar, 2003). Many learning inventories, such as Biggs' Study Process Questionnaire and Entwistle and Ramsden's Approaches to Studying Inventory, have been designed to measure approaches to academic learning. However, there is a limitation to using generalised learning inventories: they tend to aggregate the different learning approaches utilised in different assessments. As a result, important relationships between learning approaches and learning outcomes that exist in specific assessment contexts will be missed (Lizzio, Wilson, & Simons, 2002). This paper documents the construction of an assessment-specific instrument to measure learning approaches in economics. The post-dictive validity of the instrument was evaluated by examining the association of learning approaches with students' perceived assessment demands in different assessment contexts.