834 results for Input-Output analysis
Abstract:
This paper focuses on the construction of narrative by elementary school students, seeking to identify the strategies they employ in producing narrative texts representative of the miniconto (flash fiction) genre. We take a sample of forty texts produced by students in the 6th and 9th years of basic education: twenty by 6th-year students (ten from a public school and ten from a private school) and twenty by 9th-year students (distributed similarly between public and private education). In general, we aim to understand the mechanisms by which these writers build their narratives, as well as to provide input for the analysis of textual production in this genre. The research is based on American Functional-Linguistic assumptions, inspired by Givón (2001), Thompson (2005), Hopper (1987), Bybee (2010), Traugott (2003), Martelotta (2008), and Furtado da Cunha (2011), among others. In addition, drawing on the theoretical framework presented by Labov (1972) on narrative, together with the contribution of Batoréo (1998), we observed the recurring elements in the structure of the narratives under study: abstract, orientation, complication, resolution, evaluation, and coda. The notion of genre presented in Marcuschi (2002) is also approached, in a complementary way. This is quantitative and qualitative research, with a descriptive and analytical-interpretive bias. In the corpus analysis, we consider the following categories: the miniconto discourse genre; the compositional structure of the narrative; informativeness (discursive progression, thematic and narrative coherence, topical-referential distribution); and informative relevance (figure/ground).
At the end of the work, our initial hypothesis, that 9th-year students would outperform 6th-year students and that students in private education would outperform those in public education, was not confirmed: the comparative study revealed that the groups perform similarly in constructing narratives, making use of the same strategies.
Abstract:
We study a small circuit of coupled nonlinear elements to investigate general features of signal transmission through networks. The small circuit itself is conceived as a building block for larger networks. Individual dynamics and coupling are motivated by neuronal systems: we consider two types of dynamical modes for an individual element, regular spiking and chattering, and each element can receive excitatory and/or inhibitory inputs and is subjected to different feedback types (excitatory and inhibitory; forward and recurrent). Both deterministic and stochastic simulations are carried out to study the input-output relationships of these networks. Major results for regular spiking elements include frequency locking, spike rate amplification for strong synaptic coupling, and inhibition-induced spike rate control, which can be interpreted as an output frequency rectification. For chattering elements, spike rate amplification at low frequencies and silencing at high frequencies is characteristic.
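The kind of input-output relationship described above (spike rate as a function of input drive) can be illustrated with a minimal leaky integrate-and-fire element. This is a generic stand-in, not the paper's model; all units and parameters are assumptions:

```python
def lif_rate(current, tau=0.02, theta=1.0, dt=0.001, duration=1.0):
    """Firing rate (spikes/s) of a leaky integrate-and-fire unit
    driven by a constant input current (dimensionless units)."""
    v, spikes = 0.0, 0
    for _ in range(int(duration / dt)):
        v += dt * (current - v) / tau   # leaky integration toward the input
        if v >= theta:                  # threshold crossing: spike and reset
            spikes += 1
            v = 0.0
    return spikes / duration

# Sub-threshold input produces no spikes; the rate grows with the drive.
rates = [lif_rate(i) for i in (0.8, 1.2, 2.0)]
```

The resulting curve is zero below threshold and monotonically increasing above it, a simple form of the rectification behaviour the abstract mentions.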
Abstract:
The real-time optimization of large-scale systems is a difficult problem due to the need for complex models involving uncertain parameters and the high computational cost of solving such problems by a decentralized approach. Extremum-seeking control (ESC) is a model-free real-time optimization technique that can estimate unknown parameters and optimize nonlinear time-varying systems using only a measurement of the cost function to be minimized. In this thesis, we develop a distributed version of extremum-seeking control which allows large-scale systems to be optimized without models and with minimal computing power. First, we develop a continuous-time distributed extremum-seeking controller with three main components: consensus, parameter estimation, and optimization. The consensus provides each local controller with an estimate of the cost to be minimized, allowing the controllers to coordinate their actions. Using this cost estimate, parameters for a local input-output model are estimated, and the cost is minimized by following a gradient descent based on the gradient estimate. Next, a similar distributed extremum-seeking controller is developed in discrete time. Finally, we consider an interesting application of distributed ESC: formation control of high-altitude balloons for high-speed wireless internet. These balloons must be steered into a favourable formation in which they are spread out over the Earth and provide coverage to the entire planet. Distributed ESC is applied to this problem and is shown to be effective for a system of 1200 balloons subjected to realistic wind currents. The approach does not require a wind model and uses a cost function based on a Voronoi partition of the sphere. Distributed ESC is able to steer balloons from a few initial launch sites into a formation that provides coverage to the entire Earth, and can maintain a similar formation as the balloons move with the wind around the Earth.
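The core extremum-seeking idea, perturbing the input with a dither signal and correlating the measured cost with that dither to estimate the gradient, can be sketched for a single loop. The cost function, gains, and filter below are illustrative assumptions, not the thesis's distributed controller:

```python
import math

def cost(u):
    # Stand-in for an unmodelled plant: the controller never sees this
    # expression, only its measured value. Minimum at u = 2.0.
    return (u - 2.0) ** 2

def extremum_seek(u0, amp=0.2, omega=10.0, gain=0.5, dt=0.01, steps=20000):
    """Single-loop extremum seeking: dither the input, demodulate the
    measured cost to estimate the gradient, and descend that estimate."""
    u, grad_est = u0, 0.0
    for i in range(steps):
        t = i * dt
        y = cost(u + amp * math.sin(omega * t))   # perturbed cost measurement
        # Low-pass filtered correlation of cost with the dither ~ gradient.
        grad_est += dt * 2.0 * (y * math.sin(omega * t) - grad_est)
        u -= dt * gain * grad_est                 # gradient descent step
    return u

u_star = extremum_seek(0.0)   # settles near the unknown minimizer
```

The distributed version described in the thesis replaces the direct cost measurement with a consensus-based estimate shared among local controllers.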
Abstract:
In order to predict the compressive strength of geopolymers prepared from alumina-silica natural products, based on the effects of Al2O3/SiO2, Na2O/Al2O3, Na2O/H2O, and Na/[Na+K], more than 50 pieces of data were gathered from the literature. The data were used to train and test a multilayer artificial neural network (ANN): a multilayer feedforward network was designed with the chemical compositions of the aluminosilicate and alkali activators as inputs and compressive strength as output. In this study, feedforward networks with various numbers of hidden layers and neurons were tested to select the optimum network architecture. The developed three-layer neural network model, using the feedforward backpropagation architecture, demonstrated its ability to learn the given input/output patterns. Cross-validation data were used to show the validity and high prediction accuracy of the network. This leads to the optimum chemical composition, and the best paste can be made from activated alumina-silica natural products using alkaline hydroxide and alkaline silicate. The results are in agreement with the mechanism of geopolymerization.
Read More: http://ascelibrary.org/doi/abs/10.1061/(ASCE)MT.1943-5533.0000829
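The pipeline the abstract describes, ratio-valued inputs, one hidden layer, feedforward backpropagation, can be sketched as follows. The data are synthetic, and the network size, learning rate, and target function are illustrative assumptions, not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four composition ratios and a strength target.
X = rng.uniform(0.2, 3.0, size=(50, 4))
y = (10 * X[:, 0] - 4 * X[:, 1] + 2 * X[:, 2] + X[:, 3]).reshape(-1, 1)
y = (y - y.mean()) / y.std()          # standardize the target

# One hidden layer (8 tanh units), linear output, plain gradient descent.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros((1, 1))

lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)          # forward pass: hidden activations
    pred = H @ W2 + b2                # network output
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    dW2 = H.T @ err / len(X); db2 = err.mean(0, keepdims=True)
    dH = (err @ W2.T) * (1 - H ** 2)  # tanh derivative
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

In practice (as in the paper), architecture selection and cross-validation on held-out data guard against overfitting such a small dataset.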
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy); therefore, the evaluation considers both objectives (e.g., by plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering, and show that our approach achieves a better cost-accuracy trade-off than batch and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs.
We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
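The cost-accuracy trade-off in sequential prediction can be illustrated with a much simpler stopping rule than the thesis's learned MDP policy: consume noisy evidence one item at a time and stop as soon as a margin threshold is reached (in the spirit of a sequential probability ratio test; the vote model and threshold are assumptions for illustration):

```python
import random

def decide_sequentially(votes, threshold=5):
    """Consume +1/-1 evidence one item at a time; stop once the running
    margin is decisive. Returns (predicted label, number of items read)."""
    margin, cost = 0, 0
    for vote in votes:
        margin += vote
        cost += 1
        if abs(margin) >= threshold:
            break
    return (1 if margin > 0 else -1), cost

random.seed(0)
# Noisy stream favouring class +1 with probability 0.7.
stream = [1 if random.random() < 0.7 else -1 for _ in range(200)]
label, n_read = decide_sequentially(stream)
# The decision typically uses far fewer than all 200 items.
```

Raising the threshold trades more cost (items read) for more accuracy, which is exactly the Pareto curve the evaluation above plots; the thesis learns this stopping decision rather than fixing it by hand.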
Abstract:
One of the great challenges of HPC (High Performance Computing) is optimizing the Input/Output (I/O) subsystem. Ken Batcher summed up this fact in the phrase: "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." In other words, the bottleneck no longer lies so much in processing the data as in their availability. Moreover, this problem will be exacerbated by the arrival of exascale computing and the popularization of Big Data applications. In this context, this thesis contributes to improving the performance and usability of the I/O subsystem of supercomputing systems. Two main contributions are proposed: i) an I/O interface developed for the Chapel language that improves programmer productivity when coding I/O operations; and ii) an optimized implementation of genetic sequence data storage. In more detail, the first contribution studies and analyzes different I/O optimizations in Chapel, while providing users with a simple interface for parallel and distributed access to the data contained in files. We thus contribute both to increasing developer productivity and to making the implementation as efficient as possible. The second contribution is also framed within I/O problems, but in this case focuses on improving the storage of genetic sequence data, including its compression, and on enabling efficient use of those data by existing applications, allowing efficient retrieval both sequentially and at random. Additionally, we propose a parallel implementation based on Chapel.
Abstract:
Synthetic biology, by co-opting molecular machinery from existing organisms, can be used as a tool for building new genetic systems from scratch, for understanding natural networks through perturbation, or for hybrid circuits that piggy-back on existing cellular infrastructure. Although the toolbox for genetic circuits has greatly expanded in recent years, it is still difficult to separate the circuit function from its specific molecular implementation. In this thesis, we discuss the function-driven design of two synthetic circuit modules, and use mathematical models to understand the fundamental limits of circuit topology versus operating regimes as determined by the specific molecular implementation. First, we describe a protein concentration tracker circuit that sets the concentration of an output protein relative to the concentration of a reference protein. The functionality of this circuit relies on a single negative feedback loop that is implemented via small programmable protein scaffold domains. We build a mass-action model to understand the relevant timescales of the tracking behavior and how the input/output ratios and circuit gain might be tuned with circuit components. Second, we design an event detector circuit with permanent genetic memory that can record order and timing between two chemical events. This circuit was implemented using bacteriophage integrases that recombine specific segments of DNA in response to chemical inputs. We simulate expected population-level outcomes using a stochastic Markov-chain model, and investigate how inferences on past events can be made from differences between single-cell and population-level responses. Additionally, we present some preliminary investigations on spatial patterning using the event detector circuit as well as the design of stationary phase promoters for growth-phase dependent activation. 
These results advance our understanding of synthetic gene circuits, and contribute towards the use of circuit modules as building blocks for larger and more complex synthetic networks.
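The tracker behaviour described above, an output protein held at a set ratio to a reference protein, can be caricatured with a two-species rate model. The rates and the simple proportional form below are assumptions for illustration, not the thesis's scaffold-based mass-action model:

```python
def simulate_tracker(ref_levels, k_prod=1.0, k_deg=1.0, dt=0.01):
    """Euler-integrate dY/dt = k_prod * R - k_deg * Y: production scales
    with the reference R, degradation with Y itself, so Y settles at
    (k_prod / k_deg) * R and re-tracks R after a change."""
    y = 0.0
    trace = []
    for r in ref_levels:              # reference level at each time step
        y += dt * (k_prod * r - k_deg * y)
        trace.append(y)
    return trace

# Reference steps from 2.0 to 5.0 halfway through; the output follows it.
ref = [2.0] * 5000 + [5.0] * 5000
trace = simulate_tracker(ref)
```

The negative dependence of the net rate on Y plays the role of the circuit's negative feedback loop; the thesis's model additionally captures the scaffold binding kinetics that set the tracking timescale and gain.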
Abstract:
In this paper, we aim to contribute to the new field of research that seeks to bring up to date the tools and statistics currently used to capture the reality of Global Value Chains (GVCs) in international trade and Foreign Direct Investment (FDI). Specifically, we use the most recent data published by the World Input-Output Database to propose indicators that measure the participation and net gains of countries taking part in GVCs, and we use those indicators in a pooled-regression model to estimate determinants of FDI stocks in Organisation for Economic Co-operation and Development (OECD) member countries. We conclude that one of the proposed measures proves statistically significant in explaining the bilateral stock of FDI in OECD countries: the higher the transnational income generated between two given countries by GVCs, taken as a proxy for those countries' participation in GVCs, the higher the FDI entering those countries can be expected to be. The regression also shows the negative impact of the global financial crisis that started in 2009 on the world's bilateral FDI stocks and, additionally, the particular and significant role played by the People's Republic of China in determining these stocks.
Abstract:
Firms in China within the same industry but with different ownership and size have very different production functions and can face very different emission regulations and financial conditions. This fact has largely been ignored in most of the existing literature on climate change. Using a newly augmented Chinese input-output table in which information about firm size and ownership is explicitly reported, this paper employs a dynamic computable general equilibrium (CGE) model to analyze the impact of alternative climate policy designs with respect to regulation and financial conditions on heterogeneous firms. The simulation results indicate that with a business-as-usual regulatory structure, the effectiveness and economic efficiency of climate policies are significantly undermined. Expanding regulation to cover additional firms has a first-order effect of improving efficiency. However, over-investment in energy technologies in certain firms may decrease the overall efficiency of investments and dampen long-term economic growth by competing with other fixed-capital investments for financial resources. Therefore, a market-oriented arrangement for sharing the emission reduction burden and a mechanism for allocating green investment are crucial for China to achieve a more ambitious emission target in the long run.
Abstract:
In the current macroeconomic scenario, economic decision-making in any public entity must be supported by adequate economic intelligence. It is a priority to have models, processes, techniques, and tools that guarantee adequate control of all its investments. In this thesis we present a knowledge-management model based on the Input-Output (IO) framework, which allows us to determine the economic impact of public programs. This model is supported by an information system that will assist economic analysts in decision-making in the field of public investment. The main objective of the thesis is the creation and development of this model, called MOCIE (Knowledge Model for Economic Impact). Beyond developing this model and knowledge management for the economics of large public programs, the thesis includes a study spanning several complementary lines of research, such as IO analysis, information systems engineering and architecture, defense economics, and genetic computing. The model proposed in this thesis has been put into practice in a very specific sector of the national economy: the defense sector. It has therefore also been necessary to carry out an in-depth study of the sector itself and of the management of public programs in the field of defense. MOCIE is structured in three knowledge-management layers that allow, on the one hand, the perception of the components existing in the public-investment environment and the comprehension of their meaning in terms of economic impact and their relation to the other national macroeconomic variables and, on the other, the projection and monitoring of their state throughout the entire life cycle of public programs...
Abstract:
We experimentally study the temporal dynamics of amplitude-modulated laser beams propagating through a water dispersion of graphene oxide sheets in a fiber-to-fiber U-bench. Nonlinear refraction induced in the sample by thermal effects leads to both phase reversal of the transmitted signals and dynamic hysteresis in the input-output power curves. A theoretical model including beam propagation and thermal lensing dynamics reproduces the experimental findings. © 2015 Optical Society of America.
Abstract:
Productive linkage, as an essential characteristic of economic structure, is a topic little analyzed by economic theory. This thesis therefore aims to incorporate it explicitly into economic analysis, in order to study the effects that the current worldwide organization of production and trade has on the characteristics of economic structures. The research analyzes the repercussions of fragmented production on the characteristics of economic structures and the possible effects on economic development. Three countries characterized by strong international integration are studied: South Korea, Spain, and Mexico. In order to evaluate economic performance under dissimilar development models, the year 1980 is compared with the first decade of the 2000s, using input-output tables (IOTs)...
Abstract:
Technological innovation is closely linked to the development of the productive structure. For a long time this relationship was very evident, since technology as a driving force of growth was associated with investment in machinery and equipment, which incorporated the innovation component that generated productivity gains. The orthodox neoclassical model centered the study of economic growth on Solow (1957), who explains that the source of growth lies in an exogenous factor identified as technological change, thereby downplaying investment as a determinant of output growth. Consequently, recent theoretical contributions, particularly neoclassical ones, focus on the factors that drive the generation and development of innovations, but set aside the link with the economic structure. This work analyzes the importance of capital goods in Germany, Japan, and the United States, using input-output tables for 1980 and 2005, and defines the link between technological change and the productive structure. 1. Synthesis: The objective is to show that the capital goods industry links technological dynamics and innovation processes with the development of the productive structure, being key to economic development as a driving force of the system: it generates technological change, transfers innovations, and articulates technological change with the productive structure. The following results are drawn from the thesis: a) From a theoretical standpoint, Nathan Rosenberg justifies the importance of capital goods by their capacity to generate embodied technological change and by their articulating character with other productive sectors, favoring the process of economic development. On the methodological side, input-output analysis turns out to be the most suitable for the purpose of the research.
b) The importance of technological change. Through the study of patents, it was shown that the sector's innovation dynamics are related to knowledge flows, by creating and diffusing innovations and by its ability to assimilate advances generated in other sectors. Using "Minimal Flow Analysis", it is determined that capital goods play a prominent role because of their propagation effects on innovative effort; likewise, as generators of innovations, their R&D investment translates into a high capacity to impact the sectors to which they are linked...
Abstract:
Attention is drawn to the fundamental importance of the first quadrant of the input-output matrix for the Alentejo region, and to its possible use in regionally based public policies.
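The first quadrant referred to above is the inter-industry flow matrix of an input-output table. A minimal numeric sketch (with a made-up 3-sector table, not Alentejo data) of how it yields the technical coefficients and the Leontief inverse used throughout the entries in this list:

```python
import numpy as np

# Made-up 3-sector inter-industry flows Z (the "first quadrant") and
# total outputs x; rows of Z are sales, columns are purchases.
Z = np.array([[20.0, 30.0, 10.0],
              [15.0, 10.0, 25.0],
              [ 5.0, 20.0, 10.0]])
x = np.array([100.0, 120.0, 90.0])

A = Z / x                          # technical coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

f = x - Z.sum(axis=1)              # final demand implied by the table
x_check = L @ f                    # Leontief identity: x = (I - A)^-1 f
```

Row i of `L` gives the total (direct plus indirect) output sector i must produce per unit of final demand, which is what regional impact analyses of this kind compute from the first quadrant.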