856 results for C33 - Models with Panel Data


Relevance: 100.00%

Abstract:

This paper examines the extent to which electricity supply constraints can affect sectoral specialization. For this purpose, an empirical trade model is estimated from 1990-2008 panel data on 15 OECD countries and 12 manufacturing sectors. We find that, along with Ricardian technological differences and Heckscher-Ohlin factor-endowment differences, productivity-adjusted electricity capacity drives specialization in several sectors. Among them, electrical equipment, transport equipment, machinery, chemicals, and paper products see lower output shares as a result of decreases in productivity-adjusted electricity capacity. Furthermore, our dynamic panel estimation reveals that the effects of Ricardian technological differences dominate in the short run, whereas factor-endowment differences and productivity-adjusted electricity capacity have a significant effect only in the long run.
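The within (fixed-effects) estimator underlying panel specifications like this one can be sketched in a few lines. The data below are simulated, and the dimensions and the true coefficient are illustrative choices only (15 entities and 19 periods merely echo the 15-country, 1990-2008 setting):

```python
import numpy as np

# A minimal sketch of the within (fixed-effects) estimator on a
# simulated panel; n_countries, n_years and beta are illustrative.
rng = np.random.default_rng(0)
n_countries, n_years, beta = 15, 19, 0.8

country_effect = rng.normal(0, 2, n_countries)           # unobserved heterogeneity
x = rng.normal(0, 1, (n_countries, n_years)) + country_effect[:, None]
y = beta * x + country_effect[:, None] + rng.normal(0, 0.5, (n_countries, n_years))

# Within transformation: subtracting each entity's time mean wipes out
# the time-invariant fixed effect, removing the omitted-variable bias.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)

beta_fe = (x_dm * y_dm).sum() / (x_dm ** 2).sum()        # pooled OLS on demeaned data
print(round(beta_fe, 2))  # close to the true beta of 0.8
```

Because the simulated regressor is correlated with the country effect, pooled OLS on the raw data would be biased upward; the demeaned estimate recovers the true slope.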

Relevance: 100.00%

Abstract:

In this study, we examine voting behavior in Indonesian parliamentary elections from 1999 to 2014. After summarizing the changes in Indonesian parties' vote shares from a historical standpoint, we use simple regression models on panel data aggregated at the district level to analyze the effect of regional characteristics on Islamic and secular parties' vote shares. We then test the hypothesis of retrospective economic voting. The results show that districts which formerly stood strongly behind Islamic parties continued to select those parties, or in some elections preferred abstention to them. With regard to retrospective economic voting, we find that districts which experienced higher per capita economic growth gave more support to the ruling parties, although our results remain tentative because data for 2014 are not yet available.

Relevance: 100.00%

Abstract:

This paper studies the cracking of plasterboard and rockwool sandwich panels under in-plane flexural loading. These panels are commonly used to build interior partition walls, and they frequently crack as a result of excessive deflection of the floor slabs. There are currently no reliable simulation models or experimental data for studying this problem. The paper presents the results of an experimental campaign aimed at characterizing the fracture behaviour of the sandwich panels and of their individual components. In addition, it presents a cohesive model with an embedded crack that simulates the fracture behaviour of the complete sandwich panel. Finally, we report the results of mixed-mode (tension/shear) fracture tests on commercial panels and reproduce their behaviour with the proposed cohesive model, obtaining a good fit.

Relevance: 100.00%

Abstract:

Abstract: Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored across different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be done manually and requires adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model, or classifier, from available training data; second, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). Besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), or as stationary or streaming depending on the characteristics of the data and the rate of change underlying them. In this thesis, we address the classification problem under three different settings, namely supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we use Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data.
They are proposed from two different points of view. The first method, named CB-MBC, is based on a greedy forward wrapper selection approach, while the second, named MB-MBC, is a filter approach based on constraints and Markov blankets. Both methods are applied to two important real-world problems: the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study compares CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for the Parkinson's disease prediction problem, namely multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy as well as in the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables. The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, is a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting nature. That is, the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out of date and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and a bootstrapping method to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any drift occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged.
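The drift-quantification idea just described (KL divergence between windows, with a bootstrap-calibrated threshold) can be sketched for a single feature as follows. The window sizes, the bin count, and the 95% threshold are my own illustrative choices, not the thesis's settings:

```python
import numpy as np

# A minimal sketch in the spirit of CPL-DS drift detection: compare a
# reference window with a current window via KL divergence over a
# discretized feature, and calibrate the alarm threshold by
# bootstrapping the reference window against itself.
rng = np.random.default_rng(1)

def kl(p, q, eps=1e-9):
    p, q = p + eps, q + eps               # smooth empty bins
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def hist(x, edges):
    counts, _ = np.histogram(x, bins=edges)
    return counts.astype(float)

reference = rng.normal(0.0, 1.0, 500)     # data before any drift
edges = np.histogram_bin_edges(reference, bins=10)

# Bootstrap: KL between the reference and resamples of itself gives the
# distribution of the statistic under "no drift".
boot = [kl(hist(reference, edges),
           hist(rng.choice(reference, size=500), edges))
        for _ in range(200)]
threshold = float(np.quantile(boot, 0.95))

current_same = rng.normal(0.0, 1.0, 500)   # no feature drift
current_drift = rng.normal(1.0, 1.0, 500)  # the feature mean has shifted

print(kl(hist(reference, edges), hist(current_same, edges)) > threshold)
print(kl(hist(reference, edges), hist(current_drift, edges)) > threshold)  # drift: True
```

A one-standard-deviation shift in the mean produces a divergence far above the bootstrap threshold, while an undrifted window typically stays below it.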
CPL-DS is general in that it can be applied to several classification models. Using two different models, namely the naive Bayes classifier and logistic regression, CPL-DS is tested on synthetic data streams and applied to the real-world problem of malware detection, in which newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective at detecting different kinds of drift from partially labeled data streams and also achieves good classification performance. Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study on synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
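The Page-Hinkley monitoring used by both adaptive methods can be sketched as a one-sided test for a drop in the mean of a score stream. The synthetic stream and the delta/lambda tuning values below are illustrative assumptions, not the thesis's settings:

```python
import random

# A minimal one-sided Page-Hinkley sketch, watching a stream of
# log-likelihood-like scores for a sustained drop in the mean.
class PageHinkley:
    def __init__(self, delta=0.05, lam=10.0):
        self.delta, self.lam = delta, lam    # tolerance and alarm threshold
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n     # incremental running mean
        # cum grows when x falls below the running mean by more than delta.
        self.cum += self.mean - x - self.delta
        self.cum_min = min(self.cum, self.cum_min)
        return self.cum - self.cum_min > self.lam  # True signals a drop

random.seed(2)
ph = PageHinkley()
stream = [random.gauss(-2.0, 0.3) for _ in range(200)]   # stable scores
stream += [random.gauss(-4.0, 0.3) for _ in range(200)]  # drift: scores drop
alarms = [i for i, x in enumerate(stream) if ph.update(x)]
print(alarms[0])  # first alarm arrives shortly after position 200
```

The cumulative statistic drifts downward while scores are stable, so only a sustained rise above its running minimum, caused here by the drop at position 200, triggers the alarm.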

Relevance: 100.00%

Abstract:

1. Canopies are complex multilayered structures comprising individual plant crowns that expose a multifaceted surface area to sunlight. Foliage arrangement and properties are the main mediators of canopy functions. The leaves act as light traps whose exposure to sunlight varies with time of day, date and latitude in a trade-off between photosynthetic light harvesting and avoidance of excessive or photoinhibitory light. To date, ecological research based upon leaf sampling has been limited by the available technology, with which data acquisition becomes labour-intensive and time-consuming given the overwhelming number of leaves involved. 2. In the present study, our goal was to develop a tool capable of measuring a sufficient number of leaves to enable analysis of leaf populations, tree crowns and canopies. We specifically tested whether a cell phone working as a 3D pointer could yield reliable, repeatable and valid leaf angle measurements with a simple gesture. We evaluated the accuracy of this method under controlled conditions, using a 3D digitizer, and we compared performance in the field with the methods commonly used. We present an equation to estimate the potential proportion of the leaf exposed to direct sunlight (SAL) at any given time and compare the results with those obtained by means of a graphical method. 3. We found a strong and highly significant correlation between the graphical method and the equation presented. The calibration process showed a strong correlation between the results derived from the two methods, with a mean relative difference below 10%. The mean relative difference in the calculation of instantaneous exposure was below 5%. Our device performed equally well in diverse locations, in which we characterized over 700 leaves in a single day. 4.
The new method, involving the use of a cell phone, is much more effective than the traditional methods or digitizers when the goal is to scale up from leaf position to the performance of leaf populations, tree crowns or canopies. Our methodology constitutes an affordable and valuable tool with which to frame a wide range of ecological hypotheses and to support canopy modelling approaches.
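The paper's SAL equation is not reproduced here, but the underlying geometry can be sketched: the instantaneous fraction of a flat leaf exposed to direct sunlight scales with the cosine of the angle between the leaf normal and the solar beam. The angle conventions and the solar positions below are illustrative assumptions:

```python
import math

# Generic sketch of leaf-to-sun geometry (not the paper's exact SAL
# equation). Angles are in degrees; z points up, azimuth is measured
# from the y axis.
def unit_from_angles(elevation_deg, azimuth_deg):
    """Unit vector from an elevation/azimuth pair."""
    el, az = math.radians(elevation_deg), math.radians(azimuth_deg)
    return (math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            math.sin(el))

def direct_exposure(leaf_pitch_deg, leaf_azimuth_deg, sun_elev_deg, sun_az_deg):
    # A horizontal leaf (pitch 0) has a vertical normal.
    normal = unit_from_angles(90.0 - leaf_pitch_deg, leaf_azimuth_deg)
    sun = unit_from_angles(sun_elev_deg, sun_az_deg)
    cosine = sum(a * b for a, b in zip(normal, sun))
    return max(0.0, cosine)   # 0 when the upper face is not lit

# A horizontal leaf under an overhead sun is fully exposed...
print(round(direct_exposure(0, 0, 90, 0), 3))   # 1.0
# ...while a steeply pitched leaf facing away from a low sun is shaded.
print(round(direct_exposure(80, 0, 20, 180), 3))
```

Sweeping the sun's elevation and azimuth over a day turns this into the kind of instantaneous-exposure estimate the abstract describes.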

Relevance: 100.00%

Abstract:

Comments: This article is a U.S. government work and is not subject to copyright in the United States. Abstract: Potential consequences of climate change on crop production can be studied using mechanistic crop simulation models. While a broad variety of maize simulation models exist, it is not known whether different models diverge in their grain yield responses to changes in climatic factors, or whether they agree in their general trends related to phenology, growth, and yield. With the goal of analyzing the sensitivity of simulated yields to changes in temperature and atmospheric carbon dioxide concentration [CO2], we present the largest maize crop model intercomparison to date, including 23 different models. These models were evaluated at four locations representing a wide range of maize production conditions in the world: Lusignan (France), Ames (USA), Rio Verde (Brazil) and Morogoro (Tanzania). While individual models differed considerably in absolute yield simulation at the four sites, an ensemble of a minimum number of models was able to simulate absolute yields accurately at the four sites even with little calibration data, suggesting that using an ensemble of models has merit. Temperature increase had a strong negative influence on modeled yield, with a response of roughly -0.5 Mg ha⁻¹ per °C. Doubling [CO2] from 360 to 720 μmol mol⁻¹ increased grain yield by 7.5% on average across models and sites. That would therefore make temperature the main factor altering maize yields at the end of this century. Furthermore, there was large uncertainty in the yield response to [CO2] among models. Model responses to temperature and [CO2] did not differ whether models were run with low or high levels of calibration information.
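The two average sensitivities reported above can be combined in a back-of-the-envelope calculation. The baseline yield and the scenario below are illustrative assumptions; the actual intercomparison runs full process-based models, not this linear shortcut:

```python
# Linear shortcut using the ensemble-average sensitivities quoted in
# the abstract: roughly -0.5 Mg/ha per degree C of warming, and +7.5%
# grain yield under doubled [CO2]. Baseline and scenario are invented.
def adjusted_yield(baseline_mg_ha, warming_c, co2_doubled):
    y = baseline_mg_ha - 0.5 * warming_c   # temperature effect
    if co2_doubled:
        y *= 1.075                         # CO2 fertilization effect
    return y

# E.g., a 10 Mg/ha baseline under 2 degrees C of warming with doubled [CO2]:
print(round(adjusted_yield(10.0, 2.0, True), 3))  # 9.675
```

Even with CO2 fertilization, the temperature term dominates, which is the abstract's point about end-of-century yields.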

Relevance: 100.00%

Abstract:

Probabilistic graphical models are a major research field in artificial intelligence nowadays. The scope of this work is the study of directed graphical models for the representation of discrete distributions. Two of the main research topics in this area are performing inference over graphical models and learning graphical models from data. Traditionally, the inference process and the learning process have been treated separately, but given that the learned model's structure determines the inference complexity, such strategies can produce very inefficient models. With the purpose of learning thinner models, in this master's thesis we propose a new model for the representation of network polynomials, which we call polynomial trees. Polynomial trees are a complementary representation for Bayesian networks that allows an efficient evaluation of the inference complexity and provides a framework for exact inference. We also propose a set of methods for the incremental compilation of polynomial trees and an algorithm for learning polynomial trees from data using a greedy score+search method that includes the inference complexity as a penalty in the scoring function.
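The greedy score+search idea with a complexity penalty can be sketched on a toy problem. Polynomial trees themselves are this thesis's contribution and are not reproduced here; as a stand-in for inference complexity, the sketch penalizes the conditional table size, which grows exponentially with the parent set:

```python
import math
import numpy as np
from itertools import product

# Toy greedy parent-set search for one binary node, scoring by
# conditional log-likelihood minus a complexity penalty. The data,
# variables and penalty weight are invented for illustration.
rng = np.random.default_rng(3)
n = 2000
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
noise = rng.random(n) < 0.1
y = (a | b) ^ noise                # y depends on a OR b, plus 10% noise
c = rng.integers(0, 2, n)          # an irrelevant candidate parent

def penalized_score(y, parents, complexity_weight=2.0):
    """Conditional log-likelihood minus a table-size penalty."""
    cols = list(parents.values())
    n_params = 2 ** len(cols)                   # rows of the conditional table
    loglik = 0.0
    for config in product([0, 1], repeat=len(cols)):
        mask = np.ones(len(y), dtype=bool)
        for col, v in zip(cols, config):
            mask &= (col == v)
        n1, n0 = int((y[mask] == 1).sum()), int((y[mask] == 0).sum())
        for k in (n1, n0):
            if k:
                loglik += k * math.log(k / (n1 + n0))
    return loglik - complexity_weight * n_params * math.log(len(y))

# Greedy forward search: repeatedly add the parent that most improves
# the penalized score; stop when no addition helps.
candidates = {'a': a, 'b': b, 'c': c}
parents = {}
improved = True
while improved:
    improved = False
    best = penalized_score(y, parents)
    for name, col in candidates.items():
        if name in parents:
            continue
        trial = dict(parents, **{name: col})
        if penalized_score(y, trial) > best:
            best, pick, improved = penalized_score(y, trial), name, True
    if improved:
        parents[pick] = candidates[pick]

print(sorted(parents))  # the relevant parents a and b should be selected
```

The irrelevant variable c is rejected because its tiny likelihood gain never pays for doubling the table, which is exactly the role an inference-complexity penalty plays in the thesis's scoring function.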

Relevance: 100.00%

Abstract:

This work proposes an automatic methodology for modeling complex systems. Our methodology is based on the combination of Grammatical Evolution and classical regression to obtain an optimal set of features that form part of a linear and convex model. This technique provides both feature engineering and symbolic regression in order to infer accurate models without effort or designer expertise. As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. These facilities consume from 10 to 100 times more power per square foot than typical office buildings. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers that analytical approaches have not yet met. For this case study, our methodology minimizes the error in power prediction. The approach has been tested using real Cloud applications, resulting in an average error in power estimation of 3.98%. Our work improves the possibilities of deriving energy-efficient policies in Cloud data centers, and it is applicable to other computing environments with similar characteristics.
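The "evolve features, then fit a linear model" idea can be sketched on toy data. Instead of full Grammatical Evolution, a random search over a tiny grammar of candidate features stands in for the evolutionary loop; the variable names (cpu, mem) and the target function are invented for illustration:

```python
import numpy as np

# Toy feature-engineering-plus-regression sketch: search a small
# grammar of derived features, fit each candidate set by least squares,
# and keep the set with the lowest RMSE.
rng = np.random.default_rng(4)
n = 300
cpu = rng.uniform(0, 1, n)
mem = rng.uniform(0, 1, n)
power = 50 + 30 * cpu + 10 * cpu * mem + rng.normal(0, 0.5, n)  # invented ground truth

grammar = {                       # candidate feature constructors
    'cpu': lambda: cpu,
    'mem': lambda: mem,
    'cpu*mem': lambda: cpu * mem,
    'cpu**2': lambda: cpu ** 2,
    'sqrt(mem)': lambda: np.sqrt(mem),
}

def fit_error(names):
    """Least-squares fit of power on the chosen features; returns RMSE."""
    X = np.column_stack([np.ones(n)] + [grammar[f]() for f in names])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - power) ** 2)))

# Random search over feature subsets (a stand-in for the GE loop).
best_names, best_rmse = None, float('inf')
for _ in range(100):
    k = rng.integers(1, len(grammar) + 1)
    names = sorted(rng.choice(list(grammar), size=k, replace=False))
    rmse = fit_error(names)
    if rmse < best_rmse:
        best_names, best_rmse = names, rmse
print(best_names, round(best_rmse, 2))
```

The search settles on a feature set containing the truly relevant terms, and because the final model is linear in the selected features it stays cheap to evaluate, which is the point of the methodology for fast power estimation.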

Relevance: 100.00%

Abstract:

The private sector now plays an important role in the provision and management of transport infrastructure in lower-middle-income countries, primarily through public-private partnership (PPP) projects. Many countries have promoted PPP projects to meet the high demand for transport infrastructure, given the scarcity of public resources and the lack of efficiency in the provision of public services. As a result, PPPs have experienced significant growth worldwide over the past two decades. Despite this growing trend, many countries have not been able to attract private sector participation in the provision of their infrastructure, or have not achieved the level of private participation that they would have required to meet their objectives. According to various authors, the development and success of PPP projects for transport infrastructure in any country is conditioned by a number of factors, one of them being the quality of the institutional environment. The main objective of this dissertation is to analyze the influence of the institutional environment on the volume of investment in public-private partnership projects for transport infrastructure in lower-middle-income countries. To meet this objective, we conducted an empirical analysis of 81 countries across six regions of the world over the period 1996-2013.
The analysis developed two empirical models, applying two main methodologies: hypothesis testing and Tobit panel data models. These models allowed a comprehensive analysis of the topic. The results provide evidence that the quality of the institutional environment has a significant influence on the volume of investment in transport PPP projects. Overall, this dissertation shows empirically that the private sector has tended to invest more in countries with stronger institutional environments, that is, countries with a higher level of rule of law, political and regulatory stability, government effectiveness, and control of corruption. In addition, countries that have improved their level of institutional quality have also experienced an increase in the volume of investment in transport PPPs.
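A Tobit model suits this setting because PPP investment is censored at zero: many country-years record no investment at all. A minimal pooled Tobit sketch follows; real Tobit panel estimators add random effects, and the simulated data with a single "institutional quality" regressor are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Pooled Tobit (left-censored at zero) by maximum likelihood on
# simulated data; the regressor and coefficients are invented.
rng = np.random.default_rng(5)
n = 1000
quality = rng.normal(0, 1, n)                     # institutional quality index
latent = -0.5 + 1.0 * quality + rng.normal(0, 1, n)
invest = np.maximum(latent, 0.0)                  # observed investment, censored at 0

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    xb = b0 + b1 * quality
    cens = invest == 0.0
    ll = np.where(cens,
                  norm.logcdf(-xb / sigma),                  # P(latent <= 0)
                  norm.logpdf((invest - xb) / sigma) - log_sigma)
    return -ll.sum()

res = minimize(neg_loglik, x0=np.zeros(3), method='BFGS')
b0, b1, log_sigma = res.x
print(round(b1, 2))  # close to the true slope of 1.0
```

OLS on the censored outcome would understate the slope; the Tobit likelihood handles the zero observations explicitly, which is why the dissertation pairs it with panel data.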

Relevance: 100.00%

Abstract:

Abstract: The purpose of this thesis is to study the approximation of heat-transfer phenomena in glazed buildings through their scale replicas. The central task of this thesis is, therefore, the comparison of the thermal performance of undistorted scale models with the corresponding thermal performance of their full-scale prototypes. Indoor air temperatures of the scale model and of the corresponding prototype are the data to be compared. The first chapter of the State of the Art presents a broad view consisting of a historical review of the uses of scale models, from antiquity to the present day. The section on the State of the Technique presents the benefits and difficulties associated with their use.
Additionally, in the section on the State of the Research, current scientific papers and theses on scale models are reviewed. We focus specifically on functional scale models, that is, scale models that additionally replicate one or more of the functions of their corresponding prototypes. Scale models can be distorted or undistorted. Distorted scale models carry intentional changes, whether dimensions scaled unevenly or altered constructive characteristics and materials, introduced to obtain a specific performance, for instance a specific thermal performance. Undistorted scale models, scaled evenly, replicate as far as possible the dimensional proportions and constructive configurations of their prototypes of reference. These undistorted, functional scale models are especially useful for architects because they can serve simultaneously as functional elements of analysis and as decision-making elements during design. Although they are versatile, it is remarkable how little such models have been used for studying the thermal performance of buildings. Subsequently, the theories for analyzing the experimental thermal data collected from the scale models, and their applicability to the corresponding full-scale prototypes, are explained. The experiments, in the laboratory and under outdoor conditions, are then detailed. First, experiments with simple cube models at different scales are described; in every test, the larger prototype and the corresponding undistorted scale model were subjected to the same environmental conditions. Second, a step forward is taken with simultaneous outdoor tests of an undistorted scale model, a replica of a relatively simple lightweight glazed building.
This experiment consists of monitoring the undistorted scale model of the prototype workshop located at the School of Architecture (ETSAM) of the Technical University of Madrid (UPM). For the analysis of the experimental data, the relevant theories and resources are applied: direct comparisons, statistical analyses, Dimensional Analysis and, last but not least, simulations. Simulations, in particular, allow flexible comparisons with the experimental data. Here, in addition to the simulation software EnergyPlus, a simulation algorithm is developed ad hoc for this research. Finally, the discussion and conclusions of this research are presented.
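A minimal sketch of the kind of dimensional-analysis reasoning used to relate a scale model to its prototype. Assuming the same materials (same thermal diffusivity) and conduction-dominated behaviour, equal Fourier numbers (Fo = αt/L²) imply that thermal events in the model unfold faster by the square of the geometric scale factor. The 1:5 scale and the numbers below are illustrative, not taken from the thesis.

```python
# Hypothetical sketch: time scaling between a full-scale prototype and an
# undistorted scale model by matching the Fourier number, Fo = alpha*t/L**2.
# Assumes the same materials (same alpha) and conduction-dominated behaviour.

def model_time(t_prototype_s: float, scale: float) -> float:
    """Time in the scale model corresponding to t_prototype_s in the
    full-scale prototype, for a geometric scale factor
    scale = L_model / L_prototype (same materials assumed)."""
    return t_prototype_s * scale ** 2

# Example: a hypothetical 1:5 model of the prototype workshop.
t_proto = 3600.0                         # one hour of prototype time, in seconds
t_model = model_time(t_proto, 1.0 / 5.0) # ~ 144 s: events run ~25x faster
```

This is why distorted models are sometimes built deliberately: if the time compression of an evenly scaled model is inconvenient, materials or dimensions can be altered to tune it.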

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The various theories of capital structure have attracted interest and motivated many studies on the subject, yet no consensus has been reached. Another apparently little-explored topic is the firm life cycle and how it may influence capital structure. This study aimed to identify which determinants are most relevant to firms' indebtedness and whether these determinants change depending on the firm's life-cycle stage, drawing on Trade-Off, Pecking Order and Agency theories. To this end, a fixed-effects panel analysis was applied to a sample of Brazilian listed companies, using secondary data available in Economática® for the period 2005 to 2013 and the BM&FBOVESPA sector classification. The main result is the same behaviour across the full sample and the high- and low-growth groups for book leverage: the determinant profitability shows a negative relation, while growth opportunity and size show positive relations. For the high- and low-growth groups some determinants behaved differently: uniqueness was significant in both groups, positive for low growth and negative for high growth, while asset collateral value and the non-debt tax shield were significant only in the low-growth group. For market leverage, significance was found for the non-debt tax shield and uniqueness. These results reinforce the argument that the life cycle influences capital structure.
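The fixed-effects estimator used in studies like this one removes firm-specific levels by demeaning each variable within each firm before regressing. A toy one-regressor sketch of that "within" transformation, with invented firm names and numbers chosen so the within slope is exactly -0.5 (leverage falling in profitability, as the study's main result suggests):

```python
# Illustrative "within" (fixed-effects) estimator for one regressor.
# Firm IDs and values are invented; only the demeaning step is the point.

from collections import defaultdict

def fixed_effects_slope(data):
    """data: list of (firm_id, x, y) observations.
    Returns the slope of y on x after demeaning both variables
    within each firm, which removes firm-specific fixed effects."""
    by_firm = defaultdict(list)
    for firm, x, y in data:
        by_firm[firm].append((x, y))
    num = den = 0.0
    for obs in by_firm.values():
        mx = sum(x for x, _ in obs) / len(obs)   # firm mean of x
        my = sum(y for _, y in obs) / len(obs)   # firm mean of y
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Toy panel: two firms with different leverage levels but the same
# within-firm relation y = level - 0.5 * x.
panel = [("A", 0.10, 0.60), ("A", 0.20, 0.55), ("A", 0.30, 0.50),
         ("B", 0.10, 0.40), ("B", 0.20, 0.35), ("B", 0.30, 0.30)]
print(fixed_effects_slope(panel))  # ~ -0.5
```

A pooled OLS on the same data would be biased by the level difference between firms A and B; the within transformation is what isolates the common slope.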

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In the context of cell signaling, kinetic proofreading was introduced to explain how cells can discriminate among ligands based on a kinetic parameter, the ligand-receptor dissociation rate constant. In the kinetic proofreading model of cell signaling, responses occur only when a bound receptor undergoes a complete series of modifications. If the ligand dissociates prematurely, the receptor returns to its basal state and signaling is frustrated. We extend the model to deal with systems where aggregation of receptors is essential to signal transduction, and present a version of the model for systems where signaling depends on an extrinsic kinase. We also investigate the kinetics of signaling molecules, “messengers,” that are generated by aggregated receptors but do not remain associated with the receptor complex. We show that the extended model predicts modes of signaling that exhibit kinetic discrimination for some range of parameters but for other parameter values show little or no discrimination and thus escape kinetic proofreading. We compare model predictions with experimental data.
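The core arithmetic of kinetic proofreading can be sketched in a few lines. Assuming exponential waiting times, the probability that a bound receptor completes all N modification steps (each at rate k_p) before the ligand dissociates (at rate k_off) is (k_p/(k_p + k_off))^N, so discrimination between two ligands grows geometrically with N. The rate values below are illustrative, not from the paper:

```python
# Minimal kinetic-proofreading sketch. Rate constants are illustrative:
# a bound receptor must finish N sequential modifications, each at rate
# k_p, before the ligand dissociates at rate k_off. With exponential
# waiting times, each step succeeds with probability k_p / (k_p + k_off).

def completion_probability(k_p: float, k_off: float, n_steps: int) -> float:
    return (k_p / (k_p + k_off)) ** n_steps

# Two ligands differing only in dissociation rate, with k_p = 1/s:
slow = completion_probability(1.0, 0.1, 5)   # slowly dissociating ligand
fast = completion_probability(1.0, 1.0, 5)   # 10x faster dissociation
print(slow / fast)  # discrimination ratio; grows geometrically with N
```

Setting N = 0 (or letting the messenger escape the complex early, as in the extended model) collapses the ratio toward the bare affinity difference, which is the "escape from proofreading" regime the abstract describes.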

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Recent experiments have measured the rate of replication of DNA catalyzed by a single enzyme moving along a stretched template strand. The dependence on tension was interpreted as evidence that T7 and related DNA polymerases convert two (n = 2) or more single-stranded template bases to double helix geometry in the polymerization site during each catalytic cycle. However, we find structural data on the T7 enzyme–template complex indicate n = 1. We also present a model for the “tuning” of replication rate by mechanical tension. This model considers only local interactions in the neighborhood of the enzyme, unlike previous models that use stretching curves for the entire polymer chain. Our results, with n = 1, reconcile force-dependent replication rate studies with structural data on DNA polymerase complexes.
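The "local interactions" logic can be illustrated with a simple Boltzmann factor: if each catalytic cycle converts n template bases to helix geometry, doing mechanical work n·F·Δx against the template tension F, the cycle rate carries a factor exp(-nFΔx/kBT), so an n = 2 model is exactly the square of the n = 1 factor and hence far more tension-sensitive. This is a generic sketch of that reasoning, not the paper's actual model; the per-base extension change Δx is an assumed round number:

```python
# Generic sketch (not the paper's model): Boltzmann suppression of the
# catalytic cycle by the work of converting n template bases against
# tension.  dx_nm is an assumed per-base extension change.

import math

KB_T = 4.1e-21  # thermal energy at room temperature, J (~4.1 pN*nm)

def relative_rate(force_pN: float, n: int, dx_nm: float = 0.22) -> float:
    """exp(-n * F * dx / kB*T): rate at tension F relative to zero tension."""
    work_J = n * (force_pN * 1e-12) * (dx_nm * 1e-9)
    return math.exp(-work_J / KB_T)

# At 10 pN the n = 2 factor is the square of the n = 1 factor,
# which is why the value of n matters so much when fitting
# force-dependent replication rates.
r1 = relative_rate(10.0, 1)
r2 = relative_rate(10.0, 2)
```

The abstract's point is that structural data pin n = 1, so the tension dependence must be explained by local geometry near the polymerization site rather than by a larger n.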

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In recent years VAR models have become the main econometric tool for testing whether a relationship between variables may exist and for evaluating the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the choice of sample period, set of endogenous variables, and deterministic terms). For VAR models we use the Granger causality test to assess the ability of one variable to predict another; in the case of cointegration we use VECM models to estimate jointly the long-run and short-run coefficients; and in the case of small data sets and overfitting problems we use Bayesian VAR models with impulse response functions and variance decomposition to analyse the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using specific time series and formulating different hypotheses. Three VAR models were used: first, to study monetary-policy decisions and discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence for the endogenous-money hypothesis by assessing the effects of bank securitization on the monetary-policy transmission mechanism in the United States (paper 2); third, to evaluate the effects of ageing on health expenditure in Italy in terms of economic-policy implications (paper 3). The thesis is introduced in chapter 1, which outlines the context, motivation and aim of this research, while the structure and summary, as well as the main results, are described in the remaining chapters.
Chapter 2 examines, using a first-difference VAR model with quarterly Euro-area data, whether monetary-policy decisions can be interpreted in terms of a "monetary-policy rule", with specific reference to the so-called nominal GDP targeting rule (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results show a causal relation running from the gap between the growth rates of nominal GDP and target GDP to changes in the three-month market interest rate. The same analysis does not appear to confirm a significant causal relation in the opposite direction, from changes in the market interest rate to the gap between the growth rates of nominal GDP and target GDP. Similar results were obtained by replacing the market interest rate with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule, and raises doubts, in more general terms, about the applicability of the Taylor rule and of all conventional monetary-policy rules to the case at hand. The results instead appear more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory and, more particularly, the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015). These lines of research dispute the simplistic thesis that the scope of monetary policy is the stabilization of inflation, real GDP or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex aim: the regulation of the financial system, with particular reference to the relations between creditors and debtors and the relative solvency of economic units.
Chapter 3 analyses the supply of loans, considering the endogeneity of money arising from banks' securitization activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to study the endogeneity of money in the short and long run in the United States across its two main crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, we consider the effects of financial innovation on the lending channel, using the securitization-adjusted loan series to test whether the US banking system is driven to seek cheaper sources of funding, such as securitization, under restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship among the variables in levels and assess the effects of the money supply by analysing how far monetary policy affects short-run deviations from the long-run relationship. The results show that securitization affects the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and showing that economic agents have an incentive to increase securitization as advance cover against monetary-policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the old-age index and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data drawn from the OECD and Eurostat databases.
The impulse response functions and variance decomposition show positive relationships: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the ageing index to per capita health expenditure. The impact of ageing on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely connected with ageing may be the main driver of health expenditure over the short to medium term. Good healthcare management helps improve patient well-being without increasing total health expenditure. Nevertheless, policies that improve the health status of the elderly may be needed to lower per capita demand for health and social services.
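The Granger-causality logic used in the thesis can be sketched with a toy example: x "Granger-causes" y if lagged x improves the prediction of y beyond what lagged y achieves alone, which is judged by comparing the residual sums of squares of the restricted and unrestricted regressions. Everything below is synthetic (one lag, invented data built so x drives y with a one-period delay); a real application would use quarterly macro series and a formal F test:

```python
# Toy one-lag Granger-causality check with plain OLS (normal equations).
# Synthetic data: x drives y with a one-period delay, so adding lagged x
# to the regression of y on lagged y should cut the SSR sharply.

import random

def ols_ssr(X, y):
    """Sum of squared residuals of y on columns X (intercept added),
    solving the normal equations by Gaussian elimination with pivoting."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(bj * rj for bj, rj in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

random.seed(0)
T = 200
x = [random.gauss(0, 1) for _ in range(T)]
y = [0.0]
for t in range(1, T):
    y.append(0.5 * y[-1] + 0.8 * x[t - 1] + random.gauss(0, 0.1))

Y = y[1:]
restricted = ols_ssr([[y[t - 1]] for t in range(1, T)], Y)          # y on lagged y
unrestricted = ols_ssr([[y[t - 1], x[t - 1]] for t in range(1, T)], Y)  # + lagged x
# A large SSR drop when lagged x is added is the Granger signature;
# a formal test would turn this drop into an F statistic.
print(restricted > 2 * unrestricted)  # True for this driven series
```

Running the same comparison with x and y swapped would show no improvement, mirroring the one-directional causality the thesis reports between the nominal-GDP gap and the market interest rate.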

Relevância:

100.00% 100.00%

Publicador:

Resumo:

We revisit the population synthesis of isolated radio pulsars, incorporating recent advances in the evolution of the magnetic field and of the angle between the magnetic and rotational axes, drawn from new simulations of the magneto-thermal evolution and from magnetosphere models, respectively. An interesting novelty in our approach is that we do not assume the existence of a death line. We discuss the regions in parameter space that are most consistent with the observational data. In particular, we find that any broad distribution of birth spin periods with P0 ≲ 0.5 s can fit the data, and that if the alignment angle is allowed to vary consistently with the torque model, realistic magnetospheric models are favoured over models with classical magneto-dipolar radiation losses. Assuming that the initial magnetic field follows a lognormal distribution, our optimal model has mean strength 〈log B0[G]〉 ≈ 13.0–13.2 with width σ(log B0) = 0.6–0.7. However, there are strong correlations between parameters. This degeneracy in the parameter space can be broken by an independent estimate of the pulsar birth rate or by future studies correlating this information with the population in other observational bands (X-rays and γ-rays).
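A population-synthesis run of this kind starts by drawing birth properties from the assumed priors. The hyperparameters below (mean 13.1, width 0.65) are taken from the ranges quoted in the abstract; everything else is an illustrative sketch, since the actual code would then evolve each field with the magneto-thermal model and compare the synthetic sample with the observed one:

```python
# Sketch of drawing the birth magnetic fields of a synthetic pulsar
# population: log10(B0) ~ Normal(13.1, 0.65), the lognormal hyperparameters
# quoted in the abstract.  The rest of the pipeline (field decay, alignment
# evolution, selection effects) is omitted.

import random
import statistics

random.seed(42)
MU, SIGMA = 13.1, 0.65              # <log B0[G]> and sigma(log B0)
N = 100_000                          # synthetic population size (arbitrary)

log_B0 = [random.gauss(MU, SIGMA) for _ in range(N)]
B0_gauss = [10.0 ** v for v in log_B0]   # field strengths in gauss

mean_log = statistics.fmean(log_B0)
sd_log = statistics.stdev(log_B0)
# mean_log ~ 13.1 and sd_log ~ 0.65: the sample reproduces the input
# hyperparameters, as expected for this sample size.
```

Note the skew this implies in linear units: the mean of B0_gauss sits well above 10^13.1 G, which is one reason inferences are usually stated in log space.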