900 results for New Keynesian models
Abstract:
The relevance of human joint models has been shown in the literature. In particular, the great importance of models for the simulation of joint passive motion (i.e. motion under virtually unloaded conditions) has been outlined. Such models clarify the role played by the principal anatomical structures of the articulation, enhancing the comprehension of surgical treatments and, in particular, the design of total ankle replacements and ligament reconstructions. Equivalent rigid-link mechanisms have proved to be an efficient tool for an accurate simulation of joint passive motion. This thesis focuses on the ankle complex (i.e. the anatomical structure composed of the tibiotalar and the subtalar joints), which plays a considerable role in human locomotion. The lack of interpretative models of this articulation and the poor results of total ankle replacement arthroplasty have strongly suggested devising new mathematical models capable of reproducing the restraining function of each structure of the joint and of replicating the relative motion of the bones which constitute the joint itself. In this context, novel equivalent mechanisms are proposed for modelling ankle passive motion. Their geometry is based on the joint's anatomical structures. In particular, the role of the main ligaments of the articulation is investigated under passive conditions by means of nine 5-5 fully parallel mechanisms. Based on this investigation, a one-DOF spatial mechanism is developed for modelling the passive motion of the lower leg. The model considers many of the passive structures constituting the articulation, overcoming the limitations of previous models, which took into account only a few anatomical elements of the ankle complex. All the models have been identified from experimental data by means of an optimization procedure. The simulated motions have then been compared to the experimental ones, in order to show the efficiency of the approach and thus to deduce the role of each anatomical structure in the kinematic behavior of the ankle.
Abstract:
We present recent improvements to the modeling of the disruption of strength-dominated bodies using the Smooth Particle Hydrodynamics (SPH) technique. The improvements include an updated strength model and a friction model, which are successfully tested by a comparison with laboratory experiments. In the modeling of catastrophic disruptions of asteroids, a comparison between old and new strength models shows no significant deviation in the case of targets which are initially non-porous, fully intact and have a homogeneous structure (such as the targets used in the study by Benz and Asphaug, 1999). However, for many cases (e.g. initially partly or fully damaged targets and rubble-pile structures) we find that it is crucial that friction is taken into account and that the material has a pressure-dependent shear strength. Our investigations of the catastrophic disruption threshold Q*_D as a function of target properties and target sizes up to a few hundred km show that a fully damaged target modeled without friction has a Q*_D which is significantly (5-10 times) smaller than in the case where friction is included. When the effect of the energy dissipation due to compaction (pore crushing) is taken into account as well, the targets become even stronger (Q*_D is increased by a factor of 2-3). On the other hand, cohesion is found to have a negligible effect at large scales and is only important at scales less than or similar to 1 km. Our results show the relative effects of strength, friction and porosity on the outcome of collisions among small (less than or similar to 1000 km) bodies. These results will be used in a future study to improve existing scaling laws for the outcome of collisions (e.g. Leinhardt and Stewart, 2012). © 2014 Elsevier Ltd. All rights reserved.
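For context, the catastrophic disruption threshold discussed in this abstract is conventionally defined as follows (a standard definition from the impact literature, not quoted from the paper itself; the garbled symbols in the source record are rendered above as Q*_D):

```latex
% Specific impact energy and catastrophic disruption threshold
Q = \frac{\tfrac{1}{2}\, m_{p}\, v_{\mathrm{imp}}^{2}}{M_{\mathrm{targ}}},
\qquad
Q^{*}_{D} \;\equiv\; Q \ \text{such that} \ \frac{M_{\mathrm{lr}}}{M_{\mathrm{targ}}} = \frac{1}{2}
```

where m_p and v_imp are the projectile mass and impact speed, M_targ the target mass, and M_lr the mass of the largest remnant after the collision.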
Abstract:
This paper empirically assesses whether monetary policy affects real economic activity through its effect on the aggregate supply side of the macroeconomy. Analysts typically argue that monetary policy either does not affect the real economy (the classical dichotomy) or affects the real economy only in the short run through aggregate demand (new Keynesian or new classical theories). Real business cycle theorists try to explain the business cycle with supply-side productivity shocks. We provide some preliminary evidence about how monetary policy affects the aggregate supply side of the macroeconomy through its effect on total factor productivity, an important measure of supply-side performance. The results show that monetary policy exerts a positive and statistically significant effect on the supply side of the macroeconomy. Moreover, the findings buttress the importance of countercyclical monetary policy as well as support the adoption of an optimal money supply rule. Our results also prove consistent with an effective role of monetary policy in the Great Moderation as well as in the more recent rise in productivity growth.
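For reference, total factor productivity of the kind used here as a supply-side measure is commonly computed as a growth-accounting (Solow) residual; a standard formulation (assumed here, not necessarily the paper's exact construction) is:

```latex
% TFP growth as the Solow residual of an aggregate production function
\Delta \ln A_{t} \;=\; \Delta \ln Y_{t} \;-\; \alpha\, \Delta \ln K_{t} \;-\; (1-\alpha)\, \Delta \ln L_{t}
```

where Y_t is output, K_t the capital stock, L_t labor input, and alpha the capital share of income.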
Abstract:
My dissertation focuses on developing methods for detecting gene-gene/environment interactions and imprinting effects for human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal Interaction (NOIA) model, a coding technique originally developed for gene-gene (GxG) interaction, and extending it to reduced models; (2) developing a novel statistical approach that allows for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting. In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has now become a hot topic in the effort to uncover the complex network of multiple genes or environmental exposures contributing to the outcome. Epistasis, also known as gene-gene interaction, is the departure from additive genetic effects of several genes on a trait, meaning that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we propose to implement the NOIA model for association studies with interaction for human complex traits and diseases. We compare the performance of the new statistical models we developed and the usual functional model by both simulation study and real data analysis. Both simulation and real data analysis revealed higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified significant regions for melanoma risk at 15q13.1, 16q24.3 and 9p21.3. We also identified potential interactions with these significant regions that contribute to melanoma risk. Based on the NOIA model, we developed a novel statistical approach that allows us to model effects from a genetic factor and a binary environmental exposure that jointly influence disease risk. Both simulation and real data analyses revealed higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that estimates of the parameters from logistic regression for binary traits are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions to be significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968 and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population. Our approach identified a potential interaction of SNP rs2256543 in 6p21 with smoking in contributing to lung cancer risk. Genetic imprinting is the most well-known cause of parent-of-origin effects (POEs), whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism, and obesity. This phenomenon has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon.
In this study, we propose a NOIA framework for single-locus association studies that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models, and demonstrate the conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models with different levels of POEs. Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of the variance components if Hardy-Weinberg Equilibrium (HWE) holds or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
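As an illustration of the single-locus coding that the NOIA framework builds on, the sketch below implements the NOIA "statistical" additive and dominance scores in the Álvarez-Castro and Carlborg (2007) parameterization (assumed here) and checks their frequency-weighted orthogonality numerically. The genotype frequencies are hypothetical, and the dissertation's GxG, GxE and POE extensions are not reproduced.

```python
import numpy as np

def noia_statistical_coding(p_AA, p_Aa, p_aa):
    """Single-locus NOIA 'statistical' coding (genotypes ordered AA, Aa, aa).

    Returns the additive and dominance score vectors in the
    Alvarez-Castro & Carlborg (2007) parameterization (assumed here).
    """
    mu = p_Aa + 2.0 * p_aa                       # mean count of the 'a' allele
    additive = np.array([0.0, 1.0, 2.0]) - mu    # centered allele count
    denom = p_AA + p_aa - (p_AA - p_aa) ** 2
    dominance = np.array([-2.0 * p_Aa * p_aa,    # score for AA
                          4.0 * p_AA * p_aa,     # score for Aa
                          -2.0 * p_AA * p_Aa]    # score for aa
                         ) / denom
    return additive, dominance

# Hypothetical genotype frequencies under Hardy-Weinberg proportions, freq(a) = 0.7.
p_a = 0.7
freqs = np.array([(1 - p_a) ** 2, 2 * p_a * (1 - p_a), p_a ** 2])  # (AA, Aa, aa)
a, d = noia_statistical_coding(*freqs)
# Frequency-weighted means and cross-product are all ~0, i.e. the scores are orthogonal.
print(np.dot(freqs, a), np.dot(freqs, d), np.dot(freqs, a * d))
```

The orthogonality of these two score vectors under the genotype-frequency weighting is what allows main additive and dominance (and, in the extended models, interaction) effects to be estimated without confounding.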
Abstract:
This study focuses on the present-day surface elevation of the Greenland and Antarctic ice sheets. Based on 3 years of CryoSat-2 data acquisition, we derived new digital elevation models (DEMs) as well as elevation change maps and volume change estimates for both ice sheets. Here we present the new DEMs and their corresponding error maps. The accuracy of the derived DEMs for Greenland and Antarctica is similar to that of previous DEMs obtained by satellite-based laser and radar altimeters. Comparisons with ICESat data show that 80% of the CryoSat-2 DEMs have an uncertainty of less than 3 m ± 15 m. The surface elevation change rates between January 2011 and January 2014 are presented for both ice sheets. We compared our results to elevation change rates obtained from ICESat data covering the period from 2003 to 2009. The comparison reveals that in West Antarctica the volume loss has increased by a factor of 3. It also shows an anomalous thickening in Dronning Maud Land, East Antarctica, which represents a known large-scale accumulation event. This anomaly partly compensates for the observed increased volume loss of the Antarctic Peninsula and West Antarctica. For Greenland we find a volume loss increased by a factor of 2.5 compared to the ICESat period, with large negative elevation changes concentrated at the west and southeast coasts. The combined volume change of Greenland and Antarctica for the observation period is estimated at -503 ± 107 km³/yr. Greenland contributes nearly 75% of the total volume change, with -375 ± 24 km³/yr.
Abstract:
Innovations in the current interconnected world of organizations have led to a focus on business models as a fundamental statement of direction and identity. Although industry transformations generally emanate from technological changes, recent examples suggest they may also be due to the introduction of new business models. In the past, different types of airline business models could be clearly separated from each other. However, this has changed in recent years, partly due to the concentration process and partly as a reaction to competitive pressure. At the least, it can be concluded that in the future the distinction between different business models will become less clear. To advance the use of business models as a concept, it is essential to be able to compare them and perform analyses to identify the business models that may have the highest potential. This can contribute substantially to understanding the synergies and incompatibilities when two airlines undertake a merger, as illustrated by the example of the Swiss Air-Lufthansa merger analysis. The idea is to develop quantitative methods and tools for comparing and analyzing aeronautical/airline business models. The paper identifies available methods of comparing airline business models and lays the groundwork for a quantitative model for comparing them, which can be a useful tool for business model analysis when two airlines merge.
Abstract:
The work carried out in the present doctoral dissertation should be considered part of the UPMSat-2 project, which falls within the scope of aerospace technology. The UPMSat-2 is a microsatellite (low cost and small size) designed, constructed, integrated and tested by the Universidad Politécnica de Madrid (Spain) for educational and technology demonstration purposes. The aim of the present doctoral dissertation is to present new analytical models to study the energy interdependence between the power and attitude control subsystems of a satellite. First, the simulation of the power subsystem of a microsatellite is studied, paying particular attention to the simulation of the power supply, i.e. the solar panels. Simple but accurate methods are presented for simulating the power production under variable ambient conditions through the panels' equivalent circuit. The proposed methods for calculating the equivalent circuit parameters are explicit (or at least have decoupled variables), non-iterative and straightforward; no iterations or initial values are needed to obtain the parameters. The accuracy of the method is tested and compared with similar methods from the available literature, demonstrating similar precision with greater simplicity. Second, the simulation of the attitude control subsystem of a microsatellite is presented, paying particular attention to the newly proposed control law. A new type of magnetic control applicable to Low Earth Orbit (LEO) satellites is introduced. The proposed control law is able to set the satellite rotation speed around its principal axis of maximum or minimum inertia. Moreover, for high-inclination orbits the control law favors the alignment of this axis with the direction normal to the orbital plane. The proposed control algorithm is simple: only magnetorquers are required as actuators; only magnetometers are required as sensors; no estimation of the angular velocity is needed; it does not include an in-orbit Earth magnetic field model; it does not need to be externally activated with information about the orbital characteristics; and it allows automatic reset after a total shutdown of the attitude control subsystem. The theoretical viability of the control law is demonstrated through Monte Carlo analysis. Finally, in terms of power production, it is demonstrated that the proposed attitude (principal axis perpendicular to the orbit plane, with the satellite rotating around it at a controlled rate) is well suited to the UPMSat-2 mission, as it allows a larger area of the panels to point towards the sun than the other attitudes studied. Compared with the attitude control previously proposed for the UPMSat-2, it results in a 25% increase in available power. Moreover, the proposed attitude showed significant improvements over the alternatives in terms of thermal control, as the satellite angular rotation rate can be selected to achieve a higher temperature homogenization of the satellite and the pointing antenna.
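For context, the equivalent circuit mentioned for the solar panels is commonly the single-diode, five-parameter model; a standard statement of its current-voltage characteristic (assumed here, not quoted from the thesis) is:

```latex
% Single-diode equivalent circuit of a photovoltaic cell/panel
I \;=\; I_{ph} \;-\; I_{0}\left[\exp\!\left(\frac{V + I R_{s}}{n\,V_{T}}\right) - 1\right] \;-\; \frac{V + I R_{s}}{R_{sh}}
```

where I_ph is the photogenerated current, I_0 the diode saturation current, n the ideality factor, V_T the thermal voltage, and R_s and R_sh the series and shunt resistances; explicit, non-iterative extraction methods of the kind described in the abstract estimate parameters of this type.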
Abstract:
New age models for twelve Deep Sea Drilling Project sites in the North Pacific have been produced, based on (in order of importance in our dataset) a recompilation of previously published diatom, calcareous nannofossil and foraminifer first and last occurrences, and magnetostratigraphy. The projected ages of radiolarian first and last occurrences derived from the line of correlation of the age/depth plots have been computed from these sites, and 28 radiolarian events have thereby been newly cross-calibrated to North Pacific diatom and other stratigraphy. Several of the North Pacific radiolarian events are older than in previously published equatorial Pacific calibrations, and some may be diachronous within the North Pacific. These patterns may be due to complex latitudinal patterns of clinal variation in morphotypes within lineages, or to migration events from the North Pacific towards the Equator.
Abstract:
Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models, "job openings" are a key component. Thus, when taking these models to the data, we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured either in survey or register data. Insofar as this concept captures the concept of job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better-fitting aggregate matching function vis-à-vis the traditional vacancy measure.

Firm Level Evidence from Two Vacancy Measures. Using firm-level survey and register data for both Sweden and Denmark, we show systematic mis-measurement in both vacancy measures. While the register-based measure constitutes, in the aggregate, a quarter of the survey-based measure, the latter is not a superset of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement, and are not necessarily changes in the actual number of vacancies.

Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for the slow convergence to steady state that is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows and outflows to/from unemployment are roughly 60/30. The remaining 10% are due to flows not involving unemployment. Even though temporary contracts only cover 9-11% of the working-age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics.

The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a 2-agent version of the standard New Keynesian model, where a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
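For reference, the representative-agent benchmark that the 2-agent model in the last essay modifies is the textbook three-equation New Keynesian model; a standard log-linearized sketch (not taken from the thesis) is:

```latex
% Dynamic IS curve, New Keynesian Phillips curve, and interest-rate rule
\tilde{y}_{t} = \mathbb{E}_{t}\,\tilde{y}_{t+1} - \tfrac{1}{\sigma}\left(i_{t} - \mathbb{E}_{t}\,\pi_{t+1} - r^{n}_{t}\right)
\pi_{t} = \beta\, \mathbb{E}_{t}\,\pi_{t+1} + \kappa\, \tilde{y}_{t}
i_{t} = \phi_{\pi}\, \pi_{t} + \phi_{y}\, \tilde{y}_{t}
```

In the 2-agent variant described above, the single household is split into a worker receiving only labor income and a capitalist receiving only profit income, which is what shuts down the output response under rigid prices.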
Abstract:
How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change and what role such theorizations play in mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software, Java application servers. In the present case, business practices and models were already well established among incumbents, with critical market areas allocated to a few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices are established in organizational fields when innovators are able to redefine expectations and evaluation criteria within the field. The study addresses the theoretical paradox of embedded agency: actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things. Changing prevailing institutional logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study addresses this problem through the lens of (new) institutional theory, using a discourse methodology to trace the process through which innovators were able to establish a new social and business model in the field.
Abstract:
A number of professional sectors have recently moved away from their longstanding career model of up-or-out promotion and embraced innovative alternatives. Professional labor is a critical resource in professional service firms. Therefore, changes to these internal labor markets are likely to trigger other innovations, for example in knowledge management, incentive schemes and team composition. In this chapter we look at how new career models affect the core organizing model of professional firms and, in turn, their capacity for and processes of innovation. We consider how professional firms link the development of human capital and the division of professional labor to distinctive demands for innovation and how novel career systems help them respond to these demands.
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools to improve the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All the data in conventional DEA with input and/or output ratios assume the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
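As a point of reference for readers unfamiliar with DEA, the sketch below solves the standard crisp input-oriented CCR multiplier model with scipy. It is a minimal illustration with hypothetical data; the paper's general and multiplicative interval-ratio models are not reproduced here.

```python
# Standard (crisp) input-oriented CCR multiplier model, solved per DMU as a linear program.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """CCR efficiency of DMU `o`, given inputs X (n x m) and outputs Y (n x s)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector z = [v_1..v_m, u_1..u_s] (input and output weights).
    c = np.concatenate([np.zeros(m), -Y[o]])             # maximize u . y_o
    A_eq = np.concatenate([X[o], np.zeros(s)])[None, :]  # normalization v . x_o = 1
    A_ub = np.hstack([-X, Y])                            # u . y_j - v . x_j <= 0 for all j
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun                                      # efficiency score in (0, 1]

# Toy example: 3 DMUs, 2 inputs, 1 output (illustrative numbers only).
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])
Y = np.array([[1.0], [2.0], [1.5]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```

Each DMU's score is its best achievable weighted-output sum subject to the normalization of its own weighted inputs and the requirement that no DMU exceeds a score of one under the same weights.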
Abstract:
Technological advancements enable new sourcing models in software development such as cloud computing, software-as-a-service, and crowdsourcing. While the first two are perceived as a re-emergence of older models (e.g., ASP), crowdsourcing is a new model that creates an opportunity for a global workforce to compete with established service providers. Organizations engaging in crowdsourcing need to develop the capabilities to successfully utilize this sourcing model in delivering services to their clients. To explore these capabilities we collected qualitative data from focus groups with crowdsourcing leaders at a large technology organization. New capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IS outsourcing as well as offers important insights to organizations that are experimenting with, or considering, crowdsourcing.
Abstract:
Most prior new product diffusion (NPD) models do not specifically consider the role of the business model in the process. However, the context of NPD in today's market has been changed dramatically by the introduction of new business models. Through reinterpretation and extension, this paper empirically examines the feasibility of applying Bass-type NPD models to products that are commercialized by different business models. More specifically, the results and analysis of this study consider the subscription business model for service products, the freemium business model for digital products, and a pre-paid and post-paid business model that is widely used by mobile network providers. The paper offers new insights derived from implementing the models in real-life cases. It also highlights three themes for future research.
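For reference, the Bass model underlying the Bass-type NPD models examined in the paper is usually written as follows (a standard form, not quoted from the paper):

```latex
% Bass (1969) diffusion model
\frac{f(t)}{1 - F(t)} = p + q\,F(t),
\qquad
n(t) = \left[p + \frac{q}{m}\,N(t)\right]\left(m - N(t)\right)
```

where F(t) is the cumulative fraction of adopters, N(t) = m F(t) the cumulative number of adopters, m the market potential, p the coefficient of innovation, and q the coefficient of imitation.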
Abstract:
The lecture analyses the traditional business model in scientific communication and describes the new emerging models in the context of Open Access. The copyright and licensing part provides an overview of the legal and copyright issues at the heart of Open Access.