958 results for empirical models
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, whether in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and the predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity of accounting for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and introduce the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller one may also win, provided its relative size is not too small; a larger share of self-delusional voters in the minority party decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
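The Chapter 2 mechanism described above lends itself to a small numerical check. Below is a minimal sketch, not the thesis's exact parameterization: a linear inverse demand P = a - b(q1 + q2) with zero marginal cost, a hypothetical default quantity d and a thinking cost k chosen in the intermediate range, verifying the asymmetric equilibrium in which one firm stays at the default while the other pays to best respond.

```python
# Minimal sketch of a linear symmetric Cournot duopoly with costly thinking.
# All parameter values are hypothetical, chosen so the thinking cost k is
# "intermediate" in the sense of the abstract.
a, b, d, k = 10.0, 1.0, 2.0, 2.0   # demand intercept/slope, default qty, cost

def best_response(qj):
    """Profit-maximizing quantity against a rival producing qj."""
    return max((a - b * qj) / (2 * b), 0.0)

def profit(qi, qj):
    return (a - b * (qi + qj)) * qi

# Candidate equilibrium: firm 1 stays at the default, firm 2 thinks.
q1 = d
q2 = best_response(q1)

# Firm 2 must prefer best responding (net of the thinking cost) to the default,
thinks_ok = profit(q2, q1) - k >= profit(d, q1)
# and firm 1 must prefer the free default to paying k to best respond.
default_ok = profit(d, q2) >= profit(best_response(q2), q2) - k

print(q1, q2, thinks_ok and default_ok)   # -> 2.0 4.0 True: asymmetric choices
```

With these numbers the gain from thinking against the default (4) exceeds k, while the gain from thinking against the rival's best response (1) does not, so the asymmetric profile is mutually optimal despite the firms being symmetric.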
Abstract:
The real convergence hypothesis has spurred a myriad of empirical tests and approaches in the economic literature. This Work Project tests for real output and growth convergence in all N(N-1)/2 possible pairs of output and output-growth gaps of 14 Eurozone countries. The paper follows a time-series approach: it tests for the presence of unit roots and persistence changes in the above-mentioned pairs of output gaps, as well as for the existence of growth convergence with autoregressive models. Overall, significantly greater evidence is found in support of growth convergence than of output convergence in our sample.
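As a rough illustration of the time-series approach described above, the sketch below runs an ADF unit-root test on every pairwise output gap. The DataFrame `log_output` (one column of log real output per country) is an assumption; rejecting the unit root in a gap series is read as evidence consistent with pairwise convergence.

```python
# Minimal sketch: ADF unit-root tests on all N(N-1)/2 pairwise output gaps.
from itertools import combinations

import pandas as pd
from statsmodels.tsa.stattools import adfuller

def pairwise_gap_tests(log_output: pd.DataFrame, alpha: float = 0.05):
    results = {}
    for c1, c2 in combinations(log_output.columns, 2):
        gap = (log_output[c1] - log_output[c2]).dropna()   # pairwise output gap
        stat, pvalue, *_ = adfuller(gap)                   # ADF test statistic
        results[(c1, c2)] = (stat, pvalue, pvalue < alpha) # reject -> stationary gap
    return results
```

Persistence-change tests and the autoregressive growth-convergence regressions the paper uses would follow the same pairwise loop with different test statistics.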
Abstract:
Composite materials have a complex behavior, which is difficult to predict under different types of loads. In the course of this dissertation, a methodology was developed to predict failure and damage propagation in composite material specimens. This methodology uses finite element numerical models created with the Ansys and Matlab software packages. The methodology performs an incremental-iterative analysis, which gradually increases the load applied to the specimen. Several structural failure phenomena are considered, such as fiber and/or matrix failure, delamination and shear plasticity. Failure criteria based on element stresses were implemented, and a procedure to reduce the stiffness of the failed elements was prepared. The material used in this dissertation consists of a spread tow carbon fabric with a 0°/90° arrangement, and the main numerical model analyzed is a 26-ply specimen under compression loads. Numerical results were compared with the results of specimens tested experimentally, whose mechanical properties are unknown; only the geometry of the specimens is known. The material properties of the numerical model were adjusted in the course of this dissertation in order to minimize the difference between the numerical and experimental results, reaching an error below 5% (i.e., a numerical model identification based on the experimental results was performed).
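The incremental-iterative scheme can be illustrated with a runnable toy, not the dissertation's Ansys/Matlab model: a laminate idealized as 26 plies carrying load in parallel, a max-stress failure criterion with hypothetical moduli and strengths, and failed plies knocked down to 1% of their stiffness before re-solving at the same load level.

```python
# Toy progressive-failure loop: load is increased in increments; at each
# increment the model is re-solved until no new ply fails (inner iteration).
import numpy as np

E = np.full(26, 60e9)                      # hypothetical ply moduli [Pa]
strength = np.linspace(500e6, 700e6, 26)   # hypothetical ply strengths [Pa]
area = 1e-6                                # ply cross-section [m^2]
alive = np.ones(26, dtype=bool)

for load in np.linspace(0.0, 30e3, 100):   # incremental load steps [N]
    while True:                            # iterate until no new failures
        stiffness = (E * area).sum()       # plies act in parallel
        strain = load / stiffness          # equal strain in every ply
        stress = E * strain
        newly_failed = alive & (stress > strength)   # max-stress criterion
        if not newly_failed.any():
            break                          # equilibrium at this load level
        E[newly_failed] *= 0.01            # stiffness degradation of failures
        alive &= ~newly_failed

print(f"surviving plies: {alive.sum()} / 26")
```

The inner `while` loop mirrors the iterative part of the methodology: each stiffness reduction redistributes stress, which may trigger further failures at the same applied load.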
Abstract:
Field lab: Business project
Abstract:
We intend to study the algebraic structure of simple orthogonal models in order to use them, through binary operations, as building blocks in the construction of more complex orthogonal models. We start by presenting some matrix results concerning Commutative Jordan Algebras of symmetric matrices, CJAs. Next, we use these results to study the algebraic structure of orthogonal models obtained by crossing and nesting simpler ones. Then we study normal models with orthogonal block structure, OBS, which can also be orthogonal models: for these normal models with OBS, NOBS, we obtain conditions for having complete and sufficient statistics and, hence, UMVUE, i.e., unbiased estimators with minimal covariance matrices whatever the variance components. Lastly, following Pereira et al. (2014), we study the algebraic structure of orthogonal models, mixed models whose variance-covariance matrices are all positive semidefinite linear combinations of known pairwise orthogonal orthogonal projection matrices, OPOPM, and whose least squares estimators, LSE, of estimable vectors are best linear unbiased estimators, BLUE, whatever the variance components, so they are uniformly BLUE, UBLUE. From the results on the algebraic structure we will obtain explicit expressions for the LSE of these models.
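For readers unfamiliar with the notation, here is a minimal sketch of the OBS structure in question, written in symbols assumed from the literature the abstract cites (the Q_j are the OPOPM and T denotes the orthogonal projection onto the mean space):

```latex
% Sketch of an OBS variance-covariance structure (assumed standard notation):
\[
  \mathbf{V}(\gamma) \;=\; \sum_{j=1}^{w} \gamma_j\,\mathbf{Q}_j,
  \qquad
  \mathbf{Q}_j \mathbf{Q}_{j'} = \delta_{jj'}\,\mathbf{Q}_j,
  \qquad
  \sum_{j=1}^{w} \mathbf{Q}_j = \mathbf{I}_n,
  \qquad
  \gamma_j \ge 0 .
\]
% The model is orthogonal, and LSE of estimable vectors are UBLUE, when the
% projection onto the mean space commutes with every Q_j:
\[
  \mathbf{T}\mathbf{Q}_j = \mathbf{Q}_j\mathbf{T},
  \quad j = 1,\dots,w,
  \qquad
  \mathbf{T} = \mathbf{X}\left(\mathbf{X}^{\top}\mathbf{X}\right)^{+}\mathbf{X}^{\top}.
\]
```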
Abstract:
Since the financial crisis, risk-based portfolio allocations have gained a great deal in popularity. This increase in popularity is primarily due to the fact that they make no assumptions as to the expected returns of the assets in the portfolio. These portfolios implicitly put risk management at the heart of asset allocation, hence their recent appeal. This paper will serve as a comparison of four well-known risk-based portfolio allocation methods: minimum variance, maximum diversification, inverse volatility and equally weighted risk contribution. Empirical backtests will be performed throughout rising-interest-rate periods from 1953 to 2015. Additionally, I will compare these portfolios to simpler allocation methods, such as equally weighted and a 60/40 asset-allocation mix. This paper will help to answer the question of whether these portfolios can survive in a rising interest rate environment.
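A minimal sketch of three of the allocation rules being compared, assuming a pandas DataFrame `returns` of asset returns. The minimum-variance weights use the unconstrained closed form (shorting allowed); maximum diversification and equal risk contribution, which require a numerical optimizer, are omitted.

```python
# Sketch of risk-based weighting rules; none of this reproduces the paper's
# backtest setup, only the weight definitions.
import numpy as np
import pandas as pd

def inverse_volatility_weights(returns: pd.DataFrame) -> pd.Series:
    iv = 1.0 / returns.std()
    return iv / iv.sum()                 # weight proportional to 1 / volatility

def minimum_variance_weights(returns: pd.DataFrame) -> pd.Series:
    cov = returns.cov().values
    ones = np.ones(len(cov))
    w = np.linalg.solve(cov, ones)       # unconstrained closed form: w ∝ Σ⁻¹1
    return pd.Series(w / w.sum(), index=returns.columns)

def equal_weights(returns: pd.DataFrame) -> pd.Series:
    n = returns.shape[1]
    return pd.Series(np.full(n, 1.0 / n), index=returns.columns)
```

Note that the risk-based rules use only the covariance structure of `returns`; no expected-return estimate enters, which is exactly the property the abstract highlights.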
Abstract:
Both culture coverage and digital journalism are contemporary phenomena that have undergone several transformations within a short period of time. Whenever the media enters a period of uncertainty such as the present one, there is an attempt to innovate in order to seek sustainability, skip the crisis or find a new public. This indicates that there are new trends to be understood and explored, i.e., how are media innovating in a digital environment? Not only does the professional debate about the future of journalism justify the need to explore the issue, but so do the academic approaches to cultural journalism. However, none of the studies so far have considered innovation as a motto or driver and tried to explain how the media are covering culture, achieving sustainability and engaging with the readers in a digital environment. This research examines how European media which specialize in culture or have an important cultural section are innovating in a digital environment. Specifically, we see how these innovation strategies are being pursued in relation to the approach to culture and dominant cultural areas, editorial models, the use of digital tools for telling stories, overall brand positioning and extensions, engagement with the public and business models. We conducted a mixed methods study combining case studies of four media projects, which integrates qualitative web features and content analysis, with quantitative web content analysis. Two major general-interest journalistic brands which started as physical newspapers – The Guardian (London, UK) and Público (Lisbon, Portugal) – a magazine specialized in international affairs, culture and design – Monocle (London, UK) – and a native digital media project that was launched by a cultural organization – Notodo, by La Fábrica – were the four case studies chosen. Findings suggest, on one hand, that we are witnessing a paradigm shift in culture coverage in a digital environment, challenging traditional boundaries related to cultural themes and scope, angles, genres, content format and delivery, engagement and business models. Innovation in the four case studies lies especially along the product dimensions (format and content), brand positioning and process (business model and ways to engage with users). On the other hand, there are still perennial values that are crucial to innovation and sustainability, such as commitment to journalism, consistency (to the reader, to brand extensions and to the advertiser), intelligent differentiation and the capability of knowing what innovation means and how it can be applied, since this thesis also confirms that one formula doesn't suit all. Changing mindsets, overcoming cultural inertia and optimizing the memory of the websites, looking at them as living, organic bodies which continuously interact with the readers in many different ways, and not as a closed collection of articles, are still the main challenges for some media.
Abstract:
Neurological disorders are a major concern in modern societies, with increasing prevalence mainly related to higher life expectancy. Most of the currently available therapeutic options can only control and ameliorate patients' symptoms, often becoming refractory over time. Therapeutic breakthroughs and advances have been hampered by the lack of accurate central nervous system (CNS) models. The development of these models allows the study of disease onset/progression mechanisms and the preclinical evaluation of novel therapeutics. This has traditionally relied on genetically engineered animal models that often diverge considerably from the human phenotype (developmentally, anatomically and physiologically) and on 2D in vitro cell models, which fail to recapitulate the characteristics of the target tissue (cell-cell and cell-matrix interactions, cell polarity). The in vitro recapitulation of CNS phenotypic and functional features requires the implementation of advanced culture strategies that make it possible to mimic the in vivo structural and molecular complexity. Models based on the differentiation of human neural stem cells (hNSC) in 3D cultures have great potential as complementary tools in preclinical research, bridging the gap between human clinical studies and animal models. This thesis aimed at the development of novel human 3D in vitro CNS models by integrating agitation-based culture systems and a wide array of characterization tools. Neural differentiation of hNSC as 3D neurospheres was explored in Chapter 2. Here, it was demonstrated that human midbrain-derived neural progenitor cells of fetal origin (hmNPC) can generate complex tissue-like structures containing functional dopaminergic neurons, as well as astrocytes and oligodendrocytes. Chapter 3 focused on the development of cellular characterization assays for cell aggregates based on light-sheet fluorescence imaging systems, which resulted in increased spatial resolution both for fixed samples and for live imaging. The applicability of the developed human 3D cell model to preclinical research was explored in Chapter 4, evaluating the potential of a viral vector candidate for gene therapy. The efficacy and safety of helper-dependent CAV-2 (hd-CAV-2) for gene delivery in human neurons were evaluated, demonstrating increased neuronal tropism, efficient transgene expression and minimal toxicity. The potential of human 3D in vitro CNS models to mimic brain functions was further addressed in Chapter 5. Exploring the use of 13C-labeled substrates and Nuclear Magnetic Resonance (NMR) spectroscopy tools, neural metabolic signatures were evaluated, showing lineage-specific metabolic specialization and the establishment of neuron-astrocytic shuttles upon differentiation. Chapter 6 focused on transferring the knowledge and strategies described in the previous chapters to the implementation of a scalable and robust process for the 3D differentiation of hNSC derived from human induced pluripotent stem cells (hiPSC). Here, software-controlled perfusion stirred-tank bioreactors were used as the technological system to sustain cell aggregation and differentiation. The work developed in this thesis provides practical and versatile new in vitro approaches to model the human brain. Furthermore, the culture strategies described herein can be extended to other sources of neural phenotypes, including patient-derived hiPSC. The combination of this 3D culture strategy with the implemented characterization methods represents a powerful complementary tool applicable in drug discovery, toxicology and disease modeling.
Abstract:
This paper examines the impact of historic amenities on residential housing prices in the city of Lisbon, Portugal. Our study is directed towards identifying the spatial variation of amenity values for churches, palaces, lithic (stone) architecture and other historic amenities via the housing market, making use of both global and local spatial hedonic models. Our empirical evidence reveals that different types of historic and landmark amenities provide different housing premiums. While having a local non-landmark church within 100 meters increases housing prices by approximately 4.2%, higher concentrations of non-landmark churches within 1000 meters yield negative effects on the order of 0.1% of prices, with landmark churches having a greater negative impact of around 3.4%. In contrast, higher concentrations of both landmark and non-landmark lithic structures positively influence housing prices, on the order of 2.9% and 0.7% respectively. Global estimates indicate a negative effect of protected zones; however, this significance is lost when accounting for heterogeneity within these areas. We see that the designation of historic zones may counteract the negative effects on property values of nearby neglected buildings in historic neighborhoods by setting additional regulations ensuring that dilapidated buildings do not damage the city's beauty or erode its historic heritage. Further, our results from a geographically weighted regression specification indicate the presence of spatial non-stationarity in the effects of different historic amenities across the city of Lisbon, with variation between historic and more modern areas.
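A minimal sketch of a global hedonic specification of the kind described above, assuming a DataFrame `df` of transactions with log price, structural controls and amenity variables; all column names are hypothetical. The local counterpart (geographically weighted regression) would re-estimate such a model with spatial kernel weights around each location.

```python
# Sketch of a global spatial hedonic regression with amenity variables.
import statsmodels.formula.api as smf

model = smf.ols(
    "log_price ~ area_m2 + n_rooms + building_age"
    " + church_within_100m"        # non-landmark church nearby (dummy)
    " + churches_within_1000m"     # concentration of churches (count)
    " + lithic_within_1000m"       # concentration of lithic structures
    " + protected_zone",           # historic-zone designation (dummy)
    data=df,
).fit(cov_type="HC1")              # heteroskedasticity-robust standard errors
print(model.summary())
```

Because the outcome is log price, a coefficient of roughly 0.042 on `church_within_100m` would correspond to the ~4.2% premium reported above.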
Abstract:
This Work Project is based on the MIES (Map of Innovation and Social Entrepreneurship in Portugal) database and aims to understand the characteristics of social business models in the context of the Portuguese market, by determining whether they follow the characteristics proposed by John Elkington and Pamela Hartigan, and then adding to their matrix. Furthermore, it tries to determine success patterns by comparing a group of successful social ventures with a group of less successful ones, with the objective of increasing the knowledge of social entrepreneurship as it applies to Portugal and providing a framework for future study.
Abstract:
The purpose of this thesis is to investigate how far the education level of the second or third generation of publicly traded German family firms affects post-succession firm performance. Using a correlational and regression design, the aim is to examine how several variables influence the performance of family firms. Performance measures, such as ROA and Tobin's q, together with variables such as education level and succession period, are used to test analytically whether a positive succession trend occurs. However, only a less rigid version of the model shows empirical evidence of such a trend.
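A minimal sketch of the correlational/regression design as described, with a hypothetical DataFrame `firms` and illustrative column names; nothing here reproduces the thesis's actual data, controls or model variants.

```python
# Sketch: regress each performance measure on successor characteristics.
import statsmodels.formula.api as smf

for outcome in ("roa", "tobins_q"):        # the two performance measures
    fit = smf.ols(
        f"{outcome} ~ education_level + succession_period + firm_size",
        data=firms,
    ).fit()
    # A positive, significant education_level coefficient would support
    # the hypothesized positive succession trend.
    print(outcome, fit.params["education_level"], fit.pvalues["education_level"])
```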
Abstract:
"The idea that social processes develop in a cyclical manner is somewhat like a 'Lorelei'. Researchers are lured to it because of its theoretical promise, only to become entangled in (if not wrecked by) messy problems of empirical inference. The reasoning leading to hypotheses of some kind of cycle is often elegant enough, yet the data from repeated observations rarely display the supposed cyclical pattern. (...) In addition, various 'schools' seem to exist which frequently arrive at different conclusions on the basis of the same data." (van der Eijk and Weber 1987:271). Much of the empirical controversy around these issues arises because of three distinct problems: the coexistence of cycles of different periodicities, the possibility of transient cycles and the existence of cycles without fixed periodicity. In some cases, there are no reasons to expect any of these phenomena to be relevant. Seasonality caused by Christmas is one such example (Wen 2002). In such cases, researchers mostly rely on spectral analysis and Auto-Regressive Moving-Average (ARMA) models to estimate the periodicity of cycles. However, and this is particularly true in the social sciences, sometimes there are good theoretical reasons to expect irregular cycles. In such cases, "the identification of periodic movement in something like the vote is a daunting task all by itself. When a pendulum swings with an irregular beat (frequency), and the extent of the swing (amplitude) is not constant, mathematical functions like sine-waves are of no use." (Lebo and Norpoth 2007:73) In the past, this difficulty has led to two different approaches. On the one hand, some researchers dismissed these methods altogether, relying on informal alternatives that do not meet rigorous standards of statistical inference. Goldstein (1985 and 1988), studying the severity of Great Power wars, is one such example. On the other hand, there are authors who transfer the assumptions of spectral analysis (and ARMA models) into fundamental assumptions about the nature of social phenomena. This type of argument was produced by Beck (1991) who, in a reply to Goldstein (1988), claimed that only "fixed period models are meaningful models of cyclic phenomena". We argue that wavelet analysis, a mathematical framework developed in the mid-1980s (Grossman and Morlet 1984; Goupillaud et al. 1984), is a very viable alternative for studying cycles in political time-series. It has the advantage of staying close to the frequency-domain approach of spectral analysis while addressing its main limitations. Its principal contribution comes from estimating the spectral characteristics of a time-series as a function of time, thus revealing how its different periodic components may change over time. The rest of the article proceeds as follows. In the section "Time-frequency Analysis", we study in some detail the continuous wavelet transform and compare its time-frequency properties with the more standard tool for that purpose, the windowed Fourier transform. In the section "The British Political Pendulum", we apply wavelet analysis to essentially the same data analyzed by Lebo and Norpoth (2007) and Merrill, Grofman and Brunell (2011) and try to provide a more nuanced answer to the same question discussed by these authors: do British electoral politics exhibit cycles? Finally, in the last section, we present a concise list of future directions.
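A minimal sketch of the continuous wavelet transform the article advocates, using PyWavelets with a Morlet wavelet on placeholder data (`series` stands in for a political time-series such as a vote share). The resulting power surface is indexed by both time and scale, which is precisely what allows transient and non-fixed-period cycles to be seen.

```python
# Sketch: continuous wavelet transform of a time-series with a Morlet wavelet.
import numpy as np
import pywt

series = np.random.default_rng(0).standard_normal(256)  # placeholder data

scales = np.arange(2, 64)                    # scales ~ candidate cycle periods
coeffs, freqs = pywt.cwt(series, scales, "morl")

power = np.abs(coeffs) ** 2                  # wavelet power, shape (scale, time)
# power[i, t] measures how strong the cycle at scales[i] is around time t,
# so a cycle that appears, drifts in period, or dies out remains visible.
print(power.shape)                            # -> (62, 256)
```

By contrast, an ordinary periodogram would average each frequency's power over the whole sample, hiding exactly the transient behavior discussed above.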
Abstract:
This paper aims at developing a collision prediction model for three-leg junctions located on national roads (NR) in Northern Portugal. The focus is to identify factors that contribute to collision-type crashes at those locations, mainly factors related to road geometric consistency, since the literature is scarce on those, and to research the impact of three modeling methods, generalized estimating equations, random-effects negative binomial models and random-parameters negative binomial models, on the factors of those models. The database used included data published between 2008 and 2010 for 177 three-leg junctions. It was split into three groups of contributing factors which were tested sequentially for each of the adopted models: first, traffic only; then, traffic and the geometric characteristics of the junctions within their area of influence; and, lastly, adding factors which capture the difference between the geometric characteristics of the segments bordering the junctions' area of influence and the segment included in that area. The choice of the best modeling technique was supported by the result of a cross-validation carried out to ascertain the best model for the three sets of researched contributing factors. The models fitted with random-parameters negative binomial models had the best performance in this process. In the best models obtained for every modeling technique, the characteristics of the road environment, including proxy measures for geometric consistency, along with traffic volume, contribute significantly to the number of collisions. Both the variables concerning the junctions and the national highway segments in their area of influence, as well as variations from those characteristics in the roadway segments which border that area of influence, have proven their relevance; there is therefore a rightful need to incorporate the effect of geometric consistency in three-leg junction safety studies.
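As a rough sketch of the simplest of the three modeling techniques compared above, here is a plain negative binomial count model fitted with statsmodels; `junctions` and its column names are hypothetical, and the GEE and random-parameters variants require dedicated estimation routines not shown here.

```python
# Sketch: negative binomial crash-frequency model for three-leg junctions.
import statsmodels.formula.api as smf

nb = smf.negativebinomial(
    "collisions ~ log_aadt + lane_width + consistency_proxy",
    data=junctions,   # hypothetical junction-level dataset, 2008-2010
).fit()
print(nb.summary())   # overdispersion parameter alpha is estimated jointly
```

The negative binomial family is the usual choice here because crash counts are typically overdispersed relative to a Poisson model; the random-parameters variant additionally lets coefficients such as the one on `consistency_proxy` vary across junctions.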