967 results for Models, Theoretical


Relevance:

30.00%

Publisher:

Abstract:

Calving has been studied for glaciers ranging from slow polar glaciers that calve on dry land, such as on Deception Island (63.0°S, 60.6°W) in Antarctica, through temperate Alaskan tide-water glaciers, to fast outlet glaciers that float in fjords and calve in deep water, such as Jakobshavns Isbrae (69.2°N, 49.9°W) in Greenland. Calving from grounded ice walls and floating ice shelves is the main ablation mechanism for the Antarctic and Greenland ice sheets, as it was along marine and lacustrine margins of former Pleistocene ice sheets, and is for tide-water and polar glaciers. Yet the theory of ice calving is underdeveloped because of the inherent dangers in obtaining field data to test and constrain calving models. An attempt is made to develop a calving theory for ice walls grounded in water of variable depth, and to relate slab calving from ice walls to tabular calving from ice shelves. A calving law is derived in which calving rates from ice walls are controlled by bending creep behind the ice wall, and depend on wall height h, forward bending angle θ, crevasse distance c behind the ice wall, and depth d of water in front of the ice wall. Reasonable agreement with calving rates reported by Brown and others (1982) for Alaskan tide-water glaciers is obtained when c depends on wall height, wall height above water, and water depth. More data are needed to determine which of these dependencies is correct. A calving ratio c/h is introduced to understand the transition from slab calving to tabular calving as water deepens and the calving glacier becomes afloat.
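The abstract does not reproduce the calving law itself, but the calving ratio c/h it introduces is easy to illustrate. The short Python sketch below computes c/h for a grounded ice wall; the numbers and the threshold separating slab-like from tabular-like behaviour are hypothetical, chosen only to show how the ratio is used, and are not values from the paper.

```python
def calving_ratio(crevasse_distance_c, wall_height_h):
    """Calving ratio c/h: distance of the first crevasse behind the ice wall over wall height."""
    return crevasse_distance_c / wall_height_h

# Hypothetical numbers (not from the paper): a 60 m ice wall with the first
# major crevasse 40 m behind the calving front.
c, h = 40.0, 60.0
ratio = calving_ratio(c, h)

# Purely illustrative regime flag: a low c/h suggests slab calving from a grounded
# wall, a high c/h suggests tabular calving from a (near-)floating front; the
# threshold of 1.0 is an assumption for illustration, not a result of the paper.
regime = "slab-like" if ratio < 1.0 else "tabular-like"
print(f"c/h = {ratio:.2f} -> {regime}")
```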

Relevance:

30.00%

Publisher:

Abstract:

Research has shown repeatedly that the feel-better effect of exercise is far more moderate than generally claimed. Examinations of subgroups in secondary analyses also indicate that numerous further variables influence this relationship. One reason for inconsistencies in this research field is the lack of adequate theoretical analyses. Well-being output variables frequently possess no construct definition, and little attention is paid to moderating and mediating variables. This article integrates the main models in an overview and analyzes how secondary analyses define well-being and which areas of the construct they focus on. It then applies a moderator and/or mediator framework to examine which person and environmental variables can be found in the existing explanatory approaches in sport science and how they specify the influence of these moderating and mediating variables. Results show that the broad understanding of well-being in many secondary analyses makes findings difficult to interpret. Moreover, physiological explanatory approaches focus more on affective changes in well-being, whereas psychological approaches also include cognitive changes. The approaches focus mostly on either physical or psychological person variables and rarely combine the two, as, for example, in the dual-mode model. Whereas environmental variables specifying the treatment more closely (e.g., its intensity) are comparatively frequent, only the social support model formulates variables such as the framework in which exercise is presented. The majority of explanatory approaches use simple moderator and/or mediator models, such as the basic mediated model (e.g., the distraction hypothesis) or the multiple mediated model (e.g., the monoamine hypotheses). The discussion draws conclusions for future research.
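As an illustration of the "basic mediated model" mentioned in the abstract (e.g., the distraction hypothesis: exercise, via distraction, improves affective well-being), the Python sketch below estimates a single-mediator model with two OLS regressions. The variable names, simulated data, and effect sizes are hypothetical; the abstract reports no data or estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulated data: exercise dose X, mediator M (e.g., distraction),
# outcome Y (e.g., affective well-being). The true effect of X on Y runs via M.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

# Path a: X -> M
a_fit = sm.OLS(m, sm.add_constant(x)).fit()
# Paths b and c': Y regressed on X and M jointly
b_fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()

a = a_fit.params[1]        # effect of X on M
b = b_fit.params[2]        # effect of M on Y, holding X fixed
indirect = a * b           # mediated ("feel-better") effect of X through M
print(f"a = {a:.3f}, b = {b:.3f}, indirect effect a*b = {indirect:.3f}")
```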

Relevance:

30.00%

Publisher:

Abstract:

Within the context of exoplanetary atmospheres, we present a comprehensive linear analysis of forced, damped, magnetized shallow-water systems, exploring the effects of dimensionality, geometry (Cartesian, pseudo-spherical, and spherical), rotation, magnetic tension, and hydrodynamic and magnetic sources of friction. Across a broad range of conditions, we find that the key governing equations for atmospheres and for quantum harmonic oscillators are identical, even when forcing (stellar irradiation), sources of friction (molecular viscosity, Rayleigh drag, and magnetic drag), and magnetic tension are included. The global atmospheric structure is largely controlled by a single key parameter that involves the Rossby and Prandtl numbers. This near-universality breaks down when either molecular viscosity or magnetic drag acts non-uniformly across latitude or a poloidal magnetic field is present, suggesting that these effects will introduce qualitative changes to the familiar chevron-shaped feature witnessed in simulations of atmospheric circulation. We also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways, implying that using Rayleigh drag to mimic magnetic drag is inaccurate. We exhaustively lay down the theoretical formalism (dispersion relations, governing equations, and time-dependent wave solutions) for a broad suite of models. In all situations, we derive the steady state of an atmosphere, which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres. We elucidate a pinching effect that confines the atmospheric structure to be near the equator. Our suite of analytical models may be used to decisively develop physical intuition and as a reference point for three-dimensional magnetohydrodynamic simulations of atmospheric circulation.
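The correspondence with the quantum harmonic oscillator can be made concrete with the standard equatorial shallow-water result (a schematic, textbook form under the usual beta-plane and separability assumptions, not the paper's exact forced, damped, magnetized equation): the meridional wave structure Ψ(y) obeys

\[
\frac{d^{2}\Psi}{dy^{2}} + \left(\lambda - y^{2}\right)\Psi = 0,
\qquad
\Psi_{n}(y) \propto e^{-y^{2}/2}\, H_{n}(y),
\]

which has the form of the time-independent Schrödinger equation for a harmonic oscillator, with solutions proportional to Hermite functions; here y is dimensionless latitude and λ stands in for the wave and friction parameters.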

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Medial open wedge high tibial osteotomy is a well-established procedure for the treatment of unicompartmental osteoarthritis and symptomatic varus malalignment. We hypothesized that different fixation devices generate different fixation stability profiles for the various wedge sizes in a finite element (FE) analysis. METHODS Four types of fixation were compared: 1) first- and 2) second-generation Puddu plates, and the TomoFix plate 3) with and 4) without bone graft. Cortical and cancellous bone were modelled, and five different opening wedge sizes were studied for each model. Outcome measures included: 1) stresses in the bone, 2) relative displacement of the proximal and distal tibial fragments, 3) stresses in the plates, and 4) stresses on the upper and lower screw surfaces in the screw channels. RESULTS The highest load for all fixation types occurred in the plate axis. For the vast majority of wedge sizes, the shear (von Mises) stress was dominant in the bone, independent of fixation type. The relative displacements of the tibial fragments were low (in the μm range). With increasing wedge size this displacement tended to increase for both Puddu plates and for the TomoFix plate with bone graft; for the TomoFix plate without bone graft, the opposite trend was observed. For all fixation types, the stresses at the screw-bone contact areas that pull at the screws exceeded the allowable threshold of 1.2 MPa for at least one screw surface. Of the twelve screw surfaces studied (upper and lower surfaces of six screws), the TomoFix plate with bone graft showed excess stress at one of twelve and, without bone graft, at five of twelve. With the Puddu plates, excess stress occurred at the majority of screw surfaces. CONCLUSIONS The different fixation devices generate different fixation stability profiles for different opening wedge sizes. Based on the computational simulations, none of the studied osteosynthesis fixation types warranted unrestricted full weight bearing per se. The highest fixation stability was observed for the TomoFix plates and the lowest for the first-generation Puddu plate. These findings were obtained in theoretical models and need to be validated in controlled clinical settings.
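The stress measure referred to above is the standard von Mises equivalent stress. The Python sketch below shows the textbook formula from the Cauchy stress components together with the kind of threshold check described in the abstract; the 1.2 MPa limit is the value quoted in the abstract, while the example stress components are hypothetical.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from Cauchy stress components (same units in and out)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Hypothetical stress state at a screw-bone contact node, in MPa.
sigma_vm = von_mises(0.9, 0.2, 0.1, 0.6, 0.05, 0.0)

# Allowable pull-out stress at the screw-bone interface quoted in the abstract.
ALLOWABLE_MPA = 1.2
print(f"von Mises stress = {sigma_vm:.2f} MPa, exceeds 1.2 MPa: {sigma_vm > ALLOWABLE_MPA}")
```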

Relevance:

30.00%

Publisher:

Abstract:

We present a derivation and, based on it, an extension of a model originally proposed by V.G. Niziev to describe continuous-wave laser cutting of metals. Starting from a local energy balance and incorporating heat removal through heat conduction to the bulk material, we find a differential equation for the cutting profile. This equation is solved numerically and yields, besides the cutting profiles, the maximum cutting speed, the absorptivity profiles, and other relevant quantities. Our main goal is to demonstrate the model's capability to explain some of the experimentally observed differences between laser cutting at around 1 and 10 μm wavelengths. To compare our numerical results to experimental observations, we perform simulations for exactly the same material and laser beam parameters as those used in a recent comparative experimental study. Generally, we find good agreement between theoretical and experimental results and show that the main differences between laser cutting with 1- and 10-μm beams arise from the different absorptivity profiles and absorbed intensities. The latter, in particular, suggests that the energy transfer, and thus the laser cutting process, is more efficient in the case of laser cutting with 1-μm beams.
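The abstract does not reproduce the differential equation, but the local energy balance it starts from can be sketched in the usual textbook form (an illustrative balance, not the paper's actual equation): the absorbed laser power must cover heating and melting of the material removed from the kerf plus conduction losses into the bulk,

\[
A(\theta)\, P \;\approx\; \rho\, w\, d\, v \left[ c_p \left( T_m - T_0 \right) + L_m \right] \;+\; P_{\mathrm{cond}},
\]

where A(θ) is the angle-dependent absorptivity, P the laser power, w the kerf width, d the sheet thickness, v the cutting speed, T_m and L_m the melting temperature and latent heat of fusion, and P_cond the conductive loss. The wavelength enters mainly through A(θ) and the resulting absorbed intensity, which is where the 1 μm versus 10 μm differences discussed above originate.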

Relevance:

30.00%

Publisher:

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which substantially reduces efficiency. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
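The partition functions used in Chapter 3 are easy to make concrete: they assign a worth to every coalition within every coalition structure, which is what allows externalities across coalitions to be represented. The Python sketch below encodes a hypothetical three-player example (all numbers invented for illustration) in which the worth of the singleton {1} rises when players 2 and 3 merge, i.e., a positive externality that creates a free-riding incentive.

```python
# A partition function for players {1, 2, 3}: for every coalition structure
# (a partition of the player set) it assigns a worth to each coalition in it.
# All numbers are hypothetical, chosen only to exhibit an externality.
partition_function = {
    (frozenset({1}), frozenset({2}), frozenset({3})): {
        frozenset({1}): 1.0, frozenset({2}): 1.0, frozenset({3}): 1.0},
    (frozenset({1}), frozenset({2, 3})): {
        frozenset({1}): 2.0,         # singleton 1 gains when 2 and 3 merge:
        frozenset({2, 3}): 3.0},     # a positive externality (free-riding incentive)
    (frozenset({1, 2, 3}),): {
        frozenset({1, 2, 3}): 6.0},  # grand coalition is efficient here (6 > 2 + 3)
}

def worth(structure, coalition):
    """Worth of `coalition` given the ambient coalition structure."""
    return partition_function[structure][coalition]

print(worth((frozenset({1}), frozenset({2, 3})), frozenset({1})))  # -> 2.0
```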

Relevance:

30.00%

Publisher:

Abstract:

Severe pincer impingement (acetabular protrusio) is an established cause of hip pain and osteoarthritis. The proposed underlying pathomechanism is a dynamic pathological contact of the prominent acetabular rim with the femoral head-neck junction. However, this cannot explain the classically described medial osteoarthritis in these hips. We therefore asked: (1) Does an overload exist in the medial aspect of the protrusio joint? and (2) What is the influence of three contemporary joint-preserving procedures on load distribution in protrusio hips? In vivo force and motion data for walking and standing to sitting were applied to six 3D finite element models (normal, dysplasia, protrusio, acetabular rim trimming, acetabular reorientation, and combined reorientation/rim trimming). Compared with dysplasia, the protrusio joint resulted in opposite patterns of von Mises stress and contact pressure during walking. In protrusio hips, we found an overload at the medial margin of the lunate surface (54% higher than normal). Isolated rim trimming further increased the medial overload (up to 28% higher than protrusio), whereas acetabular reorientation with/without rim trimming reduced stresses by up to 25%. Our results can be used as an adjunct for surgical decision making in the treatment of acetabular protrusio.

Relevance:

30.00%

Publisher:

Abstract:

Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the high number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis, where synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that each address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as those describing the protoplanetary disc (of gas and solids), those that describe one (proto)planet (its solid core, gaseous envelope, and atmosphere), and finally those that describe the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address important results of planetary population synthesis, such as the planetary mass function and the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and the specialized models describing them can be put to the observational test. Owing to their nature as meta-models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, it is likely that global models will undergo significant modifications in the future. Despite these limitations, global models can already yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.

Relevance:

30.00%

Publisher:

Abstract:

In applied work economists often seek to relate a given response variable y to some causal parameter μ* associated with it. This parameter usually represents a summarization of the distribution of y based on some explanatory variables, such as a regression function, and treating it as a conditional expectation is central to its identification and estimation. However, the interpretation of μ* as a conditional expectation breaks down if some or all of the explanatory variables are endogenous. This is not a problem when μ* is modelled as a parametric function of explanatory variables, because it is well known how instrumental variables techniques can be used to identify and estimate μ*. In contrast, handling endogenous regressors in nonparametric models, where μ* is regarded as fully unknown, presents difficult theoretical and practical challenges. In this paper we consider an endogenous nonparametric model based on a conditional moment restriction. We investigate identification-related properties of this model when the unknown function μ* belongs to a linear space. We also investigate underidentification of μ* along with the identification of its linear functionals. Several examples are provided in order to develop intuition about identification and estimation for endogenous nonparametric regression and related models.
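For readers unfamiliar with the setup, the conditional moment restriction underlying this kind of endogenous nonparametric model is typically written as follows (a standard formulation consistent with the abstract, not necessarily the paper's exact notation); with instruments Z,

\[
\mathbb{E}\left[\, y - \mu^{*}(X) \mid Z \,\right] = 0 ,
\]

so that μ* is identified only up to functions μ satisfying E[μ(X) − μ*(X) | Z] = 0, and identification of μ* (or of its linear functionals) hinges on how well the instruments can distinguish elements of the linear space containing μ*.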

Relevance:

30.00%

Publisher:

Abstract:

This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that does not necessarily prove consistent with the structure of the microeconomy. Thus, the social loss function cannot serve as a direct loss function for the central bank. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretic framework.
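A minimal way to see the "inconsistent targets" point is the textbook quadratic loss (an illustration in the spirit of the abstract, not the paper's own model): if the social loss function is

\[
L^{S} = \left( \pi - \pi^{*} \right)^{2} + \lambda \left( y - y^{*} \right)^{2},
\qquad y^{*} > y^{n},
\]

with an output target y* above the level y^n consistent with the structure of the microeconomy, then discretionary (consistent) policy produces an inflation bias, whereas a central bank loss function built around the consistent target y^n can reconcile optimal and consistent policy.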

Relevance:

30.00%

Publisher:

Abstract:

This paper provides new sufficient conditions for the existence, computation via successive approximations, and stability of Markovian equilibrium decision processes for a large class of OLG models with stochastic nonclassical production. Our notion of stability is the existence of a stationary Markovian equilibrium. With nonclassical production, our economies encompass a large class of OLG models with public policy, valued fiat money, production externalities, and Markov shocks to production. Our approach combines aspects of both topological and order-theoretic fixed point theory, and provides the basis of globally stable numerical iteration procedures for computing extremal Markovian equilibrium objects. In addition to new theoretical results on existence and computation, we provide some monotone comparative statics results on the space of economies.
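Computationally, the "successive approximations" referred to above amount to monotone fixed-point iteration on an equilibrium operator. The Python sketch below shows that generic iteration scheme on a discretized policy function; the operator T is a made-up monotone contraction standing in for the model's equilibrium map, which the abstract does not specify.

```python
import numpy as np

def successive_approximation(T, x0, tol=1e-10, max_iter=10_000):
    """Iterate x_{k+1} = T(x_k) until the sup-norm change falls below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("successive approximation did not converge")

# Illustrative setup: a savings policy defined on a capital grid.
grid = np.linspace(0.1, 10.0, 200)

def T(policy):
    # Stand-in operator: a monotone contraction, NOT the paper's equilibrium map.
    return 0.5 * policy + 0.3 * np.sqrt(grid)

policy_star = successive_approximation(T, np.zeros_like(grid))
print(policy_star[:3])  # fixed point is 0.6 * sqrt(k) on the first grid nodes
```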

Relevance:

30.00%

Publisher:

Abstract:

Studies on the relationship between psychosocial determinants and HIV risk behaviors have produced little evidence to support hypotheses based on theoretical relationships. One limitation inherent in many articles in the literature is the method of measurement of the determinants and the analytic approach selected. To reduce the misclassification associated with unit scaling of measures specific to internalized homonegativity, I evaluated the psychometric properties of the Reactions to Homosexuality scale in a confirmatory factor analytic framework. In addition, I assessed the measurement invariance of the scale across racial/ethnic classifications in a sample of men who have sex with men. The resulting measure contained eight items loading on three first-order factors. Invariance assessment identified metric and partial strong invariance between racial/ethnic groups in the sample. Application of the updated measure to a structural model allowed for the exploration of direct and indirect effects of internalized homonegativity on unprotected anal intercourse. Pathways identified in the model show that drug and alcohol use at last sexual encounter, the number of sexual partners in the previous three months, and sexual compulsivity all contribute directly to risk behavior. Internalized homonegativity reduced the likelihood of exposure to drugs, alcohol, or higher numbers of partners. For men who developed compulsive sexual behavior as a coping strategy for internalized homonegativity, there was an increase in the prevalence odds of risk behavior. In the final stage of the analysis, I conducted a latent profile analysis of the items in the updated Reactions to Homosexuality scale. This analysis identified five distinct profiles, which suggested that the construct was not homogeneous in samples of men who have sex with men. Lack of prior consideration of these distinct manifestations of internalized homonegativity may have contributed to the analytic difficulty in identifying a relationship between the trait and high-risk sexual practices.

Relevance:

30.00%

Publisher:

Abstract:

The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The items which varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit. The study found that the Cg* statistic was adequate in tests of significance for most situations. However, when testing data which deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into 8 to 30 quantiles was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons which suggest otherwise. Because it does not follow a χ² distribution, the statistic is not recommended for use in models containing only categorical variables with a limited number of covariate patterns. The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations. Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates. Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles. Approaches which create equal-size groups by separating ties should be avoided.
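For reference, the deciles-of-risk version of the statistic studied here is straightforward to compute. The Python sketch below follows the standard Hosmer-Lemeshow construction (grouping observations by quantiles of the fitted probabilities); the simulated data stand in for the dissertation's 500-model simulations, and note that this naive equal-size grouping can separate tied probabilities, which the study advises against.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic using quantile ('deciles of risk') groups."""
    order = np.argsort(p_hat)
    groups = np.array_split(order, n_groups)   # equal-size groups; may separate ties
    stat = 0.0
    for g in groups:
        obs = y[g].sum()                        # observed events in the group
        exp = p_hat[g].sum()                    # expected events in the group
        n_g = len(g)
        p_bar = exp / n_g
        stat += (obs - exp) ** 2 / (n_g * p_bar * (1.0 - p_bar))
    p_value = chi2.sf(stat, df=n_groups - 2)    # conventional chi-square reference, g - 2 df
    return stat, p_value

# Simulated check: outcomes generated from the fitted probabilities, so fit should be adequate.
rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=1000)
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```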

Relevance:

30.00%

Publisher:

Abstract:

This paper shows how an Armington-Krugman-Melitz encompassing module based on Dixon and Rimmer (2012) can be calibrated, and clarifies that the initial levels of the two kinds of firm numbers, or the parameter values of the two kinds of fixed costs, that enter a Melitz-type specification can be set freely to any preferred value, just as quantities are derived from given value data by assuming some of the initial prices to be unity. In consequence, only one kind of additional information, the shape parameter related to productivity, is required in order to incorporate Melitz-type monopolistic competition and heterogeneous firms into a standard applied general equilibrium model; for a Krugman-type specification, no additional information is needed. This enables model builders in applied economics to fully enjoy the featured properties of the theoretical models developed by Krugman (1980) and Melitz (2003) in practical policy simulations at low cost.
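In the usual Melitz-type calibration, the "shape parameter related to productivity" is the shape parameter of a Pareto productivity distribution (a standard assumption in this literature; the abstract does not state the distributional form explicitly):

\[
\Pr\left(\varphi \le x\right) = 1 - \left(\frac{\varphi_{\min}}{x}\right)^{a}, \qquad x \ge \varphi_{\min},
\]

which is consistent with the abstract's claim that this single additional parameter suffices, given value data and unit initial prices, to move from the Armington or Krugman cases to the heterogeneous-firms case.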

Relevance:

30.00%

Publisher:

Abstract:

We study the dynamic response of a wind turbine structure subjected to theoretical seismic motions, taking into account the rotational component of ground shaking. Models are generated for a shallow, moderate crustal earthquake in the Madrid Region (Spain). Synthetic translational and rotational time histories are computed using the Discrete Wavenumber Method, assuming a point source and a horizontally layered earth structure. These are used to analyze the dynamic response of a wind turbine, represented by a simple finite element model. Von Mises stress values at different heights of the tower are used to study the dynamic structural response to the set of synthetic ground motion time histories.