950 results for "General allocation model"
Abstract:
Aims: We performed a randomised controlled trial in children of both genders and different pubertal stages to determine whether a school-based physical activity (PA) program during a full school year influences bone mineral content (BMC), and whether boys and girls respond differently before and during puberty. Methods: Twenty-eight 1st- and 5th-grade classes were cluster-randomised to an intervention (INT; 16 classes, n=297) or control (CON; 12 classes, n=205) group. The intervention was a multi-component PA program including daily physical education during a full school year. Each lesson was predetermined, included about ten minutes of jumping or strength-training exercises of varying intensity, and was the same for all children. Measurements included anthropometry (height and weight), Tanner stage (by self-assessment), PA (by accelerometry), and BMC of the total body, femoral neck, total hip, and lumbar spine using dual-energy X-ray absorptiometry (DXA). Bone parameters were normalised for gender and Tanner stage (pre- vs. puberty). Analyses used a regression model adjusted for gender, baseline height, baseline weight, baseline PA, post-intervention Tanner stage, baseline BMC, and cluster. Researchers were blinded to group allocation, and children in the control group did not know about the intervention arm. Results: Of the 380 children who initially agreed to DXA measurements, 217 (57%) also had post-intervention DXA and PA data. Mean age at baseline was 9.0±2.1 years for prepubertal and 11.2±0.6 years for pubertal children. At the end of the intervention, 47/114 girls and 68/103 boys were prepubertal. Compared with CON, children in INT showed statistically significant increases in BMC of the total body (adjusted z-score difference: 0.123; 95% CI 0.035 to 0.212), femoral neck (0.155; 95% CI 0.007 to 0.302), and lumbar spine (0.127; 95% CI 0.026 to 0.228).
Importantly, there was no gender-by-group interaction, but there was a Tanner-stage-by-group interaction consistently favoring prepubertal children. Conclusions: Our findings show that a general but stringent school-based PA intervention can improve BMC in elementary school children. Pubertal stage, but not gender, appears to determine bone sensitivity to physical activity loading.
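The kind of adjusted analysis the abstract describes (a regression of a z-score-normalised outcome on a group indicator plus baseline covariates) can be sketched on synthetic data as follows. All numbers and variable names here are invented for illustration, and the trial's actual model also adjusted for PA, Tanner stage, and cluster:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical synthetic data standing in for the trial's variables
group = rng.integers(0, 2, n)        # 1 = intervention (INT), 0 = control (CON)
height = rng.normal(135.0, 8.0, n)   # baseline height (cm)
weight = rng.normal(32.0, 5.0, n)    # baseline weight (kg)
bmc0 = rng.normal(1100.0, 150.0, n)  # baseline total-body BMC (g)
bmc1 = bmc0 + 40.0 + 15.0 * group + rng.normal(0.0, 30.0, n)  # post-intervention BMC

# Normalise the outcome to a z-score, as in the abstract
z = (bmc1 - bmc1.mean()) / bmc1.std(ddof=1)

# Design matrix: intercept, group indicator, baseline covariates
X = np.column_stack([np.ones(n), group, height, weight, bmc0])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)

# beta[1] estimates the adjusted z-score difference between INT and CON,
# the quantity reported with its 95% CI in the results
print(round(float(beta[1]), 3))
```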
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models, and hybrid ones. Mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied in current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition and transversion rates at the nucleotide level. In this thesis, I develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the above assumptions. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all of them to the most general one that relaxes all of them.
The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the model best aligned with the underlying characteristics of each data set. Experiments show that in none of the real data sets is holding all three assumptions realistic, which means that simple models holding these assumptions can be misleading and can result in inaccurate parameter estimates. A second goal is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, several experiments show that the proposed general model is biologically plausible.
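As a rough illustration of how Kronecker operations connect nucleotide-level and codon-level models, the sketch below builds a 64x64 codon generator from an HKY-like 4x4 rate matrix via a Kronecker sum, which permits only single-nucleotide changes per event, i.e. it encodes assumption (a). The actual KCM-based models are more general than this, and all parameter values are illustrative:

```python
import numpy as np

def hky_rate_matrix(kappa, pi):
    """HKY-style 4x4 nucleotide rate matrix (order: A, C, G, T)."""
    Q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i == j:
                continue
            # A<->G and C<->T are transitions; everything else a transversion
            transition = {i, j} == {0, 2} or {i, j} == {1, 3}
            Q[i, j] = (kappa if transition else 1.0) * pi[j]
        Q[i, i] = -Q[i].sum()  # rows of a rate matrix sum to zero
    return Q

pi = np.full(4, 0.25)  # illustrative equal base frequencies
Q = hky_rate_matrix(kappa=2.0, pi=pi)

I = np.eye(4)
# Kronecker sum: a 64x64 codon generator in which exactly one codon
# position changes per event (this encodes assumption (a))
Q_codon = (np.kron(Q, np.kron(I, I))
           + np.kron(I, np.kron(Q, I))
           + np.kron(I, np.kron(I, Q)))

print(Q_codon.shape)  # (64, 64)
```

Relaxing assumption (a) would add cross terms such as `np.kron(Q, np.kron(Q, I))`, assigning nonzero rates to double and triple substitutions; relaxing (b) would use a different nucleotide matrix at each codon position.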
Abstract:
A model of anisotropic fluid with three perfect fluid components in interaction is studied. Each fluid component obeys the stiff matter equation of state and is irrotational. The interaction is chosen to reproduce an integrable system of equations similar to the one associated with self-dual SU(2) gauge fields. An extension of the Belinsky-Zakharov version of the inverse scattering transform is presented and used to find soliton solutions to the coupled Einstein equations. A particular class of solutions that can be interpreted as lumps of matter propagating in empty space-time is examined.
Abstract:
PURPOSE: The longitudinal relaxation rate (R1) measured in vivo depends on the local microstructural properties of the tissue, such as macromolecular, iron, and water content. Here, we use whole-brain multiparametric in vivo data and a general linear relaxometry model to describe the dependence of R1 on these components. We explore (a) the validity of having a single fixed set of model coefficients for the whole brain and (b) the stability of the model coefficients in a large cohort. METHODS: Maps of magnetization transfer (MT) and effective transverse relaxation rate (R2*) were used as surrogates for macromolecular and iron content, respectively. Spatial variations in these parameters reflected variations in underlying tissue microstructure. A linear model was applied to the whole brain, including gray/white matter and deep brain structures, to determine the global model coefficients. Synthetic R1 values were then calculated using these coefficients and compared with the measured R1 maps. RESULTS: The model's validity was demonstrated by the correspondence between the synthetic and measured R1 values and by the high stability of the model coefficients across a large cohort. CONCLUSION: A single set of global coefficients can be used to relate R1, MT, and R2* across the whole brain. Our population study demonstrates the robustness and stability of the model. Magn Reson Med 73:1309-1314, 2015. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc.
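A minimal sketch of fitting such a general linear relaxometry model on synthetic data, assuming the form R1 = b0 + b_MT·MT + b_R2s·R2* across voxels; the coefficient values, noise level, and units below are invented, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # number of voxels

# Hypothetical surrogate maps (arbitrary units) standing in for MT and R2*
MT = rng.uniform(0.5, 2.0, n)
R2s = rng.uniform(10.0, 50.0, n)

# Simulated "measured" R1 that follows the linear model plus noise
b0, b_mt, b_r2s = 0.3, 0.25, 0.005  # invented ground-truth coefficients
R1 = b0 + b_mt * MT + b_r2s * R2s + rng.normal(0.0, 0.01, n)

# Fit one global set of coefficients across all voxels
X = np.column_stack([np.ones(n), MT, R2s])
coef, *_ = np.linalg.lstsq(X, R1, rcond=None)

# Synthetic R1 from the fitted global coefficients vs. "measured" R1
R1_synth = X @ coef
corr = np.corrcoef(R1, R1_synth)[0, 1]
```

A high correlation between `R1_synth` and `R1` is the kind of correspondence the abstract uses to argue that a single global coefficient set is valid.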
Abstract:
In this article we consider a cooperation problem among agents in which each agent makes a contribution (money, capital, labor, effort) in order to obtain a common benefit to be shared. The proportional distribution with respect to the contributions is an allocation that belongs to the core of the associated cooperative game. Starting from this basic model, an external agent is introduced who can make a given contribution, which serves to evaluate the potential benefit of each subcoalition of agents should this new agent eventually enter. This analysis may change the relative power of the agents; specifically, we assess whether the proportional distribution remains robust in terms of its membership in the bargaining set. To this end, we analyze the problem using the model of cooperative games with coalition structure. Since, in general, the proportional distribution does not belong to the bargaining set, we study a sufficient condition for it to do so. We also state a necessary condition, and finally we propose a sufficient condition guaranteeing that the proportional distribution is the only allocation in the bargaining set.
Abstract:
This research examines the impacts of the Swiss reform of the allocation of tasks, which was accepted in 2004 and implemented in 2008 to "re-assign" responsibilities between the federal government and the cantons. The public tasks were redistributed according to the leading and fundamental principle of subsidiarity. Seven tasks came under exclusive federal responsibility; ten came under the control of the cantons; and twenty-two "common tasks" were allocated to both the Confederation and the cantons. For these common tasks it was not possible to separate management from implementation. To deal with nineteen of them, the reform introduced the conventions-programs (CPs), which are public-law contracts signed by the Confederation with each canton. These CPs are generally valid for periods of four years (2008-11, 2012-15 and 2016-19, respectively); the third period is currently being prepared. Using principal-agent theory, I examine how contracts can improve political relations between a principal (the Confederation) and an agent (a canton). I also provide a first qualitative analysis of the impacts of these contracts on vertical cooperation and on the involvement of different actors, focusing on five CPs - protection of cultural heritage and conservation of historic monuments, encouragement of the integration of foreigners, economic development, protection against noise, and protection of nature and landscape - applied in five cantons, which represents twenty-five case studies.
Abstract:
This thesis discusses the basic problem of modern portfolio theory: how to find the optimal allocation for an investment portfolio. The theory provides a solution for an efficient portfolio, which minimises the risk of the portfolio with respect to the expected return. A central feature of all portfolios on the efficient frontier is that the investor needs to provide the expected return for each asset. Market anomalies are persistent patterns seen in the financial markets which cannot be explained by current asset pricing theory. The goal of this thesis is to study whether these anomalies can be observed among different asset classes and, if persistent patterns are found, whether the anomalies hold valuable information for determining the expected returns used in the portfolio optimisation. Market anomalies and investment strategies based on them are studied with a rolling estimation window, where the return for the following period is always based on historical information; this is also crucial when rebalancing the portfolio. The anomalies investigated within this thesis are value, momentum, reversal, and idiosyncratic volatility. The research data includes price series of country-level stock indices, government bonds, currencies, and commodities. Modern portfolio theory and the views given by the anomalies are combined by utilising the Black-Litterman model, which makes it possible to optimise the portfolio so that the investor's views are taken into account. When constructing the portfolios, the goal is to maximise the Sharpe ratio. The significance of the results is assessed by testing whether each strategy yields excess returns relative to those explained by the three-factor model. The most outstanding finding is that anomaly-based factors include valuable information that enhances efficient portfolio diversification.
When the highest Sharpe ratios for each asset class are picked from the test factors and applied to the Black-Litterman model, the final portfolio results in a superior risk-return combination. The highest Sharpe ratios are provided by the momentum strategy for stocks and by long-term reversal for the rest of the asset classes. Additionally, a strategy based on the value effect was highly appealing, performing essentially as well as the aforementioned Sharpe strategy. When studying the anomalies, it is found that 12-month momentum is the strongest effect, especially for stock indices. In addition, high idiosyncratic volatility seems to be positively related to returns on country stock indices.
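A 12-month momentum rule of the kind examined here can be sketched as follows; the return series, lookback length, and single-winner portfolio construction are illustrative simplifications, not the thesis's exact methodology:

```python
import numpy as np

rng = np.random.default_rng(2)
n_months, n_assets = 120, 4

# Hypothetical monthly returns for four asset-class indices
returns = rng.normal(0.005, 0.04, (n_months, n_assets))

lookback = 12
long_rets = []
for t in range(lookback, n_months):
    # 12-month momentum signal: cumulative return over the lookback window,
    # using only information available before month t (rolling estimation)
    signal = (1.0 + returns[t - lookback:t]).prod(axis=0) - 1.0
    winner = int(signal.argmax())  # hold the strongest asset next month
    long_rets.append(returns[t, winner])

long_rets = np.array(long_rets)
# Annualised Sharpe ratio of the momentum strategy (risk-free rate omitted)
sharpe = long_rets.mean() / long_rets.std(ddof=1) * np.sqrt(12.0)
```

In the thesis, factor returns like these would be turned into views for the Black-Litterman model rather than traded directly.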
Abstract:
Over time, the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008, the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds, and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground but still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black-Litterman (B-L) approach and an unbiased evaluation of the B-L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed-income data is employed in an empirical study that tests whether a B-L-model-based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilises a vector autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999-31.12.2012) is divided into two parts: in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B-L-model-based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return.
The VAR model is able to pick up changes in investor sentiment, and the B-L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation toward riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black-Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B-L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
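The forecasting step can be illustrated with a one-lag VAR fitted by least squares on synthetic series; the coefficient matrix, dimensions, and noise level are invented, and in the thesis the resulting forecasts would feed into the Black-Litterman views rather than being used directly:

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 200, 3

# Hypothetical stable VAR(1) dynamics for two asset-return series and one
# economic variable (coefficients are invented for illustration)
A = np.array([[0.3, 0.1, 0.0],
              [0.0, 0.2, 0.1],
              [0.1, 0.0, 0.4]])
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = A @ Y[t - 1] + rng.normal(0.0, 0.01, k)

# Fit VAR(1) by least squares: Y_t = c + B Y_{t-1} + e_t
X = np.column_stack([np.ones(T - 1), Y[:-1]])
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)

# One-step-ahead forecast; in a B-L setting these would become the "views"
forecast = np.concatenate([[1.0], Y[-1]]) @ B
```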
Abstract:
We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first one uses a standard Wald-type statistic. The second approach assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets are derived by a projection technique. The latter has two advantages: first, confidence set validity is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases of transfers from Moroccan expatriates.
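The projection technique can be sketched as follows: given a confidence set for the free parameters, mapping every parameter value in that set through the model and taking the range of the resulting endogenous variable yields a conservative confidence interval whose validity does not depend on the model being linear. The model function and parameter interval below are purely illustrative stand-ins for solving an actual CGE model:

```python
import numpy as np

def endogenous(theta):
    # Hypothetical nonlinear mapping from a free parameter to an
    # endogenous variable (a stand-in for solving the model)
    return np.exp(0.5 * theta) / (1.0 + theta ** 2)

# Suppose a 95% confidence interval [0.8, 1.6] is available for theta
theta_grid = np.linspace(0.8, 1.6, 401)

# Projection: the image of the parameter set under the model is a
# conservative confidence set for the endogenous variable
values = endogenous(theta_grid)
ci = (values.min(), values.max())
```

Because the projection only requires evaluating the model over the parameter set, it also yields simultaneous intervals for as many endogenous variables as desired.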
Abstract:
In a linear production model, we characterize the class of efficient and strategy-proof allocation functions, and the class of efficient and coalition strategy-proof allocation functions. In the former class, requiring equal treatment of equals allows us to identify a unique allocation function. This function is also the unique member of the latter class which satisfies uniform treatment of uniforms.
Abstract:
We study a simple model of assigning indivisible objects (e.g., houses, jobs, offices, etc.) to agents. Each agent receives at most one object and monetary compensations are not possible. We completely describe all rules satisfying efficiency and resource-monotonicity. The characterized rules assign the objects in a sequence of steps such that at each step there is either a dictator or two agents who “trade” objects from their hierarchically specified “endowments.”
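The simplest member of such a family of sequential rules is a pure serial dictatorship, in which a fixed order of "dictators" each picks their most-preferred remaining object; a minimal sketch with hypothetical agents, objects, and preferences (the characterized rules also allow "trading" steps between two agents, which this sketch omits):

```python
def serial_dictatorship(order, prefs, objects):
    """Each agent, in the given order, takes their most-preferred
    object among those still unassigned."""
    remaining = set(objects)
    assignment = {}
    for agent in order:
        for obj in prefs[agent]:  # prefs[agent]: best-to-worst ranking
            if obj in remaining:
                assignment[agent] = obj
                remaining.remove(obj)
                break
    return assignment

# Hypothetical agents, objects, and preference rankings
prefs = {"ann": ["house", "office", "job"],
         "bob": ["house", "job", "office"],
         "cyd": ["job", "house", "office"]}
match = serial_dictatorship(["ann", "bob", "cyd"], prefs,
                            ["house", "job", "office"])
```

Here "ann" takes "house", "bob" falls back to "job", and "cyd" is left with "office"; each agent receives at most one object and no money changes hands, matching the model's assumptions.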