954 results for dynamic factor models


Relevância: 30.00%

Resumo:

AIM: The American Society of Clinical Oncology and US Institute of Medicine emphasize the need to trial novel models of posttreatment care, and disseminate findings. In 2011, the Victorian State Government (Australia) established the Victorian Cancer Survivorship Program (VCSP), funding six 2-year demonstration projects, targeting end of initial cancer treatment. Projects considered various models, enrolling people of differing cancer types, age and residential areas. We sought to determine common enablers of success, as well as challenges/barriers. METHODS: Throughout the duration of the projects, a formal "community of practice" met regularly to share experiences. Projects provided regular formal progress reports. An analysis framework was developed to synthesize key themes and identify critical enablers and challenges. Two external reviewers examined final project reports. Discussion with project teams clarified content. RESULTS: Survivors reported interventions to be acceptable, appropriate and effective. Strong clinical leadership was identified as a critical success factor. Workforce education was recognized as important. Partnerships with consumers, primary care and community organizations; risk stratified pathways with rapid re-access to specialist care; and early preparation for survivorship, self-management and shared care models supported positive project outcomes. Tailoring care to individual needs and predicted risks was supported. Challenges included: lack of valid assessment and prediction tools; limited evidence to support novel care models; workforce redesign; and effective engagement with community-based care and issues around survivorship terminology. CONCLUSION: The VCSP project outcomes have added to growing evidence around posttreatment care. Future projects should consider the identified enablers and challenges when designing and implementing survivorship care.

Relevância: 30.00%

Resumo:

There is an urgent need to consider energy consumption when measuring total-factor productivity in the construction industry. This paper adopts the Malmquist index method to investigate the factors affecting the energy productivity of the Australian construction industry and compares them with those decomposed from total-factor productivity. An input-oriented distance function and a contemporaneous benchmark technology are employed to develop the data envelopment analysis models. The Malmquist productivity index is decomposed into technological change, pure technical efficiency change and an activity effect to gain comprehensive insights into changes in construction productivity in the Australian states and territories over the past two decades. Research results show that both energy productivity and total-factor productivity improved in Australia, particularly in relation to technological development. The pure technical efficiency and activity indices changed only slightly over time or across regions. This study demonstrates that there exists a linkage between energy productivity and total-factor productivity through their technological and technical efficiency changes. The Australian construction industry could enhance both productivities by introducing advanced technologies and implementing them efficiently.
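The decomposition described above can be made concrete. The sketch below assumes the four input-oriented distance-function values have already been obtained from DEA models; the function and argument names are illustrative, not the paper's notation:

```python
import math

def malmquist_decomposition(d_t_xt, d_t_xt1, d_t1_xt, d_t1_xt1):
    """Decompose the Malmquist productivity index between periods t and t+1.

    d_a_xb = distance of the period-b input/output bundle measured against
    the period-a frontier (each value obtained from a DEA model).
    Returns (malmquist_index, efficiency_change, technical_change).
    """
    # Efficiency change: catching up to the contemporaneous frontier.
    ec = d_t1_xt1 / d_t_xt
    # Technical change: geometric mean of the frontier shift evaluated
    # at both period bundles.
    tc = math.sqrt((d_t_xt1 / d_t1_xt1) * (d_t_xt / d_t1_xt))
    return ec * tc, ec, tc
```

A value above 1 indicates productivity growth; the two components separate "catching up" from "frontier shift", which is the distinction the abstract draws between technical efficiency and technological development.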

Relevância: 30.00%

Resumo:

Personalized predictive medicine necessitates modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories through explicit memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregularly timed events by moderating the forgetting and consolidation of memory cells. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden -- diabetes and mental health -- the results show improved modeling and risk prediction accuracy.
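The idea of moderating the forgetting of memory cells by elapsed time can be sketched as a single modified LSTM step. This is a minimal NumPy illustration of the general mechanism, not DeepCare's actual parameterization; the decay form and all names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_aware_lstm_step(x, h_prev, c_prev, dt, params):
    """One LSTM step whose forget gate is attenuated by the elapsed time dt
    (e.g. days since the previous care episode), so memories of older
    episodes are forgotten more aggressively."""
    Wf, Wi, Wo, Wc = params        # each maps concat([h_prev, x]) -> hidden
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z)
    # Irregular-time handling (assumed decay form): scale forgetting
    # by 1 / log(e + dt), so larger gaps retain less memory.
    f = f / np.log(np.e + dt)
    i = sigmoid(Wi @ z)
    o = sigmoid(Wo @ z)
    c = f * c_prev + i * np.tanh(Wc @ z)   # memory update
    h = o * np.tanh(c)                     # emitted health state
    return h, c
```

With a one-day gap the previous memory is almost fully retained; with a one-year gap the same gate passes far less of it through, which is the qualitative behavior the abstract describes.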

Relevância: 30.00%

Resumo:

DDoS attack source traceback is an open and challenging problem. Deterministic packet marking (DPM) is a simple and effective traceback mechanism, but current DPM-based traceback schemes are not practical because of their scalability constraints. We note that only a limited number of computers and routers are involved in an attack session; therefore, we only need to mark these involved nodes for traceback purposes, rather than marking every node of the Internet as existing schemes do. Based on this observation, we propose a novel marking on demand (MOD) traceback scheme built on the DPM mechanism. To trace back to the involved attack sources, we only need to mark the involved ingress routers using the traditional DPM strategy. As in existing schemes, we require participating routers to install a traffic monitor. When a monitor notices a surge of suspicious network flows, it requests unique marks from a globally shared MOD server and marks the suspicious flows with them. At the same time, the MOD server records the marks and their related requesting IP addresses. Once a DDoS attack is confirmed, the victim can obtain the attack sources by querying the MOD server with the marks extracted from attack packets. Moreover, we use the marking space in a round-robin style, which essentially addresses the scalability problem of existing DPM-based traceback schemes. We establish a mathematical model for the proposed traceback scheme and thoroughly analyze the system. Theoretical analysis and extensive real-world data experiments demonstrate that the proposed traceback method is feasible and effective.
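The server-side bookkeeping can be sketched as a toy class. The names, the 16-bit marking space, and the "newest request wins the slot" policy are illustrative assumptions, not the paper's exact protocol; the point is the round-robin reuse of a finite marking space and the victim-side lookup:

```python
class MODServer:
    """Toy marking-on-demand server: hands out marks from a finite marking
    space in round-robin order and records which ingress router requested
    each mark, so a victim can later map marks back to routers."""

    def __init__(self, mark_space=2**16):
        self.mark_space = mark_space
        self.next_mark = 0
        self.mark_to_router = {}

    def request_mark(self, router_ip):
        """Called by a traffic monitor that sees a surge of suspicious flows."""
        mark = self.next_mark
        self.next_mark = (self.next_mark + 1) % self.mark_space  # round-robin reuse
        self.mark_to_router[mark] = router_ip  # most recent holder of this slot
        return mark

    def traceback(self, marks):
        """Victim-side lookup: map marks extracted from attack packets back
        to the ingress routers closest to the attack sources."""
        return {m: self.mark_to_router.get(m) for m in marks}
```

Because marks are only allocated to routers that actually observe suspicious traffic, the marking space scales with concurrent attack sessions rather than with the size of the Internet, which is the scalability argument in the abstract.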

Relevância: 30.00%

Resumo:

In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, during the period 1976-1992. We also test a conditional APT model by using the difference between the 3-day rate (Cdb) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.
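As a rough illustration of the two-factor specification, not the paper's conditional GMM estimation, a time-series regression of portfolio returns on the market factor and the rate-spread factor might look like the following; all data here are simulated and all parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000

# Simulated factors: market portfolio return and the short-rate spread
# standing in for the inflation-risk factor (illustrative magnitudes).
market = rng.normal(0.01, 0.05, T)
spread = rng.normal(0.00, 0.02, T)

# Simulated portfolio returns with known loadings on the two factors.
true_betas = np.array([0.9, 1.5])
returns = 0.002 + true_betas[0] * market + true_betas[1] * spread \
    + rng.normal(0, 0.01, T)

# Two-factor regression: intercept, market beta, spread beta.
X = np.column_stack([np.ones(T), market, spread])
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
alpha, beta_mkt, beta_spread = coef
```

OLS is the just-identified special case of GMM with instruments equal to the regressors; the paper's conditional setup adds instruments from the information set, but the pricing role of the second factor is already visible in its estimated loading.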

Relevância: 30.00%

Resumo:

Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, and studying monetary policy implications. Dynamic term structure models, in turn, equipped with stronger economic structure, have mainly been adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage affects forecasting. We construct cross-section (allowing arbitrages) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
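The horse race between the two model versions rests on two summary statistics. A minimal sketch of how bias and root mean square error would be computed from forecast and realized rates (names are illustrative):

```python
import numpy as np

def bias_and_rmse(forecasts, realized):
    """Mean forecast error (bias) and root mean square error for a series
    of interest-rate forecasts against realized rates."""
    errors = np.asarray(forecasts, dtype=float) - np.asarray(realized, dtype=float)
    bias = errors.mean()
    rmse = np.sqrt((errors ** 2).mean())
    return bias, rmse
```

In the paper's setting these two numbers would be computed per maturity and per forecasting horizon, once for the cross-section version and once for the arbitrage-free version, and then compared.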

Relevância: 30.00%

Resumo:

Several works in the shopping-time and human-capital literatures, owing to the nonconcavity of the underlying Hamiltonian, use first-order conditions in dynamic optimization to characterize necessity, but not sufficiency, in intertemporal problems. In this work I choose one paper in each of these two areas and show that optimality can be characterized by means of a simple application of Arrow's (1968) sufficiency theorem.
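The theorem the abstract invokes can be stated compactly. The following is a standard textbook formulation, not the paper's exact statement:

```latex
\textbf{Arrow's sufficiency theorem (sketch).}
Let $H(x,u,\lambda,t)$ be the Hamiltonian of the control problem and define
the maximized Hamiltonian
\[
  H^{0}(x,\lambda,t) \;=\; \max_{u}\, H(x,u,\lambda,t).
\]
If $(x^{*}(t),u^{*}(t))$ satisfies the Pontryagin first-order conditions with
costate $\lambda(t)$, the relevant transversality condition holds, and
$H^{0}(x,\lambda(t),t)$ is concave in $x$ for every $t$, then
$(x^{*},u^{*})$ is optimal -- even when $H$ itself is not concave in $(x,u)$,
which is precisely the situation in the shopping-time and human-capital
models discussed above.
```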

Relevância: 30.00%

Resumo:

This paper confronts the Capital Asset Pricing Model (CAPM) and the 3-factor Fama-French (FF) model using both Brazilian and US stock market data for the same sample period (1999-2007). The US data serve only as a benchmark for comparative purposes. We use two competing econometric methods, the Generalized Method of Moments (GMM) of Hansen (1982) and the Iterative Nonlinear Seemingly Unrelated Regression Estimation (ITNLSUR) of Burmeister and McElroy (1988). Both methods nest other options based on the Fama-MacBeth (1973) procedure. The estimations show that the FF model fits the Brazilian data better than the CAPM, although it is imprecise compared with its US analog. We argue that this is a consequence of the absence of clear-cut anomalies in Brazilian data, especially those related to firm size; the data reveal the presence of a value premium but not of a size premium. The tests of the efficiency of the models -- nullity of intercepts and fit of the cross-sectional regressions -- yielded mixed conclusions. The intercept tests failed to reject the CAPM when Brazilian value-premium-wise portfolios were used, contrasting with US data, a very well documented conclusion. The ITNLSUR estimated an economically reasonable and statistically significant market risk premium for Brazil of around 6.5% per year without resorting to any particular data-set aggregation. However, we could not find the same for the US data over the identical period, or even using a larger data set.

Relevância: 30.00%

Resumo:

This dissertation proposes a bivariate Markov-switching dynamic conditional correlation model for estimating the optimal hedge ratio between spot and futures contracts. It accounts for the cointegration between the series and captures the leverage effect in the return equation. The model is applied to daily futures and spot prices of the Bovespa Index and the R$/US$ exchange rate. The results, in terms of variance reduction and utility, show that the bivariate Markov-switching model outperforms strategies based on ordinary least squares and error-correction models.
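For reference, the static minimum-variance hedge ratio that the regime-switching model generalizes can be sketched directly; in the dissertation's approach the conditional covariance and variance, and hence the ratio, vary with the Markov state, while here they are constant sample moments (names are illustrative):

```python
import numpy as np

def minimum_variance_hedge(spot_returns, futures_returns):
    """Static minimum-variance hedge ratio h* = Cov(s, f) / Var(f),
    plus the fraction of spot-return variance removed by hedging."""
    s = np.asarray(spot_returns, dtype=float)
    f = np.asarray(futures_returns, dtype=float)
    h = np.cov(s, f)[0, 1] / np.var(f, ddof=1)
    hedged = s - h * f                         # hedged portfolio return
    reduction = 1.0 - np.var(hedged, ddof=1) / np.var(s, ddof=1)
    return h, reduction
```

The variance-reduction figure returned here is the same criterion the dissertation uses to compare the Markov-switching strategy against the OLS and error-correction benchmarks.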

Relevância: 30.00%

Resumo:

This doctoral dissertation analyzes two novels by the American novelist Robert Coover as examples of hypertextual writing on the book-bound page, as tokens of hyperfiction. The complexity displayed in the novels, John's Wife and The Adventures of Lucky Pierre, integrates the cultural elements that characterize the contemporary condition of capitalism and the technologized practices that have fostered a different subjectivity evidenced in hypertextual writing and reading: posthuman subjectivity. The models that account for the complexity of each novel are drawn from the concept of strange attractors in Chaos Theory and from the concept of the rhizome in Nomadology. The transformations the characters undergo in the degree of their corporeality set the plane on which to discuss turbulence and posthumanity. The notions of dynamic patterns and strange attractors, along with the concepts of the Body without Organs and the Rhizome, are interpreted, leading to a revision of narratology and to analytical categories appropriate to the study of the novels. The reading exercised throughout this dissertation enacts Daniel Punday's corporeal reading. The changes in the characters' degree of materiality are associated with the stages of order, turbulence and chaos in the story, bearing on the constitution of subjectivity within and along the reading process. Coover's inscription of planes of consistency to counter linearity and accommodate hypertextual features in paper-supported narratives describes the characters' trajectories as rhizomatic. The study led to the conclusion that narrative today stands more as a regime in a rhizomatic relation with other regimes in cultural practice than as an exclusively literary form and genre. In addition, posthuman subjectivity emerges as a class identity that holds hypertextual novels as its literary form of choice.

Relevância: 30.00%

Resumo:

We show that Judd's (1982) method can be applied to any finite system, contrary to what he claimed in 1987. An example shows how to employ the technique to study monetary models in the presence of capital accumulation.

Relevância: 30.00%

Resumo:

This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS IV estimator to be asymptotically equivalent to an optimal GLS IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both the asymptotic and small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
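The core IV idea, instruments correlated with the endogenous regressor but uncorrelated with the error, can be sketched in its just-identified form. This is a generic illustration on simulated data, not the paper's VAR-error moment conditions; all names and magnitudes are placeholders:

```python
import numpy as np

def iv_estimate(y, X, Z):
    """Just-identified IV / method-of-moments estimator
    beta = (Z'X)^{-1} Z'y.  With over-identifying instruments this
    would be replaced by 2SLS or optimally weighted GMM."""
    return np.linalg.solve(Z.T @ X, Z.T @ y)

rng = np.random.default_rng(1)
n = 20000
z = rng.normal(size=n)                        # exogenous instrument
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(scale=0.5, size=n)   # error correlated with regressor
x = z + v                                     # endogenous regressor
y = 2.0 * x + u

beta_ols = (x @ y) / (x @ x)                  # biased upward by endogeneity
beta_iv = iv_estimate(y, x[:, None], z[:, None])[0]
```

The paper's contribution is, in effect, to show that in stationary dynamic systems with VAR errors, valid instruments of this kind can be constructed from the lagged endogenous variables themselves, enlarging the instrument set beyond what is sketched here.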

Relevância: 30.00%

Resumo:

In price-competition models, a positive consumer search cost alone does not generate an equilibrium with price dispersion. Dynamic switching-cost models, by contrast, consistently generate this phenomenon, which is well documented for retail prices. Although both literatures are vast, few models have attempted to combine the two frictions in a single framework. This work presents a dynamic price-competition model in which identical consumers face both search and switching costs. The equilibrium generates price dispersion. Moreover, because consumers must commit to a fixed sample of firms before prices are set, only two prices are considered before each purchase. This result is independent of the size of the consumer's individual search cost.

Relevância: 30.00%

Resumo:

Given the changes of recent years, as the economy shifts from an industrial to a knowledge perspective, we face a new scenario in which the individual's intellectual capital is perceived as a factor of fundamental importance for the development and growth of the organization. For this evolution to occur, however, the individual's tacit knowledge must be disseminated and shared with the other members of the organization. Companies' attention then turns to devising strategies that help improve their processes. It is also necessary to manage this whole dynamic of knowledge construction and development adequately and effectively, thereby enabling the emergence of new values and of competitive advantage. Numerous models to support the organizational learning process have been developed by various authors and scholars; among them we highlight the lessons-learned system, which is built from positive or negative experiences lived within a given context, ordered by its own cultural patterns, with real and significant impact. Based on the processes and procedures established by the coordination team of one of the products offered by FGV and its distribution network, this work aims to analyze, in light of knowledge management theory and, more specifically, of lessons-learned management, how knowledge is being managed in the Melhores Práticas (Best Practices) project created by that coordination team. It also seeks to understand whether the acquisition, development and dissemination phases in this scenario are being carried out effectively, and whether the results achieved can serve as a basis for assessing the effective sharing of knowledge.

Relevância: 30.00%

Resumo:

Our main goal is to investigate which interest-rate option valuation models are better suited to support the management of interest-rate risk. We use the German market to test seven spot-rate and forward-rate models with one and two factors on interest-rate warrants for the period from 1990 to 1993. We identify a one-factor forward-rate model and two spot-rate models with two factors that are not significantly outperformed by any of the other four models. Further rankings are possible if additional criteria are applied.