982 results for reflective versus formative models
Abstract:
The paper compares the basic assumptions and methodology of the Von Neumann model, developed for purely abstract theoretical purposes, with those of the Leontief model, designed originally for practical applications. The similar mathematical structure of the Von Neumann model and of the closed, stationary Leontief model with a unit-length production period often leads to the false conclusion that the latter is just a simplified version of the former. It is argued that the economic assumptions of the two models are quite different, which makes such an assertion unfounded. Technical choice and joint production are indispensable features of the Von Neumann model, and the assumption of a unit-length production period excludes the possibility of taking service flows explicitly into account. All of these features, however, are completely alien to the Leontief model. It is shown that the two models are in fact special cases of a more general stock-flow stationary model, reduced to forms containing only flow variables.
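For orientation, the two closed, stationary systems being compared are conventionally written as follows; this is a standard textbook sketch in generic notation, not the paper's own exposition. In the Von Neumann model, with activity-intensity row vector x ≥ 0, price column vector p ≥ 0, input matrix A, output matrix B, growth factor α and interest factor β, an equilibrium satisfies

\[
x B \;\ge\; \alpha\, x A, \qquad B p \;\le\; \beta\, A p, \qquad x B p > 0, \qquad \alpha = \beta ,
\]

whereas the closed, stationary Leontief model, with gross-output column vector x, input-coefficient matrix A and price row vector p, reduces to the eigen-system

\[
x = A x, \qquad p = p A .
\]

The formal resemblance between the two linear eigenvalue-type problems is what invites, and the paper disputes, the reading of the Leontief system as a special case of the Von Neumann one.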
Abstract:
The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of the uncertainty for random beam height is much larger for Euler beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
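As a lumped-parameter caricature of the uncertainty-propagation comparison described above (the paper itself models beam parameters as stochastic processes and solves the problem with a Monte Carlo-Galerkin scheme, which this sketch does not attempt), one can propagate a random beam height through the midspan deflection formulas of the two theories; all dimensions, loads, and distribution parameters below are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simply supported rectangular beam under a central point load (SI units)
L = 2.0            # span [m]
b = 0.05           # width [m]
P = 10e3           # central point load [N]
E = 210e9          # elastic modulus [Pa]
nu = 0.3           # Poisson's ratio
G = E / (2 * (1 + nu))
kappa = 5.0 / 6.0  # shear correction factor for a rectangular section

h = rng.normal(0.15, 0.015, 100_000)   # uncertain beam height [m]
I = b * h**3 / 12                      # second moment of area
A = b * h                              # cross-sectional area

w_eb = P * L**3 / (48 * E * I)             # Euler-Bernoulli midspan deflection
w_ti = w_eb + P * L / (4 * kappa * G * A)  # Timoshenko adds a shear contribution

for name, w in [("Euler-Bernoulli", w_eb), ("Timoshenko", w_ti)]:
    print(f"{name}: mean = {w.mean():.3e} m, CoV = {w.std() / w.mean():.3f}")

Because the bending term scales as 1/h^3 while the shear term scales only as 1/h, the height uncertainty is amplified more strongly in the Euler-Bernoulli response, in line with the qualitative finding quoted above; the elasticity-modulus case requires the full stochastic-field treatment and is not reproduced by this sketch.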
Abstract:
Traditionally the basal ganglia have been implicated in motor behavior, as they are involved in both the execution of automatic actions and the modification of ongoing actions in novel contexts. With respect to cognition, the role of the basal ganglia has not been defined as explicitly. Regarding linguistic processes, contemporary theories of subcortical participation in language have endorsed a role for the globus pallidus internus (GPi) in the control of lexical-semantic operations. However, attempts to empirically validate these postulates have been largely limited to neuropsychological investigations of verbal fluency abilities subsequent to pallidotomy. We evaluated the impact of bilateral posteroventral pallidotomy (BPVP) on language function across a range of general and high-level linguistic abilities, and validated and extended working theories of pallidal participation in language. Comprehensive linguistic profiles were compiled up to 1 month before and 3 months after BPVP in 6 subjects with Parkinson's disease (PD). Commensurate linguistic profiles were also gathered over a 3-month period for a nonsurgical control cohort of 16 subjects with PD and a group of 16 non-neurologically impaired controls (NC). Nonparametric between-groups comparisons were conducted, and reliable change indices were calculated from baseline/3-month follow-up difference scores. Group-wise statistical comparisons between the three groups failed to reveal significant postoperative changes in language performance. Case-by-case analysis against clinically consequential change indices, however, revealed reliable alterations in performance across several language variables as a consequence of BPVP. These findings lend support to models of subcortical participation in language that promote a role for the GPi in lexical-semantic manipulation mechanisms. Concomitant improvements and decrements in postoperative performance were interpreted within the context of additive and subtractive postlesional effects. In parkinsonian cohorts, clinically reliable (as opposed to statistically significant) changes assessed case by case may provide the most accurate method of characterizing the way in which pathophysiologically divergent basal ganglia linguistic circuits respond to BPVP.
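The reliable-change computation referred to above is commonly done with a Jacobson-Truax-type index; the abstract does not state which formulation was used, so the following is only a generic illustration with invented numbers:

import math

def reliable_change_index(baseline, followup, sd_baseline, reliability):
    # Jacobson-Truax style RCI: change score divided by the standard error of the difference
    se_measurement = sd_baseline * math.sqrt(1.0 - reliability)
    se_difference = math.sqrt(2.0) * se_measurement
    return (followup - baseline) / se_difference

# Hypothetical example: a language score falling from 42 to 33, with a control-group
# SD of 8 and test-retest reliability of 0.85
rci = reliable_change_index(42, 33, sd_baseline=8, reliability=0.85)
print(round(rci, 2), "reliable decline" if rci < -1.96 else "within measurement error")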
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we impose the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and including fewer explicit technological options, likely have effects on the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
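The intertemporal efficiency condition imposed on the recursive version, that abatement be allocated so the CO2 price rises at the interest rate, is the standard Hotelling-type rule; in generic notation,

\[
p^{\mathrm{CO_2}}_{t} \;=\; p^{\mathrm{CO_2}}_{0}\,(1+r)^{t},
\]

so that marginal abatement cost is equalized across periods in present-value terms and no reallocation of abatement over time can lower the total discounted cost of meeting the cumulative emissions constraint.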
Abstract:
Background: This study evaluated the impact of 2 models of educational intervention on rates of central venous catheter-associated bloodstream infections (CVC-BSIs). Methods: This was a prospective observational study conducted between January 2005 and June 2007 in 2 medical intensive care units (designated ICU A and ICU B) in a large teaching hospital. The study was divided into 3 periods: baseline (only rates were evaluated), preintervention (a questionnaire to evaluate the knowledge of health care workers [HCWs] and observation of CVC care in both ICUs), and intervention (in ICU A, a tailored, continuous intervention; in ICU B, a single lecture). The preintervention and intervention periods for each ICU were compared. Results: During the preintervention period, 940 CVC-days were evaluated in ICU A and 843 CVC-days in ICU B. During the intervention period, 2175 CVC-days were evaluated in ICU A and 1694 CVC-days in ICU B. Questions regarding CVC insertion, disinfection during catheter manipulation, and use of an alcohol-based product during dressing application were answered correctly by 70%-100% of HCWs. Nevertheless, HCWs' adherence to these practices in the preintervention period was low for CVC handling and dressing, hand hygiene (6%-35%), and catheter hub disinfection (45%-68%). During the intervention period, HCWs' adherence to hand hygiene was 48%-98%, and adherence to hub disinfection was 82%-97%. CVC-BSI rates declined in both units. In ICU A, this decrease was progressive and sustained, from 12 CVC-BSIs/1000 CVC-days at baseline to 0 after 9 months. In ICU B, the rate initially dropped from 16.2 to 0 CVC-BSIs/1000 CVC-days, but then increased to 13.7 CVC-BSIs/1000 CVC-days. Conclusion: Personalized, continuous intervention seems to develop a "culture of prevention" and is more effective than a single intervention, leading to a sustained reduction of infection rates.
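For reference, the rates quoted are incidence densities per 1000 catheter-days,

\[
\text{CVC-BSI rate} \;=\; \frac{\text{number of CVC-BSIs}}{\text{number of CVC-days}} \times 1000 ,
\]

so that, for example, 11 infections over 940 CVC-days (hypothetical counts, not taken from the study) would correspond to about 11.7 CVC-BSIs per 1000 CVC-days.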
Abstract:
Genetic research on risk of alcohol, tobacco, or drug dependence must make allowance for the partial overlap of risk factors for initiation of use and risk factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases in estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g., never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, and under certain distributional assumptions, recovery of the simulated genetic and environmental correlations becomes possible. Illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%), with minimal overlap with genetic influences on initiation.
Abstract:
Computer Science is a subject which has difficulty in marketing itself. Further, pinning down a standard curriculum is difficult: there are many preferences which are hard to accommodate. This paper argues the case that part of the problem is the fact that, unlike more established disciplines, the subject does not clearly distinguish the study of principles from the study of artifacts. This point was raised in the Curriculum 2001 discussions, and debate needs to start in good time for the next curriculum standard. This paper provides a starting point for that debate by outlining a process by which principles and artifacts may be separated, and presents a sample curriculum to illustrate the possibilities. This sample curriculum has some positive points, though these are incidental to the need to start debating the issue. Other models, with a less rigorous ordering of principles before artifacts, would still gain from making it clearer whether a specific concept was fundamental or a property of a specific technology. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. Sometimes, however, these fluctuations become decisive, especially when unforeseen large drops in asset prices are observed that can result in huge losses or even in market crashes. The evidence shows that such events happen far more often than would be expected under the common assumption of normally distributed financial returns. It is therefore crucial to model the distribution tails properly so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of the returns of three financial indices (DJI, FTSE 100, and NIKKEI 225) representing three important financial areas of the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when performing out-of-sample estimation (in-sample, this holds for the right tail of the distribution of returns).
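A minimal sketch of the two-step conditional-EVT procedure described above (GARCH filtering followed by a generalized Pareto fit to the tail of the standardized residuals), using the arch and scipy packages; the model orders, threshold choice, and scaling are illustrative assumptions, not the paper's exact settings:

import numpy as np
from arch import arch_model
from scipy.stats import genpareto

def conditional_var(returns, alpha=0.01, tail_frac=0.10):
    # Step 1: filter the return series with an AR(1)-GARCH(1,1) model
    am = arch_model(returns * 100, mean="AR", lags=1, vol="GARCH", p=1, q=1)
    res = am.fit(disp="off")
    z = np.asarray(res.std_resid, dtype=float)
    z = z[~np.isnan(z)]                          # standardized residuals

    # Step 2: fit a GPD to the lower-tail exceedances of the standardized residuals
    losses = -z                                  # flip sign so the loss tail is on the right
    u = np.quantile(losses, 1.0 - tail_frac)     # threshold at the chosen tail fraction
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0.0)   # assumes a non-zero fitted shape

    # GPD-based alpha-quantile of the negated residual distribution (McNeil-Frey estimator)
    n, n_u = len(losses), len(exceed)
    z_q = u + (beta / xi) * ((alpha * n / n_u) ** (-xi) - 1.0)

    # Step 3: combine with the one-step-ahead mean and volatility forecasts
    f = res.forecast(horizon=1)
    mu = f.mean.values[-1, 0]
    sigma = np.sqrt(f.variance.values[-1, 0])
    return (sigma * z_q - mu) / 100.0            # conditional VaR as a positive loss fraction

# usage: var_1pct = conditional_var(daily_log_returns), with daily_log_returns a series of returns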
Abstract:
Advisor: Mestre Carlos Pedro
Abstract:
Master's degree in Controlo e Gestão dos Negócios (Business Control and Management)
Abstract:
The development of biopharmaceutical manufacturing processes faces critical constraints, the major one being that these molecules are synthesized by living cells, whose behavior is inherently variable owing to their high sensitivity to small fluctuations in the cultivation environment. To speed up the development process and to control this critical manufacturing step, it is relevant to develop high-throughput and in situ monitoring techniques, respectively. Here, high-throughput mid-infrared (MIR) spectral analysis of dehydrated cell pellets and in situ near-infrared (NIR) spectral analysis of the whole culture broth were compared to monitor plasmid production in recombinant Escherichia coli cultures. Good partial least squares (PLS) regression models were built from either MIR or NIR spectral data, yielding high coefficients of determination (R²) and low predictive errors (root mean square error, RMSE) for estimating host cell growth, plasmid production, carbon source consumption (glucose and glycerol), and by-product acetate production and consumption. The predictive errors for biomass, plasmid, glucose, glycerol, and acetate based on MIR data were 0.7 g/L, 9 mg/L, 0.3 g/L, 0.4 g/L, and 0.4 g/L, respectively, whereas for NIR data the predictive errors were 0.4 g/L, 8 mg/L, 0.3 g/L, 0.2 g/L, and 0.4 g/L, respectively. The models obtained are robust, as they are valid for cultivations conducted with different media compositions and with different cultivation strategies (batch and fed-batch). Besides being conducted in situ with a sterilized fiber optic probe, NIR spectroscopy allows building PLS models for estimating plasmid, glucose, and acetate that are as accurate as those obtained from the high-throughput MIR setup, and better models for estimating biomass and glycerol, reducing the RMSE by 57% and 50%, respectively, compared with the MIR setup. However, MIR spectroscopy could be a valid alternative for optimization protocols, given possible space constraints or the high costs associated with multi-fiber optic probes for multi-bioreactor systems; in that case, MIR could be conducted in a high-throughput manner, analyzing hundreds of culture samples rapidly and automatically.
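A bare-bones version of the calibration step described above, using scikit-learn's PLS regression to map spectra onto one analyte and report the RMSE; the data here are random placeholders, and the paper's preprocessing, component selection, and validation design are not reproduced:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Placeholder data: rows are culture samples, columns are spectral variables;
# y is an off-line reference value such as biomass concentration in g/L.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 600))
y = rng.normal(loc=5.0, scale=1.5, size=120)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)   # number of latent variables, normally chosen by cross-validation
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmse = float(np.sqrt(mean_squared_error(y_test, y_pred)))
print(f"RMSE = {rmse:.2f} g/L")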
Abstract:
SUMMARY - The current challenge for Public Health is to ensure the financial sustainability of the health system. In an environment of scarce resources, economic analyses applied to the provision of health care contribute to decision-making aimed at maximizing social welfare subject to a budget constraint. Portugal is a country of 10.6 million inhabitants (2011) with a high incidence and prevalence of stage 5 chronic kidney disease (CKD5): 234 patients per million population (pmp) and 1,600 patients/pmp, respectively. The growth of diseases associated with the causes of CKD, namely diabetes mellitus and arterial hypertension, points to a rising number of patients. In 2011, of the 17,553 patients on renal replacement therapy, 59% were on haemodialysis (HD) programmes in out-of-hospital dialysis centres, 37% were living with a functioning kidney graft, and 4% were on peritoneal dialysis (SPN, 2011). The active waiting list for kidney transplantation (Tx) comprised 2,500 patients (SPN, 2009). Kidney transplantation is the best treatment modality owing to improved survival, quality of life, and cost-effectiveness, but eligibility for Tx and the supply of organs constrain this option. This research was developed along two lines: i) determining the incremental cost-utility ratio of kidney transplantation compared with haemodialysis; ii) assessing the maximum potential supply of deceased donors in Portugal, the characteristics and causes of death of potential donors at the national, hospital, and procurement-office (Gabinete Coordenador de Colheita e Transplantação, GCCT) levels, and analysing the performance of the organ procurement network for transplantation. An observational, non-interventional, prospective, analytical study was carried out on a cohort of haemodialysis patients who underwent kidney transplantation. The minimum follow-up was one year and the maximum three years. At the start of the study, sociodemographic and clinical data were collected for 386 haemodialysis patients eligible for kidney transplantation. Health-related quality of life (HRQoL) was assessed in patients on haemodialysis (time 0) and in transplant recipients at three, six, and 12 months, and annually thereafter. Patients who returned to haemodialysis after graft failure were included. HRQoL was measured with a preference-based instrument, the EuroQol-5D, which allows the subsequent calculation of QALYs. In a group of 82 patients, HRQoL on haemodialysis was assessed at two time points, allowing its evolution to be analysed. A cost-utility analysis of kidney transplantation compared with haemodialysis was performed from the societal perspective. Direct medical and non-medical costs and productivity changes were identified for haemodialysis and transplantation. The costs of organ procurement, selection of transplant candidates, and follow-up of living donors were included. Each transplanted patient served as his or her own control while on dialysis. The mean annual cost of the chronic haemodialysis programme was assessed for the year preceding transplantation, and transplantation costs were assessed prospectively. The time horizon was the life cycle under the two modalities. Discount rates of 0%, 3%, and 5% were applied to costs and QALYs, and one-way sensitivity analyses were performed. Between 2008 and 2010, 65 patients underwent kidney transplantation. Health outcomes, including hospital admissions and adverse effects of immunosuppression, and health resource consumption were recorded prospectively.
Repeated-measures models were used to assess the evolution of HRQoL, and multiple regression models were used to analyse the association of HRQoL and transplantation costs with patients' baseline characteristics and clinical events. Compared with haemodialysis, utility improved by the third month after transplantation, and quality of life measured on the EQ-VAS scale improved at all observation times after transplantation. The mean cost of haemodialysis was €32,567.57, assumed constant over time. The mean cost of kidney transplantation was €60,210.09 in the first year and €12,956.77 in subsequent years. The cost-utility ratio of kidney transplantation versus chronic haemodialysis was €2,004.75/QALY. From a graft survival of two years and five months onwards, transplantation was associated with cost savings. Using national Diagnosis-Related Group (Grupos de Diagnóstico Homogéneos) data, a retrospective study was carried out covering deaths occurring in 2006 in 34 hospitals with organ procurement. A potential donor was defined as an individual aged 1-70 years who died in hospital and met the suitability criteria for kidney donation. The association of potential donors with population and hospital characteristics was analysed. The performance of the organ procurement organizations was assessed by the conversion rate (the ratio between potential and actual donors) and by the number of potential donors per million population at the national and regional levels and by GCCT. A total of 3,838 potential donors were identified, of whom 608 had International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes that most frequently progress to brain death. A logit model for grouped data identified age, the ratio of intensive care unit beds to acute-care beds, the existence of a GCCT and of a transplantation unit, and mortality from occupational accidents as predictors of the conversion of a potential donor into an actual donor, and the logit estimates were used to quantify the probability of that conversion. Organ donation should be treated as a priority, and health authorities should ensure the funding of hospitals with donation programmes, avoiding the waste of organs for transplantation as a scarce public good. Organ procurement should be regarded as a strategic option of hospital activity, oriented towards organizing and planning services that maximize the conversion of potential donors into actual donors, with that criterion included as a measure of the quality and effectiveness of hospital performance. The results of this study show that: 1) kidney transplantation provides health gains, increased survival and quality of life, and cost savings; 2) in Portugal, the maximum achievable rate of conversion of potential donors into actual cadaveric donors is far from being reached. Investment in the organ procurement network for transplantation is essential to ensure financial sustainability and to promote the quality, efficiency, and equity of the health care provided in CKD5.
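A back-of-the-envelope check of the break-even point implied by the cost figures reported above, ignoring discounting and spreading the first-year transplant cost uniformly over the year (both simplifications, so the result only approximates the study's own calculation):

import numpy as np

hd_annual = 32_567.57    # mean annual cost of chronic haemodialysis (EUR)
tx_year1 = 60_210.09     # kidney transplant cost in the first year (EUR)
tx_later = 12_956.77     # annual transplant cost in subsequent years (EUR)

months = np.arange(1, 61)                 # five years on a monthly grid
t = months / 12.0
cost_hd = hd_annual * t
cost_tx = np.where(t <= 1.0, tx_year1 * t, tx_year1 + tx_later * (t - 1.0))

breakeven = t[np.argmax(cost_tx <= cost_hd)]
print(f"Transplantation becomes cost-saving after roughly {breakeven:.2f} years")

The result, roughly 2.4 years, is consistent with the two years and five months of graft survival quoted in the abstract.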
Abstract:
Despite the extensive literature on finding new models to replace the Markowitz model, or on increasing the accuracy of its input estimates, there are fewer studies on how the choice of optimization algorithm affects the results. This paper aims to add to this field by comparing the performance of two optimization algorithms in drawing the Markowitz efficient frontier and in real-world investment strategies. Second-order cone programming is faster and appears to be more efficient, but it is impossible to assert which algorithm is better, as quadratic programming often shows superior performance in real investment strategies.
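For concreteness, one quadratic-programming formulation of the frontier-tracing problem looks like the sketch below, written with cvxpy as an illustrative modelling layer (the paper's own solver implementations are not described in the abstract); the second-order cone variant would instead minimize the portfolio standard deviation directly, for example the norm of L^T w with Sigma = L L^T:

import numpy as np
import cvxpy as cp

def efficient_frontier(mu, Sigma, n_points=20):
    # Long-only, fully invested minimum-variance portfolios over a grid of target returns
    n = len(mu)
    w = cp.Variable(n)
    target = cp.Parameter()
    risk = cp.quad_form(w, Sigma)                 # Sigma must be positive semidefinite
    constraints = [cp.sum(w) == 1, w >= 0, mu @ w >= target]
    problem = cp.Problem(cp.Minimize(risk), constraints)

    frontier = []
    for t in np.linspace(mu.min(), mu.max(), n_points):
        target.value = t
        problem.solve()
        sigma_p = float(np.sqrt(max(risk.value, 0.0)))   # guard against tiny negative solver noise
        frontier.append((t, sigma_p, w.value.copy()))
    return frontier

# usage: frontier = efficient_frontier(expected_returns, covariance_matrix)
# where expected_returns is a 1-D numpy array and covariance_matrix the sample covariance.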
Abstract:
Introduction: Although diuretics are mainly used for the treatment of acute decompensated heart failure (ADHF), inadequate responses and complications have led to the use of extracorporeal ultrafiltration (UF) as an alternative strategy for reducing volume overload in patients with ADHF. Objective: The aim of our study is to perform a meta-analysis of the results obtained from studies of extracorporeal venous ultrafiltration and compare them with those of standard diuretic treatment for volume overload reduction in acute decompensated heart failure. Methods: MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials databases were systematically searched using pre-specified criteria. Pooled estimates of outcomes after 48 h (weight change, serum creatinine level, and all-cause mortality) were computed using random-effects models. Pooled weighted mean differences were calculated for weight loss and change in creatinine level, whereas a pooled risk ratio was used for the analysis of the binary all-cause mortality outcome. Results: A total of nine studies, involving 613 patients, met the eligibility criteria. The mean weight loss in patients who underwent UF therapy was 1.78 kg (95% confidence interval [CI]: −2.65 to −0.91 kg; p < 0.001) greater than in those who received standard diuretic therapy. The post-intervention creatinine level, however, was not significantly different (mean change = −0.25 mg/dL; 95% CI: −0.56 to 0.06 mg/dL; p = 0.112). The risk of all-cause mortality was similar in patients treated with UF and in those treated with standard diuretics (pooled RR = 1.00; 95% CI: 0.64–1.56; p = 0.993). Conclusion: Compared with standard diuretic therapy, UF treatment for volume overload reduction in patients with ADHF resulted in a significant reduction of body weight within 48 h. However, no significant decrease in serum creatinine level or reduction in all-cause mortality was observed.
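The random-effects pooling of mean differences mentioned in the methods is most commonly the DerSimonian-Laird estimator, sketched generically below; the numbers in the usage line are placeholders, not the nine trials actually analysed in the review:

import numpy as np

def dersimonian_laird(effects, variances):
    # Random-effects pooled estimate from per-study effects and within-study variances
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)

    w_fixed = 1.0 / variances
    theta_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

    # Between-study heterogeneity (tau^2) from Cochran's Q
    q = np.sum(w_fixed * (effects - theta_fixed) ** 2)
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)

    w_random = 1.0 / (variances + tau2)
    theta = np.sum(w_random * effects) / np.sum(w_random)
    se = np.sqrt(1.0 / np.sum(w_random))
    return theta, theta - 1.96 * se, theta + 1.96 * se   # estimate and 95% CI

# usage with made-up weight-change differences (kg) and variances from three studies:
print(dersimonian_laird([-1.5, -2.5, -1.0], [0.20, 0.35, 0.25]))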
Abstract:
The value of the elasticity of substitution of capital for resources is a crucial element in the debate over whether continual growth is possible. It is generally held that the elasticity has to be at least one to permit continual growth and that there is no way of estimating this outside the range of the data. This paper presents a model in which the elasticity is determined endogenously and may converge to one. It is concluded that the general opinion is wrong: that the possibility of continual growth does not depend on the exogenously given value of the elasticity and that the value of the elasticity outside the range of the data can be studied by econometric methods.
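For reference, the elasticity at issue is defined, for a production function F(K, R) with capital K and resources R, by the standard expression

\[
\sigma \;=\; \frac{d\ln(K/R)}{d\ln\!\left(F_R/F_K\right)},
\]

which in the CES special case Y = (aK^ρ + bR^ρ)^(1/ρ) is the constant σ = 1/(1−ρ); the debate summarized above is over whether a value of σ of at least one, held to be necessary for continual growth, can be maintained or estimated outside the observed range of factor proportions.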