968 results for Ordinary Least Squares


Relevance: 80.00%

Abstract:

OBJECTIVE: Depression is the predominant psychosocial and suicide burden in bipolar disorder, yet there is a paucity of evidence-based treatments for bipolar depression. METHODS: This post hoc subgroup analysis of data pooled from two 3-week, randomized, placebo- and olanzapine-controlled trials (December 2004-April 2006, N = 489 and November 2004-April 2006, N = 488) examined a subgroup of patients meeting criteria for moderate-to-severe mixed major depressive episodes, defined using DSM-IV-TR criteria for mixed episodes (mania and major depression simultaneously) with a baseline Montgomery-Asberg Depression Rating Scale (MADRS) total score ≥ 20. RESULTS: Decreases in MADRS scores (least squares mean [SE]), the a priori primary outcome, were significantly greater in the asenapine group than in the placebo group from baseline to day 7 (-11.02 [1.82] vs -4.78 [1.89]; P = .0195), day 21 (-14.03 [2.01] vs -7.43 [2.09]; P = .0264), and endpoint (-10.71 [1.76] vs -5.19 [1.98]; P = .039). Decreases in MADRS scores with asenapine were significantly greater than with olanzapine from baseline to day 7 (-6.26 [1.47]; P = .0436). Decreases in Young Mania Rating Scale mean total score were greater with asenapine than with placebo or olanzapine at all time points assessed. A significantly greater reduction from baseline to day 21 in the Short Form-36 mental component summary score was observed with asenapine, but not olanzapine, compared with placebo (16.57 vs 5.97; P = .0093). Asenapine was generally well tolerated. CONCLUSIONS: These data provide support for the potential efficacy of asenapine in mixed major depressive episodes; however, these data cannot be linearly extrapolated to nonmixed major depression.
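
As a rough illustration of how such least squares mean comparisons can be read, the sketch below back-calculates an approximate test statistic from the reported day-7 values. The trial's actual mixed-model analysis and the covariance between arms are not available here, so treating the two estimates as independent is an assumption.

```python
# Illustrative only: an approximate two-group comparison reconstructed from the
# reported least squares means and standard errors (asenapine vs placebo, day 7).
# Treating the two arms as independent is an assumption; the trial's own model
# accounts for the covariance structure.
import math
from scipy import stats

lsm_asenapine, se_asenapine = -11.02, 1.82   # reported LS mean (SE), day 7
lsm_placebo, se_placebo = -4.78, 1.89        # reported LS mean (SE), day 7

diff = lsm_asenapine - lsm_placebo
se_diff = math.sqrt(se_asenapine**2 + se_placebo**2)   # independence assumed
z = diff / se_diff
p_two_sided = 2 * stats.norm.sf(abs(z))
print(f"difference = {diff:.2f}, z = {z:.2f}, approximate p = {p_two_sided:.4f}")
```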

Relevance: 80.00%

Abstract:

The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be 'threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data are easily obtainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line to real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. Test questions were grouped by concept and subjected to a concept-by-concept analysis of the mean test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
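
For readers unfamiliar with what the calculators do behind the scenes, a minimal sketch of fitting a least-squares line to a small data set is shown below; the variable names and data values are made up purely for illustration.

```python
# A minimal sketch of a classroom least-squares line fit; the data are hypothetical.
import numpy as np

hours_studied = np.array([1.0, 2.0, 3.0, 4.5, 6.0])    # hypothetical x values
test_score = np.array([52.0, 58.0, 65.0, 74.0, 88.0])  # hypothetical y values

slope, intercept = np.polyfit(hours_studied, test_score, deg=1)
print(f"least-squares line: y = {slope:.2f}x + {intercept:.2f}")
```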

Relevance: 80.00%

Abstract:

This paper introduces an approach to cancer classification through gene expression profiles by designing supervised learning hidden Markov models (HMMs). Gene expression of each tumor type is modelled by an HMM, which maximizes the likelihood of the data. Prominent discriminant genes are selected by a novel method based on a modification of the analytic hierarchy process (AHP). Unlike conventional AHP, the modified AHP allows the processing of quantitative factors that are the ranking outcomes of individual gene selection methods, including the t-test, entropy, the receiver operating characteristic curve, the Wilcoxon test and the signal-to-noise ratio. The modified AHP aggregates the ranking results of the individual gene selection methods to form stable and robust gene subsets. Experimental results demonstrate the performance dominance of the HMM approach over six comparable classifiers. Results also show that gene subsets generated by the modified AHP lead to greater accuracy and stability compared with competing gene selection methods, namely information gain, symmetrical uncertainty, Bhattacharyya distance, and ReliefF. The modified AHP improves the classification performance not only of the HMM but also of all the other classifiers. Accordingly, the proposed combination of the modified AHP and HMM is a powerful tool for cancer classification and is useful as a clinical decision support system for medical practitioners.
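
A minimal sketch of the two ingredients, under strong simplifying assumptions, follows: a plain average of ranks stands in for the modified AHP (which is considerably more elaborate), and one Gaussian HMM per tumour class is fitted with hmmlearn, classifying a new profile by the model with the higher log-likelihood. All data are synthetic.

```python
# Simplified sketch: rank aggregation (stand-in for modified AHP) + per-class HMMs.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# (1) rank aggregation: rows = 100 genes, columns = 5 filter methods
ranks = rng.integers(1, 101, size=(100, 5))
aggregate = ranks.mean(axis=1)                 # simplified stand-in for modified AHP
selected = np.argsort(aggregate)[:10]          # keep the 10 best-ranked genes

# (2) one Gaussian HMM per tumour class, trained on that class's profiles
X0 = rng.normal(0.0, 1.0, size=(40, 100))[:, selected]
X1 = rng.normal(1.0, 1.0, size=(40, 100))[:, selected]
hmm0 = GaussianHMM(n_components=3, random_state=0).fit(X0)
hmm1 = GaussianHMM(n_components=3, random_state=0).fit(X1)

x_new = rng.normal(1.0, 1.0, size=(1, 100))[:, selected]   # unseen profile
predicted_class = int(hmm1.score(x_new) > hmm0.score(x_new))
print("predicted class:", predicted_class)
```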

Relevance: 80.00%

Abstract:

This article analyses the determinants of renewable energy consumption in six major emerging economies that are proactively accelerating the adoption of renewable energy. The long-run elasticities from both the panel methods (fully modified ordinary least squares and dynamic ordinary least squares) and the time series method (autoregressive distributed lag) are broadly consistent. For Brazil, China, India and Indonesia, renewable energy consumption in the long run is significantly determined by income and pollutant emissions. For the Philippines and Turkey, however, income appears to be the main driver of renewable energy consumption. In the short run, bi-directional causality between renewable energy and income, and between renewable energy and pollutant emissions, is found for Brazil and China. This research justifies the efforts undertaken by emerging countries to reduce carbon intensity by increasing energy efficiency and substantially increasing the share of renewables in the overall energy mix.
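
The sketch below illustrates one of the long-run estimators mentioned, dynamic OLS, in which the level regressors are augmented with leads and lags of their first differences. The variable names and the synthetic data are assumptions for illustration only; the paper's panel FMOLS/DOLS and ARDL estimators are more involved.

```python
# A minimal dynamic OLS (DOLS) sketch for long-run elasticities; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
income = np.cumsum(rng.normal(0.02, 0.05, n))          # log income (random walk)
emissions = np.cumsum(rng.normal(0.01, 0.05, n))       # log emissions
renewables = 0.8 * income + 0.3 * emissions + rng.normal(0, 0.1, n)

df = pd.DataFrame({"ren": renewables, "inc": income, "emi": emissions})
for k in (-1, 0, 1):                                   # one lead and one lag
    df[f"d_inc_{k}"] = df["inc"].diff().shift(k)
    df[f"d_emi_{k}"] = df["emi"].diff().shift(k)
df = df.dropna()

X = sm.add_constant(df.drop(columns="ren"))
dols = sm.OLS(df["ren"], X).fit()
print(dols.params[["inc", "emi"]])                     # long-run elasticities
```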

Relevance: 80.00%

Abstract:

First-differencing is generally taken to imply the loss of one observation, the first, or at least that the effect of ignoring this observation is asymptotically negligible. However, this is not always true, as in the case of generalized least squares (GLS) detrending. To illustrate this, the current article considers as an example the use of GLS-detrended data when testing for a unit root. The results show that the treatment of the first observation is crucial for test performance, and that ignoring it causes the test to break down.
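
A minimal sketch of GLS (quasi-difference) detrending in the spirit of Elliott, Rothenberg and Stock (1996) follows, written so that the treatment of the first observation is explicit: the quasi-differenced sample keeps y_1 as its first row rather than discarding it. The data are synthetic and the non-centrality parameter -7 corresponds to the constant-only case.

```python
# GLS detrending sketch with the first observation retained explicitly.
import numpy as np

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=200))        # a random walk, for illustration
T = len(y)
a = 1 + (-7.0) / T                         # local-to-unity GLS parameter

# quasi-differenced data: the first observation enters untransformed
y_qd = np.concatenate(([y[0]], y[1:] - a * y[:-1]))
z = np.ones(T)
z_qd = np.concatenate(([z[0]], z[1:] - a * z[:-1]))

# GLS estimate of the mean and the detrended series
beta = (z_qd @ y_qd) / (z_qd @ z_qd)
y_detrended = y - beta * z
print("GLS-estimated mean:", beta)
```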

Relevance: 80.00%

Abstract:

Purpose - The purpose of this study is to examine the effects of social capital within a community on the adoption of consumer eco-behaviour or environmentally sustainable behaviour of consumers. The authors draw on the behavioural perspective model (BPM) of consumer behaviour and social capital theory in arguing that social capital shapes a consumer's knowledge of environmental issues and pro-environmental attitudes, which in turn influence a consumer's perceived capability to engage in eco-behaviour. Design/methodology/approach - This study uses a partial least squares approach to structural equation modelling of survey data involving 1,044 consumers in the Philippines. It involves testing of a measurement model to examine the validity and reliability of the constructs used in the study, followed by testing of the structural model to examine the hypothesised relationships among the constructs. Findings - The results suggest the substantive influence of social capital on environmental knowledge, pro-environmental attitudes and eco-capability. Both knowledge and attitudes have positive effects on eco-capability, which in turn positively shapes eco-behaviour. Research limitations/implications - Future studies can examine how social capital as a multi-dimensional construct impacts context-specific consumer behaviour. Practical implications - Social and environmental marketing may focus on social network activation to encourage eco-behaviours of consumers. Social implications - Findings highlight the role of social capital within one's community as a resource channel to encourage environmentally responsible consumer behaviour. Originality/value - The study extends the BPM by offering a social capital view as a more nuanced explanation of consumer eco-behaviour.
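
The following is a simplified stand-in rather than the paper's analysis: full PLS path modelling estimates an inner model among latent constructs and requires a dedicated package, so this sketch uses scikit-learn's PLSRegression to relate a block of hypothetical social-capital indicators to a block of eco-behaviour indicators.

```python
# Simplified stand-in for PLS-SEM: block-to-block PLS regression on synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n = 1044                                    # sample size reported in the abstract
social_capital = rng.normal(size=(n, 4))    # 4 hypothetical survey indicators
eco_behaviour = (social_capital @ rng.normal(size=(4, 3))
                 + rng.normal(scale=0.5, size=(n, 3)))

pls = PLSRegression(n_components=2).fit(social_capital, eco_behaviour)
print("R^2 of the indicator blocks:", pls.score(social_capital, eco_behaviour))
print("X weights (indicator -> component):\n", pls.x_weights_)
```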

Relevance: 80.00%

Abstract:

AIM: To test the hypothesis that a 'basal plus' regimen (adding once-daily main-meal fast-acting insulin to basal insulin once daily) would be non-inferior to biphasic insulin twice daily as assessed by glycated haemoglobin (HbA1c) concentration (predefined as ≤0.4%), but would provide superior treatment satisfaction. METHODS: This open-label trial enrolled adults to an 8- or 12-week run-in period, during which oral therapies except metformin were stopped and insulin glargine dose was titrated. Those with fasting glucose <7 mmol/l but HbA1c >7% (53 mmol/mol) were randomized to insulin glargine/glulisine once daily (n = 170) or insulin aspart/aspart protamine 30/70 twice daily (n = 165) for 24 weeks, with dose titration to glucose targets using standardized algorithms. RESULTS: For HbA1c, the basal plus regimen was non-inferior to biphasic insulin (least squares mean difference, 0.21%, upper 97.5% confidence limit 0.38%) meeting the predefined non-inferiority margin of 0.4%. Treatment satisfaction (Diabetes Treatment Satisfaction Questionnaire change version and Insulin Treatment Satisfaction Questionnaire total scores) significantly favoured basal plus. No difference was observed between the basal plus and the biphasic insulin groups in responders (HbA1c <7%, 20.6 vs 27.9%; p = 0.12), weight gain (2.06 vs 2.50 kg; p = 0.2), diabetes-specific quality of life (Audit of Diabetes-Dependent Quality of Life average weighted impact (AWI) score) and generic health status (five-dimension European Quality of Life questionnaire). Overall hypoglycaemia rates were similar between groups (15.3 vs 18.2 events/patient-year; p = 0.22); nocturnal hypoglycaemia was higher with the basal plus regimen (5.7 vs 3.6 events/patient-year; p = 0.02). CONCLUSION: In long-standing type 2 diabetes with suboptimal glycaemia despite oral therapies and basal insulin, the basal plus regimen was non-inferior to biphasic insulin for biomedical outcomes, with a similar overall hypoglycaemia rate but more nocturnal events.
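
A minimal sketch of the non-inferiority logic follows: non-inferiority on HbA1c is concluded when the upper 97.5% confidence limit of the treatment difference stays below the predefined 0.4% margin. The standard error used below is back-calculated so that the limit lands near the reported 0.38%; it is an assumption, not a trial value.

```python
# Non-inferiority check from a treatment difference, its SE and a margin.
from scipy import stats

ls_mean_diff = 0.21          # basal plus minus biphasic, % HbA1c (reported)
margin = 0.40                # predefined non-inferiority margin
se_diff = 0.087              # assumed SE, chosen so the upper limit is ~0.38

upper_limit = ls_mean_diff + stats.norm.ppf(0.975) * se_diff
print(f"upper 97.5% confidence limit: {upper_limit:.2f}%")
print("non-inferior" if upper_limit < margin else "non-inferiority not shown")
```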

Relevance: 80.00%

Abstract:

Drinking water distribution networks risk exposure to malicious or accidental contamination. Several levels of response are conceivable. One of them consists of installing a sensor network to monitor the system in real time. Once contamination has been detected, it is also important to take appropriate counter-measures. In the SMaRT-OnlineWDN project, this relies on modeling to predict both hydraulics and water quality. Online use of the model makes identification of the contaminant source and simulation of the contaminated area possible. The objective of this paper is to present the SMaRT-OnlineWDN experience and research results for hydraulic state estimation with a sampling frequency of a few minutes. A least squares problem with bound constraints is formulated to adjust demand class coefficients to best fit the observed values at a given time. The criterion is a Huber function to limit the influence of outliers. A Tikhonov regularization is introduced to take prior information on the parameter vector into account. The Levenberg-Marquardt algorithm is then applied, using derivative information to limit the number of iterations. Confidence intervals for the state prediction are also given. The results are presented and discussed for real networks in France and Germany.
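
A minimal sketch of this estimation problem is given below: bound-constrained least squares with a Huber loss and a Tikhonov term pulling the demand class coefficients towards prior values. The network model is replaced by a toy linear map, so all data are synthetic, and SciPy's trust-region reflective solver is used because its Levenberg-Marquardt option does not accept bounds.

```python
# Bound-constrained robust least squares with Tikhonov regularisation (toy model).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
n_sensors, n_classes = 30, 5
A = rng.uniform(0.5, 1.5, size=(n_sensors, n_classes))   # toy "hydraulic model"
coeff_true = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
observed = A @ coeff_true + rng.normal(0, 0.05, n_sensors)
observed[::10] += 2.0                                     # a few outliers

coeff_prior = np.ones(n_classes)                          # prior information
lam = 0.1                                                 # Tikhonov weight

def residuals(c):
    fit = A @ c - observed                                # sensor misfit
    reg = np.sqrt(lam) * (c - coeff_prior)                # Tikhonov regularisation
    return np.concatenate([fit, reg])

sol = least_squares(residuals, x0=coeff_prior, bounds=(0.0, 2.0),
                    loss="huber", f_scale=0.1, method="trf")
print("estimated demand class coefficients:", np.round(sol.x, 3))
```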

Relevance: 80.00%

Abstract:

This study aimed to identify attributes related to customer attractiveness in commercial clusters, as perceived by consumers. Starting from customer attractiveness to stores, a construct for evaluating customer attractiveness to commercial clusters was developed. Through a descriptive-quantitative study of 240 consumers in two well-known commercial clusters, using the PLS-PM (Partial Least Squares Path Modeling) technique, the relationship between customer attractiveness (reflective variable) and the retail-mix dimensions of commercial clusters (latent variables) was assessed, based on the treatment of observable-effect indicators. The main results were: (1) attractiveness is significantly associated with the latent variables, suggesting robustness of the model; (2) buying conditions and prices are the dimensions most strongly associated with customer attractiveness, although stores, products and service are also relevant; and (3) location was the dimension least correlated with customer attractiveness for both clusters.
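
As a crude stand-in for the PLS-PM estimation (not the study's actual model), the sketch below summarises each retail-mix dimension by the mean of a few hypothetical indicators and regresses attractiveness on those composites with OLS; a real PLS path model estimates outer weights and path coefficients jointly.

```python
# Composite scores + OLS as a simplified stand-in for PLS path modelling.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 240                                               # respondents in the study
dimensions = ["stores", "products", "service", "prices", "buying_conditions", "location"]

# three hypothetical 5-point indicators per dimension, averaged into a composite
composites = pd.DataFrame({d: rng.integers(1, 6, size=(n, 3)).mean(axis=1)
                           for d in dimensions})
attractiveness = (0.5 * composites["prices"] + 0.4 * composites["buying_conditions"]
                  + 0.2 * composites["stores"] + rng.normal(0, 0.5, n))

ols = sm.OLS(attractiveness, sm.add_constant(composites)).fit()
print(ols.params.round(2))
```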

Relevance: 80.00%

Abstract:

Estimating the parameters of the instantaneous spot interest rate process is of crucial importance for pricing fixed income derivative securities. This paper presents an estimation of the parameters of the Gaussian interest rate model for pricing fixed income derivatives based on the term structure of volatility. We estimate the term structure of volatility for US Treasury rates for the period 1983-1995, based on a history of yield curves. We estimate both conditional and first-difference term structures of volatility and subsequently estimate the implied parameters of the Gaussian model by non-linear least squares. Results for bond options illustrate the effect of differing parameter estimates on pricing.
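
A minimal sketch of the non-linear least squares step follows, assuming a one-factor Gaussian (Vasicek-type) model in which the volatility of the T-maturity zero yield is sigma*(1-exp(-a*T))/(a*T); the "observed" term structure of volatility below is synthetic.

```python
# Non-linear least squares fit of a one-factor Gaussian yield-volatility curve.
import numpy as np
from scipy.optimize import curve_fit

def yield_vol(T, a, sigma):
    """Zero-yield volatility implied by a one-factor Gaussian model."""
    return sigma * (1.0 - np.exp(-a * T)) / (a * T)

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10], dtype=float)
observed_vol = (yield_vol(maturities, a=0.15, sigma=0.012)
                + np.random.default_rng(6).normal(0, 2e-4, maturities.size))

params, _ = curve_fit(yield_vol, maturities, observed_vol, p0=(0.1, 0.01))
print(f"mean reversion a = {params[0]:.3f}, volatility sigma = {params[1]:.4f}")
```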

Relevance: 80.00%

Abstract:

Empirical evidence suggests that the real exchange rate is characterized by the presence of near-unit roots and additive outliers. Recent studies have found evidence in favor of PPP reversion by using the quasi-differencing unit root tests of Elliott et al. (1996) (ERS), which are more efficient against local alternatives but are still based on least squares estimation. Unit root tests based on the least squares method usually tend to bias inference towards stationarity when additive outliers are present. In this paper, we incorporate quasi-differencing into M-estimation to construct a unit root test that is robust not only to a near-unit root but also to the non-Gaussian behavior provoked by additive outliers. We revisit the PPP hypothesis and find less evidence in favor of PPP reversion when the non-Gaussian behavior of real exchange rates is taken into account.
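
A minimal sketch of the two ingredients combined here follows: ERS-style quasi-differencing of the series and the deterministic term, then robust (Huber) M-estimation of the detrending regression in place of least squares. The full test statistic is not reproduced; the data are synthetic with a couple of additive outliers.

```python
# Quasi-differencing plus Huber M-estimation of the detrending coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 200
y = np.cumsum(rng.normal(size=T))
y[[50, 120]] += 8.0                                   # additive outliers

a = 1 + (-7.0) / T                                    # local-to-unity parameter
y_qd = np.concatenate(([y[0]], y[1:] - a * y[:-1]))   # quasi-differenced series
z = np.ones(T)
z_qd = np.concatenate(([z[0]], z[1:] - a * z[:-1])).reshape(-1, 1)

# Huber M-estimation of the detrending coefficient (robust to the outliers)
rlm = sm.RLM(y_qd, z_qd, M=sm.robust.norms.HuberT()).fit()
y_detrended = y - rlm.params[0] * z
print("robust GLS-detrending coefficient:", rlm.params[0])
```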

Relevance: 80.00%

Abstract:

The aim of this study is to show the importance of budget institutions when studying the effect of decentralization on the size of subnational governments. In the Brazilian case, we observe that the institutional changes began with decentralization, arising from changes determined by the new Federal Constitution of 1988, which in turn made possible a set of changes that ultimately altered the size of state governments over time. Although these changes were promoted by the federal government, the great majority of them worked to make the budget constraint of state governments harder; that is, subnational governments had progressively fewer channels through which to expand their indebtedness and, with borrowing limited, they had to adapt to a stricter budgetary reality in which expenditures had to follow the behaviour of revenues: if revenues grew, expenditures could grow, but if revenues fell, expenditures had to be adjusted to the new level of resources. Of the four changes in budget institutions found in the literature, three proved empirically important in determining the size of subnational governments: the new Constitution enacted in 1988, the change in the way the budget is prepared (the "Bacha effect") and the Fiscal Responsibility Law. The results show that the first acted to increase the size of subnational governments by raising the resources transferred through the State Participation Fund; the second reduced their size by imposing a new budgetary reality in which governments had to work with the budget in real terms according to what had been set in nominal terms; as for the Fiscal Responsibility Law, which acted to increase the size of governments, there are too few observations for its estimated effect to be robust, although the direction of its influence is already apparent. In the case of the debt renegotiation between the state governments and the federal government, its effect appears more as a negative shock than as a change producing a shift in the level of government size. We work with twenty-six states and the Federal District between 1986 and 2003, using a Least Squares Dummy Variable (LSDV) model.
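
A minimal sketch of a Least Squares Dummy Variable regression of this kind is given below: state fixed effects enter as dummies alongside indicator variables for two of the institutional changes. The variable names and the synthetic panel are assumptions for illustration only.

```python
# LSDV sketch: OLS with state dummies and institutional-change indicators.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
states = [f"UF{i:02d}" for i in range(27)]            # 26 states + Federal District
years = list(range(1986, 2004))
panel = pd.DataFrame([(s, t) for s in states for t in years], columns=["state", "year"])
panel["post_constitution"] = (panel["year"] >= 1988).astype(int)
panel["fiscal_resp_law"] = (panel["year"] >= 2000).astype(int)
panel["gov_size"] = (10 + 2 * panel["post_constitution"]
                     + 1 * panel["fiscal_resp_law"] + rng.normal(0, 1, len(panel)))

lsdv = smf.ols("gov_size ~ post_constitution + fiscal_resp_law + C(state)",
               data=panel).fit()
print(lsdv.params[["post_constitution", "fiscal_resp_law"]])
```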

Relevance: 80.00%

Abstract:

The aim of this study is to characterize the monthly yield curve for Brazil through three factors, comparing two estimation methods: under a state-space representation, the model can be estimated either by the Kalman filter or by two-step least squares. The dynamics of the factors are represented by a vector autoregressive model, VAR(1), and for the second estimation method a structure is imposed on the conditional variance. To compare the methods employed, an alternative form of comparison is proposed: Markov processes that jointly model the slope factor of the yield curve, obtained by the methods used in this study, and a proxy variable for economic performance, providing some measure of forecasting power for business cycles.
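
A minimal sketch of the two-step least squares route is shown below, assuming Nelson-Siegel-style factor loadings: step 1 estimates the three factors month by month by cross-sectional least squares, and step 2 fits a VAR(1) to the factor series. The yields are synthetic and the conditional-variance structure is omitted.

```python
# Two-step estimation sketch: cross-sectional least squares factors, then a VAR(1).
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
maturities = np.array([3, 6, 12, 24, 36, 60, 120]) / 12.0   # in years
x = 0.7 * maturities                                        # assumed decay parameter
loadings = np.column_stack([np.ones_like(x),
                            (1 - np.exp(-x)) / x,               # slope loading
                            (1 - np.exp(-x)) / x - np.exp(-x)]) # curvature loading

true_factors = np.cumsum(rng.normal(0, 0.1, size=(120, 3)), axis=0) + [10, -2, 1]
yields = true_factors @ loadings.T + rng.normal(0, 0.05, size=(120, 7))

# step 1: cross-sectional least squares, month by month
factors = np.linalg.lstsq(loadings, yields.T, rcond=None)[0].T

# step 2: VAR(1) for the factor dynamics
var_fit = VAR(factors).fit(1)
print(var_fit.coefs[0].round(2))       # the 3x3 VAR(1) coefficient matrix
```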

Relevance: 80.00%

Abstract:

This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS IV estimator to be asymptotically equivalent to an optimal GLS IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both asymptotic and small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
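
A minimal sketch of the basic idea, that lagged endogenous variables can serve as instruments, is given below as a hand-rolled 2SLS on a toy data-generating process with serially uncorrelated errors; the paper's optimal FGLS IV estimator with VAR errors is considerably richer.

```python
# 2SLS with a lagged endogenous regressor as instrument (toy DGP, iid errors).
import numpy as np

rng = np.random.default_rng(10)
T = 500
e = rng.normal(size=T)                       # structural error
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + 0.5 * e[t] + rng.normal()   # x is endogenous
y = 1.0 + 2.0 * x + e

# instrument set: a constant and the lagged endogenous regressor
Z = np.column_stack([np.ones(T - 1), x[:-1]])
X = np.column_stack([np.ones(T - 1), x[1:]])
Y = y[1:]

# 2SLS: first stage projects X on Z, second stage regresses Y on the projection
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, Y, rcond=None)[0]
print("2SLS estimates (intercept, slope):", np.round(beta_2sls, 3))
```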

Relevance: 80.00%

Abstract:

This thesis focuses on different perspectives on CSFs (Critical Success Factors) in ERP (Enterprise Resource Planning) implementations. The current literature considers CSFs from the viewpoint of an organization's top management and classifies them based on that view. This thesis presents the ERP implementation team's view of the main CSFs and uses a case study to assess whether top management and the implementation team share the same view. In addition, the thesis proposes a relationship between ERP implementation success and the CSFs investigated, using the PLS (Partial Least Squares) method to analyse the implementation team's responses to a questionnaire developed to measure ERP implementation success.