947 results for Estimation methods
Abstract:
This paper examines the trade relationship between the Gulf Cooperation Council (GCC) and the European Union (EU). A simultaneous equation regression model is developed and estimated to assist with the analysis. The regression results, using both the two-stage least squares (2SLS) and ordinary least squares (OLS) estimation methods, reveal the existence of feedback effects between the two economic blocs. The results also show that during times of slack in oil prices, the GCC's income from its investments overseas helped to finance its imports from the EU.
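The contrast between OLS and 2SLS on a simultaneous-equations problem can be sketched on synthetic data; everything below (variable names, coefficients, the single instrument) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # instrument (exogenous)
u = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor, correlated with u
y = 1.0 + 2.0 * x + u                        # structural equation of interest

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
beta_ols = ols(X, y)                         # biased: x is correlated with u

# Stage 1: project x on the instrument; Stage 2: regress y on the fitted x
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)
```

On this simulation, the OLS slope drifts above the true value of 2 while the 2SLS slope stays close to it, which is the usual motivation for instrumenting in simultaneous systems.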
Abstract:
Sediment transport in river basins is related to geomorphology, fluvial ecology, the stability of engineering structures, and navigation conditions, among other aspects important for the planning and control of water resources. The large number of variables involved in sediment transport mechanics and the complexity of the interacting physical processes make it difficult to establish indirect methodologies for estimating sediment transport in rivers; no universally accepted methodology yet exists. The main objective of this work is a comparative analysis of different empirical methods available in the literature for estimating sediment discharge in rivers, considering their specific characteristics and the results of applications to the Santa Joana and Santa Maria do Doce river basins and the Sossego and Santa Júlia micro-basins, located in the portion of the Doce river basin within Espírito Santo. The following indirect methods for estimating sediment discharge were applied: Modified Einstein by Colby and Hembree (1955); Simplified Colby (1957); Engelund & Hansen (1967); Ackers & White (1973); Yang (1973); Karim (1998); and Cheng (2002). Considering the mean estimates of bed-material discharge (Qsml) from campaigns carried out during the rainy season for the cross-section of the Doce river, the largest watercourse, the Karim (1998) method, followed by Ackers & White (1973), gave the highest value, while Yang (1973) gave the lowest. For the Santa Maria do Doce and Santa Joana rivers and the Santa Júlia and Sossego streams, Ackers & White (1973), followed by Karim (1998) and Yang (1973), produced the highest means, while Cheng (2002) and Engelund & Hansen (1967) produced the lowest.
The Simplified Colby method gave the highest estimates of total sediment discharge (Qst) for all cross-sections of all monitored watercourses. It was concluded that the different indirect methods can yield large differences in river sediment transport estimates and that, therefore, results obtained with these methods must be treated with great caution. The results showed the great importance of sediment measurement campaigns for evaluating sediment transport and for identifying the best indirect estimation methods for specific watercourses.
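Of the formulas compared, Engelund & Hansen (1967) is compact enough to sketch. A minimal implementation, assuming its commonly cited form f·φ = 0.1·θ^(5/2); the hydraulic values in the example are invented, not taken from the monitored basins:

```python
import math

def engelund_hansen_qs(V, h, S, d50, s=2.65, g=9.81):
    """Volumetric total sediment discharge per unit width (m^2/s),
    Engelund & Hansen (1967), assuming the form f*phi = 0.1*theta**2.5.
    V: mean velocity (m/s), h: depth (m), S: slope, d50: grain size (m)."""
    f = 2.0 * g * h * S / V ** 2           # friction factor
    theta = h * S / ((s - 1.0) * d50)      # Shields parameter
    phi = 0.1 * theta ** 2.5 / f           # dimensionless transport rate
    return phi * math.sqrt(g * (s - 1.0) * d50 ** 3)

qs = engelund_hansen_qs(V=1.2, h=2.0, S=0.0005, d50=0.0005)
```

Because transport grows with θ^2.5 and the friction factor falls with V², small changes in the measured hydraulics move the estimate a lot, which is consistent with the large spread between methods reported above.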
Abstract:
Interest rate risk is one of the major financial risks faced by banks due to the very nature of the banking business. The most common approach in the literature has been to estimate the impact of interest rate risk on banks using a simple linear regression model. However, the relationship between interest rate changes and bank stock returns need not be exclusively linear. This article provides a comprehensive analysis of the interest rate exposure of the Spanish banking industry employing both parametric and nonparametric estimation methods. Its main contribution is to use, for the first time in the context of banks' interest rate risk, a nonparametric regression technique that avoids the assumption of a specific functional form. On the one hand, the Spanish banking sector is found to exhibit a remarkable degree of interest rate exposure, although the impact of interest rate changes on bank stock returns has declined significantly since the introduction of the euro; further, a pattern of positive exposure emerges during the post-euro period. On the other hand, the results of the nonparametric model support extending the conventional linear model in an attempt to gain greater insight into the actual degree of exposure.
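A nonparametric regression of this kind can be sketched with a Nadaraya–Watson kernel estimator, which imposes no functional form on the return/rate-change relation. The data below are simulated with a deliberately nonlinear exposure, not the Spanish bank returns used in the article:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Gaussian-kernel estimate of m(x) = E[y | x], with no assumption
    on the functional form of m."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
dr = rng.normal(0.0, 0.3, 400)                            # interest-rate changes
ret = -0.8 * dr + 0.5 * dr ** 2 + rng.normal(0, 0.1, 400)  # nonlinear exposure
grid = np.linspace(-0.5, 0.5, 5)
m_hat = nadaraya_watson(dr, ret, grid, bandwidth=0.1)
```

Plotting `m_hat` against `grid` would reveal the curvature that a single linear slope coefficient averages away.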
Abstract:
This work contributes to the development of a multi-camera vision system for determining the position, attitude, and tracking of multiple objects, to be used at the robotics unit of INESCTEC. It stems from the need for accurate external information to serve as a reference in the study, characterization, and development of localization, navigation, and control algorithms for various autonomous systems. Based on the characterization of the autonomous vehicles at the INESCTEC robotics unit and the analysis of their operating scenarios, the requirements for the system were gathered. The theoretical foundations needed to develop the system were studied, covering computer vision, estimation methods, and data association for multiple-object tracking problems. An architecture was proposed for the overall system that addresses the identified requirements, allowing the use of multiple cameras and supporting the tracking of multiple objects, with or without markers. Components of the proposed architecture were implemented, validated, and integrated into a validation system focused on the localization and tracking of multiple objects carrying Light-Emitting Diode (LED) markers: modules for identifying points of interest in the image, techniques for grouping the points of interest of each object and matching the measurements obtained by the several cameras, a method for determining the position and attitude of the objects, and a filter for tracking multiple objects. Validation and tuning tests showed that the implemented solution meets the requirements, and lines of work were identified for the continued development of the overall system.
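A core building block of such a tracking filter is a per-marker Kalman filter with a constant-velocity motion model. A minimal single-target sketch; the time step and noise levels are assumptions for illustration, not parameters of the system described:

```python
import numpy as np

# Constant-velocity Kalman filter for one tracked marker in 2D.
dt = 0.1
F = np.eye(4); F[0, 2] = F[1, 3] = dt           # state [x, y, vx, vy]
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # cameras measure position only
Q = 0.01 * np.eye(4)                             # process noise covariance
R = 0.05 * np.eye(2)                             # measurement noise covariance

x = np.zeros(4); P = np.eye(4)                   # initial state and covariance
rng = np.random.default_rng(2)
truth = np.array([0.0, 0.0, 1.0, 0.5])
for _ in range(100):
    truth = F @ truth
    z = H @ truth + rng.normal(0, 0.05, 2)       # noisy camera measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
```

Note that the filter recovers the velocity components even though only positions are measured; a multi-object tracker runs one such filter per track and adds a data-association step to assign measurements to tracks.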
Abstract:
The use of adhesive joints in industrial applications has been increasing at the expense of traditional joining methods such as welding, brazing, and bolted and riveted connections. This is due to the advantages they offer: lighter structures, good behavior under cyclic or fatigue loads, the ability to join dissimilar materials, and lower stress concentrations. To increase confidence in the design of adhesive structures, it is important to accurately predict their mechanical strength and fracture properties (critical strain energy release rates in tension, GIC, and in shear, GIIC). These properties are directly related to Fracture Mechanics and are estimated through an energy analysis. Three types of models can be distinguished for this purpose: models that require measuring the crack length during damage propagation, models that use an equivalent crack length, and methods based on the J-integral. Since loading in most cases occurs in mixed mode (a combination of tension and shear), it is very important to understand fracture under these conditions, namely the energy release rates with respect to different fracture criteria or envelopes. This comparison makes it possible, for example, to determine the best energy-based failure criterion to use in numerical models based on Cohesive Damage Models. In this work, an experimental study is carried out using the Single-Leg Bending (SLB) test on specimens bonded with three types of adhesives, in order to study and compare their fracture properties. To this end, several data-reduction methods for the strain energy release rates in tension, GI, and shear, GII, are applied, belonging both to the models that require crack length measurement and to those that use an equivalent crack length.
Subsequently, the GI and GII results obtained experimentally for each adhesive were analyzed and compared. The results were also discussed through the analysis of the values obtained in several fracture envelopes, in order to determine the most suitable failure criterion for each adhesive. Very good agreement was found between the GI and GII determination methods, except for the most ductile adhesive, for which the equivalent-crack-length-based method gave slightly higher results.
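Checking a measured (GI, GII) pair against a fracture envelope reduces to evaluating the criterion. A sketch using the common power-law form (GI/GIc)^α + (GII/GIIc)^α = 1; all toughness and measurement values below are hypothetical, not the experimental results of this work:

```python
def envelope_residual(GI, GII, GIc, GIIc, alpha=1.0):
    """Power-law mixed-mode fracture criterion.
    Residual 0 means the measured point lies exactly on the envelope;
    alpha=1 is the linear criterion, alpha=2 the quadratic one."""
    return (GI / GIc) ** alpha + (GII / GIIc) ** alpha - 1.0

# hypothetical SLB measurement against hypothetical pure-mode toughnesses (N/mm)
r_lin = envelope_residual(GI=0.20, GII=0.45, GIc=0.35, GIIc=1.00, alpha=1.0)
r_quad = envelope_residual(GI=0.20, GII=0.45, GIc=0.35, GIIc=1.00, alpha=2.0)
```

For this invented point the linear envelope (α = 1) fits almost exactly while the quadratic one overshoots, which is the kind of comparison used to pick the most suitable criterion per adhesive.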
Abstract:
This paper addresses frame problems and nonresponse in surveys. These nonsampling errors affect the accuracy of the estimates, as the estimators become biased and less precise. We analyse some estimation methods that deal with these problems, with a special focus on post-stratification procedures.
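The post-stratification idea can be sketched in a few lines: reweight each stratum's sample mean by its known population share, so that nonresponse imbalance between strata is corrected. The population shares and means below are invented for illustration:

```python
import numpy as np

def poststratified_mean(y, strata, pop_shares):
    """Post-stratified estimator: weight each stratum's sample mean
    by its known population share N_h / N."""
    return sum(w * np.mean(y[strata == h]) for h, w in pop_shares.items())

rng = np.random.default_rng(3)
# Population: stratum 'a' is 60% with mean 10, stratum 'b' is 40% with mean 20,
# but nonresponse leaves the sample with far too many 'b' respondents.
y = np.concatenate([rng.normal(10, 1, 30), rng.normal(20, 1, 70)])
strata = np.array(['a'] * 30 + ['b'] * 70)
naive = y.mean()                                             # pulled toward 'b'
post = poststratified_mean(y, strata, {'a': 0.6, 'b': 0.4})  # near the true 14
```

The naive sample mean reflects the distorted 30/70 sample composition, while the post-stratified estimate recovers the population mean of 0.6·10 + 0.4·20 = 14.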
Abstract:
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters and characterizing its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, with the distance to the diffuser, the west-east positioning, and the south-north positioning used as covariates. Sample variograms were fitted by Matérn models using both weighted least squares and maximum likelihood estimation, as a way to detect eventual discrepancies. The maximum likelihood method typically estimated very low ranges, which limited the kriging process; so, at least for these data sets, weighted least squares proved to be the more appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the study area. The results provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended: it is important to treat anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
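Weighted-least-squares variogram fitting can be sketched with the exponential model (the Matérn model with smoothness 0.5), weighting each lag by its number of point pairs. The sample variogram values below are made up for illustration, not the salinity data:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, nugget, sill, rng_par):
    """Exponential variogram model (Matérn with smoothness 0.5)."""
    return nugget + sill * (1.0 - np.exp(-h / rng_par))

# hypothetical sample variogram: lag distances (m), semivariances, pair counts
lags = np.array([50.0, 100, 150, 200, 300, 400, 600])
gamma = np.array([0.12, 0.22, 0.30, 0.34, 0.39, 0.41, 0.42])
npairs = np.array([120, 240, 300, 320, 280, 200, 90])

# weighted least squares: lags with more pairs get more weight
popt, _ = curve_fit(exp_variogram, lags, gamma,
                    p0=[0.05, 0.4, 150.0],
                    sigma=1.0 / np.sqrt(npairs),
                    bounds=(0.0, [1.0, 1.0, 2000.0]))
nugget, sill, range_par = popt
```

The fitted range parameter controls how far kriging weights reach; the very short ranges that maximum likelihood produced for these data sets are exactly what limited the kriging step.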
Abstract:
In recent years, adhesive bonding has seen a progressive increase in structural applications at the expense of conventional mechanical joining. This paradigm shift is due to the advantages adhesive joints have over other joining methods. Fracture mechanics and Cohesive Damage Models (CDM) are common criteria for predicting the strength of adhesive joints, and they use the energy release rates as fundamental parameters. Since the 4-Point End Notched Flexure (4-ENF) test applied to adhesive joints is still little studied, an assessment of its feasibility for determining the critical shear strain energy release rate (GIIc) is highly relevant. The main objective of this dissertation is to compare the End-Notched Flexure (ENF) and 4-ENF tests for determining GIIc in adhesive joints. Three adhesives were used: Araldite® AV138, Araldite® 2015, and SikaForce® 7752. The experimental work involved designing and manufacturing a fixture for the 4-ENF test, followed by the fabrication and preparation of the specimens. Because the 4-ENF test is still not widespread for adhesive joints and is not standardized, an important part of the work consisted of reviewing research works and scientific articles. The results were analyzed by direct comparison of the GIIc values with those obtained in the ENF test, for each adhesive series, through comparison of the P-δ curves and R-curves. It was found that the 4-ENF test is not the most versatile for determining GIIc in adhesive joints, and that only one method of obtaining GIIc is viable, namely the one based on measuring the crack length (a).
It became clear that the ENF test, being standardized, having a simpler setup, and offering a wider choice of methods for determining GIIc, is the most recommended. It is concluded that the 4-ENF test, although an alternative to the ENF test, has more limited applicability.
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's degree in Economics from the NOVA School of Business and Economics.
Abstract:
The aim of this paper is to compare three different methods for counting white blood cells (WBC) (the Natt and Herrick method, and estimation with 1,000 and 2,000 erythrocytes) and three methods for counting total thrombocytes (TT) (the Wojtaszek method, and estimation with 1,000 and 2,000 erythrocytes) in a South American freshwater turtle species, Podocnemis expansa, Schweigger 1812 (Reptilia, Pelomedusidae). Direct WBC counts using the Natt and Herrick method showed limitations, which are discussed here. WBC and TT counts using 1,000 erythrocytes from blood smears are not recommended for Amazon turtles or other reptilian species, since wide variation in counts can be observed. Estimation methods for determining WBC and TT based on 2,000 erythrocytes in blood smears were the most acceptable because they allow a differentiation between leukocytes and thrombocytes and also showed smaller variation. The methods investigated here for the Amazon turtle, which have been widely used in other reptile species, provided evidence that the most acceptable method is not the one using diluted stains and a hemocytometer.
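The smear-based estimation works by simple proportion: cells counted per 2,000 erythrocytes scanned on the smear, scaled by the animal's absolute erythrocyte count. A sketch with hypothetical figures (not measurements from this study):

```python
def estimate_from_smear(cells_counted, erythrocytes_scanned, rbc_per_ul):
    """Indirect cell count: cells seen per N erythrocytes on the smear,
    scaled by the absolute erythrocyte count per microliter.
    All figures in the example below are hypothetical."""
    return cells_counted / erythrocytes_scanned * rbc_per_ul

# e.g. 45 leukocytes seen while scanning 2,000 erythrocytes,
# with an absolute count of 400,000 erythrocytes per microliter
wbc = estimate_from_smear(cells_counted=45, erythrocytes_scanned=2000,
                          rbc_per_ul=400_000)  # 9000 cells per microliter
```

Scanning 2,000 rather than 1,000 erythrocytes doubles the denominator of the proportion, which is why the variance of the estimate shrinks.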
Abstract:
The objective of this study is the empirical identification of the monetary policy rules pursued in individual EU countries before and after the launch of the European Monetary Union. In particular, we estimate an augmented version of the Taylor rule (TR) for 25 EU countries in two periods (1992-1998, 1999-2006). While single-equation estimation methods have been used to identify the policy rules of individual central banks, a dynamic panel setting has been employed for the rule of the European Central Bank. We find that most central banks indeed followed some interest rate rule, but its form was usually different from the original TR (which proposes that the domestic interest rate responds only to the domestic inflation rate and output gap). Crucial features of the policy rules in many countries were the presence of interest rate smoothing as well as a response to the foreign interest rate. Any response to domestic macroeconomic variables was missing in the rules of countries with inflexible exchange rate regimes, whose rules consisted in mimicking foreign interest rates. While we find a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices is generally negligible. The Taylor principle (the response of interest rates to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) is confirmed only in large economies and in economies troubled by unsustainable inflation rates. Finally, deviations of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks (these deviations often coincided with turbulent periods).
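An augmented Taylor rule with interest rate smoothing can be estimated by OLS once the lagged rate is included as a regressor. A sketch on simulated data; the coefficients are assumptions of the simulation, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 300
infl = rng.normal(2.0, 1.0, T)   # inflation rate
gap = rng.normal(0.0, 1.0, T)    # output gap

# simulate i_t = rho*i_{t-1} + (1-rho)*(alpha + beta*infl_t + gamma*gap_t) + eps
rho, alpha, beta, gamma = 0.8, 1.0, 1.5, 0.5
i = np.zeros(T)
for t in range(1, T):
    target = alpha + beta * infl[t] + gamma * gap[t]
    i[t] = rho * i[t - 1] + (1 - rho) * target + rng.normal(0, 0.1)

# estimate the smoothed rule by OLS on [1, i_{t-1}, infl_t, gap_t]
X = np.column_stack([np.ones(T - 1), i[:-1], infl[1:], gap[1:]])
coef, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
rho_hat = coef[1]                      # smoothing parameter
beta_hat = coef[2] / (1 - rho_hat)     # long-run inflation response
```

The Taylor principle corresponds to `beta_hat > 1`: dividing the short-run inflation coefficient by (1 − ρ) recovers the long-run response that the principle refers to.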
Abstract:
Aim: Duration of untreated psychosis (DUP) refers to the time elapsing between psychosis onset and treatment initiation. Despite a certain degree of consensus regarding the definition of psychosis onset, the definition of treatment commencement varies greatly between studies, and DUP may be underestimated due to this lack of agreement. In the present study, three sets of criteria to define the end of the untreated period were applied in a first-episode psychosis cohort to assess the impact of the choice of definition on DUP estimation. Methods: The DUP of 117 patients admitted to the Treatment and Early Intervention in Psychosis Program in Lausanne was measured using the following sets of criteria to define treatment onset: (i) initiation of antipsychotic medication; (ii) entry into a specialized programme; and (iii) entry into a specialized programme and adequate medication with good compliance. Results: DUP varied greatly according to the definitions, with the most restrictive criteria leading to the longest DUP (median DUP1 = 2.2 months, DUP2 = 7.4 months and DUP3 = 13.6 months). The 19.7% of patients who did not meet the most restrictive criteria had poorer premorbid functioning and were more likely to use cannabis. Longer DUP3 was associated with poorer premorbid functioning and with younger age at onset of psychosis. Conclusion: These results underline the need for a unique and standardized definition of the end of DUP. We suggest that the most restrictive definition of treatment be used when applying the DUP concept in future research.
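The three definitions differ only in which event is taken to end the untreated period; computing each DUP is then simple date arithmetic. A sketch for one hypothetical patient (all dates invented):

```python
from datetime import date

def dup_months(onset, treatment_onset):
    """Duration of untreated psychosis in (approximate) months,
    using the mean month length of 30.44 days."""
    return (treatment_onset - onset).days / 30.44

# hypothetical patient: each successive criterion is met later
onset = date(2020, 1, 10)
dup1 = dup_months(onset, date(2020, 3, 15))   # (i) first antipsychotic
dup2 = dup_months(onset, date(2020, 8, 1))    # (ii) entry into the programme
dup3 = dup_months(onset, date(2021, 2, 20))   # (iii) adequate, compliant medication
```

By construction `dup1 <= dup2 <= dup3` for any patient, which is why the most restrictive criterion yields the longest median DUP in the cohort.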
Abstract:
Customer satisfaction and retention are key issues for organizations in today's competitive marketplace. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the inception of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate CSI models in preference to structural equation models (SEM) because it does not rely on strict assumptions about the data. However, this choice was based on some misconceptions about the use of SEMs and did not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, the SEM and PLS approaches are compared by evaluating perceptions of the Isle of Man Post Office's products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
Resumo:
Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p.37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns of external validity. In the context of this thesis, potential concerns are for example endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2 or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful to gain insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. 
In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit the quasi-experimental change in the entitlement of the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly by containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that allows us to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative on reservation wage movements over the unemployment spell. The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that calls for an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. 
The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency on top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches - such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we discuss the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
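The difference-in-differences logic used in the first chapter can be sketched on simulated data; the group means and the +0.5 "treatment effect" below are assumptions of the simulation, not results from the thesis:

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Difference-in-differences: the change for the treated group minus
    the change for the controls, which removes common time trends under
    the parallel-trends assumption."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) - \
           (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

rng = np.random.default_rng(5)
# hypothetical earnings: common trend of +1.0, true treatment effect of +0.5
ctrl_pre = rng.normal(10.0, 1.0, 400)
ctrl_post = rng.normal(11.0, 1.0, 400)
treat_pre = rng.normal(9.5, 1.0, 400)
treat_post = rng.normal(11.0, 1.0, 400)
effect = did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post)
```

Neither the raw post-period gap between groups nor the treated group's own change equals the effect; only the double difference nets out both the level difference and the common trend.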
Abstract:
Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
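The distribution-free idea - a consistent estimate of the asymptotic variance matrix that does not lean on normality - is the same sandwich construction familiar from regression. A sketch for OLS with heteroskedastic, heavy-tailed errors (an illustrative model, not the paper's errors-in-variables example):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
# heteroskedastic, non-normal errors: the classical s.e. formula is wrong here
e = rng.standard_t(df=5, size=n) * (1 + np.abs(x))
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)
# classical standard errors (assume homoskedastic errors)
se_classical = np.sqrt(np.diag(XtX_inv) * (resid @ resid) / (n - 2))
# sandwich: distribution-free estimate of the asymptotic variance
meat = X.T @ (X * resid[:, None] ** 2)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
```

Here the robust slope standard error exceeds the classical one because the error variance grows with |x|; relying on the classical formula would overstate precision, which is the failure mode the paper's robust statistics guard against in the mean-and-covariance-structure setting.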