966 results for hybrid weak form


Relevância:

80.00%

Publicador:

Resumo:

In many industries, such as petroleum production and the petrochemical, metal, food and cosmetics industries, wastewaters containing an emulsion of oil in water are often produced. The emulsions consist of water (up to 90%), oils (mineral, animal, vegetable and synthetic), surfactants and other contaminants. In view of its toxic nature and its deleterious effects on the surrounding environment (soil, water), such wastewater needs to be treated before release into natural waterways. Membrane-based processes have been applied successfully in industry and are considered possible candidates for the treatment of oily wastewaters. Easy operation, lower cost and, in some cases, the ability to reduce contaminants below existing pollution limits are the main advantages of these systems. The main drawback of membranes is flux decline due to fouling and concentration polarisation. The complexity of oil-containing systems demands complementary studies on the mitigation of fouling and concentration polarisation in membrane-based ultrafiltration. In this thesis the effect of different operating conditions (factors) on the ultrafiltration of oily water is studied. Important factors are normally correlated and their effects should therefore be studied simultaneously. This work uses a novel approach to study different operating conditions (pressure, flow velocity and temperature) and solution properties (oil concentration for cutting oil, diesel and kerosene; pH; and salt concentration for CaCl2 and NaCl) in the ultrafiltration of oily water, simultaneously and systematically, using an experimental design approach. A hypothesis is developed to describe the interaction between the oil drops, the salt and the membrane surface. The optimum conditions for ultrafiltration and the contribution of each factor in the ultrafiltration of oily water are evaluated.
It was found that the effect of the various factors on permeate flux depended strongly on the type of oil, the type of membrane and the amount of salt. The thesis demonstrates that a system containing oil is very complex, and that fouling and flux decline can be observed even at very low pressures. This means that only the weak form of the critical flux exists for such systems. The cleaning of the fouled membranes and the influence of different parameters (flow velocity, temperature, time, pressure and chemical concentration (SDS, NaOH)) were also evaluated in this study. It was observed that fouling, and consequently cleaning, behaved differently for the membranes studied. Of these, the membrane with the lowest propensity for fouling, and the one most easily cleaned, was the regenerated cellulose membrane (C100H). To obtain more information about the interaction between the membrane and the components of the emulsion, a streaming potential study was performed on the membrane. The experiments were carried out at different pH values and oil concentrations. It was seen that oily water changed the surface charge of the membrane significantly. Measuring and analysing the surface charge and the streaming potential during the different stages of filtration constitutes a new method, introduced in this thesis, for studying oil fouling. The surface charge varied between the stages of filtration. It was found that the surface charge of a cleaned membrane was not the same as that of the initial membrane, even though its permeability was equal to that of a virgin membrane. The effect of filtration mode was studied by performing the filtration in both cross-flow and dead-end mode. The effect of salt on performance was considered in both studies; it was found that salt decreased the permeate flux even at low concentrations. To test the effect of a change in hydrophilicity, the commercial membranes used in this thesis were modified by grafting PNIPAAm onto their surfaces.
A new technique (corona treatment) was used for this modification. The effect of the modification on permeate flux and retention was evaluated. The modified membranes changed their pore size around 33 °C, resulting in different retention and permeability. The results obtained in this thesis can be applied to optimise the operation of a membrane plant under normal or shock conditions, or to modify the process such that it becomes more efficient or effective.

Resumo:

The purpose of the thesis is to analyze whether the returns of the general stock market indices of Estonia, Latvia and Lithuania follow the random walk hypothesis (RWH) and, in addition, whether they are consistent with the weak-form efficiency criterion. The existence of the day-of-the-week anomaly in the same regional markets is also examined. The data consist of daily closing quotes of the OMX Tallinn, Riga and Vilnius total return indices for the sample period from January 3, 2000 to August 28, 2009; the full sample period is also divided into two sub-periods. The RWH is tested by applying three quantitative methods: the Augmented Dickey-Fuller unit root test, a serial correlation test and the non-parametric runs test. Ordinary Least Squares (OLS) regression with dummy variables is employed to detect day-of-the-week anomalies. The RWH is rejected in the Estonian and Lithuanian stock markets. The Latvian stock market exhibits more efficient behaviour, although some evidence of inefficiency is also found, mostly during the first sub-period from 2000 to 2004. Day-of-the-week anomalies are detected in every stock market examined, though no longer during the later sub-period.
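Of the three methods, the non-parametric runs test is simple enough to sketch directly. A minimal illustration follows; the function name and the sign-of-return classification (the thesis may instead split at the median) are assumptions of this sketch:

```python
from math import sqrt

def runs_test(returns):
    """Non-parametric runs test for randomness.

    Classifies each return as non-negative (1) or negative (0),
    counts the number of runs of equal symbols, and compares it with
    the count expected under randomness.  Returns (runs, z-statistic);
    z is approximately N(0, 1) under the random walk null.
    """
    signs = [1 if r >= 0 else 0 for r in returns]
    n1 = sum(signs)             # non-negative returns
    n2 = len(signs) - n1        # negative returns
    n = n1 + n2
    # a new run starts at every sign change
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    expected = 2.0 * n1 * n2 / n + 1.0
    variance = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - expected) / sqrt(variance)
    return runs, z
```

A z-statistic far below zero means fewer runs than expected (positive serial dependence), evidence against the random walk.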

Resumo:

The best operating conditions, using the critical flux concept during the ultrafiltration of skimmed milk, were evaluated for tubular membranes. It was found that irreversible fouling was greatly reduced by operating at or below the critical flux, but was not totally eliminated. The critical flux of skimmed milk was found to be of the weak form. The critical flux at a cross-flow velocity of 3.4 m s⁻¹ was 56.9 kg m⁻² h⁻¹ for the 200 kDa MWCO membrane and 45 kg m⁻² h⁻¹ for the 25 kDa MWCO membrane, suggesting that membrane pore size influenced the flux. The critical flux increased with increasing wall shear stress and decreased with increasing protein concentration. The empirical equations for predicting the critical flux J_crit for skimmed milk, with a protein concentration c_b in the range 3-7% w/w and a wall shear stress tau_w in the range 7-60 Pa, were J_crit = 5.1 (tau_w/c_b) for the 200 kDa and J_crit = 4.0 (tau_w/c_b) for the 25 kDa MWCO membranes, respectively. In general, the rejections of protein and lactose at the critical flux were not affected by protein concentration, wall shear stress or the membrane used, and they were similar to those found when operating at the limiting flux.
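The empirical correlations above are straightforward to apply. A small sketch (the function name and the explicit range check are illustrative choices, not part of the source):

```python
def critical_flux(wall_shear_stress_pa, protein_conc_wt, k):
    """Empirical critical flux J_crit = k * (tau_w / c_b) in kg m^-2 h^-1.

    k = 5.1 for the 200 kDa MWCO membrane, 4.0 for the 25 kDa one;
    the fit is valid for c_b in 3-7 % w/w and tau_w in 7-60 Pa.
    """
    if not (3.0 <= protein_conc_wt <= 7.0 and 7.0 <= wall_shear_stress_pa <= 60.0):
        raise ValueError("outside the fitted range of the correlation")
    return k * wall_shear_stress_pa / protein_conc_wt
```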

Resumo:

This thesis considers Participatory Crop Improvement (PCI) methodologies and examines the reasons behind their continued contestation and limited mainstreaming in conventional modes of crop improvement research within National Agricultural Research Systems (NARS). In particular, it traces the experiences of a long-established research network with over 20 years of experience in developing and implementing PCI methods across South Asia, and specifically considers its engagement with the Indian NARS and associated state-level agricultural research systems. In order to address the issues surrounding PCI institutionalisation processes, a novel conceptual framework was derived from a synthesis of the literatures on Strategic Niche Management (SNM) and Learning-based Development Approaches (LBDA) to analyse the socio-technical processes and structures which constitute the PCI ‘niche’ and NARS ‘regime’. In examining the niche and regime according to their socio-technical characteristics, the framework provides explanatory power for understanding the nature of their interactions and the opportunities and barriers that exist with respect to the translation of lessons and ideas between niche and regime organisations. The research shows that in trying to institutionalise PCI methods and principles within NARS in the Indian context, PCI proponents have encountered a number of constraints related to the rigid and hierarchical structure of the regime organisations; the contractual mode of most conventional research, which inhibits collaboration with a wider group of stakeholders; and the time-limited nature of PCI projects themselves, which limits investment and hinders scaling up of the innovations. 
It also reveals that while the niche projects may be able to induce a ‘weak’ form of PCI institutionalisation within the Indian NARS, by helping to alter their institutional culture to be more supportive of participatory plant breeding approaches and of future collaboration with PCI researchers, a ‘strong’ form of PCI institutionalisation, in which NARS organisations adopt participatory methodologies across their entire crop improvement agenda, is likely to remain beyond the capacity of PCI development projects to deliver.

Resumo:

The aim of this study is to examine whether technical analysis adds value to investment decisions. Four technical trading systems were tested against confidence intervals constructed with the Bootstrap resampling technique and consistent with the null hypothesis of weak-form market efficiency. More specifically, the results of each system applied to the original price series of the assets were obtained. These results were then compared with the average of the results obtained when the same systems were applied to 1,000 series of each asset simulated as a random walk. If markets were weak-form efficient, there would be no reason for the results on the original series to be superior to those on the simulated series. The empirical results suggest that the systems tested were not able to anticipate the future using only past data, although some of them generated substantial returns.
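The simulation step can be sketched as follows: log-returns of the original series are resampled with replacement, which preserves their marginal distribution while destroying any serial dependence, which is exactly what a weak-form-efficient benchmark requires. Names and resampling details are assumptions of this sketch, not taken from the study:

```python
import random
from math import exp, log

def simulate_random_walks(prices, n_sims=1000, seed=0):
    """Generate bootstrap random-walk price paths from an observed series.

    Log-returns are resampled with replacement and re-cumulated from
    the first observed price, giving paths consistent with the weak-form
    efficiency null.
    """
    rng = random.Random(seed)
    rets = [log(b / a) for a, b in zip(prices, prices[1:])]
    sims = []
    for _ in range(n_sims):
        resampled = [rng.choice(rets) for _ in rets]
        path, level = [prices[0]], prices[0]
        for r in resampled:
            level *= exp(r)
            path.append(level)
        sims.append(path)
    return sims
```

A trading system's profit on the original series can then be compared with the distribution of its profit over the simulated paths; under weak-form efficiency the original result should not fall in the upper tail.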

Resumo:

It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced rank restrictions) before additional tests on the values of parameters. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001), which represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced rank models. Their performance is evaluated in a Monte Carlo exercise.
Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
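As a simple illustration of the Beveridge-Nelson idea in a univariate setting (an expository assumption; the paper works with multivariate models and common-cycle restrictions), suppose the first difference of y_t follows an AR(1). The permanent component then has a closed form:

```python
def bn_permanent(y, phi, mu=0.0):
    """Beveridge-Nelson permanent component when the first difference
    follows an AR(1): Delta y_t - mu = phi*(Delta y_{t-1} - mu) + eps_t.

    tau_t = y_t + phi/(1 - phi) * (Delta y_t - mu); the transitory
    (cycle) component is y_t - tau_t.  Returned for t = 1..len(y)-1.
    """
    assert abs(phi) < 1.0, "stationarity of the differences is required"
    gain = phi / (1.0 - phi)
    return [y[t] + gain * ((y[t] - y[t - 1]) - mu) for t in range(1, len(y))]
```

The gain phi/(1 - phi) is the sum of the impulse responses of future differences to the current shock, which is why tau_t can be read as the long-horizon forecast of y.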

Resumo:

Motivated by the debate involving structural and reduced-form models, in this article we propose an empirical approach aimed at determining whether the imposition of structural restrictions improves forecasting power vis-à-vis unrestricted or partially restricted models. To answer this question, we produce forecasts using aggregate data on U.S. stock prices and dividends. To that end, we exploit the cointegration restriction, the weak-form common-cycle restriction and the restrictions on the VECM parameters imposed by the Present Value model. We use the Giacomini and White (2006) test of equal conditional predictive ability to compare the forecasts produced by this model with those of less restricted models. Overall, we find that the partially restricted models performed best, whereas the fully restricted PV model did not achieve the same success.
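For illustration, an unconditional test of equal predictive ability in the Diebold-Mariano style can be sketched as below. This is a deliberately simplified analogue of the Giacomini-White (2006) conditional test, assuming serially uncorrelated loss differentials; the names are illustrative:

```python
from math import sqrt

def dm_statistic(loss_a, loss_b):
    """Unconditional test of equal predictive ability.

    d_t = loss_a_t - loss_b_t; the statistic mean(d) / se(mean(d))
    is approximately N(0, 1) under the null of equal expected loss
    (valid here only if the d_t are serially uncorrelated).
    """
    d = [a - b for a, b in zip(loss_a, loss_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / sqrt(var / n)
```

A large positive statistic says model B forecasts better (lower loss); the conditional Giacomini-White version additionally interacts d_t with past information to test conditional, not just average, superiority.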

Resumo:

The aim of this study is to back-test the Magic Formula on the Bovespa, gathering evidence on violations of the Efficient Market Hypothesis in the Brazilian market. Developed by Joel Greenblatt, the Magic Formula is a portfolio-formation methodology that consists of picking stocks with high ROICs and earnings yields, following the Value Investing philosophy. Several portfolios were formed between December 2002 and May 2014 using different combinations of the number of assets per portfolio and holding periods. All portfolios, regardless of the number of assets or the holding period, outperformed the Ibovespa. The differences between the portfolios' CAGRs and that of the Ibovespa were significant, with the worst-performing portfolio achieving a CAGR of 27.7% against 14.1% for the Ibovespa. The portfolios also achieved positive results after risk adjustment: the worst return-to-volatility ratio was 1.2, compared with 0.6 for the Ibovespa. Portfolios with the worst scores also performed well in most scenarios, contradicting the initial expectations and the results observed in other studies. Additionally, simulations were run over several 5-year periods to analyse the robustness of the results. All portfolios showed a higher CAGR than the Ibovespa in every period simulated, regardless of the number of assets included or the holding periods. These results indicate that it is possible to achieve above-market returns in Brazil using only public historical data, a violation of the weak form of the Efficient Market Hypothesis.
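The portfolio-formation rule can be sketched as follows. This is a minimal illustration of Greenblatt's combined-rank idea, not the exact screening used in the study (the data fields and tie-breaking by insertion order are assumptions):

```python
def magic_formula_rank(stocks, top_n):
    """Rank stocks by the Magic Formula.

    `stocks` maps ticker -> (roic, earnings_yield).  Each stock gets
    one rank for ROIC and one for earnings yield (both descending: 0 is
    best); the portfolio is the `top_n` tickers with the lowest
    combined rank.
    """
    by_roic = sorted(stocks, key=lambda t: stocks[t][0], reverse=True)
    by_ey = sorted(stocks, key=lambda t: stocks[t][1], reverse=True)
    score = {t: by_roic.index(t) + by_ey.index(t) for t in stocks}
    return sorted(stocks, key=lambda t: score[t])[:top_n]
```

In the back-test above, portfolios built this way would be held for the chosen holding period and then re-formed from updated fundamentals.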

Resumo:

This study aims to verify whether the options market for Petrobras PN (PETR4) is weak-form inefficient, that is, whether or not public information is reflected in asset prices. To this end, we attempt to earn systematic profits through a Delta-Gamma-Neutral strategy using the company's preferred stock and call options. This stock was chosen because its options were highly liquid throughout the period studied (October 1, 2012 to March 31, 2013). For the study, the buy and sell orders sent for both the underlying asset and the options were considered, in order to reconstruct the actual order book of every instrument at five-minute intervals. The strategy was deployed whenever distortions were observed between the implied volatility, computed with the Black & Scholes model, and the volatility computed by exponential smoothing (EWMA, Exponentially Weighted Moving Average). The results show that the Petrobras options market is not weak-form efficient: of the 371 trades carried out during the period, 85% were profitable, with an average return of 0.49% and an average trade duration of slightly under one hour and thirteen minutes.
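The EWMA volatility referred to above follows the standard recursion. A minimal sketch (the decay factor and the initialisation are assumptions of this illustration; the study does not state them):

```python
from math import sqrt

def ewma_volatility(returns, lam=0.94, init_var=None):
    """EWMA variance recursion:

        sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2

    lam = 0.94 is the classic RiskMetrics daily decay.  Returns the
    volatility (standard deviation) forecast after processing all
    returns; the first squared return seeds the recursion by default.
    """
    var = init_var if init_var is not None else returns[0] ** 2
    for r in returns:
        var = lam * var + (1.0 - lam) * r * r
    return sqrt(var)
```

In the strategy above, a persistent gap between this statistical volatility and the Black & Scholes implied volatility is the trading signal: the option is bought or sold while delta and gamma exposures are hedged away with the stock and a second option.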

Resumo:

Stress recovery techniques have been an active research topic since 1987, when Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of the stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure have been proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and to solve the resulting equations by a least-squares scheme. More recently, another procedure, based on the minimization of complementary energy and called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented, and the procedure is improved with the aim of obtaining convergent second-order derivatives of the stress resultants. To achieve this result, two different strategies, and their combination, have been tested. The first is to consider larger patches, in the spirit of what is proposed in [4]; the second is to perform a second recovery on the recovered stresses. Numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Squares Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution.
In fact, it has been observed that the major part of the error affecting the stress resultants is introduced when the shape functions are differentiated in order to obtain the strain components from the displacements. The procedure proves to be ultraconvergent and is extremely cost effective, as its only input is the nodal displacements coming directly from the finite element solution, avoiding any other post-processing otherwise needed to obtain the stress resultants by the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. It can be seen that the accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most of the available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles, respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
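The common ingredient of these patch procedures, a local least-squares fit of sampled stresses over a patch, can be sketched in one dimension. This is a didactic illustration with an assumed linear polynomial basis, not the thesis's implementation:

```python
import numpy as np

def recover_patch_stress(sample_pts, sample_vals, eval_pts):
    """Least-squares patch recovery sketch (1D, SPR-style).

    A linear polynomial a0 + a1*x is fitted in the least-squares sense
    to stresses sampled at (super)convergent points of a patch, then
    evaluated at the requested (e.g. nodal) locations, where the raw
    finite element stresses are least accurate.
    """
    A = np.column_stack([np.ones(len(sample_pts)), np.asarray(sample_pts, float)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(sample_vals, float), rcond=None)
    return coeffs[0] + coeffs[1] * np.asarray(eval_pts, float)
```

By construction, a stress field that is exactly linear over the patch is reproduced exactly at the nodes, which is the consistency property the recovery estimators rely on.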

Resumo:

The three-spectrometer facility at the Institut für Kernphysik in Mainz was extended by an additional spectrometer, which is distinguished by its short length and is therefore called the Short-Orbit Spectrometer (SOS). At the nominal distance of the SOS from the target (66 cm), the detected particles travel a mean path length of 165 cm between the reaction point and the detector. For pion production near threshold, this raises the survival probability of charged pions with a momentum of 100 MeV/c from 15% to 73% compared with the large spectrometers. Accordingly, the systematic error ("muon contamination"), for instance in the planned measurement of the weak form factors G_A(Q²) and G_P(Q²), is reduced significantly. The focus of this work is the drift chamber of the SOS. Its low material budget (0.03% X_0), chosen to reduce small-angle scattering, is optimised for the detection of low-energy pions. Because of the novel geometry of the detector, dedicated software for track reconstruction, efficiency determination, etc. had to be developed. A convenient method for calibrating the drift distance-drift time relation, which is represented by cubic splines, was implemented. The resolution of the tracking detector in the dispersive plane is 76 µm for the position coordinate and 0.23° for the angular coordinate (most probable error), and correspondingly 110 µm and 0.29° in the non-dispersive plane. To trace the detector coordinates back to the reaction point, the inverse transfer matrix of the spectrometer was determined. For this purpose, electrons quasi-elastically scattered off protons in the ¹²C nucleus were used, whose initial angles were defined by a hole collimator. This yields experimental values for the mean angular resolution at the target of sigma_phi = 1.3 mrad and sigma_theta = 10.6 mrad.
Since the momentum calibration of the SOS can only be carried out by means of quasi-elastic scattering (a two-arm experiment), the contribution of the proton arm to the width of the missing-mass peak has to be estimated in a Monte Carlo simulation and folded out. For now it can only be stated that the momentum resolution is certainly better than 1%.
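The quoted survival probabilities follow from the relativistic decay law P = exp(-L / (βγ·cτ)). A small sketch using standard PDG values for the charged pion (the constants are assumptions of this illustration, not values taken from the thesis):

```python
from math import exp

# Assumed PDG constants for the charged pion
M_PI = 139.57    # pion mass, MeV/c^2
C_TAU = 7.8045   # c times the pion mean lifetime, m

def pion_survival(path_length_m, momentum_mev):
    """Probability that a charged pion survives a flight path.

    P = exp(-L / (beta*gamma*c*tau)) with beta*gamma = p / (m*c),
    so the lab-frame decay length grows linearly with momentum.
    """
    decay_length = (momentum_mev / M_PI) * C_TAU
    return exp(-path_length_m / decay_length)
```

pion_survival(1.65, 100.0) gives roughly 0.74, consistent with the 73% quoted above for the 165 cm path of the SOS at 100 MeV/c.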

Resumo:

A comparison of the German and Swiss broadcasting orders under the aspect of dualism

1. Introduction: meaning and foundations of "dualism"
2. The "dual system" in the German broadcasting order
2.1 The genesis of the "dual system": historical and legal framework
2.2 The current design of the "dual system"
2.3 The "dual system" in the European arena: influences and requirements of European law
3. The "dual system" in the Swiss broadcasting order
3.1 The genesis of the "dual system": historical and legal framework
3.2 The current design of the "dual system"
3.3 A comparative view of the different forms of the "dual system" within the revision of the RTVG
4. A comparative view of the "dual systems"
4.1 Historical and legal framework
4.2 The specific characteristics of the Swiss broadcasting market
4.3 The individual elements of the broadcasting order
5. Conclusion

In broadcasting law, a dual system means the coexistence of private and public-service broadcasters. The broadcasting order laid down in the constitution of the Federal Republic of Germany has essentially been shaped by the case law of the Federal Constitutional Court. The dual system that has grown out of these requirements consists of a strong public-service broadcasting sector, whose position is privileged by its primary funding through licence fees. In return, it is assigned the central task of securing basic provision ("Grundversorgung"). Alongside it operate the private broadcasters, which finance themselves from advertising revenue and user fees and are to that extent exposed to market competition to a greater degree. At the European level, the protection of pluralism and diversity of opinion falls primarily within the competence of the member states.
The media landscapes of the member states are shaped by manifold peculiarities and traditions, which are precisely what is to be preserved. The design of the dual system in the European framework therefore raises concerns only with regard to the funding of public-service broadcasters from public resources and the resulting distortion of competition. With the Radio and Television Act of 1991, a dual broadcasting system was introduced in Switzerland: the trustee model was complemented by the market model. However, the dual system for radio and television in Switzerland applied only in the weakened form of state-ordered competition. A three-level model existed that largely avoided direct competition between the national umbrella organisation SRG (Schweizerische Rundfunkgesellschaft) and private companies. The main public-service obligation lay with the SRG, which also received the licence fees. In addition, however, all broadcasters were obliged to provide public-service content; in return, the legislator provided for fee splitting in weak-market regions. The new RTVG is intended to guarantee the existence and further development of the service public. Instead of a sharp separation between fee-financed and advertising-financed providers with correspondingly different functions in the media system, however, the electronic media in Switzerland are to be subsidised on a large scale and increasingly steered by performance mandates. At the local level in particular, an expansion of fee splitting is envisaged. In future, not just one but a multitude of broadcasters is to be entrusted with basic provision; the regional service public in particular is to be provided by private broadcasters and the SRG. An obligation on all private broadcasters, however, is not envisaged.
This master's thesis is further intended to work out the differences that individual national broadcasting systems, and thus also broadcasting-policy models, can exhibit despite sharing the same basic idea, here dualism. The models must always be seen in the specific political and cultural context out of which they have historically grown. The comparison is intended, on the one hand, to set out the problems that are inherent in broadcasting models in more or less pronounced form regardless of their design (defining basic provision and the service public; scarcity of resources; crises of the dual system). On the other hand, the specific problems of Switzerland arising from its multilingual, small-state structure are to be highlighted (the high audience share of foreign, predominantly German-language television programmes; multilingualism; the small scale of audience, listener and advertising markets).

Resumo:

This paper aims to examine the market efficiency of the commodity futures market in India, which has been growing phenomenally for the last few years. We estimate the long-run equilibrium relationship between the multi-commodity futures and spot prices and then test for market efficiency in a weak form sense by applying both the DOLS and the FMOLS methods. The entire sample period is from 2 January 2006 to 31 March 2011. The results indicate that a cointegrating relationship is found between these indices and that the commodity futures market seems to be efficient only during the more recent sub-sample period since July 2009.
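The long-run equilibrium step can be illustrated with a plain Engle-Granger-style regression (a simplified sketch; the paper itself estimates the relation by DOLS and FMOLS, which correct the OLS step for endogeneity and serial correlation):

```python
import numpy as np

def cointegration_check(spot, futures):
    """First-pass cointegration check between spot and futures indices.

    OLS of futures on spot gives the candidate long-run relation; an
    AR(1) coefficient of the residuals well below 1 suggests mean
    reversion, i.e. cointegration (a formal test would compare an ADF
    statistic with its critical values).
    """
    X = np.column_stack([np.ones(len(spot)), np.asarray(spot, float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(futures, float), rcond=None)
    resid = np.asarray(futures, float) - X @ beta
    rho = float(resid[:-1] @ resid[1:] / (resid[:-1] @ resid[:-1]))
    return beta, resid, rho
```

Under weak-form efficiency the cointegrating slope should also be close to one and the residual (the basis) should carry no exploitable predictability.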

Resumo:

This Doctoral Thesis deals with the introduction of the Bernstein Partition of Unity into the Galerkin weak form to solve boundary value problems in the field of structural analysis. The family of Bernstein basis functions constitutes a spanning set of the space of polynomial functions that allows the construction of numerical approximations that do not require a mesh: the shape functions, which are globally supported, are determined only by the selected approximation order and by the parametrization or mapping of the domain, with the nodal positions implicitly defined. The exposition of the formulation is preceded by a literature review which starts from the Finite Element Method and covers the main techniques for solving Partial Differential Equations without a mesh, including the so-called Meshless Methods and the spectral methods. In this context, the Bernstein-Galerkin approximation is validated on classic one- and two-dimensional benchmarks of Structural Mechanics. Implementation aspects such as consistency, reproduction capability, the non-interpolating nature at boundaries, the h-p refinement strategy and the coupling with other numerical approximations are studied. An important part of the investigation focuses on computational optimization strategies, mainly regarding the reduction of the CPU cost associated with generating and operating on full matrices. Finally, the method is applied to two reference cases of aeronautical structures: the stress analysis of an angle bracket made of anisotropic material, and the evaluation of the stress intensity factors of Fracture Mechanics by means of a model in which a Bernstein Partition of Unity is coupled to a finite element mesh.
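The Bernstein basis at the heart of the formulation is easy to write down, and the key partition-of-unity property (the functions sum to one for any x in [0, 1]) is what makes it usable as a Partition of Unity in the Galerkin weak form. A minimal sketch:

```python
from math import comb

def bernstein_basis(n, x):
    """Bernstein basis of order n evaluated at x in [0, 1]:

        B_{i,n}(x) = C(n, i) * x**i * (1 - x)**(n - i),  i = 0..n

    The functions are non-negative and, by the binomial theorem,
    (x + (1 - x))**n = 1, so they form a partition of unity.
    """
    return [comb(n, i) * x ** i * (1.0 - x) ** (n - i) for i in range(n + 1)]
```

Only the end functions interpolate (B_{0,n}(0) = B_{n,n}(1) = 1), which is why essential boundary conditions need special treatment in the Bernstein-Galerkin scheme, as discussed above.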

Resumo:

A non-local gradient-based damage formulation within a geometrically non-linear setting is presented. The hyperelastic constitutive response at the local material point level is governed by a strain energy which is additively composed of an isotropic matrix part and an anisotropic fibre-reinforced part, respectively. The inelastic constitutive response is governed by a scalar [1-d]-type damage formulation, where only the anisotropic elastic part is assumed to be affected by the damage. Following the concept in Dimitrijević and Hackl [28], the local free energy function is enhanced by a gradient term. This term essentially contains the gradient of the non-local damage variable which, itself, is introduced as an additional independent variable. In order to guarantee the equivalence between the local and the non-local damage variable, a penalisation term is incorporated within the free energy function. Based on the principle of minimum total potential energy, a coupled system of Euler-Lagrange equations, i.e., the balance of linear momentum and the balance of the non-local damage field, is obtained and solved in weak form. The resulting coupled, highly non-linear system of equations is symmetric and can conveniently be solved by a standard incremental-iterative Newton-Raphson-type solution scheme. Several three-dimensional displacement- and force-driven boundary value problems, partially motivated by biomechanical applications, highlight the mesh-objective characteristics and constitutive properties of the model and illustratively underline the capabilities of the proposed formulation.
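Based on the description above, the enhanced free energy presumably takes a form along the following lines (a sketch: the symbols phi for the non-local damage field, c_d for the gradient parameter and beta_d for the penalty parameter are notational assumptions, not taken from the abstract):

```latex
\psi(\boldsymbol{C}, d, \phi, \nabla\phi) =
  \psi_{\mathrm{mat}}(\boldsymbol{C})
  + [1-d]\,\psi_{\mathrm{fib}}(\boldsymbol{C})
  + \frac{c_d}{2}\,\nabla\phi \cdot \nabla\phi
  + \frac{\beta_d}{2}\,[\phi - d]^2
```

Stationarity of the total potential energy with respect to the displacements and with respect to phi then yields the coupled Euler-Lagrange equations mentioned in the text, with the penalty term driving phi towards the local damage variable d.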