895 results for Galerkin weak form


Relevance: 100.00%

Publisher:

Abstract:

The objective of this thesis is to examine the weak-form efficiency of the Russian, Slovak, Czech, Romanian, Bulgarian, Hungarian and Polish stock markets. The study is quantitative, and the daily index closing values were collected from the Datastream database. The data cover the period from the first trading day of each exchange to the end of August 2006. To sharpen the analysis, the data were examined over the full sample as well as over two sub-periods. Market efficiency is tested with four statistical methods, including an autocorrelation test and the non-parametric runs test. A further objective is to determine whether a day-of-the-week anomaly is present in these markets; its presence is examined using ordinary least squares (OLS) regression. A day-of-the-week anomaly is found in all of the above-mentioned markets except the Czech market. Significant positive or negative autocorrelation is found in all markets, and the Ljung-Box test likewise indicates inefficiency in every market over the full period. The runs test rejects the random walk for all markets except Slovakia, at least over the full sample and the first sub-period. Nor are the returns normally distributed for any index or period. These findings indicate that the markets in question are not weak-form efficient.
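
For illustration, here is a minimal sketch (not taken from the thesis) of the non-parametric Wald-Wolfowitz runs test mentioned above, with returns classified by sign; the cutoff and the simulated series are illustrative stand-ins for the Datastream index data.

```python
import numpy as np
from scipy.stats import norm

def runs_test(returns, cutoff=0.0):
    """Wald-Wolfowitz runs test for randomness of a return series.

    Returns the z-statistic and two-sided p-value; a significant z
    rejects the random-walk (weak-form efficiency) null."""
    r = np.asarray(returns, dtype=float)
    signs = r > cutoff                      # classify each day as up / not up
    n1, n2 = signs.sum(), (~signs).sum()    # counts in each category
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])  # observed number of runs
    n = n1 + n2
    mean_runs = 2.0 * n1 * n2 / n + 1.0
    var_runs = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - mean_runs) / np.sqrt(var_runs)
    return z, 2.0 * norm.sf(abs(z))

# Example with simulated i.i.d. returns (stand-in for an index series)
rng = np.random.default_rng(0)
z, p = runs_test(rng.normal(0.0, 0.01, size=2500))
print(f"z = {z:.2f}, p = {p:.3f}")
```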

Relevance: 100.00%

Publisher:

Abstract:

The objective of this thesis is to examine the efficiency of the Chinese stock markets and the validity of the random walk hypothesis, and to determine whether a day-of-the-week anomaly is present in these markets. The data consist of daily logarithmic returns of the Shanghai Stock Exchange A-share, B-share and composite indices and the Shenzhen composite index from 21 February 1992 to 30 December 2005, and of the Shenzhen Stock Exchange A-share and B-share indices from 5 October 1992 to 30 December 2005. Four statistical methods are employed: an autocorrelation test, the non-parametric runs test, the variance ratio test and the augmented Dickey-Fuller unit root test. The day-of-the-week anomaly is examined using ordinary least squares (OLS) regression. The tests are run on the full sample as well as on three separate sub-periods. The empirical results support earlier findings on the inefficiency of the Chinese stock markets. With the exception of the unit root test results, the autocorrelation, runs and variance ratio tests reject the random walk hypothesis for both Chinese exchanges. The results show that, on both exchanges, the behaviour of the B-share indices has deviated considerably more from the random walk hypothesis than that of the A-share indices. In addition to the B-share markets, the efficiency of both Chinese stock markets overall also appeared to improve after the 2001 market boom. The results also show a day-of-the-week anomaly on the Shanghai Stock Exchange, but not on the Shenzhen Stock Exchange, over the full sample period.
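
A minimal sketch of the Lo-MacKinlay variance ratio test in its homoskedastic form (the abstract does not specify which variant was used); the lags q and the simulated returns are illustrative stand-ins for the Shanghai and Shenzhen index series.

```python
import numpy as np
from scipy.stats import norm

def variance_ratio_test(log_returns, q):
    """Lo-MacKinlay variance ratio test (simple homoskedastic form).

    VR(q) compares the variance of q-period returns with q times the
    variance of 1-period returns; VR(q) = 1 under a random walk."""
    r = np.asarray(log_returns, dtype=float)
    T = r.size
    mu = r.mean()
    var_1 = np.sum((r - mu) ** 2) / T
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period returns
    var_q = np.sum((rq - q * mu) ** 2) / (T * q)
    vr = var_q / var_1
    phi = 2.0 * (2.0 * q - 1.0) * (q - 1.0) / (3.0 * q * T)   # asymptotic variance
    z = (vr - 1.0) / np.sqrt(phi)
    return vr, z, 2.0 * norm.sf(abs(z))

# Example on simulated log returns
rng = np.random.default_rng(1)
for q in (2, 4, 8, 16):
    vr, z, p = variance_ratio_test(rng.normal(0, 0.015, 3000), q)
    print(f"q={q:2d}  VR={vr:.3f}  z={z:+.2f}  p={p:.3f}")
```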

Relevance: 100.00%

Publisher:

Abstract:

The objective of this study is to demonstrate the use of the weak form partial differential equation (PDE) method for finite-element (FE) modeling of a new constitutive relation without the need for user subroutine programming. Viscoelastic asphalt mixtures were modeled by the weak form PDE-based FE method as the examples in the paper. A solid-like generalized Maxwell model was used to represent the deformation mechanism of a viscoelastic material, the constitutive relations of which were derived and implemented in the weak form PDE module of Comsol Multiphysics, a commercial FE program. The weak form PDE modeling of viscoelasticity was verified by comparing Comsol and Abaqus simulations, which employed the same loading configurations and material property inputs in virtual laboratory test simulations. Both produced identical results in terms of axial and radial strain responses. The weak form PDE modeling of viscoelasticity was further validated by comparing the weak form PDE predictions with real laboratory test results for six types of asphalt mixtures with two air void contents and three aging periods. The viscoelastic material properties, such as the coefficients of a Prony series model for the relaxation modulus, were obtained by conversion from the master curves of dynamic modulus and phase angle. Strain responses of compressive creep tests at three temperatures and of cyclic load tests were predicted with the weak form PDE modeling and found to be comparable with the measurements from the real laboratory tests. It was demonstrated that the weak form PDE-based FE modeling can serve as an efficient way to implement new constitutive models and can free engineers from user subroutine programming.
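
As background to the Prony series mentioned above, a small sketch of the solid-like generalized Maxwell model: the relaxation modulus E(t) = E_inf + sum_i E_i exp(-t/tau_i) and the corresponding complex modulus, from which the dynamic modulus and phase angle follow. The coefficients below are hypothetical, not the paper's values.

```python
import numpy as np

def relaxation_modulus(t, E_inf, E_i, tau_i):
    """Prony-series relaxation modulus of a solid-like generalized Maxwell model:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.atleast_1d(t)[:, None]
    return E_inf + np.sum(E_i * np.exp(-t / tau_i), axis=1)

def complex_modulus(omega, E_inf, E_i, tau_i):
    """Complex modulus E*(i*omega) of the same model; |E*| is the dynamic
    modulus and angle(E*) the phase angle, the pair the Prony coefficients
    are converted from."""
    omega = np.atleast_1d(omega)[:, None]
    iwt = 1j * omega * tau_i
    return E_inf + np.sum(E_i * iwt / (1.0 + iwt), axis=1)

# Illustrative (hypothetical) Prony coefficients in MPa, relaxation times in s
E_inf, E_i = 50.0, np.array([4000.0, 2500.0, 900.0])
tau_i = np.array([0.01, 0.1, 1.0])

print(relaxation_modulus([0.0, 0.1, 1.0, 10.0], E_inf, E_i, tau_i))
Estar = complex_modulus(2 * np.pi * np.array([0.1, 1.0, 10.0]), E_inf, E_i, tau_i)
print(np.abs(Estar), np.degrees(np.angle(Estar)))
```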

Relevance: 100.00%

Publisher:

Abstract:

This Doctoral Thesis deals with the introduction of the Bernstein Partition of Unity into the Galerkin weak form to solve boundary value problems in the field of structural analysis. The family of Bernstein basis functions constitutes a spanning set of the space of polynomial functions that allows the construction of mesh-free numerical approximations: the shape functions, which are globally supported, are determined only by the selected approximation order and by the parametrization or mapping of the domain, with the nodal positions implicitly defined. The exposition of the formulation is preceded by a literature review which, starting from the Finite Element Method, covers the main techniques for solving Partial Differential Equations without a mesh, including the so-called meshless methods and the spectral methods. In this context, the Thesis subjects the Bernstein-Galerkin approximation to validation on classic one- and two-dimensional benchmarks of Structural Mechanics. Implementation aspects such as consistency, reproduction capability, the non-interpolating nature at the boundary, the h-p refinement strategy and the coupling with other numerical approximations are studied. An important part of the investigation is devoted to computational optimization strategies, mainly regarding the reduction of the CPU cost associated with the generation and handling of full matrices.
Finally, application to two reference cases of aeronautic structures is performed: the stress analysis in an anisotropic angle part and the evaluation of stress intensity factors of Fracture Mechanics by means of a coupled Bernstein Partition of Unity - finite element mesh model.
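
For reference, a short sketch of the Bernstein basis on the parametric domain [0, 1], whose global support and partition-of-unity property are what the thesis exploits in the Galerkin weak form; the order n = 4 below is arbitrary and not taken from the thesis.

```python
import numpy as np
from math import comb

def bernstein_basis(xi, n):
    """Bernstein basis of order n on [0, 1]:
    B_{i,n}(xi) = C(n, i) * xi**i * (1 - xi)**(n - i), i = 0..n.
    The functions are globally supported and sum to one (partition of unity),
    so they can serve as mesh-free shape functions in a Galerkin weak form."""
    xi = np.atleast_1d(xi)[:, None]
    i = np.arange(n + 1)
    c = np.array([comb(n, k) for k in i], dtype=float)
    return c * xi ** i * (1.0 - xi) ** (n - i)

def bernstein_basis_deriv(xi, n):
    """First derivatives, B'_{i,n} = n * (B_{i-1,n-1} - B_{i,n-1})."""
    B = bernstein_basis(xi, n - 1)
    lower = np.hstack([np.zeros((B.shape[0], 1)), B])   # B_{i-1,n-1}, with B_{-1} = 0
    upper = np.hstack([B, np.zeros((B.shape[0], 1))])   # B_{i,n-1},  with B_{n} = 0
    return n * (lower - upper)

xi = np.linspace(0.0, 1.0, 5)
print(np.allclose(bernstein_basis(xi, 4).sum(axis=1), 1.0))        # partition of unity
print(np.allclose(bernstein_basis_deriv(xi, 4).sum(axis=1), 0.0))  # derivatives sum to zero
```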

Relevance: 90.00%

Publisher:

Abstract:

The applicability of a meshfree approximation method, namely the EFG method, to the fully geometrically exact analysis of plates is investigated. Based on a unified nonlinear theory of plates, which allows for arbitrarily large rotations and displacements, a Galerkin approximation via MLS functions is established. A hybrid method of analysis is proposed, where the solution is obtained by the independent approximation of the generalized internal displacement fields and the generalized boundary tractions. A consistent linearization procedure is performed, resulting in a semi-definite generalized tangent stiffness matrix which, for hyperelastic materials and conservative loadings, is always symmetric (even for configurations far from the generalized equilibrium trajectory). Besides the total Lagrangian formulation, an updated version is also presented, which enables the treatment of rotations beyond the parameterization limit. An extension of the arc-length method that includes the generalized domain displacement fields, the generalized boundary tractions and the load parameter in the constraint equation of the hyper-ellipse is proposed to solve the resulting nonlinear problem. Extending the hybrid-displacement formulation, a multi-region decomposition is proposed to handle complex geometries. A criterion for the classification of the equilibrium's stability, based on the Bordered-Hessian matrix analysis, is suggested. Several numerical examples are presented, illustrating the effectiveness of the method. Unlike standard finite element methods (FEM), the resulting solutions are (arbitrarily) smooth generalized displacement and stress fields. (c) 2007 Elsevier Ltd. All rights reserved.
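
A minimal one-dimensional sketch of MLS shape functions of the kind used in EFG approximations, with a linear basis and a compactly supported quartic weight; the node layout and support radius are illustrative, not taken from the paper.

```python
import numpy as np

def mls_shape_functions(x, nodes, support_radius):
    """1D moving least squares (MLS) shape functions with linear basis p(x) = [1, x].
    Returns phi with phi[I] the influence of node I at the evaluation point x."""
    p = lambda s: np.array([1.0, s])
    r = np.abs(x - nodes) / support_radius
    # quartic spline weight with compact support
    w = np.where(r < 1.0, 1.0 - 6.0 * r**2 + 8.0 * r**3 - 3.0 * r**4, 0.0)
    A = np.zeros((2, 2))                    # moment matrix A(x)
    B = np.zeros((2, nodes.size))           # B(x), one column per node
    for I, xI in enumerate(nodes):
        pI = p(xI)
        A += w[I] * np.outer(pI, pI)
        B[:, I] = w[I] * pI
    return p(x) @ np.linalg.solve(A, B)     # phi(x) = p(x)^T A^{-1} B

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, support_radius=0.25)
print(phi.sum())    # partition of unity -> 1.0
print(phi @ nodes)  # linear reproduction -> 0.37
```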

Relevance: 80.00%

Publisher:

Abstract:

Following the approach developed for rods in Part 1 of this paper (Pimenta et al. in Comput. Mech. 42:715-732, 2008), this work presents a fully conserving algorithm for the integration of the equations of motion in nonlinear shell dynamics. We begin with a re-parameterization of the rotation field in terms of the so-called Rodrigues rotation vector, allowing for an extremely simple update of the rotational variables within the scheme. The weak form is constructed via non-orthogonal projection, the time-collocation of which ensures exact conservation of momentum and total energy in the absence of external forces. Appealing is the fact that general hyperelastic materials (and not only materials with quadratic potentials) are permitted in a totally consistent way. Spatial discretization is performed using the finite element method and the robust performance of the scheme is demonstrated by means of numerical examples.
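
A small numerical sketch of the Rodrigues parameterization, assuming the convention alpha = 2 tan(theta/2) e often used in this line of work (the paper itself should be consulted for its exact definitions); it shows the closed-form rotation tensor and the simple composition rule that makes the rotational update inexpensive.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_rodrigues(alpha):
    """Rotation tensor from the Rodrigues vector alpha = 2*tan(theta/2)*e
    (the convention assumed here): R = I + 4/(4 + a.a) * (A + A@A/2)."""
    A = skew(alpha)
    return np.eye(3) + 4.0 / (4.0 + alpha @ alpha) * (A + 0.5 * A @ A)

def compose_rodrigues(alpha_inc, alpha_old):
    """Rodrigues vector of R(alpha_inc) @ R(alpha_old): a closed-form update
    of the rotational variables, valid for total rotations below pi."""
    return (alpha_old + alpha_inc + 0.5 * np.cross(alpha_inc, alpha_old)) \
           / (1.0 - alpha_inc @ alpha_old / 4.0)

a1 = np.array([0.3, -0.1, 0.2])
a2 = np.array([-0.2, 0.4, 0.1])
R1, R2 = rotation_from_rodrigues(a1), rotation_from_rodrigues(a2)
print(np.allclose(R1.T @ R1, np.eye(3)))                          # orthogonality
print(np.allclose(R2 @ R1, rotation_from_rodrigues(compose_rodrigues(a2, a1))))
```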

Relevance: 80.00%

Publisher:

Abstract:

A fully conserving algorithm is developed in this paper for the integration of the equations of motion in nonlinear rod dynamics. The starting point is a re-parameterization of the rotation field in terms of the so-called Rodrigues rotation vector, which results in an extremely simple update of the rotational variables. The weak form is constructed with a non-orthogonal projection corresponding to the application of the virtual power theorem. Together with an appropriate time-collocation, it ensures exact conservation of momentum and total energy in the absence of external forces. Appealing is the fact that nonlinear hyperelastic materials (and not only materials with quadratic potentials) are permitted without any prejudice on the conservation properties. Spatial discretization is performed via the finite element method and the performance of the scheme is assessed by means of several numerical simulations.

Relevance: 80.00%

Publisher:

Abstract:

There is no single definition of a long-memory process. It is generally defined as a series whose correlogram decays slowly or whose spectrum is infinite at zero frequency. It is also said that a series with this property is characterised by long-range dependence and long non-periodic cycles, that this feature describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of a power-law decay of the autocovariance function. The growing international research interest in the topic is justified by the search for a better understanding of the dynamic nature of financial asset price time series. First, the lack of consistency among existing results calls for new studies and the use of several complementary methodologies. Second, the confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and asset pricing models, (3) optimal consumption/saving and portfolio decisions and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about identifying the general theoretical market model best suited to modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that may consequently produce abnormal returns. The research objective of the dissertation is twofold. On the one hand, it aims to provide additional knowledge for the long-memory debate by focusing on the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it aims to contribute to the refinement of the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes do not have independent and identically distributed (i.i.d.) increments. The empirical study indicates that long-maturity treasury bonds (OTs) can be used as an alternative in computing market returns, since their behaviour in sovereign debt markets reflects investors' confidence in the financial condition of the states and measures how they assess the respective economies based on the performance of their assets as a whole. Although the price diffusion model defined by geometric Brownian motion (gBm) is claimed to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are violated by the empirical data analysed. Therefore, in the search for evidence of long memory in the markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are used, under the fractional Brownian motion (fBm) approach, to estimate the Hurst exponent H for the complete data series and to compute the "local" Hurst exponent H_t over moving windows. In addition, statistical hypothesis tests are performed using the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing test (GPH).
In terms of a single conclusion from all methods on the nature of dependence in the stock market in general, the empirical results are inconclusive. That is, the degree of long memory, and thus any classification, depends on each particular market. Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are subject to greater predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies, it refutes the random walk hypothesis with i.i.d. increments, which is the basis of the EMH in its weak form. In view of this, contributions to the refinement of the CAPM are proposed in the form of a new fractal capital market line (FCML) and a new fractal security market line (FSML). The new proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long-horizon lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process described by gBm (where H = 0.5, the EMH is confirmed and the CAPM is adequate) and when they follow a process with statistical dependence described by fBm (where H differs from 0.5, the EMH is rejected and the CAPM is inadequate). The advantage of the FCML and the FSML is that the long-memory measure defined by H is the appropriate reference for expressing risk in models that can be applied to data series following i.i.d. processes as well as processes with nonlinear dependence. These formulations thus include the EMH as a possible special case.
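
A minimal sketch of classical rescaled-range (R/S) analysis for estimating the Hurst exponent H referred to above; the window sizes and the simulated series are illustrative, and the dissertation's complementary tests (M-R/S, GPH) are not reproduced here.

```python
import numpy as np

def hurst_rs(series, window_sizes=None):
    """Estimate the Hurst exponent H by classical rescaled-range (R/S) analysis.
    H ~ 0.5 for an uncorrelated (i.i.d.) series, H > 0.5 indicates persistence,
    H < 0.5 anti-persistence."""
    x = np.asarray(series, dtype=float)
    if window_sizes is None:
        window_sizes = np.unique(np.logspace(1, np.log10(x.size // 2), 20).astype(int))
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, x.size - n + 1, n):      # non-overlapping blocks of length n
            block = x[start:start + n]
            z = np.cumsum(block - block.mean())         # cumulative mean-adjusted deviations
            r = z.max() - z.min()                        # range
            s = block.std(ddof=1)                        # standard deviation
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    H, _ = np.polyfit(log_n, log_rs, 1)                  # slope of log(R/S) vs log(n)
    return H

rng = np.random.default_rng(2)
print(hurst_rs(rng.normal(size=4000)))   # close to 0.5 for i.i.d. returns
```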

Relevance: 80.00%

Publisher:

Abstract:

The objective of this article is to provide additional knowledge to the discussion of long-term memory, focusing on the behavior of the main Portuguese stock index. The first four moments are calculated using time windows of increasing size and sliding time windows of fixed size equal to 50 days, and they suggest that daily returns are non-ergodic and non-stationary. Since the series appears to be best described by a fractional Brownian motion approach, we use rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA). The findings indicate evidence of long-term memory in the form of persistence. This evidence of fractal structure suggests that the market is subject to greater predictability and contradicts the efficient market hypothesis in its weak form, which raises issues for the theoretical modeling of asset pricing. In addition, we carried out a more localized (in time) study to identify the evolution of the degree of long-term dependence over time, using 200-day and 400-day windows. The results show a switching feature in the index, from persistent to anti-persistent, quite evident from 2010 onwards.
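
A corresponding minimal sketch of detrended fluctuation analysis (DFA-1); window sizes, the detrending order and the simulated input are illustrative stand-ins for the index returns.

```python
import numpy as np

def dfa_exponent(series, window_sizes=None, order=1):
    """Detrended fluctuation analysis (DFA) of a return series.
    The scaling exponent plays the same role as the Hurst exponent:
    ~0.5 for i.i.d. returns, >0.5 persistence, <0.5 anti-persistence."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())                      # integrated (profile) series
    if window_sizes is None:
        window_sizes = np.unique(np.logspace(1, np.log10(x.size // 4), 20).astype(int))
    log_n, log_f = [], []
    for n in window_sizes:
        n_blocks = profile.size // n
        if n_blocks < 2:
            continue
        t = np.arange(n)
        sq_residuals = []
        for b in range(n_blocks):
            seg = profile[b * n:(b + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial trend
            sq_residuals.append(np.mean((seg - trend) ** 2))
        log_n.append(np.log(n))
        log_f.append(0.5 * np.log(np.mean(sq_residuals)))      # log of fluctuation F(n)
    alpha, _ = np.polyfit(log_n, log_f, 1)
    return alpha

rng = np.random.default_rng(3)
print(dfa_exponent(rng.normal(size=4000)))  # ~0.5 for an uncorrelated series
```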

Relevance: 80.00%

Publisher:

Abstract:

This article aims to contribute to the discussion of long-term dependence, focusing on the behavior of the main Belgian stock index. Non-parametric analyses of the general characteristics of the temporal frequency show that daily returns are non-ergodic and non-stationary. Therefore, we use rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA), under the fractional Brownian motion approach, and find slight evidence of long-term dependence. These results refute the random walk hypothesis with i.i.d. increments, which is the basis of the EMH in its weak form, and call into question some theoretical models of asset pricing. A further, more localized study, identifying the evolution of the degree of dependence over time windows, showed that the index has become less persistent since 2010. This may indicate a maturing market following the extension of the effects of the current financial crisis.

Relevance: 80.00%

Publisher:

Abstract:

We introduce a notion of upper semicontinuity, weak upper semicontinuity, and show that it, together with a weak form of payoff security, is enough to guarantee the existence of Nash equilibria in compact, quasiconcave normal form games. We show that our result generalizes the pure strategy existence theorem of Dasgupta and Maskin (1986) and that it is neither implied nor does it imply the existence theorems of Baye, Tian, and Zhou (1993) and Reny (1999). Furthermore, we show that an equilibrium may fail to exist when, while maintaining weak payoff security, weak upper semicontinuity is weakened to reciprocal upper semicontinuity.

Relevance: 80.00%

Publisher:

Abstract:

The objective of the thesis was to analyse the performance and business-cycle dependence of investment strategies based on financial ratios and on return history, using a sample of companies listed on the HEX (Helsinki Stock Exchange). The strategies studied used valuation multiples, beta and past returns as the formation criteria of the quintile portfolios employed as the analysis tool. The study seeks to contribute by examining, for the first time, the effect of the business cycle on the performance of these strategies with a data set covering several cycles (at its longest, the years 1991-2002). Business-cycle turning points were identified with the purchasing managers' index (PMI), which has been found to work well, for example, as a leading indicator of stock market developments. The results showed that the P/E, P/B, EV/EBIT, EV/EBITDA, beta and momentum anomalies were also present in the Finnish stock market in 1991-2002. Evidence was also found for the return-history-based momentum strategy and the winner-loser strategy; of these, especially the latter was strongly cycle dependent. According to these results, the Finnish stock market was not even weak-form efficient over the study period, i.e. it would have been possible to beat the average market return with strategies based solely on past price history.
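
A small sketch of the quintile-portfolio construction described above, using hypothetical P/E ratios and subsequent returns; the thesis's actual formation rules, holding periods and PMI-based cycle split are not reproduced here.

```python
import numpy as np
import pandas as pd

def quintile_portfolio_returns(scores, future_returns):
    """Sort stocks into quintile portfolios on a formation criterion (e.g. P/E,
    P/B, EV/EBIT, beta or past return) and compute equal-weighted holding-period
    returns per quintile."""
    df = pd.DataFrame({"score": scores, "ret": future_returns}).dropna()
    df["quintile"] = pd.qcut(df["score"], 5, labels=[1, 2, 3, 4, 5])
    by_q = df.groupby("quintile", observed=True)["ret"].mean()
    spread = by_q.loc[1] - by_q.loc[5]        # low-minus-high hedge portfolio return
    return by_q, spread

# Illustrative data: hypothetical P/E ratios and subsequent 12-month returns
rng = np.random.default_rng(4)
pe = rng.lognormal(mean=2.5, sigma=0.5, size=200)
ret = rng.normal(0.08, 0.25, size=200) - 0.02 * np.log(pe)   # built-in value effect
quintiles, spread = quintile_portfolio_returns(pe, ret)
print(quintiles)
print(f"Q1-Q5 spread: {spread:.3f}")
```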

Relevance: 80.00%

Publisher:

Abstract:

The thesis analysed the differences in returns between a total of 73 technical analysis strategy variants and a buy-and-hold strategy computed over the same period, using data consisting of the daily closing prices of 43 companies quoted on the main list of the Helsinki Stock Exchange from 1991 to 1998. The empirical tests were carried out with Pascal programs written for the thesis, which simulated daily trading according to the different technical analysis methods. The results showed that the technical analysis methods would not have reached the return level of the buy-and-hold strategy over the study period, as only one of the strategies exceeded it. The negative correlation between the number of trades generated by each technical analysis method and the profitability of the strategy was very strong: the greater the signal sensitivity, the weaker the result of the strategy. The results thus supported the weak-form market efficiency hypothesis, according to which past price information cannot be exploited for monetary gain.
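
As one hypothetical example of the kind of technical strategy variant tested, a short moving-average crossover rule compared with buy-and-hold; the thesis's original tests were written in Pascal, so this Python sketch, its parameters and the simulated price path are illustrative only.

```python
import numpy as np
import pandas as pd

def ma_crossover_vs_buy_and_hold(close, short=5, long=50):
    """Compare a simple moving-average crossover rule with buy-and-hold.
    The position is long when the short MA is above the long MA, otherwise in cash."""
    px = pd.Series(close, dtype=float)
    ret = px.pct_change().fillna(0.0)
    signal = (px.rolling(short).mean() > px.rolling(long).mean()).astype(float)
    position = signal.shift(1).fillna(0.0)              # act on the signal with a one-day lag
    strat_ret = position * ret
    trades = int(position.diff().abs().sum())            # number of position changes
    to_total = lambda r: float((1.0 + r).prod() - 1.0)
    return to_total(strat_ret), to_total(ret), trades

# Illustrative random-walk price path (stand-in for a daily closing-price series)
rng = np.random.default_rng(5)
prices = 10.0 * np.exp(np.cumsum(rng.normal(0.0003, 0.02, size=2000)))
strat, bh, n_trades = ma_crossover_vs_buy_and_hold(prices)
print(f"strategy {strat:+.2%}  buy-and-hold {bh:+.2%}  trades {n_trades}")
```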

Relevance: 80.00%

Publisher:

Abstract:

Nanofiltration performance was studied with effluents from the pulp and paper industry and with model substances. The effect of filtration conditions and membrane properties on nanofiltration flux, retention, and fouling was investigated. Generally, the aim was to determine the parameters that influence nanofiltration efficiency and study how to carry out nanofiltration without fouling by controlling these parameters. The retentions of the nanofiltration membranes studied were considerably higher than those of tight ultrafiltration membranes, and the permeate fluxes obtained were approximately the same as those of tight ultrafiltration membranes. Generally, about 80% retentions of total carbon and conductivity were obtained during the nanofiltration experiments. Depending on the membrane and the filtration conditions, the retentions of monovalent ions (chloride) were between 80 and 95% in the nanofiltrations. An increase in pH improved retentions considerably and also the flux to some degree. An increase in pressure improved retention, whereas an increase in temperature decreased retention if the membrane retained the solute by the solution diffusion mechanism. In this study, more open membranes fouled more than tighter membranes due to higher concentration polarization and plugging of the membrane material. More irreversible fouling was measured for hydrophobic membranes. Electrostatic repulsion between the membrane and the components in the solution reduced fouling but did not completely prevent it with the hydrophobic membranes. Nanofiltration could be carried out without fouling, at least with the laboratory scale apparatus used here when the flux was below the critical flux. Model substances had a strong form of the critical flux, but the effluents had only a weak form of the critical flux. With the effluents, some fouling always occurred immediately when the filtration was started. However, if the flux was below the critical flux, further fouling was not observed. The flow velocity and pH were probably the most important parameters, along with the membrane properties, that influenced the critical flux. Precleaning of the membranes had only a small effect on the critical flux and retentions, but it improved the permeability of the membranes significantly.
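
For reference, retention figures such as those quoted above are conventionally reported as observed retention; a standard definition (not specific to this thesis) is:

```latex
% Observed retention, with c_p the permeate concentration and c_f the feed
% concentration (e.g. total carbon or conductivity):
\[
  R_{\mathrm{obs}} = \left( 1 - \frac{c_p}{c_f} \right) \times 100\,\%
\]
```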

Relevance: 80.00%

Publisher:

Abstract:

In many industries, such as petroleum production and the petrochemical, metal, food and cosmetics industries, wastewaters containing an emulsion of oil in water are often produced. The emulsions consist of water (up to 90%), oils (mineral, animal, vegetable and synthetic), surfactants and other contaminants. In view of its toxic nature and its deleterious effects on the surrounding environment (soil, water), such wastewater needs to be treated before release into natural waterways. Membrane-based processes have successfully been applied in industrial applications and are considered possible candidates for the treatment of oily wastewaters. Easy operation, lower cost and, in some cases, the ability to reduce contaminants below existing pollution limits are the main advantages of these systems. The main drawback of membranes is flux decline due to fouling and concentration polarisation. The complexity of oil-containing systems demands complementary studies on issues related to the mitigation of fouling and concentration polarisation in membrane-based ultrafiltration. In this thesis the effect of different operating conditions (factors) on the ultrafiltration of oily water is studied. Important factors are normally correlated and, therefore, their effect should be studied simultaneously. This work uses a novel approach to study different operating conditions, such as pressure, flow velocity and temperature, and solution properties, such as oil concentration (cutting oil, diesel, kerosene), pH and salt concentration (CaCl2 and NaCl), in the ultrafiltration of oily water, simultaneously and in a systematic way using an experimental design approach. A hypothesis is developed to describe the interaction between the oil drops, the salt and the membrane surface. The optimum conditions for ultrafiltration and the contribution of each factor in the ultrafiltration of oily water are evaluated. It was found that the effect of the various factors studied on permeate flux depended strongly on the type of oil, the type of membrane and the amount of salts. The thesis demonstrates that a system containing oil is very complex, and that fouling and flux decline can be observed even at very low pressures. This means that only the weak form of the critical flux exists for such systems. The cleaning of the fouled membranes and the influence of different parameters (flow velocity, temperature, time, pressure and chemical concentration (SDS, NaOH)) were evaluated in this study. It was observed that fouling, and consequently cleaning, behaved differently for the membranes studied. Of the membranes studied, the one with the lowest propensity for fouling and the most easily cleaned was the regenerated cellulose membrane (C100H). In order to get more information about the interaction between the membrane and the components of the emulsion, a streaming potential study was performed on the membrane. The experiments were carried out at different pH values and oil concentrations. It was seen that oily water changed the surface charge of the membrane significantly. The surface charge and the streaming potential during different stages of filtration were measured and analysed, which constitutes a new way of characterising oil fouling in this thesis. The surface charge varied in different stages of filtration. It was found that the surface charge of a cleaned membrane was not the same as initially; however, the permeability was equal to that of a virgin membrane.
The effect of filtration mode was studied by performing the filtration in both cross-flow and dead-end mode. The effect of salt on performance was considered in both studies. It was found that salt decreased the permeate flux even at low concentrations. To test the effect of a change in hydrophilicity, the commercial membranes used in this thesis were modified by grafting PNIPAAm onto their surfaces. A new technique (corona treatment) was used for this modification. The effect of the modification on permeate flux and retention was evaluated. The modified membranes changed their pore size around 33 °C, resulting in different retention and permeability. The results obtained in this thesis can be applied to optimise the operation of a membrane plant under normal or shock conditions, or to modify the process so that it becomes more efficient or effective.
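
A minimal sketch of the experimental-design idea mentioned above: a two-level full factorial plan over hypothetical operating factors, so that main effects and interactions can be estimated from one set of runs. The factor names and levels are illustrative and not taken from the thesis.

```python
from itertools import product
import pandas as pd

# Two-level full factorial design over hypothetical ultrafiltration factors;
# each row of the resulting table is one experimental run.
factors = {
    "pressure_bar": (1.0, 3.0),
    "flow_velocity_m_s": (0.5, 2.0),
    "temperature_C": (25, 40),
    "oil_conc_pct": (0.1, 1.0),
    "salt_conc_mM": (0, 10),
}
design = pd.DataFrame(list(product(*factors.values())), columns=list(factors.keys()))
print(f"{len(design)} runs")   # 2**5 = 32 runs
print(design.head())
```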