792 results for Time-varying Risk


Relevance:

80.00%

Publisher:

Abstract:

Toxicological information on nanomaterials (NMs) is of major importance for safety assessment, since they are already used in many consumer products and promise cutting-edge applications in the future. While the number of different NMs increases exponentially, new strategies for risk assessment are needed to cope with the safety issues while keeping pace with innovation. However, recent studies have suggested that even subtle differences in the physicochemical properties of closely related NMs may define different nano-bio interactions, thereby determining their toxic potential. Further research in this field is necessary to allow straightforward grouping strategies leading to time-effective risk assessment and to enable the safe use of emerging NMs. In this presentation, a case study of the in vitro toxicity testing of a set of multi-walled carbon nanotubes (MWCNTs) in two human cell lines from the respiratory tract will be described. These MWCNTs have been previously characterized in detail and differ in thickness, length, aspect ratio and morphology. This comprehensive toxicological investigation, undertaken in parallel with physicochemical characterization in the cellular environment, showed that the same NM did not display a consistent effect in different cell types and that, within the same class of NM, different toxic effects could be observed. The correlation of the cytotoxic and genotoxic effects characterized in the two cell lines with their physicochemical properties will be presented, and the relevance of considering the NMs' properties in the biological context will be discussed. Overall, this case study suggests that the nanotoxicity of closely related MWCNTs depends not only on their primary physicochemical properties, or combinations of these properties, but also on the cellular system and its context. Challenges posed to toxicologists, risk assessors and regulators when addressing the safety assessment of NMs will be highlighted.

Relevance:

80.00%

Publisher:

Abstract:

The social life of an alcohol-dependent individual is, most of the time, a risk factor for continuing or increasing excessive alcohol consumption. One of the alcoholic's great failures is the inability to fulfil an expected social role adequately, which results in harm to himself and to others. An individual who abuses alcohol soon loses his reputation among colleagues, friends and relatives, which makes him less tolerant of frustration and increases consumption. Lying then becomes his ally, because through it he reduces the anxiety caused by his failure in social life, a failure that others insist on making plain. Identifying the social problems from which the individual suffers is essential for planning the best intervention strategy, whether it be prevention, psychotherapy or rehabilitation. The treatment programmes usually proposed for addressing alcohol-related problems focus almost exclusively on the addictive behaviour, as a guideline for the intervention and as an objective indicator of the success of the programme itself. In most cases, however, the addictive behaviour is simply the most visible manifestation of a deep maladjustment between the individual, himself and his environment. The aim of the recovery process is therefore to offer the individual the possibility of recovering belief in the word, or of learning its value as a fundamental means of communication between people. Besides conveying this value to alcohol-dependent individuals, it is also important to instil in them the positive value of living within limits, since they are experts at trying to sabotage the practitioners' actions and at discovering their weaknesses in order to use them to their own advantage. It is therefore important that they learn the value of rules and the usefulness, for everyone, of complying with them (Kalina, 2001). Thus, social skills training is an important part of the treatment of individuals with alcohol and drug abuse problems. It was in this sense that we set out to identify the level of social skills in alcohol-dependent people. The study we developed is exploratory/descriptive, and we chose a quantitative methodology. The sample consisted of 229 male alcohol-dependent individuals, in national reference institutions in the field of alcohology. The data collection instrument comprised a sociodemographic questionnaire, a Social Skills Scale and a Personal Self-Assessment Scale. We found that the sample of alcohol-dependent individuals presents a mean score on the Social Skills Scale of 89.96, equivalent to the 55th percentile in Gismero's (2002) normative table. This value is clearly lower than that obtained by any of the other samples analysed, whether in the preliminary study or in the comparative study, composed of individuals from the general population, who reached the 70th percentile.

Relevance:

80.00%

Publisher:

Abstract:

Master's dissertation — Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.

Relevance:

80.00%

Publisher:

Abstract:

Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals, for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond one month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in the estimation of the coefficients, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the one-month horizon, and outperforms alternative methods, including Bayesian methods, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supply and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive strong support. 
The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
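
As a rough illustration of the estimation machinery mentioned above, the sketch below implements a random-walk Metropolis-Hastings sampler for the coefficients of a simple linear predictive regression with Gaussian errors. It is only a minimal stand-in, under illustrative assumptions, for the MIDAS regressions estimated in Chapter 5: the model, prior, step size and simulated data are placeholders, not the thesis's actual specification.

```python
import numpy as np

def log_posterior(beta, y, X, sigma2=1.0, prior_var=10.0):
    """Gaussian likelihood plus a zero-mean Gaussian prior on the coefficients."""
    resid = y - X @ beta
    log_lik = -0.5 * np.sum(resid ** 2) / sigma2
    log_prior = -0.5 * np.sum(beta ** 2) / prior_var
    return log_lik + log_prior

def rw_metropolis(y, X, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings over the regression coefficients."""
    rng = np.random.default_rng(seed)
    k = X.shape[1]
    beta = np.zeros(k)
    current_lp = log_posterior(beta, y, X)
    draws = np.empty((n_iter, k))
    for i in range(n_iter):
        proposal = beta + step * rng.standard_normal(k)       # symmetric random-walk proposal
        proposal_lp = log_posterior(proposal, y, X)
        if np.log(rng.uniform()) < proposal_lp - current_lp:  # accept with prob min(1, ratio)
            beta, current_lp = proposal, proposal_lp
        draws[i] = beta
    return draws

# Toy usage: monthly returns regressed on a lagged predictor (simulated data).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([0.1, 0.3]) + rng.standard_normal(200)
posterior_draws = rw_metropolis(y, X)
print(posterior_draws[1000:].mean(axis=0))  # posterior means after burn-in
```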

Relevance:

80.00%

Publisher:

Abstract:

The under-reporting of cases of infectious diseases is a substantial impediment to the control and management of infectious diseases in both epidemic and endemic contexts. Information about infectious disease dynamics can be recovered from sequence data using time-varying coalescent approaches, and phylodynamic models have been developed in order to reconstruct changes in the number of infected hosts through time. In this study I have demonstrated the general concordance between empirically observed epidemiological incidence data and viral demography inferred through analysis of foot-and-mouth disease virus VP1 coding sequences belonging to the CATHAY topotype over large temporal and spatial scales. However, a more precise and robust relationship between the effective population size (

Relevance:

80.00%

Publisher:

Abstract:

Classical regression analysis can be used to model time series. However, the assumption that model parameters are constant over time is not always appropriate for the data. In phytoplankton ecology, the relevance of time-varying parameter values has been shown using a dynamic linear regression model (DLRM). DLRMs, belonging to the class of Bayesian dynamic models, assume the existence of a non-observable time series of model parameters, which are estimated on-line, i.e. after each observation. The aim of this paper was to show how DLRM results could be used to explain the variation of a time series of phytoplankton abundance. We applied DLRM to daily concentrations of Dinophysis cf. acuminata, determined in Antifer harbour (French coast of the English Channel), along with physical and chemical covariates (e.g. wind velocity, nutrient concentrations). A single model was built using 1989 and 1990 data, and then applied separately to each year. Equivalent static regression models were investigated for the purpose of comparison. Results showed that most of the variability in Dinophysis cf. acuminata concentration was explained by the configuration of the sampling site, the wind regime and the tidal residual flow. Moreover, the relationships of these factors with the concentration of the microalga varied with time, a fact that could not be detected with static regression. Application of dynamic models to phytoplankton time series, especially in a monitoring context, is discussed.
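
To make the on-line estimation idea concrete, here is a minimal sketch of a dynamic linear regression in which the coefficients follow a random walk and are updated by Kalman-filter recursions after each observation. The noise variances, prior and simulated covariates are illustrative assumptions, not the values used in the Dinophysis study.

```python
import numpy as np

def dynamic_linear_regression(y, X, obs_var=1.0, state_var=0.01):
    """On-line (Kalman-filter) estimation of time-varying regression coefficients.

    State equation:       beta_t = beta_{t-1} + w_t,  w_t ~ N(0, state_var * I)
    Observation equation: y_t    = x_t' beta_t + v_t, v_t ~ N(0, obs_var)
    """
    n, k = X.shape
    beta = np.zeros(k)            # prior mean of the coefficients
    P = np.eye(k) * 1e3           # diffuse prior covariance
    betas = np.empty((n, k))
    for t in range(n):
        x = X[t]
        # Predict: coefficients drift as a random walk.
        P = P + state_var * np.eye(k)
        # Update with the new observation.
        f = x @ beta                   # one-step-ahead forecast
        q = x @ P @ x + obs_var        # forecast variance
        gain = P @ x / q               # Kalman gain
        beta = beta + gain * (y[t] - f)
        P = P - np.outer(gain, x @ P)
        betas[t] = beta
    return betas

# Toy usage: one covariate whose effect drifts over time.
rng = np.random.default_rng(0)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta = np.column_stack([np.full(n, 1.0), np.linspace(0.0, 2.0, n)])
y = np.sum(X * true_beta, axis=1) + 0.5 * rng.standard_normal(n)
estimates = dynamic_linear_regression(y, X, obs_var=0.25, state_var=0.01)
print(estimates[-1])  # final filtered coefficients
```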

Relevance:

80.00%

Publisher:

Abstract:

This thesis consists of three articles on optimal fiscal and monetary policy. In the first article, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labour markets, money, and distortionary labour income taxes. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labour markets, even though price changes are costly and labour taxation varies over time. The quantitative results clearly show that the planner relies more heavily on inflation, not on taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a very clear trade-off between the optimal inflation rate and its volatility on the one hand, and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and inflation volatility, and the lower the optimal income tax rate and income tax volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility rise remarkably, above 58% and 10% respectively, and the optimal income tax rate and its volatility decline dramatically. These results are of great importance given that, in frictional labour market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central objective of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with roughly 97 percent lower volatility, consistent with the literature. In the second article, I show how the quantitative results imply that workers' bargaining power and the welfare costs of monetary rules are negatively related: the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in striking contrast to the literature, rules that respond to output and to labour market tightness entail considerably lower welfare costs than the inflation-targeting rule. This is in particular the case for the rule that responds to labour market tightness. The welfare costs also fall remarkably as the size of the output coefficient in the monetary rules increases. My results indicate that, by raising workers' bargaining power to the Hosios level or above, the welfare costs of the three monetary rules decrease significantly, and responding to output or to labour market tightness no longer entails lower welfare costs than the inflation-targeting rule, which is in line with the existing literature. 
In the third article, I first show that the Friedman rule, in a monetary model with a cash-in-advance constraint on firms, is not optimal when the government finances its spending with distortionary consumption taxes. I then argue that, in the presence of these distortionary taxes, the Friedman rule is optimal if we assume a model with raw and effective labour in which only raw labour is subject to the cash-in-advance constraint, and the utility function is homothetic in the two types of labour and separable in consumption. When the production function exhibits constant returns to scale, unlike the cash-credit goods model in which the prices of the two goods are the same, the Friedman rule is optimal even when the wage rates differ. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.

Relevance:

80.00%

Publisher:

Abstract:

Composite resins have been subjected to structural modifications aiming at improved optical and mechanical properties. The present study consisted of an in vitro evaluation of the staining behavior of two nanohybrid resins (NH1 and NH2), a nanoparticulated resin (NP) and a microhybrid resin (MH). Samples of these materials were prepared and immersed in commonly ingested drinks, i.e., coffee, red wine and acai berry, for periods of time varying from 1 to 60 days. Cylindrical samples of each resin were shaped using a metallic die and polymerized for 30 s on both the bottom and the top of each disk. All samples were polished and immersed in the staining solutions. After 24 hours, three samples of each resin immersed in each solution were removed and placed in a spectrophotometer for analysis. To that end, the samples were previously diluted in 50% HCl. Tukey tests were carried out in the statistical analysis of the results. The results revealed that there was a clear difference in the staining behavior of each material. The nanoparticulated resin did not show better color stability than the microhybrid resin. Moreover, all resins stained with time. The degree of staining decreased in the sequence nanoparticulated, microhybrid, nanohybrid NH2 and NH1. Wine was the most aggressive drink, followed by coffee and acai berry. SEM and image analysis revealed significant porosity on the surface of the MH resin and relatively large pores on an NP sample. The NH2 resin was characterized by a homogeneous dispersion of particles and limited porosity. Finally, the NH1 resin depicted the lowest porosity level. The results revealed that staining is likely related to the concentration of inorganic particles and to surface porosity.
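
As a hedged illustration of the kind of statistical comparison described above (not the authors' actual analysis script), the snippet below runs Tukey's HSD pairwise comparisons of colour-change values across resin groups using statsmodels; the resin labels are taken from the abstract, but the measurements are made-up placeholders.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical colour-change measurements for three samples per resin group.
rng = np.random.default_rng(42)
values = np.concatenate([
    rng.normal(8.0, 0.5, 3),   # NP  (nanoparticulated)
    rng.normal(6.5, 0.5, 3),   # MH  (microhybrid)
    rng.normal(5.0, 0.5, 3),   # NH2 (nanohybrid)
    rng.normal(4.0, 0.5, 3),   # NH1 (nanohybrid)
])
groups = ["NP"] * 3 + ["MH"] * 3 + ["NH2"] * 3 + ["NH1"] * 3

# Tukey HSD pairwise comparisons at the 5% significance level.
result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
print(result.summary())
```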

Relevance:

80.00%

Publisher:

Abstract:

When a task must be executed in a remote or dangerous environment, teleoperation systems may be employed to extend the influence of the human operator. In the case of manipulation tasks, haptic feedback of the forces experienced by the remote (slave) system is often highly useful in improving an operator's ability to perform effectively. In many of these cases (especially teleoperation over the internet and ground-to-space teleoperation), substantial communication latency exists in the control loop and has the strong tendency to cause instability of the system. The first viable solution to this problem in the literature was based on a scattering/wave transformation from transmission line theory. This wave transformation requires the designer to select a wave impedance parameter appropriate to the teleoperation system. It is widely recognized that a small value of wave impedance is well suited to free motion and a large value is preferable for contact tasks. Beyond this basic observation, however, very little guidance exists in the literature regarding the selection of an appropriate value. Moreover, prior research on impedance selection generally fails to account for the fact that in any realistic contact task there will simultaneously exist contact considerations (perpendicular to the surface of contact) and quasi-free-motion considerations (parallel to the surface of contact). The primary contribution of the present work is to introduce an approximate linearized optimum for the choice of wave impedance and to apply this quasi-optimal choice to the Cartesian reality of such a contact task, in which it cannot be expected that a given joint will be either perfectly normal to or perfectly parallel to the motion constraint. The proposed scheme selects a wave impedance matrix that is appropriate to the conditions encountered by the manipulator. This choice may be implemented as a static wave impedance value or as a time-varying choice updated according to the instantaneous conditions encountered. A Lyapunov-like analysis is presented demonstrating that time variation in wave impedance will not violate the passivity of the system. Experimental trials, both in simulation and on a haptic feedback device, are presented validating the technique. Consideration is also given to the case of an uncertain environment, in which an a priori impedance choice may not be possible.
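
For orientation, the scattering/wave transformation referred to above maps power variables (force and velocity) into wave variables whose transmission remains passive under constant delay. The sketch below encodes and decodes wave variables for a single scalar channel with wave impedance b; it is the textbook scalar form of the transformation, not the time-varying impedance-matrix scheme proposed in this work.

```python
import numpy as np

def encode_wave(force, velocity, b):
    """Master side: form the forward-travelling wave u from force and velocity."""
    return (force + b * velocity) / np.sqrt(2.0 * b)

def decode_wave(u_received, force, b):
    """Slave side: recover the commanded velocity from the received wave and local force."""
    return (np.sqrt(2.0 * b) * u_received - force) / b

# A small impedance suits free motion; a large one suits contact.
for b in (1.0, 100.0):
    u = encode_wave(force=5.0, velocity=0.2, b=b)
    v_cmd = decode_wave(u, force=5.0, b=b)   # with no delay, the original 0.2 m/s is recovered
    print(f"b={b:6.1f}  wave={u:7.3f}  decoded velocity={v_cmd:6.3f}")
```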

Relevance:

80.00%

Publisher:

Abstract:

Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
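
The core of the transformed-stationary idea can be sketched in a few lines: detrend and rescale the series with running-mean and running-standard-deviation estimates, fit a stationary GEV to annual maxima of the normalized signal, and map the return level back through the time-varying transformation. The window length, the simulated series and the use of scipy's genextreme below are illustrative assumptions, not the tsEva implementation.

```python
import numpy as np
import pandas as pd
from scipy.stats import genextreme

def transformed_stationary_gev(series, window="365D", return_period=100):
    """Normalize with running mean/std, fit a stationary GEV to annual maxima,
    and map a return level back through the time-varying transformation."""
    mu_t = series.rolling(window, min_periods=30).mean()      # slow trend (low-pass)
    sigma_t = series.rolling(window, min_periods=30).std()    # slow variability
    stationary = ((series - mu_t) / sigma_t).dropna()         # (i) stationary signal

    annual_max = stationary.groupby(stationary.index.year).max()
    shape, loc, scale = genextreme.fit(annual_max)             # stationary GEV, constant shape
    rl_stationary = genextreme.ppf(1 - 1 / return_period, shape, loc, scale)

    # (ii) reverse transform: the return level inherits the slow variability of mu_t, sigma_t
    return mu_t + sigma_t * rl_stationary

# Toy usage: a noisy daily series with a slow upward trend (e.g. significant wave height).
idx = pd.date_range("1980-01-01", "2019-12-31", freq="D")
rng = np.random.default_rng(0)
x = pd.Series(1.0 + 0.00005 * np.arange(len(idx)) + rng.gumbel(0.0, 0.3, len(idx)),
              index=idx)
rl = transformed_stationary_gev(x).dropna()
print(rl.iloc[[0, -1]])  # non-stationary 100-year return level at start and end
```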

Relevance:

80.00%

Publisher:

Abstract:

Doctoral dissertation in Electronics and Computation, Faculdade de Ciências e Tecnologia, Univ. do Algarve, 2004

Relevance:

80.00%

Publisher:

Abstract:

To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms to explore efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, and the first transceiver insertion algorithmic framework for optical interconnect is proposed. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area for improving health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is hence extended to handle energy consumption scheduling for smart homes. The extended scheme schedules household appliances so as to minimize a customer's monetary expense under a time-varying pricing model.
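
As a minimal sketch of the kind of cost-aware scheduling described for the smart-home extension (a greedy stand-in under simple assumptions, not the dissertation's actual formulation), the snippet below places each appliance's run of contiguous hourly slots at the cheapest feasible start time given a time-varying price vector; the appliance names, durations and prices are hypothetical.

```python
def cheapest_start(prices, duration, earliest_start, latest_start):
    """Pick the start hour in [earliest_start, latest_start] with the lowest total price."""
    candidates = range(earliest_start, min(latest_start, len(prices) - duration) + 1)
    return min(candidates, key=lambda s: sum(prices[s:s + duration]))

# Hypothetical hourly prices (cents/kWh) for one day.
prices = [12, 11, 10, 9, 9, 10, 14, 18, 20, 19, 17, 16,
          15, 15, 16, 18, 21, 23, 22, 19, 16, 14, 13, 12]

# (appliance, duration in hours, earliest allowed start, latest allowed start)
jobs = [("dishwasher", 2, 8, 22), ("washing machine", 3, 6, 21), ("EV charger", 4, 0, 4)]

total_cost = 0
for name, duration, earliest, latest in jobs:
    start = cheapest_start(prices, duration, earliest, latest)
    cost = sum(prices[start:start + duration])
    total_cost += cost
    print(f"{name:15s} starts at {start:02d}:00, price total {cost}")
print("total cost:", total_cost)
```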

Relevance:

80.00%

Publisher:

Abstract:

This paper empirically investigates volatility transmission among stock and foreign exchange markets in seven major world economies during the period July 1988 to January 2015. To this end, we first perform a static and dynamic analysis to measure the total volatility connectedness in the entire period (the system-wide approach) using a framework recently proposed by Diebold and Yilmaz (2014). Second, we make use of a dynamic analysis to evaluate the net directional connectedness for each market. To gain further insights, we examine the time-varying behaviour of net pairwise directional connectedness during the financial turmoil periods experienced in the sample period. Our results suggest that slightly more than half of the total variance of the forecast errors is explained by shocks across markets rather than by idiosyncratic shocks. Furthermore, we find that volatility connectedness varies over time, with a surge during periods of increasing economic and financial instability.
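
As a rough sketch of the connectedness measures used (the paper follows Diebold and Yilmaz, 2014; the variance-decomposition matrix below is a made-up placeholder rather than one estimated from a VAR), the snippet computes the total and net directional connectedness indices from a row-normalized forecast-error variance decomposition.

```python
import numpy as np

def connectedness(theta):
    """Total and net directional connectedness from a row-normalized FEVD matrix.

    theta[i, j] = share of market i's forecast-error variance due to shocks in market j;
    each row sums to one."""
    theta = theta / theta.sum(axis=1, keepdims=True)   # ensure rows sum to one
    n = theta.shape[0]
    off_diag = theta - np.diag(np.diag(theta))
    total = 100.0 * off_diag.sum() / n                 # system-wide (total) connectedness
    from_others = 100.0 * off_diag.sum(axis=1)         # directional "from others" per market
    to_others = 100.0 * off_diag.sum(axis=0)           # directional "to others" per market
    return total, to_others - from_others              # net = "to" minus "from"

# Placeholder 3-market FEVD (e.g. stock, FX and a third market); rows sum to one.
theta = np.array([[0.70, 0.20, 0.10],
                  [0.25, 0.60, 0.15],
                  [0.10, 0.10, 0.80]])
total, net = connectedness(theta)
print(f"total connectedness: {total:.1f}%  net directional: {np.round(net, 1)}")
```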

Relevance:

80.00%

Publisher:

Abstract:

The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data. Originally, these observables were manually generated starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach is another way of handling these noises that was presented by Romano and Woan, and it simplified the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produced two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produced data that were free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produced the same outcome, that is, data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that the data analysis using them is equivalent to that using the traditional observables and (iii) to determine how this method adapts to real LISA, especially the flexing of the antenna. For testing the connection between the principal components and the TDI observables, a 10 x 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as that using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. 
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix and, from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction in the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing them to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up because of the summation in the Fourier transform.
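
To illustrate the principal-component idea in miniature (a toy stand-in under simple assumptions, not the actual LISA covariance structure), the sketch below builds the covariance matrix of readings that share a large common "laser-like" noise plus small independent noises, eigendecomposes it, and projects the data onto the eigenvectors; the eigendirections with small eigenvalues are the combinations from which the common noise cancels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 10_000, 4

# Toy data: every channel sees the same large common ("laser-like") noise
# plus a small independent ("photodetector-like") noise.
common = 100.0 * rng.standard_normal((n_samples, 1))
independent = rng.standard_normal((n_samples, n_channels))
data = common + independent

# Sample covariance of the raw readings and its eigendecomposition.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues

# Project the data onto the eigenvectors (the "principal components").
components = data @ eigvecs

print("eigenvalues:", np.round(eigvals, 1))
# Components with small eigenvalues have the common noise cancelled:
print(f"std of smallest-eigenvalue component: {components[:, 0].std():.2f}")
print(f"std of largest-eigenvalue component:  {components[:, -1].std():.1f}")
```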