939 results for Time-variable gravity


Relevance:

30.00%

Publisher:

Abstract:

Distributed Embedded Systems (DES) are nowadays widespread across many areas, from industrial automation, automobiles and aircraft to power distribution and environmental protection. These systems are essentially characterised by the distributed integration of autonomous yet cooperating embedded applications, exploiting potential advantages in terms of modularity, ease of maintenance, installation costs and fault tolerance, among others. However, the operational environment in which these systems are deployed may impose strict timing constraints, requiring the underlying communication system to deliver messages with timeliness guarantees. Moreover, DES exhibit growing complexity, since they integrate increasingly heterogeneous subsystems, both in the traffic they generate and in their timing requirements. In particular, these subsystems operate sporadically, i.e., they undergo operational changes in response to external stimuli. They also reconfigure dynamically as their requirements are updated, and they have to handle a variable number of requests from other subsystems. The level of resource utilisation can therefore vary, and static allocation policies become very inefficient. Consequently, a communication system is needed that can effectively support dynamic reconfiguration and adaptation. Switched Ethernet has been emerging as a solid solution for providing real-time communication in DES, as witnessed by the number of real-time protocols developed over the last decade. However, none of the existing protocols combines the features needed to use bandwidth efficiently while simultaneously meeting the requirements imposed by DES, namely the ability to control and police traffic robustly, together with support for dynamic reconfiguration and adaptation, without compromising real-time guarantees. This dissertation defends the thesis that, by enhancing Ethernet switches with reconfiguration and traffic-isolation mechanisms, it is possible to support critical real-time applications that adapt to the environment in which they operate. In particular, it is shown that component-based design techniques, supported by hierarchical scheduling of traffic servers, can be integrated into Ethernet switches to achieve the desired properties. In support of this, a solution is also provided for instantiating a reconfigurable hierarchy of traffic servers inside the switch, together with the analysis appropriate to the scheduling model. The latter provides an upper bound on the response time that packets may experience inside the traffic servers, based solely on knowledge of a given server and of the current hierarchy, i.e., without knowledge of the specifics of the traffic inside the other servers. Finally, within the HaRTES project, a prototype Ethernet switch was built, based on the "Flexible Time-Triggered" paradigm, which allows a flexible combination of a synchronous phase for traffic controlled by the switch and an asynchronous phase that implements the hierarchical server structure referred to above. Moreover, the various practical experiments carried out validated the desired properties and, consequently, the thesis underlying this dissertation.
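As an illustration of the hierarchical traffic-server idea described above (a minimal hypothetical sketch, not the HaRTES implementation; all class names, budgets and packet sizes are invented), each server can be modelled with a budget that is replenished every period, and a packet is only admitted while its server and every ancestor still have budget left, which is what provides traffic isolation between subsystems:

```python
# Minimal sketch of hierarchical traffic servers (hypothetical, for illustration only).
# Each server has a per-period budget in bytes; a packet is admitted only if the
# whole chain of ancestors still has enough budget in the current period.

class TrafficServer:
    def __init__(self, name, budget_bytes, parent=None):
        self.name = name
        self.budget = budget_bytes      # replenished at every period boundary
        self.remaining = budget_bytes
        self.parent = parent

    def replenish(self):
        self.remaining = self.budget

    def admit(self, packet_len):
        # Check budget along the path to the root before consuming any of it.
        chain, node = [], self
        while node is not None:
            if node.remaining < packet_len:
                return False            # would break isolation somewhere up the hierarchy
            chain.append(node)
            node = node.parent
        for node in chain:
            node.remaining -= packet_len
        return True


root = TrafficServer("switch-port", budget_bytes=12000)
video = TrafficServer("video", budget_bytes=8000, parent=root)
ctrl = TrafficServer("control", budget_bytes=2000, parent=root)

print(video.admit(1500))   # True: fits in both the 'video' and the port budget
print(ctrl.admit(3000))    # False: exceeds the 'control' server budget
```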

Relevance:

30.00%

Publisher:

Abstract:

Interest in using teams of mobile robots has been growing, due to their potential to cooperate for diverse purposes, such as rescue, de-mining, surveillance or even games such as robotic soccer. These applications require a real-time middleware and wireless communication protocol that can support an efficient and timely fusion of the perception data from different robots as well as the development of coordinated behaviours. Coordinating several autonomous robots towards achieving a common goal is currently a topic of high interest, which can be found in many application domains. Despite these different application domains, the technical problem of building an infrastructure to support the integration of distributed perception and subsequent coordinated action is similar. This problem becomes tougher with stronger system dynamics, e.g., when the robots move faster or interact with fast objects, leading to tighter real-time constraints. This thesis work addressed computing architectures and wireless communication protocols to support efficient information sharing and coordination strategies, taking into account the real-time nature of robot activities. The thesis makes two main claims. Firstly, we claim that despite the use of a wireless communication protocol that includes arbitration mechanisms, the self-organization of the team communications in a dynamic round that also accounts for variable team membership effectively reduces collisions within the team, independently of its current composition, significantly improving the quality of the communications. We validate this claim in terms of packet losses and communication latency. We show how such self-organization of the communications can be achieved in an efficient way with the Reconfigurable and Adaptive TDMA protocol. Secondly, we claim that the development of distributed perception, cooperation and coordinated action for teams of mobile robots can be simplified by using a shared memory middleware that replicates in each cooperating robot all necessary remote data, the Real-Time Database (RTDB) middleware. These remote data copies, which are updated in the background by the self-organizing communications protocol, are extended with age information automatically computed by the middleware and are locally accessible through fast primitives. We validate this claim by showing a parsimonious use of the communication medium, improved timing information with respect to the shared data, and the simplicity of use and effectiveness of the proposed middleware in several use cases, reinforced by a reasonable impact in the RoboCup Middle Size League.
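A minimal sketch of the self-organising round idea (hypothetical code, not the actual Reconfigurable and Adaptive TDMA protocol; the round period and robot identifiers are invented): the round is divided evenly among the robots currently known to be active, so each member derives its transmission offset from its position in the sorted membership list and the schedule adapts automatically as members join or leave.

```python
# Hypothetical sketch: dividing a fixed TDMA round among a variable team membership.
ROUND_MS = 100.0  # assumed round period in milliseconds

def slot_schedule(active_ids):
    """Return {robot_id: transmission offset in ms} for the current membership."""
    members = sorted(active_ids)
    slot = ROUND_MS / len(members)          # slots shrink or grow with team size
    return {rid: i * slot for i, rid in enumerate(members)}

print(slot_schedule({3, 1, 7}))     # three members -> offsets roughly 0, 33, 67 ms
print(slot_schedule({3, 1}))        # schedule reconfigures when a robot leaves
```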

Relevance:

30.00%

Publisher:

Abstract:

The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly detailed understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically in limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modelling of neural circuits found in the brain.
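For reference, the Hodgkin-Huxley membrane equation referred to above can be written in its standard textbook form as

\[ C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - \bar{g}_{L}\,(V - E_{L}) + I_{\mathrm{ext}}, \]

where V is the membrane potential, C_m the membrane capacitance, and the gating variables m, h and n each follow first-order kinetics of the form dx/dt = \alpha_x(V)(1 - x) - \beta_x(V)\,x.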

Relevance:

30.00%

Publisher:

Abstract:

The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly detailed understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically in limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modelling of neural circuits found in the brain. In recent years, much of the focus in neuron modelling has moved to the study of the connectivity of spiking neural networks. Spiking neural networks provide a vehicle to understand, from a computational perspective, aspects of the brain's neural circuitry. This understanding can then be used to tackle some of the historically intractable issues with artificial neurons, such as scalability and the lack of variable binding. Current knowledge of the feed-forward, lateral and recurrent connectivity of spiking neurons, and of the interplay between excitatory and inhibitory neurons, is beginning to shed light on these issues through an improved understanding of the temporal processing capabilities and synchronous behaviour of biological neurons. This research topic aims to amalgamate current research on tackling these phenomena.
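As an illustration of a model at the computationally efficient end of the range described above (a hypothetical sketch with generic textbook parameters, not taken from the text), the leaky integrate-and-fire neuron integrates its input, fires a spike when the membrane potential crosses a threshold, and then resets:

```python
# Illustrative leaky integrate-and-fire (LIF) neuron with Euler integration.
# Parameters are generic textbook values, not taken from the source text.
import numpy as np

def lif(current, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Simulate membrane potential for an input current array; return (trace, spike times)."""
    v = v_rest
    v_trace, spikes = [], []
    for i, i_ext in enumerate(current):
        v += dt * (-(v - v_rest) + r * i_ext) / tau   # leaky integration of the input
        if v >= v_thresh:                             # threshold crossing -> emit a spike
            spikes.append(i * dt)
            v = v_reset                               # reset after the spike
        v_trace.append(v)
    return np.array(v_trace), spikes

_, spike_times = lif(np.full(1000, 2.0))   # constant input (arbitrary units) for 100 ms
print(spike_times[:5])                     # times of the first few spikes
```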

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this study was to determine if mental toughness moderated the occurrence of social loafing in cycle time-trial performance. Method: Twenty-seven men (mean age = 17.7 years, SD = 0.6) completed the Sport Mental Toughness Questionnaire prior to completing a 1-min cycling trial under 2 conditions: once with individual performance identified, and once in a group with individual performance not identified. Using a median split of the mental toughness index, participants were divided into high and low mental toughness groups. Cycling distance was compared using a 2 (trial) × 2 (high–low mental toughness) analysis of variance. We hypothesized that mentally tough participants would perform equally well under both conditions (i.e., no indication of social loafing) compared with low mentally tough participants, who would perform less well when their individual performance was not identifiable (i.e., demonstrating the anticipated social loafing effect). Results: The high mental toughness group demonstrated consistent performance across both conditions, while the low mental toughness group reduced their effort in the non-individually identifiable team condition. Conclusions: The results confirm that (a) clearly identifying individual effort/performance is an important situational variable that may impact team performance and (b) higher perceived mental toughness has the ability to negate the tendency to loaf.
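As a hedged sketch of the key comparison in this 2 (trial) × 2 (mental toughness) mixed design (hypothetical data and variable names, not the study's data or analysis code): the trial-by-group interaction is equivalent to comparing the identified-minus-unidentified difference scores between the high and low mental toughness groups.

```python
# Hypothetical sketch of the interaction test in the 2 x 2 mixed design:
# compare difference scores (identified minus unidentified distance) between groups.
# Data, group sizes and column names are invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_high, n_low = 14, 13

# Cycling distance (m) per participant in each condition.
high_identified = rng.normal(860, 30, n_high)
high_unident    = rng.normal(858, 30, n_high)   # high MT: little change expected
low_identified  = rng.normal(855, 30, n_low)
low_unident     = rng.normal(830, 30, n_low)    # low MT: loafing when not identified

diff_high = high_identified - high_unident
diff_low = low_identified - low_unident

t, p = ttest_ind(diff_high, diff_low)
print(f"interaction (difference-score) test: t = {t:.2f}, p = {p:.3f}")
```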

Relevance:

30.00%

Publisher:

Abstract:

Embedded real-time applications increasingly present high computation requirements that must be completed within specific deadlines, but exhibit highly variable patterns depending on the set of data available at a given instant. The current trend of providing parallel processing in the embedded domain delivers higher processing power; however, it does not address the variability in the processing pattern. Dimensioning each device for its worst-case scenario implies lower average utilization and leaves processing capacity in the overall system available but unusable. A solution to this problem is to extend the parallel execution of the applications, allowing networked nodes to distribute the workload, in peak situations, to neighbour nodes. In this context, this report proposes a framework to develop parallel and distributed real-time embedded applications, transparently using OpenMP and the Message Passing Interface (MPI), within a programming model based on OpenMP. The technical report also devises an integrated timing model, which enables structured reasoning about the timing behaviour of these hybrid architectures.
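A minimal sketch of the offloading idea only (not the framework proposed in the report): using mpi4py, a node that exceeds its assumed per-cycle capacity ships the excess work items to a neighbour node and accepts whatever its other neighbour offloads. The capacity value, workload and file name are hypothetical, and the OpenMP-style local parallelism is omitted.

```python
# Hypothetical sketch: offloading peak workload to a neighbour node with mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

LOCAL_CAPACITY = 10                       # assumed per-node capacity per cycle
work = list(range((rank + 1) * 8))        # uneven, data-dependent workload per node

kept = work[:LOCAL_CAPACITY]
excess = work[LOCAL_CAPACITY:]            # items beyond what we can finish in time

# Ship the excess to the next node and accept whatever the previous node offloads.
received = comm.sendrecv(excess, dest=(rank + 1) % size, source=(rank - 1) % size)
kept.extend(received)

print(f"node {rank}: processing {len(kept)} items this cycle")
```

Run, for example, with mpiexec -n 4 python offload.py (offload.py being a hypothetical file name).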

Relevance:

30.00%

Publisher:

Abstract:

This manuscript analyses the data generated by a Zero Length Column (ZLC) diffusion experimental set-up for 1,3-diisopropylbenzene in a 100% alumina matrix with variable particle size. The time evolution of the phenomena resembles that of fractional-order systems, namely those with a fast initial transient followed by long and slow tails. The experimental measurements are best fitted by the Harris model, revealing power-law behavior.
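A minimal sketch of how a long-time power-law tail can be fitted to ZLC-like desorption data (synthetic data and a generic power-law form c(t) ≈ A t^(-b); this is not the Harris model fit actually used in the manuscript):

```python
# Hypothetical sketch: fitting a power-law tail c(t) ~ A * t**(-b) to ZLC-like data.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    return a * t ** (-b)

t = np.linspace(1.0, 200.0, 300)
c = power_law(t, 2.0, 0.7) * (1 + 0.05 * np.random.default_rng(1).normal(size=t.size))

(a_fit, b_fit), _ = curve_fit(power_law, t, c, p0=(1.0, 0.5))
print(f"fitted exponent b = {b_fit:.2f}")   # slow tails correspond to small exponents
```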

Relevance:

30.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between contemporaneous returns of a large number of assets given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specification of dynamic asset pricing models, which covers the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role in the validity of standard CAPM-like stock pricing and preference-free option pricing.
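In generic asset-pricing notation (a standard summary, not the paper's specific specification), the SDF and beta pricing relations discussed above can be written as

\[ E\big[\, m_{t+1} R_{i,t+1} \,\big|\, I_t \,\big] = 1, \qquad m_{t+1} = a(Z_t) + b(Z_t)^\top F_{t+1}, \]

so that a conditional beta pricing relation follows,

\[ E\big[\, R_{i,t+1} \,\big|\, I_t \,\big] - r_{f,t} = \beta_{i,t}^\top \lambda_t, \]

where F_{t+1} are the factors, Z_t the state variables driving the SDF coefficients, \beta_{i,t} the conditional factor loadings and \lambda_t the factor risk premia.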

Relevance:

30.00%

Publisher:

Abstract:

This paper studies tests of joint hypotheses in time series regression with a unit root, in which weakly dependent and heterogeneously distributed innovations are allowed. We consider two types of regression: one with a constant and a lagged dependent variable, and the other with a trend added. The statistics studied are the regression "F-tests" originally analysed by Dickey and Fuller (1981) in a less general framework. The limiting distributions are found using functional central limit theory. New test statistics are proposed which require only already tabulated critical values but which are valid in a quite general framework (including finite-order ARMA models generated by Gaussian errors). This study extends the results on single coefficients derived in Phillips (1986a) and Phillips and Perron (1986).
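In standard notation (generic, not necessarily the paper's), the two regressions and joint null hypotheses in question have the Dickey-Fuller form

\[ y_t = \mu + \alpha y_{t-1} + u_t, \qquad H_0: (\mu, \alpha) = (0, 1), \]
\[ y_t = \mu + \beta t + \alpha y_{t-1} + u_t, \qquad H_0: (\beta, \alpha) = (0, 1), \]

each null being tested with a regression F-statistic whose non-standard limiting distribution was tabulated by Dickey and Fuller (1981).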

Relevance:

30.00%

Publisher:

Abstract:

To meet the federal government's requirements on waiting times for knee and hip replacement surgery, Canadian institutions adopted wait-list management strategies with varying degrees of success. Our research question sought to understand: what factors made it possible to sustain over time a waiting time meeting the federal government's requirements for at least 6-12 months? We developed a four-factor model, inspired by Parsons' (1977) model, to analyse governance, culture, resources and tools. Three case studies were conducted. In short, the first case met the requirements for six months but was unable to sustain them, the second case sustained the requirements for more than 18 months, and the third case was unable to reach the targets. Documents were collected and interviews were conducted with the people involved in the strategy. The results indicate that the hospital that was able to sustain the waiting time has certain characteristics: it performs hip and knee replacement surgery exclusively, it has motivated staff who are not distracted by other concerns, and it has a strong team spirit. The other two cases had to deal with a less homogeneous medical culture that was less focused on meeting targets, dispersed resources, and an imprecise intra-institutional policy. The "factory hospital" model is attractive in the context of highly specialized surgery. However, patients are selected for simple procedures with a low risk of complications, so it cannot be retained as the sustainable model par excellence.

Relevance:

30.00%

Publisher:

Abstract:

White dwarf stars represent the endpoint of the evolution of 97% of the stars in our Galaxy, including our Sun. Studying the global properties of these stars (temperature distribution, mass distribution, luminosity function, etc.) requires statistically complete and well-defined samples. Although several white dwarf surveys exist in the literature, most of them suffer from significant statistical biases for this kind of analysis. To date, the most representative sample of the white dwarf population remains the one defined within a complete volume restricted to the immediate neighbourhood of the Sun, i.e. within a distance of 20 pc (~65 light-years). Unfortunately, since white dwarfs are intrinsically faint stars, this sample contains only ~130 objects, compromising any meaningful statistical study. The goal of our study is to census the white dwarf population in the solar neighbourhood out to a distance of 40 pc, a volume eight times larger. We therefore set out to identify all white dwarf stars within 40 pc of the Sun from SUPERBLINK, a large catalogue containing proper motions and photometric data for more than 2 million stars. Our approach is based on the reduced proper motion method, which separates white dwarfs from other stellar populations. The distances of all white dwarf candidates are estimated using theoretical colour-magnitude relations in order to identify the objects lying within 40 pc of the Sun, in the northern hemisphere. The spectroscopic confirmation of the white dwarf status of our ~1100 candidates then required 15 observing runs on three large telescopes at Kitt Peak, Arizona, as well as about sixty hours allocated on the 8 m telescopes of the Gemini North and South observatories. We thus discovered 322 new white dwarf stars of several different spectral types, 173 of which lie within 40 pc, a 40% increase in the number of white dwarfs known within this volume. Among these new white dwarfs, 4 probably lie within 20 pc of the Sun. Moreover, we demonstrate that our technique is very efficient at identifying white dwarfs in the crowded region of the Galactic plane. We then present a detailed spectroscopic and photometric analysis of our sample using model atmospheres in order to determine the physical properties of these stars, notably their temperature, surface gravity and chemical composition. Our statistical analysis of these properties, based on a sample almost three times larger than the 20 pc sample, reveals that we have successfully identified the most massive, and thus least luminous, stars of this population, which are often missing from most published surveys. We have also identified several very cool, and therefore potentially very old, white dwarfs, which allow us to better define the cool end of the luminosity function and, eventually, the age of the Galactic disc. Finally, we also discovered several objects of astrophysical interest, including two new ZZ Ceti variable white dwarfs, several magnetic white dwarfs, and numerous unresolved binary systems.
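For reference, the reduced proper motion used to separate white dwarfs from other stellar populations is defined in its standard form (with the proper motion \mu in arcsec/yr and the tangential velocity v_tan in km/s) as

\[ H_m = m + 5\log_{10}\mu + 5 = M + 5\log_{10} v_{\mathrm{tan}} - 3.379, \]

and once an absolute magnitude M is assigned from theoretical colour-magnitude relations, the photometric distance follows from the distance modulus m - M = 5\log_{10} d - 5 (d in parsecs).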

Relevance:

30.00%

Publisher:

Abstract:

This work aims to study the variation in subduction zone geometry along and across the arc and the fault pattern within the subducting plate. The depth of penetration as well as the dip of the Benioff zone vary considerably along the arc, corresponding to the curvature of the fold-thrust belt, which varies from concave to convex in different sectors of the arc. The entire arc is divided into 27 segments, and the depth sections thus prepared are used to investigate the average dip of the Benioff zone in the different parts of the arc, the penetration depth of the subducting lithosphere, the subduction zone geometry underlying the trench, the arc-trench gap, etc. The study also describes how the different seismogenic sources in the region are identified, and how the moment release rate and the deformation pattern are estimated. The region is divided into broad seismogenic belts. Based on these previous studies and the seismicity pattern, we identified several broad, distinct seismogenic belts/sources: 1) the outer arc region consisting of the Andaman-Nicobar islands, 2) the back-arc Andaman Sea, 3) the Sumatran Fault Zone (SFZ), 4) the Java onshore region, termed the Java Fault Zone (JFZ), 5) the Sumatran fore-arc sliver plate containing the Mentawai fault (MFZ), 6) the offshore Java fore-arc region, and 7) the Sunda Strait region. As the seismicity is variable, it is difficult to demarcate individual seismogenic sources. Hence, we employed a moving-window method with a window length of 3-4° and 50% overlap, starting from one end of the arc to the other. We succeeded in defining 4 sources each in the Andaman fore-arc and back-arc regions, 9 such sources (moving windows) in the Sumatran Fault Zone (SFZ), 9 sources in the offshore SFZ region and 7 sources in the offshore Java region. Because of the low seismicity along the JFZ, it is separated into three seismogenic sources, namely West Java, Central Java and East Java. The Sunda Strait is considered a single seismogenic source. The deformation rates for each of the seismogenic zones have been computed. A detailed error analysis of the velocity tensors using a Monte Carlo simulation method has been carried out in order to obtain uncertainties. The eigenvalues and the respective eigenvectors of the velocity tensor are computed to analyse the actual deformation pattern for the different zones. The results obtained are discussed in the light of regional tectonics, and their implications in terms of geodynamics are enumerated. In the light of the recent major earthquakes (the 26 December 2004 and 28 March 2005 events) and the ongoing seismic activity, we have recalculated the variation in crustal deformation rates before and after these earthquakes in the Andaman-Sumatra region, including data up to 2005, and the significant results are presented. Finally, the down-going lithosphere along the subduction zone is modelled using free-air gravity data, taking into consideration the thickness of the crustal layer, the thickness of the subducting slab, the sediment thickness, the presence of volcanism, the proximity of the continental crust, etc. A systematic and detailed gravity interpretation, constrained by seismicity and seismic data in the Andaman arc and Andaman Sea region, is presented in order to delineate the crustal structure and density heterogeneities along and across the arc and their correlation with the seismogenic behaviour.
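As an illustration of the moving-window segmentation described above (hypothetical code, coordinates and event list, not the thesis's actual implementation), windows of 3-4° along the arc with 50% overlap can be generated and used to group seismicity as follows:

```python
# Hypothetical sketch: 3-4 degree windows with 50% overlap along an arc coordinate.
def moving_windows(start_deg, end_deg, window_deg=4.0, overlap=0.5):
    """Yield (lo, hi) window bounds covering [start_deg, end_deg]."""
    step = window_deg * (1.0 - overlap)          # 50% overlap -> step is half a window
    lo = start_deg
    while lo < end_deg:
        yield (lo, min(lo + window_deg, end_deg))
        lo += step

# Grouping event latitudes into overlapping seismogenic windows (placeholder values).
events = [6.2, 7.9, 8.4, 10.1, 11.7, 12.3, 13.8]   # event latitudes in degrees N
for lo, hi in moving_windows(6.0, 14.0):
    in_win = [e for e in events if lo <= e < hi]
    print(f"{lo:.1f}-{hi:.1f} deg: {len(in_win)} events")
```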

Relevance:

30.00%

Publisher:

Abstract:

This thesis is entitled "Studies on Quasinormal Modes and Late-time Tails in Black Hole Spacetimes". In this thesis, the signature of these new theories is probed in the evolution of field perturbations on the black hole spacetimes of the theory. Chapter 1 gives a general introduction to black holes and their perturbation formalism; various concepts in the area covered by the thesis are also elucidated in this chapter. Chapter 2 describes the evolution of massive, charged scalar field perturbations around a Reissner-Nordstrom black hole surrounded by static and spherically symmetric quintessence. Chapter 3 covers the evolution of massless scalar, electromagnetic and gravitational fields around a spherically symmetric black hole whose asymptotics are defined by the quintessence, with special interest in the late-time behavior. Chapter 4 examines the evolution of the Dirac field around a Schwarzschild black hole surrounded by quintessence; detailed numerical simulations are carried out to analyse the nature of the field on different surfaces of constant radius. Chapter 5 is dedicated to the study of the evolution of massless fields around the black hole geometry in HL gravity.

Relevance:

30.00%

Publisher:

Abstract:

The term reliability of an equipment or device is often meant to indicate the probability that it carries out the functions expected of it adequately, or without failure and within specified performance limits, at a given age, for a desired mission time, when put to use under the designated application and operating environmental stress. A broad classification of the approaches employed in reliability studies can be made as probabilistic and deterministic, where the main interest in the former is to devise tools and methods to identify the random mechanism governing the failure process through a proper statistical framework, while the latter addresses the question of finding the causes of failure and the steps to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of a life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question a reliability analyst has to settle is the form of the life distribution. It is for no other reason that a major share of the literature on the mathematical theory of reliability is focussed on methods of arriving at reasonable models of failure times and on showing the failure patterns that induce such models. The application of the methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems only, but ranges over a wide variety of scientific investigations in which the word lifetime may not refer to the length of life in the literal sense, but can be conceived in its most general form as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, biomedical sciences, economics, extreme value theory, etc.
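In the standard notation of lifetime modelling (a textbook summary, not specific to this study), with lifetime T having distribution function F and density f,

\[ R(t) = P(T > t) = 1 - F(t), \qquad h(t) = \frac{f(t)}{R(t)}, \qquad \mathrm{MTTF} = E[T] = \int_0^\infty R(t)\, dt, \]

where R is the reliability (survival) function, h the hazard or failure rate, and MTTF the mean time to failure.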

Relevance:

30.00%

Publisher:

Abstract:

The study of variable stars is an important topic in modern astrophysics. With powerful telescopes and high-resolution CCDs, variable star data are accumulating at the petabyte scale. This huge amount of data requires automated methods as well as human experts. This thesis is devoted to the analysis of astronomical time series data of variable stars and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; these are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheids, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. The sequence of photometric observations of variable stars produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star. One way to identify and classify variable stars is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages, such as observation, data reduction, data analysis, modelling and classification. Modelling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. The classification requires the determination of the basic parameters, such as period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse phased light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. This is due to the daily cycle of daylight and to weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary purpose is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period-search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of the methods stated above can fully recover the true periods. Wrong period detection can be due to several reasons, such as power leakage to other frequencies caused by the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial to the variable star community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period-search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For classifying newly discovered variable stars and entering them in the General Catalogue of Variable Stars or other databases such as the Variable Star Index, the characteristics of the variability have to be quantified in terms of variable star parameters.
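As an illustration of one of the period-search methods mentioned above (a minimal sketch using the astropy implementation of the Lomb-Scargle periodogram on synthetic, unevenly sampled data; this is not the modified cubic spline method proposed in the thesis), the period is taken at the peak of the periodogram and the light curve is then folded on it:

```python
# Minimal sketch: period search with the Lomb-Scargle periodogram and phase folding.
# Synthetic, unevenly sampled data; illustrative only.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
true_period = 0.57                                  # days, arbitrary choice
t = np.sort(rng.uniform(0, 30, 400))                # unevenly spaced observation times
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

frequency, power = LombScargle(t, mag).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period ~ {best_period:.3f} d")

phase = (t / best_period) % 1.0                     # fold on the period -> phased light curve
```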