915 results for Databases, Bibliographic
Abstract:
Introduction – The exponential growth of information, particularly scientific information, is not necessarily matched by an improvement in the quality of its retrieval and use. The concept of information literacy gains relevance and prominence insofar as it encompasses the competencies needed to recognise when information is required and to act efficiently and effectively in obtaining and using it. In this context, the academic library takes on the role of privileged partner, preparing the moment when the student feels able to produce and record new knowledge through writing. Objective – The ESTeSL Library restructured the sessions it had been running since the 2002/2003 academic year and launched a more formal project entitled «Saber usar a informação de forma eficiente e eficaz» ("Knowing how to use information efficiently and effectively"). Its objectives were: a) to improve the quality of academic and scientific work; b) to help reduce the risk of plagiarism; c) to increase students' confidence in their ability to use information resources; d) to encourage more active participation in the classroom; e) to support the integration of teaching content with the various information sources. Method – Several short training sessions were run, covering different information literacy topics, namely: 1) information retrieval, with sessions dedicated to MEDLINE, RCAAP, SciELO, B-ON and Scopus; 2) the impact factor of scientific journals: Journal Citation Reports and SCImago; 3) how to write a scientific abstract; 4) how to structure a scientific paper; 5) how to give an oral presentation; 6) how to avoid plagiarism; 7) bibliographic referencing using the Vancouver style; 8) use of reference managers: ZOTERO (a first approach for first-year undergraduates) and reference management and academic information networking with MENDELEY (aimed at final-year students, master's students, lecturers and researchers). The project was presented to the academic community on the ESTeSL website, and each session was advertised individually on the site and by email. In 2015, promotion focused on the Library's new page (https://estesl.biblio.ipl.pt/), which hosted the information and resources covered in the training. Registration was by email, with no fees and no minimum or maximum number of sessions per participant. Results – In 2014 there were 87 registrations, with at least one participant attending every training session. In 2015, registrations totalled 190. New sessions were scheduled at the request of students whose timetables clashed with the original schedule, leading to two consecutive days of training (about 4 hours each) with content selected by the students; these sessions had a steady attendance of around 30 students. Overall, the information literacy sessions were attended by undergraduates from every year, master's students, lecturers and researchers (from within and outside ESTeSL). Conclusions – There is a clear need to add new content to the information literacy project. The time invested, the content and the interest shown by those who took part demonstrate that the project is earning its place in the ESTeSL community and that information literacy contributes effectively to the construction and production of knowledge in the academic environment.
Abstract:
Introduction – The information searches that higher education students carry out in electronic resources do not necessarily reflect mastery of the skills of searching, analysing, evaluating, selecting and making good use of the information retrieved. The concept of information literacy gains relevance and prominence insofar as it encompasses the competencies needed to recognise when information is required and to act efficiently and effectively in obtaining and using it. Objective – The goal of the Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL) was to provide information literacy training, outside ESTeSL, to students, teachers and researchers. Methods – The training was integrated into national and international projects, depending on the target audiences, topics, content, contact hours and the request of the partner institution. The Fundação Calouste Gulbenkian was the main financial sponsor. Results – Several interventions took place in Portugal and abroad. In 2010, in Angola, at the Instituto Médio de Saúde do Bengo, 10 librarians were trained in building and managing a health library and introduced to information literacy (35h). In 2014, under the ERASMUS Intensive Programme, OPTIMAX (Radiation Dose and Image Quality Optimisation in Medical Imaging) trained 40 radiology teachers and students (from Portugal, the United Kingdom, Norway, the Netherlands and Switzerland) in methodology and information retrieval in MEDLINE and the Web of Science, and in Mendeley as a reference manager (4h). The final papers from this course were published as an ebook (http://usir.salford.ac.uk/34439/1/Final%20complete%20version.pdf), whose editorial revision was the responsibility of the librarians. Throughout 2014, at the Escola Superior de Educação, Escola Superior de Dança, Instituto Politécnico de Setúbal and Faculdade de Medicina de Lisboa, and throughout 2015 at the Universidade Aberta, Escola Superior de Comunicação Social, Instituto Egas Moniz, Faculdade de Letras de Lisboa and Centro de Linguística da Universidade de Lisboa, content was designed on the use of ZOTERO and Mendeley for reference management and on a new way of doing research. Each of these sessions (2.5h) involved about 25 final-year students, master's students and teachers. In 2015, in Mozambique, at the Instituto Superior de Ciências da Saúde, 5 librarians and 46 students and teachers were trained (70h). The content covered was: 1) management and organisation of a health library (for librarians); 2) information literacy: information retrieval in MEDLINE, SciELO and RCAAP, reference managers and how to avoid plagiarism (for librarians and final-year radiology students). The hours allocated to students included tutoring of undergraduate monographs, in collaboration with two other teachers on the project. Training at other Portuguese higher education institutions is scheduled for 2016. Similar training is also envisaged in Timor-Leste, with content, dates and contact hours still to be arranged. Conclusions – These initiatives benefit the institution (through visibility), the librarians (by showcasing their skills) and the students, teachers and researchers (through new skills and the autonomy gained).
ESTeSL's information literacy project has contributed effectively to the construction and production of knowledge in the academic environment, nationally and internationally, with the library as the privileged partner in this culture of collaboration.
Abstract:
Financial markets play a fundamental role in energising modern economies. They offer listed companies the capital needed to drive their growth and give individual investors a way to diversify their portfolios, thereby sharing in the growth and vitality of the world economy. The management of portfolios of financial assets is a field that seeks mechanisms for achieving an optimal trade-off between return and risk, and numerous studies have contributed significantly to the efficiency and practice of this technique. This dissertation analyses the methodology developed by Elton-Gruber for constructing optimised portfolios and applies the underlying techniques to the Portuguese stock market. To that end, specialist bibliographic sources were researched and databases of historical share prices and of the national market index were consulted. The application covered shares listed on the PSI-20 index between 2010 and 2014. To better understand the sample return series, this quantitative study also used statistical analysis. The evidence shows that the optimised portfolio for the period under analysis contains only shares of the company Portucel, a result likely conditioned by the effects of the financial crisis that began in 2008.
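To make the Elton-Gruber procedure concrete, here is a minimal sketch of the simple ranking criterion under the single-index model: rank stocks by excess return to beta, compute the cutoff rate C*, and weight the stocks whose ratio exceeds it. The tickers, betas, residual variances and rates below are hypothetical illustrations, not the dissertation's PSI-20 data.

```python
import numpy as np

def elton_gruber_portfolio(mean_ret, beta, resid_var, rf, market_var):
    """Elton-Gruber simple ranking procedure under the single-index model
    (a sketch: long-only portfolio, no short sales)."""
    erb = (mean_ret - rf) / beta               # excess return to beta
    order = np.argsort(erb)[::-1]              # rank stocks by ERB, descending
    r, b, v = mean_ret[order], beta[order], resid_var[order]
    erb_s = (r - rf) / b

    num = np.cumsum((r - rf) * b / v)          # cumulative numerator of C_i
    den = np.cumsum(b ** 2 / v)                # cumulative denominator of C_i
    c = market_var * num / (1 + market_var * den)

    k = np.flatnonzero(erb_s > c).max()        # last stock with ERB above C_i
    c_star = c[k]                              # the cutoff rate C*

    z = (b[: k + 1] / v[: k + 1]) * (erb_s[: k + 1] - c_star)
    w = z / z.sum()                            # normalised portfolio weights
    return order[: k + 1], w

# Hypothetical annualised inputs for five stocks (not real PSI-20 data)
mu   = np.array([0.12, 0.10, 0.15, 0.08, 0.11])   # expected returns
beta = np.array([1.10, 0.90, 1.40, 0.70, 1.00])   # betas vs. the index
s2e  = np.array([0.04, 0.03, 0.06, 0.02, 0.05])   # residual variances
idx, w = elton_gruber_portfolio(mu, beta, s2e, rf=0.02, market_var=0.025)
print("stocks:", idx, "weights:", np.round(w, 3))
```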
Abstract:
Current computer systems have evolved from featuring only a single processing unit and limited RAM, in the order of kilobytes or a few megabytes, to include several multicore processors, offering in the order of several tens of concurrent execution contexts, and main memory in the order of several tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions for overcoming the scalability issues of IMDBs on multicores while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database system or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, although update-intensive workloads suffer a performance loss compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers performance improvements under all workloads compared to the standalone engine, although scalability remains limited to read-only workloads. Next, we address the scalability limitations under update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads, with scalability limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems by studying how the order of operations inside transactions influences database performance. We then propose a Read-before-Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads. Additionally, the RbW pattern allowed our modified engine to achieve scalable performance on multicores, almost up to the total number of cores, while enforcing strong isolation.
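As an illustration of the Read-before-Write pattern described above, here is a minimal sketch in which a transaction wrapper defers its writes until commit time, so the engine sees all of a transaction's reads before any of its writes. The class and table names are hypothetical illustrations, not MacroDB's API or the dissertation's code.

```python
import sqlite3

class RbWTransaction:
    """Buffers writes so the engine sees all reads before any write
    (the Read-before-Write interaction pattern sketched above)."""

    def __init__(self, conn):
        self.conn = conn
        self.pending = []          # buffered (sql, params) write operations

    def read(self, sql, params=()):
        # Reads execute immediately against the engine.
        return self.conn.execute(sql, params).fetchall()

    def write(self, sql, params=()):
        # Writes are deferred; the engine sees them only at commit time.
        self.pending.append((sql, params))

    def commit(self):
        for sql, params in self.pending:
            self.conn.execute(sql, params)
        self.conn.commit()
        self.pending.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 50)")

tx = RbWTransaction(conn)
(balance,), = tx.read("SELECT balance FROM account WHERE id = ?", (1,))
tx.write("UPDATE account SET balance = ? WHERE id = 1", (balance - 10,))
tx.write("UPDATE account SET balance = balance + 10 WHERE id = 2")
tx.commit()
print(conn.execute("SELECT * FROM account").fetchall())  # [(1, 90), (2, 60)]
```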
Abstract:
BACKGROUND: The synthesis of published research in systematic reviews is essential when providing evidence to inform clinical and health policy decision-making. However, the validity of systematic reviews is threatened if journal publications represent a biased selection of all studies that have been conducted (dissemination bias). To investigate the extent of dissemination bias, we conducted a systematic review that determined the proportion of studies published as peer-reviewed journal articles and investigated factors associated with full publication in cohorts of studies (i) approved by research ethics committees (RECs) or (ii) included in trial registries. METHODS AND FINDINGS: Four bibliographic databases were searched for methodological research projects (MRPs) without limitations on publication year, language or study location. The searches were supplemented by handsearching the references of included MRPs. We estimated the proportion of studies published using prediction intervals (PIs) and a random-effects meta-analysis. Pooled odds ratios (ORs) were used to express associations between study characteristics and journal publication. Seventeen MRPs (23 publications) evaluated cohorts of studies approved by RECs; the proportion of published studies had a PI between 22% and 72%, and the weighted pooled proportion when combining estimates would be 46.2% (95% CI 40.2%-52.4%, I2 = 94.4%). Twenty-two MRPs (22 publications) evaluated cohorts of studies included in trial registries; the PI of the proportion published ranged from 13% to 90%, and the weighted pooled proportion would be 54.2% (95% CI 42.0%-65.9%, I2 = 98.9%). REC-approved studies with statistically significant results (compared with those without statistically significant results) were more likely to be published (pooled OR 2.8; 95% CI 2.2-3.5). Phase-III trials were also more likely to be published than phase-II trials (pooled OR 2.0; 95% CI 1.6-2.5). The probability of publication within two years after study completion ranged from 7% to 30%. CONCLUSIONS: A substantial proportion of the studies approved by RECs or included in trial registries remains unpublished. Due to the large heterogeneity, a prediction of the publication probability for a future study is very uncertain. Non-publication of research is not a random process; for example, it is associated with the direction of study findings. Our findings suggest that the dissemination of research findings is biased.
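For readers who want to see the pooling mechanics, here is a minimal sketch of a DerSimonian-Laird random-effects pooled proportion on the logit scale — the general kind of weighted pooling reported above. The event counts are invented for illustration, and the details may differ from the authors' exact analysis.

```python
import numpy as np

def pooled_proportion_dl(events, totals):
    """DerSimonian-Laird random-effects pooled proportion on the logit
    scale (a sketch, not the authors' code)."""
    p = events / totals
    y = np.log(p / (1 - p))                    # logit of each study proportion
    v = 1 / events + 1 / (totals - events)     # approximate within-study variance
    w = 1 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q statistic
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (v + tau2)                      # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda x: 1 / (1 + np.exp(-x))     # back to the proportion scale
    ci = (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0   # heterogeneity I^2
    return expit(y_re), ci, i2

# Hypothetical cohorts: studies published / studies approved by an ethics committee
events = np.array([30.0, 55.0, 12.0, 70.0])
totals = np.array([60.0, 120.0, 40.0, 130.0])
pooled, ci, i2 = pooled_proportion_dl(events, totals)
print(f"pooled {pooled:.1%}, 95% CI {ci[0]:.1%}-{ci[1]:.1%}, I2 = {i2:.0f}%")
```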
Abstract:
Expert curation and complete collection of mutations in genes that affect human health are essential for proper genetic healthcare and research. Expert curation is provided by the curators of gene-specific mutation databases, or locus-specific databases (LSDBs). While there are over 700 such databases, they vary in their content, completeness, the time available for curation, and the expertise of the curator. Curation and LSDBs have been discussed and written about, and protocols have been provided, for over 10 years, but there have been no formal recommendations on the ideal form of these entities. This work initiates a discussion on the topic to assist future efforts in human genetics. Further discussion is welcome.
Abstract:
The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB's Bioinformatics resource portal ExPASy features over 150 resources, including UniProtKB/Swiss-Prot, ENZYME, PROSITE, neXtProt, STRING, UniCarbKB, SugarBindDB, SwissRegulon, EPD, arrayMap, Bgee, SWISS-MODEL Repository, OMA, OrthoDB and other databases, which are briefly described in this article.
Abstract:
Classical relational databases lack proper ways to manage certain real-world situations, including imprecise or uncertain data. Fuzzy databases overcome this limitation by allowing each entry in a table to be a fuzzy set, where each element of the corresponding domain is assigned a membership degree from the real interval [0, 1]. But this fuzzy mechanism becomes inadequate for modelling scenarios where data might be incomparable. We are therefore interested in a further generalisation of the fuzzy database into the L-fuzzy database, in which the characteristic function of a fuzzy set maps into an arbitrary complete Brouwerian lattice L. From the query language perspective, the fuzzy database language FSQL extends the regular Structured Query Language (SQL) by adding fuzzy-specific constructions. In addition, the L-fuzzy query language LFSQL introduces appropriate linguistic operations to define and manipulate inexact data in an L-fuzzy database. This research mainly focuses on defining the semantics of LFSQL, which requires an abstract algebraic theory that can be used to prove all the properties of, and operations on, L-fuzzy relations. In our study, we show that the theory of arrow categories forms a suitable framework for this, and we therefore define the semantics of LFSQL in the abstract notion of an arrow category. In addition, we implement the operations of L-fuzzy relations in Haskell and develop a parser that translates algebraic expressions into our implementation.
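The thesis implements L-fuzzy relations in Haskell; as a language-neutral illustration, here is a minimal Python sketch of lattice-valued membership and relational composition over the product lattice [0, 1] x [0, 1] under componentwise order, whose elements can be incomparable. All names and the choice of lattice are illustrative assumptions, not the thesis code.

```python
from itertools import product

# L is the product lattice [0, 1] x [0, 1] with componentwise order, so it
# contains incomparable membership degrees such as (0.9, 0.2) vs (0.3, 0.8).

def meet(a, b):                # greatest lower bound in L
    return (min(a[0], b[0]), min(a[1], b[1]))

def join(a, b):                # least upper bound in L
    return (max(a[0], b[0]), max(a[1], b[1]))

BOT = (0.0, 0.0)               # least element of L

def compose(R, S, A, B, C):
    """Relational composition (R;S)(x,z) = join over y of meet(R(x,y), S(y,z))."""
    T = {}
    for x, z in product(A, C):
        acc = BOT
        for y in B:
            acc = join(acc, meet(R[x, y], S[y, z]))
        T[x, z] = acc
    return T

A, B, C = ["p1"], ["d1", "d2"], ["t1"]
R = {("p1", "d1"): (0.9, 0.2), ("p1", "d2"): (0.3, 0.8)}   # incomparable degrees
S = {("d1", "t1"): (0.7, 0.7), ("d2", "t1"): (1.0, 0.5)}
print(compose(R, S, A, B, C))   # {('p1', 't1'): (0.7, 0.5)}
```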
Abstract:
Département de linguistique et de traduction
Abstract:
Recent scientific advances and new technological developments, most notably the advent of bio-informatics, have led to the emergence of genetic databases with particular characteristics and structures. Paralleling these developments, there has been a proliferation of ethical and legal texts aimed at the regulation of this new form of genetic database.
Abstract:
Optimised bibliographic search filters are designed to make it easier to retrieve information from bibliographic databases, which are almost always the most abundant source of scientific evidence, and thereby support evidence-based decision-making. Most of the filters available in the literature are methodological filters, but to realise their full potential they must be combined with filters that retrieve studies on a particular topic. In the field of patient safety, it has been shown that poor information retrieval can have tragic consequences, so optimised search filters covering the field could prove very useful. The aim of this study is to propose optimised bibliographic search filters for the patient safety field, to assess their validity, and to offer a guide for developing search filters. We propose optimised filters for retrieving articles on patient safety in healthcare organisations in the Medline, Embase and CINAHL databases. These filters perform very well and are specifically built for articles whose content is explicitly linked to the patient safety field by their authors. The extent to which their use can be generalised to other contexts depends on how the boundaries of the patient safety field are defined.
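As a toy illustration of the combination the study recommends — a topic filter ANDed with a methodological filter — consider the sketch below. The search terms are generic examples of PubMed syntax, not the validated patient-safety filters proposed by the authors.

```python
# Illustrative only: a topic filter combined (AND) with a methodological
# filter. These terms are generic examples, NOT the authors' validated filters.
topic = '("patient safety"[tiab] OR "medication errors"[tiab] OR "adverse events"[tiab])'
method = '(randomized controlled trial[pt] OR systematic[sb])'
query = f"{topic} AND {method}"
print(query)  # paste the printed string into PubMed's search box
```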
On Implementing Joins, Aggregates and Universal Quantifier in Temporal Databases using SQL Standards
Abstract:
A feasible way of implementing a temporal database is to map the temporal data model onto a conventional data model supported by a commercial database management system. Even though extensions to standard SQL have been proposed to support temporal databases, such proposals have not yet made it through the standardization process. This paper attempts to implement database operators such as aggregates and the universal quantifier for temporal databases built on top of relational database systems, using currently available SQL standards.
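One classic standard-SQL trick in the spirit of the paper: to evaluate a temporal aggregate on top of a conventional engine, split the timeline at every interval endpoint and aggregate over the resulting elementary intervals. The sketch below computes a time-varying COUNT; the table layout and half-open interval semantics are assumptions for illustration, not the paper's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE works (emp TEXT, dept TEXT, valid_from INT, valid_to INT);
INSERT INTO works VALUES
  ('ana',  'db', 1, 10),
  ('bart', 'db', 5, 15),
  ('carl', 'db', 8, 12);
""")
rows = conn.execute("""
WITH instants(t) AS (            -- every interval endpoint in the table
  SELECT valid_from FROM works UNION SELECT valid_to FROM works
),
slices(t_from, t_to) AS (        -- elementary intervals between endpoints
  SELECT t, (SELECT MIN(t2.t) FROM instants t2 WHERE t2.t > i.t)
  FROM instants i
  WHERE EXISTS (SELECT 1 FROM instants t2 WHERE t2.t > i.t)
)
SELECT s.t_from, s.t_to, COUNT(w.emp) AS staff   -- time-varying COUNT(*)
FROM slices s
LEFT JOIN works w ON w.valid_from <= s.t_from AND w.valid_to >= s.t_to
GROUP BY s.t_from, s.t_to
ORDER BY s.t_from
""").fetchall()
for r in rows:
    print(r)   # (1, 5, 1), (5, 8, 2), (8, 10, 3), (10, 12, 2), (12, 15, 1)
```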
Abstract:
Information and communication technologies (ICTs) are the tools that underpin the emerging "Knowledge Society". Exchange of information or knowledge between people and through networks of people has always taken place, but ICT has radically changed the magnitude of this exchange, so factors such as the timeliness of information and patterns of information dissemination have become more important than ever. Since information and knowledge are so vital for all-round human development, the libraries and institutions that manage these resources are invaluable, and library and information centres have a key role in the acquisition, processing, preservation and dissemination of information and knowledge. In the modern context, libraries provide services based on different types of documents: manuscript, printed, digital, etc. At the same time, the acquisition, access, processing and servicing of these resources have become more complicated than ever before. ICT has been instrumental in extending libraries beyond the physical walls of a building and in providing assistance in navigating and analysing tremendous amounts of knowledge with a variety of digital tools. Thus, modern libraries are increasingly being redefined as places offering unrestricted access to information in many formats and from many sources. The research was conducted in the university libraries of Kerala State, India. It found that even though information resources are flooding in worldwide and several technologies have emerged to manage the situation and provide effective services, most of the university libraries in Kerala were unable to exploit these technologies fully. Though the libraries have automated many of their functions, a wide gap remains between the services that are possible and those actually provided. There are many good examples worldwide of the application of ICTs in libraries to maximise services, and many such libraries have adopted the principles of re-engineering and redefining as a management strategy. Hence this study examined how effectively the libraries have adopted modern ICTs to maximise the efficiency of operations and services, and whether the principles of re-engineering and redefining can be applied to this end. Data were collected from library users (students as well as faculty), library professionals and university librarians using structured questionnaires, supplemented by observation of the working of the libraries, discussions and interviews with different types of users and staff, a review of the literature, etc. Personal observations were made of the organisational set-up, management practices, functions, facilities, resources, and the users' utilisation of information resources and facilities in the university libraries of Kerala. Statistical techniques such as percentage, mean, weighted mean, standard deviation, correlation and trend analysis were used to analyse the data. All the libraries could exploit only a very few of the possibilities of modern ICTs, and hence could not achieve effective Universal Bibliographic Control or the desired efficiency and effectiveness in services; as a result, users as well as professionals are dissatisfied.
Functional effectiveness in the acquisition, access and processing of information resources in various formats; development and maintenance of the OPAC and WebOPAC; digital document delivery to remote users; web-based handling of library counter services and resources; development of full-text databases, digital libraries and institutional repositories; consortium-based operations for e-journals and databases; user education and information literacy; professional development with stress on ICTs; network administration and website maintenance; and the marketing of information are the major areas that need special attention to improve the situation. Finance, the level of ICT knowledge among library staff, professional dynamism and leadership, the vision and support of administrators and policy makers, and the prevailing educational set-up and social environment in the state are some of the major hurdles to the university libraries in Kerala reaping the full possibilities of ICTs. The principles of Business Process Re-engineering were found suitable for restructuring and redefining the operations and service systems of the libraries. Most of the conventional departments or divisions in the university libraries were functioning as watertight compartments, and their existing management systems were too rigid to adopt the principles of change management; hence a thorough restructuring of the divisions was indicated. Consortium-based activities and the pooling and sharing of information resources were advocated to meet the varied needs of users on the main and satellite campuses of the universities, in affiliated colleges and at remote stations. A uniform staff policy similar to that prevailing in CSIR, DRDO, ISRO, etc. was proposed by the study, not only for the university libraries in Kerala but for the entire country. Restructuring of LIS education and the integrated, planned development of school, college, research and public library systems were also justified for reaping the maximum benefits of modern ICTs.
Abstract:
The study of variable stars is an important topic in modern astrophysics. Since the invention of powerful telescopes and CCDs with high resolving power, variable star data have been accumulating on the order of petabytes. This huge amount of data needs many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series and thus belongs to the interdisciplinary field of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena; most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve; if the time series is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify the type of a variable star and classify it is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modelling and classification. Modelling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is the application of mathematical and statistical tests to data in order to quantify variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps; this is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes: power leakage to other frequencies, due to the finite total interval, finite sampling interval and finite amount of data; aliasing, due to the influence of regular sampling; spurious periods arising from long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will benefit the variable star community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
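To make the period-search task concrete, here is a minimal sketch of Phase Dispersion Minimisation (Stellingwerf 1978), one of the methods the thesis evaluates: fold the light curve at trial periods and choose the period that minimises within-bin scatter relative to total scatter. The light curve below is synthetic and the binning choices are illustrative, not the thesis's data or settings.

```python
import numpy as np

# Synthetic, unevenly sampled light curve with a known period
rng = np.random.default_rng(0)
true_period = 2.7
t = np.sort(rng.uniform(0, 100, 400))           # irregular observation times
mag = 12 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

def pdm_theta(t, mag, period, nbins=10):
    """PDM statistic: pooled within-bin variance over total variance.
    Theta ~ 1 means no signal at this period; a sharp minimum flags the period."""
    phase = (t / period) % 1.0                  # phase-fold the time series
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    total_var = mag.var(ddof=1)
    num = den = 0.0
    for b in range(nbins):
        m = mag[bins == b]
        if m.size > 1:
            num += (m.size - 1) * m.var(ddof=1) # pooled within-bin variance
            den += m.size - 1
    return (num / den) / total_var

trial = np.linspace(0.5, 5.0, 4000)             # grid of trial periods
theta = np.array([pdm_theta(t, mag, p) for p in trial])
print(f"best period ~ {trial[np.argmin(theta)]:.3f} (true {true_period})")
```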