993 results for Relational databases -- Design
Abstract:
In order to increase drug safety, we must better understand how medication interacts with the body of our patients, and this knowledge should be made easily available to the clinicians prescribing the medication. This thesis contributes to increasing the knowledge of some drug properties and to making information readily accessible to medical professionals. Furthermore, it investigates the use of therapeutic drug monitoring, drug interaction databases and pharmacogenetic tests in pharmacovigilance.

Two pharmacogenetic studies in the naturalistic setting of psychiatric in-patient clinics were performed: one with the antidepressant mirtazapine, the other with the antipsychotic clozapine. Forty-five depressed patients were treated with mirtazapine and followed for 8 weeks. The therapeutic effect was comparable to that seen in previous studies. Enantioselective analyses confirmed an influence of age, gender and smoking on the pharmacokinetics of mirtazapine; they showed a significant influence of the CYP2D6 genotype on the antidepressant-effective S-enantiomer, and for the first time an influence of the CYP2B6 genotype on the plasma concentrations of the 8-OH metabolite was found. The CYP2B6*6/*6 genotype was associated with better treatment response. A detailed hypothesis of the metabolic pathways of mirtazapine is proposed. In the second pharmacogenetic study, analyses of 75 schizophrenic patients treated with clozapine showed the influence of CYP450 and ABCB1 genotypes on its pharmacokinetics. For the first time we could demonstrate an in vivo effect of the CYP2C19 genotype and an influence of P-glycoprotein on the plasma concentrations of clozapine. Furthermore, we confirmed in vivo the prominent role of CYP1A2 in the metabolism of clozapine.

Identifying risk factors for the occurrence of serious adverse drug reactions (SADR) would allow a more individualized and safer drug therapy. SADR are rare events and therefore difficult to study. We tested the feasibility of a nested matched case-control study to examine the influence of high drug plasma levels and CYP2D6 genotypes on the risk of experiencing an SADR. In our sample we compared 62 SADR cases with 82 controls; both groups were psychiatric patients from the in-patient clinic Königsfelden. Drug plasma levels above 120% of the upper recommended reference range were identified as a risk factor, with a statistically significant odds ratio of 3.5; a similar trend was seen for CYP2D6 poor metabolisers. Although a matched case-control design seems a valid method, 100% matching is not easy to achieve in the relatively small cohort of a single in-patient clinic. However, a nested case-control study is feasible.

On the basis of the experience gained in the AMSP+ study, and given that only sparse data currently indicate that routine drug plasma concentration monitoring and/or pharmacogenetic testing in psychiatry are justified to minimize the risk of ADR, we developed a test algorithm named "TDM plus" (TDM plus interaction checks plus pharmacogenetic testing).

Pharmacovigilance programs such as the AMSP project (AMSP = Arzneimittelsicherheit in der Psychiatrie) survey psychiatric in-patients in order to collect SADR and to detect new safety signals. Case reports of such SADR, although anecdotal, are valuable to illustrate rare clinical events and sometimes confirm theoretical assumptions of, e.g., drug interactions.
Seven pharmacovigilance case reports are summarized in this thesis.

To provide clinicians with meaningful information on the risk of drug combinations, the internet-based drug interaction program mediQ.ch (in German) was developed during the course of this thesis. Risk estimation is based on published clinical and pharmacological information on single drugs and alimentary products, including adverse drug reaction profiles. Information on risk factors such as renal and hepatic insufficiency and specific genotypes is given. More than 20'000 drug pairs have been described in detail. Over 2000 substances with their metabolic and transport pathways are included, and all information is referenced with links to the published scientific literature or other information sources. Medical professionals of more than 100 hospitals and 300 individual practitioners consult mediQ.ch regularly. Validations with comparisons to other drug interaction programs show good results.

Finally, therapeutic drug monitoring, drug interaction programs and pharmacogenetic tests are helpful tools in pharmacovigilance and should, in the absence of sufficient data supporting routine testing, be used as proposed in our TDM plus algorithm.
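The case-control comparison above reduces to an odds ratio computed from a 2x2 exposure table. A minimal sketch of that computation in Python, with invented cell counts, since the abstract reports only the group sizes and the resulting odds ratio of 3.5:

    import math

    # Hypothetical 2x2 table (the abstract does not report the cell counts):
    # exposure = drug plasma level above 120% of the upper reference range.
    exposed_cases, unexposed_cases = 31, 31        # 62 SADR cases in total
    exposed_controls, unexposed_controls = 18, 64  # 82 controls in total

    # Odds ratio: odds of exposure among cases / odds of exposure among controls.
    odds_ratio = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

    # Approximate 95% confidence interval (Woolf's method) on the log scale.
    se_log_or = math.sqrt(1/exposed_cases + 1/unexposed_cases +
                          1/exposed_controls + 1/unexposed_controls)
    ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")

With these invented counts the interval excludes 1, which is what "statistically significant" means for an odds ratio.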
Abstract:
For well over 100 years, the Working Stress Design (WSD) approach has been the traditional basis for geotechnical design with regard to settlements or failure conditions. However, considerable effort has been put forth over the past couple of decades in relation to the adoption of the Load and Resistance Factor Design (LRFD) approach into geotechnical design. With the goal of producing engineered designs with consistent levels of reliability, the Federal Highway Administration (FHWA) issued a policy memorandum on June 28, 2000, requiring all new bridges initiated after October 1, 2007, to be designed according to the LRFD approach. Likewise, regionally calibrated LRFD resistance factors were permitted by the American Association of State Highway and Transportation Officials (AASHTO) to improve the economy of bridge foundation elements. Thus, projects TR-573, TR-583 and TR-584 were undertaken by a research team at Iowa State University's Bridge Engineering Center with the goal of developing resistance factors for pile design using available pile static load test data. To accomplish this goal, the available data were first analyzed for reliability and then placed in a newly designed relational database management system termed PIle LOad Tests (PILOT), to which this first volume of the final report for project TR-573 is dedicated. PILOT is an amalgamated, electronic source of information consisting of both static and dynamic data for pile load tests conducted in the State of Iowa. The database, which includes historical data on pile load tests dating back to 1966, is intended for use in the establishment of LRFD resistance factors for design and construction control of driven pile foundations in Iowa. Although a considerable amount of geotechnical and pile load test data is available in the literature as well as in various State Department of Transportation files, PILOT is one of the first regional databases to be used exclusively in the development of LRFD resistance factors for the design and construction control of driven pile foundations. Currently providing an electronically organized assimilation of geotechnical and pile load test data for 274 piles of various types (e.g., steel H-shaped, timber, pipe, Monotube, and concrete), PILOT (http://srg.cce.iastate.edu/lrfd/) is on par with familiar national databases used in the calibration of LRFD resistance factors for pile foundations, such as the FHWA's Deep Foundation Load Test Database. By narrowing geographical boundaries while maintaining a high number of pile load tests, PILOT exemplifies a model for effective regional LRFD calibration procedures.
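The abstract describes PILOT only at the level of a relational database management system. A minimal sketch of what a relational schema for pile load test data might look like, using SQLite via Python; the table and column names are invented for illustration, not PILOT's actual design:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE pile (
        pile_id            INTEGER PRIMARY KEY,
        pile_type          TEXT NOT NULL,   -- e.g. 'steel H-shaped', 'timber'
        county             TEXT,
        embedded_length_ft REAL
    );
    CREATE TABLE static_load_test (
        test_id            INTEGER PRIMARY KEY,
        pile_id            INTEGER NOT NULL REFERENCES pile(pile_id),
        test_date          TEXT,            -- ISO date; earliest records 1966
        failure_load_kips  REAL             -- measured capacity
    );
    """)

    conn.execute("INSERT INTO pile VALUES (1, 'steel H-shaped', 'Story', 55.0)")
    conn.execute("INSERT INTO static_load_test VALUES (1, 1, '1966-05-12', 180.0)")

    # A typical calibration-style query: measured capacities grouped by pile type.
    for row in conn.execute("""
        SELECT p.pile_type, COUNT(*), AVG(t.failure_load_kips)
        FROM pile p JOIN static_load_test t ON t.pile_id = p.pile_id
        GROUP BY p.pile_type
    """):
        print(row)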
Abstract:
This paper advocates the adoption of a mixed-methods research design to describe and analyze ego-centered social networks in transnational family research. Drawing on the experience of the Social Networks Influences on Family Formation project (2004-2005), I show how the combined use of network generators and semistructured interviews (N = 116) produces unique data on family configurations and their impact on life course choices. A mixed-methods network approach presents specific advantages for research on children in transnational families. On the one hand, quantitative analyses are crucial for reconstructing and measuring the potential and actual relational support available to children in a context where kin interactions may be hindered by temporary and prolonged periods of separation. On the other hand, qualitative analyses can address strategies and practices employed by families to maintain relationships across international borders and geographic distance, as well as the implications of those strategies for children's well-being.
Abstract:
Currently, individuals including designers, contractors, and owners learn about project requirements by studying a combination of paper and electronic copies of the construction documents, including the drawings, specifications (standard and supplemental), road and bridge standard drawings, design criteria, contracts, addenda, and change orders. This can be a tedious process, since one needs to go back and forth between the various documents (paper or electronic) to obtain information about the entire project. Object-oriented computer-aided design (OO-CAD) is an innovative technology that can change this process through the graphical portrayal of information. OO-CAD allows users to point and click on portions of an object-oriented drawing that are then linked to relevant databases of information (e.g., specifications, procurement status, and shop drawings). The vision of this study is to turn paper-based design standards and construction specifications into an object-oriented design and specification (OODAS) system, or a visual electronic reference library (ERL). Individuals can use the system through a handheld wireless book-sized laptop that includes all of the necessary software for operating in a 3D environment. All parties involved in transportation projects can access all of the standards and requirements simultaneously using a 3D graphical interface. By using this system, users will have all of the design elements and all of the specifications readily available without concerns of omissions. A prototype object-oriented model was created and demonstrated to potential users representing counties, cities, and the state. Findings suggest that a system like this could improve the productivity of finding information by as much as 75% and provide a greater sense of confidence that all relevant information has been identified. It was also apparent that this system would be used by more people in construction than in design. There was also concern about the cost of developing and maintaining the complete system. Future work should focus on a project-based system that can help contractors and DOT inspectors find information (e.g., road standards, specifications, instructional memorandums) more rapidly as it pertains to a specific project.
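To make the point-and-click linking idea concrete, a minimal sketch of how a drawing object might carry references into specification and procurement databases; all class names and document references are invented for illustration:

    from dataclasses import dataclass, field

    # Sketch of the OO-CAD idea above: each drawing element carries links into
    # the relevant reference documents, so clicking the element can pull up the
    # specifications directly instead of searching paper copies.
    @dataclass
    class DrawingObject:
        name: str
        links: dict = field(default_factory=dict)  # document type -> reference

    bridge_girder = DrawingObject(
        name="Girder G3",
        links={
            "specification": "Standard Spec 2404 (structural concrete)",
            "standard_drawing": "Road & bridge standard drawing H-24",
            "shop_drawing": "Fabricator submittal #117",
        },
    )

    def on_click(obj: DrawingObject) -> None:
        # In a real system this would open the linked documents in a viewer.
        print(f"{obj.name}:")
        for doc_type, ref in obj.links.items():
            print(f"  {doc_type}: {ref}")

    on_click(bridge_girder)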
Abstract:
Final degree project (TFC) report analyzing the SQL:1999 standard and comparing it with PostgreSQL and Oracle.
Abstract:
Information concerning standard design practices and details for the Iowa Department of Transportation (IDOT) was provided to the research team. This was reviewed in detail so that the researchers would be familiar with the terminology and standard construction details. A comprehensive literature review was completed to gather information concerning constructability concepts applicable to bridges. It was determined that most of the literature deals with constructability as a general topic with only a limited amount of literature with specific concepts for bridge design and construction. Literature was also examined concerning the development of appropriate microcomputer databases. These activities represent completion of Task 1 as identified in the study.
Abstract:
This project proposes a preliminary architectural design for a control and data processing center, also known as 'ground segment', for Earth observation satellites.
Abstract:
The Nokia Push To Talk system provides a new communication method alongside the ordinary phone call. One of the most important properties of the new system is the speed of call setup. In addition, the system must follow the general principles of telecommunication systems and be as stable and scalable as possible, so that it is maximally fault-tolerant and extensible. The main goal of this Master's thesis is to present the design and testing of C++ database libraries. First, the problems of database systems are examined, starting from the choice of the database system and paying particular attention to speed criteria. Two technical implementations of two C++ database libraries are then presented, and some alternative implementation approaches are discussed.
Abstract:
Polyphenols are a major class of bioactive phytochemicals whose consumption may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, type II diabetes and cancers. Phenol-Explorer, launched in 2009, is the only freely available web-based database on the content of polyphenols in food and their in vivo metabolism and pharmacokinetics. Here we report the third release of the database (Phenol-Explorer 3.0), which adds data on the effects of food processing on polyphenol contents in foods. Data on >100 foods, covering 161 polyphenols or groups of polyphenols before and after processing, were collected from 129 peer-reviewed publications and entered into new tables linked to the existing relational design. The effect of processing on polyphenol content is expressed in the form of retention factor coefficients, or the proportion of a given polyphenol retained after processing, adjusted for change in water content. The result is the first database on the effects of food processing on polyphenol content and, following the model initially defined for Phenol-Explorer, all data may be traced back to original sources. The new update will allow polyphenol scientists to more accurately estimate polyphenol exposure from dietary surveys. Database URL: http://www.phenol-explorer.eu
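The retention factor defined above can be made concrete with a small sketch; the exact convention used by Phenol-Explorer may differ, and all numbers here are invented:

    # Sketch of a retention-factor calculation as described above.
    def retention_factor(content_raw, content_processed, yield_factor):
        """Proportion of a polyphenol retained after processing.

        content_raw / content_processed: mg per 100 g fresh weight.
        yield_factor: weight of processed food obtained per unit weight of raw
        food, which captures water loss or gain (e.g. 0.9 if 100 g raw yields
        90 g cooked).
        """
        return (content_processed * yield_factor) / content_raw

    # Invented example: boiling loses water (yield 0.9) and leaches polyphenols.
    rf = retention_factor(content_raw=50.0, content_processed=30.0, yield_factor=0.9)
    print(f"retention factor = {rf:.2f}")  # 0.54: ~54% retained after processing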
Abstract:
Tourism is one of the biggest industry branches, with billions of tourists traveling every year around the world. Therefore, solutions providing tourist information have to keep up with both changes in the industry and the world's technological progress. The aim of this thesis is to present a design and a prototype of a tourist mobile service which is individual-oriented, cost-free for the end user, and secure. On the information providers' side, the solution is implemented as a Web-based database. The end users access the information through a Bluetooth application on their mobile devices. The Bluetooth-based solution avoids any costs for the end users, that is, tourists. The study shows that, even with small data transfers, tourists could save significantly compared to possible roaming charges for data transfer. Also, the proposed mobile service is not intrusive, as it is provided through an application installed by tourists voluntarily on their mobile devices. Through design and implementation, this work shows that it is possible to build a system which can be used to provide information services to tourists through mobile phones. The work achieved successful ongoing synchronization between the client and server databases. Implementation and usage were limited to smartphones only, as they provide better technological support for the solution, with features like maps, GPS, Wi-Fi, Bluetooth and databases. Moreover, the design of this system shows how Bluetooth technology can be used effectively as a means of communication while minimizing its shortcomings and risks, such as security, by bypassing the Bluetooth server's service discovery protocol (SDP) and connecting directly to the device. Apart from showing the design and implementation of the end-user cost-free mobile information service, the results of this work also highlight possible business opportunities for the provider of the service.
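A minimal sketch of the direct-connection idea using the PyBluez library; the device address, channel number and application protocol are placeholders, since the abstract does not give implementation details:

    import bluetooth  # PyBluez

    # Connect directly to a known RFCOMM channel, skipping the SDP lookup step
    # as described above. Address and channel are placeholders; in a deployed
    # service they would be fixed in advance by the information provider.
    DEVICE_ADDR = "00:11:22:33:44:55"  # placeholder Bluetooth MAC address
    RFCOMM_CHANNEL = 1                 # pre-agreed channel, so no SDP query

    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    try:
        sock.connect((DEVICE_ADDR, RFCOMM_CHANNEL))
        sock.send(b"GET /pois/nearby\n")   # hypothetical application protocol
        print(sock.recv(1024).decode())
    finally:
        sock.close()

Skipping SDP both shortens connection setup and removes one avenue of service enumeration by unknown devices, which is the security point made above.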
Abstract:
This work is aimed at building an adaptable frame-based system for processing Dravidian languages. There are about 17 languages in this family, spoken by the people of South India. Karaka relations are one of the most important features of Indian languages. They are the semantico-syntactic relations between verbs and other related constituents in a sentence. The karaka relations and surface case endings are analyzed for meaning extraction. This approach is comparable with the broad class of case-based grammars. The efficiency of this approach is put to the test in two applications: one is machine translation and the other is a natural language interface (NLI) for information retrieval from databases. The system mainly consists of a morphological analyzer, a local word grouper, a parser for the source language and a sentence generator for the target language. This work makes several contributions: it gives an elegant account of the relation between vibhakthi and karaka roles in Dravidian languages; the mapping is elegant and compact; and the same basic mechanism explains both simple and complex sentences in these languages. This suggests that the solution is not just ad hoc but has a deeper underlying unity. The methodology could be extended to other free word order languages. Since the frames designed for meaning representation are general, they are adaptable to the other languages in this group and to other applications.
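A minimal sketch of the vibhakthi-to-karaka mapping idea described above; the case inventory is simplified to standard Paninian correspondences, and the analyzer output is hypothetical:

    # Illustrative vibhakthi (case) -> karaka (role) mapping. The suffix
    # analysis itself is omitted: a real morphological analyzer for a
    # Dravidian language must handle sandhi, allomorphy and word grouping.
    VIBHAKTHI_TO_KARAKA = {
        "nominative":   "karta (agent)",
        "accusative":   "karma (object)",
        "instrumental": "karana (instrument)",
        "dative":       "sampradana (recipient)",
        "locative":     "adhikarana (locus)",
    }

    def assign_karaka(tokens):
        """Assign a karaka role to each (word, case) pair from the analyzer."""
        return [(word, VIBHAKTHI_TO_KARAKA.get(case, "unmapped"))
                for word, case in tokens]

    # Hypothetical analyzer output for "Raman cut the tree with an axe".
    analyzed = [("Raman", "nominative"), ("tree", "accusative"), ("axe", "instrumental")]
    for word, role in assign_karaka(analyzed):
        print(f"{word}: {role}")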
On Implementing Joins, Aggregates and Universal Quantifier in Temporal Databases using SQL Standards
Abstract:
A feasible way of implementing a temporal database is to map the temporal data model onto a conventional data model backed by a commercial database management system. Even though extensions to standard SQL have been proposed for supporting temporal databases, such proposals have not yet passed through the standardization process. This paper attempts to implement database operators such as aggregates and the universal quantifier for temporal databases, implemented on top of relational database systems, using currently available SQL standards.
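A minimal sketch of the standard-SQL idiom for a universal quantifier, double negation with NOT EXISTS, run here on SQLite; the table names and data are invented, and the temporal version the paper proposes would additionally intersect the valid-time intervals stored with each row:

    import sqlite3

    # SQL has no FORALL, so universal quantification ("assigned to ALL
    # projects") is expressed as double negation with NOT EXISTS.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE project(pid INTEGER PRIMARY KEY);
    CREATE TABLE assignment(emp TEXT, pid INTEGER,
                            valid_from TEXT, valid_to TEXT);
    INSERT INTO project VALUES (1), (2);
    INSERT INTO assignment VALUES
      ('alice', 1, '2020-01-01', '2020-12-31'),
      ('alice', 2, '2020-01-01', '2020-12-31'),
      ('bob',   1, '2020-01-01', '2020-12-31');
    """)

    # No project exists without a matching assignment for that employee.
    rows = conn.execute("""
        SELECT DISTINCT a.emp
        FROM assignment a
        WHERE NOT EXISTS (
            SELECT 1 FROM project p
            WHERE NOT EXISTS (
                SELECT 1 FROM assignment a2
                WHERE a2.emp = a.emp AND a2.pid = p.pid
            )
        )
    """).fetchall()
    print(rows)  # [('alice',)]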
Abstract:
A GIS has been designed with limited functionalities, but with a novel approach in its design. The spatial data model adopted in the design of KBGIS is the unlinked vector model. Each map entity is encoded separately in vector form, without referencing any of its neighbouring entities. Spatial relations, in other words, are not encoded. This approach is adequate for routine analysis of geographic data represented on a planar map, and for their display (pages 105-106). Even though spatial relations are not encoded explicitly, they can be extracted through specially designed queries. This work was undertaken as an experiment to study the feasibility of developing a GIS using a knowledge base in place of a relational database. The source of input spatial data was accurate sheet maps that were manually digitised. Each identifiable geographic primitive was represented as a distinct object, with its spatial properties and attributes defined. Composite spatial objects, made up of primitive objects, were formulated based on production rules defining such compositions. The facts and rules were then organised into a production system using OPS5.
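A minimal sketch of the unlinked vector idea: each entity stores only its own geometry, and a spatial relation such as containment is derived by a query rather than stored in the data model. The geometry and entity names are invented:

    # Spatial relations are computed on demand from per-entity geometry.
    def point_in_polygon(pt, polygon):
        """Ray-casting test: is point pt inside the polygon (vertex list)?"""
        x, y = pt
        inside = False
        for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
        return inside

    entities = {
        "well_7":   ("point",   (2.0, 2.0)),
        "parcel_A": ("polygon", [(0, 0), (5, 0), (5, 5), (0, 5)]),
    }

    # Query: which polygons contain well_7? The relation is derived, not stored.
    _, well = entities["well_7"]
    for name, (kind, geom) in entities.items():
        if kind == "polygon" and point_in_polygon(well, geom):
            print(f"well_7 lies inside {name}")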
Abstract:
Study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. The huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics.

For an observer on earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information.

Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.

There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (not assuming any statistical model, such as Gaussian) methods. Many of the parametric methods are based on variations of discrete Fourier transforms, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be brought under automation, none of them can fully recover the true periods. Wrong period detection can be due to several reasons, such as power leakage to other frequencies, which arises from the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence, obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "the processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".

It would be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
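A minimal sketch of phase folding and the phase-dispersion idea behind PDM (Stellingwerf 1978), one of the non-parametric methods mentioned above, on synthetic unevenly sampled data and with simplified binning:

    import numpy as np

    # Synthetic, unevenly sampled light curve. A real analysis must also
    # handle the aliasing, long gaps and measurement errors discussed above.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 100, 400))      # observation times (days)
    true_period = 2.5
    mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) \
               + rng.normal(0, 0.02, t.size)

    def pdm_statistic(t, mag, period, nbins=10):
        """Within-bin variance over total variance; minimal near the true period."""
        phase = (t / period) % 1.0             # fold the light curve on the period
        bins = np.floor(phase * nbins).astype(int)
        s2 = sum(mag[bins == b].var() * (bins == b).sum()
                 for b in range(nbins) if (bins == b).sum() > 1)
        return s2 / (mag.size * mag.var())

    trial_periods = np.linspace(1.0, 5.0, 2000)
    theta = [pdm_statistic(t, mag, p) for p in trial_periods]
    best = trial_periods[int(np.argmin(theta))]
    print(f"recovered period = {best:.3f} d (true: {true_period} d)")

At the true period the folded curve is smooth, so the variance within each phase bin is small relative to the total variance; wrong trial periods scramble the phases and push the statistic toward 1.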