116 results for MySQL


Relevance:

20.00%

Publisher:

Abstract:

The search for more reliable security systems, and for better management of the information those systems produce, is driving growing investment in technologies that combine a high level of reliability with agile, practical operation. As a result, people increasingly turn to automation systems for homes, enterprises, and industry to automate and integrate their processes. Radio-frequency identification (RFID) is widespread today because it offers both agility in handling recorded data and highly reliable identification systems that are increasingly advanced and less susceptible to fraud. Alongside this technology, a database is essential for storing the collected information, an area where the MySQL platform is widely used. Using the open-source Arduino platform to program and operate the RFID module, and LabVIEW software to integrate these technologies and provide a user-friendly interface, it is possible to build a highly reliable and agile access control system for locations with a high turnover of people. This project aims to demonstrate the advantages of using all these technologies together, improving on flawed security systems with a solution that is effective, cheaper, and quicker.
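A minimal sketch of the database side of such an access-control system: an RFID tag read is checked against an authorized list and the attempt is logged. The table layout and tag UIDs are assumptions, and sqlite3 stands in for MySQL here so the sketch is self-contained (with `mysql-connector-python` the SQL and API would be essentially the same):

```python
import sqlite3
from datetime import datetime, timezone

# sqlite3 stands in for the MySQL database described in the abstract;
# the schema and tag UIDs are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE access_log (
        id      INTEGER PRIMARY KEY,
        tag_uid TEXT NOT NULL,
        granted INTEGER NOT NULL,
        read_at TEXT NOT NULL
    )
""")

AUTHORIZED_TAGS = {"04A1B2C3", "04D5E6F7"}  # hypothetical RFID UIDs

def handle_tag_read(tag_uid: str) -> bool:
    """Decide whether access is granted and log the attempt either way."""
    granted = tag_uid in AUTHORIZED_TAGS
    conn.execute(
        "INSERT INTO access_log (tag_uid, granted, read_at) VALUES (?, ?, ?)",
        (tag_uid, int(granted), datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    return granted

print(handle_tag_read("04A1B2C3"))  # authorized tag
print(handle_tag_read("DEADBEEF"))  # unknown tag is logged but refused
```

In the full system described above, the Arduino would read the tag and LabVIEW would call a routine like `handle_tag_read` and drive the user interface.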

Relevance:

20.00%

Publisher:

Abstract:

Performance analysis of a database built with MongoDB and one built with MySQL, hosted on two identical virtual machines configured specifically for insertion, query, and deletion tests.
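The insert/query/delete comparison described above can be sketched as a simple timing harness. The table layout is an assumption, and sqlite3 stands in for either engine so the sketch is runnable as-is; against MySQL or MongoDB only the connection and statement calls would change:

```python
import sqlite3
import time

def bench(n: int = 1000) -> dict:
    """Time bulk insertion, point queries, and deletion on a toy table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
    timings = {}

    t0 = time.perf_counter()
    conn.executemany(
        "INSERT INTO t (id, payload) VALUES (?, ?)",
        ((i, f"row-{i}") for i in range(n)),
    )
    conn.commit()
    timings["insert"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(0, n, 10):  # sample every tenth row with a point query
        conn.execute("SELECT payload FROM t WHERE id = ?", (i,)).fetchone()
    timings["query"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    conn.execute("DELETE FROM t")
    conn.commit()
    timings["delete"] = time.perf_counter() - t0
    return timings

print(bench())
```

Running the same harness against both engines on identically configured machines, as the thesis describes, makes the per-operation timings directly comparable.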

Relevance:

20.00%

Publisher:

Abstract:

Slides and material for the class on the MySQL catalog.

Relevance:

20.00%

Publisher:

Abstract:

Exercises on the MySQL catalog.

Relevance:

20.00%

Publisher:

Abstract:

Description of the different types of MySQL storage engines.

Relevance:

20.00%

Publisher:

Abstract:

Slides on MySQL index management.

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the advantages of database-backed websites and describes the model for a library website implemented at the University of Nottingham using open source software, PHP and MySQL. As websites continue to grow in size and complexity it becomes increasingly important to introduce automation to help manage them. It is suggested that a database-backed website offers many advantages over one built from static HTML pages. These include a consistency of style and content, the ability to present different views of the same data, devolved editing and enhanced security. The University of Nottingham Library Services website is described and issues surrounding its design, technological implementation and management are explored.
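One way to see the "different views of the same data" advantage claimed above: a single table of pages can drive both a site-wide listing and a per-section view, so style and content stay consistent. A minimal sketch, with an assumed schema and sqlite3 standing in for the MySQL back end (the article's actual implementation used PHP and MySQL):

```python
import sqlite3

# sqlite3 stands in for MySQL; the pages schema is a simplified assumption.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (title TEXT, section TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO pages VALUES (?, ?, ?)",
    [
        ("Opening hours", "services", "..."),
        ("Borrowing", "services", "..."),
        ("E-journals", "collections", "..."),
    ],
)

def site_map() -> list:
    """One view: every page on the site, grouped by section."""
    return conn.execute(
        "SELECT section, title FROM pages ORDER BY section, title"
    ).fetchall()

def section_view(section: str) -> list:
    """A second view of the same rows: the pages of one section only."""
    return conn.execute(
        "SELECT title FROM pages WHERE section = ? ORDER BY title", (section,)
    ).fetchall()

print(site_map())
print(section_view("services"))
```

Because both views render from the same rows, an edit to one page record updates every page that displays it, which is the consistency argument the paper makes.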

Relevance:

10.00%

Publisher:

Abstract:

As one of the first institutional repositories in Australia, and the first in the world to have an institution-wide deposit mandate, QUT ePrints has great 'brand recognition' within the University (Queensland University of Technology) and beyond. The repository is managed by the Library but, over the years, the Library's repository team has worked closely with other departments (especially the Office of Research and IT Services) to ensure that QUT ePrints is embedded into the business processes and systems our academics use regularly. For example, the repository is the source of the publication information displayed on each academic's Staff Profile page. The repository pulls in citation data from Scopus and Web of Science and displays it in the publication records. Researchers can monitor their citations at a glance via the repository 'View' that displays all their publications. A trend in recent years has been to populate institutional repositories with publication details imported from the university's research information system (RIS). The main advantage of the RIS-to-repository workflow is that it requires little input from the academics, as the publication details are often imported into the RIS from publisher databases. Sadly, this is also its main disadvantage. Generally, only the metadata is imported from the RIS, and the lack of engagement by the academics results in a very low proportion of records with open access full texts. Consequently, while we could see the value of integrating the two systems, we were determined to make the repository the entry point for publication data. In 2011, the University funded a project to convert a number of paper-based processes into web-based workflows. This included a workflow to replace the paper forms academics had to complete to report new publications (which were later used by data entry staff to input the details into the RIS).
Publication details and full-text files are uploaded to the repository by the academics or their nominees. Each night, the repository (QUT ePrints) pushes the metadata for new publications into a holding table. The data is checked by Office of Research staff the next day and then 'imported' into the RIS. Publication details (including the repository URLs) are pushed from the RIS to the Staff Profiles system. Previously, academics were required to supply the Office of Research with photocopies of their publications (for verification/auditing purposes). The repository is now the source of verification information. Library staff verify the accuracy of the publication details and, where applicable, the peer-review status of the work. The verification metadata is included in the information passed to the Office of Research. The RIS at QUT comprises two separate systems built on an Oracle database: a proprietary product (ResearchMaster) plus a locally produced system known as RAD (Research Activity Database). The repository platform is EPrints, which is built on a MySQL database. This partly explains why the data is passed from one system to the other via a holding table. The new workflow went live in early April 2012. Tests of the technical integration have all been successful. At the end of the first 12 months, the impact of the new workflow on the proportion of full texts deposited will be evaluated.
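The nightly push from the repository into the holding table can be sketched roughly as follows. All table and column names here are invented for illustration, and two sqlite3 in-memory databases stand in for the EPrints MySQL database and the Oracle-side holding table:

```python
import sqlite3

# Stand-ins for the EPrints (MySQL) database and the RIS holding table;
# schemas and names are illustrative assumptions, not the QUT implementation.
eprints = sqlite3.connect(":memory:")
eprints.execute(
    "CREATE TABLE publications (eprint_id INTEGER, title TEXT, exported INTEGER)"
)
eprints.executemany(
    "INSERT INTO publications VALUES (?, ?, 0)",
    [(1, "Paper A"), (2, "Paper B")],
)

holding = sqlite3.connect(":memory:")
holding.execute("CREATE TABLE holding (eprint_id INTEGER, title TEXT)")

def nightly_push() -> int:
    """Copy not-yet-exported records into the holding table, then mark them."""
    rows = eprints.execute(
        "SELECT eprint_id, title FROM publications WHERE exported = 0"
    ).fetchall()
    holding.executemany("INSERT INTO holding VALUES (?, ?)", rows)
    eprints.execute("UPDATE publications SET exported = 1 WHERE exported = 0")
    eprints.commit()
    holding.commit()
    return len(rows)

print(nightly_push())  # first run copies both pending records
print(nightly_push())  # second run finds nothing new
```

A holding table like this decouples the two systems: Office of Research staff can review the staged rows before anything reaches the RIS, which matches the checked-then-imported step described above.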

Relevance:

10.00%

Publisher:

Abstract:

Motivation: Extracellular vesicles (EVs) are spherical bilayered proteolipids harboring various bioactive molecules. Due to the complexity of the vesicular nomenclatures and components, online searches for EV-related publications and vesicular components are currently challenging.
Results: We present an improved version of EVpedia, a public database for EV research. This community web portal contains a database of publications and vesicular components, identification of orthologous vesicular components, bioinformatic tools and a personalized function. EVpedia includes 6879 publications and 172,080 vesicular components from 263 high-throughput datasets, and has been accessed more than 65,000 times from more than 750 cities. In addition, about 350 members from 73 international research groups have participated in developing EVpedia. This free web-based database might serve as a useful resource to stimulate the emerging field of EV research.
Availability and implementation: The web site was implemented in PHP, Java, MySQL and Apache, and is freely available at http://evpedia.info.

Relevance:

10.00%

Publisher:

Abstract:

Large-scale gene discovery has been performed for the grass fungal endophytes Neotyphodium coenophialum, Neotyphodium lolii, and Epichloë festucae. The resulting sequences have been annotated by comparison with public DNA and protein sequence databases and using intermediate gene ontology annotation tools. Endophyte sequences have also been analysed for the presence of simple sequence repeat and single nucleotide polymorphism molecular genetic markers. Sequences and annotation are maintained within a MySQL database that may be queried using a custom web interface. Two cDNA-based microarrays have been generated from this genome resource. They permit the interrogation of 3806 Neotyphodium genes (NchipTM microarray), and 4195 Neotyphodium and 920 Epichloë genes (EndoChipTM microarray), respectively. These microarrays provide tools for high-throughput transcriptome analysis, including genome-specific gene expression studies, profiling of novel endophyte genes, and investigation of the host grass–symbiont interaction. Comparative transcriptome analysis in Neotyphodium and Epichloë was performed.

Relevance:

10.00%

Publisher:

Abstract:

The role of lectins in mediating cancer metastasis, apoptosis as well as various other signaling events has been well established in the past few years. Data on various aspects of the role of lectins in cancer is being accumulated at a rapid pace. The data on lectins available in the literature is so diverse, that it becomes difficult and time-consuming, if not impossible to comprehend the advances in various areas and obtain the maximum benefit. Not only do the lectins vary significantly in their individual functional roles, but they are also diverse in their sequences, structures, binding site architectures, quaternary structures, carbohydrate affinities and specificities as well as their potential applications. An organization of these seemingly independent data into a common framework is essential in order to achieve effective use of all the data towards understanding the roles of different lectins in different aspects of cancer and any resulting applications. An integrated knowledge base (CancerLectinDB) together with appropriate analytical tools has therefore been developed for lectins relevant for any aspect of cancer, by collating and integrating diverse data. This database is unique in terms of providing sequence, structural, and functional annotations for lectins from all known sources in cancer and is expected to be a useful addition to the number of glycan related resources now available to the community. The database has been implemented using MySQL on a Linux platform and web-enabled using Perl-CGI and Java tools. Data for individual lectins pertain to taxonomic, biochemical, domain architecture, molecular sequence and structural details as well as carbohydrate specificities. Extensive links have also been provided for relevant bioinformatics resources and analytical tools. Availability of diverse data integrated into a common framework is expected to be of high value for various studies on lectin cancer biology.

Relevance:

10.00%

Publisher:

Abstract:

Background: Haemophilus influenzae (H. influenzae) is the causative agent of pneumonia, bacteraemia and meningitis. The organism is responsible for a large number of deaths in both developed and developing countries. Even though the first bacterial genome to be sequenced was that of H. influenzae, there is no database dedicated exclusively to H. influenzae. This prompted us to develop the Haemophilus influenzae Genome Database (HIGDB).
Methods: All HIGDB data are stored and managed in a MySQL database. The HIGDB is hosted on a Solaris server and developed using PERL modules. Ajax and JavaScript are used for the interface development.
Results: The HIGDB contains detailed information on 42,741 proteins and 18,077 genes, including 10 whole genome sequences, as well as 284 three-dimensional structures of H. influenzae proteins. In addition, the database provides "Motif search" and "GBrowse". The HIGDB is freely accessible through the URL: http://bioserver1.physics.iisc.ernet.in/HIGDB/.
Discussion: The HIGDB will be a single point of access for bacteriological, clinical, genomic and proteomic information on H. influenzae. The database can also be used to identify DNA motifs within H. influenzae genomes and to compare gene or protein sequences of a particular strain with other strains of H. influenzae. (C) 2014 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

This project develops a web application for managing the data obtained from the tests performed on synchronous generators at the test bench. The aim of the project is to store these data in a database so that they are centralized for later analysis. To carry out this task, technologies such as PHP, MySQL, JavaScript and Ajax have been used.

Relevance:

10.00%

Publisher:

Abstract:

Development of a system for retrieving and storing the multilingual news items that appear in the Europe Media Monitor.