973 results for Programmazione, PHP, C, Web, Server, Database, MySQL
Resumo:
Introduction – Based on a previous project of the University of Lisbon (UL) – a bibliometric benchmarking analysis of the University of Lisbon for the period 2000-2009 – a database was created to support research information (ULSR). However, this system was not integrated with other existing systems at the University, such as the UL Libraries Integrated System (SIBUL) and the Repository of the University of Lisbon (Repositório.UL). Since the libraries were called to be part of the process, the Faculty of Pharmacy Library's team felt it was very important to get all systems connected or, at least, to use that data in the library systems. Objectives – The main goals were to centralize all the scientific research produced at the Faculty of Pharmacy, make it available to the entire Faculty, involve researchers and the library team, capitalize on and reinforce teamwork by integrating several distinct projects, and reduce task redundancy. Methods – Our starting point was the data collection imported from the ISI Web of Science (WoS), for the period 2000-2009, into ULSR. All the researchers and publications indexed in WoS were identified. A first validation identified all the researchers and their affiliations (university, faculty, department and unit); the final validation was done by each researcher. In a second round, covering the same period, all Faculty of Pharmacy researchers identified their published scientific work in other databases/resources (not WoS). For our strategy it was important to gather all the references, and essential to relate them to the corresponding digital objects. Each previously identified researcher was asked to register at ULSR all the references of their works published outside WoS and, at the same time, to submit the PDF files (for both WoS and non-WoS works) in a personal area of the web server.
This effort enabled a more reliable validation and the preparation of the data and metadata to be imported into the Repository and the Library Catalogue. Results – 558 documents related to 122 researchers were added to ULSR. 1378 bibliographic records (WoS + non-WoS) were converted into UNIMARC and Dublin Core formats. All records were integrated into the catalogue and the repository. Conclusions – Although different strategies could be adopted by each library team, we intend to share this experience and give some tips on what could be done and how the Faculty of Pharmacy created and implemented its strategy.
Resumo:
The binding between antigenic peptides (epitopes) and the MHC molecule is a key step in the cellular immune response. Accurate in silico prediction of epitope-MHC binding affinity can greatly expedite epitope screening by reducing costs and experimental effort. Recently, we demonstrated the appealing performance of SVRMHC, an SVR-based quantitative modeling method for peptide-MHC interactions, when applied to three mouse class I MHC molecules. Subsequently, we have greatly extended the construction of SVRMHC models and have established such models for more than 40 class I and class II MHC molecules. Here we present the SVRMHC web server for predicting peptide-MHC binding affinities using these models. Benchmarked percentile scores are provided for all predictions. The larger number of SVRMHC models available allowed for an updated evaluation of the performance of the SVRMHC method compared to other well-known linear modeling methods. SVRMHC is an accurate and easy-to-use prediction server for epitope-MHC binding with significant coverage of MHC molecules. We believe it will prove to be a valuable resource for T cell epitope researchers.
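The benchmarked percentile scores mentioned above can be illustrated with a short sketch. This is not the SVRMHC code: the benchmark values below are hypothetical, and the score simply ranks a raw prediction within a reference distribution of predictions.

```python
from bisect import bisect_left

def percentile_score(prediction, benchmark_scores):
    """Convert a raw predicted binding affinity into a percentile
    relative to a benchmark distribution of predictions
    (0 = weakest, 100 = strongest binder in the benchmark set)."""
    ranked = sorted(benchmark_scores)
    rank = bisect_left(ranked, prediction)
    return 100.0 * rank / len(ranked)

# Hypothetical benchmark distribution of predicted -log10(IC50) values.
benchmark = [4.2, 5.0, 5.1, 5.8, 6.3, 6.9, 7.4, 7.7, 8.0, 8.5]
print(percentile_score(6.5, benchmark))  # -> 50.0
```

A percentile presentation like this lets users compare predictions across MHC molecules whose raw affinity scales differ.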
Resumo:
Background: DNA-binding proteins play a pivotal role in various intra- and extracellular activities, ranging from DNA replication to gene expression control. Identification of DNA-binding proteins is one of the major challenges in the field of genome annotation. Several computational methods have been proposed in the literature for DNA-binding protein identification; however, most of them do not provide a valuable knowledge base for our understanding of DNA-protein interactions. Results: We first present a new protein sequence encoding method called PSSM Distance Transformation, and then construct a DNA-binding protein identification method (SVM-PSSM-DT) by combining PSSM Distance Transformation with a support vector machine (SVM). First, PSSM profiles are generated by using the PSI-BLAST program to search the non-redundant (NR) database. Next, the PSSM profiles are transformed into uniform numeric representations by a distance transformation scheme. Lastly, the resulting representations are fed into an SVM classifier for prediction, determining whether or not a sequence binds DNA. In a benchmark test on 525 DNA-binding and 550 non-DNA-binding proteins using jackknife validation, the model achieved an ACC of 79.96%, an MCC of 0.622 and an AUC of 86.50%. This performance is considerably better than that of most existing state-of-the-art predictive methods. When tested on a recently constructed independent dataset, PDB186, SVM-PSSM-DT again achieved the best performance, with an ACC of 80.00%, an MCC of 0.647 and an AUC of 87.40%, outperforming existing state-of-the-art methods. Conclusions: The experimental results demonstrate that PSSM Distance Transformation is an effective protein sequence encoding method and that SVM-PSSM-DT is a useful tool for identifying DNA-binding proteins.
A user-friendly web server for SVM-PSSM-DT was constructed and is freely accessible at http://bioinformatics.hitsz.edu.cn/PSSM-DT/.
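The evaluation metrics reported above (ACC and MCC) are standard quantities computed from a confusion matrix; a minimal sketch on toy labels (the labels are made up for illustration, not data from the study):

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of correctly classified sequences (ACC)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mcc(y_true, y_pred):
    """Matthews correlation coefficient from confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy labels: 1 = DNA-binding, 0 = non-binding.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(accuracy(y_true, y_pred), mcc(y_true, y_pred))
```

MCC is preferred alongside ACC in this kind of benchmark because it stays informative when the binding and non-binding classes are imbalanced.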
Resumo:
The primary purpose of this thesis was to design and develop a prototype e-commerce system in which dynamic parameters are included in the decision-making process and execution of an online transaction. The system developed and implemented takes into account previous usage history, priority and associated engineering capabilities. The system was developed using a three-tiered client-server architecture. The interface was the Internet browser. The middle-tier web server was implemented using Active Server Pages, which form a link between the client system and other servers. A relational database management system formed the data component of the three-tiered architecture. It includes a data warehousing capability, which extracts needed information from the stored customer and order data. The system organizes and analyzes the data generated during a transaction to formulate a model of the client's behavior during and after a transaction. This model is used for decisions such as pricing and order rescheduling during a client's forthcoming transactions. Among other things, the system helps bring predictability to the transaction execution process, which is highly desirable in the current competitive scenario.
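The decision step described above, pricing informed by usage history and priority, might be sketched as follows. The rule, its coefficients and the priority tiers are purely hypothetical illustrations, not the system's actual model:

```python
def quoted_price(base_price, priority, past_orders):
    """Hypothetical pricing rule: higher-priority clients pay a premium
    for guaranteed scheduling; frequent clients earn a volume discount
    capped at 15%."""
    premium = {"low": 0.0, "normal": 0.05, "high": 0.10}[priority]
    discount = min(0.15, 0.01 * past_orders)
    return round(base_price * (1 + premium - discount), 2)

# A high-priority client with 8 past orders: +10% premium, -8% discount.
print(quoted_price(100.0, "high", 8))  # -> 102.0
```

The point of the sketch is the shape of the decision, a deterministic function of parameters mined from the data warehouse, which is what makes transaction outcomes predictable.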
Resumo:
In recent years library budgets have generally stagnated or even tended to shrink, forcing libraries to operate with these decreasing funds. At the same time, the costs of services keep rising, so libraries must adapt to the new environment while trying to limit their expenses. To improve library management within the assigned budget, it is necessary to know the real cost of the library's processes, so that decisions can be made to improve or expand the services provided. Several cost-management techniques address this problem, and one of the most advanced at the time of writing this thesis is Time-Driven Activity-Based Costing (TD-ABC). This model provides essential information about the library's functions, helps us understand the relevant cost drivers, and improves budget allocation. The goal of this degree project is the development of a module that applies the TD-ABC methodology to process management, implemented following the same structure as the library automation system ABCD (Automatización de Bibliotecas y Centros de Documentación) running at the Centro de Documentación Regional "Juan Bautista Vázquez" (CDRJBV). The TD-ABC module was developed on a free-software platform, using the PHP language and a MySQL database, together with the web development tools HTML, CSS, AJAX, JavaScript and the Google API.
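TD-ABC rests on two estimates per activity: the cost per unit of time of the capacity supplied, and the time one unit of the activity consumes (Kaplan and Anderson's formulation). A minimal sketch of the arithmetic, with invented library figures:

```python
def capacity_cost_rate(total_cost, practical_capacity_minutes):
    """TD-ABC's first estimate: cost per minute of supplied capacity."""
    return total_cost / practical_capacity_minutes

def activity_cost(rate, minutes_per_event, events):
    """Cost assigned to an activity: rate x unit time x volume."""
    return rate * minutes_per_event * events

# Hypothetical figures: a circulation team costing $8,400 per month,
# with 28,000 minutes of practical capacity in that month.
rate = capacity_cost_rate(8400, 28000)   # 0.30 $/minute
loans = activity_cost(rate, 2, 5000)     # 2 min per loan, 5,000 loans
print(rate, loans)  # -> 0.3 3000.0
```

The same two estimates, stored per process in the MySQL database, are all a module like the one described needs in order to report activity costs against the assigned budget.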
Resumo:
The world of Computational Biology and Bioinformatics presently integrates many different areas of expertise, including computer science and electronic engineering. A major aim in Data Science is the development and tuning of specific computational approaches to interpret the complexity of Biology. Molecular biologists and medical doctors rely heavily on interdisciplinary experts capable of understanding the biological background and applying algorithms to find optimal solutions to their problems. With this problem-solving orientation, I was involved in two basic research fields: Cancer Genomics and Enzyme Proteomics. What I developed and implemented can therefore be considered a general effort to help data analysis in both Cancer Genomics and Enzyme Proteomics, focusing on enzymes, which catalyse all the biochemical reactions in cells. Specifically, in Cancer Genomics I contributed to the characterization of the intratumoral immune microenvironment in gastrointestinal stromal tumours (GISTs), correlating immune cell population levels with tumour subtypes. I was involved in setting up strategies for the evaluation and standardization of different approaches for fusion transcript detection in sarcomas that can be applied in routine diagnostics; this was part of a coordinated effort of the Sarcoma working group of "Alleanza Contro il Cancro". In Enzyme Proteomics, I generated a derived database collecting all the human proteins and enzymes known to be associated with genetic disease. I curated the data search in freely available databases such as PDB, UniProt, Humsavar and ClinVar, and was responsible for searching, updating and handling the information content, and for computing statistics. I also developed a web server, BENZ, which allows researchers to annotate an enzyme sequence with the corresponding Enzyme Commission number, the key feature fully describing the catalysed reaction. In addition, I contributed substantially to the characterization of the enzyme-genetic disease association, towards a better classification of metabolic genetic diseases.
Resumo:
Background: There has been a proliferation of quality use of medicines activities in Australia since the 1990s. However, knowledge of the nature and extent of these activities was lacking, and a mechanism was required to map the activities and enable their coordination. Aims: To develop a geographical mapping facility as an evaluative tool to assist the planning and implementation of Australia's policy on the quality use of medicines. Methods: A web-based database incorporating geographical mapping software was developed. Quality use of medicines projects implemented across the country were identified from listings of projects funded by the Quality Use of Medicines Evaluation Program, the National Health and Medical Research Council, the Mental Health Strategy, the Rural Health Support, Education and Training Program, the Healthy Seniors Initiative, the General Practice Evaluation Program and the Drug Utilisation Evaluation Network. In addition, projects were identified through direct mail to persons working in the field. Results: The Quality Use of Medicines Mapping Project (QUMMP) was developed, providing a web-based database that can be continuously updated. This database showed the distribution of quality use of medicines activities by: (i) geographical region, (ii) project type, (iii) target group, (iv) stakeholder involvement, (v) funding body and (vi) evaluation method. As of September 2001, the database included 901 projects. Sixty-two per cent of projects had been conducted in Australian capital cities, where approximately 63% of the population reside. The distribution of projects varied between States. In Western Australia and Queensland, 36 and 73 projects had been conducted, respectively, representing approximately two projects per 100 000 people. By comparison, approximately seven projects per 100 000 people were recorded in South Australia and Tasmania, six per 100 000 in Victoria and three per 100 000 in New South Wales.
Rural and remote areas of the country had more limited project activity. Conclusions: The mapping of projects by geographical location enabled easy identification of high and low activity areas. Analysis of the types of projects undertaken in each region enabled identification of target groups that had not been involved or services that had not yet been developed. This served as a powerful tool for policy planning and implementation and will be used to support the continued implementation of Australia's policy on the quality use of medicines.
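The per-capita figures quoted above follow from a simple rate computation; the populations below are rough 2001 approximations used only to illustrate the arithmetic, not figures from the article:

```python
def projects_per_100k(projects, population):
    """Project activity normalized per 100,000 residents."""
    return 100_000 * projects / population

# Assumed approximate 2001 state populations, for illustration only.
wa = projects_per_100k(36, 1_900_000)   # Western Australia
qld = projects_per_100k(73, 3_650_000)  # Queensland
print(round(wa, 1), round(qld, 1))
```

Normalizing by population is what makes states of very different sizes comparable on the map, which is how the low-activity areas were identified.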
Resumo:
This paper presents a low-cost scaled model of a silo for drying and airing cereal grains. It allows several parameters associated with the silo's operation to be controlled and monitored through a remotely accessible infrastructure. The scaled model consists of a 2.50 m wide × 2.10 m long plant, with all control and monitoring capabilities provided by micro web servers. An application running on the micro web servers stores all parameters in a database for later analysis. The implemented model aims to support a remote experimentation facility for technological education, research-oriented tutorials, and industrial applications. Given the low-cost requirement, this remote facility can easily be replicated in other institutions to support a network of remote labs that accommodates concurrent access by several users (e.g. students).
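The parameter-logging side of such a setup can be sketched as follows; the table schema and sensor names are assumptions for illustration, not the actual application running on the micro web servers:

```python
import sqlite3
import time

# Minimal sketch: store timestamped silo parameters for later analysis.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts REAL, sensor TEXT, value REAL)")

def log_reading(sensor, value, ts=None):
    """Append one sensor reading to the database."""
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)",
                 (ts if ts is not None else time.time(), sensor, value))
    conn.commit()

log_reading("grain_temperature_C", 18.4)
log_reading("airflow_m3_h", 120.0)
rows = conn.execute("SELECT sensor, value FROM readings").fetchall()
print(rows)
```

Persisting every reading, rather than only the current state, is what enables the later analysis and the research-oriented tutorials the abstract mentions.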
Resumo:
Final project submitted for the degree of Master in Communication Networks and Multimedia Engineering.
Resumo:
The project conceived for this master's thesis aims to develop a mobile application for the Android system. The application will allow bets to be placed on the Santa Casa Euromilhões game by passing the mobile device over an NFC reader, with the data recorded in an account associated with each player. The application will also offer many other features for creating game entries, managing each player's individual card, and consulting prizes and other game information. The project was carried out in three phases. The first phase consisted of acquiring all the necessary hardware and establishing communication between the mobile device and the desktop via NFC. The second phase focused on developing the mobile application and the web server, into which the various features were integrated; communication between these two systems was also established. In the third and final phase, the desktop application was created, capable of interacting with the mobile application through NFC and thus enabling communication between the two systems.
Resumo:
At the company Unit4 there is a web server coded in Visual Basic that has become outdated and obsolete; the goal is to migrate it to a modern, powerful programming language, remove the software restrictions it currently has, and improve its performance. This project concerns the development of this new server.
Resumo:
Searching for similarities in the genetic codes of two species yields a great deal of information about the evolution of their genomes. This information favours the discovery of genes that are conserved with the same functionality in different species; it also has important medical applications and lets us understand the evolutionary processes that led to today's diversity of species. The present work aims to automate a series of processes on a web application server, http://platypus.uab.cat, that perform, in an optimal and efficient way, all-against-all comparisons of eukaryotic genomes as these genomes are sequenced. These comparisons between the genomes of higher organisms can then be consulted via the web.
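The all-against-all scheduling involved here, where newly sequenced genomes only add the missing pairs rather than recomputing everything, can be sketched with a simple pending-pairs computation (the genome names are placeholders):

```python
from itertools import combinations

def pending_comparisons(genomes, done):
    """All-against-all pairs not yet computed. Adding a new genome
    only adds its pairs; earlier comparison results are reused."""
    return [p for p in combinations(sorted(genomes), 2) if p not in done]

# One comparison already finished; "platypus" has just been sequenced.
done = {("human", "mouse")}
genomes = ["human", "mouse", "platypus"]
print(pending_comparisons(genomes, done))
# -> [('human', 'platypus'), ('mouse', 'platypus')]
```

Keeping the pairs in sorted order means each unordered pair has one canonical key, so completed comparisons are never scheduled twice.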
Resumo:
Java 2 Platform, Enterprise Edition (J2EE), using Spring Web, Hibernate, JavaServer Faces and ICEfaces.
Resumo:
Design of a database to manage the warnings and sanctions imposed on students of all the schools of the Departament d'Ensenyament.
Resumo:
Comparison of the operating systems Windows 7 Professional, Ubuntu 10.10 Desktop and Mac OS X Snow Leopard Desktop with respect to the performance of the Apache web server installed on each of them.