945 results for Database System for Alumni Tracking
Abstract:
Constraints are widely present in flight control problems: actuator saturations and flight envelope limitations are just two examples. The ability of Model Predictive Control (MPC) to handle constraints explicitly, combined with the increased computational power of modern computers, makes this approach attractive also for fast-dynamics systems such as agile air vehicles. This PhD thesis presents the results, achieved at the Aerospace Engineering Department of the University of Bologna in collaboration with the Dutch National Aerospace Laboratories (NLR), concerning the development of a model predictive control system for small-scale rotorcraft UAS. Several predictive architectures were evaluated and tested in simulation; based on this analysis, the most promising one was used to implement three different control systems: a Stability and Control Augmentation System, a trajectory-tracking system and a path-following system. These systems were compared with a corresponding baseline controller and showed several advantages in terms of performance, stability and robustness.
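For reference, a generic discrete-time constrained MPC problem of the kind the thesis builds on can be written as follows (this is a textbook formulation, not one taken from the thesis; the weights Q, R, P and the horizon N are placeholders):

\[
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1}\left(x_k^\top Q\,x_k + u_k^\top R\,u_k\right) + x_N^\top P\,x_N \\
\text{s.t.} \quad & x_{k+1} = A\,x_k + B\,u_k, \qquad x_0 = x(t), \\
& u_{\min} \le u_k \le u_{\max}, \qquad x_k \in \mathcal{X},
\end{aligned}
\]

where the input bounds capture actuator saturation and the state constraint set \(\mathcal{X}\) encodes flight-envelope limits. At each sampling instant the problem is solved over the horizon and only the first input \(u_0\) is applied before re-solving (receding horizon), which is why the computational cost matters for fast-dynamics vehicles such as small rotorcraft.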
Abstract:
Much research has focused on desertification and land degradation assessments without putting sufficient emphasis on prevention and mitigation, although the concept of sustainable land management (SLM) is increasingly being acknowledged. A variety of SLM measures have already been applied at the local level, but they are rarely adequately recognised, evaluated, shared or used for decision support. WOCAT (World Overview of Conservation Approaches and Technologies) has developed an internationally recognised, standardised methodology to document and evaluate SLM technologies and approaches, including their spatial distribution, allowing SLM knowledge to be shared worldwide. The recent integration of this methodology into a participatory process now allows the knowledge to be analysed and used for decision support at the local and national level. The use of the WOCAT tools stimulates evaluation (self-evaluation as well as learning from comparing experiences) within SLM initiatives, where all too often there is not only insufficient monitoring but also a lack of critical analysis. The comprehensive questionnaires and database system make it possible to document, evaluate and disseminate local experiences with SLM technologies and their implementation approaches. This evaluation process - carried out in a team of experts and together with land users - greatly enhances understanding of the reasons behind successful (or failed) local practices. It has now been integrated into a new methodology for appraising and selecting SLM options. The methodology combines a local collective learning and decision approach with the use of evaluated global best practices from WOCAT in a concise three-step process: i) identifying land degradation and locally applied solutions in a stakeholder learning workshop; ii) assessing local solutions with the standardised WOCAT tool; iii) jointly selecting promising strategies for implementation with the help of a decision support tool. The methodology has been implemented in various countries and study sites around the world, mainly within the FAO LADA project (Land Degradation Assessment in Drylands) and the EU-funded DESIRE project. Investments in SLM must be carefully assessed and planned on the basis of properly documented experiences and evaluated impacts and benefits: concerted efforts are needed and sufficient resources must be mobilised to tap the wealth of knowledge and learn from SLM successes.
Abstract:
A tandem mass spectral database system consists of a library of reference spectra and a search program. State-of-the-art search programs show a high tolerance for variability in compound-specific fragmentation patterns produced by collision-induced decomposition and enable sensitive and specific 'identity search'. In this communication, the performance characteristics of two search algorithms combined with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID' (Wiley Registry MSMS, John Wiley and Sons, Hoboken, NJ, USA) were evaluated. The search algorithms tested were the MSMS search algorithm implemented in the NIST MS Search program 2.0g (NIST, Gaithersburg, MD, USA) and the MSforID algorithm (John Wiley and Sons, Hoboken, NJ, USA). Sample spectra were either acquired on different instruments, thus covering a broad range of possible experimental conditions, or generated in silico. For each algorithm, more than 30,000 matches were performed. Statistical evaluation of the library search results revealed that, in principle, both search algorithms can be combined with the Wiley Registry MSMS to create a reliable identification tool. It appears, however, that a higher degree of spectral similarity is necessary to obtain a correct match with the NIST MS Search program. This characteristic of the NIST MS Search program has a positive effect on specificity, as it helps to avoid false positive matches (type I errors), but it reduces sensitivity. Thus, particularly with sample spectra acquired on instruments whose setup differs from tandem-in-space type fragmentation, a comparably higher number of false negative matches (type II errors) was observed when searching the Wiley Registry MSMS.
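As an illustration of the kind of 'identity search' such programs perform, the sketch below compares a sample spectrum against library spectra using a binned cosine (dot-product) score. This is a minimal Python sketch, not the NIST or MSforID algorithm; the 1 Da binning, the 0.7 threshold and the peak lists are assumptions made for the example.

import math

def bin_spectrum(peaks, bin_width=1.0):
    """Sum peak intensities into m/z bins; peaks is a list of (mz, intensity)."""
    binned = {}
    for mz, intensity in peaks:
        key = round(mz / bin_width)
        binned[key] = binned.get(key, 0.0) + intensity
    return binned

def cosine_similarity(spec_a, spec_b):
    """Normalised dot product between two binned spectra (0 = no overlap, 1 = identical)."""
    a, b = bin_spectrum(spec_a), bin_spectrum(spec_b)
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def identity_search(sample, library, threshold=0.7):
    """Rank library entries by similarity and report a match only above the threshold."""
    ranked = sorted(((cosine_similarity(sample, spec), name) for name, spec in library.items()),
                    reverse=True)
    best_score, best_name = ranked[0]
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical peak lists (m/z, relative intensity), for demonstration only.
library = {"compound_A": [(91.05, 100.0), (119.08, 40.0), (165.07, 25.0)],
           "compound_B": [(72.08, 100.0), (116.07, 60.0)]}
sample = [(91.06, 95.0), (119.09, 35.0)]
print(identity_search(sample, library))

Raising the threshold mimics the stricter behaviour described for the NIST MS Search program: fewer false positive matches at the cost of more false negatives.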
Abstract:
The main aim of the methodology presented in this paper is to provide a framework for a participatory process for appraising and selecting options to mitigate desertification and land degradation. This methodology is being developed within the EU project DESIRE (www.desire-project.eu/) in collaboration with WOCAT (www.wocat.org). It is used to select promising conservation strategies for test implementation in each of the 16 degradation and desertification hotspot sites in the Mediterranean and around the world. The methodology consists of three main parts. In the first step, prevention and mitigation strategies already applied at the respective DESIRE study site are identified and listed during a workshop with representatives of different stakeholder groups (land users, policy makers, researchers). This participatory, process-oriented approach initiates a mutual learning process among the different stakeholders by sharing knowledge and jointly reflecting on current problems and solutions related to land degradation and desertification. In the second step, these identified, locally applied solutions (technologies and approaches) are assessed with the help of the WOCAT methodology. Comprehensive questionnaires and a database system have been developed to document and evaluate all relevant aspects of technical measures as well as implementation approaches, by teams of researchers and specialists together with land users. This process ensures that local information is systematically assessed and pieced together, along with specific details about the environmental and socio-economic setting. The third part consists of another stakeholder workshop in which promising strategies for sustainable land management in the given context are selected, based on the WOCAT best practices database, including the locally applied strategies evaluated at the DESIRE sites. These promising strategies are then assessed with the help of a selection and decision support tool and adapted for test implementation at the study site.
Abstract:
Background. Colorectal polyps are abnormal growths in the wall of the colon, including the rectum. The study aims to estimate the prevalence and type of colonic polyps in children undergoing colonoscopic examination at Texas Children's Hospital (TCH) in Houston, Texas during 2000-2007; to examine the factors associated with colonic polyps and their potential determinants in children undergoing colonoscopy; to compare children who had colonic polyps with those who did not; and to determine the significant risk factors for colonic polyps in these children. Methods. We conducted a cross-sectional study to analyze data collected at TCH. We obtained demographic, clinical and histopathology information on consecutive patients who underwent colonoscopy during 2000-2007 from endoscopic records contained in the PEDS-CORI registry (Pediatric Endoscopy Database System - Clinical Outcomes Research Initiative), and abstracted data from the accompanying histopathology reports. Results. We identified 2,693 unique patients under 18 years of age who underwent colonoscopy. Approximately 65.5% were white non-Hispanic and 10.8% African-American; the mean age was 8.7 years and 51.8% were female. Polyps were present in 174 patients (6.5%). The two most common histological types were juvenile (60.6%) and inflammatory (17.4%). The prevalence of polyps was higher in younger children (12.9% at 0-5 years) than in older children (4% at 15-17 years), and slightly higher in males than in females (7.9% and 5.4%, respectively). For males only, the odds of polyps were statistically significantly higher in Blacks and Hispanics compared to white non-Hispanics (OR of 2.2 and 2.1, with 95% CI of 1.3-3.9 and 1.3-3.5, respectively). The indications for colonoscopy differed between children with and without polyps: 47.0% vs. 19.8% for lower GI bleeding, 2.7% vs. 21.4% for abdominal pain/bloating, and 0.9% vs. 9.6% for diarrhea. Conclusion. Colorectal polyps occur in about 1 in 15 children and adolescents undergoing a first colonoscopy. Younger age is strongly associated with having polyps, irrespective of ethnicity. Lower GI bleeding is strongly related to the presence of colorectal polyps in children and adolescents undergoing colonoscopy.
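For readers unfamiliar with how such estimates are obtained, the odds ratios and 95% confidence intervals quoted above can be computed from a 2x2 table as in this short Python sketch (the counts are placeholders for illustration, not the actual PEDS-CORI data):

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed with polyps, b = exposed without polyps,
    c = unexposed with polyps, d = unexposed without polyps."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Placeholder counts only; they are not taken from the study.
print(odds_ratio_ci(a=20, b=180, c=50, d=900))   # OR = 2.0 with its 95% CI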
Abstract:
In this project I studied the range of possibilities that web and mobile platforms offer for learning compiled programming languages. I then designed and implemented a platform for learning programming languages from mobile devices, with support for remote compilation from within the developed application, analysing the development process and the design choices made. The app was built with the Cordova development platform, so it can be distributed for all the mobile platforms Cordova supports, including the most popular ones: iOS and Android. The server side uses an Apache (PHP) server and the NoSQL database system MongoDB. To simplify management of the app's content, a web-based database manager was developed in parallel, which allows content to be added, edited and removed through a clear and functional interface.
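A minimal sketch of the add/edit/remove operations the web manager performs against MongoDB is shown below. It uses Python and pymongo rather than the PHP used in the project, and the database, collection and field names are hypothetical.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB instance
db = client["learning_app"]                         # hypothetical database name
lessons = db["lessons"]                             # hypothetical collection of course content

# Add a new lesson to be shown in the mobile app.
lesson_id = lessons.insert_one({
    "language": "C",
    "title": "Hello, world",
    "body": "#include <stdio.h> ...",
}).inserted_id

# Edit its title from the web manager.
lessons.update_one({"_id": lesson_id}, {"$set": {"title": "Hello, world (revised)"}})

# Remove it when it is no longer needed.
lessons.delete_one({"_id": lesson_id})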
Abstract:
RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows rules to be specified for transforming relational databases into RDF. The resulting RDF data can be materialised and stored in an RDF triple store, where SPARQL queries can then be evaluated. However, there are cases in which materialisation is not adequate or possible, for example when the underlying relational database is updated frequently. In those cases it is better to keep the RDF data virtual, so that SPARQL queries over it are translated into SQL queries that can be evaluated by the underlying relational database management system; this translation must take the specified R2RML mappings into account. The first part of this thesis focuses on query translation. We propose a formalisation of the translation from SPARQL to SQL that takes R2RML mappings into account, together with several optimisation techniques so that the generated SQL queries can be evaluated more efficiently by the underlying databases. We evaluate this approach using a synthetic benchmark and several real cases, obtaining positive results. Direct Mapping (DM) is another W3C recommendation for generating RDF data from relational databases: whereas R2RML allows users to specify their own transformation rules, DM establishes fixed rules. Although both recommendations were published at the same time, in September 2012, there has been no formal study of the relationship between them. The second part of this thesis therefore studies this relationship in two directions: from R2RML to DM, and from DM to R2RML. In the first direction, we study a fragment of R2RML that has the same expressive power as DM. In the second, we represent the DM transformation rules as R2RML mappings and also add the implicit semantics that may be encoded in the database, such as subclass, 1-N and M-N relationships. This thesis shows that, by formalising and optimising R2RML-based SPARQL-to-SQL query translation, R2RML engines can be used in real cases without materialising the data, since the resulting SQL queries are efficient enough when evaluated by the underlying relational database system. In addition, it deepens the understanding of the relationship between the two W3C recommendations, something that had not been studied before.
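To give a flavour of the idea, the toy Python sketch below rewrites a single SPARQL triple pattern with a bound predicate into SQL using an R2RML-style mapping. It is only an illustration of the principle, not the translation algorithm formalised in the thesis, and the table, template and column names are invented.

# An R2RML-style mapping reduced to the essentials: one logical table,
# a subject URI template, and predicate-to-column correspondences.
mapping = {
    "table": "employee",
    "subject_template": "http://example.com/emp/{id}",
    "predicates": {
        "http://example.com/ns#name": "name",
        "http://example.com/ns#salary": "salary",
    },
}

def translate_triple_pattern(mapping, predicate_iri):
    """Translate the pattern '?s <predicate_iri> ?o' into a SQL query returning
    the subject key column and the object value column."""
    column = mapping["predicates"][predicate_iri]
    return f"SELECT id, {column} FROM {mapping['table']} WHERE {column} IS NOT NULL"

print(translate_triple_pattern(mapping, "http://example.com/ns#name"))
# SELECT id, name FROM employee WHERE name IS NOT NULL

A full translation additionally has to expand URI templates, join the queries produced for several triple patterns and handle NULL semantics, which is where the optimisation techniques mentioned above become important.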
Abstract:
A progressive spatial query retrieves spatial data based on previous queries (e.g., to fetch data in a more restricted area at higher resolution). A direct query, on the other hand, is defined as an isolated window query. A multi-resolution spatial database system should support both progressive queries and traditional direct queries. Supporting both types of query at the same time is conceptually challenging, as direct queries favour location-based data clustering, whereas progressive queries require fragmented data clustered by resolution. Two new scaleless data structures are proposed in this paper. Experimental results using both synthetic and real-world datasets demonstrate that the query processing time of the new multi-resolution approaches is comparable to, and often better than, that of multi-representation data structures for both types of queries.
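A rough Python sketch of the two query types is given below; the flat list of (level, bounding box) records is only an illustration and is not one of the scaleless data structures proposed in the paper.

# Each object carries a resolution level (0 = coarsest) and a bounding box (xmin, ymin, xmax, ymax).
DATA = [
    {"id": 1, "level": 0, "bbox": (0, 0, 10, 10)},
    {"id": 2, "level": 1, "bbox": (2, 2, 4, 4)},
    {"id": 3, "level": 2, "bbox": (3, 3, 3.5, 3.5)},
]

def intersects(b1, b2):
    """Axis-aligned bounding-box intersection test."""
    return not (b1[2] < b2[0] or b2[2] < b1[0] or b1[3] < b2[1] or b2[3] < b1[1])

def direct_query(window, max_level):
    """Isolated window query: everything inside 'window' up to a target resolution."""
    return [o for o in DATA if o["level"] <= max_level and intersects(o["bbox"], window)]

def progressive_query(window, previous_level):
    """Progressive query: only the extra detail (next level) for a refined window."""
    return [o for o in DATA if o["level"] == previous_level + 1 and intersects(o["bbox"], window)]

print(direct_query((0, 0, 5, 5), max_level=1))             # objects 1 and 2
print(progressive_query((2, 2, 4, 4), previous_level=1))   # object 3 only

The tension described above is visible even here: the direct query benefits from objects being clustered by location, while the progressive query only touches objects of a single resolution level.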
Abstract:
This paper presents a corpus-based descriptive analysis of the most prevalent transfer effects and connected speech processes observed in a comparison of 11 Vietnamese English speakers (6 females, 5 males) and 12 Australian English speakers (6 males, 6 females) over 24 grammatical paraphrase items. The phonetic processes are segmentally labelled in terms of IPA diacritic features using the EMU speech database system, with the aim of labelling departures from native-speaker pronunciation. Prosodic features were analysed using the ToBI framework. The results show many phonetic and prosodic processes that make non-native speakers' speech distinct from that of native speakers. This corpus-based methodology for analysing foreign accent may have implications for the evaluation of non-native accent, accented speech recognition and computer-assisted pronunciation learning.
Abstract:
JenPep is a relational database containing a compendium of thermodynamic binding data for the interaction of peptides with a range of important immunological molecules: the major histocompatibility complex, the TAP transporter and the T cell receptor. The database also includes annotated lists of B cell and T cell epitopes. Version 2.0 of the database is implemented in a bespoke PostgreSQL database system and is fully searchable online via a Perl/HTML interface (URL: http://www.jenner.ac.uk/JenPep).
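As a rough indication of what querying such a relational compendium looks like, the Python/SQLite sketch below uses an invented schema and placeholder rows; JenPep itself is a PostgreSQL database searched through its Perl/HTML interface.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE binding (
    peptide   TEXT,   -- peptide sequence
    molecule  TEXT,   -- e.g. an MHC allele, the TAP transporter or a T cell receptor
    ic50_nm   REAL    -- binding measure (illustrative units)
)""")
conn.executemany("INSERT INTO binding VALUES (?, ?, ?)", [
    ("SIINFEKL", "H-2Kb", 12.0),          # placeholder values for illustration only
    ("GILGFVFTL", "HLA-A*02:01", 30.0),   # placeholder values for illustration only
])

# Retrieve strong binders (IC50 below 50 nM) for a given molecule.
rows = conn.execute(
    "SELECT peptide, ic50_nm FROM binding WHERE molecule = ? AND ic50_nm < ?",
    ("HLA-A*02:01", 50.0),
).fetchall()
print(rows)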
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016.
Abstract:
The paper addresses issues related to the design of a graphical query mechanism that can act as an interface to any object-oriented database system (OODBS) in general, and to the object model of ODMG 2.0 in particular. A brief survey of related work is given, and an analysis methodology that allows such languages to be evaluated is proposed. Moreover, the user's view level of a new graphical query language for ODMG 2.0, namely GOQL (Graphical Object Query Language), is presented. The user's view level provides a graphical schema that does not contain any of the perplexing details of an object-oriented database schema, and it also provides a foundation for a graphical interface that can support ad hoc queries for object-oriented database applications. The user's view level of GOQL is illustrated with an example.
Abstract:
Vertebrate genomes are organised into a variety of nuclear environments and chromatin states that have profound effects on the regulation of gene transcription. This variation presents a major challenge to the expression of transgenes for experimental research, genetic therapies and the production of biopharmaceuticals. The majority of transgenes succumb to transcriptional silencing by their chromosomal environment when they are randomly integrated into the genome, a phenomenon known as chromosomal position effect (CPE). It is not always feasible to target transgene integration to transcriptionally permissive "safe harbour" loci that favour transgene expression, so there remains an unmet need to identify gene regulatory elements that can be added to transgenes to protect them against CPE. Dominant regulatory elements (DREs) with chromatin barrier (or boundary) activity have been shown to protect transgenes from CPE. The HS4 element from the chicken beta-globin locus and the A2UCOE element from a human housekeeping gene locus have been shown to function as DRE barriers in a wide variety of cell types and species. Despite rapid advances in the profiling of transcription factor binding, chromatin states and chromosomal looping interactions, progress towards functionally validating the many candidate barrier elements in vertebrates has been very slow, largely owing to the lack of a tractable and efficient assay for chromatin barrier activity. In this study, I developed the RGBarrier assay system to test the chromatin barrier activity of candidate DREs at pre-defined isogenic loci in human cells. The RGBarrier assay consists of a Flp-based RMCE reaction for the integration of an expression construct carrying candidate DREs at a pre-characterised chromosomal location. The RGBarrier system tracks red, green and blue fluorescent proteins by flow cytometry to monitor on-target versus off-target integration and transgene expression. Analysing reporter (GFP) expression over several weeks gives a measure of how well each candidate element protects against chromosomal silencing. The assay can be scaled up to test tens of new putative barrier elements in the same chromosomal context in parallel. The defined chromosomal contexts of the RGBarrier assay will allow detailed mechanistic studies of chromosomal silencing and of DRE barrier element action. Understanding these mechanisms will be of paramount importance for designing specific solutions to overcome chromosomal silencing in particular transgenic applications.
Abstract:
The aim of this study is to establish, through a search of the scientific evidence, whether dexmedetomidine (DEX) is safe and effective as adjuvant therapy in the management of alcohol withdrawal syndrome (AWS). Methodology: a systematic review of published and unpublished literature from January 1989 to February 2016 was carried out in PubMed, Embase, Scopus, Bireme, the Cochrane Library and other databases and portals. The inclusion criteria were randomised and non-randomised clinical trials, quasi-experimental studies, cohort studies, and case-control studies that included hospitalised patients over 18 years of age with a diagnosis of AWS in whom DEX was used as adjuvant therapy. Results: 7 studies with a total of 477 patients were included in the final analysis: two randomised clinical trials, three case-control studies and two retrospective cohort studies. Only one of the studies was double-blind and used a placebo comparator. Analysis and conclusions: the experimental studies indicate that the use of DEX as adjuvant therapy in the management of AWS produces a clinically and statistically significant reduction in benzodiazepine (BZD) doses during the first 24 hours of treatment, but no other clinical benefits were demonstrated. The non-randomised studies agree in associating the use of DEX with lower BZD doses early in treatment. Recommendations: routine use of DEX in AWS is not recommended; DEX should be used only in cases where there is evidence of therapeutic failure of BZDs.
Abstract:
Much information on the flavonoid content of Brazilian foods has already been obtained; however, this information is scattered across scientific publications and unpublished data. The objectives of this work were to compile national flavonoid data and evaluate their quality according to the United States Department of Agriculture's Data Quality Evaluation System (USDA-DQES), with few modifications, for future dissemination in the TBCA-USP (Brazilian Food Composition Database). For the compilation, the most abundant compounds in the flavonoid subclasses were considered (flavonols, flavones, isoflavones, flavanones, flavan-3-ols and anthocyanidins), and analysis of the compounds by HPLC was adopted as a criterion for data inclusion. The evaluation system considers five categories, and the maximum score assigned to each category is 20. Each data point was assigned a confidence code (CC: A, B, C or D) indicating the quality and reliability of the information. In total, 773 flavonoid values for 197 Brazilian foods were evaluated. The CC "C" (average) was attributed to 99% of the data and "B" (above average) to 1%. The categories with the lowest average scores were number of samples, sampling plan and analytical quality control (average scores of 2, 5 and 4, respectively); the analytical method category received an average score of 9, and the highest-scoring category was sample handling (average of 20). These results show that researchers need to be aware of the importance of the number of samples and the sampling plan, and of completely describing and documenting all steps of the analytical methodology and analytical quality control. (C) 2010 Elsevier Inc. All rights reserved.
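A schematic Python sketch of how a confidence code could be derived from the five category scores described above is given below; the simple summation and the letter-code cut-offs are illustrative placeholders, not the actual USDA-DQES rules.

def confidence_code(scores):
    """scores: one value between 0 and 20 per quality category."""
    categories = ["number_of_samples", "sampling_plan", "sample_handling",
                  "analytical_method", "analytical_quality_control"]
    total = sum(scores[c] for c in categories)   # maximum possible: 100
    # Illustrative cut-offs only; the USDA-DQES defines its own assignment rules.
    if total >= 75:
        return "A"
    if total >= 50:
        return "B"
    if total >= 25:
        return "C"
    return "D"

# Using the average category scores reported above (2, 5, 20, 9 and 4):
print(confidence_code({"number_of_samples": 2, "sampling_plan": 5, "sample_handling": 20,
                       "analytical_method": 9, "analytical_quality_control": 4}))   # "C" under these cut-offs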