26 results for sensor grid database system
Abstract:
Antiresorptive agents such as bisphosphonates induce a rapid increase in BMD during the first year of treatment and partial maintenance of bone architecture. Trabecular Bone Score (TBS), a new grey-level texture measurement that can be extracted from the DXA image, correlates with 3D parameters of bone micro-architecture. Aim: To evaluate the longitudinal effect of antiresorptive agents on spine BMD and on site-matched spine micro-architecture as assessed by TBS. Methods: From the BMD database for the Province of Manitoba, Canada, we selected women aged >50 with paired baseline and follow-up spine DXA examinations who had not received any prior HRT or other antiresorptive drug. Women were divided into two subgroups: (1) those not receiving any HRT or antiresorptive drug during follow-up (non-users) and (2) those receiving a non-HRT antiresorptive drug during follow-up (users) with high adherence (medication possession ratio >75%) from a provincial pharmacy database system. Lumbar spine TBS was derived by the Bone Disease Unit, University of Lausanne, for each spine DXA examination using anonymized files (blinded to clinical parameters and outcomes). Effects of antiresorptive treatment on TBS and BMD at baseline and during a mean 3.7 years of follow-up were compared between users and non-users. Results were expressed as % change per year. Results: 1150 non-users and 534 users met the inclusion criteria. At baseline, users and non-users had a mean age and BMI of [62.2±7.9 vs 66.1±8.0 years] and [26.3±4.7 vs 24.7±4.0 kg/m²], respectively. Antiresorptive drugs received by users were bisphosphonates (86%), raloxifene (10%) and calcitonin (4%). Significant differences in BMD change and TBS change were seen between users and non-users during follow-up (p<0.0001). Significant decreases in mean BMD and TBS (−0.36±0.05% per year; −0.31±0.06% per year) were seen for non-users compared with baseline (p<0.001).
A significant increase in mean BMD was seen for users compared with baseline (+1.86±0.0% per year, p<0.0018). TBS of users also increased compared with baseline (+0.20±0.08% per year, p<0.001), but more slowly than BMD. Conclusion: We observed a significant increase in spine BMD and maintenance of bone micro-architecture, as assessed by TBS, with antiresorptive treatment, whereas the treatment-naïve group lost both density and micro-architecture. TBS appears to be responsive to treatment and could be suitable for monitoring micro-architecture. This article is part of a Special Issue entitled ECTS 2011. Disclosure of interest: M.-A. Krieg: none declared; A. Goertzen: none declared; W. Leslie: none declared; D. Hans: consulting fees from Medimaps.
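The per-year rates above can be illustrated with a minimal sketch; the simple linear per-year formula and the example values are assumptions for illustration, not taken from the study:

```python
# Hedged sketch: linear % change per year relative to baseline.
# The exact annualization method used in the study is an assumption here.
def pct_change_per_year(baseline, follow_up, years):
    """Percent change per year, linear relative to baseline."""
    return 100.0 * (follow_up - baseline) / baseline / years

# Illustrative values: spine BMD 0.950 -> 1.015 g/cm^2 over 3.7 years
rate = pct_change_per_year(0.950, 1.015, 3.7)   # ~1.85 % per year
```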
Abstract:
Abstract: Dynamics is a central aspect of ski jumping, particularly during take-off and stable flight. Currently, measurement systems able to measure ski jumping dynamics (e.g. 3D cameras, force plates) are complex and only available in a few research centres worldwide. This study proposes a method to determine dynamics using a wearable inertial sensor-based system which can be used routinely on any ski jumping hill. The system automatically calculates characteristic dynamic parameters during take-off (position and velocity of the centre of mass perpendicular to the table, force acting on the centre of mass perpendicular to the table, and somersault angular velocity) and stable flight (total aerodynamic force). Furthermore, the acceleration of the ski perpendicular to the table was quantified to characterise the ski's lift at take-off. The system was tested with two groups of 11 athletes with different jump distances. The force acting on the centre of mass, the acceleration of the ski perpendicular to the table, the somersault angular velocity and the total aerodynamic force differed between groups and correlated with jump distance. Furthermore, all dynamic parameters were within the range of prior studies based on stationary measurement systems, except for the mean force on the centre of mass, which was slightly lower.
Abstract:
Abstract: Automated genome sequencing and annotation, as well as large-scale gene expression measurement methods, generate a massive amount of data for model organisms such as human and mouse. Searching for gene-specific or organism-specific information throughout all the different databases has become a very difficult task, and often results in fragmented and unrelated answers. A database that federates and integrates genomic and transcriptomic data will greatly improve search speed as well as the quality of the results, by allowing a direct comparison of expression results obtained by different techniques. The main goal of this project, called the CleanEx database, is thus to provide access to public gene expression data via unique gene names and to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-dataset comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single gene expression experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used.
These targets are then mapped at regular intervals to the growing and evolving catalogues of genes from model organisms, such as human and mouse. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The Affymetrix mapping files are accessible as text files, for further use in external applications, and as individual entries via the web-based interfaces. The CleanEx web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria, as well as cross-dataset analysis tools and cross-chip gene comparison. These tools have proven to be very efficient in expression data comparison and even, to a certain extent, in the detection of differentially expressed splice variants. The CleanEx flat files and tools are available online at http://www.cleanex.isb-sib.ch/.
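The target-to-catalogue mapping idea described above can be sketched as follows; all identifiers, the catalogue contents and the function name are invented for illustration and do not reflect CleanEx's actual schema:

```python
# Toy sketch of the CleanEx mapping idea: permanent target identifiers
# are periodically re-mapped to the current gene catalogue to build
# a gene index. All identifiers below are invented.
permanent_targets = {
    "TARGET:0001": {"type": "cDNA clone", "seq_ref": "NM_000001"},
    "TARGET:0002": {"type": "Affymetrix probe set", "seq_ref": "NM_000002"},
}

# Snapshot of an evolving gene catalogue (e.g. derived from RefSeq/UniGene)
gene_catalogue = {"NM_000001": "GENE_A", "NM_000002": "GENE_B"}

def build_gene_index(targets, catalogue):
    """Group target identifiers under the current gene name."""
    index = {}
    for target_id, info in targets.items():
        gene = catalogue.get(info["seq_ref"])   # None flags a failed mapping
        index.setdefault(gene, []).append(target_id)
    return index

index = build_gene_index(permanent_targets, gene_catalogue)
```

Rebuilding the index from permanent identifiers, rather than storing gene names directly, is what keeps the nomenclature current as catalogues evolve.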
Abstract:
The present study proposes a method based on ski-fixed inertial sensors to automatically compute spatio-temporal parameters (phase durations, cycle speed and cycle length) for the diagonal stride in classical cross-country skiing. The proposed system was validated against a marker-based motion capture system during indoor treadmill skiing. The skiing movement of 10 junior to world-cup athletes was measured under four different conditions. The accuracy (i.e. median error) and precision (i.e. interquartile range of error) of the system were below 6 ms for cycle duration and ski thrust duration, and below 35 ms for pole push duration. Cycle speed precision (accuracy) was below 0.1 m/s (0.005 m/s) and cycle length precision (accuracy) was below 0.15 m (0.005 m). The system was sensitive to changes in conditions and was accurate enough to detect significant differences reported in previous studies. Since the capture volume is not limited and the setup is simple, the system would be well suited for outdoor measurements on snow.
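The accuracy and precision definitions used above (median error and interquartile range of the error) can be illustrated with a short sketch; the sensor and motion-capture values are hypothetical:

```python
# Sketch of the study's error metrics: accuracy = median error,
# precision = interquartile range (IQR) of the error.
from statistics import median, quantiles

def accuracy_precision(system, reference):
    """Return (median error, IQR of error) for paired measurements."""
    err = [s - r for s, r in zip(system, reference)]
    acc = median(err)
    q1, _, q3 = quantiles(err, n=4, method="inclusive")
    return acc, q3 - q1

# Hypothetical cycle durations in seconds, sensor vs. motion capture
sensor = [1.402, 0.998, 1.405, 1.201]
mocap  = [1.400, 1.000, 1.400, 1.200]
acc, prec = accuracy_precision(sensor, mocap)   # both on the order of ms
```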
Abstract:
Activation of the hepatoportal glucose sensors by portal glucose infusion leads to increased glucose clearance and induction of hypoglycemia. Here, we investigated whether glucagon-like peptide-1 (GLP-1) could modulate the activity of these sensors. Mice were therefore infused with saline (S-mice) or glucose (P-mice) through the portal vein at a rate of 25 mg/kg/min. In P-mice, glucose clearance increased to 67.5 ± 3.7 ml/kg/min as compared with 24.1 ± 1.5 ml/kg/min in S-mice, and glycemia decreased from 5.0 ± 0.1 to 3.3 ± 0.1 mmol/l at the end of the 3-h infusion period. Coinfusion of GLP-1 with glucose into the portal vein at a rate of 5 pmol/kg/min (P-GLP-1 mice) did not increase the glucose clearance rate (57.4 ± 5.0 ml/kg/min) or the hypoglycemia (3.8 ± 0.1 mmol/l) beyond that observed in P-mice. In contrast, coinfusion of glucose and the GLP-1 receptor antagonist exendin-(9-39) into the portal vein at a rate of 0.5 pmol/kg/min (P-Ex mice) reduced glucose clearance to 36.1 ± 2.6 ml/kg/min and transiently increased glycemia to 9.2 ± 0.3 mmol/l at 60 min of infusion, before it returned to the fasting level (5.6 ± 0.3 mmol/l) at 3 h. When glucose and exendin-(9-39) were infused through the portal and femoral veins, respectively, glucose clearance increased to 70.0 ± 4.6 ml/kg/min and glycemia decreased to 3.1 ± 0.1 mmol/l, indicating that exendin-(9-39) has an effect only when infused into the portal vein. Finally, portal vein infusion of glucose in GLP-1 receptor(−/−) mice failed to increase the glucose clearance rate (26.7 ± 2.9 ml/kg/min). Glycemia increased to 8.5 ± 0.5 mmol/l at 60 min and remained elevated until the end of the glucose infusion (8.2 ± 0.4 mmol/l). Together, our data show that the GLP-1 receptor is part of the hepatoportal glucose sensor and that basal fasting levels of GLP-1 are sufficient to confer maximum glucose competence to the sensor.
These data demonstrate an important extrapancreatic effect of GLP-1 in the control of glucose homeostasis.
Abstract:
BACKGROUND: DNA sequence integrity, mRNA concentrations and protein-DNA interactions have been subject to genome-wide analyses based on microarrays, with ever-increasing efficiency and reliability, over the past fifteen years. More recently, however, novel technologies for Ultra High-Throughput DNA Sequencing (UHTS) have been harnessed to study these phenomena with unprecedented precision. As a consequence, the extensive bioinformatics environment available for array data management, analysis, interpretation and publication must be extended to include these novel sequencing data types. DESCRIPTION: MIMAS was originally conceived as a simple, convenient and local Microarray Information Management and Annotation System focused on GeneChips for expression profiling studies. MIMAS 3.0 enables users to manage data from high-density oligonucleotide SNP Chips, expression arrays (both 3'UTR and tiling), promoter arrays, BeadArrays and UHTS data using a MIAME-compliant standardized vocabulary. Importantly, researchers can export data in MAGE-TAB format and upload them to the EBI's ArrayExpress certified data repository in a one-step procedure. CONCLUSION: We have vastly extended the capability of the system so that it processes the data output of six types of GeneChips (Affymetrix), two different BeadArrays for mRNA and miRNA (Illumina) and the Genome Analyzer (a popular Ultra High-Throughput DNA Sequencer, Illumina), without compromising its flexibility and user-friendliness. MIMAS, appropriately renamed the Multiomics Information Management and Annotation System, is currently used by scientists working in approximately 50 academic laboratories and genomics platforms in Switzerland and France. MIMAS 3.0 is freely available at http://multiomics.sourceforge.net/.
Abstract:
Abstract: In the field of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms allow, inter alia, a fingermark to be compared against a fingerprint database and therefore a link to be established between the mark and a known source. With the growth in the capacity of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases is increasing. The current challenge for the field of fingerprint identification is the growth of these databases, which makes it possible to find impressions that are very similar but come from distinct fingers. At the same time, however, these data and systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be used in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger and the between-variability of the mark with respect to a database, can then be based on representative data. Thus, these data allow an evaluation that may be more detailed than that obtained by applying rules established long before the advent of these large databases, or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true. Moreover, they should support this hypothesis more and more strongly as information is added in the form of additional minutiae.
For the modelling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors that influence these distributions were also analysed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence was shown is the orientation of the minutiae. The results show that the likelihood ratios resulting from the use of AFIS scores can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that the two impressions were left by the same finger, when the impressions in fact came from different fingers, is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained when the two impressions come from the same finger are on average of the order of 100, 1,000 and 10,000 for 6, 7 and 8 minutiae. These likelihood ratios can therefore be an important aid for decision making. Both positive trends linked to the addition of minutiae (a drop in the rates of likelihood ratios that can lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study.
Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found and showed satisfactory results.
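The score-based likelihood ratio described in this abstract can be sketched as follows; the Gaussian score models and all score values are illustrative assumptions, not the thesis's actual AFIS score distributions:

```python
# Hedged sketch of a score-based likelihood ratio:
# LR = p(score | same finger) / p(score | different fingers).
# Gaussian models and the scores below are invented for illustration.
from statistics import NormalDist, mean, stdev

def fit(scores):
    """Fit a simple Gaussian model to a sample of comparison scores."""
    return NormalDist(mean(scores), stdev(scores))

within  = [520, 480, 510, 495, 505]   # same-finger comparison scores
between = [120, 90, 150, 110, 130]    # different-finger comparison scores

def likelihood_ratio(score, within_model, between_model):
    return within_model.pdf(score) / between_model.pdf(score)

# A high score near the within-distribution yields a large LR,
# supporting the same-finger hypothesis.
lr = likelihood_ratio(470, fit(within), fit(between))
```

In practice, as the thesis discusses, the robustness of such an LR depends on having enough score observations to estimate both distributions reliably.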
Abstract:
Background: Disease management, a system of coordinated health care interventions for populations with chronic diseases in which patient self-care is a key aspect, has been shown to be effective for several conditions. Little is known about the supply of disease management programs in Switzerland. Objectives: To systematically search, record and evaluate data on existing disease management programs in Switzerland. Methods: Programs met our operational definition of disease management if their interventions targeted a chronic disease, included a multidisciplinary team and lasted at least 6 months. To find existing programs, we searched Swiss official websites, Swiss web pages using Google, a medical electronic database (Medline), and checked references from selected documents. We also contacted personally known individuals, those identified as possibly working in the field, individuals working in major Swiss health insurance companies, and people recommended by previously contacted persons (snowball strategy). We developed an extraction grid and collected information pertaining to the following 8 domains: patient population, intervention recipient, intervention content, delivery personnel, method of communication, intensity and complexity, environment, and clinical outcomes (measures?). Results: We identified 8 programs fulfilling our operational definition of disease management. Programs targeted patients with diabetes, hypertension, heart failure, obesity, alcohol dependence, psychiatric disorders or breast cancer, and were mainly directed towards patients. The interventions were multifaceted and included education in almost all cases. Half of the programs included regularly scheduled follow-up, by phone in 3 instances. The healthcare professionals involved were physicians, nurses, case managers, social workers, psychologists and dietitians. None fulfilled the 6 criteria established by the Disease Management Association of America.
Conclusions: Our study shows that disease management programs, in a country with universal health insurance coverage and little incentive to develop new healthcare strategies, are scarce, although we may have missed existing programs. Nonetheless, those already implemented are very interesting and rather comprehensive. Appropriate evaluation of these programs should be performed in order to build upon them and try to design a generic disease management framework suited to the Swiss healthcare system.
Abstract:
Since 2008, the intelligence units of six states in the western part of Switzerland have shared a common database for the analysis of high-volume crime. On a daily basis, events reported to the police are analysed, filtered and classified to detect crime repetitions and interpret the crime environment. Several forensic outcomes are integrated into the system, such as matches of traces with persons and links between scenes detected by the comparison of forensic case data. Systematic procedures have been established to integrate links inferred mainly through DNA profiles, shoemark patterns and images. A statistical overview of a retrospective dataset of series from 2009 to 2011 in the database informs, for instance, on the number of repetitions detected or confirmed and augmented by forensic case data. The time needed to obtain forensic intelligence, depending on the type of marks treated, is seen as a critical issue. Furthermore, the underlying integration of forensic intelligence into the crime intelligence database raised several difficulties regarding the acquisition of data and the models used in the forensic databases. The solutions found and the operational procedures adopted are described and discussed. This process forms the basis for many other research efforts aimed at developing forensic intelligence models.
Abstract:
The aim of this paper is to evaluate the risks associated with the use of fake fingerprints on a livescan device equipped with a liveness detection method. The method is based on the optical properties of the skin: the sensor uses several polarizations and illuminations to capture information from the different layers of the human skin. The experiments also determine under which conditions the system is deceived and whether the nature of the fake, the mould used for its production, or the individuals involved in the attack have an influence. These experiments showed that current multispectral sensors can be deceived by fake fingerprints created with or without the cooperation of the subject. Fakes created from direct casts perform better than those created from indirect casts. The results showed that the success of an attack is influenced by two main factors. The first is the quality of the fakes and, by extension, the quality of the original fingerprint. The second is the combination of the general patterns involved in the attack, since an appropriate combination can strongly increase the rate of successful attacks.
Abstract:
The clinical demand for a device to monitor Blood Pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is increasing. Based on the so-called Pulse Wave Velocity (PWV) principle, this paper introduces and evaluates a novel concept of BP monitor that can be fully integrated within a chest sensor. After a preliminary calibration, the sensor provides non-occlusive beat-by-beat estimations of Mean Arterial Pressure (MAP) by measuring the Pulse Transit Time (PTT) of arterial pressure pulses travelling from the ascending aorta towards the subcutaneous vasculature of the chest. In a cohort of 15 healthy male subjects, a total of 462 simultaneous readings consisting of reference MAP and chest PTT were acquired. Each subject was recorded on three different days: D, D+3 and D+14. Overall, the implemented protocol induced MAP values ranging from 80 ± 6 mmHg at baseline to 107 ± 9 mmHg during isometric handgrip maneuvers. Agreement between reference and chest-sensor MAP values was tested using the intraclass correlation coefficient (ICC = 0.78) and Bland-Altman analysis (mean error = 0.7 mmHg, standard deviation = 5.1 mmHg). The cumulative percentage of MAP values provided by the chest sensor falling within ±5 mmHg of the reference MAP readings was 70%; within ±10 mmHg, 91%; and within ±15 mmHg, 98%. These results indicate that the chest sensor complies with the British Hypertension Society (BHS) requirements for Grade A BP monitors when applied to MAP readings. Grade A performance was maintained even two weeks after the initial subject-dependent calibration. In conclusion, this paper introduces a sensor and a calibration strategy to perform MAP measurements at the chest. The encouraging performance of the presented technique paves the way towards an ambulatory-compliant, continuous and non-occlusive BP monitoring system.
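The cumulative-percentage grading reported in this abstract can be sketched as follows; the 60/85/95% thresholds within 5/10/15 mmHg are the published BHS Grade A criteria, while the error values below are invented for illustration:

```python
# Sketch of BHS-style cumulative-error grading. Grade A requires at
# least 60/85/95 % of errors within 5/10/15 mmHg (published BHS
# criteria); the device-minus-reference errors here are hypothetical.
def bhs_grade_a(errors):
    """Return ((pct within 5, 10, 15 mmHg), passes Grade A?)."""
    n = len(errors)
    within = lambda t: 100.0 * sum(abs(e) <= t for e in errors) / n
    p5, p10, p15 = within(5), within(10), within(15)
    return (p5, p10, p15), (p5 >= 60 and p10 >= 85 and p15 >= 95)

errors = [1, -3, 4, 0, -6, 2, 8, -2, 12, 3]   # device minus reference, mmHg
(p5, p10, p15), grade_a = bhs_grade_a(errors)
```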
Abstract:
HAMAP (High-quality Automated and Manual Annotation of Proteins, available at http://hamap.expasy.org/) is a system for the automatic classification and annotation of protein sequences. HAMAP provides annotation of the same quality and detail as UniProtKB/Swiss-Prot, using manually curated profiles for protein sequence family classification and expert-curated rules for functional annotation of family members. HAMAP data and tools are made available through our website and as part of the UniRule pipeline of UniProt, providing annotation for millions of unreviewed sequences of UniProtKB/TrEMBL. Here we report on the growth of HAMAP and updates to the HAMAP system since our last report in the NAR Database Issue of 2013. We continue to augment HAMAP with new family profiles and annotation rules as new protein families are characterized and annotated in UniProtKB/Swiss-Prot; the latest version of HAMAP (as of 3 September 2014) contains 1983 family classification profiles and 1998 annotation rules (up from 1780 and 1720, respectively). We demonstrate how the complex logic of HAMAP rules allows for precise annotation of individual functional variants within large homologous protein families. We also describe improvements to our web-based tool HAMAP-Scan, which simplify the classification and annotation of sequences, and the incorporation of an improved sequence-profile search algorithm.
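The case-style rule logic mentioned in this abstract can be sketched as follows; the family name, the residue condition and the annotation strings are invented for illustration and are not actual HAMAP rule content:

```python
# Illustrative sketch (names invented) of conditional annotation rules:
# family membership comes from a profile match, and a case condition
# refines the annotation for functional variants within the family.
def apply_rule(sequence, profile_match):
    """Annotate a sequence if it matches the family profile."""
    if not profile_match:
        return None
    annotation = {"family": "Hypothetical kinase family"}
    # Case condition: an active-site residue decides the functional variant
    # (position 42, i.e. 0-based index 41; purely illustrative).
    if len(sequence) > 41 and sequence[41] == "D":
        annotation["function"] = "Active kinase"
    else:
        annotation["function"] = "Inactive homolog (active site lost)"
    return annotation

ann = apply_rule("M" + "A" * 40 + "D" + "A" * 20, profile_match=True)
```

This conditional structure is what lets a single family rule annotate both catalytically active members and inactive homologs correctly, rather than propagating one annotation to the whole family.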