83 results for: Software testing. Test generation. Grammars
Abstract:
CONTEXT: A passive knee-extension test has been shown to be a reliable method of assessing hamstring tightness, but this method does not take into account the potential effect of gravity on the tested leg. OBJECTIVE: To compare the original passive knee-extension test with 2 adapted methods that include gravity's effect on the lower leg. DESIGN: Repeated measures. SETTING: Laboratory. PARTICIPANTS: 20 young track and field athletes (16.6 ± 1.6 y, 177.6 ± 9.2 cm, 75.9 ± 24.8 kg). INTERVENTION: Each subject was tested in a randomized order with 3 different methods: In the original method (M1), passive knee angle was measured with a standard force of 68.7 N (7 kg) applied proximal to the lateral malleolus. The second (M2) and third (M3) methods took into account the relative lower-leg weight (measured by handheld dynamometer and anthropometric table, respectively) to individualize the force applied to assess passive knee angle. MAIN OUTCOME MEASURES: Passive knee angles measured with video-analysis software. RESULTS: No difference in mean individualized applied force was found between M2 and M3, so the authors assessed passive knee angle only with M2. The mean knee angle differed between M1 and M2 (68.8° ± 12.4° vs 73.1° ± 10.6°, P < .001). Knee angles in M1 and M2 were correlated (r = .93, P < .001). CONCLUSIONS: Differences in knee angle were found between the original passive knee-extension test and a method with gravity correction. M2 is an improved version of the original method (M1) since it minimizes the effect of gravity; we therefore recommend using it rather than M1.
Abstract:
Using Monte Carlo simulations and reanalyzing the data of a validation study of the AEIM emotional intelligence test, we demonstrated that an atheoretical approach and the use of weak statistical procedures can result in biased validity estimates. These procedures included stepwise regression (and, more generally, the failure to include important theoretical controls), extreme-scores analysis, and ignoring heteroscedasticity as well as measurement error. The authors of the AEIM test responded by offering more complete information about their analyses, allowing us to further examine the perils of ignoring theory and correct statistical procedures. In this paper we show with extended analyses that the AEIM test is invalid.
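A minimal Monte Carlo sketch of the bias mechanism described above: omitting a theoretically important control inflates the apparent validity of a test score. All variable names and effect sizes are illustrative, not taken from the AEIM study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 500, 2000
biased, correct = [], []

for _ in range(n_sims):
    control = rng.normal(size=n)                # theoretically important control
    test = 0.6 * control + rng.normal(size=n)  # test score, correlated with control
    y = 0.5 * control + 0.1 * test + rng.normal(size=n)  # criterion; true test effect 0.1

    biased.append(np.polyfit(test, y, 1)[0])   # misspecified: criterion ~ test alone
    X = np.column_stack([np.ones(n), test, control])
    correct.append(np.linalg.lstsq(X, y, rcond=None)[0][1])  # with the control included

print(f"mean slope without control: {np.mean(biased):.3f} (inflated)")
print(f"mean slope with control:    {np.mean(correct):.3f} (near the true 0.1)")
```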
Abstract:
MALDI-TOF MS can be used for the identification of microorganism species. We have extended its application to a novel assay of Candida albicans susceptibility to fluconazole, based on monitoring modifications of the proteome of yeast cells grown in the presence of varying drug concentrations. The method was accurate and reliable, and showed full agreement with the Clinical and Laboratory Standards Institute's reference method. This proof-of-concept demonstration highlights the potential of this approach for testing other pathogens.
Abstract:
Background: Although there have been many studies on isokinetic shoulder exercises in evaluation and rehabilitation programs, the cardiovascular and metabolic responses to these modes of muscle-strength exercise have been poorly investigated. Objective: To analyze cardiovascular and metabolic responses during a standardized test used to study the maximal isokinetic strength of the internal (IR) and external (ER) rotators. Methods: Four days after an incremental exercise test on a cycle ergometer, ten healthy subjects performed an isokinetic shoulder strength evaluation with cardiovascular (heart rate, HR) and metabolic gas exchange (V̇O2) analysis. The IR and ER isokinetic strength, measured in a seated position with 45° of shoulder abduction in the scapular plane, was evaluated concentrically at 60, 120 and 240°/s and eccentrically at 60°/s, for both shoulder sides. An endurance test with 30 repetitions at 240°/s was performed at the end of each shoulder side's testing. Results: There was a significant increase in mean HR with isokinetic exercise (P < 0.05). Increases in HR were 42-71% over resting values. During endurance testing, increases in HR were 77-105% over resting values and corresponded to 85-86% of the maximal HR reached during the incremental test. The increase in V̇O2 during isokinetic exercise ranged from 6-11 ml/min/kg to 20-43 ml/min/kg. Conclusion: This study demonstrated significant cardiovascular and metabolic responses to isokinetic exercise of the shoulder rotator muscles. A warm-up should be performed before maximal high-intensity isokinetic shoulder testing. Our results indicate that observation and supervision are important during testing and/or training sessions, especially in subjects at risk for cardiovascular disorders.
Abstract:
We analyzed the species distribution of Candida blood isolates (CBIs), prospectively collected between 2004 and 2009 within FUNGINOS, and compared their antifungal susceptibility according to clinical breakpoints defined by the European Committee on Antimicrobial Susceptibility Testing (EUCAST) in 2013, and by the Clinical and Laboratory Standards Institute (CLSI) in 2008 (old CLSI breakpoints) and 2012 (new CLSI breakpoints). CBIs were tested for susceptibility to fluconazole, voriconazole and caspofungin by microtitre broth dilution (Sensititre® YeastOne test panel). Of 1090 CBIs, 675 (61.9%) were C. albicans, 191 (17.5%) C. glabrata, 64 (5.9%) C. tropicalis, 59 (5.4%) C. parapsilosis, 33 (3%) C. dubliniensis, 22 (2%) C. krusei and 46 (4.2%) rare Candida species. Independently of the breakpoints applied, C. albicans was almost uniformly (>98%) susceptible to all three antifungal agents. In contrast, the proportions of fluconazole- and voriconazole-susceptible C. tropicalis and of fluconazole-susceptible C. parapsilosis were lower according to EUCAST/new CLSI breakpoints than to the old CLSI breakpoints. For caspofungin, non-susceptibility occurred mainly in C. krusei (63.3%) and C. glabrata (9.4%). Nine isolates (five C. tropicalis, three C. albicans and one C. parapsilosis) were cross-resistant to azoles according to EUCAST breakpoints, compared with three isolates (two C. albicans and one C. tropicalis) according to the new and two (both C. albicans) according to the old CLSI breakpoints. Four species (C. albicans, C. glabrata, C. tropicalis and C. parapsilosis) represented >90% of all CBIs. In vitro resistance to fluconazole, voriconazole and caspofungin was rare among C. albicans, but an increase in non-susceptible isolates was observed among C. tropicalis/C. parapsilosis for the azoles and C. glabrata/C. krusei for caspofungin according to EUCAST and new CLSI breakpoints compared with old CLSI breakpoints.
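For illustration, a sketch of the reclassification exercise described above: the same MIC values can yield different susceptibility calls under different breakpoint sets. The thresholds below are placeholders, not the actual EUCAST or CLSI breakpoints.

```python
# Susceptibility calls for the same fluconazole MICs (mg/L) under two
# breakpoint sets. Thresholds are ILLUSTRATIVE, not actual EUCAST/CLSI values.
def classify(mic, s_max, r_min):
    # susceptible if MIC <= s_max, resistant if MIC >= r_min, else intermediate
    if mic <= s_max:
        return "S"
    return "R" if mic >= r_min else "I"

mics = [0.25, 0.5, 1, 2, 4, 8, 16, 64]            # hypothetical isolate MICs
breakpoints = {"old": {"s_max": 8, "r_min": 64},  # looser, old-style
               "new": {"s_max": 2, "r_min": 8}}   # stricter, new-style

for name, bp in breakpoints.items():
    calls = [classify(m, **bp) for m in mics]
    pct = 100 * calls.count("S") / len(calls)
    print(f"{name} breakpoints: {calls} -> {pct:.0f}% susceptible")
```

Stricter breakpoints shift isolates from susceptible to intermediate or resistant, which is the pattern the study reports for C. tropicalis and C. parapsilosis.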
Abstract:
Integrated approaches using different in vitro methods in combination with bioinformatics can (i) increase the success rate and speed of drug development; (ii) improve the accuracy of toxicological risk assessment; and (iii) increase our understanding of disease. Three-dimensional (3D) cell culture models are important building blocks of this strategy, which has emerged in recent years. The majority of these models are organotypic, i.e., they aim to reproduce major functions of an organ or organ system. This implies in many cases that more than one cell type forms the 3D structure, and often matrix elements play an important role. This review summarizes the state of the art concerning commonalities of the different models. For instance, the theory of mass transport/metabolite exchange in 3D systems and the special analytical requirements for test endpoints in organotypic cultures are discussed in detail. In the next part, 3D model systems for selected organs (liver, lung, skin, brain) are presented and characterized in dedicated chapters. 3D approaches to the modeling of tumors are also presented and discussed. All chapters give a historical background, illustrate the large variety of approaches, and highlight strengths and weaknesses as well as specific requirements. Moreover, they refer to applications in disease modeling, drug discovery and safety assessment. Finally, consensus recommendations indicate a roadmap for the successful implementation of 3D models in routine screening. It is expected that the use of such models will accelerate progress by reducing error rates and wrong predictions from compound testing.
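One textbook piece of the mass-transport theory mentioned above: steady-state diffusion with zero-order consumption in a spherical aggregate yields a critical radius beyond which the core becomes anoxic. A sketch with order-of-magnitude parameter values (illustrative only):

```python
import math

# Steady-state diffusion with zero-order O2 consumption in a spherical
# cell aggregate: C(r) = C_s - q*(R^2 - r^2)/(6*D). The centre becomes
# anoxic when R exceeds R_crit = sqrt(6*D*C_s/q).
# Parameter values are order-of-magnitude illustrations only.
D = 2.0e-9     # O2 diffusivity in tissue, m^2/s
C_s = 0.2      # O2 concentration at the spheroid surface, mol/m^3
q = 2.0e-2     # volumetric O2 consumption rate, mol/(m^3*s)

R_crit = math.sqrt(6 * D * C_s / q)
print(f"critical spheroid radius ~ {R_crit * 1e6:.0f} micrometres")
```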
Abstract:
PAPER 1: A THEORY ON THE EFFECTS OF INTERNATIONALIZATION ON FIRM ENTREPRENEURIAL BEHAVIOR AND GROWTH. Abstract: This article addresses the relationship between internationalization and firm growth. Past findings reveal that the direct effects of internationalization on performance are mixed and inconclusive. Our framework integrates firm entrepreneurial behavior as a mediating force in this troublesome relationship. Drawing on the tension between the entrepreneurship literature and organizational inertia theory, we argue that internationalization is key to minimizing the stifling effects of inertia and to engendering entrepreneurial behavior towards growth. We suggest that firms that internationalize at a young age and attain an intense degree of internationalization tend to become more entrepreneurial than late and weakly internationalized firms. As a consequence, early and intense internationalizers experience superior growth. Aware of the inherent endogeneity of our propositions, we also discuss how consistent estimates can be obtained when testing the model empirically.

PAPER 2: DOES INTERNATIONALIZATION MATTER FOR GROWTH? THE CASE OF SWISS SOFTWARE FIRMS. Abstract: This paper seeks to address the issue of whether early and intense internationalization leads to superior firm growth. We revisit the hypotheses of previous studies within the emerging research domain of international entrepreneurship. Empirical analyses of the performance implications of internationalization have so far been limited and inconsistent. Our paper intends to make two contributions to the international entrepreneurship literature: first, we bring additional empirical evidence to this inconclusive debate on firm performance; second, we account for endogeneity in our causal model, using a sample of 103 Swiss international small and medium-sized enterprises (SMEs). On the one hand, we find that the degree of internationalization significantly increases perceived firm growth (i.e., relative firm performance in a market), whereas age at internationalization is unrelated to perceived firm growth. On the other hand, we reproduced the causal path of a highly cited study that showed how age at internationalization was significantly and negatively associated with objective firm growth (i.e., sales). Interestingly, our results support that study in a similar setting (OLS regression with comparable control variables); however, the effect of age at internationalization reverses when we correct for endogeneity.

PAPER 3: EFFECT OF INTERNATIONALIZATION ON FIRM ENTREPRENEURIAL ORIENTATION AND PERFORMANCE: THE CASE OF SWISS SOFTWARE FIRMS. Abstract: How does internationalization influence a firm's entrepreneurial orientation (EO), and is this related to firm growth? This paper inquires into the internationalization-performance relationship: building on prior theorizing, we test a process model in which EO plays a mediating role in accounting for the relationship between internationalization and growth. We position this paper in the tension zone between the entrepreneurship literature and organizational inertia theory. We lay out the argument that internationalization is a source of opportunities that drives a firm's entrepreneurial orientation and thus mitigates inertial pressure. Using a sample of Swiss software small and medium-sized enterprises (SMEs), we found that degree of internationalization (but not age at internationalization) increases EO, which subsequently increases firm growth.
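A minimal sketch of the endogeneity correction these papers allude to, using simulated data and manual two-stage least squares; the variables, the instrument and all effect sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Unobserved firm quality drives BOTH internationalization and growth,
# so naive OLS of growth on internationalization is biased.
quality = rng.normal(size=n)
z = rng.normal(size=n)  # instrument: shifts internationalization only
intl = 0.8 * z + 0.7 * quality + rng.normal(size=n)
growth = 0.3 * intl + 1.0 * quality + rng.normal(size=n)  # true effect: 0.3

def ols_slope(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(intl, growth)  # biased upward by 'quality'
# 2SLS: stage 1 predicts intl from the instrument, stage 2 uses the prediction
intl_hat = np.polyval(np.polyfit(z, intl, 1), z)
iv = ols_slope(intl_hat, growth)  # consistent estimate, ~0.3

print(f"naive OLS: {naive:.2f}   2SLS: {iv:.2f}   (true effect 0.30)")
```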
Abstract:
OBJECTIVE: The purpose of the present study was to submit the same materials that were tested in the round-robin wear test of 2002/2003 to the Alabama wear method. METHODS: Nine restorative materials were submitted to the Alabama wear method for localized and generalized wear: seven composites (belleGlass, Chromasit, Estenia, Heliomolar, SureFil, Targis, Tetric Ceram), an amalgam (Amalcap) and a ceramic (IPS Empress). The test centre did not know which brands it was testing. Both volumetric and vertical loss were determined with an optical sensor. After completion of the wear test, the raw data were sent to IVOCLAR for further analysis. The statistical analysis of the data included logarithmic transformation of the data, the calculation of relative ranks of each material within each test centre, measures of agreement between methods, the discrimination power and coefficient of variation of each method, as well as measures of the consistency and global performance of each material. RESULTS: Relative ranks of the materials varied tremendously between the test centres. When all materials were taken into account and the test methods compared with each other, only ACTA agreed reasonably well with two other methods, i.e. OHSU and ZURICH. On the other hand, MUNICH did not agree with the other methods at all. The ZURICH method showed the lowest discrimination power; ACTA, IVOCLAR and ALABAMA (localized) the highest. Material-wise, the best global performance was achieved by the leucite-reinforced ceramic material Empress, which was clearly ahead of belleGlass, SureFil and Estenia. In contrast, Heliomolar, Tetric Ceram and especially Chromasit demonstrated a poor global performance. The best consistency was achieved by SureFil, Tetric Ceram and Chromasit, whereas the consistency of Amalcap and Heliomolar was poor. When comparing the laboratory data with clinical data, significant agreement was found for the IVOCLAR and ALABAMA generalized wear methods. SIGNIFICANCE: As the different wear simulator settings measure different wear mechanisms, it seems reasonable to combine at least two different wear settings to assess the wear resistance of a new material.
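For illustration, a sketch of two of the summary statistics reported above (relative ranks within a test centre and per-material coefficients of variation), computed on invented wear values:

```python
import numpy as np

# Hypothetical volumetric-loss measurements (rows: materials, cols: replicates)
# for one wear method; the values are invented for illustration.
materials = ["Empress", "belleGlass", "SureFil", "Heliomolar"]
wear = np.array([
    [0.10, 0.12, 0.11],
    [0.20, 0.22, 0.19],
    [0.35, 0.40, 0.30],
    [0.55, 0.70, 0.60],
])

log_wear = np.log(wear)                    # logarithmic transformation
means = log_wear.mean(axis=1)
ranks = means.argsort().argsort() + 1      # relative rank within this centre (1 = least wear)
cv = wear.std(axis=1, ddof=1) / wear.mean(axis=1)  # per-material coefficient of variation

for m, r, c in zip(materials, ranks, cv):
    print(f"{m:12s} rank {r}  CV {c:.2f}")
```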
Abstract:
Abstract: The Wnt signalling pathway is highly conserved throughout evolution. Wnt proteins are secreted molecules that bind to the Frizzled family of receptors. This interaction leads to the stabilization of the β-catenin protein, which accumulates in the cytoplasm and then migrates into the nucleus, where it can heterodimerize with transcription factors of the TCF/LEF family. This signalling pathway has been shown to play an important role during lymphopoiesis, and recent results suggest a key role for this pathway in the renewal of haematopoietic stem cells (HSCs). Studies based on protein-overexpression systems clearly show that the Wnt pathway can influence haematopoiesis. However, the role of β-catenin in the haematopoietic system had never been tested directly. This thesis project set out to study the function of β-catenin through its inducible deletion via the Cre-loxP system. Surprisingly, we demonstrated that β-catenin-deficient bone marrow progenitors show no alteration in their capacity to self-renew and/or to reconstitute all haematopoietic lineages (myeloid, erythroid and lymphoid) in chimeric mice. Moreover, thymocyte development and survival, as well as antigen-induced proliferation of peripheral T cells, are independent of β-catenin. These results suggest either that β-catenin does not play an essential role in the haematopoietic system, or that its absence can be compensated for by another protein. A prime candidate to substitute for β-catenin is plakoglobin, also known as γ-catenin; indeed, these two proteins share multiple structural features. To demonstrate that γ-catenin can compensate for the absence of β-catenin, we generated mice whose haematopoietic system is deficient for both proteins. This combined deficiency of β-catenin and γ-catenin does not perturb the self-renewal capacity of long-term haematopoietic stem cells (LT-HSCs), but it does affect an early, already lineage-committed progenitor in the bone marrow. These results show that γ-catenin can compensate for the absence of β-catenin in the haematopoietic system. This work therefore contributes to a better understanding of the Wnt cascade in haematopoiesis. Summary: The canonical Wnt signal transduction pathway is developmentally highly conserved. Wnts are secreted molecules which bind to the family of Frizzled receptors in a complex with the low-density lipoprotein receptor-related protein (LRP-5/6). This initial activation step leads to the stabilization and accumulation of β-catenin, first in the cytoplasm and subsequently in the nucleus, where it forms heterodimers with TCF/LEF transcription factor family members. Wnt signalling has been shown to be important during early lymphopoiesis and has more recently been suggested to be a key player in the self-renewal of haematopoietic stem cells (HSCs). Although mostly gain-of-function studies indicate that components of the Wnt signalling pathway can influence the haematopoietic system, the role of β-catenin has never been directly investigated.
The aim of this thesis project is to investigate the putatively critical role of β-catenin in vivo using the Cre-loxP-mediated conditional loss-of-function approach. Surprisingly, β-catenin-deficient bone marrow (BM) progenitors are not impaired in their ability to self-renew and/or to reconstitute all haematopoietic lineages (myeloid, erythroid and lymphoid) in both mixed and straight bone marrow chimeras. In addition, both thymocyte development and survival, and antigen-induced proliferation of peripheral T cells, are β-catenin independent. Our results do not necessarily exclude the possibility of an important function for β-catenin-mediated Wnt signalling in the haematopoietic system; rather, they raise the possibility that β-catenin is compensated for by another protein. A prime candidate that may take over the function of β-catenin in its absence is the close relative plakoglobin, also known as γ-catenin. This protein shares multiple structural features with β-catenin. In order to investigate whether γ-catenin can compensate for the loss of β-catenin, we have generated mice in which the haematopoietic compartment is deficient for both proteins. Combined deficiency of β-catenin and γ-catenin does not perturb Long Term-Haematopoietic Stem Cell (LT-HSC) self-renewal, but affects an already lineage-committed progenitor population within the BM. Our results demonstrate that γ-catenin can indeed compensate for the loss of β-catenin within the haematopoietic system.
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In the last decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies widely, and 8 programs offer the user the ability to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top 2 software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools would possibly not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and report generation.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. Moreover, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools would possibly not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage and automated report generation.
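A minimal sketch of the Bayesian a posteriori step such tools implement: combining a population pharmacokinetic prior with one measured concentration to obtain a maximum a posteriori clearance estimate, then rescaling the dose toward the target. The model and all numbers are illustrative, not taken from any evaluated program.

```python
import numpy as np

# One-compartment model at steady-state infusion: Css = rate / CL.
# Population prior: log(CL) ~ N(log(CL_pop), omega^2); proportional residual error.
# All parameter values are illustrative, not from any real drug model.
CL_pop, omega, sigma = 5.0, 0.3, 0.15        # L/h, between-subject SD (log scale), residual CV
rate, c_obs, c_target = 100.0, 25.0, 15.0    # mg/h, measured mg/L, target mg/L

cl_grid = np.linspace(1, 15, 2000)           # candidate clearance values
pred = rate / cl_grid                        # predicted Css for each candidate
prior_pen = ((np.log(cl_grid) - np.log(CL_pop)) / omega) ** 2
resid_pen = ((c_obs - pred) / (sigma * pred)) ** 2
cl_map = cl_grid[np.argmin(prior_pen + resid_pen)]   # MAP = minimal penalized misfit

new_rate = c_target * cl_map                 # infusion rate achieving the target Css
print(f"MAP clearance: {cl_map:.2f} L/h -> suggested rate {new_rate:.0f} mg/h")
```

The MAP estimate is pulled between the population value (5 L/h) and the value implied by the observation alone (rate / c_obs = 4 L/h), weighted by the prior and residual variances.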
Abstract:
There is a need for more efficient methods giving insight into the complex mechanisms of neurotoxicity. Testing strategies including in vitro methods have been proposed to comply with this requirement. With the present study we aimed to develop a novel in vitro approach which mimics in vivo complexity, detects neurotoxicity comprehensively, and provides mechanistic insight. For this purpose we combined rat primary re-aggregating brain cell cultures with a mass spectrometry (MS)-based metabolomics approach. As a proof of principle, we treated developing re-aggregating brain cell cultures for 48 h with the neurotoxicant methyl mercury chloride (0.1-100 μM) and the brain stimulant caffeine (1-100 μM) and acquired cellular metabolic profiles. To detect toxicant-induced metabolic alterations, the profiles were analysed using commercial software which revealed patterns in the multi-parametric dataset by principal component analysis (PCA) and recognised the most significantly altered metabolites. PCA revealed concentration-dependent cluster formation for methyl mercury chloride (0.1-1 μM) and treatment-dependent cluster formation for caffeine (1-100 μM) at sub-cytotoxic concentrations. Five relevant metabolites responsible for the concentration-dependent alterations following methyl mercury chloride treatment could be identified using MS-MS fragmentation analysis: gamma-aminobutyric acid, choline, glutamine, creatine and spermine. Their respective mass-ion intensities demonstrated metabolic alterations in line with the literature and suggest that these metabolites could be biomarkers for mechanisms of neurotoxicity or neuroprotection. In addition, we evaluated whether the approach could identify neurotoxic potential by testing eight compounds which have target organ toxicity in the liver, kidney or brain at sub-cytotoxic concentrations. PCA revealed cluster formation largely dependent on target organ toxicity, indicating potential for the development of a neurotoxicity prediction model. Given such results, a validation study would be useful to determine the reliability, relevance and applicability of this approach to neurotoxicity screening. Thus, for the first time we show the benefits and utility of in vitro metabolomics to comprehensively detect neurotoxicity and to discover new biomarkers.
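A sketch of the pattern-recognition step described above: PCA applied to a metabolite-intensity matrix to reveal treatment-dependent clustering. The data are simulated, and the commercial software's exact preprocessing is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_metabolites = 200
# Simulated mass-ion intensity profiles: 6 control cultures and 6 treated
# cultures in which a subset of metabolites shifts (a toxicant-like effect).
control = rng.normal(0, 1, size=(6, n_metabolites))
treated = rng.normal(0, 1, size=(6, n_metabolites))
treated[:, :20] += 2.0                      # 20 metabolites altered by treatment

X = np.vstack([control, treated])
X = (X - X.mean(axis=0)) / X.std(axis=0)    # autoscaling, common in metabolomics
scores = PCA(n_components=2).fit_transform(X)

for label, s in zip(["ctrl"] * 6 + ["trt"] * 6, scores):
    print(f"{label}  PC1={s[0]:+.2f}  PC2={s[1]:+.2f}")   # groups separate along PC1
```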
Abstract:
Understanding and anticipating biological invasions can focus either on traits that favour species invasiveness or on features of the receiving communities, habitats or landscapes that promote their invasibility. Here, we address invasibility at the regional scale, testing whether some habitats and landscapes are more invasible than others by fitting models that relate alien plant species richness to various environmental predictors. We use a multi-model information-theoretic approach to assess invasibility by modelling spatial and ecological patterns of alien invasion in landscape mosaics and testing competing hypotheses of environmental factors that may control invasibility. Because invasibility may be mediated by particular characteristics of invasiveness, we classified alien species according to their C-S-R plant strategies. We illustrate this approach with a set of 86 alien species in Northern Portugal. We first focus on predictors influencing species richness and expressing invasibility and then evaluate whether distinct plant strategies respond to the same or different groups of environmental predictors. We confirmed climate as a primary determinant of alien invasions and as a primary environmental gradient determining landscape invasibility. The effects of secondary gradients were detected only when the area was sub-sampled according to predictions based on the primary gradient. Then, multiple predictor types influenced patterns of alien species richness, with some types (landscape composition, topography and fire regime) prevailing over others. Alien species richness responded most strongly to extreme land management regimes, suggesting that intermediate disturbance induces biotic resistance by favouring native species richness. Land-use intensification facilitated alien invasion, whereas conservation areas hosted few invaders, highlighting the importance of ecosystem stability in preventing invasions. Plants with different strategies exhibited different responses to environmental gradients, particularly when the variations of the primary gradient were narrowed by sub-sampling. Such differential responses of plant strategies suggest using distinct control and eradication approaches for different areas and alien plant groups.
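The multi-model information-theoretic approach mentioned above typically reduces to Akaike weights; a sketch computing them for competing richness models (the AIC values are invented for illustration):

```python
import math

# Akaike weights for competing models of alien species richness:
# w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j),
# where dAIC_i = AIC_i - min(AIC). AIC values below are invented.
aic = {"climate": 812.4, "climate+landscape": 809.1,
       "climate+fire": 814.0, "topography": 825.3}

best = min(aic.values())
rel = {m: math.exp(-0.5 * (v - best)) for m, v in aic.items()}
total = sum(rel.values())
for m, r in sorted(rel.items(), key=lambda kv: -kv[1]):
    print(f"{m:20s} weight {r / total:.2f}")   # relative support for each model
```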
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect at least as extreme as the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to interpret the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
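A short simulation makes both notions concrete: the p value from a single two-sample t test, and the Type I error rate recovered by repeating the test under a true null hypothesis (a sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# One experiment: p is the probability, under the null of no effect, of a
# difference at least as extreme as the one observed.
a, b = rng.normal(0, 1, 30), rng.normal(0.5, 1, 30)
print(f"single-test p value: {stats.ttest_ind(a, b).pvalue:.4f}")

# Neyman-Pearson view: with threshold alpha and a TRUE null, the test rejects
# in about alpha of repetitions -- the chosen Type I error rate.
alpha, rejections, n_sims = 0.05, 0, 5000
for _ in range(n_sims):
    x, y = rng.normal(0, 1, 30), rng.normal(0, 1, 30)
    if stats.ttest_ind(x, y).pvalue < alpha:
        rejections += 1
print(f"empirical Type I error rate: {rejections / n_sims:.3f}  (alpha = {alpha})")
```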
Abstract:
Cardiovascular risk assessment might be improved with the addition of emerging new tests derived from atherosclerosis imaging, laboratory tests or functional tests. This article reviews relative risk, odds ratios, receiver-operating characteristic curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement and the integrated discrimination improvement. This serves to determine whether a new test has added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples serve to illustrate the novel approaches. This work serves as a review and as groundwork for the development of new guidelines on cardiovascular risk prediction, taking emerging tests into account, to be proposed in the future by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology.
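One of the reviewed calculations can be written in closed form: posttest risk follows from pretest risk and a likelihood ratio via odds. A sketch with illustrative numbers:

```python
# Posttest risk from a likelihood ratio: convert risk to odds, multiply by
# the LR, convert back. The numbers below are illustrative, not from the article.
def posttest_risk(pretest_risk: float, likelihood_ratio: float) -> float:
    pre_odds = pretest_risk / (1 - pretest_risk)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# e.g. a 10% pretest risk and a positive test with LR+ = 3
print(f"posttest risk: {posttest_risk(0.10, 3.0):.1%}")   # ~25%
```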