10 results for Design tool

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

40.00%

Publisher:

Abstract:

Background: Major depressive disorder (MDD) places a significant disease burden on individuals as well as on societies. Several web-based interventions for MDD have been shown to be effective in reducing depressive symptoms. However, it is not known whether web-based interventions, when used as adjunctive treatment tools to regular psychotherapy, have an additional effect compared to regular psychotherapy for depression. Methods/design: This study is a currently recruiting pragmatic randomized controlled trial (RCT) that compares regular psychotherapy plus a web-based depression program ("deprexis") with a control condition receiving regular psychotherapy only. Adults with a depressive disorder (N = 800) will be recruited in routine secondary care by therapists over the course of their initial sessions and will then be randomized within therapists to one of the two conditions. The primary outcome is depressive symptoms measured with the Beck Depression Inventory (BDI-II) at three months post randomization. Secondary outcomes include changes on various indicators such as anxiety, somatic symptoms and quality of life. All outcomes are assessed again at the secondary endpoint six months post randomization. In addition, the working alliance and the feasibility/acceptability of the treatment condition will be explored. Discussion: This is the first randomized controlled trial to examine the feasibility/acceptability and the effectiveness of a combination of traditional face-to-face psychotherapy and a web-based depression program compared to regular psychotherapeutic treatment in depressed outpatients in routine care.
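
As a rough illustration of such within-therapist randomization, the Python sketch below implements permuted-block allocation per therapist stratum; the arm labels, block size and caseload numbers are assumptions for illustration, not the trial's actual procedure.

import random

def blocked_allocation(n_patients, block_size=4,
                       arms=("psychotherapy + web program", "psychotherapy only"),
                       seed=None):
    # Build an allocation list from shuffled blocks so each therapist's
    # caseload stays roughly balanced between the two conditions.
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_patients]

# Hypothetical caseloads: one allocation list per therapist (stratum).
caseloads = {"T01": 10, "T02": 7, "T03": 12}
for i, (therapist, n) in enumerate(caseloads.items()):
    print(therapist, blocked_allocation(n, seed=i))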

Relevance:

30.00%

Publisher:

Abstract:

STEPanizer is an easy-to-use computer-based software tool for the stereological assessment of digitally captured images from all kinds of microscopical (LM, TEM, LSM) and macroscopical (radiology, tomography) imaging modalities. The program design focuses on providing the user with a defined workflow adapted to the most basic stereological tasks. The software is compact, that is, user-friendly without being bulky. STEPanizer comprises the creation of test systems, the appropriate display of digital images with superimposed test systems, a scaling facility, a counting module and an export function for transferring results to spreadsheet programs. Here we describe the major workflow of the tool, illustrating its application with two examples from transmission electron microscopy and light microscopy, respectively.
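
To make the counting workflow concrete, the Python sketch below shows two classical point- and intersection-counting estimators that such test-system tools support; it is a minimal illustration, not STEPanizer code, and the example counts are invented.

def area_fraction(points_on_structure, total_test_points):
    # Point counting (Delesse principle): the fraction of test points hitting
    # the structure estimates its area (or volume) fraction, V_V ~ P_P.
    return points_on_structure / total_test_points

def surface_density(intersections, points_on_reference, line_length_per_point):
    # Intersection counting: S_V = 2 * I_L, where I_L is the number of
    # intersections per unit length of test line within the reference space.
    total_line_length = points_on_reference * line_length_per_point
    return 2.0 * intersections / total_line_length

# Invented counts from one micrograph with a superimposed test grid:
print(area_fraction(points_on_structure=37, total_test_points=200))
print(surface_density(intersections=58, points_on_reference=180,
                      line_length_per_point=0.02))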

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The aim of this study was to compare the results of tendency-oriented perimetry (TOP) and a dynamic strategy in Octopus perimetry as screening methods in clinical practice. DESIGN: A prospective single-centre observational case series was performed. PARTICIPANTS AND METHODS: In a newly opened general ophthalmologic practice, 89 consecutive patients (171 eyes) with a clinical indication for Octopus static perimetry testing (ocular hypertension or suspicious optic nerve cupping) were examined prospectively with TOP and a dynamic strategy. The visual fields were graded by 3 masked observers as normal, borderline or abnormal without any further clinical information. RESULTS: 83% of eyes showed the same result for both strategies. In 14% there was a small difference (with one visual field being abnormal or normal and the other being borderline). In only 2.9% of the eyes (5 cases) was there a contradictory result. In 4 out of 5 cases the dynamic visual field was abnormal and TOP was normal. 4 of these cases came back for a second examination; in all 4, the follow-up examination showed a normal second dynamic visual field. CONCLUSIONS: Octopus static perimetry using a TOP strategy is a fast, patient-friendly and very reliable screening tool for general ophthalmological practice. We found no false-negative results in our series.

Relevance:

30.00%

Publisher:

Abstract:

Hepatitis C virus (HCV) vaccine efficacy may crucially depend on immunogen length and coverage of viral sequence diversity. However, covering a considerable proportion of the circulating viral sequence variants would likely require long immunogens, which, for the conserved portions of the viral genome, would contain unnecessarily redundant sequence information. In this study, we present the design and in vitro performance analysis of a novel "epitome" approach that compresses frequent immune targets of the cellular immune response against HCV into a shorter immunogen sequence. Compression of immunological information is achieved by partially overlapping shared sequence motifs between individual epitopes. At the same time, sequence diversity coverage is provided by taking advantage of emerging cross-reactivity patterns among epitope variants, so that epitope variants associated with the broadest variant cross-recognition are preferentially included. The processing and presentation analysis of specific epitopes included in such a compressed, in vitro-expressed HCV epitome indicated effective processing of a majority of tested epitopes, although presentation of some epitopes may require refined sequence design. Together, the present study establishes the epitome approach as a potentially powerful tool for vaccine immunogen design, especially suitable for the induction of cellular immune responses against highly variable pathogens.
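
The compression idea, storing shared sequence motifs only once by overlapping neighbouring epitopes, can be illustrated with a simple greedy overlap merge. The Python sketch below is a toy reconstruction of that principle, not the actual immunogen design algorithm, and the peptide sequences are invented.

def overlap(a, b):
    # Length of the longest suffix of a that is also a prefix of b.
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_epitome(epitopes):
    # Repeatedly merge the pair of sequences with the largest overlap,
    # so shared motifs are stored only once in the final sequence.
    seqs = list(dict.fromkeys(epitopes))  # drop exact duplicates, keep order
    while len(seqs) > 1:
        best = (0, 0, 1)  # (overlap length, i, j)
        for i, a in enumerate(seqs):
            for j, b in enumerate(seqs):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = seqs[i] + seqs[j][k:]
        seqs = [s for idx, s in enumerate(seqs) if idx not in (i, j)] + [merged]
    return seqs[0]

print(greedy_epitome(["ALYDVVTKL", "DVVTKLPPA", "KLPPAYGGH"]))
# -> "ALYDVVTKLPPAYGGH": 27 residues of epitope input compressed to 16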

Relevance:

30.00%

Publisher:

Abstract:

Because the usage scenarios are unknown in advance, designing the elementary services of a service-oriented architecture (SOA), which form the basis for later composition, is rather difficult. Various design guidelines have been proposed by academia, tool vendors and consulting companies, but they differ in the rigor of their validation and are often biased toward a particular technology. For that reason, a multiple-case study was conducted in five large organizations that successfully introduced SOA in their daily business. The observed approaches are contrasted with the findings from a literature review to derive recommendations for SOA service design.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND The Cochrane risk of bias (RoB) tool has been widely embraced by the systematic review community, but several studies have reported that its reliability is low. We aim to investigate whether training of raters, including objective and standardized instructions on how to assess risk of bias, can improve the reliability of this tool. We describe the methods that will be used in this investigation and present an intensive standardized training package for risk of bias assessment that could be used by contributors to the Cochrane Collaboration and other reviewers. METHODS/DESIGN This is a pilot study. We will first perform a systematic literature review to identify randomized clinical trials (RCTs) that will be used for risk of bias assessment. Using the identified RCTs, we will then conduct a randomized experiment in which raters are allocated to two different training schemes: minimal training and intensive standardized training. We will calculate the chance-corrected weighted kappa with 95% confidence intervals to quantify within- and between-group agreement for each domain of the risk of bias tool. To calculate between-group agreement, we will use risk of bias assessments from pairs of raters after resolution of disagreements. Between-group agreement will quantify the agreement between the risk of bias assessments of raters in the training groups and those of experienced raters. To compare agreement of raters under the different training conditions, we will calculate differences between kappa values with 95% confidence intervals. DISCUSSION This study will investigate whether the reliability of the risk of bias tool can be improved by training raters using standardized instructions for risk of bias assessment. One group of inexperienced raters will receive intensive training on risk of bias assessment and the other will receive minimal training. By including a control group with minimal training, we will attempt to mimic what many review authors commonly have to do, that is, conduct risk of bias assessment in RCTs without much formal training or standardized instructions. If our results indicate that intensive standardized training does improve the reliability of the RoB tool, our study is likely to help improve the quality of risk of bias assessments, which are a central component of evidence synthesis.
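
For readers unfamiliar with the agreement statistic, the Python sketch below computes a linearly weighted, chance-corrected kappa for two raters on ordered risk-of-bias categories. The category labels and ratings are invented, and in practice a 95% confidence interval would be added, for example by bootstrapping.

def weighted_kappa(ratings_a, ratings_b, categories=("low", "unclear", "high")):
    # Linearly weighted Cohen's kappa: 1 - (observed / expected disagreement).
    idx = {c: i for i, c in enumerate(categories)}
    k, n = len(categories), len(ratings_a)
    obs = [[0.0] * k for _ in range(k)]            # observed joint proportions
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    marg_a = [sum(row) for row in obs]             # marginal proportions, rater A
    marg_b = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater B
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]  # linear weights
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * marg_a[i] * marg_b[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

rater_1 = ["low", "low", "high", "unclear", "low", "high"]
rater_2 = ["low", "unclear", "high", "unclear", "low", "low"]
print(round(weighted_kappa(rater_1, rater_2), 3))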

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE Eligibility criteria are a key factor for the feasibility and validity of clinical trials. We aimed to develop an online tool to assess the potential effect of inclusion and exclusion criteria on the proportion of patients eligible for an acute stroke trial. METHODS We identified relevant inclusion and exclusion criteria of acute stroke trials. Based on these criteria, and using a cohort of 1537 consecutive patients with acute ischemic stroke from 3 stroke centers, we developed a web portal feasibility platform for stroke studies (FePASS) to estimate the proportion of patients eligible for acute stroke trials. We applied the FePASS resource to calculate the proportion of patients eligible for 4 recent stroke studies. RESULTS Sixty-one eligibility criteria were derived from 30 trials on acute ischemic stroke. FePASS, publicly available at http://fepass.uni-muenster.de, displays the proportion of eligible patients in percent to assess the effect of varying values of relevant eligibility criteria, for example, age, symptom onset time, National Institutes of Health Stroke Scale and prestroke modified Rankin Scale, on this proportion. The proportion of eligible patients for 4 recent stroke studies ranged from 2.1% to 11.3%. Slight variations of the inclusion criteria could substantially increase the proportion of eligible patients. CONCLUSIONS FePASS is an open-access online resource to assess the effect of inclusion and exclusion criteria on the proportion of patients eligible for a stroke trial. FePASS can help to design stroke studies, optimize eligibility criteria and estimate the potential recruitment rate.
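
The kind of what-if calculation FePASS provides can be mimicked with a few lines of Python: filter a cohort by adjustable criteria and report the eligible proportion. The thresholds, field names and mini-cohort below are illustrative assumptions, not FePASS code or its actual criteria.

def eligible_fraction(patients, max_age=80, min_nihss=4, max_nihss=25,
                      max_onset_hours=4.5, max_prestroke_mrs=1):
    # Share of a cohort meeting a set of (illustrative) acute stroke trial criteria.
    def ok(p):
        return (p["age"] <= max_age
                and min_nihss <= p["nihss"] <= max_nihss
                and p["onset_hours"] <= max_onset_hours
                and p["prestroke_mrs"] <= max_prestroke_mrs)
    return sum(ok(p) for p in patients) / len(patients)

cohort = [
    {"age": 72, "nihss": 9,  "onset_hours": 2.5, "prestroke_mrs": 0},
    {"age": 85, "nihss": 14, "onset_hours": 3.0, "prestroke_mrs": 1},
    {"age": 66, "nihss": 3,  "onset_hours": 1.5, "prestroke_mrs": 0},
    {"age": 59, "nihss": 18, "onset_hours": 5.5, "prestroke_mrs": 2},
]
# Varying a single criterion shows its effect on the eligible proportion:
print(eligible_fraction(cohort))              # baseline criteria
print(eligible_fraction(cohort, max_age=90))  # relaxed age limit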

Relevance:

30.00%

Publisher:

Abstract:

Stray light contamination considerably reduces the precision of photometry of faint stars for low-altitude spaceborne observatories. When measuring faint objects, stray light contamination must be dealt with in order to avoid systematic impacts on low signal-to-noise images. Stray light contamination can be represented by a flat offset in CCD data. Mitigation begins with a comprehensive study during the design phase, followed by the use of target pointing optimisation and post-processing methods. We present a code that simulates the stray light contamination in low Earth orbit coming from reflection of solar light by the Earth. StrAy Light SimulAtor (SALSA) is a tool intended to be used at an early stage to evaluate the effective visible region in the sky and, therefore, to optimise the observation sequence. SALSA can compute Earth stray light contamination for significant periods of time, allowing mission-wide parameters to be optimised (e.g. imposing constraints on the point source transmission function (PST) and/or on the altitude of the satellite). It can also be used to study the behaviour of the stray light at different seasons or latitudes. Given the position of the satellite with respect to the Earth and the Sun, SALSA computes the stray light at the entrance of the telescope following a geometrical technique. After characterising the illuminated region of the Earth, the portion of illuminated Earth that affects the satellite is calculated. Then, the flux of reflected solar photons is evaluated at the entrance of the telescope. Using the PST of the instrument, the final stray light contamination at the detector is calculated. The analysis tools include time series analysis of the contamination, evaluation of the sky coverage and an object visibility predictor. Effects of the South Atlantic Anomaly and of any shutdown periods of the instrument can be added. Several designs or mission concepts can easily be tested and compared. The code is not intended as a stand-alone mission designer; its mandatory inputs are a time series describing the trajectory of the satellite and the characteristics of the instrument. This software suite has been applied to the design and analysis of CHEOPS (CHaracterizing ExOPlanet Satellite). This mission requires very high precision photometry to detect very shallow transits of exoplanets. Different altitudes and characteristics of the detector have been studied in order to find the best parameters that reduce the effect of contamination.
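
As a flavour of the geometric part of such a computation, the Python sketch below checks whether a telescope line of sight clears the Earth disc by a given limb-exclusion angle as a function of satellite altitude. The exclusion angle and altitude are illustrative assumptions, and the test is far simpler than SALSA's full stray light evaluation.

import math

R_EARTH_KM = 6371.0

def earth_angular_radius(altitude_km):
    # Apparent angular radius of the Earth disc seen from the satellite (radians).
    return math.asin(R_EARTH_KM / (R_EARTH_KM + altitude_km))

def target_is_observable(boresight_to_nadir_deg, altitude_km, exclusion_deg=35.0):
    # Very simplified visibility test: the line of sight must stay farther than
    # a limb-exclusion angle from the (potentially sunlit) Earth disc.
    rho = math.degrees(earth_angular_radius(altitude_km))
    return boresight_to_nadir_deg > rho + exclusion_deg

# Example: at 700 km altitude the Earth disc subtends ~64 deg in radius, so with
# a 35 deg limb exclusion a target must lie > ~99 deg from nadir to be observable.
print(round(math.degrees(earth_angular_radius(700.0)), 1))
print(target_is_observable(110.0, 700.0), target_is_observable(80.0, 700.0))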

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION Every joint registry aims to improve patient care by identifying implants that have an inferior performance. For this reason, each registry records the implant name that has been used in the individual patient. In most registries, a paper-based approach has been utilized for this purpose. However, in addition to being time-consuming, this approach does not account for the fact that failure patterns are not necessarily implant specific but can be associated with design features that are used in a number of implants. Therefore, we aimed to develop and evaluate an implant product library that allows both time-saving barcode scanning on site in the hospital for the registration of the implant components and a detailed description of implant specifications. MATERIALS AND METHODS A task force consisting of representatives of the German Arthroplasty Registry, industry and computer specialists agreed on a solution that allows barcode scanning of implant components and that also uses a detailed standardized classification describing arthroplasty components. The manufacturers classified all their components that are sold in Germany according to this classification. The implant database was analyzed regarding the completeness of components using algorithms and real-time data. RESULTS The implant library could be set up successfully. At this point, the implant database includes more than 38,000 items, all of which were classified by the manufacturers according to the predefined scheme. Using patient data from the German Arthroplasty Registry, several errors in the database were detected, all of which were corrected by the respective implant manufacturers. CONCLUSIONS The implant library developed for the German Arthroplasty Registry not only allows on-site barcode scanning for the registration of implant components, but its classification tree also allows sophisticated analysis of implant characteristics, regardless of brand or manufacturer. The database is maintained by the implant manufacturers, thereby allowing registries to focus their resources on other areas of research. The database might represent a possible global model, which could encourage harmonization between joint replacement registries and enable comparisons between them.
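
A minimal sketch of how such an implant library could be modelled as a data structure is given below in Python; the schema, barcodes and classification paths are hypothetical and not the registry's actual data model.

from dataclasses import dataclass

@dataclass(frozen=True)
class ImplantComponent:
    barcode: str            # scanned on site in the hospital
    manufacturer: str
    brand: str
    classification: tuple   # standardized tree path, e.g. ("hip", "femoral stem", "cementless")

# Illustrative library entries (not real catalogue data).
LIBRARY = {
    c.barcode: c for c in [
        ImplantComponent("0401234567890", "VendorA", "StemX", ("hip", "femoral stem", "cementless")),
        ImplantComponent("0409876543210", "VendorB", "CupY", ("hip", "acetabular cup", "cemented")),
    ]
}

def register_component(scanned_barcode):
    # Resolve a scanned barcode to its classified component; unknown items are
    # flagged so the manufacturer can add and classify them in the database.
    component = LIBRARY.get(scanned_barcode)
    if component is None:
        raise KeyError(f"{scanned_barcode}: not in implant library, report for classification")
    return component

print(register_component("0401234567890").classification)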