16 results for WEB (Computer program language)


Relevance:

100.00%

Publisher:

Abstract:

REMA is an interactive web-based program which predicts endonuclease cut sites in DNA sequences. It analyses multiple sequences simultaneously and predicts the number and size of fragments as well as providing restriction maps. Users can select single or paired combinations of all commercially available enzymes. Additionally, REMA permits prediction of multiple sequence terminal fragment sizes and suggests suitable restriction enzymes for maximally discriminatory results. REMA is an easy-to-use, web-based program which will have a wide application in molecular biology research. Availability: REMA is written in Perl and is freely available for non-commercial use. Detailed information on installation can be obtained from Jan Szubert (jan.szubert@gmail.com) and the web-based application is accessible on the internet at the URL http://www.macaulay.ac.uk/rema. Contact: b.singh@macaulay.ac.uk. (C) 2007 Elsevier B.V. All rights reserved.
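As a rough illustration of the kind of computation REMA performs, the sketch below (in Python rather than Perl, and ignoring ambiguity codes, circular sequences, and the cut offset within the recognition site) finds recognition-site positions for one enzyme and derives the resulting fragment sizes; it is not taken from REMA itself.

```python
# Minimal sketch of restriction-fragment prediction (not REMA's actual code,
# which is written in Perl): find recognition-site matches for one enzyme and
# derive fragment sizes for a linear sequence. Ambiguity codes, circular DNA,
# and cut offsets within the site are ignored for brevity.

def cut_positions(sequence, recognition_site):
    """Return 0-based positions where the recognition site starts."""
    positions = []
    start = sequence.find(recognition_site)
    while start != -1:
        positions.append(start)
        start = sequence.find(recognition_site, start + 1)
    return positions

def fragment_sizes(sequence, recognition_site):
    """Fragment lengths produced by cutting a linear sequence at each site."""
    cuts = cut_positions(sequence, recognition_site)
    boundaries = [0] + cuts + [len(sequence)]
    return [b - a for a, b in zip(boundaries, boundaries[1:]) if b > a]

if __name__ == "__main__":
    dna = "GAATTCAAGGAATTCTTGGCCGAATTC"
    print(cut_positions(dna, "GAATTC"))   # [0, 9, 21]
    print(fragment_sizes(dna, "GAATTC"))  # fragment lengths between cut sites
```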

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Web-based programs are a potential medium for supporting weight loss because of their accessibility and wide reach. Research is warranted to determine the shorter- and longer-term effects of these programs in relation to weight loss and other health outcomes.

OBJECTIVE: The aim was to evaluate the effects of a Web-based component of a weight loss service (Imperative Health) in an overweight/obese population at risk of cardiovascular disease (CVD) using a randomized controlled design and a true control group.

METHODS: A total of 65 overweight/obese adults at high risk of CVD were randomly allocated to 1 of 2 groups. Group 1 (n=32) was provided with the Web-based program, which supported positive dietary and physical activity changes and assisted in managing weight. Group 2 continued with their usual self-care (n=33). Assessments were conducted face-to-face. The primary outcome was between-group change in weight at 3 months. Secondary outcomes included between-group change in anthropometric measurements, blood pressure, lipid measurements, physical activity, and energy intake at 3, 6, and 12 months. Interviews were conducted to explore participants' views of the Web-based program.

RESULTS: Retention rates for the intervention and control groups at 3 months were 78% (25/32) vs 97% (32/33), at 6 months were 66% (21/32) vs 94% (31/33), and at 12 months were 53% (17/32) vs 88% (29/33). Intention-to-treat analysis, using the baseline observation carried forward imputation method, revealed that the intervention group lost more weight relative to the control group at 3 months (mean -3.41, 95% CI -4.70 to -2.13 kg vs mean -0.52, 95% CI -1.55 to 0.52 kg, P<.001) and at 6 months (mean -3.47, 95% CI -4.95 to -1.98 kg vs mean -0.81, 95% CI -2.23 to 0.61 kg, P=.02), but not at 12 months (mean -2.38, 95% CI -3.48 to -0.97 kg vs mean -1.80, 95% CI -3.15 to -0.44 kg, P=.77). More intervention group participants than control group participants lost ≥5% of their baseline body weight at 3 months (34%, 11/32 vs 3%, 1/33, P<.001) and 6 months (41%, 13/32 vs 18%, 6/33, P=.047), but not at 12 months (22%, 7/32 vs 21%, 7/33, P=.95). The intervention group showed improvements in total cholesterol and triglycerides and adopted more positive dietary and physical activity behaviors for up to 3 months versus control; however, these improvements were not sustained.

CONCLUSIONS: Although the intervention group had high attrition levels, this study provides evidence that this Web-based program can be used to initiate clinically relevant weight loss and lower CVD risk up to 3-6 months based on the proportion of intervention group participants losing ≥5% of their body weight versus control group. It also highlights a need for augmenting Web-based programs with further interventions, such as in-person support to enhance engagement and maintain these changes.
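The intention-to-treat analysis above uses baseline-observation-carried-forward (BOCF) imputation. The sketch below illustrates that idea in general terms; the column names and values are hypothetical and not drawn from the study.

```python
# Illustrative sketch of baseline-observation-carried-forward (BOCF) imputation
# as described in the results; column names and data are hypothetical.
import pandas as pd

def bocf_change(df, baseline_col="weight_baseline_kg", followup_col="weight_3m_kg"):
    """Impute missing follow-up weights with the baseline value, then
    return the per-participant weight change (follow-up minus baseline).
    Under BOCF, dropouts contribute a change of exactly 0 kg."""
    followup = df[followup_col].fillna(df[baseline_col])
    return followup - df[baseline_col]

df = pd.DataFrame({
    "weight_baseline_kg": [92.0, 105.5, 88.3],
    "weight_3m_kg":       [88.1, None,  85.0],   # None = dropout
})
print(bocf_change(df))   # 0 kg change imputed for the dropout
```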


Relevance:

100.00%

Publisher:

Abstract:

Background: The use of technology in healthcare settings is on the increase and may represent a cost-effective means of delivering rehabilitation. Reductions in treatment time, and delivery in the home, are also thought to be benefits of this approach. Children and adolescents with brain injury often experience deficits in memory and executive functioning that can negatively affect their school work, social lives, and future occupations. Effective interventions that can be delivered at home, without the need for high-cost clinical involvement, could provide a means to address a current lack of provision. We have systematically reviewed studies examining the effects of technology-based interventions for the rehabilitation of deficits in memory and executive functioning in children and adolescents with acquired brain injury.

Objectives: To assess the effects of technology-based interventions compared to placebo intervention, no treatment, or other types of intervention, on the executive functioning and memory of children and adolescents with acquired brain injury.

Search methods: We ran the search on 30 September 2015. We searched the Cochrane Injuries Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R), EMBASE Classic + EMBASE (OvidSP), ISI Web of Science (SCI-EXPANDED, SSCI, CPCI-S, and CPSI-SSH), CINAHL Plus (EBSCO), two other databases, and clinical trials registers. We also searched the internet, screened reference lists, and contacted authors of included studies.

Selection criteria: Randomised controlled trials comparing the use of a technological aid for the rehabilitation of children and adolescents with memory or executive-functioning deficits with placebo, no treatment, or another intervention.

Data collection and analysis: Two review authors independently reviewed titles and abstracts identified by the search strategy. Following retrieval of full-text manuscripts, two review authors independently performed data extraction and assessed the risk of bias.

Main results: Four studies (involving 206 participants) met the inclusion criteria for this review. Three studies, involving 194 participants, assessed the effects of online interventions to target executive functioning (that is, monitoring and changing behaviour, problem solving, planning, etc.). These studies, which were all conducted by the same research team, compared online interventions against a 'placebo' (participants were given internet resources on brain injury). The interventions were delivered in the family home with additional support or training, or both, from a psychologist or doctoral student. The fourth study investigated the use of a computer program to target memory in addition to components of executive functioning (that is, attention, organisation, and problem solving). No information on the study setting was provided; however, a speech-language pathologist, teacher, or occupational therapist accompanied participants. Two studies assessed adolescents and young adults with mild to severe traumatic brain injury (TBI), while the remaining two studies assessed children and adolescents with moderate to severe TBI.

Risk of bias: We assessed the risk of selection bias as low for three studies and unclear for one study. Allocation bias was high in two studies, unclear in one study, and low in one study.
Only one study (n = 120) was able to conceal allocation from participants; therefore, overall selection bias was assessed as high. One study took steps to conceal allocation from assessors (low risk of detection bias), while the other three did not (high risk of detection bias).

Primary outcome 1: Executive functioning (technology-based intervention versus placebo). Results from a meta-analysis of three studies (n = 194) comparing online interventions with a placebo for children and adolescents with TBI favoured the intervention immediately post-treatment (standardised mean difference (SMD) -0.37, 95% confidence interval (CI) -0.66 to -0.09; P = 0.62; I2 = 0%). (As there is no 'gold standard' measure in the field, we have not translated the SMD back to any particular scale.) This result is thought to represent only a small to medium effect size (using Cohen’s rule of thumb, where 0.2 is a small effect, 0.5 a medium one, and 0.8 or above is a large effect); this is unlikely to have a clinically important effect on the participant. The fourth study (n = 12) reported differences between the intervention and control groups on problem solving (an important component of executive functioning). No means or standard deviations were presented for this outcome, therefore an effect size could not be calculated. The quality of evidence for this outcome according to GRADE was very low. This means future research is highly likely to change the estimate of effect.

Primary outcome 2: Memory. One small study (n = 12) reported a statistically significant difference in improvement in sentence recall between the intervention and control group following an eight-week remediation programme. No means or standard deviations were presented for this outcome, therefore an effect size could not be calculated.

Secondary outcomes: Two studies (n = 158) reported on anxiety/depression as measured by the Child Behavior Checklist (CBCL) and were included in a meta-analysis. We found no evidence of an effect with the intervention (mean difference -5.59, 95% CI -11.46 to 0.28; I2 = 53%). The GRADE quality of evidence for this outcome was very low, meaning future research is likely to change the estimate of effect. A single study sought to record adverse events and reported none. Two studies reported on use of the intervention (range 0 to 13 and 1 to 24 sessions). One study reported on social functioning/social competence and found no effect. The included studies reported no data for other secondary outcomes (that is, quality of life and academic achievement).

Authors' conclusions: This review provides low-quality evidence for the use of technology-based interventions in the rehabilitation of executive functions and memory for children and adolescents with TBI. As all of the included studies contained relatively small numbers of participants (12 to 120), our findings should be interpreted with caution. The involvement of a clinician or therapist, rather than use of the technology, may have led to the success of these interventions. Future research should seek to replicate these findings with larger samples, in other regions, using ecologically valid outcome measures, and with reduced clinician involvement.
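The pooled executive-functioning result above comes from a standard meta-analysis of standardised mean differences. The sketch below shows generic inverse-variance pooling of SMDs; the per-study values are invented for illustration and are not the review's data.

```python
# Generic inverse-variance (fixed-effect) pooling of standardised mean
# differences, the kind of calculation behind a pooled SMD and its 95% CI.
# The per-study values below are invented, not taken from the review.
import math

def pool_fixed_effect(smds, ses):
    """Inverse-variance pooled SMD, its standard error, and 95% CI."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, se_pooled, ci

# Hypothetical SMDs and standard errors for three studies
# (by Cohen's rule of thumb, 0.2 is small, 0.5 medium, 0.8 large).
smds = [-0.30, -0.45, -0.35]
ses = [0.25, 0.22, 0.30]
print(pool_fixed_effect(smds, ses))
```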

Relevance:

100.00%

Publisher:

Abstract:

Aim. The purpose of this study was to develop and evaluate a computer-based, dietary, and physical activity self-management program for people recently diagnosed with type 2 diabetes. 
Methods. The computer-based program was developed in conjunction with the target group and evaluated in a 12-week randomised controlled trial (RCT). Participants were randomised to the intervention (computer program) or control group (usual care). Primary outcomes were diabetes knowledge and goal setting (ADKnowl questionnaire, Diabetes Obstacles Questionnaire (DOQ)) measured at baseline and week 12. User feedback on the program was obtained via a questionnaire and focus groups. Results. Seventy participants completed the 12-week RCT (32 intervention, 38 control, mean age 59 (SD) years). After completion there was a significant between-group difference in the “knowledge and beliefs scale” of the DOQ. Two-thirds of the intervention group rated the program as either good or very good, 92% would recommend the program to others, and 96% agreed that the information within the program was clear and easy to understand.
Conclusions. The computer program resulted in a small but statistically significant improvement in diet-related knowledge, and user satisfaction was high. With some further development, this computer-based educational tool may be a useful adjunct to diabetes self-management.

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the development of a two-dimensional transient catalyst model. Although designed primarily for two-stroke direct injection engines, the model is also applicable to four-stroke lean burn and diesel applications. The first section describes the geometries, properties and chemical processes simulated by the model and discusses the limitations and assumptions applied. A review of the modeling techniques adopted by other researchers is also included. The mathematical relationships which are used to represent the system are then described, together with the finite volume method used in the computer program. The need for a two-dimensional approach is explained and the methods used to model effects such as flow and temperature distribution are presented. The problems associated with developing surface reaction rates are discussed in detail and compared with published research. Validation and calibration of the model is achieved by comparing predictions with measurements from a flow reactor. While an extensive validation process, involving detailed measurements of gas composition and thermal gradients, has been completed, the analysis is too detailed for publication here and is the subject of a separate technical paper.
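As a much-reduced illustration of the finite-volume approach described above, the sketch below marches a one-dimensional plug-flow channel with a lumped first-order surface reaction in time; the paper's model is two-dimensional with detailed surface kinetics, and every parameter here is assumed for illustration only.

```python
# Highly simplified 1-D finite-volume sketch of transient species conversion
# along a catalyst channel: upwind advection plus a first-order surface
# reaction. All parameters are illustrative assumptions.
import numpy as np

n_cells, length = 50, 0.1          # cells, channel length [m]
dx = length / n_cells
u = 2.0                            # gas velocity [m/s]
k = 40.0                           # lumped first-order rate constant [1/s]
dt = 0.4 * dx / u                  # time step satisfying the CFL condition
c = np.zeros(n_cells)              # pollutant concentration in each cell
c_in = 1.0                         # normalised inlet concentration

for _ in range(2000):              # march in time with upwind fluxes
    inflow = np.concatenate(([c_in], c[:-1]))        # upwind (west) face values
    c = c + dt * (u * (inflow - c) / dx - k * c)     # advection + reaction

conversion = 1.0 - c[-1] / c_in
print(f"steady-state conversion ≈ {conversion:.2f}")
```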

Relevance:

100.00%

Publisher:

Abstract:

This paper provides a summary of our studies on robust speech recognition based on a new statistical approach – the probabilistic union model. We consider speech recognition given that part of the acoustic features may be corrupted by noise. The union model is a method for basing the recognition on the clean part of the features, thereby reducing the effect of the noise on recognition. In this respect, the union model is similar to the missing feature method. However, the two methods achieve this end through different routes. The missing feature method usually requires the identity of the noisy data for noise removal, while the union model combines the local features based on the union of random events, to reduce the dependence of the model on information about the noise. We previously investigated the applications of the union model to speech recognition involving unknown partial corruption in frequency band, in time duration, and in feature streams. Additionally, a combination of the union model with conventional noise-reduction techniques was studied, as a means of dealing with a mixture of known or trainable noise and unknown unexpected noise. In this paper, we provide a unified review of each of these applications, in the context of dealing with unknown partial feature corruption, giving the appropriate theory and implementation algorithms, along with an experimental evaluation.
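A minimal sketch of the union idea, as we read it (not the authors' code): with M sub-band likelihoods and an assumed corruption order N, scores are summed over all choices of M − N bands, so the identity of the noisy bands never has to be determined.

```python
# Sketch of a probabilistic-union combination of sub-band likelihoods
# (a simplified reading of the approach, not the authors' implementation).
from itertools import combinations
from math import prod

def union_score(band_likelihoods, order):
    """Combine per-band likelihoods assuming up to `order` bands are corrupted:
    sum, over all choices of M - order bands, the product of their likelihoods."""
    m = len(band_likelihoods)
    keep = m - order                      # number of bands treated as clean
    subsets = combinations(band_likelihoods, keep)
    return sum(prod(subset) for subset in subsets)

# Four sub-band likelihoods for one model; band 2 is badly corrupted.
bands = [0.8, 0.7, 1e-6, 0.9]
print(union_score(bands, order=0))   # plain product: ruined by the noisy band
print(union_score(bands, order=1))   # union of order 1: dominated by clean bands
```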

Relevance:

100.00%

Publisher:

Abstract:

We propose some extra rules to add to the well-known Sudoku puzzle and present an argument to justify their inclusion. The rules mean that puzzles can be created with fewer cells completed initially yet which still have only one solution. We have created a Web-based program which can be used to generate and solve both standard and extended (Complete) puzzles.
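For readers unfamiliar with the baseline problem, the sketch below is a standard backtracking solver for ordinary 9×9 Sudoku; the additional "Complete" rules proposed in the paper are not specified in this abstract and are therefore not encoded here.

```python
# Minimal backtracking solver for a standard 9x9 Sudoku grid (0 = empty).
# Only the standard row/column/box constraints are enforced.

def valid(grid, r, c, v):
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False          # no candidate fits: backtrack
    return True                      # no empty cells left: solved
```

solve() fills the grid in place and returns True once every cell satisfies the standard constraints.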

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the problem of seepage under the floor of hydraulic structures, considering the component of flow that seeps through the surrounding banks of the canal. A computer program, utilizing a finite-element method and capable of handling three-dimensional (3D) saturated–unsaturated flow problems, was used. Different ratios of canal width/differential head applied on the structure were studied. The results produced from the two-dimensional (2D) analysis were observed to deviate largely from those obtained from the 3D analysis of the same problem, despite the fact that the porous medium was isotropic and homogeneous. For example, the exit gradient obtained from the 3D analysis was as high as 2.5 times its value obtained from the 2D analysis. The uplift force acting upwards on the structure also increased by about 46% compared with its value obtained from the 2D solution. When the canal width/differential head ratio was 10 or higher, the 3D results were comparable to the 2D results. It is recommended to construct a core of low-permeability soil in the banks of the canal to reduce the seepage losses, uplift force, and exit gradient.
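As a toy two-dimensional analogue of the seepage problem (finite differences rather than the paper's 3D finite-element program, and with invented dimensions and heads), the sketch below relaxes Laplace's equation for hydraulic head under a floor and reads off an exit gradient and a mean uplift head.

```python
# Very simplified 2-D (vertical cross-section) sketch of confined seepage under
# a structure floor; all dimensions and heads are illustrative assumptions.
import numpy as np

nx, nz = 81, 41                        # horizontal and vertical grid points
dx = dz = 0.25                         # grid spacing [m]
h_up, h_down = 5.0, 0.0                # upstream / downstream heads [m]
floor = (30, 50)                       # column range occupied by the floor (no-flow)

h = np.full((nz, nx), (h_up + h_down) / 2.0)

for _ in range(20000):                 # Jacobi-style relaxation of Laplace's equation
    h_new = h.copy()
    h_new[1:-1, 1:-1] = 0.25 * (h[1:-1, :-2] + h[1:-1, 2:] + h[:-2, 1:-1] + h[2:, 1:-1])
    # no-flow (zero normal gradient) on bottom and side boundaries
    h_new[-1, :] = h_new[-2, :]
    h_new[:, 0] = h_new[:, 1]
    h_new[:, -1] = h_new[:, -2]
    # top boundary: fixed heads up/downstream, no-flow under the floor
    h_new[0, :floor[0]] = h_up
    h_new[0, floor[1]:] = h_down
    h_new[0, floor[0]:floor[1]] = h_new[1, floor[0]:floor[1]]
    h = h_new

# exit gradient just downstream of the floor: head drop per unit depth
exit_gradient = (h[1, floor[1]] - h[0, floor[1]]) / dz
# uplift head acting on the underside of the floor (average over its footprint)
uplift_head = h[1, floor[0]:floor[1]].mean()
print(f"exit gradient ≈ {exit_gradient:.2f}, mean uplift head ≈ {uplift_head:.2f} m")
```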

Relevance:

100.00%

Publisher:

Abstract:

This paper considers the separation and recognition of overlapped speech sentences assuming single-channel observation. A system based on a combination of several different techniques is proposed. The system uses a missing-feature approach for improving crosstalk/noise robustness, a Wiener filter for speech enhancement, hidden Markov models for speech reconstruction, and speaker-dependent/-independent modeling for speaker and speech recognition. We develop the system on the Speech Separation Challenge database, involving a task of separating and recognizing two mixed sentences without assuming advance knowledge of the identity of the speakers or of the signal-to-noise ratio. The paper is an extended version of a previous conference paper submitted for the challenge.
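Of the components listed above, the Wiener filter is the most self-contained; the sketch below is a generic STFT-domain Wiener enhancement step with the noise spectrum estimated from a few leading frames, not the authors' exact implementation.

```python
# Minimal sketch of a spectral Wiener-filter enhancement step; a generic
# formulation, not the system described in the paper.
import numpy as np

def wiener_enhance(noisy, frame_len=512, hop=256, noise_frames=6, gain_floor=0.1):
    """STFT-domain Wiener filtering with the noise spectrum estimated from the
    first few frames (assumed speech-free). Returns the enhanced waveform."""
    window = np.hanning(frame_len)
    frames = [noisy[i:i + frame_len] * window
              for i in range(0, len(noisy) - frame_len, hop)]
    spectra = [np.fft.rfft(f) for f in frames]
    noise_psd = np.mean([np.abs(s) ** 2 for s in spectra[:noise_frames]], axis=0)

    out = np.zeros(len(noisy))
    for idx, s in enumerate(spectra):
        snr_est = np.maximum(np.abs(s) ** 2 - noise_psd, 0.0) / (noise_psd + 1e-12)
        gain = np.maximum(snr_est / (1.0 + snr_est), gain_floor)   # Wiener gain
        start = idx * hop
        out[start:start + frame_len] += np.fft.irfft(gain * s, frame_len)
    return out
```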

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new approach to speech enhancement from single-channel measurements involving both noise and channel distortion (i.e., convolutional noise), and demonstrates its applications for robust speech recognition and for improving noisy speech quality. The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise for speech estimation. Third, we present an iterative algorithm which updates the noise and channel estimates of the corpus data model. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement.
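A rough sketch of the longest-matching-segment idea follows; it only illustrates the matching step with a simple distance threshold and omits the noise and channel modelling and the iterative updates that the paper adds.

```python
# Rough sketch of the longest-matching-segment idea: for a sequence of noisy
# feature frames, search a clean-speech corpus for the longest contiguous
# segment whose frames all lie within a distance threshold. A simplified
# illustration only, not the paper's algorithm.
import numpy as np

def longest_matching_segment(noisy, corpus, threshold):
    """Return (corpus_start, noisy_start, length) of the longest run of
    consecutive frame pairs whose Euclidean distance stays below threshold."""
    best = (0, 0, 0)
    for c0 in range(len(corpus)):
        for n0 in range(len(noisy)):
            length = 0
            while (c0 + length < len(corpus) and n0 + length < len(noisy)
                   and np.linalg.norm(corpus[c0 + length] - noisy[n0 + length]) < threshold):
                length += 1
            if length > best[2]:
                best = (c0, n0, length)
    return best
```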

Relevance:

40.00%

Publisher:

Abstract:

Parallelizing compilers have difficulty analysing and optimising complex code. To address this, some analysis may be delayed until run-time, and techniques such as speculative execution used. Furthermore, to enhance performance, a feedback loop may be set up between the compile-time and run-time analysis systems, as in iterative compilation. To extend this, it is proposed that the run-time analysis collects information about the values of variables not already determined, and estimates a probability measure for the sampled values. These measures may be used to guide optimisations in further analyses of the program. To address the problem of variables with measures as values, this paper also presents an outline of a novel combination of previous probabilistic denotational semantics models, applied to a simple imperative language.
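A small sketch of the run-time side of this proposal is given below: sample the values a variable takes, form an empirical probability measure, and use it to decide whether a speculative optimisation is worthwhile. The names and decision rule are hypothetical, not from the paper.

```python
# Illustrative sketch: build an empirical probability measure from sampled
# variable values and use it to guide a speculative optimisation decision.
# Names and the decision rule are hypothetical.
from collections import Counter

class ValueProfile:
    def __init__(self):
        self.samples = Counter()

    def record(self, value):
        self.samples[value] += 1

    def probability(self, predicate):
        """Empirical probability that the sampled variable satisfies predicate."""
        total = sum(self.samples.values())
        if total == 0:
            return 0.0
        hits = sum(count for value, count in self.samples.items() if predicate(value))
        return hits / total

# e.g. sample an undetermined loop bound n during instrumented runs ...
profile = ValueProfile()
for observed_n in [8, 1024, 2048, 4096, 2048]:
    profile.record(observed_n)

# ... and only speculate on parallel execution if n is usually large.
if profile.probability(lambda n: n >= 1000) > 0.9:
    print("parallelise speculatively")
else:
    print("keep sequential version")
```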