867 results for Web based applications


Relevance:

90.00%

Publisher:

Abstract:

Web surveys are becoming increasingly popular in survey research. Compared with face-to-face, telephone and mail surveys, web surveys may contain a different and new source of measurement error and bias: the type of device that respondents use to answer the survey questions. To the best of our knowledge, this is the first study to test whether the use of mobile devices affects survey characteristics and stated preferences in a web-based choice experiment. The web survey was carried out in Germany with 3,400 respondents, of whom 12 per cent used a mobile device (i.e. tablet or smartphone), and comprised a stated choice experiment on externalities of renewable energy production using wind, solar and biomass. Our main finding is that survey characteristics such as interview length and acquiescence tendency are affected by the device used. In contrast to what might be expected, we find that, compared with respondents using desktop computers and laptops, mobile device users spent more time answering the survey and were less prone to acquiescence bias. In the choice experiment, mobile device users tended to be more consistent in their stated choices, and there are differences in willingness to pay between the two subsamples.
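To make the willingness-to-pay (WTP) comparison concrete, the sketch below shows how WTP is conventionally derived from conditional logit coefficients in a stated choice experiment. All coefficient values and attribute names are hypothetical illustrations, not estimates from this study.

```python
# Minimal sketch: deriving willingness to pay (WTP) from conditional
# logit coefficients, as is standard in stated choice experiments.
# All coefficient values below are hypothetical, not taken from the study.

def wtp(beta_attribute: float, beta_cost: float) -> float:
    """WTP for a one-unit attribute change is -beta_attribute / beta_cost."""
    return -beta_attribute / beta_cost

# Hypothetical estimates for the two subsamples.
desktop = {"beta_wind_distance": 0.30, "beta_cost": -0.05}
mobile = {"beta_wind_distance": 0.42, "beta_cost": -0.05}

for name, est in (("desktop/laptop", desktop), ("mobile", mobile)):
    print(name, "WTP per km of turbine distance:",
          wtp(est["beta_wind_distance"], est["beta_cost"]))
```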

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND Little is known about medical educators' self-definition. AIMS The aim of this study is to survey an international community of medical educators, focusing on how they define themselves. METHODS Within a comprehensive, web-based survey, an open question on how participants would define a "medical educator" was sent to 2,200 persons on the mailing list of the Association for Medical Education in Europe. The free-text definitions were analysed using qualitative thematic analysis. RESULTS Of the 2,200 medical educators invited to participate, 685 (31.1%) provided a definition of a "medical educator". The qualitative analysis of the free-text definitions revealed that medical educators defined themselves in 13 roles, primarily as "Professional Expert", "Facilitator", "Information Provider", "Enthusiast", "Faculty Developer", "Mentor", "Undergraduate and Postgraduate Trainer", "Curriculum Developer", "Assessor and Assessment Creator", and "Researcher". CONCLUSIONS Our survey revealed that medical educators predominantly define themselves as "Professional Experts" and identified 12 further self-defined roles of a medical educator, several of which had not been reported previously. The results can be used to further the understanding of our professional identity.

Relevance:

90.00%

Publisher:

Abstract:

Web surveys are becoming increasingly popular in survey research, including stated preference surveys. Compared with face-to-face, telephone and mail surveys, web surveys may contain a different and new source of measurement error and bias: the type of device that respondents use to answer the survey questions. This is the first study to test whether the use of mobile devices, tablets or smartphones, affects survey characteristics and stated preferences in a web-based choice experiment. The web survey on expanding renewable energy production in Germany was carried out with 3,182 respondents, of whom 12% used a mobile device. Propensity score matching is used to account for selection bias in the use of mobile devices for survey completion. We find that mobile device users spent more time answering the survey than desktop/laptop users. Yet desktop/laptop users and mobile device users do not differ in acquiescence tendency as an indicator of extreme response patterns. For mobile device users only, we find a negative correlation between screen size and interview length and a positive correlation between screen size and acquiescence tendency. In the choice experiment data, we do not find significant differences in the tendency to choose the status quo option or in scale between the two subsamples. However, some of the estimates of implicit prices differ, albeit not in a unidirectional fashion. Model results for mobile device users indicate a U-shaped relationship between error variance and screen size. Together, the results suggest that using mobile devices is not detrimental to survey quality.
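The propensity score matching step can be illustrated with a minimal sketch. The covariates, outcome and matching rule below are hypothetical stand-ins (the paper does not describe its implementation); numpy and scikit-learn are assumed.

```python
# Minimal sketch of propensity score matching for device self-selection.
# Covariates are hypothetical stand-ins for respondent characteristics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))            # e.g. age, education, income (standardized)
treated = rng.random(n) < 0.12         # ~12% mobile device users, as in the survey

# 1. Estimate the propensity score P(mobile | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each mobile user to the nearest desktop user on the score.
mobile_idx = np.where(treated)[0]
desktop_idx = np.where(~treated)[0]
matches = [desktop_idx[np.argmin(np.abs(ps[desktop_idx] - ps[i]))]
           for i in mobile_idx]

# 3. Compare outcomes (e.g. interview length) on the matched sample.
length = rng.normal(20, 5, size=n)     # hypothetical interview length, minutes
effect = length[mobile_idx].mean() - length[np.array(matches)].mean()
print(f"Matched difference in interview length: {effect:.2f} min")
```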

Relevance:

90.00%

Publisher:

Abstract:

Purpose: Social anxiety disorder (SAD) is one of the most researched conditions in the field of Internet-based self-help. Various studies have shown that cognitive-behavioral treatments can be efficacious in reducing social phobic symptoms. Most of the interventions tested include some form of support, whereas the efficacy of a web-based group format has yet to be investigated. The present study investigates the possible added value of therapist-guided group support in an Internet-based guided self-help treatment for SAD. Methods: A total of 150 adults with a diagnosis of SAD are randomly assigned to either a wait-list control group or one of two active treatment conditions. Participants in the two active conditions use the same Internet-based self-help program, either with individual guidance by a therapist or with the support of a therapist-guided group of six individuals. In the group condition, participants communicate with each other via an integrated, protected discussion forum. The primary outcome variables are symptoms of SAD and diagnostic status immediately after the intervention (12 weeks) and at 6-month follow-up. Secondary endpoints are general symptomatology, depression, quality of life and adherence to treatment. Furthermore, process variables such as group processes and the working alliance are studied. Results: Results are currently being analyzed; post-treatment results will be presented and discussed, and potential moderating and mediating variables of treatment success will be addressed. Conclusion: The results of this study should indicate whether therapist-guided group support can enhance the efficacy of an Internet-based self-help treatment for SAD. This novel treatment format, if shown efficacious, could represent a cost-effective option and could be further modified to treat other conditions.
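For readers curious how 150 participants might be allocated evenly across three arms, here is a generic permuted-allocation sketch. The protocol does not detail its actual randomization procedure, so this is illustrative only.

```python
# Illustrative sketch: balanced random allocation of 150 participants to
# the three conditions (wait-list, individual guidance, guided group).
# This generic permuted allocation is an assumption, not the authors' method.
import random

conditions = ["wait-list", "individual guidance", "guided group"]
participants = [f"P{i:03d}" for i in range(1, 151)]

slots = conditions * (len(participants) // len(conditions))  # 50 per arm
random.seed(42)          # fixed seed so the allocation is reproducible
random.shuffle(slots)

allocation = dict(zip(participants, slots))
print(allocation["P001"], allocation["P002"])
```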

Relevance:

90.00%

Publisher:

Abstract:

The U.S. Air Force assesses Active Duty Air Force (ADAF) health annually using the Air Force Web-based Preventative Health Assessment (AF WebPHA). The assessment is based on a self-administered survey used to determine overall Air Force health and readiness, as well as the individual health of each airman. Individual survey responses, as well as groups of responses, trigger further computer-generated assessment and result in a classification of 'Critical', 'Priority', or 'Routine', depending on the need and urgency for further evaluation by a health care provider. The purpose of the 'Priority' and 'Critical' classifications is to enable timely intervention to prevent or limit unfavorable outcomes that may threaten an airman. Though the USAF had been transitioning from a paper form to the online WebPHA survey for three years, the survey was not made mandatory for all airmen until 2009. It covers many health aspects, including family history, tobacco use, exercise, alcohol use, and mental health.

Military stressors such as deployment, change of station, and the trauma of war can aggravate and intensify the common baseline worries experienced by the general population and place airmen at additional risk for mental health concerns and illness. This study assesses the effectiveness of the AF WebPHA mental health screening questions in predicting a mental health disorder diagnosis according to International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes generated by physicians or their surrogates. To assess the sensitivity, specificity, and positive predictive value of the AF WebPHA as a screening tool for mental health, survey results were compared against any mental health disorder-related diagnosis for the period from January 1, 2009 to March 31, 2010.

Statistical analysis of the AF WebPHA mental health responses, when compared with matching ICD-9-CM codes, found that the sensitivity of 'Critical' or 'Priority' responses was only 3.4% and that a positive screen correctly predicted a mental health diagnosis only 9% of the time (positive predictive value).
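The reported screening statistics follow directly from a 2x2 confusion matrix. The sketch below shows the definitions; the cell counts are hypothetical, chosen only to roughly reproduce the reported sensitivity (3.4%) and PPV (9%), since the study's actual counts are not given in the abstract.

```python
# Screening statistics from a 2x2 confusion matrix. Counts are
# hypothetical illustrations chosen to roughly match the reported rates.
tp, fn = 34, 966       # screened positive / negative among diagnosed airmen
fp, tn = 344, 8656     # screened positive / negative among non-diagnosed

sensitivity = tp / (tp + fn)          # P(positive screen | diagnosis)
specificity = tn / (tn + fp)          # P(negative screen | no diagnosis)
ppv = tp / (tp + fp)                  # P(diagnosis | positive screen)

print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} ppv={ppv:.1%}")
```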

Relevance:

90.00%

Publisher:

Abstract:

The objectives of this study were to identify and measure the average outcomes of the Open Door Mission's nine-month community-based substance abuse treatment program, identify predictors of successful outcomes, and make recommendations to the Open Door Mission for improving its treatment program.

The Mission's program is exclusive to adult men who have limited financial resources, most of whom were homeless or dependent on parents or other family members for basic living needs. Many, but not all, of these men are either chemically dependent or have a history of substance abuse.

This study tracked a cohort of the Mission's graduates over one year and identified various indicators of success at short-term intervals, which may be predictive of longer-term outcomes. We tracked various levels of 12-step program involvement, as well as other social and spiritual activities, such as church affiliation and recovery support.

Twenty-four of the 66 subjects (36%) met the Mission's requirements for success. Regarding these success criteria: fifty-four (82%) reported affiliation with a home church; twenty-six (39%) reported full-time employment; sixty-one (92%) had no reported post-treatment arrests or incarceration; and forty (61%) reported continuous abstinence from both drugs and alcohol.

Five research-based hypotheses were developed and tested. The primary analysis tool was B-Course, a web-based non-parametric dependency modeling application, which revealed strong associations among certain variables and helped the researchers generate and test several data-driven hypotheses. Full-time employment was the greatest predictor of abstinence: 95% of those who reported full-time employment also reported continuous post-treatment abstinence, compared with 50% of those working part-time and 29% of those with no employment. Working with a 12-step sponsor, attending aftercare, and service to others were also identified as predictors of abstinence.

This study demonstrates that associations with abstinence and the ODM success criteria are not based on any single social or behavioral factor. Rather, these relationships are interdependent and show that abstinence is achieved and maintained through a combination of several 12-step recovery activities. The study used a simple assessment methodology that revealed strong associations across variables and outcomes and has practical applicability for the Open Door Mission in improving its treatment program. By leveraging the predictive capability of the success determination methodologies developed throughout this study, accurate outcomes can be identified with both validity and reliability. The assessment instrument can also be used as an intervention that, if administered to the Mission's clients during the primary treatment program, may measurably improve the effectiveness and outcomes of the Open Door Mission.
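The employment-abstinence association amounts to a conditional-probability comparison of the kind the B-Course dependency analysis surfaces. A minimal sketch, using the rates reported in the abstract (everything else is illustrative):

```python
# Conditional abstinence rates by employment status, using the rates
# reported in the abstract; the relative-risk comparison is illustrative.
rates = {"full-time": 0.95, "part-time": 0.50, "none": 0.29}

baseline = rates["none"]
for status, p in rates.items():
    print(f"P(abstinent | employment={status:9s}) = {p:.0%}  "
          f"relative to unemployed: {p / baseline:.1f}x")
```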

Relevance:

90.00%

Publisher:

Abstract:

Epilepsy is a very complex disease that can have a variety of etiologies, co-morbidities, and a long list of psychosocial factors. Clinical management of epilepsy patients typically includes serological tests, EEGs, and imaging studies to determine the single best antiepileptic drug (AED). Self-management is a vital component of achieving optimal health when living with a chronic disease. For patients with epilepsy, self-management includes any actions necessary to control seizures and cope with the subsequent effects of the condition, including aspects of treatment, seizures, and lifestyle. The use of computer-based applications can allow for more effective use of clinic visits and ultimately enhance the patient-provider relationship through focused discussion of the determinants affecting self-management.

The purpose of this study is to conduct a systematic literature review of informatics applications in epilepsy self-management, in an effort to describe current evidence for informatics applications and decision support as an adjunct to successful clinical management of epilepsy. Each publication was analyzed for the type of study design utilized.

A total of 68 publications were included and categorized by study design, development stage, and clinical domain. Descriptive study designs comprised three-fourths of the publications, indicating an underwhelming use of prospective studies. The vast majority of prospective studies focused on clinician use of applications to increase knowledge in treating patients with epilepsy.

Given the chronic nature of epilepsy and the difficulty that both clinicians and patients can experience in managing it, more prospective studies are needed to evaluate applications that can effectively support self-management activities. Within the last two decades of epilepsy research, management studies have employed biomedical informatics applications. While the use of computer applications to manage epilepsy has increased, more progress is needed.

Relevance:

90.00%

Publisher:

Abstract:

Specialized search engines such as PubMed, MedScape or Cochrane have dramatically increased the visibility of biomedical scientific results. These web-based tools allow physicians to access scientific papers instantly. However, this decisive improvement has not had a proportional impact on clinical practice, owing to the lack of advanced search methods. Even queries highly specific to a concrete pathology frequently retrieve too much information, and publications relevant to the patients actually treated by the physician fall beyond the scope of the results examined. In this work we present a new method to improve scientific article search using patient information. Two pathologies have been used within the project to retrieve literature relevant to patient data and to integrate it with other sources. Promising results suggest the suitability of the approach, highlighting publications that deal with patient features and facilitating literature search for physicians.
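The general idea of patient-informed search can be sketched by narrowing a pathology query with terms drawn from a patient record. The example below uses Biopython's Entrez interface to PubMed; the patient fields and query strategy are hypothetical and are not the method described in the paper.

```python
# Illustrative sketch of patient-informed literature search: a pathology
# query narrowed with terms from a (hypothetical) patient record, via
# Biopython's Entrez interface to PubMed.
from Bio import Entrez

Entrez.email = "researcher@example.org"   # NCBI requires a contact address

patient = {"pathology": "atrial fibrillation",
           "age_group": "aged",            # MeSH age-group term
           "comorbidity": "diabetes mellitus"}

term = (f'"{patient["pathology"]}"[MeSH Terms] '
        f'AND "{patient["comorbidity"]}"[MeSH Terms] '
        f'AND "{patient["age_group"]}"[MeSH Terms]')

handle = Entrez.esearch(db="pubmed", term=term, retmax=10)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "matching articles, first PMIDs:", record["IdList"])
```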

Relevance:

90.00%

Publisher:

Abstract:

The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies for describing different domains. Particular LD development characteristics, such as agility and web-based architecture, necessitate the revision, adaptation, and lightening of existing methodologies for ontology development. This thesis proposes a lightweight method for ontology development in an LD context, based on data-driven agile development, the reuse of existing resources, and the evaluation of the resulting products against both classical ontological engineering principles and LD characteristics.
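A minimal sketch of the reuse-oriented, lightweight style the thesis advocates, using the rdflib library: a new term is minted only where no reusable one exists, and it is linked to an existing vocabulary (FOAF here). The class and property names are hypothetical.

```python
# Lightweight, reuse-oriented ontology development sketch with rdflib.
# The ex: terms are hypothetical; the point is reusing existing
# vocabularies rather than minting every term from scratch.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

EX = Namespace("http://example.org/ontology#")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

# New class defined only where no reusable term exists...
g.add((EX.Dataset, RDF.type, OWL.Class))
g.add((EX.Dataset, RDFS.label, Literal("Dataset")))
# ...and linked to an existing vocabulary instead of duplicating it.
g.add((EX.publisher, RDF.type, OWL.ObjectProperty))
g.add((EX.publisher, RDFS.range, FOAF.Agent))

print(g.serialize(format="turtle"))
```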

Relevance:

90.00%

Publisher:

Abstract:

This paper reports on an innovative approach that aims to reduce information management costs in data-intensive and cognitively-complex biomedical environments. Recognizing the importance of prominent high-performance computing paradigms and large data processing technologies as well as collaboration support systems to remedy data-intensive issues, it adopts a hybrid approach by building on the synergy of these technologies. The proposed approach provides innovative Web-based workbenches that integrate and orchestrate a set of interoperable services that reduce the data-intensiveness and complexity overload at critical decision points to a manageable level, thus permitting stakeholders to be more productive and concentrate on creative activities.
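The orchestration idea can be pictured as chaining interoperable services so the stakeholder sees one decision-ready result instead of raw intermediates. A generic sketch follows; both service URLs are hypothetical, as the paper names no concrete endpoints.

```python
# Generic service-orchestration sketch: chain two interoperable REST
# services into one decision-ready view. URLs are hypothetical.
import requests

DATA_SVC = "https://example.org/api/cohort-query"      # hypothetical
SUMMARY_SVC = "https://example.org/api/summarize"      # hypothetical

def decision_view(query: dict) -> dict:
    """Fetch raw records, then reduce them to a decision-ready summary."""
    records = requests.post(DATA_SVC, json=query, timeout=30).json()
    summary = requests.post(SUMMARY_SVC, json={"records": records},
                            timeout=30).json()
    return summary

# e.g. decision_view({"condition": "sepsis", "age_min": 65})
```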

Relevance:

90.00%

Publisher:

Abstract:

Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and assessment of mitigation options across the world. To date, much of the information needed to describe processes such as transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization in ecosystem models remains inaccessible to the wider community, being stored within model source code or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods and to synthesize new ideas, helping to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test its architecture and functionality, it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to models and best practice guidelines; appropriate datasets for testing, calibrating and evaluating models; online tutorials; and links to modelling and data-provider research groups and their associated publications. A graphical user interface has been designed to view the model development tree and access all of the above functions.
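As an example of the kind of process description GRAMP aims to make transparent, here is the textbook single-pool, first-order soil carbon model, dC/dt = I - kC. This is an illustration, not code from DNDC, and the parameter values are hypothetical.

```python
# Single-pool, first-order soil carbon decomposition model:
# dC/dt = I - k * C, the textbook building block of process-based C
# models. Illustration only; not DNDC code, parameters hypothetical.

def simulate_soil_carbon(c0: float, inputs: float, k: float,
                         years: int) -> list[float]:
    """Annual time step: carbon stock after litter input and decay."""
    c, trajectory = c0, [c0]
    for _ in range(years):
        c = c + inputs - k * c          # input minus first-order decomposition
        trajectory.append(c)
    return trajectory

# 50 t C/ha initial stock, 2 t C/ha/yr input, 3% annual decay rate.
stocks = simulate_soil_carbon(c0=50.0, inputs=2.0, k=0.03, years=100)
print(f"equilibrium approx inputs/k = {2.0 / 0.03:.1f} t C/ha; "
      f"after 100 yr: {stocks[-1]:.1f} t C/ha")
```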

Relevance:

90.00%

Publisher:

Abstract:

Systematic evaluation of Learning Objects is essential to make high-quality Web-based education possible. For this reason, several educational repositories and e-Learning systems have developed their own evaluation models and tools. However, the differences in the contexts in which Learning Objects are produced and consumed suggest that no single evaluation model is sufficient for all scenarios. Besides, little effort has been put into developing open tools that facilitate Learning Object evaluation and use the resulting quality information for the benefit of end users. This paper presents LOEP, an open source web platform that aims to facilitate Learning Object evaluation in different scenarios and educational settings by supporting and integrating several evaluation models and quality metrics. The work presented in this paper shows that LOEP is capable of providing Learning Object evaluation to e-Learning systems in an open, low-cost, reliable and effective way. Possible scenarios where LOEP could be used to implement quality control policies and to enhance search engines are also described. Finally, we report the results of a survey conducted among reviewers who used LOEP, showing that they perceived LOEP as a powerful and easy-to-use tool for evaluating Learning Objects.
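To make the notion of a quality metric concrete, here is a minimal sketch of aggregating per-criterion reviewer ratings into one score. The criteria, weights and ratings are hypothetical (LORI-style, 1-5 scale); LOEP's actual metrics are defined by the models it integrates.

```python
# Sketch of a quality metric: weighted average over per-criterion
# reviewer ratings (LORI-style, 1-5). Criteria/weights are hypothetical.
weights = {"content_quality": 0.3, "learning_goal_alignment": 0.25,
           "presentation_design": 0.2, "interaction_usability": 0.25}

reviews = [  # one dict of criterion ratings per reviewer
    {"content_quality": 4, "learning_goal_alignment": 5,
     "presentation_design": 3, "interaction_usability": 4},
    {"content_quality": 5, "learning_goal_alignment": 4,
     "presentation_design": 4, "interaction_usability": 4},
]

def quality_score(reviews, weights):
    """Average each criterion over reviewers, then weight and sum."""
    avg = {c: sum(r[c] for r in reviews) / len(reviews) for c in weights}
    return sum(weights[c] * avg[c] for c in weights)

print(f"quality score: {quality_score(reviews, weights):.2f} / 5")
```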

Relevance:

90.00%

Publisher:

Abstract:

Evaluating and measuring the pedagogical quality of Learning Objects is essential for achieving successful web-based education. On one hand, teachers need some assurance of the quality of teaching resources before making them part of the curriculum. On the other hand, Learning Object repositories need to incorporate quality information into the ranking metrics used by their search engines in order to save users time when searching. For these reasons, several models such as LORI (Learning Object Review Instrument) have been proposed to evaluate Learning Object quality from a pedagogical perspective. However, little effort has been put into defining and evaluating quality metrics based on those models. This paper proposes and evaluates a set of pedagogical quality metrics based on LORI. The work presented in this paper shows that these metrics can be used effectively and reliably to provide quality-based sorting of search results. Besides, it provides strong evidence that evaluating Learning Objects from a pedagogical perspective can notably enhance Learning Object search if suitable evaluation models and quality metrics are used. An evaluation of the LORI model is also described. Finally, all the presented metrics are compared, and a discussion of their weaknesses and strengths is provided.
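Quality-based sorting can be sketched by combining textual relevance with a pedagogical quality score in the ranking key. The 70/30 weighting and the scores below are hypothetical; the paper evaluates several concrete metrics rather than this particular mix.

```python
# Sketch of quality-based sorting: rank combines textual relevance with
# a LORI-based quality score. Weighting and scores are hypothetical.
results = [  # (learning object id, relevance in [0,1], quality in [1,5])
    ("LO-101", 0.92, 2.1),
    ("LO-205", 0.85, 4.6),
    ("LO-330", 0.80, 4.9),
]

def rank_key(item, w_rel=0.7, w_qual=0.3):
    _, relevance, quality = item
    return w_rel * relevance + w_qual * (quality - 1) / 4  # quality scaled to [0,1]

for lo_id, rel, qual in sorted(results, key=rank_key, reverse=True):
    print(lo_id, f"relevance={rel:.2f} quality={qual:.1f}")
```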

Relevance:

90.00%

Publisher:

Abstract:

In recent years, the computer vision community has shown great interest in depth-based applications thanks to the performance and flexibility of the new generation of RGB-D imagery. In this paper, we present an efficient background subtraction algorithm based on the fusion of multiple region-based classifiers that processes depth and color data provided by RGB-D cameras. Foreground objects are detected by combining a region-based foreground prediction (based on depth data) with different background models (based on a Mixture of Gaussians algorithm) providing color and depth descriptions of the scene at pixel and region level. The information given by these modules is fused in a mixture-of-experts fashion to improve foreground detection accuracy. The main contributions of the paper are the region-based models of both background and foreground, built from the depth and color data. The results obtained using different database sequences demonstrate that the proposed approach achieves higher detection accuracy than existing state-of-the-art techniques.
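A simplified sketch of color+depth background subtraction follows, using OpenCV's standard MOG2 (Mixture of Gaussians) model on each stream and a pixel-wise fusion of the masks. This is a minimal stand-in for the paper's region-based, mixture-of-experts fusion, not a reimplementation of it.

```python
# Simplified color+depth background subtraction with OpenCV: two MOG2
# models, masks fused pixel-wise. A stand-in for the paper's
# region-based mixture-of-experts fusion, not a reimplementation.
import cv2
import numpy as np

bg_color = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
bg_depth = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def foreground_mask(color_frame: np.ndarray, depth_frame: np.ndarray) -> np.ndarray:
    """Fuse color and depth foreground evidence into one binary mask."""
    m_color = bg_color.apply(color_frame)          # 0 bg, 127 shadow, 255 fg
    m_depth = bg_depth.apply(depth_frame)
    m_color = np.where(m_color == 255, 255, 0).astype(np.uint8)  # drop shadows
    fused = cv2.bitwise_and(m_color, m_depth)      # require agreement
    return cv2.morphologyEx(fused, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```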

Relevance:

90.00%

Publisher:

Abstract:

Feedback Shift Register (FSR) algorithms have traditionally been used as pseudorandom sequence generators for stream ciphers in systems with tight resource constraints, such as Remote Keyless Entry. When electronic devices communicate, the primary channel is the one used to transmit the information. Side-Channel Attacks (SCA) exploit additional information leaking unintentionally from the actual implementation through auxiliary channels such as power consumption, electromagnetic emissions or timing. SCA are a serious threat to FSR-based applications, as an attacker usually has physical access to the devices. The main objective of this thesis is to provide a set of countermeasures that can be applied automatically using already available resources, avoiding a significant cost overhead and extending the useful life of deployed systems. We exploit the inherent parallelism of FSR-based algorithms, since the states of consecutive rounds differ by only one bit. We contribute at three levels: at the system level, using a reconfigurable co-processor; through the compiler; and at the bit level, making the most of the resources available in the processor. We have developed a framework to evaluate implementations of an algorithm, including the effects introduced by the compiler, under the assumption of an expert attacker with deep knowledge of the application and the device. Regarding attacks, we propose a new differential SCA that performs better than traditional SCA on software FSR implementations, where power consumption is very similar between rounds.

SORU2 is a reconfigurable vector co-processor designed to reduce energy consumption in loop-based applications with parallelism; we additionally propose its use for the secure execution of FSR-based algorithms. Being reconfigurable, it adds no resource overhead, as it is not exclusively dedicated to the encryption algorithm. We present a configuration that executes multiple similar encryption algorithms simultaneously, with different implementations and keys. Starting from an unprotected implementation, which we show to be completely vulnerable to SCA, we obtain an implementation that resisted all the attacks we performed. At the compiler level, we propose a mechanism to evaluate the effects of compiler optimization sequences on an implementation. The number of possible optimization sequences is extremely high, so the framework includes an algorithm for selecting the sequences that merit detailed evaluation. Because compiler optimizations transform the implementation, different optimization sequences automatically generate different implementations, among which we switch randomly to increase resistance against SCA. We propose two mechanisms for applying these countermeasures; the results show that, although they increase resistance against SCA, the resulting implementations cannot be considered fully secure. Finally, we propose bit-level parallel execution of the algorithm on a processor, using a pseudo-bitslice implementation derived automatically from the Algebraic Normal Form of the algorithm. This implementation improves performance and avoids timing leakage from data-dependent execution, but it is more vulnerable to differential SCA than the original. We therefore propose a modification that randomly discards part of the algorithm's executions to obtain a secure implementation, with negligible performance overhead compared with the original implementations. In summary, we have proposed several original mechanisms at different levels to introduce randomness into FSR-based implementations without substantially increasing the required resources.
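To illustrate the one-bit-per-round state update that the thesis's parallelization exploits, here is a textbook Fibonacci LFSR keystream generator. The 16-bit taps below are a standard maximal-length example, not the cipher analyzed in the thesis.

```python
# Textbook Fibonacci LFSR: each round shifts the register one position,
# so consecutive states differ by a single new feedback bit. Taps give
# x^16 + x^14 + x^13 + x^11 + 1, a standard maximal-length example,
# not the cipher analyzed in the thesis.

def lfsr_keystream(state: int, taps: tuple[int, ...], nbits: int, n: int):
    """Yield n keystream bits from a Fibonacci LFSR."""
    for _ in range(n):
        out = state & 1
        feedback = 0
        for t in taps:                  # XOR of the tapped bit positions
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
        yield out

bits = list(lfsr_keystream(state=0xACE1, taps=(0, 2, 3, 5), nbits=16, n=8))
print(bits)
```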