31 results for Understandability
Abstract:
The objective of this study was to explore the perception of the legal authorities regarding different report types and visualization techniques for post-mortem radiological findings.
Abstract:
This paper presents our work at the 2016 FIRE CHIS task. Given a CHIS query and a document associated with that query, the task is to classify the sentences in the document as relevant to the query or not, and to further classify the relevant sentences as supporting, neutral, or opposing the claim made in the query. In this paper, we present two different approaches to this classification. In the first approach, we implement two models: we first implement an information retrieval model to retrieve the sentences that are relevant to the query, and then use a supervised learning method to train a classification model that classifies the relevant sentences as support, oppose, or neutral. In the second approach, we use machine learning techniques alone to learn a model that classifies the sentences into four classes (relevant & support, relevant & neutral, relevant & oppose, irrelevant & neutral). Our submission for CHIS uses the first approach.
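The first step of the two-stage approach, retrieving query-relevant sentences with an information retrieval model, can be sketched with a minimal TF-IDF/cosine-similarity ranker. This is an illustration only: the scoring scheme, tokenization, and example query/sentences are assumptions, not the authors' actual system, and the second stage (the supervised support/oppose/neutral classifier) is omitted because it requires labeled training data.

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Build TF-IDF vectors for a small corpus (query + sentences)."""
    tokenized = [t.lower().split() for t in texts]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))          # document frequency per term
    n = len(texts)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        # smoothed idf so terms appearing everywhere still get weight 1
        vecs.append({t: tf[t] * (math.log(n / df[t]) + 1) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_sentences(query, sentences):
    """Score each document sentence's relevance to the query."""
    vecs = tfidf_vectors([query] + sentences)
    qv, svs = vecs[0], vecs[1:]
    return [(s, cosine(qv, sv)) for s, sv in zip(sentences, svs)]

# Hypothetical CHIS-style query and document sentences
query = "does vitamin c cure the common cold"
sentences = [
    "vitamin c shortens the duration of the common cold",
    "the trial enrolled two hundred participants",
]
scores = rank_sentences(query, sentences)
```

Sentences scoring above a chosen relevance threshold would then be handed to the trained classifier for the support/oppose/neutral decision.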
Abstract:
PURPOSE Most people with mental disorders receive treatment in primary care. The charts developed by the Dartmouth Primary Care Cooperative Research Network (COOP) and the World Organization of National Colleges, Academies, and Academic Associations of General Practitioners/Family Physicians (WONCA) have not yet been evaluated as a screen for these disorders using a structured psychiatric interview by an expert or considering diagnoses other than depression. We evaluated the validity and feasibility of the COOP/WONCA Charts as a mental disorders screen by comparing them both with other previously validated questionnaires and with the assessment of a mental health specialist using a structured diagnostic interview. METHODS We trained community health workers and nurse assistants working in a collaborative mental health care model to administer the COOP/WONCA Charts, the 20-item Self-Reporting Questionnaire (SRQ-20), and the World Health Organization Five Well-Being Index (WHO-5) to 120 primary care patients. A psychiatrist blinded to the patients' results on these questionnaires administered the SCID, or Structured Clinical Interview for the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition). RESULTS The area under the receiver operating characteristic curve was at least 0.80 for single items, a 3-item combination, and the total score of the COOP/WONCA Charts, as well as for the SRQ-20 and the WHO-5, for screening both for all mental disorders and for depressive disorders. The accuracy, sensitivity, specificity, and positive and negative predictive values of these measures ranged between 0.77 and 0.92. Community health workers and nurse assistants rated the understandability, ease of use, and clinical relevance of all 3 questionnaires as satisfactory. CONCLUSIONS One-time assessment of patients with the COOP/WONCA Charts is a valid and feasible option for screening for mental disorders by primary care teams.
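The accuracy measures reported in studies like this one all derive from a 2×2 table of screen result versus gold-standard diagnosis. A short sketch with hypothetical counts (invented for illustration; these are not the study's data) shows how each value is computed:

```python
def screening_metrics(tp, fn, fp, tn):
    """Diagnostic accuracy measures from a 2x2 table:
    tp/fn = diseased patients screened positive/negative,
    fp/tn = healthy patients screened positive/negative."""
    total = tp + fn + fp + tn
    return {
        "sensitivity": tp / (tp + fn),   # diseased correctly flagged
        "specificity": tn / (tn + fp),   # healthy correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / total,
    }

# Hypothetical counts for 120 screened patients (illustration only)
m = screening_metrics(tp=40, fn=8, fp=10, tn=62)
```

With these invented counts every measure happens to fall in the 0.77–0.92 band the abstract reports, which is the kind of check such a table makes easy.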
Abstract:
At the core of the analysis task in the development process is information systems requirements modelling. Modelling of requirements has been occurring for many years, and the techniques used have progressed from flowcharting through data flow diagrams and entity-relationship diagrams to today's object-oriented schemas. Unfortunately, researchers have been able to give practitioners only little theoretical guidance on which techniques to use and when. In an attempt to address this situation, Wand and Weber have developed a series of models based on the ontological theory of Mario Bunge: the Bunge-Wand-Weber (BWW) models. Two particular criticisms of the models have persisted, however: the understandability of the constructs in the BWW models and the difficulty of applying the models to a modelling technique. This paper addresses these issues by presenting a meta model of the BWW constructs using a meta language that is familiar to many IS professionals, more specific than plain English text, but easier to understand than the set-theoretic language of the original BWW models. Such a meta model also facilitates the application of the BWW theory to other modelling techniques that have similar meta models defined. Moreover, this approach supports the identification of patterns of constructs that might be common across the meta models of modelling techniques. Such findings are useful in extending and refining the BWW theory. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This dissertation focuses on the strategies consumers use when making purchase decisions. It is organized in two main parts, one centering on descriptive and the other on applied decision making research. In the first part, a new process tracing tool called InterActive Process Tracing (IAPT) is presented, which I developed to investigate the nature of consumers' decision strategies. This tool is a combination of several process tracing techniques, namely Active Information Search, Mouselab, and retrospective verbal protocol. To validate IAPT, two experiments on mobile phone purchase decisions were conducted where participants first repeatedly chose a mobile phone and then were asked to formalize their decision strategy so that it could be used to make choices for them. The choices made by the identified strategies correctly predicted the observed choices in 73% (Experiment 1) and 67% (Experiment 2) of the cases. Moreover, in Experiment 2, Mouselab and eye tracking were directly compared with respect to their impact on information search and strategy description. Only minor differences were found between these two methods. I conclude that IAPT is a useful research tool to identify choice strategies, and that using eye tracking technology did not increase its validity beyond that gained with Mouselab. In the second part, a prototype of a decision aid is introduced that was developed building in particular on the knowledge about consumers' decision strategies gained in Part I. This decision aid, which is called the InterActive Choice Aid (IACA), systematically assists consumers in their purchase decisions. To evaluate the prototype regarding its perceived utility, an experiment was conducted where IACA was compared to two other prototypes that were based on real-world consumer decision aids.
All three prototypes differed in the number and type of tools they provided to facilitate the process of choosing, ranging from low (Amazon) to medium (Sunrise/dpreview) to high functionality (IACA). Overall, participants slightly preferred the prototype of medium functionality, and this prototype was also rated best on the dimensions of understandability and ease of use. IACA was rated best regarding the two dimensions of ease of elimination and ease of comparison of alternatives. Moreover, participants' choices were more in line with the normatively oriented weighted additive strategy when they used IACA than when they used the medium functionality prototype. The low functionality prototype was the least preferred overall. It is concluded that consumers can and will benefit from highly functional decision aids like IACA, but only when these systems are easy to understand and to use.
Abstract:
The dismantling of barriers to world trade has resulted in an increase in commercial and financial transactions, a growing need for capital, the use of new financial instruments, and the consequent development of capital markets, with companies listed on several markets. This phenomenon has also contributed to the globalization of financial information, which, to meet the demands of a broad market, had to find new forms of dissemination without neglecting its qualities, namely understandability, comparability, and reliability. The interest of this theme lies in the importance of accounting to organizations: accounting should be organized so as to provide clear and unambiguous knowledge of the elements needed to calculate tax, to permit its control, and to contain all the data needed to complete the periodic tax returns. The practical part of the work shows, in short, that the starting point for determining taxable income is the accounting result determined under its own rules. In Cape Verde, the tax law, although not yet updated for the SNCRF, establishes some rules of its own to adjust the accounting result to the taxable result. In conclusion, the application of the new Cape Verdean accounting framework (SNCRF) to the correction and determination of the tax base under the IUR has provided a common accounting basis as a starting point for establishing a common tax base, which this final-year project (TFC) concludes makes full sense. Financial statements prepared on the basis of the NRF, and subsidiarily the NICs, constitute the neutral starting point for any discussion of a common tax base.
Abstract:
Within Data Envelopment Analysis, several alternative models allow for an environmental adjustment, and the majority of them deliver divergent results. Decision makers therefore face the difficult task of selecting the most suitable model. This study addresses this difficulty and thereby fills a research gap. First, a two-step web-based survey is conducted. It aims (1) to identify the selection criteria, (2) to prioritize and weight the selection criteria with respect to the goal of selecting the most suitable model, and (3) to collect the preferences about which model is preferable to fulfil each selection criterion. Second, the Analytic Hierarchy Process is used to quantify the preferences expressed in the survey. Results show that the understandability, the applicability, and the acceptability of the alternative models are valid selection criteria. The selection of the most suitable model depends on the preferences of the decision makers with regard to these criteria.
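The weighting step of the Analytic Hierarchy Process derives criteria weights from the principal eigenvector of a pairwise-comparison matrix. A minimal power-iteration sketch illustrates the mechanics; the comparison judgments below (understandability vs. applicability vs. acceptability) are invented for illustration and are not the survey's actual data:

```python
def ahp_weights(M, iterations=100):
    """Approximate the principal eigenvector of a positive
    pairwise-comparison matrix by power iteration; the normalized
    eigenvector gives the AHP priority weights."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # renormalize so weights sum to 1
    return w

# Hypothetical reciprocal judgments on Saaty's 1-9 scale:
# row criterion vs. column criterion
M = [
    [1,     2,     3],   # understandability
    [1 / 2, 1,     2],   # applicability
    [1 / 3, 1 / 2, 1],   # acceptability
]
weights = ahp_weights(M)
```

For these judgments the first criterion dominates, so `weights[0]` comes out largest; a full AHP application would also check the matrix's consistency ratio before trusting the weights.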
Abstract:
Software systems are expanding and becoming increasingly present in everyday activities. The constantly evolving society demands that they deliver more functionality, are easy to use, and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or even fails to perform. The concept of being able to depend on the software is particularly significant when it comes to critical systems. At this point, the quality of a system is regarded as an essential issue, since any deficiencies may lead to considerable financial loss or life endangerment. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied during system development starting at early design stages, increase the likelihood of obtaining a system that works as required. However, formal methods are sometimes considered difficult to utilise in traditional developments. Therefore, it is important to make them more accessible and reduce the gap between the formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs with the use of graphical notation. The understandability of formal modelling is increased due to a compact representation of the development and related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This means that it is necessary to establish certain techniques for the evaluation of rigorous developments. Since we are studying various development settings and methods, specific measurement plans and a set of metrics need to be created for each setting.
Our goal is to provide methods for collecting data and recording evidence of the applicability of rigorous approaches. This would support organisations in making decisions about the integration of formal methods into their development processes. It is important to control the software development, especially in its initial stages. Therefore, we focus on the specification and modelling phases, as well as related artefacts, e.g. models. These have significant influence on the quality of a final system. Since the application of formal methods may increase the complexity of a system, it may impact its maintainability, and thus quality. Our goal is to leverage the quality of a system via metrics and measurements, as well as generic refinement patterns, which are applied to a model and a specification. We argue that they can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines. Moreover, we regard them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide the metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, which are based on structural, syntactical, and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies. The results of the investigation are juxtaposed with the perception of domain experts. It is our aspiration to promote measurements as an indispensable part of the quality control process and a strategy towards quality improvement.
Abstract:
Over the past decade, organizations worldwide have begun to widely adopt agile software development practices, which offer greater flexibility to frequently changing business requirements, better cost effectiveness due to minimization of waste, faster time-to-market, and closer collaboration between business and IT. At the same time, IT services are continuing to be increasingly outsourced to third parties, providing the organizations with the ability to focus on their core capabilities as well as to take advantage of better demand scalability, access to specialized skills, and cost benefits. An output-based pricing model, where the customers pay directly for the functionality that was delivered rather than the effort spent, is quickly becoming a new trend in IT outsourcing, allowing the risk to be transferred away from the customer while at the same time offering much better incentives for the supplier to optimize processes and improve efficiency, consequently producing a true win-win outcome. Despite the widespread adoption of both agile practices and output-based outsourcing, there is little formal research available on how the two can be effectively combined in practice. Moreover, little practical guidance exists on how companies can measure the performance of their agile projects, which are being delivered in an output-based outsourced environment. This research attempted to shed light on this issue by developing a practical project monitoring framework which may be readily applied by organizations to monitor the performance of agile projects in an output-based outsourcing context, thus taking advantage of the combined benefits of such an arrangement. Adapted from the action research approach, this research was divided into two cycles, each consisting of the Identification, Analysis, Verification, and Conclusion phases.
During Cycle 1, a list of six Key Performance Indicators (KPIs) was proposed and accepted by the professionals in the studied multinational organization, which formed the core of the proposed framework and answered the first research sub-question of what needs to be measured. In Cycle 2, a more in-depth analysis was provided for each of the suggested Key Performance Indicators including the techniques for capturing, calculating, and evaluating the information provided by each KPI. In the course of Cycle 2, the second research sub-question was answered, clarifying how the data for each KPI needed to be measured, interpreted, and acted upon. Consequently, after two incremental research cycles, the primary research question was answered describing the practical framework that may be used for monitoring the performance of agile IT projects delivered in an output-based outsourcing context. This framework was evaluated by the professionals within the context of the studied organization and received positive feedback across all four evaluation criteria set forth in this research, including the low overhead of data collection, high value of provided information, ease of understandability of the metric dashboard, and high generalizability of the proposed framework.
Abstract:
The initial objective of this work was to identify how the technical tools, information management systems, and workspaces used in a company's ICT group affect the sharing of tacit knowledge or its obstruction. The work also examined the effect on group learning, the causes of the emergence and personification of tacit knowledge within the group, and the quality of that tacit knowledge. In addition, the work seeks ways in which tacit knowledge can be exploited and shared within the group so that it also enables group learning. The interviews highlighted information management problems arising from the fragmentation of information across different information management systems, obstacles caused by information security regulations, and the storage of critical technical documentation. These hinder the use of stored information in the ICT group. This in turn contributes to the personification of tacit knowledge within the group, and methods for sharing tacit knowledge remain unexploited. The interviews showed that problems in exploiting tacit knowledge are partly due to the structure of the group's internal network of connections. The scarcity of strong links within the group, and the orientation of strong links towards the group's external networks, hinder the transfer of tacit knowledge within the group and cause its personification. Time pressure was found to be a problematic barrier to sharing and exploiting tacit knowledge in the group. Problems related to the adaptation and assimilation of knowledge are tied to time and to the personification of tacit knowledge within the group. Teamwork, and the tacit knowledge sharing methods available in the group's daily activities, proved most suitable for promoting the group's ability to cooperate and the development of its social structure and trust. They also remove barriers to the assimilation and adaptation of knowledge. Teamwork has a positive effect on the group's collective competence and on the development of the group's tacit knowledge. In addition, it improves the understandability of technical documentation.
Abstract:
Antipatterns are "poor" solutions to recurring software design problems. They arise either from bad choices during the design phase or from continuous alterations and changes during program implementation. In the literature, it is generally accepted that antipatterns make programs harder to understand. However, few empirical studies have been conducted to verify the impact of antipatterns on comprehension. As part of this master's thesis, we designed and conducted three experiments, with 24 subjects each, to collect data on subjects' performance during comprehension tasks and to evaluate the impact of two antipatterns, Blob and Spaghetti Code, and of their combinations on program comprehension. We measured subjects' performance in terms of: (1) the NASA Task Load Index (TLX) for effort; (2) the time spent performing the tasks; and (3) their percentages of correct answers. The collected data show that the presence of a single antipattern does not significantly decrease subjects' performance, whereas the combination of two antipatterns hinders it significantly. We conclude that developers can cope with a single antipattern, while combinations of several antipatterns should be avoided, possibly through detection and refactoring.
Abstract:
Changes are made continuously to software source code to address customer needs and correct faults. These continuous changes can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, usually in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without design defects. Yet only a few of these authors have conducted empirical studies of the impact of design defects on comprehension, and none of them has studied the impact of design defects on developers' effort to correct faults. In this thesis, we propose three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to evaluate the impact of the combination of two occurrences of Blob or two occurrences of Spaghetti Code on the performance of developers carrying out comprehension and change tasks. We measure developers' performance using: (1) the NASA task load index for their effort, (2) the time they spent completing their tasks, and (3) their percentages of correct answers.
The results of the two experiments showed that two occurrences of Blob or Spaghetti Code are a significant obstacle to developers' performance during comprehension and change tasks. These results justify earlier research on the specification and detection of design defects. Software development teams should warn developers about high numbers of design defect occurrences and recommend refactorings at each step of the development process to remove these design defects when possible. In the second contribution, we study the relationship between design defects and faults, examining the impact of the presence of design defects on the effort required to correct faults. We measure the effort to correct faults using three indicators: (1) the duration of the correction period, (2) the number of fields and methods affected by the fault correction, and (3) the entropy of fault corrections in the source code. We conduct an empirical study with 12 design defects detected in 54 versions of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the duration of the correction period is longer for faults involving classes with design defects. In addition, correcting faults in classes with design defects changes more files, fields, and methods. We also observed that, after a fault is corrected, the number of design defect occurrences in the classes involved in the correction decreases.
Understanding the impact of design defects on developers' fault-correction effort is important to help development teams better evaluate and predict the impact of their design decisions, and thus channel their efforts to improve the quality of their systems. Development teams should monitor and remove design defects from their systems, because they are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool able to detect design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and take into account design defect occurrences as they find them during comprehension and changes. Researchers have proposed approaches to detect design defect occurrences, but these approaches currently have four limitations: (1) they require deep knowledge of the design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach to detect design defects, based on a machine learning technique, support vector machines, and taking practitioners' feedback into account. Through an empirical study of three systems and four design defects, we showed that SMURF's precision and recall are higher than those of DETEX and BDTEX when detecting design defect occurrences.
We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we showed that SMURF's precision and recall improve when practitioners' feedback is taken into account.
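SMURF is built on support vector machines trained over characteristics of classes. As a rough illustration of that idea only (this is not SMURF itself: the sub-gradient training loop, the two metric features, and the toy labels are all assumptions), a linear SVM can be trained on metric vectors to separate defect-like classes from clean ones:

```python
def train_linear_svm(X, y, lr=0.01, lam=0.001, epochs=1000):
    """Sub-gradient descent on the L2-regularized hinge loss.
    X: feature vectors (e.g. scaled class metrics), y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # point violates the margin: hinge sub-gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # only the regularizer contributes
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 = flagged as a design defect."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Invented metric vectors (say, LOC/100 and number of methods/10) and labels:
# +1 = Blob-like class, -1 = clean class
X = [[9, 6], [8, 7], [7, 5], [1, 1], [2, 1], [1, 2]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

In a feedback loop of the kind the thesis describes, classes flagged by the model and confirmed or rejected by practitioners would be added to `X`/`y` and the model retrained, which is what makes the detection iterative and incremental.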
Abstract:
Sign language animations can lead to better accessibility of information and services for people who are deaf and have low literacy skills in spoken/written languages. Because the word order, syntax, and lexicon of a sign language are distinct from those of the spoken/written language, many deaf people find it difficult to comprehend the text on a computer screen or captions on a television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations. Their coordination with the manual signs is crucial for the interpretation of the signed message. Software to advance the support of facial expressions in the generation of sign language animation could make this technology more acceptable for deaf people. In this survey, we discuss the challenges in facial expression synthesis, and we compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. Beginning with an overview of facial expression linguistics, sign language animation technologies, and some background on animating facial expressions, a discussion of the search strategy and criteria used to select the five projects that are the primary focus of this survey follows. The survey then introduces the work from the five projects under consideration. Their contributions are compared in terms of support for a specific sign language, categories of facial expressions investigated, focus range in the animation generation, use of annotated corpora, input data or hypothesis for their approach, and other factors. Strengths and drawbacks of individual projects are identified along these dimensions. This survey concludes with our current research focus in this area and future prospects.