829 results for Corpus (Creation, Annotation, etc.), Question Answering, Usability, User Satisfaction
Abstract:
This paper studies how se structures are represented in 20 verb entries across nine dictionaries of the Spanish language. These structures are numerous and problematic for both native and non-native speakers. The verbs analysed are of mid-to-high frequency and, in most cases, highly polysemous, which makes it possible to observe the interconnections between the different se structures and the different senses of each verb. Data from the lexicographic analysis are cross-checked against a corpus analysis of the same units. The result shows wide variation, both across and within dictionaries, in the data each dictionary offers and in the way those data are presented. The reasons range from the overall theoretical approach of each project to its practical execution. This leads to the conclusion that the dictionary model currently in use needs to be developed further in order to present lexico-grammatical phenomena such as se verbs in an accurate, clear and exhaustive way.
Abstract:
Lappeenranta University of Technology is studying the use of low-voltage direct current (LVDC) electricity. In cooperation with Järvi-Suomen Energia Oy and Suur-Savon Sähkö Oy, the university has built an experimental low-voltage DC distribution network that provides field conditions for low-voltage research with real customers and makes it possible to verify LVDC technology and other smart-grid functions in the field. The DC link of the network has been built between a 20 kV distribution network and four customers. The 20 kV medium voltage is converted at a converter station to ±750 V low-voltage DC and back to 400/230 V AC close to the customers. The purpose of this bachelor's thesis is to create a database for the university to hold the data and measurement results accumulated from the LVDC network. The database was deemed necessary so that the measurement results of the low-voltage network can later be examined together in a single, uniform format. One research question was how to organize and visualize all the measurement data accumulating on the servers from the network. The thesis also considers three user groups that could potentially benefit from the database, namely household customers, distribution network companies, and the research laboratory, and discusses the usefulness and significance of the database for these users. A second research question was therefore which of the data stored in the database would be essential to capture from the point of view of these customers, and how they could retrieve information from the database. The research methods of the thesis are based on already existing measurement data, and both printed and electronic literature was used. As a result, a database was created with the MySQL Workbench software, together with measurement-data collection and processing programs written in the Python programming language. In addition, a separate MATLAB interface was created for data visualization, illustrating the measurement data of the three customer groups. The database and its data visualization give consumers the opportunity to better understand their own electricity use, and provide distribution companies and research laboratories with information on, among other things, power quality and network loading.
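As a rough illustration of the kind of collection-and-storage program described above (Python feeding measurements into the MySQL database), here is a minimal sketch; the table name, column names, and credentials are illustrative assumptions, not the schema actually designed in the thesis.

```python
# Illustrative sketch: store one LVDC measurement row in a MySQL table.
# Table/column names and credentials are assumptions, not the thesis's actual schema.
import datetime
import mysql.connector  # pip install mysql-connector-python

def store_measurement(customer_id: int, voltage_v: float, current_a: float) -> None:
    conn = mysql.connector.connect(
        host="localhost", user="lvdc", password="secret", database="lvdc_measurements"
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO measurements (customer_id, measured_at, voltage_v, current_a) "
            "VALUES (%s, %s, %s, %s)",
            (customer_id, datetime.datetime.now(), voltage_v, current_a),
        )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    store_measurement(customer_id=1, voltage_v=752.3, current_a=4.1)
```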
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Commercial computer games contain “physics engine” components, responsible for providing realistic interactions among game objects. The question naturally arises of whether these engines can be used to develop educational materials for high school and university physics education. To answer this question, the author's group recently conducted a detailed scientific investigation of the physics engine of Unreal Tournament 2004 (UT2004). This article presents their motivation, methodology, and results. The author presents the findings of experiments that probed the accessibility and fidelity of UT2004's physics engine, examples of educational materials developed, and an evaluation of their use in high school classes. The associated pedagogical implications of this approach are discussed, and the author suggests guidelines for educators on how to deploy the approach. Key resources are presented on an associated Web site.
Abstract:
The business system known as Pyramid does not currently provide its users with a reasonable case management system for support issues. The current system requires the customer to contact the provider by telephone to register new cases. In addition, it offers no way for users to view their current cases without contacting the provider. A solution to this issue is to migrate the current case management system from telephone contact to a web-based platform, where customers can more easily access their current cases and also create new cases directly through the website. This new system would reduce the time required to manually manage each individual case, for both customer and provider, resulting in an overall cost reduction for both parties. The result is a system divided into two sections: the first is an API created in Pyramid that acts as a web service, and the second is a website to which customers can connect. The website allows users to view their current cases and to create new cases directly on the site. All the information used by the website is obtained through the web service inside Pyramid. Analysing the final design of the system, the developers were able to identify both positive and negative aspects of that design. Whether the chosen platform was the optimal choice, and what could be included if the system is developed further, are also discussed. The development process and the method used during development are likewise analysed and discussed, including the positive and negative aspects that were encountered. In addition, the cause and effect of working with a development team smaller than the suggested size are analysed. Lastly, actions that could have been taken to prevent certain issues from occurring are analysed.
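To illustrate the split between the Pyramid web service and the customer-facing website, the following Python sketch shows a hypothetical client listing and creating support cases over a REST-style interface; the URL and JSON field names are assumptions for illustration, not the actual API exposed by Pyramid in this project.

```python
# Hypothetical client for the case-management web service described above.
# The endpoint URL and field names are illustrative assumptions.
import requests

BASE_URL = "https://example.com/pyramid-api"  # placeholder address for the web service

def list_cases(customer_id: str) -> list[dict]:
    """Return the customer's current support cases."""
    resp = requests.get(f"{BASE_URL}/customers/{customer_id}/cases", timeout=10)
    resp.raise_for_status()
    return resp.json()

def create_case(customer_id: str, title: str, description: str) -> dict:
    """Register a new support case directly, instead of phoning the provider."""
    resp = requests.post(
        f"{BASE_URL}/customers/{customer_id}/cases",
        json={"title": title, "description": description},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```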
Abstract:
Sub-lexical linguistic units (e.g., the syllable, the phoneme, or the phone) play a crucial role in language processing. In particular, language processing is deeply influenced by the distribution of these units. For example, the most frequent syllables are articulated more quickly. It is therefore important to have access to tools for creating experimental or clinical material, for the study of normal or pathological language, that is representative of how syllables and phones are used in the spoken language. Access to this type of tool also makes it possible to compare linguistic stimuli according to their distributional statistics, or to study the impact of these statistics on language processing in different populations. Yet, until now, no such tool was available on the use of the sub-lexical linguistic units of spoken Quebec French. To fill this gap, a large corpus of spontaneous spoken Quebec French was built from recordings of 184 Quebec speakers. A syllable database and a phone database were then constructed from this corpus, offering a wealth of information on the structure of the units and on their distributional statistics. The product of this project, called SyllabO+, will be made freely available online at http://speechneurolab.ca/fr/syllabo upon publication of the article describing it. This unique tool will be of great use in several fields, such as cognitive neuroscience, psycholinguistics, experimental psychology, phonetics, phonology, speech-language pathology, and the study of language acquisition.
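As a toy illustration of the distributional statistics such a database provides (it is not SyllabO+'s actual pipeline), the Python sketch below computes raw and relative frequencies of syllables from an assumed, already-syllabified corpus.

```python
# Illustrative computation of distributional statistics (relative frequencies) of syllables.
# The tiny corpus and its syllabification are invented; SyllabO+ is built from real recordings.
from collections import Counter

syllabified_corpus = [["bon", "jour"], ["sa", "lut"], ["bon", "soir"], ["bon", "jour"]]

counts = Counter(syl for word in syllabified_corpus for syl in word)
total = sum(counts.values())
for syllable, count in counts.most_common():
    # syllable, raw count, relative frequency
    print(f"{syllable}\t{count}\t{count / total:.3f}")
```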
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" can possibly indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to those actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. The tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
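A minimal sketch of the core idea follows: build per-user n-gram counts from web-log action sequences, then flag sessions whose average likelihood under the user's model is unusually low. The value of n, the add-one smoothing, and the anomaly threshold are illustrative assumptions, not the parameters used in Intruder Detector.

```python
# Sketch of n-gram modelling of user actions for continuous authentication.
# Smoothing, n, and the anomaly threshold are illustrative assumptions.
import math
from collections import Counter

def ngrams(actions: list[str], n: int = 2):
    return zip(*(actions[i:] for i in range(n)))

def train(action_log: list[str], n: int = 2) -> Counter:
    """Count the n-grams a user typically produces."""
    return Counter(ngrams(action_log, n))

def session_log_likelihood(model: Counter, session: list[str], n: int = 2) -> float:
    """Average log-probability of the session under the user's n-gram model (add-one smoothing)."""
    total = sum(model.values())
    vocab = len(model) + 1
    grams = list(ngrams(session, n))
    if not grams:
        return 0.0
    score = sum(math.log((model[g] + 1) / (total + vocab)) for g in grams)
    return score / len(grams)

# Example: flag the session if its likelihood falls far below typical behaviour.
history = ["login", "search", "view", "search", "view", "logout"] * 50
model = train(history)
suspect = ["login", "export_all", "delete", "delete", "export_all"]
if session_log_likelihood(model, suspect) < -4.0:   # assumed threshold
    print("large deviation from normal behaviour - possible intruder")
```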
Abstract:
For some years now the Internet and World Wide Web communities have envisaged moving to a next generation of Web technologies by promoting a globally unique and persistent identifier for identifying and locating many forms of published objects. These identifiers are called Universal Resource Names (URNs) and they hold out the prospect of being able to refer to an object by what it is (signified by its URN), rather than by where it is (the current URL technology). One early implementation of URN ideas is the Unicode-based Handle technology, developed at CNRI in Reston, Virginia. The Digital Object Identifier (DOI) is a specific URN naming convention proposed just over 5 years ago and now administered by the International DOI Foundation, founded by a consortium of publishers and based in Washington DC. The DOI is being promoted for managing electronic content and for intellectual rights management of it, either using the published work itself or, increasingly, via metadata descriptors for the work in question. This paper describes the use of the CNRI handle parser to navigate a corpus of papers for the Electronic Publishing journal. These papers are in PDF format and hosted on our server in Nottingham. For each paper in the corpus a metadata descriptor is prepared for every citation appearing in the References section. The important factor is that the underlying handle is resolved locally in the first instance. In some cases (e.g. cross-citations within the corpus itself and links to known resources elsewhere) the handle can be handed over to CNRI for further resolution. This work shows the encouraging prospect of being able to use persistent URNs not only for intellectual property negotiations but also for search and discovery. In the test domain of this experiment every single resource referred to within a given paper can be resolved, at least to the level of metadata about the referred object. If the Web were to become more fully URN-aware then a vast directed graph of linked resources could be accessed via persistent names. Moreover, if these names delivered embedded metadata when resolved, the way would be open for a new generation of vastly more accurate and intelligent Web search engines.
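For readers unfamiliar with handle resolution, the Python sketch below queries the public Handle System proxy's REST interface and prints the record's typed values; the handle shown is only a placeholder, and the paper's own navigation used the CNRI handle parser with local resolution rather than this proxy.

```python
# Sketch: resolve a handle (e.g. a DOI-style handle) through the public hdl.handle.net proxy.
# The handle value below is a placeholder, not one of the corpus's actual identifiers.
import requests

def resolve_handle(handle: str) -> list[dict]:
    """Return the handle record's values (URLs, metadata entries, ...) as parsed JSON."""
    resp = requests.get(f"https://hdl.handle.net/api/handles/{handle}", timeout=10)
    resp.raise_for_status()
    return resp.json().get("values", [])

for value in resolve_handle("10.1000/1"):   # placeholder handle
    print(value["type"], value["data"])
```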
Abstract:
The traditional process of filling the medicine trays and dispensing the medicines to the patients in hospitals is done manually by reading the printed paper medicine chart. This process can be very strenuous and error-prone, given the number of sub-tasks involved in the entire workflow and the dynamic nature of the work environment. Therefore, efforts are being made to digitalise the medication dispensation process by introducing a mobile application called the Smart Dosing application. The introduction of the Smart Dosing application into the hospital workflow raises security concerns and calls for security requirement analysis. This thesis is written as a part of the smart medication management project at the Embedded Systems Laboratory, Åbo Akademi University. The project aims at digitising the medicine dispensation process by integrating information from various health systems and making it available through the Smart Dosing application. The application is intended to be used on a tablet computer incorporated into the medicine tray. The smart medication management system includes the medicine tray, the tablet device, and the medicine cups with their cup holders. Introducing the Smart Dosing application should not interfere with the existing process carried out by the nurses, and it should require only minimal modifications to the tray design and the workflow. Re-designing the tray involves integrating the device running the application into the tray in a manner that users find convenient and that helps them make fewer errors while using it. The main objective of this thesis is to enhance the security of the hospital medicine dispensation process by ensuring the security of the Smart Dosing application at various levels. The method used was to analyse how the tray design and the application user interface design can help prevent errors, and which secure technology choices have to be made before starting the development of the next prototype of the Smart Dosing application. The thesis first examines the context of use of the application, the end-users and their needs, and the errors made in the everyday medication dispensation workflow, through continuous discussions with the nursing researchers. It then gains insight into the vulnerabilities, threats and risks of using a mobile application in the hospital medication dispensation process. The resulting list of security requirements was produced by analysing the previously built prototype of the Smart Dosing application, through continuous interactive discussions with the nursing researchers, and through an exhaustive state-of-the-art study of the security risks of using mobile applications in a hospital context. The thesis also uses the OCTAVE Allegro method to help the reader understand the likelihood and impact of threats, and what steps should be taken to prevent or fix them. The security requirements obtained are a starting point for the developers of the next iteration of the Smart Dosing application prototype.
Abstract:
There may be advantages to be gained by combining Case-Based Reasoning (CBR) techniques with numerical models. In this paper we consider how CBR can be used as a flexible query engine to improve the usability of numerical models. In particular, it can help to solve inverse and mixed problems, and to solve constraint problems. We discuss this idea with reference to the illustrative example of a pneumatic conveyor. We describe a model of the problem of particle degradation in such a conveyor, and the problems faced by design engineers. The solution of these problems requires a system that allows iterative sharing of control between the user, the CBR system, and the numerical model. This multi-initiative interaction is illustrated for the pneumatic conveyor by means of Unified Modeling Language (UML) collaboration and sequence diagrams. We show approaches to the solution of these problems via a CBR tool.
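As a rough illustration of how case retrieval can answer an inverse query and seed a numerical model run (this is a sketch, not the CBR tool used in the paper), the Python snippet below retrieves the stored case closest to a partially specified query; the conveyor attributes and values are invented for illustration.

```python
# Illustrative CBR-style retrieval used to seed a numerical model; all attributes are invented.
def distance(case: dict, query: dict) -> float:
    """Distance over only the attributes the user has constrained (supports inverse/mixed queries)."""
    return sum((case[k] - v) ** 2 for k, v in query.items())

def retrieve(case_base: list[dict], query: dict) -> dict:
    return min(case_base, key=lambda c: distance(c, query))

# Hypothetical stored conveyor cases: air velocity (m/s), bend radius (m), observed degradation (%).
case_base = [
    {"velocity": 18.0, "bend_radius": 0.5, "degradation": 7.2},
    {"velocity": 25.0, "bend_radius": 0.5, "degradation": 14.8},
    {"velocity": 25.0, "bend_radius": 1.0, "degradation": 11.3},
]

# Inverse problem: the engineer fixes an acceptable degradation and asks for design parameters.
seed = retrieve(case_base, {"degradation": 10.0})
print("starting point for the numerical model:", seed)
```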
Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question "How does collection-level metadata mediate scholarly subject access to aggregated digital collections?" This goal was achieved using three research methods: • in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library; • transaction log analysis of user interactions with Opening History; and • interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory. It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes 1) describing a collection's subject matter with mutually complementary values in different metadata fields, and 2) encoding a variety of collection properties and characteristics in the free-text Description field; the types and genres of objects in a digital collection, along with topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yields a number of interesting findings. Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, while subject browse (topical and geographic) is used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate-body searches, and fewer individual-person, event, and class-of-persons searches, were observed in collection searches than in item searches. While a collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would fail to retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than 4 times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage.
They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end-user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for the scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices. This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
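A small, purely illustrative Python sketch of the kind of transaction-log tallying described above (collection-level versus item-level page views) follows; the URL patterns and log lines are assumptions for the example, not Opening History's actual log format.

```python
# Illustrative tally of collection-level vs item-level page views from a web transaction log.
# The URL patterns are assumed for the example, not the aggregation's real log format.
import re
from collections import Counter

def tally_page_views(log_lines: list[str]) -> Counter:
    counts = Counter()
    for line in log_lines:
        if re.search(r"GET /collections/\d+", line):
            counts["collection"] += 1
        elif re.search(r"GET /items/\d+", line):
            counts["item"] += 1
    return counts

sample = [
    '127.0.0.1 - - "GET /collections/42 HTTP/1.1" 200',
    '127.0.0.1 - - "GET /items/1001 HTTP/1.1" 200',
    '127.0.0.1 - - "GET /collections/42 HTTP/1.1" 200',
]
print(tally_page_views(sample))  # e.g. Counter({'collection': 2, 'item': 1})
```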
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, and the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them according to user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
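Since the abstract notes that the first model's algorithms can answer SPARQL queries with ORDER BY and LIMIT, here is a minimal top-k style query run with rdflib in Python; the toy graph data and the ex:score property are invented for illustration and are unrelated to the thesis's benchmark datasets.

```python
# Minimal top-k style SPARQL query (ORDER BY ... LIMIT k) over a toy edge-labeled graph.
# The data and the ex:score property are invented for illustration.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .   ex:bob   ex:score 7 .
ex:alice ex:knows ex:carol . ex:carol ex:score 9 .
ex:alice ex:knows ex:dave .  ex:dave  ex:score 3 .
""", format="turtle")

top_k = g.query("""
PREFIX ex: <http://example.org/>
SELECT ?friend ?score WHERE {
    ex:alice ex:knows ?friend .
    ?friend ex:score ?score .
}
ORDER BY DESC(?score)
LIMIT 2
""")
for friend, score in top_k:
    print(friend, score)
```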
Abstract:
Mastery of written language in school subjects is a predictor of academic success. Although, at the secondary level, the ability to communicate appropriately is a cross-curricular competency to be developed throughout the curriculum, the idea of transversality is called into question, because each discipline has its own linguistic particularities, which must be taught in the production-reception contexts specific to each field of learning. This research in the didactics of writing and of science aimed to identify the linguistic characteristics specific to writing a laboratory report. An analysis grid was used to record the occurrences of linguistic units in an observation corpus of 62 laboratory reports written by Secondary 4 students in environmental science and technology. The results show that the laboratory report contains linguistic units ranging from frequent to rare depending on the writer's intention in the different parts of the report (purpose, hypothesis, etc.), and that the errors made by the students relate to conventions, precision, and spelling. Teaching material was designed in light of these observations in order to equip science teachers to better explain the conventions specific to writing this type of text.
Abstract:
This document describes the work carried out with the company MedSUPPORT[1] on the development of a digital platform for analysing the satisfaction of patients of healthcare units. Today, assessing customer satisfaction is an important procedure that companies should use as one more tool for evaluating their products or services. For healthcare units, assessing patient satisfaction is currently regarded as a fundamental objective of health services and has come to occupy an increasingly important place in the evaluation of their quality. In this context, the goal was to develop a digital platform for analysing the satisfaction of patients of healthcare units. An initial study of the concept of consumer and patient satisfaction consolidated the concepts associated with the topic under study. Understanding the eight dimensions that, according to researchers, make up patient satisfaction is one of the key points of that initial study. To assess satisfaction, the patient must be questioned directly. To this end, a satisfaction survey was developed, with each of its elements studied carefully. The development of the satisfaction survey followed these steps: planning the questionnaire, starting from the eight dimensions of patient satisfaction down to the metrics to be assessed with the patient; analysis of the data to be collected, defining, for each metric, whether the data would be nominal, ordinal, or derived from balanced scales; and, finally, careful wording of the survey questions to ensure that the patient perceives the objective of each question as clearly as possible. The specification of the platform and of the questionnaire was based on several studies, among them a benchmarking analysis[2], which established that the survey will be located in an accessible area of the healthcare unit, answered on a touch screen (tablet), and hosted on the web. Web applications developed today have an appealing and intuitive design, so it was essential to study the design of the web application to ensure that the colours used, the typeface, and the placement of information are the most appropriate. The web application was developed in the Ruby programming language using the Ruby on Rails framework. For the implementation, the available technologies were studied, with a focus on the choice of database management system. The development of the web application also aimed to improve the management of the information generated by the responses to the satisfaction survey. The MedSUPPORT employee is responsible for information management, so their needs were taken into account: an information management menu is made available to the application administrator (the MedSUPPORT employee), allowing a simplified analysis of the current state through a dashboard-style panel and, to improve internal data analysis, providing a function to export the data to a spreadsheet. To validate the study, functional tests of the platform were carried out, covering both its functionality and its use in a real context by the patients surveyed in the healthcare units. The real-context tests aimed to validate the concept with the surveyed patients.