911 results for Human-computer Interface
Abstract:
Introduction: Language is the most important means of communication and plays a central role in our everyday life. Brain damage (e.g. stroke) can lead to acquired disorders of language affecting the four linguistic modalities (i.e. reading, writing, speech production and comprehension) in different combinations and levels of severity. Every year, more than 5000 people are affected by aphasia in Switzerland alone (Aphasie Suisse). Since aphasia is highly individual, the level of difficulty and the content of tasks have to be adapted continuously by the speech therapists. Computer-based assignments allow patients to train independently at home and thus increase the frequency of therapy. Recent developments in tablet computers have opened new opportunities to use these devices for rehabilitation purposes. Especially older people, who have no prior experience with computers, can benefit from these new technologies. Methods: The aim of this project was to develop an application that, on the one hand, enables patients to train language-related tasks autonomously and, on the other hand, allows speech therapists to assign exercises to the patients and to track their results online. Seven categories with various types of assignments were implemented. The application has two parts, separated by a user management system into a patient interface and a therapist interface. Both interfaces were evaluated using the SUS (System Usability Scale). The patient interface was tested by 15 healthy controls and 5 patients. For the patients, we also collected tracking data for further analysis. The therapist interface was evaluated by 5 speech therapists. Results: The SUS scores were x_patients = 98 and x_healthy = 92.7 (median = 95, SD = 7, 95% CI [88.8, 96.6]) for the patient interface, and x_therapists = 68 for the therapist interface. Conclusion: Both the patients and the healthy subjects gave the patient interface high SUS scores.
These scores are considered "best imaginable". The therapist interface received a lower SUS score than the patient interface, but it is still considered "good" and "usable". The user tracking system and the interviews revealed that there is room for improvement and inspired new ideas for future versions.
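The SUS values reported above come from the standard ten-item questionnaire. A minimal sketch of the usual scoring rule, which the abstract does not spell out (the response pattern in the example is illustrative only):

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 Likert
    responses, item 1 first: odd-numbered items contribute
    (response - 1), even-numbered items (5 - response); the sum
    is scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A fully favourable response pattern yields the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Scores around 92-98, as reported for the patient interface, sit in the top band of the commonly used adjective scale.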
Abstract:
Background: Sensor-based recordings of human movements are becoming increasingly important for the assessment of motor symptoms in neurological disorders beyond rehabilitative purposes. ASSESS MS is a movement recording and analysis system being developed to automate the classification of motor dysfunction in patients with multiple sclerosis (MS) using depth-sensing computer vision. It aims to provide a more consistent and finer-grained measurement of motor dysfunction than currently possible. Objective: To test the usability and acceptability of ASSESS MS with health professionals and patients with MS. Methods: A prospective, mixed-methods study was carried out at 3 centers. After a 1-hour training session, a convenience sample of 12 health professionals (6 neurologists and 6 nurses) used ASSESS MS to capture recordings of standardized movements performed by 51 volunteer patients. Metrics for effectiveness, efficiency, and acceptability were defined and used to analyze data captured by ASSESS MS, video recordings of each examination, feedback questionnaires, and follow-up interviews. Results: All health professionals were able to complete recordings using ASSESS MS, achieving high levels of standardization on 3 of 4 metrics (movement performance, lateral positioning, and clear camera view but not distance positioning). Results were unaffected by patients’ level of physical or cognitive disability. ASSESS MS was perceived as easy to use by both patients and health professionals with high scores on the Likert-scale questions and positive interview commentary. ASSESS MS was highly acceptable to patients on all dimensions considered, including attitudes to future use, interaction (with health professionals), and overall perceptions of ASSESS MS. Health professionals also accepted ASSESS MS, but with greater ambivalence arising from the need to alter patient interaction styles. 
There was little variation in results across participating centers, and no differences between neurologists and nurses. Conclusions: In typical clinical settings, ASSESS MS is usable and acceptable to both patients and health professionals, generating data of a quality suitable for clinical analysis. An iterative design process appears to have been successful in accounting for factors that permit ASSESS MS to be used by a range of health professionals in new settings with minimal training. The study shows the potential of shifting ubiquitous sensing technologies from research into the clinic through a design approach that gives appropriate attention to the clinic environment.
Abstract:
The question concerning the circumstances under which it is advantageous for a company to outsource certain information systems functions has been a controversial issue for the last decade. While opponents emphasize the risks of outsourcing, based on the loss of strategic potential and increased transaction costs, proponents emphasize the strategic benefits of outsourcing and its high potential for cost savings. This paper brings together both views by examining the conditions under which both the strategic potential and the savings in production and transaction costs of developing and maintaining software applications are better achieved in-house than by an external vendor. We develop a theoretical framework from three complementary theories and test it empirically based on a mail survey of 139 German companies. The results show that insourcing is more cost efficient and more advantageous in creating strategic benefits through IS if the provision of application services requires a high amount of firm-specific human assets. These relationships, however, are partially moderated by differences in the trustworthiness and intrinsic motivation of internal versus external IS professionals. Moreover, capital shares with an external vendor can lower the risk of high transaction costs as well as the risk of losing the strategic opportunities of an IS.
Abstract:
A computer simulation study describing the electrophoretic separation and migration of methadone enantiomers in the presence of free and immobilized (2-hydroxypropyl)-β-CD is presented. The 1:1 interaction of methadone with the neutral CD was simulated using experimentally determined mobilities and complexation constants for the complexes in a low-pH BGE comprising phosphoric acid and KOH. The use of complex mobilities represents free-solution conditions with the chiral selector being a buffer additive, whereas complex mobilities set to zero provide data that mimic migration and separation with the chiral selector being immobilized, that is, CEC conditions in the absence of unspecific interaction between analytes and the chiral stationary phase. Simulation data reveal that separations are quicker, electrophoretic displacement rates are reduced, and sensitivity is enhanced in CEC with on-column detection in comparison to free-solution conditions. Simulation is used to study electrophoretic analyte behavior at the interface between the sample and the CEC column with the chiral selector (analyte stacking) and at the rear end, where analytes leave the environment with complexation (analyte destacking). The latter aspect is relevant for off-column analyte detection in CEC and is described here for the first time via the dynamics of migrating analyte zones. Simulation provides insight into means to counteract analyte dilution at the column end via use of a BGE with higher conductivity. Furthermore, the impact of EOF on analyte migration, separation, and detection is simulated for configurations with the selector zone being displaced or remaining immobilized under buffer flow. In all cases, the data reveal that detection should occur within or immediately after the selector zone.
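The contrast between free and immobilized selector rests on the standard effective-mobility expression for fast 1:1 complexation; a minimal sketch with purely illustrative mobilities and complexation constants, not the experimentally determined values used in the study:

```python
def effective_mobility(mu_free, mu_complex, K, c_selector):
    """Effective mobility under fast 1:1 complexation:
    mu_eff = (mu_free + mu_complex*K*c) / (1 + K*c).
    Setting mu_complex = 0 mimics an immobilized selector
    (CEC-like conditions, as in the simulations above)."""
    x = K * c_selector
    return (mu_free + mu_complex * x) / (1.0 + x)

# Illustrative numbers only: two enantiomers sharing mu_free but
# binding the cyclodextrin with different constants K (per mM),
# selector present at 10 mM.
mu_free = 20e-9   # m^2 V^-1 s^-1
mu_cplx = 5e-9
c = 10.0          # mM
for K in (0.05, 0.08):
    print(K,
          effective_mobility(mu_free, mu_cplx, K, c),   # free selector
          effective_mobility(mu_free, 0.0, K, c))       # immobilized
```

With these numbers, the mobility difference between the enantiomers is larger when the complex mobility is zero, consistent with the quicker separations reported for the CEC-like case.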
Abstract:
The evolution of wireless access technologies and mobile devices, together with the constant demand for video services, has created new Human-Centric Multimedia Networking (HCMN) scenarios. However, HCMN poses several challenges for content creators and network providers to deliver multimedia data with an acceptable quality level based on the user experience. Moreover, human experience and context, as well as network information play an important role in adapting and optimizing video dissemination. In this paper, we discuss trends to provide video dissemination with Quality of Experience (QoE) support by integrating HCMN with cloud computing approaches. We identified five trends coming from such integration, namely Participatory Sensor Networks, Mobile Cloud Computing formation, QoE assessment, QoE management, and video or network adaptation.
Abstract:
Evidence suggests that sex-based differences in immune function may predispose women to numerous hypersensitivity conditions such as Systemic lupus erythematosus (SLE), Hashimoto's thyroiditis and asthma. To date, the exact mechanisms of sexual dimorphism in immunity are not fully characterized, but sex hormones such as 17-β estradiol (E2) and progesterone (PR) are believed to be involved. Since E2 and PR may modulate the production of critical regulatory cytokines, we sought to characterize their effects on the in vitro human type-1/type-2 cytokine balance. We hypothesized that E2 and/or PR alter cytokine production and influence costimulatory molecule expression and apoptosis. We first described the effect of E2 and/or PR on type-1 (IFN-γ and IL-12) and type-2 (IL-4 and IL-10) cytokine production by human peripheral blood mononuclear cells (PBMC) treated with various T-lymphocyte and monocyte stimuli. E2 and/or PR were each used at concentrations similar to those found at the maternal-fetal interface during pregnancy. At these concentrations, E2 increased IFN-γ and IL-12 production, and PR decreased IFN-γ production and tended to increase IL-4 production. Furthermore, the combination of E2+PR decreased IL-12 production. This suggests that E2 shifts the type-1/type-2 cytokine balance towards a type-1 response and that PR and E2+PR shift the balance towards a type-2 response. Next, we used intracellular cytokine detection to demonstrate that E2 and/or PR are capable of altering cytokine production of CD3+ T-cells and the CD3+CD4+ and CD3+CD8+ subsets. In addition, we used the H9 T-lymphocyte cell line and the THP-1 monocyte cell line to show that E2 and/or PR can induce cytokine effects in both T-cells and monocytes independent of their interaction. Lastly, we determined the effect of E2 and/or PR on costimulatory molecule expression and apoptosis as potential mechanisms for the cytokine-induced alterations.
E2 increased and PR decreased CD80 expression on THP-1 cells, and PR and E2+PR decreased CD28 expression in PBMC and Jurkat cells. Furthermore, E2, PR and E2+PR increased Fas-mediated apoptosis in Jurkat cells, and E2 increased FasL expression on THP-1 cells. Thus, E2 and/or PR may alter the cytokine balance by modulating the CD28/CD80 costimulatory pathway and apoptosis.
Abstract:
This study evaluated the administration-time-dependent effects of a stimulant (Dexedrine 5-mg), a sleep-inducer (Halcion 0.25-mg) and placebo (control) on human performance. The investigation was conducted on 12 diurnally active (0700-2300) male adults (23-38 yrs) using a double-blind, randomized, six-way crossover, three-treatment, two-timepoint (0830 vs 2030) design. Performance tests were conducted hourly during sleepless 13-hour studies using a computer-generated, controlled and scored multi-task cognitive performance assessment battery (PAB) developed at the Walter Reed Army Institute of Research. Specific tests were Simple and Choice Reaction Time, Serial Addition/Subtraction, Spatial Orientation, Logical Reasoning, Time Estimation, Response Timing and the Stanford Sleepiness Scale. The major index of performance was "Throughput", a combined measure of speed and accuracy. For the placebo condition, Single and Group Cosinor Analysis documented circadian rhythms in cognitive performance for the majority of tests, both for individuals and for the group. Performance was best around 1830-2030 and most variable around 0530-0700, when sleepiness was greatest (0300). Morning Dexedrine dosing marginally enhanced performance, by an average of 3% with reference to the corresponding-in-time control level. Dexedrine AM also increased alertness by 10% over the AM control. Dexedrine PM failed to improve performance with reference to the corresponding PM control baseline. Comparing AM and PM Dexedrine administrations, AM performance was 6% better, with subjects 25% more alert. Morning Halcion administration caused a 7% performance decrement and a 16% increase in sleepiness; evening administration caused a 13% decrement and a 10% increase in sleepiness compared to corresponding-in-time control data.
Performance was 9% worse and sleepiness 24% greater after evening versus morning Halcion administration. These results suggest that, for evening Halcion dosing, the overnight sleep deprivation coinciding with the circadian nadir in performance, together with the drug's CNS depressant effects, combine to produce performance degradation. For Dexedrine, morning administration resulted in only marginal performance enhancement; Dexedrine in the evening was less effective, suggesting that the 5-mg dose level may be too low to counteract the partial sleep deprivation and the nocturnal nadir in performance.
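Single cosinor analysis, as used above, amounts to least-squares fitting a cosine of known period. A minimal sketch on synthetic data; the 19:00 peak merely echoes the reported 1830-2030 performance peak, and all numbers are illustrative:

```python
import math
import numpy as np

def single_cosinor(t_hours, y, period=24.0):
    """Least-squares single-cosinor fit of y ~ M + A*cos(w*t + phi),
    linearized as M + b*cos(w*t) + g*sin(w*t) with w = 2*pi/period."""
    w = 2.0 * math.pi / period
    t = np.asarray(t_hours, dtype=float)
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (M, b, g), *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    A = math.hypot(b, g)
    phi = math.atan2(-g, b)  # peak falls at t = -phi/w (mod period)
    return M, A, phi

# Synthetic, noise-free rhythm peaking at 19:00; the fit recovers
# mesor, amplitude and acrophase exactly:
t = np.arange(0.0, 24.0, 1.0)
y = 100.0 + 8.0 * np.cos(2.0 * math.pi / 24.0 * (t - 19.0))
M, A, phi = single_cosinor(t, y)
peak_hour = (-phi / (2.0 * math.pi / 24.0)) % 24.0
```

On real hourly PAB data the fit is the same; only the residual noise and the significance testing differ.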
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in the human brain white matter. In these methods, directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with the DSI-derived ODFs and tractography. However, there are only two studies in the literature which validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the few studies which optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, to employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and to develop user-friendly software for DSI data reconstruction and analysis. Phantoms with a fixed configuration of two fibers crossing at 90° and 45°, respectively, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and the absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing fiber ODF.
The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing region in the 90°, 45° and 60° phantoms resulted in successful detection of the angular information, with mean ± SD of 86.93° ± 2.65°, 44.61° ± 1.6° and 60.03° ± 2.21°, respectively, while the ODFs in regions containing single fibers were simultaneously enhanced. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking from known crossing fiber regions in normal human subjects were shown, and an in-house MATLAB software package which streamlines data reconstruction and post-processing for DSI, with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validation of reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology (when applied as an additional DSI post-processing step) significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
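The mean ± SD angular quantification above boils down to measuring the angle between detected ODF peak directions, remembering that fiber orientations are axial. A minimal sketch; the vectors and sample values are hypothetical, not data from the dissertation:

```python
import math

def crossing_angle(v1, v2):
    """Angle in degrees between two fiber peak directions.  Fiber
    orientations are axial (v and -v describe the same fiber), so
    the result is folded into [0, 90]."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return min(ang, 180.0 - ang)

def mean_sd(xs):
    """Sample mean and (n-1) standard deviation."""
    m = sum(xs) / len(xs)
    return m, math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Hypothetical per-measurement peak pairs from a nominal 90° phantom:
measured = [crossing_angle((1.0, 0.0, 0.0), (0.05, 1.0, 0.0)),
            crossing_angle((1.0, 0.02, 0.0), (0.0, 1.0, 0.0))]
m, s = mean_sd(measured)
```

Repeating this over all crossing-region voxels yields figures of the form 86.93° ± 2.65° reported above.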
Abstract:
BACKGROUND: Antiretroviral therapy has changed the natural history of human immunodeficiency virus (HIV) infection in developed countries, where it has become a chronic disease. This clinical scenario requires a new approach to simplify follow-up appointments and facilitate access to healthcare professionals. METHODOLOGY: We developed a new internet-based home care model covering the entire management of chronic HIV-infected patients. This was called Virtual Hospital. We report the results of a prospective randomised study performed over two years, comparing standard care received by HIV-infected patients with Virtual Hospital care. HIV-infected patients with access to a computer and broadband were randomised to be monitored either through Virtual Hospital (Arm I) or through standard care at the day hospital (Arm II). After one year of follow up, patients switched their care to the other arm. Virtual Hospital offered four main services: Virtual Consultations, Telepharmacy, Virtual Library and Virtual Community. A technical and clinical evaluation of Virtual Hospital was carried out. FINDINGS: Of the 83 randomised patients, 42 were monitored during the first year through Virtual Hospital (Arm I) and 41 through standard care (Arm II). Baseline characteristics of patients were similar in the two arms. The level of technical satisfaction with the virtual system was high: 85% of patients considered that Virtual Hospital improved their access to clinical data and they felt comfortable with the videoconference system. Neither clinical parameters [level of CD4+ T lymphocytes, proportion of patients with an undetectable level of viral load (p = 0.21) and compliance levels >90% (p = 0.58)] nor the evaluation of quality of life or psychological questionnaires changed significantly between the two types of care. CONCLUSIONS: Virtual Hospital is a feasible and safe tool for the multidisciplinary home care of chronic HIV patients. 
Telemedicine should be considered an appropriate support service for the management of chronic HIV infection. TRIAL REGISTRATION: ClinicalTrials.gov: NCT01117675.
Abstract:
Identification and tracking of objects in specific environments such as harbors or security areas is a matter of great importance nowadays. For this purpose, numerous systems based on different technologies have been developed, resulting in a great amount of gathered data displayed through a variety of interfaces. This information has to be evaluated by human operators in order to make the correct decisions, sometimes under highly critical situations demanding both speed and accuracy. To address this problem we describe IDT-3D, a platform for identification and tracking of vessels in a harbor environment, able to represent fused information in real time through a Virtual Reality application. The effectiveness of using IDT-3D as an integrated surveillance system is currently under evaluation. Preliminary results point to a significant decrease in the reaction and decision-making times of operators facing a critical situation. Although the current application focus of IDT-3D is quite specific, the results of this research could be extended to the identification and tracking of targets in other controlled environments of interest, such as coastlines, borders or even urban areas.
Abstract:
Advances in solid-state lighting have overcome common limitations of optical wireless communication, such as the power needed to compensate for light dispersion. The modification of lamp drivers to take advantage of their switching behaviour, so as to include data links while maintaining the illumination control they provide, has recently been proposed. In this paper, a remote access application using visible light communications is presented that provides wireless access to a remote computer using a touchscreen as the user interface.
Abstract:
The Universidad Politécnica de Madrid (UPM) includes schools and faculties devoted to engineering degrees, architecture and computer science, which are now undergoing a rapid metamorphosis under the EHEA Bologna Plan into degree, master and doctorate structures. They are focused on action in machines, constructions and enterprises, which are subject to risks created by machines, humans and the environment. These risks appear in actions such as use loads, wind, snow, waves, flows, earthquakes, forces and effects in machines, vehicle behavior, chemical effects, and other environmental factors, including effects on crops, cattle and forests, as well as varied essential economic and social disturbances. The emphasis of the authors in this session is mainly on risks of natural origin, such as hail, wind, snow or waves, which are not exactly known a priori but are often modeled with statistical distributions giving expected extreme values for convenient return periods. These distributions are estimated from measurements over time, statistics of extremes, and models of hazard scenarios and of the responses of man-made constructions or devices. In each engineering field, theories have been built about hazard scenarios and how to cover important risks. Engineers must ensure that the systems they handle, such as vehicles, machines, firms, agricultural land or forests, achieve production with sufficient safety for persons and with decent economic results in spite of risks. For that, risks must be considered in planning, realization and operation, and safety margins must be provided, but at a reasonable cost. That is, a small level of risk will often remain, due to cost limitations or to rare hazards, and it may be covered by insurance, as in transport with cars, ships or aircraft, in agriculture for hail, or for fire in houses or forests.
These and other decisions, about quality, security for people or business financial risks, are sometimes addressed with Decision Theory models, often using tools from Statistics or Operational Research. The authors have carried out, and are continuing, field surveys on the consideration of risk in the degree programmes at UPM, making a deep analysis of curricula that takes into account the new degree structures of the EHEA Bologna Plan, and they have considered the risk frameworks offered by diverse schools of decision theory. This gives a picture of needs and uses, and leads to recommendations for improving the teaching of risk, which may include special subjects specifically oriented to each degree, school or faculty, to be recommended for inclusion in the curricula, elaborated and presented using a multi-criteria decision model.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
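The error transfer just described can be made concrete with a rough independence-based bound on a chain of tools; the stage accuracies below are hypothetical:

```python
def pipeline_accuracy(stage_accuracies):
    """Rough upper-bound estimate of end-to-end accuracy for a chain
    of annotation tools, assuming a stage can only succeed on units
    that every earlier stage handled correctly (and that stage
    errors are independent)."""
    acc = 1.0
    for a in stage_accuracies:
        acc *= a
    return acc

# Hypothetical figures: a 95%-accurate POS tagger feeding a sense
# tagger that is 90% accurate on correctly POS-tagged input:
print(round(pipeline_accuracy([0.95, 0.90]), 3))  # 0.855
```

Even under these optimistic assumptions, the compounded accuracy falls below either tool's individual figure, which is why minimising low-level errors in advance matters.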
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
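The kind of schema unification that ontologies enable can be sketched, in miniature, as a mapping from each tool's ad hoc tags to a shared vocabulary; the tag names and categories here are invented for illustration only:

```python
# Hypothetical ad hoc tagsets for two POS taggers; the tag names and
# the common vocabulary are illustrative, not from any real tagger.
TAGGER_A = {"NN": "Noun", "VB": "Verb", "JJ": "Adjective"}
TAGGER_B = {"noun-sg": "Noun", "verb-inf": "Verb", "adj": "Adjective"}

def to_common(tag, mapping):
    """Translate a tool-specific tag into the shared category set;
    unmapped tags are reported rather than silently dropped."""
    try:
        return mapping[tag]
    except KeyError:
        raise ValueError(f"no common category for tag {tag!r}")

# Annotations from both tools become directly comparable (e.g. so
# that their outputs can be combined or voted on):
assert to_common("NN", TAGGER_A) == to_common("noun-sg", TAGGER_B)
```

An ontological vocabulary plays the role of the shared category set, but with formally defined terms rather than bare strings.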
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
This work proposes an encapsulation scheme aimed at simplifying the reuse process of hardware cores. This hardware encapsulation approach has been conceived with a twofold objective. First, we seek to improve the reuse interface associated with the hardware core description. This is carried out at a first encapsulation level by improving the limited types and configuration options available in conventional HDL interfaces, and also by providing information related to the implementation itself. Second, we have devised a more generic interface focused on describing the function while avoiding details of a particular implementation, which corresponds to a second encapsulation level. This encapsulation allows the designer to define how to configure and use the design to implement a given functionality. The proposed encapsulation schemes help increase the amount of information that can be supplied with the design, and also allow the process of searching, configuring and implementing diverse alternatives to be automated.