918 results for Human-computer systems
Abstract:
The aim of this study was to culture human fetal bone cells (dedicated cell banks of fetal bone derived from 14-week-gestation femurs) within both hyaluronic acid gel and collagen foam, to compare the biocompatibility of both matrices as potential delivery systems for bone engineering, particularly for oral application. Fetal bone cell banks were prepared from one organ donation and cells were cultured for up to 4 weeks within hyaluronic acid (Mesolis(®)) and collagen foams (TissueFleece(®)). Cell survival and differentiation were assessed by cell proliferation assays and histology of frozen sections stained with Giemsa, von Kossa and ALP at 1, 2 and 4 weeks of culture. Within both materials, fetal bone cells could proliferate in a three-dimensional structure at ∼70% capacity compared to monolayer culture. In addition, these cells were positive for ALP and von Kossa staining, indicating cellular differentiation and matrix production. Collagen foam provides a better structure for fetal bone cell delivery if cavity filling is necessary, whereas hydrogels would permit an injectable technique for difficult-to-treat areas. Overall, both matrices showed high biocompatibility, cellular differentiation and matrix deposition by fetal bone cells, allowing for easy cell delivery for bone stimulation in vivo. Copyright © 2011 John Wiley & Sons, Ltd.
Abstract:
We present a computer-simulation study of the effect of the distribution of energy barriers in an anisotropic magnetic system on the relaxation behavior of the magnetization. While the relaxation law for the magnetization can be approximated in all cases by a time-logarithmic decay, the law for the dependence of the magnetic viscosity on temperature is found to be quite sensitive to the shape of the distribution of barriers. The low-temperature region for the magnetic viscosity never extrapolates to a positive non-null value. Moreover, our computer-simulation results agree reasonably well with some recent relaxation experiments on highly anisotropic single-domain particles.
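The ensemble picture described in this abstract can be illustrated with a short simulation: each single-domain particle is assigned a Néel-Arrhenius relaxation time τ = τ₀·exp(E/kT), and the ensemble magnetization is averaged over a distribution of barriers E. All parameters below (flat barrier distribution, τ₀, kT) are illustrative assumptions, not values from the study; the sketch only shows how a broad barrier distribution yields an approximately time-logarithmic decay and a viscosity S = −dM/d ln t.

```python
import numpy as np

def magnetization(t, barriers, kT, tau0=1e-9):
    # Each particle relaxes exponentially with a Neel-Arrhenius time
    # tau_i = tau0 * exp(E_i / kT); averaging over a broad barrier
    # distribution gives an approximately ln(t) decay of the ensemble.
    tau = tau0 * np.exp(barriers / kT)
    return np.exp(-t[:, None] / tau[None, :]).mean(axis=1)

rng = np.random.default_rng(0)
barriers = rng.uniform(0.1, 1.0, size=5000)  # flat barrier distribution (arb. units)
times = np.logspace(-3, 3, 7)                # seven points, one per decade
m = magnetization(times, barriers, kT=0.05)

# Magnetic viscosity S = -dM/d(ln t), estimated per decade
S = -np.diff(m) / np.diff(np.log(times))
```

Changing the shape of `barriers` (e.g. to a Gaussian or log-normal distribution) is what, per the abstract, alters the temperature dependence of `S` while leaving the roughly logarithmic decay of `m` intact.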
Abstract:
How communication systems emerge and remain stable is an important question in both cognitive science and evolutionary biology. For communication to arise, not only must individuals cooperate by signaling reliable information, but they must also coordinate and perpetuate signals. Studies on the emergence of communication in humans typically consider scenarios where individuals implicitly share the same interests. Likewise, most studies on human cooperation consider scenarios where shared conventions of signals and meanings cannot be developed de novo. Here, we combined both approaches in an economic experiment where participants could develop a common language, but under different conditions fostering or hindering cooperation. Participants endeavored to acquire a resource through a learning task in a computer-based environment. After this task, participants had the option to transmit a signal (a color) to a fellow group member, who would subsequently play the same learning task. We varied the way participants competed with each other (either at a global or a local scale) and the cost of transmitting a signal (either costly or noncostly), and tracked the way in which signals were used as communication among players. Under global competition, players signaled more often and more consistently, scored higher individual payoffs, and established shared associations of signals and meanings. Costly signals were also more likely to be used under global competition, whereas under local competition fewer signals were sent and no effective communication system was developed. Our results demonstrate that communication involves both a coordination and a cooperative dilemma, and show the importance of studying language evolution under different conditions influencing human cooperation.
Abstract:
Seven tesla (T) MR imaging is potentially promising for the morphologic evaluation of coronary arteries because of the increased signal-to-noise ratio compared to lower field strengths, in turn allowing improved spatial resolution, improved temporal resolution, or reduced scanning times. However, there are a large number of technical challenges, including the commercial 7 T systems not being equipped with homogeneous body radiofrequency coils, conservative specific absorption rate constraints, and magnified sample-induced amplitude of radiofrequency field inhomogeneity. In the present study, an initial attempt was made to address these challenges and to implement coronary MR angiography at 7 T. A single-element radiofrequency transmit and receive coil was designed and a 7 T specific imaging protocol was implemented, including significant changes in scout scanning, contrast generation, and navigator geometry compared to current protocols at 3 T. With this methodology, the first human coronary MR images were successfully obtained at 7 T, with both qualitative and quantitative findings being presented.
Abstract:
Psychophysical studies suggest that humans preferentially use a narrow band of low spatial frequencies for face recognition. Here we asked whether artificial face recognition systems show improved recognition performance at the same spatial frequencies as humans. To this end, we estimated recognition performance over a large database of face images by computing three discriminability measures: Fisher Linear Discriminant Analysis, Non-Parametric Discriminant Analysis, and Mutual Information. In order to address frequency dependence, discriminabilities were measured as a function of (filtered) image size. All three measures revealed a maximum at the same image sizes, where the spatial frequency content corresponds to the psychophysically determined frequencies. Our results therefore support the notion that the critical band of spatial frequencies for face recognition in humans and machines follows from inherent properties of face images, and that the use of these frequencies is associated with optimal face recognition performance.
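The discriminability-versus-frequency measurement in this abstract can be sketched with one of the three named measures, the Fisher criterion, on synthetic data. The toy "identities", image size, and blur levels below are assumptions for illustration only (the study used a large face database and varied filtered image size); the sketch merely shows how a Fisher-style discriminability score can be evaluated as a function of spatial-frequency content obtained by Gaussian low-pass filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fisher_ratio(images, labels):
    # Fisher criterion summarized as trace(between-class scatter) /
    # trace(within-class scatter) over the flattened pixel features.
    X = images.reshape(len(images), -1)
    mu = X.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        sb += len(Xc) * np.sum((mc - mu) ** 2)
        sw += np.sum((Xc - mc) ** 2)
    return sb / sw

# Toy "faces": two identities, each observed as ten noisy samples
rng = np.random.default_rng(1)
base = [rng.normal(size=(32, 32)) for _ in range(2)]
images = np.array([b + 0.8 * rng.normal(size=(32, 32))
                   for b in base for _ in range(10)])
labels = np.repeat([0, 1], 10)

# Discriminability as a function of spatial-frequency content:
# larger Gaussian sigma removes more of the high frequencies.
ratios = {sigma: fisher_ratio(
              np.array([gaussian_filter(im, sigma) for im in images]), labels)
          for sigma in (0.0, 1.0, 2.0, 4.0)}
```

In the study's setting, plotting such a score against filtered image size is what reveals the shared maximum across the three measures.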
Abstract:
The main goal of the InterAmbAr research project is to analyze the relationships between landscape systems and human land-use strategies on mountains and littoral plains from a long-term perspective. The study adopts a high-resolution analysis of small-scale study areas located in the Mediterranean region of north-eastern Catalonia. The study areas are distributed along an altitudinal transect from the high mountain (above 2000 m a.s.l.) to the littoral plain of Empordà (Fig. 1). High-resolution interdisciplinary research has been carried out since 2010, based on the integration of palaeoenvironmental and archaeological data. The micro-scale approach is used to understand human-environmental relationships. It allows a better understanding of the local-regional nature of environmental changes and the synergies between catchment-based systems, hydro-sedimentary regimes, human mobility, land uses, human environments, demography, etc.
Abstract:
A brain-computer interface (BCI) is a new communication channel between the human brain and a computer. Applications of BCI systems include the restoration of movement, communication and environmental control. In this study, experiments were conducted in which a BCI system was used to control or navigate in virtual environments (VE) by thought alone. To date, BCI experiments for navigation in VR have been conducted with both synchronous and asynchronous BCI systems. The synchronous BCI analyzes the EEG patterns in a predefined time window and has 2 to 3 degrees of freedom.
Abstract:
Nowadays, computer-based systems tend to become more complex and control increasingly critical functions affecting different areas of human activity. Failures of such systems might result in loss of human lives as well as significant damage to the environment. Therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, different industrial standards prescribe the use of rigorous techniques for the development and verification of such systems. The more critical the system, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on this system should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements might be demonstrated by constructing safety cases. However, the overall safety assurance process for critical computer-based systems remains insufficiently defined, for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements should be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases.
This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
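In Event-B itself, invariant preservation is discharged as proof obligations in the Rodin platform rather than checked by enumeration; the following Python toy only illustrates the underlying idea of the approach described above: every event, fired from every reachable state, must preserve a stated safety invariant. The counter example, the event guards, and all names are hypothetical.

```python
def check_invariant(init, events, invariant, max_steps=1000):
    """Exhaustively explore reachable states; the invariant must hold
    in every state reached by firing any enabled event."""
    frontier, seen = [init], {init}
    while frontier and max_steps > 0:
        state = frontier.pop()
        if not invariant(state):
            return False, state        # counterexample state found
        for guard, action in events:
            if guard(state):           # event enabled in this state
                nxt = action(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        max_steps -= 1
    return True, None

# Hypothetical safety requirement: a counter driving a critical
# actuator never leaves the range [0, LIMIT].
LIMIT = 3
events = [
    (lambda s: s < LIMIT, lambda s: s + 1),  # increment, guarded by the limit
    (lambda s: s > 0,     lambda s: s - 1),  # decrement, guarded at zero
]
ok, witness = check_invariant(0, events, lambda s: 0 <= s <= LIMIT)
```

The guard-before-action shape mirrors how Event-B events are structured; removing the `s < LIMIT` guard would make the checker return a violating state, just as the corresponding proof obligation would fail in Rodin.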
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment that would let the system recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem carry a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users who perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities), taking input data directly from a depth sensor (Kinect).
The main contribution of this thesis tackles the second component of the hybrid system, which sits on top of the previous one at a higher level of abstraction, acquires its input from the first module's output, and executes ontological inference to endow users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated on a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and purely ontology-based approaches. As an added value, for the system to be sufficiently simple and flexible to be managed by non-expert users, and thus to facilitate the transfer of research to industry, a development framework was built, composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, in order to provide the framework with more usability in the final application. As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
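A minimal sketch of the fuzzy-linguistic-label idea mentioned above, under assumed triangular membership functions and a Mamdani-style min conjunction; the thesis's actual ontology and rule syntax are not shown here, and all labels and values are hypothetical.

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic labels over a normalized sensor reading in [0, 1]
labels = {
    "low":    lambda x: tri(x, -0.01, 0.0, 0.5),
    "medium": lambda x: tri(x, 0.0, 0.5, 1.0),
    "high":   lambda x: tri(x, 0.5, 1.0, 1.01),
}

def rule_strength(antecedents):
    # Mamdani-style AND: the firing strength of a rule is the minimum
    # membership degree among its antecedents
    return min(antecedents)

reading = 0.7
degrees = {name: f(reading) for name, f in labels.items()}
# A rule like "IF reading IS medium AND reading IS high THEN ..." fires with:
strength = rule_strength([degrees["medium"], degrees["high"]])
```

This is the mechanism that lets rules read close to natural language while still yielding graded, uncertainty-aware conclusions.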
Abstract:
Human-Centered Design (HCD) is a well-recognized approach to the design of interactive computing systems that support the everyday and professional lives of people. To that end, the HCD approach puts central emphasis on the explicit understanding of users and the context of use by involving users throughout the entire design and development process. With mobile computing, the diversity of users as well as the variety in the spatial, temporal, and social settings of the context of use has notably expanded, which affects the effort of interaction designers to understand users and context of use. The emergence of the mobile apps era in 2008, as a result of structural changes in the mobile industry and the profoundly enhanced capabilities of mobile devices, further intensifies the embeddedness of technology in people's daily lives and the challenges that interaction designers face in cost-efficiently understanding users and context of use. Supporting interaction designers in this challenge requires understanding of their existing practice, rationality, and work environment. The main objective of this dissertation is to contribute to interaction design theories by generating understanding of the HCD practice of mobile systems in the mobile apps era, as well as to explain the rationality of interaction designers in attending to users and context of use. To achieve that, a literature study is carried out, followed by mixed-methods research that combines multiple qualitative interview studies and a quantitative questionnaire study. The dissertation contributes new insights regarding the evolving HCD practice at an important time of transition from stationary computing to mobile computing. Firstly, a gap is identified between interaction design as practiced in research and in the industry regarding the involvement of users in context: whereas the utilization of field evaluations, i.e.
in real-life environments, has become more common in academic projects, interaction designers in the industry still rely, by and large, on lab evaluations. Secondly, the findings indicate new aspects that can explain this gap and the rationality of interaction designers in the industry in attending to users and context; essentially, the professional-client relationship was found to inhibit the involvement of users, while the mental distance between practitioners and users, as well as the perceived innovativeness of the designed system, are suggested as explanations for the inclination to study users in situ. Thirdly, the research contributes the first explanatory model of the relation between the organizational context and HCD; essentially, innovation-focused organizational strategies greatly affect the cost-effective usage of data on users and context of use. Lastly, the findings suggest a change in the nature of HCD in the mobile apps era, at least with universal consumer systems; evidently, the central attention on the explicit understanding of users and context of use shifts from an early requirements phase and continual activities during design and development to follow-up activities. That is, the main effort to understand users is made by collecting data on their actual usage of the system, either before or after the system is deployed. The findings inform both researchers and practitioners in interaction design. In particular, the dissertation suggests action research as a useful approach to support interaction designers and further inform theories on interaction design. With regard to interaction design practice, the dissertation highlights strategies that encourage a more cost-effective user- and context-informed interaction design process. With the continual embeddedness of computing into people's lives, e.g. with wearable devices and connected car systems, the dissertation provides a timely and valuable view on the evolving human-centered design.
Abstract:
The vertebrate system for discriminating between "self" and "non-self" enables the detection and rejection of pathogens and allogeneic cells. It requires the surveillance of small peptides presented at the cell surface by major histocompatibility complex class I (MHC I) molecules. MHC I molecules are heterodimers composed of a heavy chain encoded by MHC genes and a light chain encoded by the β2-microglobulin gene. The whole set of these peptides is called the MHC I immunopeptidome. We used systems biology approaches to define the composition and cellular origin of the MHC I immunopeptidome presented by B lymphoblastoid cells derived from two pairs of MHC I-identical siblings. We found that the MHC I immunopeptidome is specific to the individual and to the cell type, that it derives preferentially from abundant transcripts and is enriched in transcripts carrying recognition elements for small RNAs, but that it shows no bias toward either invariant or polymorphic genomic regions. We also developed a new method combining mass spectrometry, next-generation sequencing and bioinformatics for the large-scale identification of MHC I peptides, including those resulting from non-synonymous single-nucleotide polymorphisms (ns-SNPs), called minor histocompatibility antigens (MiHAs), which are the targets of allo-immune responses. Comparing the genomic origin of the immunopeptidome of MHC I-identical sisters revealed that 0.5% of ns-SNPs were represented in the immunopeptidome and that 0.3% of MHC I peptides would be immunogenic toward one of the two sisters.
In summary, we discovered new factors that shape the MHC I immunopeptidome, and we present a new strategy for the identification of these peptides, which could greatly accelerate the development of immunotherapies targeting MiHAs.