936 results for Human engineering.


Relevance:

30.00%

Publisher:

Abstract:

A sacrificial templating process using lithographically printed minimal surface structures enables complex de novo geometries in delicate hydrogel materials. Hydrogel scaffolds based on cellulose and chitin nanofibrils differ in the attachment of human mesenchymal stem cells and allow their differentiation along osteogenic lineages. The approach serves as a first example of designer hydrogel scaffolds viable for biomimetic tissue engineering.

Relevance:

30.00%

Publisher:

Abstract:

A series of experiments is described evaluating user recall of visualisations of historical chronology. Such visualisations are widely created but have not hitherto been evaluated. Users were tested on their ability to learn a sequence of historical events presented in a virtual environment (VE) fly-through visualisation, compared with learning equivalent material in other formats that are sequential but lack the 3D spatial aspect. Memorability is a particularly important function of visualisation in education. The measures used during evaluation are enumerated and discussed. The majority of the experiments compared three conditions: a virtual environment visualisation with a significant spatial element, a serial on-screen presentation in PowerPoint, and a serial presentation on paper. Some aspects were trialled with groups having contrasting prior experience of computers, in the UK and Ukraine. Evidence suggests that a more complex environment including animations and sounds or music, intended to engage users and reinforce memorability, was in fact distracting. Findings are reported in relation to the age of the participants, suggesting that children at 11–14 years benefit less from, or are even disadvantaged by, VE visualisations when compared with 7–9 year olds or undergraduates. Finally, results suggest that VE visualisations offering a 'landscape' of information are more memorable than those based on a linear model.

Keywords: timeline, chronographics

Relevance:

30.00%

Publisher:

Abstract:

Doctoral thesis, Pharmacy (Pharmaceutical Biotechnology), Universidade de Lisboa, Faculdade de Farmácia, 2014

Relevance:

30.00%

Publisher:

Abstract:

Master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2015

Relevance:

30.00%

Publisher:

Abstract:

According to recent studies, informal learning accounts for more than 75% of our continuous learning through life. However, awareness of this learning, its benefits and its potential is still limited. In engineering contexts, informal learning can play an invaluable role in helping students or employees engage with peers and with more experienced colleagues, exchanging ideas and discussing problems. This work presents an initial set of results from the piloting phase of a project (TRAILER) in which an innovative service based on Information & Communication Technologies was developed to aid the collection and visibility of informal learning. This set of results concerns engineering contexts (academic and business) from the learners' perspective. The major idea that emerged from the piloting trials was that the service offers a good way of collecting, recording and sharing informal learning that could otherwise easily be forgotten. Several benefits were reported across the two communities, such as help in managing competences and human resources within an institution.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented to obtain the Ph.D. degree in Engineering and Technology Sciences, Biotechnology

Relevance:

30.00%

Publisher:

Abstract:

Recombinant human adenovirus (Ad) vectors are being extensively explored for use in gene therapy and recombinant vaccines. Ad vectors are attractive for many reasons: (1) they are relatively safe, based on their use as live oral vaccines; (2) they can accept large transgene inserts; (3) they can infect dividing and postmitotic cells; and (4) they can be produced at high titers. However, there are also major problems associated with Ad vectors, including transient foreign gene expression due to host cellular immune responses, problems with humoral immunity, and the creation of replication-competent adenoviruses (RCA). Most Ad vectors contain deletions in the E1 region that allow for insertion of a transgene. The E1 gene products are required for replication, however, and thus must be supplied in trans by a helper cell line that allows the growth and packaging of the defective virus. The 293 cell line (Graham et al., 1977) is used most often for this purpose; however, homologous recombination between the vector and the cell line often results in the generation of RCA. The presence of RCA in batches of adenoviral vectors for clinical use is a safety risk, because RCA may mobilize and spread the replication-defective vector viruses and cause significant tissue damage and pathogenicity.

The present research focused on altering the 293 cell line so that RCA formation is eliminated. The strategy for modifying the 293 cells involved removing the first 380 bp of the adenovirus genome through homologous recombination. The first step towards this goal was to identify and clone the left-end cellular-viral junction from 293 cells, to assemble the sequences required for homologous recombination. Polymerase chain reaction (PCR) was performed to clone the junction, and the clone was verified by sequencing. The plasmid PAM2 was then constructed to serve as the targeting cassette used to modify the 293 cells. The cassette consisted of (1) the cellular-viral junction as the left-end region of homology, (2) the neo gene for positive selection upon transfection into 293 cells, (3) the adenoviral genome from bp 380 to bp 3438 as the right-end region of homology, and (4) the HSV-tk gene for negative selection. The plasmid PAM2 was linearized to produce a double-strand break outside the region of homology and transfected into 293 cells using the calcium-phosphate technique. Cells were selected first for resistance to the drug G418 and subsequently for resistance to the drug ganciclovir (GANC). From 17 transfections, 100 pools of G418- and GANC-resistant cells were picked using cloning rings and expanded for screening. Genomic DNA was isolated from the pools and screened by PCR for the presence of the first 380 bp. The ten most promising pools were diluted to single cells and expanded in order to isolate homogeneous cell lines. From these, an additional 100 G418- and GANC-resistant foci were screened. These preliminary screening results appear promising for the detection of the desired cell line. Future work would include further cloning and purification of the promising cell lines that have potentially undergone homologous recombination, in order to isolate a homogeneous cell line of interest.

Relevance:

30.00%

Publisher:

Abstract:

While magnetic resonance imaging (MRI) provides a wide range of anatomical and functional data, clinical scanners are generally restricted to the proton for their imaging and spectroscopic applications. Since phosphorus plays a central role in energy metabolism, using this nucleus in MR spectroscopy offers an enormous advantage for observing the human body. It also presents a number of technical challenges, owing to the low concentration of phosphorus and its different resonance frequency. The objective of this project was to develop the capability to perform phosphorus spectroscopy experiments on a 3 Tesla clinical MRI scanner. We present the steps required to design and validate an MRI coil tuned to the phosphorus frequency, along with information on the construction of the phantoms used for validation tests and calibration. Finally, we present preliminary results of spectroscopic acquisitions on human muscle in which the different high-energy phosphorylated metabolites can be identified. These results are part of a larger project in which changes in energy metabolism are studied in relation to age and pathology.
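
For context, a back-of-the-envelope calculation of the Larmor frequencies involved, using standard textbook gyromagnetic ratios (not values taken from the thesis):

```python
# Larmor frequency f = gamma_bar * B0, with gamma_bar in MHz per tesla.
GAMMA_BAR_MHZ_PER_T = {"1H": 42.577, "31P": 17.235}   # textbook values
B0_TESLA = 3.0

for nucleus, gamma_bar in GAMMA_BAR_MHZ_PER_T.items():
    print(f"{nucleus}: {gamma_bar * B0_TESLA:.1f} MHz")
# 1H resonates at about 127.7 MHz at 3 T, 31P at about 51.7 MHz -- hence
# the need for a coil tuned specifically to the phosphorus frequency.
```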

Relevance:

30.00%

Publisher:

Abstract:

There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using information directly available in the data, such as gray levels, we build a metric to drive a mesh-adaptation process. The metric specifies the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The quality of the resulting models has been evaluated qualitatively and quantitatively by comparison with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of cases, our method gives meshes as good as or better than those of a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster.
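
For context, a minimal sketch of the symmetric Hausdorff distance used in such comparisons, assuming the reconstructed and ground-truth surfaces are available as point samples (synthetic stand-in data, not the paper's tooling):

```python
# Symmetric Hausdorff distance between two surface samplings.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
ground_truth = rng.random((500, 3))     # stand-in sampled surface points
reconstructed = ground_truth + 0.01 * rng.standard_normal((500, 3))

d_forward = directed_hausdorff(reconstructed, ground_truth)[0]
d_backward = directed_hausdorff(ground_truth, reconstructed)[0]
print(max(d_forward, d_backward))       # symmetric Hausdorff distance
```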

Relevance:

30.00%

Publisher:

Abstract:

Information and communication technologies (ICTs) are the tools that underpin the emerging "Knowledge Society". Exchange of information or knowledge between people and through networks of people has always taken place, but ICT has radically changed the magnitude of this exchange, making factors such as the timeliness of information and patterns of information dissemination more important than ever. Since information and knowledge are so vital for all-round human development, the libraries and institutions that manage these resources are invaluable. Library and information centres thus have a key role in the acquisition, processing, preservation and dissemination of information and knowledge. In the modern context, libraries provide services based on documents of many types, such as manuscripts, printed materials and digital resources. At the same time, the acquisition, access, processing and servicing of these resources have become more complicated than ever before. ICT has been instrumental in extending libraries beyond the physical walls of a building and in assisting users to navigate and analyse tremendous amounts of knowledge with a variety of digital tools. Modern libraries are thus increasingly being re-defined as places of unrestricted access to information in many formats and from many sources.

The research was conducted in the university libraries of Kerala State, India. It was found that although information resources are flooding in worldwide and several technologies have emerged to manage this situation and provide effective services to their clientele, most university libraries in Kerala have been unable to exploit these technologies fully. Though the libraries have automated many of their functions, a wide gap remains between the services that are possible and those actually provided. There are many good examples worldwide of the application of ICTs in libraries for the maximization of services, and many such libraries have adopted the principles of re-engineering and re-defining as a management strategy. This study therefore examined how effectively modern ICTs have been adopted in these libraries to maximize the efficiency of operations and services, and whether the principles of re-engineering and re-defining can be applied to that end.

Data was collected from library users (students as well as faculty), library professionals and university librarians using structured questionnaires. This was supplemented by observation of the working of the libraries, discussions and interviews with different types of users and staff, a review of the literature, etc. Personal observations were made of the organisational set-up, management practices, functions, facilities, resources, and the utilisation of information resources and facilities by users of the university libraries in Kerala. Statistical techniques such as percentage, mean, weighted mean, standard deviation, correlation and trend analysis were used to analyse the data.

All the libraries could exploit only a very few of the possibilities of modern ICTs, and hence could not achieve effective Universal Bibliographic Control or the desired efficiency and effectiveness in services; as a result, users as well as professionals are dissatisfied. Major areas needing special attention include functional effectiveness in the acquisition, access and processing of information resources in various formats; development and maintenance of OPACs and WebOPACs; digital document delivery to remote users; web-based clearing of library counter services and resources; development of full-text databases, digital libraries and institutional repositories; consortia-based operations for e-journals and databases; user education and information literacy; professional development with stress on ICTs; network administration and website maintenance; and marketing of information. Finance, the level of ICT knowledge among library staff, professional dynamism and leadership, the vision and support of administrators and policy makers, and the prevailing educational set-up and social environment in the state are among the major hurdles preventing the university libraries in Kerala from reaping the full possibilities of ICTs.

The principles of Business Process Re-engineering were found suitable for re-structuring and re-defining the operations and service systems of the libraries. Most of the conventional departments or divisions in the university libraries were functioning as watertight compartments, and their existing management systems were too rigid to adopt the principles of change management; a thorough re-structuring of the divisions was therefore indicated. Consortia-based activities and the pooling and sharing of information resources were advocated to meet the varied needs of users on the main and off campuses of the universities, in affiliated colleges and at remote stations. A uniform staff policy similar to that prevailing in CSIR, DRDO, ISRO, etc. was proposed, not only for the university libraries in Kerala but for the entire country. Re-structuring of LIS education and the integrated, planned development of school, college, research and public library systems were also justified as means of reaping the maximum benefits of modern ICTs.

Relevance:

30.00%

Publisher:

Abstract:

Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system where a human being takes the measurement with an instrument following a preset process, the measurement error could be due to the instrument, the process or the human being involved. The first part of the study is devoted to understanding human errors in measurement. To that end, selected person-related and work-related factors that could affect measurement errors were identified. Though these factors are well known, the exact extent of the error, and the extent to which different factors affect human errors in measurement, are less well reported. Human errors in measurement were characterised through an experimental study with different subjects, in which the factors were changed one at a time and the subjects' measurements recorded. The pre-experiment survey showed that respondents could not correctly answer questions about the extent of human-related measurement errors, confirming the fears expressed regarding the lack of knowledge of this subject among professionals associated with quality. In the post-experiment phase of the survey, however, answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were provided based on the experimental study. It is hoped that this work will help practitioners of measurement to better understand and manage human-related errors in measurement.
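
As a simple illustration of the decomposition of measurement error, a minimal sketch, assuming repeated readings of a known reference standard by one person (hypothetical data, not the study's):

```python
# Bias captures the systematic component of the human error; the standard
# deviation of the readings captures its random (repeatability) component.
import statistics

REFERENCE_MM = 10.00                                     # known standard
readings_mm = [10.02, 9.98, 10.05, 10.01, 9.97, 10.04]   # invented values

bias = statistics.mean(readings_mm) - REFERENCE_MM
repeatability = statistics.stdev(readings_mm)
print(f"bias = {bias:+.3f} mm, repeatability (1 sigma) = {repeatability:.3f} mm")
```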

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates a method for human-robot interaction (HRI) that upholds the productivity of industrial robots, e.g. by minimizing operation time, while ensuring human safety, e.g. through collision avoidance. To solve such problems, an online motion-planning approach for robotic manipulators with HRI is proposed. The approach is based on model predictive control (MPC) with embedded mixed-integer programming. The planning strategies considered in the thesis operate directly in the workspace, for easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP), which is further reformulated so that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by human operators, can be generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions from probability density functions in the form of single points, such as the most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace: regions possibly occupied by the human at future instances of time. The method is further enhanced with reachability analysis to increase the prediction accuracy. These safety-relevant regions can subsequently serve as safety constraints when the motion is planned by optimization. In this way one arrives at motion plans that are safe, i.e. plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator. The task of the industrial robot is to drive its end-effector through a nominal sequence of gripping, moving and releasing operations while avoiding collision with the human arm.
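
To make the mixed-integer encoding concrete, here is a minimal, hypothetical sketch (not the thesis's formulation) of the standard big-M disjunction for keeping a single workspace point outside an axis-aligned obstacle box, written with the PuLP modelling library:

```python
# One-step toy MIP: find the point nearest a goal (L1 norm, kept linear via
# auxiliary variables) that stays outside an axis-aligned obstacle box.
# Requires: pip install pulp
import pulp

M = 100.0                       # big-M bound on the workspace size
GOAL = (2.0, 0.0)               # hypothetical target, here inside the box
BOX = (1.0, 3.0, -1.0, 1.0)     # obstacle: xmin, xmax, ymin, ymax

prob = pulp.LpProblem("avoid_box", pulp.LpMinimize)
x = pulp.LpVariable("x", -10, 10)
y = pulp.LpVariable("y", -10, 10)
dx = pulp.LpVariable("dx", 0)   # |x - goal_x|
dy = pulp.LpVariable("dy", 0)   # |y - goal_y|
prob += dx + dy                 # objective: L1 distance to the goal
prob += dx >= x - GOAL[0]
prob += dx >= GOAL[0] - x
prob += dy >= y - GOAL[1]
prob += dy >= GOAL[1] - y

# Disjunction "left OR right OR below OR above": binary b[i] = 1 activates
# the corresponding separating half-plane; at least one must be active.
b = [pulp.LpVariable(f"b{i}", cat="Binary") for i in range(4)]
prob += x <= BOX[0] + M * (1 - b[0])
prob += x >= BOX[1] - M * (1 - b[1])
prob += y <= BOX[2] + M * (1 - b[2])
prob += y >= BOX[3] - M * (1 - b[3])
prob += pulp.lpSum(b) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(x), pulp.value(y))   # a boundary point nearest the goal
```

Per-timestep copies of such constraints, with the box replaced by the predicted safety-relevant regions, are the kind of structure an MPC-with-MIP planner optimizes over; the reformulations mentioned above aim at reducing the number of such binaries.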

Relevance:

30.00%

Publisher:

Abstract:

Self-adaptive software provides a profound solution for adapting applications to changing contexts in dynamic and heterogeneous environments. Having emerged from Autonomic Computing, it incorporates fully autonomous decision making based on predefined structural and behavioural models. The most common approach to architectural run-time adaptation is the MAPE-K adaptation loop, implementing an external adaptation manager without manual user control. However, it has turned out that adaptation behaviour lacks acceptance if it does not correspond to a user's expectations, particularly in Ubiquitous Computing scenarios with user interaction. Adaptations can be irritating and distracting if they are not appropriate for a certain situation. In general, uncertainty during development and at run-time causes problems when users are outside the adaptation loop. In a literature study, we analyse publications about self-adaptive software research. The results show a discrepancy between the motivated application domains, the maturity of examples, and the quality of evaluations on the one hand, and the provided solutions on the other hand. Only few publications analysed the impact of their work on the user, but many employ user-oriented examples for motivation and demonstration. To incorporate the user within the adaptation loop and to deal with uncertainty, our proposed solutions enable user participation for interactive self-adaptive software while maintaining the benefits of intelligent autonomous behaviour. We define three dimensions of user participation, namely temporal, behavioural, and structural user participation. This dissertation contributes solutions for user participation in the temporal and behavioural dimensions. The temporal dimension addresses the moment of adaptation, which is classically determined by the self-adaptive system. We provide mechanisms allowing users to influence or to define the moment of adaptation. With our solution, users can have full control over the moment of adaptation, or the self-adaptive software considers the user's situation more appropriately. The behavioural dimension addresses the actual adaptation logic and the resulting run-time behaviour. Application behaviour is established during development and does not necessarily match run-time expectations. Our contributions are three distinct solutions which allow users to make changes to the application's run-time behaviour: dynamic utility functions, fuzzy-based reasoning, and learning-based reasoning. The foundation of our work is a notification and feedback solution that improves the intelligibility and controllability of self-adaptive applications by implementing bi-directional communication between the self-adaptive software and the user. The different mechanisms from the temporal and behavioural participation dimensions require the notification and feedback solution to inform users of adaptation actions and to provide a mechanism for influencing adaptations. Case studies show the feasibility of the developed solutions. Moreover, an extensive user study with 62 participants was conducted to evaluate the impact of notifications before and after adaptations. Although the study revealed no preference for a particular notification design, participants clearly appreciated intelligibility and controllability over autonomous adaptations.
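
As an illustration of the behavioural dimension, here is a minimal, hypothetical sketch (not the dissertation's implementation) of a dynamic utility function: the reasoner scores candidate configurations with weights the user can adjust at run-time, so user feedback directly changes which adaptation is selected.

```python
# Toy dynamic utility function for interactive self-adaptation; all names
# and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    quality: float       # service quality, normalized to 0..1
    battery_cost: float  # energy cost, normalized to 0..1 (lower is better)

class DynamicUtility:
    def __init__(self, w_quality: float = 0.5, w_battery: float = 0.5):
        self.update_weights(w_quality, w_battery)

    def update_weights(self, w_quality: float, w_battery: float) -> None:
        # User feedback (behavioural participation) renormalizes the weights,
        # changing which configuration the reasoner will select next.
        total = w_quality + w_battery
        self.w_quality, self.w_battery = w_quality / total, w_battery / total

    def score(self, c: Config) -> float:
        return self.w_quality * c.quality - self.w_battery * c.battery_cost

def select(configs: list, utility: DynamicUtility) -> Config:
    return max(configs, key=utility.score)

configs = [Config("hi-res", 0.9, 0.5), Config("low-res", 0.4, 0.1)]
u = DynamicUtility()
print(select(configs, u).name)   # "hi-res" under balanced weights
u.update_weights(0.2, 0.8)       # user signals: battery matters more now
print(select(configs, u).name)   # selection flips to "low-res"
```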

Relevance:

30.00%

Publisher:

Abstract:

The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA), based on a theoretical analysis of the effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open-source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site "1001 Questions" is available at http://teach-computers.org/learner.html.) Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information." Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior: the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I perform an analysis of commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, answered negatively, and judged nonsensical in the cumulative analogy case compare favorably with the baseline no-similarity case, which relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively, and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively, and 26% were marked as nonsensical.
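
To make the mechanism concrete, here is a toy sketch of cumulative analogy (hypothetical; Learner's actual knowledge representation and similarity measure are richer): topic similarity is computed from assertions already in the knowledge base, and candidate questions are projected from the nearest neighbors onto the target topic, with evidence summed across neighbors.

```python
# Toy cumulative analogy over a hand-made knowledge base. Assertions are
# phrased as verb phrases so they slot into "Do <topic>s ...?" questions.
kb = {
    "book":      {"contain information", "have pages", "cost money"},
    "magazine":  {"contain information", "have pages", "appear periodically"},
    "newspaper": {"have pages"},
    "hammer":    {"drive nails"},
}

def similarity(a: str, b: str) -> float:
    # Jaccard overlap of known assertions -- one simple stand-in measure.
    union = kb[a] | kb[b]
    return len(kb[a] & kb[b]) / len(union) if union else 0.0

def candidate_questions(topic: str, k: int = 2):
    neighbors = sorted((t for t in kb if t != topic),
                       key=lambda t: similarity(topic, t), reverse=True)[:k]
    # Sum evidence across neighbors: assertions suggested by several similar
    # topics outrank those suggested by only one (the "cumulative" part).
    votes = {}
    for n in neighbors:
        for assertion in kb[n] - kb[topic]:
            votes[assertion] = votes.get(assertion, 0.0) + similarity(topic, n)
    return sorted(votes.items(), key=lambda kv: -kv[1])

for assertion, evidence in candidate_questions("newspaper"):
    print(f"Do newspapers {assertion}? (evidence {evidence:.2f})")
# Top question: "Do newspapers contain information?" -- proposed by both
# nearest neighbors ("book" and "magazine"), so it accumulates the most evidence.
```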