922 results for Informatics Engineering - Human Computer Interaction
Abstract:
Modeling is a required step in performing a finite element analysis, and different methods of model construction are reported in the literature, such as Bio-CAD modeling. The purpose of this study was to evaluate and apply two methods of Bio-CAD modeling of a human edentulous hemi-mandible in finite element analysis. A stereolithographic model was reconstructed from CT scans of a dried human skull. Two modeling methods were performed: an STL conversion approach combined with STL simplification (Model 1) and a reverse engineering approach (Model 2). For the finite element analysis, the action of the lateral pterygoid muscle was used as the loading condition to assess total displacement (D), equivalent von Mises stress (VM), and maximum principal stress (MP). The two models differed in geometry with respect to the number of surfaces (1834 for Model 1; 282 for Model 2), and differences were also observed in the finite element mesh with respect to node and element counts (30428 nodes/16683 elements for Model 1; 15801 nodes/8410 elements for Model 2). The D, VM, and MP stress areas showed similar distributions in the two models, but the maximum and minimum values differed: D ranged from 0 to 0.511 mm (Model 1) and 0 to 0.544 mm (Model 2); VM stress from 6.36E-04 to 11.4 MPa (Model 1) and 2.15E-04 to 14.7 MPa (Model 2); and MP stress from -1.43 to 9.14 MPa (Model 1) and -1.2 to 11.6 MPa (Model 2). Of the two Bio-CAD modeling methods, reverse engineering provided a better anatomical representation than the STL conversion approach. The models differed in finite element mesh, total displacement, and stress distribution.
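For readers comparing the VM and MP figures above, the equivalent von Mises stress is a standard scalar combination of the principal stresses; the sketch below is the textbook formula with invented inputs, not values or code taken from this study's solver.

```python
# Textbook definition of the equivalent von Mises stress from the three
# principal stresses (MPa); generic, not taken from this study's solver.
import math

def von_mises(s1: float, s2: float, s3: float) -> float:
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2)

print(von_mises(9.0, 1.0, -1.4))  # illustrative principal stresses -> ~9.4 MPa
```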
Abstract:
Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as support ongoing knowledge engineering, including the evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the web ontology language, OWL, as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can have an important role in managing complex knowledge domains for systems based on human expertise without impeding the end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains, and the accompanying OWL specification facilitates its implementation.
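To make the representation concrete, here is a minimal, hypothetical sketch of how such a hierarchical classification model, with behaviour-controlling attributes in the XML nodes, might look and be traversed; the node names and attributes are invented for illustration and are not GRiST's actual schema.

```python
# Hypothetical sketch of a GRiST-style hierarchical knowledge tree.
# Node names and attributes are illustrative assumptions, not the real schema.
import xml.etree.ElementTree as ET

XML = """
<concept name="overall-risk" layer="top">
  <concept name="feelings-emotions" expanded="false">
    <datum name="hopelessness" scale="0-10" question="How hopeless do you feel?"/>
  </concept>
</concept>
"""

def walk(node, depth=0):
    # Recursively print the hierarchy; end-user tool behaviour
    # (e.g. whether a branch starts expanded) is driven by node attributes.
    print("  " * depth, node.tag, dict(node.attrib))
    for child in node:
        walk(child, depth + 1)

walk(ET.fromstring(XML))
```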
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration function, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated from the aberration rescaled to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic computer-use viewing environment. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
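The central operation described, inverse-filtering the displayed image by the eye's PSF, can be sketched as a frequency-domain, Wiener-style precompensation. This is a minimal illustration under assumed inputs (a normalized PSF kernel centered at the array origin and a grayscale image scaled to [0, 1]), not the dissertation's actual implementation.

```python
# Minimal sketch of PSF-based precompensation via a regularized
# (Wiener-style) inverse filter; assumed inputs, not the author's exact code.
import numpy as np

def precompensate(image: np.ndarray, psf: np.ndarray, k: float = 1e-2) -> np.ndarray:
    """Inverse-filter `image` (grayscale, values in [0, 1]) by `psf`.

    `psf` is assumed normalized and centered at the array origin; `k`
    regularizes frequencies where the optical transfer function is weak,
    limiting noise amplification.
    """
    H = np.fft.fft2(psf, s=image.shape)        # optical transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + k)      # regularized inverse filter
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)              # keep the result displayable
```

Blurring the precompensated image with the same PSF then approximately recovers the original, with `k` trading residual blur against noise amplification and contrast loss.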
Abstract:
Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the Point Spread Function (PSF) of the human eye. Through what is known as the "wavefront aberration function" of the human eye, exact knowledge of the optical aberration of the eye is possible, allowing a mathematical model of the PSF to be obtained. This model can be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial expansion, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or onscreen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). We present the technique proposed towards that goal and the results obtained using a lens of known PSF introduced into the visual path of subjects without visual impairment. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is that it has the potential to address higher-order abnormalities in the eye, currently not correctable by simple means.
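As a sketch of how the prescription-to-Zernike step could look, the second-order coefficients can be derived from the sphere/cylinder/axis prescription via power vectors. Sign and normalization conventions vary across the literature, so the formulation below is one common convention under assumed units (diopters, millimeters), not necessarily the one used in this project.

```python
# Hedged sketch: second-order Zernike coefficients (in micrometers) from a
# sphere/cylinder/axis prescription via power vectors (M, J0, J45).
# Sign and normalization conventions vary between sources.
import math

def prescription_to_zernike(sphere_D: float, cyl_D: float,
                            axis_deg: float, pupil_radius_mm: float):
    M = sphere_D + cyl_D / 2.0                          # spherical equivalent
    J0 = -(cyl_D / 2.0) * math.cos(2 * math.radians(axis_deg))
    J45 = -(cyl_D / 2.0) * math.sin(2 * math.radians(axis_deg))
    r2 = pupil_radius_mm ** 2
    c2_0 = -M * r2 / (4 * math.sqrt(3))                 # defocus
    c2_p2 = -J0 * r2 / (2 * math.sqrt(6))               # 0/90-degree astigmatism
    c2_m2 = -J45 * r2 / (2 * math.sqrt(6))              # oblique astigmatism
    return c2_m2, c2_0, c2_p2

print(prescription_to_zernike(-2.0, -0.5, 180.0, 2.5))  # example myopic eye
```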
Abstract:
Abstract not available
Abstract:
The very nature of computer science, with its constant changes, forces those who wish to keep up to adapt and react quickly. Large companies invest in staying up to date in order to generate revenue and stay active on the market. Universities, on the other hand, need to apply the same practice of staying up to date with industry needs in order to produce industry-ready engineers. By interviewing former students, now engineers in the industry, and current university staff, this thesis aims to learn whether there is room for enhancing the education through different lecturing approaches and/or curriculum adaptation and development. To address these concerns, qualitative research was conducted, focusing on data collection through semi-structured life-world interviews. The method follows the seven stages of research interviewing introduced by Kvale and focuses on collecting and preparing relevant data for analysis. The collected data is transcribed, refined, and then analyzed in the “Findings and analysis” chapter. The analysis focused on answering the three research questions: how higher education impacts a Computer Science and Informatics engineer's job, how to better undergo the transition from studies to working in the industry, and how to develop a curriculum that supports the previous two. Unaltered quoted extracts are presented and individually analyzed. To paint a better picture, a theme-wise analysis is presented, summarizing themes that recurred throughout the interviewing phase. The findings imply that several factors directly influence the quality of education: from the student side, mostly expectations of and dedication to their studies; from the university side, commitment to the curriculum development process. Due to time and resource limitations, this research provides findings from a narrowed scope, although it can serve as a solid foundation for further development, possibly as PhD research.
Abstract:
Early definitions of the smart building focused almost entirely on the technology aspect and did not consider user interaction at all; indeed, today we would attribute them more to the concept of the automated building. In this sense, control of comfort conditions inside buildings is a well-investigated problem, since it has a direct effect on users' productivity and an indirect effect on energy saving. From the users' perspective, a typical environment can be considered comfortable if it is capable of providing adequate thermal comfort, visual comfort, indoor air quality, and acoustic comfort. In recent years, the scientific community has dealt with many challenges, especially from a technological point of view. For instance, smart sensing devices, the internet, and communication technologies have enabled a new paradigm called edge computing, which brings computation and data storage closer to the location where they are needed in order to improve response times and save bandwidth. This has allowed us to improve services, sustainability, and decision making. Many solutions have been implemented, such as smart classrooms, control of the thermal conditions of a building, monitoring of HVAC data for campus energy efficiency, and so forth. Although these projects contribute to the realization of a smart campus, a framework for the smart campus is yet to be defined. These new technologies have also introduced new research challenges: in this thesis, some of the principal open challenges are faced by proposing a new conceptual framework, technologies, and tools to move the actual implementation of smart campuses forward. With this in mind, several problems known in the literature have been investigated: occupancy detection, noise monitoring for acoustic comfort, context awareness inside the building, indoor wayfinding, and strategic deployment of sensors for air quality and book preservation.
Abstract:
The human mitochondrial Hsp70, also called mortalin, is of considerable importance for mitochondrial biogenesis and the correct functioning of the cell machinery. In the mitochondrial matrix, mortalin acts in the import and folding of nucleus-encoded proteins. The in vivo deregulation of mortalin expression and/or function has been correlated with age-related diseases and certain cancers due to its interaction with the p53 protein. In spite of its critical biological roles, structural and functional studies on mortalin have been limited by its insoluble recombinant production. This study provides the first report of the production of folded and soluble recombinant mortalin when co-expressed with the human Hsp70-escort protein 1, although the protein is still likely prone to self-association. The monomeric fraction of mortalin presented a slightly elongated shape and basal ATPase activity higher than that of its cytoplasmic counterpart Hsp70-1A, suggesting that it was obtained in a functional state. Through small-angle X-ray scattering, we assessed a low-resolution structural model of monomeric mortalin, characterized by an elongated shape. This model adequately accommodated high-resolution structures of Hsp70 domains, indicating its quality. We also observed that mortalin interacts with adenosine nucleotides with high affinity. Thermally induced unfolding experiments indicated that mortalin is formed by at least two domains and that the transition is sensitive to the presence of adenosine nucleotides and dependent on Mg2+ ions. Interestingly, the thermally induced unfolding assays of mortalin suggested the presence of an aggregation/association event, which was not observed for human Hsp70-1A; this finding may explain its natural tendency for in vivo aggregation. Our study may contribute to the structural understanding of mortalin as well as to its recombinant production for antitumor compound screening.
Abstract:
This paper proposes an architecture for machining process and production monitoring to be applied in machine tools with open computer numerical control (CNC). A brief description of the advantages of using open CNC for machining process and production monitoring is presented, with an emphasis on a CNC architecture using a personal computer (PC)-based human-machine interface. The proposed architecture uses CNC data and sensors to gather information about the machining process and production. It allows the development of different levels of monitoring systems with minimum investment, minimum need for sensor installation, and low intrusiveness to the process. Successful examples of the utilization of this architecture in a laboratory environment are briefly described. In conclusion, it is shown that a wide range of monitoring solutions can be implemented in production processes using the proposed architecture.
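To illustrate the kind of PC-based monitoring layer such an architecture enables, here is a deliberately generic sketch; the connection object, its methods, and the signal name are hypothetical stand-ins, since real open-CNC interfaces expose vendor-specific APIs.

```python
# Illustrative sketch only: the `cnc` object, its methods, and the signal
# name are hypothetical stand-ins for a vendor-specific open-CNC interface.
import time

class SpindleLoadMonitor:
    def __init__(self, cnc, limit_pct: float = 85.0):
        self.cnc = cnc          # open-CNC connection exposing internal data
        self.limit = limit_pct  # alarm threshold on spindle load (%)

    def poll(self, period_s: float = 0.1):
        while self.cnc.is_running():
            load = self.cnc.read("spindle_load_percent")  # CNC-internal signal
            if load > self.limit:
                # Low-intrusiveness reaction: notify the PC-based HMI and
                # leave the decision to stop the process to the operator.
                self.cnc.hmi_message(f"High spindle load: {load:.0f}%")
            time.sleep(period_s)
```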
Abstract:
Background: Persistent infection with oncogenic types of human papillomavirus (HPV) is the major risk factor for invasive cervical cancer (ICC), and non-European variants of HPV-16 are associated with an increased risk of persistence and ICC. HLA class II polymorphisms are also associated with genetic susceptibility to ICC. Our aim is to verify whether these associations are influenced by HPV-16 variability. Methods: We characterized HPV-16 variants by PCR in 107 ICC cases, which were typed for the HLA-DQA1, DRB1 and DQB1 genes and compared to 257 controls. We measured the magnitude of associations by logistic regression analysis. Results: European (E), Asian-American (AA) and African (Af) variants were identified. Here we show that the inverse association between DQB1*05 (adjusted odds ratio [OR] = 0.66; 95% confidence interval [CI]: 0.39-1.12) and HPV-16 positive ICC in our previous report was mostly attributable to AA variant carriers (OR = 0.27; 95% CI: 0.10-0.75). We observed similar proportions of HLA DRB1*1302 carriers in E-P positive cases and controls, but interestingly, this allele was not found in AA cases (p = 0.03, Fisher's exact test). A positive association with DRB1*15 was observed in both groups of women harboring either E (OR = 2.99; 95% CI: 1.13-7.86) or AA variants (OR = 2.34; 95% CI: 1.00-5.46). There was an inverse association between DRB1*04 and ICC among women with HPV-16 carrying the 350T (83L) single nucleotide polymorphism in the E6 gene (OR = 0.27; 95% CI: 0.08-0.96). An inverse association between DQB1*05 and cases carrying 350G (83V) variants was also found (OR = 0.37; 95% CI: 0.15-0.89). Conclusion: Our results suggest that the association between HLA polymorphism and the risk of ICC might be influenced by the distribution of HPV-16 variants.
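For readers unfamiliar with the measures reported above, an unadjusted odds ratio and its 95% confidence interval can be computed from a 2x2 table as sketched below; the counts are invented for illustration, and the study's ORs were additionally adjusted by logistic regression.

```python
# Unadjusted odds ratio with a 95% CI from a 2x2 table (Woolf method).
# The example counts are illustrative, not taken from the study.
import math

def odds_ratio(a: int, b: int, c: int, d: int):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
    return or_, lo, hi

print(odds_ratio(10, 97, 60, 197))  # invented carrier/non-carrier counts
```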
Abstract:
This paper proposes a novel computer vision approach that processes video sequences of people walking and then recognises those people by their gait. Human motion carries different information that can be analysed in various ways. The skeleton carries motion information about human joints, and the silhouette carries information about boundary motion of the human body. Moreover, binary and gray-level images contain different information about human movements. This work proposes to recover these different kinds of information to interpret the global motion of the human body based on four different segmented image models, using a fusion model to improve classification. Our proposed method considers the set of the segmented frames of each individual as a distinct class and each frame as an object of this class. The methodology applies background extraction using the Gaussian Mixture Model (GMM), a scale reduction based on the Wavelet Transform (WT) and feature extraction by Principal Component Analysis (PCA). We propose four new schemas for motion information capture: the Silhouette-Gray-Wavelet model (SGW) captures motion based on grey level variations; the Silhouette-Binary-Wavelet model (SBW) captures motion based on binary information; the Silhouette-Edge-Binary model (SEW) captures motion based on edge information and the Silhouette Skeleton Wavelet model (SSW) captures motion based on skeleton movement. The classification rates obtained separately from these four different models are then merged using a new proposed fusion technique. The results suggest excellent performance in terms of recognising people by their gait.
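A minimal sketch of the preprocessing chain described (GMM background extraction, wavelet-based scale reduction, PCA feature extraction) using common open-source libraries is given below; the parameters are assumptions, and the paper's four SGW/SBW/SEW/SSW variants and the fusion step are not reproduced.

```python
# Sketch of the GMM -> wavelet -> PCA chain on a walking sequence.
# Parameters are illustrative; the paper's four models and the fusion
# technique are not reproduced here.
import cv2
import numpy as np
import pywt
from sklearn.decomposition import PCA

bg = cv2.createBackgroundSubtractorMOG2()  # GMM background model

def frame_features(frames):
    rows = []
    for f in frames:  # grayscale frames of one walking sequence
        mask = bg.apply(f)  # silhouette via GMM background extraction
        ll, _ = pywt.dwt2(mask.astype(float), "haar")  # keep LL subband: scale reduction
        rows.append(ll.ravel())
    return np.vstack(rows)

def gait_descriptors(frames, n_components=20):
    # Compact per-frame descriptors via PCA over the wavelet features
    return PCA(n_components=n_components).fit_transform(frame_features(frames))
```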
Abstract:
The aim of this article is to analyze the theoretical model proposed by [Jabbour CJC, Santos FCA. Relationships between human resource dimensions and environmental management in companies: proposal of a model. Journal of Cleaner Production 2008;16(1):51-8.] based on data collected in four Brazilian companies. This model investigates how the phases of the environmental management system can be linked to human resource practices in order to attain continuous improvement of a company's environmental performance. Our aim is to contribute to a field which has little empirical evidence. Although the interaction between the phases of the environmental management system and human resource practices is recommended by the specialized literature [Daily BF, Huang S. Achieving sustainability through attention to human resource factors in environmental management. International Journal of Operations and Production Management 2001;21(12):1539-52.], the results indicate that most of the theoretical assumptions could not be confirmed in these Brazilian companies. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
One of the goals of an e-learning environment is to meet the individual needs of students during the learning process. The adaptation of contents, activities, and tools into different visualizations or a variety of content types is an important feature of such environments, giving users the sensation that the system offers workplaces suited to their profiles. Nevertheless, to achieve an efficient personalization process, it is important to investigate aspects of student behaviour, considering the context in which the interaction happens. The goal of this paper is to present an approach to identifying the student's learning profile by analyzing the context of interaction. Moreover, analyzing the learning profile along different dimensions allows the system to deal with different focuses of learning.
Abstract:
The aim of this work is to present a simple, practical, and efficient protocol for drug design, in particular for diabetes, which includes selection of the illness, a good choice of a target as well as a bioactive ligand, and then usage of various computer-aided drug design and medicinal chemistry tools to design novel potential drug candidates for different diseases. We selected the validated target dipeptidyl peptidase IV (DPP-IV), whose inhibition contributes to reducing glucose levels in type 2 diabetes patients. The most active inhibitor with a reported X-ray structure in complex with the target was initially extracted from the BindingDB database. By using molecular modification strategies widely used in medicinal chemistry, together with current state-of-the-art tools in drug design (including flexible docking, virtual screening, molecular interaction fields, molecular dynamics, ADME and toxicity predictions), we propose 4 novel potential DPP-IV inhibitors with drug properties for diabetes control, which have been supported and validated by all the computational tools used herein.
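As one concrete instance of the drug-property checks mentioned above, a Lipinski rule-of-five filter can be written with an open-source cheminformatics toolkit; the sketch below uses RDKit on an arbitrary molecule and is a generic filter, not the authors' specific ADME/toxicity workflow.

```python
# Generic Lipinski rule-of-five filter with RDKit; illustrative only,
# not the ADME/toxicity pipeline used in the study.
from rdkit import Chem
from rdkit.Chem import Descriptors

def passes_lipinski(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    return (
        Descriptors.MolWt(mol) <= 500
        and Descriptors.MolLogP(mol) <= 5
        and Descriptors.NumHDonors(mol) <= 5
        and Descriptors.NumHAcceptors(mol) <= 10
    )

print(passes_lipinski("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```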