385 results for Electrical engineering|Artificial intelligence
Abstract:
Digital Human Models (DHM) have been used for over 25 years. They have evolved from simple drawing templates, which are nowadays still used in architecture, to complex, Computer Aided Engineering (CAE)-integrated design and analysis tools for various ergonomic tasks. DHM are most frequently used for applications in product design and production planning, with many successful implementations documented. DHM from other domains, such as computer user interfaces, artificial intelligence, training and education, or the entertainment industry, show that there is also an ongoing development towards a comprehensive understanding and holistic modeling of human behavior. While the development of DHM for the game sector has seen significant progress in recent years, advances of DHM in the area of ergonomics have been comparatively modest. As a consequence, we need to question whether current DHM systems are fit for the design of future mobile work systems. So far it appears that DHM in ergonomics are rather limited to some traditional applications. According to Dul et al. (2012), future characteristics of Human Factors and Ergonomics (HFE) can be assigned to six main trends: (1) global change of work systems, (2) cultural diversity, (3) ageing, (4) information and communication technology (ICT), (5) enhanced competitiveness and the need for innovation, and (6) sustainability and corporate social responsibility. Based on a literature review, we systematically investigate the capabilities of current ergonomic DHM systems against these ‘Future of Ergonomics’ requirements. It is found that DHM already provide broad functionality in support of trends (1) and (2), and more limited options with regard to trend (3). Today’s DHM provide access to a broad range of national and international databases for correct differentiation and characterization of anthropometry for global populations. Some DHM explicitly address social and cultural modeling of groups of people. In comparison, the trends of the growing importance of ICT (4), the need for innovation (5) and sustainability (6) are addressed primarily from a hardware-oriented, engineering perspective and are not reflected in DHM. This reflects a persistent separation between hardware design (engineering) and software design (information technology) in the view of DHM – a disconnection which urgently needs to be overcome in the era of software-defined user interfaces and mobile devices. The design of a mobile ICT device is discussed to exemplify the need for a comprehensive future DHM solution. Designing such mobile devices requires an approach that includes organizational aspects as well as technical and cognitive ergonomics. Multiple interrelationships between the different aspects result in a challenging setting for future DHM. In conclusion, the ‘Future of Ergonomics’ poses particular challenges for DHM with regard to the design of mobile work systems and, moreover, mobile information access.
Abstract:
Clustering identities in a broadcast video is a useful task to aid in video annotation and retrieval. Quality-based frame selection is a crucial task in video face clustering, both to improve clustering performance and to reduce computational cost. We present a framework that selects the highest-quality frames available in a video to cluster faces. This frame selection technique is based on low-level and high-level features (face symmetry, sharpness, contrast and brightness) to select the highest-quality facial images available in a face sequence for clustering. We also consider the temporal distribution of the faces to ensure that the selected faces are taken at times distributed throughout the sequence. Normalized feature scores are fused, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face clustering system. We present a news video database to evaluate clustering system performance. Experiments on the newly created news database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved clustering performance.
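A minimal sketch of the kind of quality-fusion and temporally distributed selection step this abstract describes (not the authors' implementation; the cue formulas, min-max normalisation, equal weights and binning scheme are all assumptions):

```python
import numpy as np

def quality_scores(frames):
    """Score each grayscale face crop on simple low-level cues;
    higher is better for every cue."""
    cues = []
    for f in frames:
        f = f.astype(float)
        gy, gx = np.gradient(f)                          # image gradients
        sharpness = (gx ** 2 + gy ** 2).mean()           # crude focus measure
        contrast = f.std()
        brightness = 1.0 - abs(f.mean() / 255.0 - 0.5)   # prefer mid exposure
        symmetry = -np.abs(f - np.fliplr(f)).mean()      # left/right similarity
        cues.append([sharpness, contrast, brightness, symmetry])
    c = np.array(cues)
    c = (c - c.min(0)) / (c.max(0) - c.min(0) + 1e-9)    # min-max normalise
    return c.mean(axis=1)                                # equal-weight fusion

def select_frames(frames, k=5):
    """Pick the best-scoring frame inside each of k temporal bins, so
    selections stay distributed throughout the face sequence."""
    fused = quality_scores(frames)
    bins = np.array_split(np.arange(len(frames)), k)
    return [int(b[fused[b].argmax()]) for b in bins if len(b)]
```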
Abstract:
In people-to-people matching systems, filtering is widely applied to find the most suitable matches. The results returned are either too many when the search is generic or too few when it is specific, so a more sophisticated recommendation approach becomes necessary. Traditionally, the object of recommendation is an inanimate item. In online dating systems, reciprocal recommendation is required: a partner should be suggested only when both the user and the recommended candidate are likely to be satisfied. In this paper, an innovative reciprocal collaborative method is developed based on the idea of similarity and common neighbors, utilizing relevance feedback and feature importance information. Extensive experiments are carried out using data gathered from a real online dating service. Compared to benchmark methods, our results show the proposed method achieves noticeably better performance.
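A minimal sketch of a reciprocal, common-neighbour-style score of the kind the abstract outlines (the Jaccard similarity and harmonic-mean combination are illustrative assumptions, not the paper's exact method):

```python
def common_neighbor_sim(likes, a, b):
    """Similarity of two members as the Jaccard overlap of the people
    they both sent positive feedback to. `likes`: {member: set of members}."""
    na, nb = likes.get(a, set()), likes.get(b, set())
    return len(na & nb) / (len(na | nb) or 1)

def reciprocal_score(likes, user, candidate, members):
    """Harmonic mean of both directions' predicted interest, so a match
    scores highly only when each side is likely to be satisfied."""
    def one_way(u, v):
        # members similar to u who already liked v vote for the pair
        sims = [common_neighbor_sim(likes, u, w)
                for w in members if v in likes.get(w, set()) and w != u]
        return sum(sims) / (len(sims) or 1)
    p, q = one_way(user, candidate), one_way(candidate, user)
    return 2 * p * q / (p + q) if p + q else 0.0

# toy usage with two users (u*) and two candidates (c*)
likes = {"u1": {"c1"}, "u2": {"c1", "c2"},
         "c1": {"u1", "u2"}, "c2": {"u1", "u2"}}
print(reciprocal_score(likes, "u1", "c2", list(likes)))  # ~0.67
```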
Abstract:
A new type of social network for community and communication - online dating - is gaining momentum. With many people joining dating networks, users become overwhelmed by choices for an ideal partner. A solution to this problem is providing users with partner recommendations based on their interests and activities. Traditional recommendation methods ignore differences in users' needs and provide recommendations equally to all users. In this paper, we propose a recommendation approach that employs different recommendation strategies for different groups of members. A segmentation method using the Gaussian Mixture Model (GMM) is proposed to capture users' differing needs. A targeted recommendation strategy is then applied to each identified segment. Empirical results show that the proposed approach outperforms several existing recommendation methods.
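For flavour, a toy member segmentation along these lines using scikit-learn's GaussianMixture (the activity features and per-segment strategy names are invented for illustration and are not from the paper):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# toy activity features per member: [logins/week, messages sent, profile views]
X = np.array([[2, 1, 5], [3, 2, 8], [20, 30, 90],
              [25, 28, 70], [9, 10, 30]], dtype=float)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
segment = gmm.predict(X)          # soft clusters hardened to segment labels

# a different recommendation strategy can then be dispatched per segment
strategies = {0: "popularity-based", 1: "activity-based collaborative"}
for member_id, seg in enumerate(segment):
    print(member_id, strategies[seg])
```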
Abstract:
The rapid development of the World Wide Web has created massive amounts of information, leading to the information overload problem. Under this circumstance, personalization techniques have emerged to help users find content that meets their personal interests or needs within this massively increasing information. User profiling techniques play the core role in this research. Traditionally, most user profiling techniques create user representations in a static way. However, in real-world applications user interests may change over time. In this research we develop algorithms for mining user interests by integrating time decay mechanisms into topic-based user interest profiling. Time-forgetting functions are integrated into the calculation of topic interest measurements at a fine-grained level. The experimental study shows that accounting for the temporal effects of user interests through time-forgetting mechanisms yields better recommendation performance.
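A minimal sketch of a topic-interest profile with an exponential forgetting function, as one plausible instance of the time-decay mechanism described (the half-life parameterisation is an assumption; the paper's exact decay functions may differ):

```python
import math

def decayed_topic_interest(events, now, half_life_days=30.0):
    """Aggregate per-topic interest with an exponential forgetting
    function, so older interactions contribute less.
    `events`: iterable of (topic, weight, timestamp_in_days)."""
    lam = math.log(2) / half_life_days        # decay rate from the half-life
    profile = {}
    for topic, weight, t in events:
        age = now - t
        profile[topic] = profile.get(topic, 0.0) + weight * math.exp(-lam * age)
    return profile

# usage: an old strong signal can be outweighed by recent weaker ones
events = [("sports", 1.0, 0), ("movies", 0.4, 88), ("sports", 0.2, 85)]
print(decayed_topic_interest(events, now=90))
```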
Abstract:
Most recommender systems use collaborative filtering, content-based filtering or a hybrid approach to recommend items to new users. Collaborative filtering recommends items to new users based on their similar neighbours, while content-based filtering tries to recommend items that are similar to new users' profiles. The fundamental issues include how to profile new users and how to deal with over-specialization in content-based recommender systems. Indeed, the terms used to describe items can be organized into a concept hierarchy. We therefore aim to describe user profiles or information needs using concept vectors. This paper presents a new method to acquire users' information needs, which allows new users to describe their preferences on a concept hierarchy rather than by rating items. It also develops a new ranking function to recommend items to new users based on their information needs. The proposed approach is evaluated on Amazon book datasets. The experimental results demonstrate that the proposed approach can significantly improve the effectiveness of recommender systems.
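A minimal sketch of concept-vector profiling and ranking of the kind described (the ancestor-damping factor and cosine-style ranking are assumptions, not the paper's ranking function; the hierarchy is assumed to be a tree):

```python
import math

def expand_with_ancestors(concepts, parent):
    """Propagate an item's concepts up the hierarchy so a preference
    for 'Science' also matches items tagged 'Physics'."""
    expanded = dict(concepts)
    for c, w in concepts.items():
        p = parent.get(c)
        while p:                                   # assumes an acyclic tree
            expanded[p] = expanded.get(p, 0.0) + 0.5 * w  # damp ancestors
            p = parent.get(p)
    return expanded

def rank_score(user_vec, item_vec):
    """Cosine similarity between the user's stated concept preferences
    and the item's hierarchy-expanded concept vector."""
    dot = sum(user_vec.get(c, 0.0) * w for c, w in item_vec.items())
    nu = math.sqrt(sum(v * v for v in user_vec.values()))
    ni = math.sqrt(sum(v * v for v in item_vec.values()))
    return dot / (nu * ni) if nu and ni else 0.0

parent = {"Physics": "Science", "Algebra": "Mathematics"}
item = expand_with_ancestors({"Physics": 1.0}, parent)
user = {"Science": 0.8, "Mathematics": 0.2}      # stated preferences, no ratings
print(rank_score(user, item))                    # > 0: 'Science' matches 'Physics'
```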
Abstract:
Various reputation models are used on the web to generate reputation values for products from users' review data. Most current reputation models use review ratings and neglect users' textual reviews, because text is more difficult to process. However, we argue that the overall reputation score for an item does not reflect the actual reputation of all of its features, which is why using users' textual reviews is necessary. In our work we introduce a new reputation model that defines a new aggregation method for users' opinions about product features extracted from review text. Our model uses a feature ontology to define the general features and sub-features of a product. It also reflects the frequencies of positive and negative opinions. We provide a case study to show how our results compare with other reputation models.
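A minimal sketch of a frequency-weighted roll-up over a feature ontology, in the spirit of the aggregation the abstract suggests (the toy ontology and the specific weighting are assumptions, not the paper's model):

```python
def feature_reputation(pos, neg):
    """Score one feature from its positive/negative opinion counts; the
    mention count becomes the feature's weight in the roll-up."""
    total = pos + neg
    return (pos / total if total else 0.5), total

def product_reputation(opinions, children, feature="product"):
    """Roll sub-feature scores up the ontology, weighting each child by
    how often it was mentioned. `opinions`: {feature: (pos, neg)}."""
    subs = children.get(feature, [])
    if not subs:                                   # leaf feature
        return feature_reputation(*opinions.get(feature, (0, 0)))
    scored = [product_reputation(opinions, children, c) for c in subs]
    weight = sum(n for _, n in scored)
    if not weight:
        return 0.5, 0
    score = sum(s * n for s, n in scored) / weight
    return score, weight

# usage with a toy camera ontology
children = {"product": ["image", "battery"], "image": ["zoom", "colour"]}
opinions = {"zoom": (40, 10), "colour": (15, 5), "battery": (8, 22)}
print(product_reputation(opinions, children))   # overall score, mention count
```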
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has attracted wide attention from researchers in different fields. This paper first introduces feature selection methods, implementation algorithms and applications of text classification. However, the knowledge extracted by current data-mining techniques for text classification contains much noise, which introduces uncertainty into the classification process at both the knowledge-extraction and knowledge-usage stages; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving knowledge extraction and effectively utilizing the extracted knowledge remain critical and challenging steps. A Rough Set decision-making approach is proposed, using Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate the Rough Set concepts and the Rough Set based decision-making approach for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric named CEI, which is effective for assessing the performance of similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and other related fields.
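To make the Rough Set idea concrete, a minimal sketch of lower/upper approximations over documents grouped by identical feature patterns (illustrative only; the paper's framework and its CEI metric are not reproduced here):

```python
def approximations(blocks, target):
    """Rough Set lower/upper approximations of a document class.
    `blocks`: equivalence classes (sets of doc ids) induced by identical
    feature patterns; `target`: docs labelled with the class."""
    lower, upper = set(), set()
    for b in blocks:
        if b <= target:          # block entirely inside the class
            lower |= b
        if b & target:           # block overlaps the class
            upper |= b
    return lower, upper

blocks = [{1, 2}, {3}, {4, 5}]   # docs indistinguishable by their features
sports = {1, 2, 3, 4}            # docs labelled 'sports'
low, up = approximations(blocks, sports)
print(low)        # {1, 2, 3}: certainly sports
print(up - low)   # {4, 5}: boundary region - the hard-to-separate docs
```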
Abstract:
BACKGROUND Research on engineering design is a core area of concern within engineering education, and a fundamental understanding of how engineering students approach and undertake design is necessary in order to develop effective design models and pedagogies. Understanding the factors related to design experiences in education and how they affect student practice can help educators as well as designers to leverage these factors as part of the design process. PURPOSE This study investigated the design practices of first-year engineering students and their experiences with a first-year engineering course design project. The research questions that guided the investigation were: 1. From a student perspective, what design parameters or criteria are most important? 2. How does this perspective impact subsequent student design practice throughout the design process? DESIGN/METHOD The authors employed qualitative multi-case study methods (Miles & Huberman, 1994) in order to answer the research questions. Participant teams were observed and video recorded during team design meetings in which they researched the background for the design problem, brainstormed and sketched possible solutions, and built prototypes and final models of their design solutions as part of a course design project. Analysis focused on explanation building (Yin, 2009) and utilized within-case and cross-case analysis (Miles & Huberman, 1994). RESULTS We found that students focused disproportionately on the functional parameter, i.e. the physical implementation of their solution, and the possible/applicable parameter, i.e. a possible and applicable solution that benefited the user, in comparison to other given parameters such as safety and innovativeness. In addition, we found that individual teams focused on the functional and possible/applicable parameters in early design phases such as brainstorming/ideation and sketching. When prompted to discuss these non-salient parameters (from the student perspective) in the final design report, student design teams often used post-hoc justifications to support how the final designs fit the parameters they did not initially consider. CONCLUSIONS This study suggests that student design teams become fixated on (and consequently prioritize) certain parameters they interpret as important because they feel these parameters were described more explicitly in terms of how they were to be met and assessed. Students fail to consider other parameters, perceived to be less directly assessable, unless prompted to do so. Failure to consider other parameters in the early design phases subsequently affects their approach in later design phases as well. Case studies examining students' study strategies within three Australian universities illustrate similarities with some student approaches to design.
Abstract:
Purpose of review: Artificial corneas are being developed to meet a shortage of donor corneas as well as to address cases where allografting is contraindicated. A range of artificial corneas has been developed. Here we review several newer designs, especially those inspired by naturally occurring biomaterials found within the human body and elsewhere. Recent findings: Recent trends in the development of artificial corneas indicate a move towards the use of materials derived from native sources, including decellularized corneal tissue and tissue substitutes synthesized by corneal cells in vitro, grown either on their own or in conjunction with novel protein-based scaffolds. Biologically inspired materials are also being considered for implantation on their own, with a view to promoting regeneration of endogenous corneal tissue. Summary: More recent attempts at making artificial corneas have taken a more nature-based or nature-inspired approach. Several are likely to become clinically available in the near future.
Abstract:
While social engineering represents a real and ominous threat to many organizations, companies, governments, and individuals, social networking sites (SNSs) have been identified as among the most common means of social engineering attacks. Owing to factors that reduce the ability of users to detect social engineering tricks and increase the ability of attackers to launch them, SNSs seem to be a perfect breeding ground for exploiting the vulnerabilities of people - the weakest link in security. This work contributes to the knowledge of social engineering by identifying the different entities and sub-entities that affect social engineering based attacks in SNSs. Moreover, this paper includes an intensive and comprehensive overview of different aspects of social engineering threats in SNSs.
Abstract:
There is no doubt that social engineering plays a vital role in compromising most security defenses and in attacks on people, organizations, companies, and even governments. It is the art of deceiving and tricking people into revealing critical information or performing an action that benefits the attacker in some way. Fraudulent and deceptive people have been deploying social engineering traps and tactics through information technology such as e-mails, social networks, web sites, and applications to trick victims into obeying them, accepting threats, and falling victim to various crimes and attacks such as phishing, sexual abuse, financial abuse, identity theft, impersonation, physical crime, and many other forms of attack. Although organizations, researchers, practitioners, and lawyers recognize the severe risk of social engineering-based threats, there is a severe lack of understanding and control of such threats. One side of the problem is perhaps the unclear concept of social engineering itself, as well as the complexity of understanding how humans behave toward, approach, accept, and fail to recognize threats or the deception behind them. The aim of this paper is to explain the definition of social engineering based on theories from related disciplines such as psychology, sociology, information technology, marketing, and behaviourism. We hope, by this work, to help researchers, practitioners, lawyers, and other decision makers to get a fuller picture of social engineering and, therefore, to open new directions of collaboration toward detecting and controlling it.
Abstract:
Modern health information systems can generate several exabytes of patient data, the so-called "Health Big Data", per year. Many health managers and experts believe that, with these data, it is possible to discover useful knowledge to improve health policies, increase patient safety, and eliminate redundancies and unnecessary costs. The objective of this paper is to discuss the characteristics of Health Big Data as well as the challenges and solutions for health Big Data Analytics (BDA) – the process of extracting knowledge from sets of Health Big Data – and to design and evaluate a pipelined framework for use as a guideline/reference in health BDA.
Abstract:
Due to the health impacts caused by exposure to air pollutants in urban areas, the monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations from the Rey monitoring station in the south of Tehran, from January 2007 to February 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM model has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It was concluded that the errors decrease after size reduction and that the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
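A minimal sketch of the hybrid idea - PLS for input reduction feeding an SVM regressor - using scikit-learn and synthetic data standing in for the CO measurements (the component count and kernel settings are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 12 stand-in meteorological/traffic inputs
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=200)  # stand-in CO series

# PLS extracts a few latent components most relevant to the target ...
pls = PLSRegression(n_components=3).fit(X[:150], y[:150])
Z_train, Z_test = pls.transform(X[:150]), pls.transform(X[150:])

# ... and the SVM performs the actual regression on the reduced inputs
svm = SVR(kernel="rbf", C=10.0).fit(Z_train, y[:150])
pred = svm.predict(Z_test)
rmse = float(np.sqrt(np.mean((pred - y[150:]) ** 2)))
print(f"test RMSE: {rmse:.3f}")       # smaller inputs also mean faster training
```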
Abstract:
Road networks are a national critical infrastructure. Road assets need to be monitored and maintained efficiently as their condition deteriorates over time. The condition of one such asset, road pavement, plays a major role in road network maintenance programmes. Pavement conditions depend upon many factors, such as pavement type and traffic and environmental conditions. This paper presents a data analytics case study for assessing the factors affecting the pavement deflection values measured by the traffic speed deflectometer (TSD) device. The analytics process includes the acquisition and integration of data from multiple sources, data pre-processing, mining useful information from the data, and utilising the data mining outputs for knowledge deployment. Data mining techniques are able to show how TSD outputs vary across different roads and traffic and environmental conditions. The generated data mining models map the TSD outputs to classes and define correction factors for each class.
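A minimal sketch of the deployment step this abstract describes - a classifier mapping road, traffic and environmental attributes to a class, each class carrying its own correction factor (the features, classes and factor values are hypothetical, chosen only to illustrate the pattern):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# toy integrated records: [pavement_type_code, daily_traffic_k, surface_temp_C]
X = np.array([[0, 5, 15], [0, 40, 30], [1, 8, 12],
              [1, 35, 28], [0, 6, 14], [1, 33, 27]])
cls = np.array([0, 1, 0, 2, 0, 2])      # hypothetical deflection-behaviour classes

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, cls)

# hypothetical per-class correction factors applied to raw TSD deflections
correction = {0: 1.00, 1: 0.92, 2: 1.08}
raw_deflection = 210.0                   # raw TSD reading (micrometres)
c = int(model.predict([[1, 36, 29]])[0])
print(raw_deflection * correction[c])    # corrected deflection for that class
```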