923 results for Other Computer Engineering
Abstract:
Background: Constructive alignment (CA) is a pedagogical approach that emphasizes the alignment between intended learning outcomes (ILOs), teaching and learning activities (TLAs), and assessment tasks (ATs), as well as the creation of a teaching/learning environment in which students can actively construct their knowledge. Objectives: This paper investigates the extent to which courses in the Computer Engineering and Informatics departments at Dalarna University, Sweden, are constructively aligned. The study is based on empirical observations of teachers' perceptions of how CA is implemented in their courses. Methods: Ten teachers (five from each department) were asked to fill in a paper-based questionnaire containing a number of questions on issues of implementing CA in courses. Results: Responses to the questionnaire items were mixed. Teachers clearly state the ILOs in their courses and try to align the TLAs and ATs to the ILOs. However, Computer Engineering teachers, unlike Informatics teachers, do not explicitly communicate the ILOs to their students. In addition, Computer Engineering teachers reported that their students are less active in learning activities than did Informatics teachers. When asked for their subjective ratings of teaching methods, all teachers stated that their current teaching is teacher-centered but that they try to shift the focus of activity from themselves to the students. Conclusions: From the teachers' perspective, the courses are partially constructively aligned. The courses are "aligned", i.e., the ILOs, TLAs, and ATs are aligned with each other, but they are not "constructive" since, according to the teachers, student engagement in learning activities was low, especially in the Computer Engineering department.
Abstract:
Broad consensus has been reached within the Education and Cognitive Psychology research communities on the need to center the learning process on experimentation and the concrete application of knowledge, rather than on a bare transfer of notions. Several advantages arise from this educational approach, ranging from the reinforcement of student learning, to the increased opportunity for students to gain greater insight into the topics studied, to the possibility for learners to acquire practical skills and long-lasting proficiency. This is especially true in Engineering education, where integrating conceptual knowledge and practical skills assumes strategic importance. In this scenario, learners are called on to play a primary role: they are actively involved in the construction of their own knowledge instead of passively receiving it. As a result, traditional teacher-centered learning environments should be replaced by novel learner-centered solutions. Information and Communication Technologies enable the development of innovative solutions that answer the need for experimentation support in educational contexts. Virtual Laboratories, Adaptive Web-Based Educational Systems, and Computer-Supported Collaborative Learning environments can significantly foster different learner-centered instructional strategies, offering the opportunity to enhance personalization, individualization, and cooperation. More specifically, they allow students to explore different kinds of materials, to access and compare several information sources, to face real or realistic problems, and to work on authentic and multi-faceted case studies. In addition, they encourage cooperation among peers and provide support through coached and scaffolded activities aimed at fostering reflection and meta-cognitive reasoning. This dissertation guides readers through this research field, presenting both the theoretical and applied results of research aimed at designing an open, flexible, learner-centered virtual lab for supporting students in learning Information Security.
Abstract:
Many research-based instruction strategies (RBISs) have been developed, and their superior efficacy with respect to student learning has been demonstrated in many studies. Collecting and interpreting evidence about 1) the extent to which electrical and computer engineering (ECE) faculty members are using RBISs in core, required engineering science courses, and 2) the concerns they express about using them, is an important part of understanding how engineering education is evolving. The authors surveyed ECE faculty members, asking about their awareness and use of selected RBISs. The survey also asked what concerns ECE faculty members had about using RBISs. Respondent data showed that awareness of RBISs was very high, but estimates of their use, based on the survey data, varied from 10% to 70%, depending on characteristics of the strategy. The most significant concern was the amount of class time that using an RBIS might take; efforts to increase the use of RBISs must address this concern.
Abstract:
The evidence suggests that emotional intelligence and personality traits are important qualities that workers need in order to practice a profession successfully. This article assumes that the main purpose of universities is to promote employment by providing an education that facilitates the acquisition of abilities, skills, competencies, and values. In this study, the emotional intelligence and personality profiles of two groups of Spanish students studying degrees in two different academic disciplines, computer engineering and teacher training, were analysed and compared. In addition, the skills forming part of the emotional intelligence and personality traits required of professionals (computer engineers and teachers) in their work were studied, and the profiles obtained for the students were compared with those identified by the professionals in each field. Results revealed significant differences between the profiles of the two groups of students, with the teacher training students scoring higher on interpersonal skills; differences were also found between professionals and students for most competencies, with professionals in both fields demanding more competencies than those evidenced by graduates. The implications of these results for the incorporation of generic social, emotional, and personal competencies into the university curriculum are discussed.
Abstract:
This thesis covers the challenges of creating and maintaining an introductory engineering laboratory. The history of the University of Illinois Electrical and Computer Engineering department's introductory course, ECE 110, is recounted. The state of the course as of Fall 2008 is discussed, along with current challenges arising from the use of a hand-wired prototyping board with logic gates. A plan for overcoming these issues using a new microcontroller-based board with a pseudo hardware description language is presented, and the new microcontroller-based system implementation is detailed extensively along with its accompanying description language. The new system was tried in several sections of the Fall 2008 semester alongside the old system, and the students' final performances with the two approaches are compared in terms of design, performance, complexity, and enjoyment. In its first run, the system shows great promise, increasing the students' enjoyment and improving the performance of their designs.
Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data in order to encourage its global adoption. Current techniques for storing semistructured documents either map them to relational databases or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed the large-scale adoption of XML in actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge in leveraging semistructured data is to perform effective information discovery on such data. Previous work has addressed this problem in a generic (i.e., domain-independent) way, but the process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: we proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives, and we developed a Double-Lazy Parser for semistructured documents that introduces lazy behavior in both the pre-parsing and progressive-parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing information discovery over domain-specific semistructured documents. This goal also had two aims: we presented a framework that exploits domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies, and we proposed meaningful evaluation metrics for comparing the results of search systems over semistructured documents.
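The abstract does not include implementation details of the Double-Lazy Parser. As a rough, generic illustration of the lazy-parsing idea it builds on (not the dissertation's parser), the sketch below uses Python's standard iterparse to stream a large XML document instead of materializing the full DOM up front; the file name and target tag are hypothetical.

    # Illustrative sketch only: streaming ("lazy") traversal of a large XML
    # document with the Python standard library, instead of building the
    # whole DOM tree in memory before any query can run.
    import xml.etree.ElementTree as ET

    def find_texts(path, wanted_tag="title"):
        """Yield text of matching elements without materializing the full tree."""
        for _, elem in ET.iterparse(path, events=("end",)):
            if elem.tag == wanted_tag and elem.text:
                yield elem.text
            elem.clear()  # free already-processed nodes to keep memory flat

    # Usage (hypothetical file name):
    # for title in find_texts("articles.xml"):
    #     print(title)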
Abstract:
The focus of this thesis is text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange (ASCII). The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques were thoroughly reviewed to ensure a proper contrast between the compression results of the proposed technique and those of existing ones. The research problem is motivated by the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these files. It was deemed necessary that the compression algorithm be effective even for small files and be able to handle uncommon words, which are dynamically added to the dictionary as they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques; in other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates these capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
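The thesis's specific algorithm is not reproduced in this abstract. As a generic illustration of dictionary-based text compression with dynamic dictionary growth (classic LZW, not the thesis's method), the Python sketch below adds sequences to the dictionary as they are encountered, so even small inputs build useful entries.

    # Generic illustration of dictionary-based compression (classic LZW),
    # not the thesis's algorithm: unseen sequences are learned on the fly.
    def lzw_compress(text):
        dictionary = {chr(i): i for i in range(256)}  # seed with single ASCII chars
        w, out = "", []
        for c in text:
            wc = w + c
            if wc in dictionary:
                w = wc                             # extend the current match
            else:
                out.append(dictionary[w])          # emit code for longest match
                dictionary[wc] = len(dictionary)   # learn the new sequence
                w = c
        if w:
            out.append(dictionary[w])
        return out

    print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))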
Abstract:
Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. In response to these demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and to incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic-gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results, and the TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is therefore also proposed to transform unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
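The framework's own components (HCFG, HIGA, IF-TMCA) cannot be reconstructed from the abstract alone. As a small illustration of only the affinity-propagation summarization step, the hedged Python sketch below uses scikit-learn's AffinityPropagation to reduce a collection of stand-in feature vectors to its exemplars; the feature data is synthetic.

    # Illustration of the affinity-propagation idea only (not the
    # dissertation's full framework): cluster feature vectors and keep
    # each cluster's exemplar as a compact summary of the collection.
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(0)
    features = rng.normal(size=(60, 8))   # stand-in multimedia feature vectors

    ap = AffinityPropagation(random_state=0).fit(features)
    exemplars = features[ap.cluster_centers_indices_]
    print(f"{len(features)} items summarized by {len(exemplars)} exemplars")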
Abstract:
This work consists of the design and implementation of a complete monitored security system. Two computers make up the basic system: one is the transmitter and the other is the receiver, and the two are interconnected by modems. Depending on the status of the input sensors (magnetic contacts, motion detectors, and others), the transmitter detects an alarm condition and sends a detailed report of the event via modem to the receiver computer.
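Hardware and protocol details are not given in this abstract. The following schematic Python sketch conveys only the transmitter's general polling logic under assumed interfaces; the sensor names and the read_sensor and send_report functions are hypothetical placeholders, not the thesis's implementation.

    # Schematic sketch of the transmitter's alarm loop (hypothetical
    # interfaces; the thesis's modem protocol is not specified here).
    import time
    from datetime import datetime

    SENSORS = {"front_door": "magnetic contact", "hallway": "motion detector"}

    def read_sensor(name):
        """Placeholder for real sensor I/O; True means the sensor tripped."""
        return False

    def send_report(report):
        """Placeholder for transmitting the report over the modem link."""
        print("MODEM >>", report)

    while True:
        for name, kind in SENSORS.items():
            if read_sensor(name):
                send_report(f"{datetime.now().isoformat()} ALARM {kind} '{name}'")
        time.sleep(1)  # poll the sensors once per second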
Abstract:
The primary purpose of this thesis was to design and develop a prototype e-commerce system in which dynamic parameters are included in the decision-making process and execution of an online transaction. The system developed and implemented takes into account previous usage history, priority, and associated engineering capabilities. It was built on a three-tiered client-server architecture: the interface was the Internet browser; the middle-tier web server was implemented using Active Server Pages, which form a link between the client system and other servers; and a relational database management system formed the data component. The system includes a data warehousing capability, which extracts needed information from the stored data about customers and their orders. The system organizes and analyzes the data generated during a transaction to formulate a model of the client's behavior during and after the transaction, and this model is used for making decisions, such as pricing and order rescheduling, during the client's subsequent transactions. Among other things, the system helps bring predictability to the transaction execution process, which is highly desirable in the current competitive scenario.
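The thesis's actual decision rules and schema are not specified here. As a hedged illustration of the general idea of history-aware pricing, the Python sketch below adjusts a base price using a simple client behavior model; the ClientHistory fields, weights, and thresholds are all assumptions.

    # Hedged sketch of history-aware pricing (the thesis's actual rules
    # and schema are not given; all names and weights are hypothetical).
    from dataclasses import dataclass

    @dataclass
    class ClientHistory:
        orders: int          # completed orders
        cancellations: int   # cancelled or rescheduled orders
        priority: int        # 1 (low) .. 3 (high), assumed scale

    def quoted_price(base, h):
        """Adjust a base price using the client's behavior model."""
        loyalty = min(h.orders, 50) * 0.002       # up to 10% loyalty discount
        risk = min(h.cancellations, 10) * 0.01    # up to 10% risk surcharge
        rush = 0.05 if h.priority == 3 else 0.0   # premium for high priority
        return round(base * (1 - loyalty + risk + rush), 2)

    print(quoted_price(100.0, ClientHistory(orders=20, cancellations=1, priority=3)))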
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data in a way that highlights their similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because such data cannot be represented graphically, so PCA is a powerful method for analyzing them. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The JTC is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. In general, the IPCA algorithm behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; it is also better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model outperforms the optical model.
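The thesis's IPCA variant and its MATLAB models are not reproduced in this abstract. As a minimal, generic illustration of PCA-based image compression only (standard PCA, not IPCA), the NumPy sketch below projects an image onto its top-k principal components and reconstructs an approximation; the image is synthetic.

    # Minimal PCA image-compression sketch in NumPy (generic PCA, not the
    # thesis's IPCA variant): keep the top-k principal components of an
    # image's rows and reconstruct an approximation from them.
    import numpy as np

    def pca_compress(img, k):
        mean = img.mean(axis=0)
        centered = img - mean
        # Principal directions via SVD of the mean-centered data
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:k]                    # top-k principal directions
        scores = centered @ basis.T       # compressed representation
        return scores @ basis + mean      # reconstructed approximation

    img = np.random.default_rng(0).random((64, 64))  # stand-in grayscale image
    approx = pca_compress(img, k=8)
    print("relative error:", np.linalg.norm(img - approx) / np.linalg.norm(img))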