943 results for engineering, electrical


Relevance: 60.00%

Publisher:

Abstract:

Online Social Network (OSN) services provided by Internet companies bring people together to chat and to share and consume information. Meanwhile, those services (which can be regarded as social media) generate huge amounts of data every day, hour, minute, and second. Researchers are currently interested in analyzing OSN data, extracting interesting patterns from it, and applying those patterns to real-world applications. However, the large scale of OSN data makes it difficult to analyze effectively. This dissertation focuses on applying data mining and information retrieval techniques to mine two key components of social media data: users and user-generated content. Specifically, it addresses three problems related to social media users and content: (1) how does one organize the users and the content? (2) how does one summarize the textual content so that users do not have to read every post to capture the general idea? (3) how does one identify influential users in social media to benefit other applications, e.g., marketing campaigns? The contributions of this dissertation are briefly summarized as follows. (1) It provides a comprehensive and versatile data mining framework for analyzing users and user-generated content from social media. (2) It designs a hierarchical co-clustering algorithm to organize users and content. (3) It proposes multi-document summarization methods to extract core information from social network content. (4) It introduces three important dimensions of social influence and a dynamic influence model for identifying influential users.
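The abstract does not specify the summarization methods themselves. As a rough illustration of extractive multi-document summarization (a toy word-frequency scorer, not the dissertation's algorithm; all names below are hypothetical):

```python
import re
from collections import Counter

def summarize(posts, k=2):
    """Pick the k highest-scoring sentences across all posts.

    A sentence's score is the average corpus frequency of its words,
    so sentences built from common (central) terms rank first. This is
    a minimal stand-in for the summarizers the dissertation proposes.
    """
    sentences = [s for post in posts
                 for s in re.split(r"(?<=[.!?])\s+", post) if s]
    freq = Counter(w for s in sentences
                   for w in re.findall(r"\w+", s.lower()))

    def score(sentence):
        words = re.findall(r"\w+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    return sorted(sentences, key=score, reverse=True)[:k]
```

Real multi-document summarizers add redundancy control and richer sentence features, but the frequency signal above is the usual starting point.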

Relevance: 60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 60.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance: 60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 60.00%

Publisher:

Abstract:

At the Institut für Arbeitswissenschaft und Betriebsorganisation (ifab) of the University of Karlsruhe, the project LIVE-Fab (Learning in the Virtual Factory) is currently being carried out together with the Fachhochschule Landshut, Department of Mechanical Engineering. The project is funded by the Federal Ministry of Education and Research (BMBF) under the program "Neue Medien in der Bildung" (New Media in Education). Its goal is the development of a vivid teaching and learning model of a factory as a functioning whole. To this end, a model factory comprising the areas of goods receiving, manufacturing, assembly, and quality assurance is to be represented in the computer. The factory, with its equipment (machines, transport systems, etc.) and material flows, is to be made visually accessible in a 3D model. The foundations for creating a virtually functioning production system, including plant layout planning, production planning, the underlying mechanisms, customer orders, and quality management, are to be conveyed to students through individual case studies. The virtual factory is intended to give students from the departments of mechanical engineering, industrial engineering, electrical engineering, and technically oriented business administration a tool with which they can better understand the complex, interlocking operations of a production process. This means that the virtual factory combines the content of several preceding lectures and thereby creates an integrated basis for understanding production processes. (DIPF/Orig.)

Relevance: 60.00%

Publisher:

Abstract:

This dissertation is the result of a study carried out between March 2015 and March 2016 on the topic of energy efficiency in buildings, within the scope of the second-year dissertation of the Master's degree in Electrical Engineering (Sistemas Elétricos de Energia) at the Instituto Superior de Engenharia do Porto (ISEP). Buildings are currently responsible for about 40% of energy consumption in most European countries, energy consumed mainly in heating, cooling, and powering electrical appliances. Hospitals, as large buildings, are major energy consumers and, in most European countries, rank among the least efficient public buildings. In this context, they represent a type of building whose activity offers significant energy-saving potential. The activity carried out there, combined with the specificities of the health sector, makes this type of building a very attractive target for energy analysis and optimization. The present work studies the energy-efficiency potential of a hospital located in the Porto area. A survey of energy needs was initially carried out to identify the priority sectors for intervention. The study includes analysis of the consumption data obtained through monitoring, replacement of the existing lighting with more efficient lighting, installation of solar panels to reduce the energy consumed for domestic hot water, replacement of a diesel boiler with a biomass boiler, and replacement of a chiller with a more efficient one, among other measures. The consumption recorded at the hospital under study is compared against a national plan (Energy and Water Efficiency in the National Health System) in order to understand how its consumption compares with that of other hospitals.

Relevance: 60.00%

Publisher:

Abstract:

A poster of this paper will be presented at the 25th International Conference on Parallel Architectures and Compilation Techniques (PACT '16), September 11-15, 2016, Haifa, Israel.

Relevance: 60.00%

Publisher:

Abstract:

The performance, energy-efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into the third dimension, and it facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory wall and the communication wall. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment needed to develop the technology and in the increased complexity of design. The two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.
Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impact on the feasible region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the Co-Design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e., power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.
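One way to see why the thermal and power domains must be solved together, as the Co-Design argument above requires: leakage power rises with temperature while temperature rises with total power, so neither quantity can be computed in isolation. A minimal fixed-point sketch (all constants are illustrative placeholders, not values from the dissertation's framework):

```python
def coupled_steady_temp(p_dyn, r_th, t_amb=45.0,
                        p_leak0=2.0, alpha=0.01, iters=100):
    """Iterate the thermal/power coupling to a steady state.

    p_dyn  : dynamic power (W), independent of temperature
    r_th   : package thermal resistance (K/W); higher in a 3D stack
    p_leak0, alpha : illustrative leakage model, linear in temperature
    """
    t = t_amb
    for _ in range(iters):
        p_leak = p_leak0 * (1.0 + alpha * (t - t_amb))  # leakage grows with temp
        t = t_amb + r_th * (p_dyn + p_leak)             # temp grows with power
    return t
```

Raising the thermal resistance, as stacking dies in a 3D package does, pushes the coupled solution further above a power-only estimate, which is one motivation for multi-domain co-simulation.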

Relevance: 60.00%

Publisher:

Abstract:

When a task must be executed in a remote or dangerous environment, teleoperation systems may be employed to extend the influence of the human operator. In the case of manipulation tasks, haptic feedback of the forces experienced by the remote (slave) system is often highly useful in improving an operator's ability to perform effectively. In many such cases (especially teleoperation over the internet and ground-to-space teleoperation), substantial communication latency exists in the control loop and has a strong tendency to destabilize the system. The first viable solution to this problem in the literature was based on a scattering/wave transformation from transmission line theory. This wave transformation requires the designer to select a wave impedance parameter appropriate to the teleoperation system. It is widely recognized that a small value of wave impedance is well suited to free motion and a large value is preferable for contact tasks. Beyond this basic observation, however, very little guidance exists in the literature regarding the selection of an appropriate value. Moreover, prior research on impedance selection generally fails to account for the fact that any realistic contact task simultaneously involves contact considerations (perpendicular to the contact surface) and quasi-free-motion considerations (parallel to the contact surface). The primary contribution of the present work is to introduce an approximate linearized optimum for the choice of wave impedance and to apply this quasi-optimal choice to the Cartesian reality of such a contact task, in which a given joint cannot be expected to be either perfectly normal or perfectly parallel to the motion constraint. The proposed scheme selects a wave impedance matrix appropriate to the conditions encountered by the manipulator.
This choice may be implemented as a static wave impedance value or as a time-varying choice updated according to the instantaneous conditions encountered. A Lyapunov-like analysis is presented demonstrating that time variation in wave impedance will not violate the passivity of the system. Experimental trials, both in simulation and on a haptic feedback device, are presented to validate the technique. Consideration is also given to the case of an uncertain environment, in which an a priori impedance choice may not be possible.
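The scattering/wave transformation referred to above has a standard form in the teleoperation literature; a minimal sketch with wave impedance b (symbols follow common convention, not necessarily this dissertation's notation):

```python
import math

def to_wave(velocity, force, b):
    """Encode a velocity/force pair into forward/backward wave
    variables via the scattering transformation with wave impedance
    b > 0. Wave variables, unlike raw power variables, can be
    transmitted over an arbitrary constant delay without destroying
    passivity of the communication channel."""
    u = (b * velocity + force) / math.sqrt(2.0 * b)
    v = (b * velocity - force) / math.sqrt(2.0 * b)
    return u, v

def from_wave(u, v, b):
    """Invert the transformation to recover velocity and force."""
    velocity = (u + v) / math.sqrt(2.0 * b)
    force = math.sqrt(b / 2.0) * (u - v)
    return velocity, force
```

The trade-off discussed in the abstract appears directly in b: a small impedance weights velocity lightly (suited to free motion), a large one weights it heavily (suited to contact); the identity (u² − v²)/2 = F·ẋ is what preserves the power balance across the delayed channel.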

Relevance: 60.00%

Publisher:

Abstract:

© 2015 The Institution of Engineering and Technology. In this study, the authors derive some new refined Jensen-based inequalities, which encompass both the Jensen inequality and its most recent improvement based on the Wirtinger integral inequality. The potential of this approach is demonstrated through applications to the stability analysis of time-delay systems. More precisely, using the newly derived inequalities, they establish new stability criteria for two classes of time-delay systems, namely systems with discrete and distributed constant delays and systems with interval time-varying delays. The resulting stability conditions are derived in terms of linear matrix inequalities, which can be efficiently solved by various convex optimisation algorithms. Numerical examples are given to show the effectiveness and the reduced conservatism of the results obtained in this study.
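For context, the two baseline bounds the refinement builds on can be stated as follows (standard forms from the delay-systems literature; notation may differ from the paper's):

```latex
% Jensen integral inequality: for R = R^\top > 0,
\int_{t-h}^{t} \dot x^{\top}(s)\, R\, \dot x(s)\, ds
  \;\ge\; \frac{1}{h}\, \omega_0^{\top} R\, \omega_0,
\qquad \omega_0 = x(t) - x(t-h).

% Wirtinger-based refinement (Seuret--Gouaisbaut):
\int_{t-h}^{t} \dot x^{\top}(s)\, R\, \dot x(s)\, ds
  \;\ge\; \frac{1}{h}\!\left( \omega_0^{\top} R\, \omega_0
        + 3\, \omega_1^{\top} R\, \omega_1 \right),
\qquad \omega_1 = x(t) + x(t-h) - \frac{2}{h}\int_{t-h}^{t} x(s)\, ds.
```

The extra term in ω₁ is what tightens the bound; the refined inequalities in the paper add further correction terms of the same flavour.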

Relevance: 60.00%

Publisher:

Abstract:

The objective of this paper is to present the current evidence on the effectiveness of pair programming (PP) as a pedagogical tool in higher education CS/SE courses. We performed a systematic literature review (SLR) of empirical studies that investigated factors affecting the effectiveness of PP for CS/SE students and of studies that measured the effectiveness of PP for CS/SE students. Seventy-four papers were used in our synthesis of evidence, and 14 compatibility factors that can potentially affect PP's effectiveness as a pedagogical tool were identified. Results showed that students' skill level was the factor that affected PP's effectiveness the most. The most common measure used to gauge PP's effectiveness was the time spent on programming. In addition, students' satisfaction when using PP was overall higher than when working solo. Our meta-analyses showed that PP was effective in improving students' grades on assignments. Finally, in the studies that used quality as a measure of effectiveness, the number of test cases passed, academic performance, and expert opinion were the most commonly applied quality measures. The results of this SLR reveal two clear gaps in this research field: 1) a lack of studies focusing on pair compatibility factors aimed at making PP an effective pedagogical tool and 2) a lack of studies investigating PP for software design/modeling tasks in conjunction with programming tasks.

Relevance: 60.00%

Publisher:

Abstract:

Unknown RFID tags appear when unread tagged objects are moved in or when tagged objects are misplaced. This paper studies the practically important problem of unknown-tag detection while taking both the time-efficiency and the energy-efficiency of battery-powered active tags into consideration. We first propose a Sampling Bloom Filter, which generalizes the standard Bloom Filter. Using this new filtering technique, we propose the Sampling Bloom Filter-based Unknown tag Detection Protocol (SBF-UDP), whose detection accuracy is tunable by the end users. We present a theoretical analysis that minimizes the time and energy costs. SBF-UDP can be tuned to either a time-saving mode or an energy-saving mode, according to the specific requirements. Extensive simulations are conducted to evaluate the performance of the proposed protocol. The experimental results show that SBF-UDP considerably outperforms previous related protocols in terms of both time-efficiency and energy-efficiency. For example, when 3 or more unknown tags appear in an RFID system with 30,000 known tags, SBF-UDP successfully reports the existence of unknown tags with a confidence of more than 99%, while running 9 times faster than the fastest existing scheme and reducing energy consumption by more than 80%.
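To illustrate the core idea (a sketch of the concept, not the paper's exact construction): a Sampling Bloom Filter inserts each element only with sampling probability p, recovering the standard Bloom filter at p = 1. A reader can build such a filter over the known tag IDs; a tag that falls in the sample yet misses the filter must be unknown. All names below are hypothetical:

```python
import hashlib

class SamplingBloomFilter:
    """m-bit filter with k hash functions; each element participates
    only with probability p (decided here by a hash of the element,
    so reader and tag agree on the sample without coordination)."""

    def __init__(self, m, k, p):
        self.m, self.k, self.p = m, k, p
        self.bits = [0] * m

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def _sampled(self, item):
        digest = hashlib.sha256(f"sample:{item}".encode()).hexdigest()
        return (int(digest, 16) % 10**6) / 10**6 < self.p

    def add(self, item):
        if self._sampled(item):            # unsampled items are skipped
            for pos in self._positions(item):
                self.bits[pos] = 1

    def might_contain(self, item):
        """True/False for sampled items; None for unsampled items,
        about which the filter carries no information."""
        if not self._sampled(item):
            return None
        return all(self.bits[pos] for pos in self._positions(item))
```

Lowering p shrinks the filter traffic (saving time and tag energy) at the cost of detecting an unknown tag only if it lands in the sample, which is the accuracy/cost dial the protocol exposes to end users.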