839 results for Time-sharing computer systems
Abstract:
Reading scientific articles is more time-consuming than reading news because readers need to search for and read many citations. This paper proposes a citation-guided method for summarizing multiple scientific papers. A phenomenon we can observe is that citation sentences in one paragraph or section usually discuss a common fact, which is typically represented as a set of noun phrases co-occurring in the citation texts and is usually discussed from different aspects. We design a multi-document summarization system based on common fact detection. One challenge is that citations may not use the same terms to refer to a common fact. We therefore use a term-association discovery algorithm to expand terms based on a large set of scientific article abstracts. Citations can then be clustered based on common facts. The common fact is used as a salient term set to retrieve relevant sentences from the corresponding cited articles to form a summary. Experiments show that our method outperforms three baseline methods on the ROUGE metric.
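A minimal sketch of the clustering and extraction steps described above, assuming each citation sentence has already been reduced to a set of (expanded) noun-phrase terms; the greedy grouping, the overlap threshold, and the sentence-scoring function are illustrative choices, not the paper's exact algorithm.

```python
# Sketch: group citation sentences that share a "common fact" (overlapping term sets),
# then score sentences of the cited papers against the cluster's salient terms.
# The overlap threshold and the scoring function are illustrative assumptions.

def cluster_citations(citation_terms, min_overlap=2):
    """citation_terms: list of sets of (expanded) noun-phrase terms, one per citation sentence."""
    clusters = []  # each cluster: {"terms": set, "members": [citation indices]}
    for idx, terms in enumerate(citation_terms):
        target = None
        for cluster in clusters:
            if len(cluster["terms"] & terms) >= min_overlap:
                target = cluster
                break
        if target is None:
            clusters.append({"terms": set(terms), "members": [idx]})
        else:
            target["terms"] |= terms              # the shared fact grows as citations join
            target["members"].append(idx)
    return clusters

def rank_sentences(cited_sentences, salient_terms, top_k=3):
    """Pick sentences from the cited articles that best cover the cluster's salient term set."""
    scored = [(len(salient_terms & set(s.lower().split())), s) for s in cited_sentences]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:top_k] if score > 0]
```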
Abstract:
We consider a class of intelligent systems, hosted on anthropocentric objects, that provide the crew with recommendations on the object's rational behavior in typical operating situations. We refer to this class of intelligent systems as onboard real-time advisory expert systems. Here, we present a formal model of the object domain, procedures for acquiring knowledge about the object domain, and the semantic structure of the basic functional units of onboard real-time advisory expert systems for typical situations. We also consider the practically important stages of developing and refining knowledge bases for such systems.
Abstract:
The paper discusses computer systems for editing scientific and technical texts, which partially automate the functions of a human editor and thus help the writer improve text quality. Two experimental systems, LINAR and CONUT, developed in the 1990s to control the quality of Russian scientific and technical texts, are briefly described, and general principles for designing more powerful editing systems are pointed out. Features of an editing system now under development are outlined, primarily its underlying linguistic knowledge base and the procedures for controlling the text.
Abstract:
In their article, the authors discuss the issue of computer waste. Under the category of information technology devices they include the components of computer configurations, that is, computers (desktop, portable, terminal, etc.) and their peripherals (monitor, printer, CD writer, etc.), as well as the parts and accessories of these (chips, mechanical components, toner cartridges, etc.). The environmental impact of regular use was examined from only one aspect: during regular use certain components and accessories (especially printer toner cartridges) are replaced more frequently than the machine itself and become waste. The main focus is the end of life of computer equipment, and from this point of view the used personal computer is a key category.
Abstract:
An Automatic Vehicle Location (AVL) system is a computer-based vehicle tracking system capable of determining a vehicle's location in real time. As a major technology of the Advanced Public Transportation System (APTS), AVL systems have been widely deployed by transit agencies for purposes such as real-time operations monitoring, computer-aided dispatching, and arrival time prediction. AVL systems make available a large amount of transit performance data that is valuable for transit performance management and planning. However, the difficulty of extracting useful information from the huge spatio-temporal database has hindered off-line applications of the AVL data. In this study, a data mining process, including data integration, cluster analysis, and multiple regression, is proposed. The AVL-generated data are first integrated into a Geographic Information System (GIS) platform. A model-based clustering method is employed to investigate the spatial and temporal patterns of transit travel speeds, which can be easily translated into travel times. Transit speed variations along route segments are identified. Transit service periods such as morning peak, mid-day, afternoon peak, and evening are determined based on analyses of travel speed variations across times of day. Seasonal patterns of transit performance are investigated using analysis of variance (ANOVA). Travel speed models based on the clustered time-of-day intervals are developed using factors identified as having significant effects on speed in different periods. Transit performance was found to vary across seasons and time-of-day periods, and the geographic location of a route segment also plays a role in this variation. The results of this research indicate that advanced data mining techniques have good potential for providing automated tools to assist transit agencies in service planning, scheduling, and operations control.
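As an illustration of the model-based clustering step, here is a hedged sketch that fits a Gaussian mixture to (hour-of-day, speed) observations and reads off candidate service periods; the feature choice, number of components, and synthetic data are assumptions for demonstration, not the study's actual configuration.

```python
# Sketch: model-based clustering of AVL travel speeds by time of day.
# The features, the number of mixture components, and the synthetic data
# are illustrative assumptions, not the study's actual setup.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_service_periods(hours, speeds, n_components=4):
    """Cluster (hour-of-day, speed) observations into service periods
    such as morning peak, mid-day, afternoon peak, and evening."""
    X = np.column_stack([hours, speeds])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(X)
    return gmm.predict(X), gmm.means_

# Synthetic example: speeds dip around the 8:00 and 17:00 peaks.
rng = np.random.default_rng(0)
hours = rng.integers(5, 23, size=500)
speeds = (35 - 8 * np.exp(-((hours - 8) ** 2) / 4)
             - 8 * np.exp(-((hours - 17) ** 2) / 4)
             + rng.normal(0, 2, size=500))
labels, centers = cluster_service_periods(hours, speeds)
print(centers)  # mean (hour, speed) of each inferred service period
```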
Abstract:
Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip, which has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging and cooling costs and adversely affects the performance and reliability of a computing system. It also reduces the processor's life span and may even crash the entire computing system. Dynamic thermal management (DTM) is therefore becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted on the DTM problem; however, most of it is based on idealized assumptions or simplified models. While these models and assumptions help to greatly simplify a complex problem and make it theoretically tractable, practical computer systems and applications must deal with many factors and details beyond them. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations in a practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems that optimizes throughput under a peak temperature constraint. We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors based on task migration and dynamic voltage and frequency scaling (DVFS). The significance of our research lies in the fact that it complements the extensive current theoretical research in dealing with increasingly critical thermal problems and enabling the continued evolution of high-performance computing systems.
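A minimal sketch of a reactive thermal-management loop in the spirit described above: the processor frequency is stepped down when the measured temperature approaches the peak constraint and stepped back up when there is headroom. The temperature source, frequency levels, and thresholds are placeholders, not the dissertation's actual algorithm or platform interfaces.

```python
# Sketch: reactive DTM via DVFS. The temperature reading, frequency levels, and
# thresholds are illustrative placeholders, not the dissertation's platform code.
import random
import time

FREQ_LEVELS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # assumed DVFS operating points
T_PEAK = 85.0                                  # peak temperature constraint (degC)
T_SAFE = 75.0                                  # hysteresis threshold to scale back up

def read_temperature():
    # Placeholder for a real sensor read (e.g., an on-die thermal sensor).
    return random.uniform(60.0, 95.0)

def set_frequency(ghz):
    # Placeholder for a real DVFS interface.
    print(f"frequency -> {ghz} GHz")

def reactive_dtm(steps=10, period_s=0.1):
    level = len(FREQ_LEVELS_GHZ) - 1           # start at the highest frequency
    for _ in range(steps):
        temp = read_temperature()
        if temp >= T_PEAK and level > 0:
            level -= 1                          # too hot: step the frequency down
        elif temp <= T_SAFE and level < len(FREQ_LEVELS_GHZ) - 1:
            level += 1                          # headroom: step the frequency up
        set_frequency(FREQ_LEVELS_GHZ[level])
        time.sleep(period_s)

reactive_dtm()
```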
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade, and energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data centers: as more computational power is required to serve hundreds of millions of users, bigger data centers are becoming necessary, which results in higher electrical energy consumption. Of all the energy used in data centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem, because improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
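A hedged sketch of the idle-timeout spin-down idea combined with a small flash-backed write cache that absorbs I/O so the disk can stay spun down longer; the timeout, cache size, and class interface are illustrative assumptions, not the thesis's actual design.

```python
# Sketch: disk spin-down with an idle timeout, plus a flash-backed write cache that
# absorbs writes to extend idle periods. The timeout, cache size, and interface are
# illustrative assumptions, not the thesis's design.
import time

class EnergyAwareDisk:
    def __init__(self, idle_timeout_s=30.0, cache_limit_blocks=128):
        self.spun_up = True
        self.last_access = time.monotonic()
        self.idle_timeout_s = idle_timeout_s
        self.flash_cache = []                    # stands in for a flash write buffer
        self.cache_limit_blocks = cache_limit_blocks

    def write(self, block):
        if not self.spun_up and len(self.flash_cache) < self.cache_limit_blocks:
            self.flash_cache.append(block)       # absorb the write; the disk stays idle
        else:
            self._spin_up()
            self._destage()                      # flush cached blocks together with this one
            self.last_access = time.monotonic()

    def tick(self):
        """Call periodically: spin the disk down once it has been idle long enough."""
        if self.spun_up and time.monotonic() - self.last_access > self.idle_timeout_s:
            self.spun_up = False                 # save energy while idle

    def _spin_up(self):
        if not self.spun_up:
            self.spun_up = True                  # a real disk pays spin-up latency/energy here
        self.last_access = time.monotonic()

    def _destage(self):
        self.flash_cache.clear()                 # a real system writes these blocks to disk
```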
Abstract:
The local area network (LAN) interconnecting computer systems and software can make a significant contribution to the hospitality industry. The author discusses the advantages and disadvantages of such systems.
Abstract:
In his discussion - Database As A Tool For Hospitality Management - William O'Brien, Assistant Professor, School of Hospitality Management at Florida International University, offers at the outset, “Database systems offer sweeping possibilities for better management of information in the hospitality industry. The author discusses what such systems are capable of accomplishing.” The author opens with a bit of background on database system development, which also lends an impression as to the complexion of the rest of the article; uh, it’s a shade technical. “In early 1981, Ashton-Tate introduced dBase II. It was the first microcomputer database management processor to offer relational capabilities and a user-friendly query system combined with a fast, convenient report writer,” O’Brien informs. “When 16-bit microcomputers such as the IBM PC series were introduced late the following year, more powerful database products followed: dBase III, Friday!, and Framework. The effect on the entire business community, and the hospitality industry in particular, has been remarkable”, he further offers with his informed outlook. Professor O’Brien offers a few anecdotal situations to illustrate how much a comprehensive database system means to a hospitality operation, especially when billing is involved. Although attitudes about computer systems, as well as the systems themselves, have changed since this article was written, there is pertinent, fundamental information to be gleaned. In regard to the loss of the personal touch when a customer is engaged with a computer system, O’Brien says, “A modern data processing system should not force an employee to treat valued customers as numbers…” He also cautions, “Any computer system that decreases the availability of the personal touch is simply unacceptable.” Regarding a system’s ability to process information, O’Brien suggests that in the past businesses were so enamored with simply having an automated system that they failed to take full advantage of its capabilities. O’Brien says that a lot of savings, in time and money, went unnoticed and/or under-appreciated. Today, everyone has an integrated system, and the wise business manager is the one who takes full advantage of all his resources. O’Brien invokes the 80/20 rule, and offers, “…the last 20 percent of results costs 80 percent of the effort. But times have changed. Everyone is automating data management, so that last 20 percent that could be ignored a short time ago represents a significant competitive differential.” The evolution of data systems takes center stage for much of the article; pitfalls also emerge.
Abstract:
In this letter, we consider wireless powered communication networks that can operate perpetually, as the base station (BS) broadcasts energy to multiple energy harvesting (EH) information transmitters. These employ a “harvest-then-transmit” mechanism: they spend all of the energy harvested during the previous BS energy broadcast to transmit information to the BS. Assuming time-division multiple access (TDMA), we propose a novel transmission scheme for the jointly optimal allocation of the BS broadcasting power and the time sharing among the wireless nodes, which maximizes the overall network throughput under constraints on the average and maximum transmit power at the BS. The proposed scheme significantly outperforms state-of-the-art schemes that employ only optimal time allocation. For the case of a single EH transmitter, we generalize the optimal solution to account for fixed circuit power consumption, which corresponds to a much more practical scenario.
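A plausible formulation of the joint allocation problem described above, assuming noise power \sigma^2, energy-harvesting efficiency \eta, and downlink/uplink channel gains h_i and g_i for node i; the notation is illustrative and need not match the letter's exact model.

```latex
% Sum-throughput maximization over the BS broadcast power P_0 and the TDMA time
% shares: \tau_0 for the downlink energy broadcast, \tau_i for node i's uplink slot.
\[
\begin{aligned}
\max_{P_0,\,\tau_0,\ldots,\tau_K}\quad
  & \sum_{i=1}^{K} \tau_i \log_2\!\left(1 + \frac{\eta\, h_i g_i\, P_0\, \tau_0}{\tau_i\, \sigma^2}\right) \\
\text{s.t.}\quad
  & \tau_0 + \sum_{i=1}^{K} \tau_i \le 1, \qquad \tau_0 \ge 0,\ \tau_i \ge 0, \\
  & \tau_0 P_0 \le P_{\mathrm{avg}}, \qquad 0 \le P_0 \le P_{\mathrm{max}}.
\end{aligned}
\]
```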
Abstract:
This thesis explores aesthetization in general, and fashion in particular, in digital technology design, and how we can design digital technology to account for the extended influence of fashion. The thesis applies a combination of methods to explore the new design space at the intersection of fashion and technology. First, it contributes to theoretical understandings of the aesthetization and fashion institutionalization that influence digital technology design. We show that there is an unstable aesthetization in mobile design and that this increased aesthetization is closely related to the fashion industry. Fashion emerges through shared institutional activities, which usually take the form of action nets in the design of digital devices. “Tech Fashion” is proposed to interpret such dynamic action nets of institutional arrangements that make digital technology fashionable and desirable. Second, through associative design research, we have designed and developed two prototypes that account for institutionalized fashion values, such as the concept of an “outfit-centric accessory.” We call for more extensive collaboration between fashion design and interaction design.
Abstract:
Completed under a joint supervision (cotutelle) agreement with the École normale supérieure de Cachan – Université Paris-Saclay
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This paper reviews objective assessments of Parkinson's disease (PD) motor symptoms, both cardinal symptoms and dyskinesia, using sensor systems. It surveys the manifestations of PD symptoms, the sensors used for their detection, the types of signals (measures), and the signal processing (data analysis) methods. A summary of the review's findings is presented in a table listing the devices (sensors), measures, and methods used in each reviewed motor symptom assessment study. Among the sensors in the gathered studies, accelerometers and touch-screen devices are the most widely used to detect PD symptoms, and among the symptoms, bradykinesia and tremor were the most frequently evaluated. In general, machine learning methods appear promising for this task. PD is a complex disease that requires continuous monitoring and multidimensional symptom analysis. Combining existing technologies to develop new sensor platforms may help assess the overall symptom profile more accurately and thus support a better treatment process.
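As a hedged illustration of the kind of signal processing such studies apply to accelerometer data, the sketch below estimates power in a typical rest-tremor band; the sampling rate, band edges, and decision threshold are commonly used but assumed values, not taken from any specific reviewed study.

```python
# Sketch: estimate rest-tremor band power (about 3.5-7.5 Hz) from accelerometer data.
# The sampling rate, band edges, and decision threshold are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 100.0  # assumed sampling rate in Hz

def tremor_band_power(accel_magnitude, fs=FS, band=(3.5, 7.5)):
    """Band-pass the acceleration magnitude and integrate its power in the tremor band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, accel_magnitude - np.mean(accel_magnitude))
    freqs, psd = welch(filtered, fs=fs, nperseg=256)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

# Synthetic example: a 5 Hz tremor-like oscillation plus sensor noise.
t = np.arange(0, 10, 1 / FS)
accel = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * np.random.randn(t.size)
power = tremor_band_power(accel)
print("tremor-band power:", power, "(tremor-like)" if power > 0.01 else "(quiet)")
```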