859 results for Time-sharing computer systems.
Abstract:
NanoStreams explores the design, implementation, and system software stack of micro-servers aimed at processing data in situ and in real time. These micro-servers can serve the emerging Edge computing ecosystem, namely the provisioning of advanced computational, storage, and networking capability near data sources to achieve both low-latency event processing and high-throughput analytical processing, before considering off-loading some of this processing to high-capacity datacentres. NanoStreams explores a scale-out micro-server architecture that can achieve equivalent QoS to that of conventional rack-mounted servers for high-capacity datacentres, but with dramatically reduced form factors and power consumption. To this end, NanoStreams introduces novel solutions in programmable and configurable hardware accelerators, as well as the system software stack used to access, share, and program those accelerators. Our NanoStreams micro-server prototype has demonstrated 5.5× higher energy efficiency than a standard Xeon server. Simulations of the micro-server's memory system extended to leverage hybrid DDR/NVM main memory indicated 5× higher energy efficiency than a conventional DDR-based system.
Abstract:
Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures in the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed, real-time fault injection in memory elements.
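A minimal sketch of what a fault-injection campaign of this kind looks like in software, assuming a simple single-bit-flip fault model and a memory image represented as an address-to-word dictionary (the names and the workload hook are illustrative, not part of the OCD-FI infrastructure):

    import random

    def inject_bit_flip(memory, address, bit):
        """Flip one bit of the memory word at `address` (single-bit-flip fault model)."""
        memory[address] ^= (1 << bit)

    def run_campaign(run_workload, memory, n_faults=1000, word_bits=32):
        """Inject random single-bit faults and record whether the workload still passes."""
        results = []
        for _ in range(n_faults):
            snapshot = dict(memory)                 # golden copy to restore after each run
            address = random.choice(list(memory))   # random fault location
            bit = random.randrange(word_bits)       # random fault position within the word
            inject_bit_flip(memory, address, bit)
            results.append((address, bit, run_workload(memory)))
            memory.update(snapshot)                 # restore memory before the next injection
        return results

In the actual OCD-FI infrastructure the trigger and injection happen in hardware through the debug port; the sketch only illustrates the campaign structure (fault location, fault time, and pass/fail observation).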
Abstract:
This paper presents a technique for real-time crowd density estimation based on textures of crowd images. In this technique, the current image from a sequence of input images is classified into a crowd density class. The classification is then corrected by a low-pass filter based on the crowd density classification of the last n images of the input sequence. The technique achieved 73.89% correct classification in a real-time application on a sequence of 9892 crowd images. Distributed processing was used in order to obtain real-time performance. © Springer-Verlag Berlin Heidelberg 2005.
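A minimal sketch of the temporal correction step, assuming the "low-pass filter" is read as a majority vote over the last n per-frame classifications (the class labels and the classifier hook are illustrative):

    from collections import Counter, deque

    class SmoothedCrowdClassifier:
        """Wrap a per-frame texture classifier and smooth its output over the last n frames."""
        def __init__(self, classify_frame, n=5):
            self.classify_frame = classify_frame   # e.g. frame -> "low" / "medium" / "high"
            self.history = deque(maxlen=n)         # sliding window of recent classifications

        def update(self, frame):
            self.history.append(self.classify_frame(frame))
            # Return the most frequent class in the window, suppressing single-frame jitter.
            return Counter(self.history).most_common(1)[0][0]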
Abstract:
Despite the abundant availability of protocols and applications for peer-to-peer file sharing, several drawbacks are still present in the field. Among the most notable is the lack of a simple and interoperable way to share information among independent peer-to-peer networks. Another drawback is that the shared content can be accessed only by a limited number of compatible applications, making it inaccessible to other applications and systems. In this work we present a new approach to peer-to-peer data indexing, focused on the organization and retrieval of the metadata that describes the shared content. This approach results in a common and interoperable infrastructure which provides transparent access to data shared on multiple data-sharing networks via a simple API. The proposed approach is evaluated using a case study, implemented as a cross-platform extension to the Mozilla Firefox browser, and demonstrates the advantages of such interoperability over conventional distributed data access strategies. © 2009 IEEE.
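A hypothetical sketch of the kind of uniform indexing API the abstract describes, where metadata from several sharing networks is published into one index and queried through a single call (all names are illustrative, not the project's actual interface):

    from dataclasses import dataclass, field

    @dataclass
    class SharedItem:
        title: str
        network: str                               # originating network, e.g. "gnutella", "bittorrent"
        metadata: dict = field(default_factory=dict)

    class MetadataIndex:
        """Common index over content shared on multiple peer-to-peer networks."""
        def __init__(self):
            self._items = []

        def publish(self, item: SharedItem):
            self._items.append(item)

        def search(self, **criteria):
            # Match items whose metadata satisfies every criterion, regardless of network.
            return [i for i in self._items
                    if all(i.metadata.get(k) == v for k, v in criteria.items())]

A query such as search(format="ogg") would then return matching items independently of which network they were indexed from, which is the interoperability the approach aims for.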
Abstract:
A simple method for designing a digital state-derivative feedback gain and a feedforward gain such that the control law is equivalent to a known and adequate state feedback and feedforward control law of a digitally redesigned system is presented. It is assumed that the plant is a linear, controllable, time-invariant, single-input (SI) or multiple-input (MI) system. This procedure allows the use of well-known continuous-time state feedback design methods to directly design discrete-time state-derivative feedback control systems. State-derivative feedback can be useful, for instance, in the vibration control of mechanical systems, where the main sensors are accelerometers. An example considering the digital redesign with state-derivative feedback of a helicopter illustrates the proposed method. © 2009 IEEE.
Abstract:
This work presents a methodological proposal for the acquisition of biometric data through telemetry, based on action research and a case study. Currently, qualified physical-evaluation professionals must use specific devices to obtain biometric signals and data. Most of these devices are expensive and difficult to use and handle. The methodological proposal was therefore elaborated in order to conceptually develop a biotelemetric device able to acquire the biometric signals that are essential to physical evaluation: oximetry, biometry, body temperature and pedometry. Existing biometric sensors, the possible means for remote transmission of signals, and the computer systems available for data acquisition were surveyed. This methodological proposal for remote acquisition of biometric signals is structured in four modules: acquisition of biometric data; conversion and transmission of biometric signals; reception and processing of biometric signals; and generation of interpretative graphs. Together, the modules aim to produce interpretative graphs of human biometric signals. In order to validate the proposal, a functional prototype was developed and is presented in this work.
Abstract:
This paper proposes a new switched control design method for some classes of linear time-invariant systems with polytopic uncertainties. The method uses a quadratic Lyapunov function to design the feedback controller gains based on linear matrix inequalities (LMIs). The controller gain is chosen by a switching law that returns the smallest value of the time derivative of the Lyapunov function. The proposed methodology offers a less conservative alternative to the well-known controller for uncertain systems that uses only one state feedback gain. The control design of a magnetic levitator illustrates the procedure. © 2013 Wallysonn A. de Souza et al.
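A minimal sketch of the switching law described above, assuming a set of candidate gains K_i and a Lyapunov matrix P already obtained from the LMI design, and using a nominal model to evaluate the derivative (the function names are illustrative):

    import numpy as np

    def switched_gain(x, A, B, P, gains):
        """Pick the gain K_i giving the smallest dV/dt for V(x) = x' P x under u = -K_i x."""
        def vdot(K):
            Acl = A - B @ K                        # closed-loop matrix for this candidate gain
            return float(x @ (Acl.T @ P + P @ Acl) @ x)
        return min(gains, key=vdot)                # switching law: smallest Lyapunov derivative

Here x is the current state as a 1-D array; the paper designs the gains via LMIs for the polytopic uncertainty, and the sketch only illustrates the selection rule.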
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Model predictive control (MPC) applications in the process industry usually deal with process systems that show time delays (dead times) between the system inputs and outputs. Also, in many industrial applications of MPC, integrating outputs resulting from liquid level control or recycle streams need to be considered as controlled outputs. Conventional MPC packages can be applied to time-delay systems, but stability of the closed-loop system will depend on the tuning parameters of the controller and cannot be guaranteed even in the nominal case. In this work, a state-space model based on the analytical step response model is extended to the case of integrating systems with time delays. This model is applied to the development of two versions of a nominally stable MPC, which is designed for the practical scenario in which one has targets for some of the inputs and/or outputs that may be unreachable, and zone control (or interval tracking) for the remaining outputs. The controller is tested through simulation of a multivariable industrial reactor system. (C) 2012 Elsevier Ltd. All rights reserved.
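An illustrative sketch of the zone-control idea mentioned above: an output contributes to the cost only when its prediction leaves the admissible zone, while selected inputs track their targets (a simplified generic stage cost, not the paper's controller):

    import numpy as np

    def zone_cost(y_pred, y_min, y_max, u, u_target, qy=1.0, qu=0.1):
        """Stage cost: penalize predicted outputs outside [y_min, y_max] and input deviation from targets."""
        excess = np.maximum(y_pred - y_max, 0.0) + np.maximum(y_min - y_pred, 0.0)
        return qy * np.sum(excess ** 2) + qu * np.sum((u - u_target) ** 2)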
Abstract:
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control, and as a side effect even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents’ behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and augmenting system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent may follow norms or violate them, depending on the conditions. What are the conditions in which a norm is binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, things may not happen at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their representation in the system, thus facilitating system maintenance, and to improve system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear amongst mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
Abstract:
Digital evidence requires the same precautions as any other scientific examination. An overview is given of the methodological and applicative aspects of digital forensics in light of the recent ISO/IEC 27037:2012 standard on the handling of digital evidence during the identification, collection, acquisition and preservation phases. These methodologies strictly comply with the integrity and authenticity requirements set out by the rules on digital forensics, in particular by Law 48/2008 ratifying the Budapest Convention on Cybercrime. Regarding the crime of child pornography, a review of European and national legislation is offered, with emphasis on the aspects relevant to forensic analysis. Since file sharing over peer-to-peer networks is the channel on which the exchange of illicit material is most concentrated, an overview of the most widespread protocols and systems is provided, with emphasis on the eDonkey network and the eMule software, which are widely used among Italian users. The problems encountered in the investigation and repression of the phenomenon, which fall under the responsibility of police forces, are briefly discussed, before focusing on the main contribution in the field of forensic analysis of computer systems seized from persons under investigation (or indicted) for child pornography offences: the design and implementation of eMuleForensic, which allows extremely precise and rapid analysis of the events that occur when using the eMule file-sharing software; the software is available both online at http://www.emuleforensic.com and as a tool within the DEFT forensic distribution. Finally, a proposal for an operational protocol for the forensic analysis of computer systems involved in child-pornography investigations is provided.
Abstract:
The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies. The aim of this study is to evaluate the effects of applying active learning methodologies on the achievement of generic competences as well as on academic performance. This study has been carried out at the Universidad Politécnica de Madrid, where these methodologies have been applied to the Operating Systems I subject of the degree in Technical Engineering in Computer Systems. The fundamental hypothesis tested was whether the implementation of active learning methodologies (cooperative learning and problem-based learning) favours the achievement of certain generic competences (‘teamwork’ and ‘planning and time management’) and whether this improves the academic performance of our students. The original approach of this work consists in using psychometric tests to measure the degree of generic competences acquired by the students, instead of the usual opinion surveys. Results indicated that active learning methodologies improve academic performance when compared to the traditional lecture/discussion method, according to the success rate obtained. These methods also seem to have an effect on the teamwork competence (the perception of the behaviour of the other members in the group) but not on the perception of each student’s own behaviour. Active learning does not produce any significant change in the generic competence ‘planning and time management’.
Abstract:
Interacting with a computer system in the operating room (OR) can be a frustrating experience for a surgeon, who currently has to verbally delegate to an assistant every computer interaction task. This indirect mode of interaction is time consuming, error prone and can lead to poor usability of OR computer systems. This thesis describes the design and evaluation of a joystick-like device that allows direct surgeon control of the computer in the OR. The device was tested extensively in comparison to a mouse and delegated dictation with seven surgeons, eleven residents, and five graduate students. The device contains no electronic parts, is easy to use, is unobtrusive, has no physical connection to the computer and makes use of an existing tool in the OR. We performed a user study to determine its effectiveness in allowing a user to perform all the tasks they would be expected to perform on an OR computer system during a computer-assisted surgery. Dictation was found to be superior to the joystick in qualitative measures, but the joystick was preferred over dictation in user satisfaction responses. The mouse outperformed both joystick and dictation, but it is not a readily accepted modality in the OR.
Abstract:
Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these challenges include the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reduced time and human labour, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) in digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning on multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication thereby enabling automation and reducing the need for manual human intervention.
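A hedged sketch of the multi-TCP-connection acquisition idea: the evidence image is split into byte ranges, each range is fetched over its own connection in parallel, and the parts are reassembled in offset order (the wire protocol and names are hypothetical; the study's actual acquisition component differs in detail):

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def fetch_range(host, port, start, length):
        """Fetch one byte range of the evidence image over a dedicated TCP connection."""
        with socket.create_connection((host, port)) as s:
            s.sendall(f"GET {start} {length}\n".encode())   # hypothetical request format
            chunks, remaining = [], length
            while remaining > 0:
                data = s.recv(min(65536, remaining))
                if not data:
                    break
                chunks.append(data)
                remaining -= len(data)
            return start, b"".join(chunks)

    def acquire(host, port, total_size, n_connections=4):
        """Acquire the full image over several parallel connections and reassemble it."""
        chunk = -(-total_size // n_connections)             # ceiling division
        ranges = [(off, min(chunk, total_size - off)) for off in range(0, total_size, chunk)]
        with ThreadPoolExecutor(max_workers=n_connections) as pool:
            parts = pool.map(lambda r: fetch_range(host, port, *r), ranges)
        return b"".join(data for _, data in sorted(parts))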
Abstract:
Purpose: The aim of this project was to design and evaluate a system that would produce tailored information for stroke patients and their carers, customised according to their informational needs, and facilitate communication between the patient and health professional. Method: A human factors development approach was used to develop a computer system which dynamically compiles stroke education booklets for patients and carers. Patients and carers are able to select the topics about which they wish to receive information, the amount of information they want, and the font size of the printed booklet. The system is designed so that the health professional interacts with it, thereby providing opportunities for communication between the health professional and patient/carer at a number of points in time. Results: Preliminary evaluation of the system by health professionals, patients and carers was positive. A randomised controlled trial that examines the effect of the system on patient and carer outcomes is underway. (C) 2004 Elsevier Ireland Ltd. All rights reserved.