445 results for embedded Linux


Relevance:

10.00%

Publisher:

Abstract:

A FORTRAN 90 program is presented which calculates the total cross sections, and the electron energy spectra of the singly and doubly differential cross sections, for the single target ionization of neutral atoms ranging from hydrogen up to and including argon. The code is applicable to both high and low Z projectile impact in fast ion-atom collisions. The theoretical models provided for the program user are based on two quantum mechanical approximations which have proved very successful in the study of ionization in ion-atom collisions: the continuum-distorted-wave (CDW) and continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximations. The codes presented here extend previously published codes for single ionization of target hydrogen [Crothers and McCartney, Comput. Phys. Commun. 72 (1992) 288], target helium [Nesbitt, O'Rourke and Crothers, Comput. Phys. Commun. 114 (1998) 385] and target atoms ranging from lithium to neon [O'Rourke, McSherry and Crothers, Comput. Phys. Commun. 131 (2000) 129]. Cross sections for all of these target atoms may be obtained as limiting cases from the present code.

Title of program: ARGON

Catalogue identifier: ADSE

Program summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSE

Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland

Licensing provisions: none

Computers for which the program is designed and others on which it is operable: Four by 200 MHz Pentium Pro Linux server, DEC Alpha 21164; Four by 400 MHz Pentium II Xeon 450 Linux server, IBM SP2 and SUN Enterprise 3500

Installations: Queen's University, Belfast

Operating systems under which the program has been tested: Red Hat Linux 5.2, Digital UNIX Version 4.0d, AIX, Solaris SunOS 5.7

Compilers: PGI workstations, DEC CAMPUS

Programming language used: FORTRAN 90 with MPI directives

No. of bits in a word: 64 (32 on the Linux servers)

Number of processors used: any number

Has the code been vectorized or parallelized?: Parallelized using MPI

No. of bytes in distributed program, including test data, etc.: 32 189

Distribution format: tar gzip file

Keywords: Single ionization, cross sections, continuum-distorted-wave model, continuum-distorted-wave eikonal-initial-state model, target atoms, wave treatment

Nature of physical problem: The code calculates total and differential cross sections for the single ionization of target atoms ranging from hydrogen up to and including argon by both light and heavy ion impact.

Method of solution: ARGON allows the user to calculate the cross sections using either the CDW or CDW-EIS [J. Phys. B 16 (1983) 3229] models within the wave treatment.

Restrictions on the complexity of the program: Both the CDW and CDW-EIS models are two-state perturbative approximations.

Typical running time: Times vary according to the input data and the number of processors. On one processor, the test input data for double differential cross sections (40 points) took less than one second, whereas the test input for total cross sections (20 points) took 32 minutes.

Unusual features of the program: none

(C) 2003 Elsevier B.V. All rights reserved.
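
As a rough illustration of the quantities such a code produces (not of the ARGON program's actual interface or physics), the Python sketch below reduces a placeholder doubly differential cross section, sampled over ejected-electron energy and emission angle, to singly differential and total cross sections by simple numerical quadrature; the model function, grids and units are invented.

    import numpy as np

    # Placeholder model, not the ARGON code: a doubly differential cross
    # section d^2(sigma)/dE dOmega sampled on grids of ejected-electron
    # energy E and polar emission angle theta (azimuthal symmetry assumed).
    def ddcs_model(E, theta):
        return np.exp(-E / 50.0) * (1.0 + np.cos(theta) ** 2)

    E = np.linspace(1.0, 500.0, 200)        # energy grid (arbitrary units)
    theta = np.linspace(0.0, np.pi, 90)     # emission angle grid (rad)
    EE, TT = np.meshgrid(E, theta, indexing="ij")
    ddcs = ddcs_model(EE, TT)

    # Integrate over solid angle (dOmega = 2*pi*sin(theta) dtheta) to get the
    # singly differential cross section, then over energy for the total.
    sdcs = np.trapz(ddcs * 2.0 * np.pi * np.sin(TT), theta, axis=1)
    total = np.trapz(sdcs, E)
    print(f"total cross section (arbitrary units): {total:.4g}")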

Relevance:

10.00%

Publisher:

Abstract:

Policy-based network management (PBNM) paradigms provide an effective tool for end-to-end resource management in converged next generation networks by enabling unified, adaptive and scalable solutions that integrate and co-ordinate diverse resource management mechanisms associated with heterogeneous access technologies. In our project, a PBNM framework for end-to-end QoS management in converged networks is being developed. The framework consists of distributed functional entities managed within a policy-based infrastructure to provide QoS and resource management in converged networks. Within any QoS control framework, an effective admission control scheme is essential for maintaining the QoS of flows present in the network. Measurement-based admission control (MBAC) and parameter-based admission control (PBAC) are two commonly used approaches. This paper presents the implementation and analysis of various measurement-based admission control schemes developed within a Java-based prototype of our policy-based framework. The evaluation is made with real traffic flows on a Linux-based experimental testbed where the current prototype is deployed. Our results show that, unlike classic MBAC-only or PBAC-only schemes, a hybrid approach that combines both methods can simultaneously improve admission control and network utilization efficiency.
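
The Python sketch below illustrates, in deliberately simplified form, how a hybrid admission test can combine a parameter-based check on declared traffic descriptors with a measurement-based check on observed load; it is not the paper's Java prototype, and the flow attributes, thresholds and capacity figures are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Flow:
        peak_rate: float   # declared peak rate (Mbit/s)
        mean_rate: float   # declared mean rate (Mbit/s)

    def pbac_admit(flow, declared_sum, capacity):
        # Parameter-based test: the sum of declared peak rates must fit.
        return declared_sum + flow.peak_rate <= capacity

    def mbac_admit(flow, measured_load, capacity, utilisation_target=0.9):
        # Measurement-based test: measured load plus the new flow's mean
        # rate must stay below a utilisation target.
        return measured_load + flow.mean_rate <= utilisation_target * capacity

    def hybrid_admit(flow, declared_sum, measured_load, capacity):
        # Hybrid test: both checks must pass, balancing PBAC's pessimism
        # against MBAC's optimism.
        return (pbac_admit(flow, declared_sum, capacity)
                and mbac_admit(flow, measured_load, capacity))

    flow = Flow(peak_rate=10.0, mean_rate=4.0)
    print(hybrid_admit(flow, declared_sum=70.0, measured_load=55.0, capacity=100.0))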

Relevance:

10.00%

Publisher:

Abstract:

This paper presents the design and implementation of a measurement-based QoS and resource management framework, CNQF (Converged Networks’ QoS Management Framework). CNQF is designed to provide unified, scalable QoS control and resource management through the use of a policy-based network management paradigm. It achieves this via distributed functional entities that are deployed to co-ordinate the resources of the transport network through centralized policy-driven decisions supported by a measurement-based control architecture. We present the CNQF architecture, the implementation of the prototype and the validation of various inbuilt QoS control mechanisms using real traffic flows on a Linux-based experimental testbed.
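
As a purely conceptual sketch of how centralized policy-driven decisions can be fed by measurements reported from distributed entities (this is not CNQF code), the snippet below maps two hypothetical measurements to resource-management actions; the rule names and thresholds are invented.

    # Invented rules and thresholds, for illustration only.
    def decide(link_utilisation, loss_rate):
        if loss_rate > 0.01:
            return "reduce-best-effort-share"        # protect priority traffic
        if link_utilisation > 0.85:
            return "block-new-low-priority-flows"    # tighten admission early
        return "no-action"

    # Measurements as they might be reported by distributed monitoring entities.
    for utilisation, loss in [(0.60, 0.000), (0.90, 0.002), (0.70, 0.020)]:
        print(utilisation, loss, "->", decide(utilisation, loss))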

Relevance:

10.00%

Publisher:

Abstract:

This work presents a novel algorithm for decomposing NFAs (nondeterministic finite automata) into one-state-active modules for parallel execution on Multiprocessor Systems on Chip (MP-SoC). Furthermore, performance studies based on a 16-PE system are presented for Snort, Bro and Linux-L7 regular expressions. ©2009 IEEE.
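
The Python sketch below conveys the one-state-active idea in simplified form (it is not the paper's MP-SoC implementation): each NFA state becomes a module that holds a single activity bit and recomputes it from its predecessors on every input symbol, which is what makes the modules independent enough to run in parallel; the example automaton, which matches the substring "ab", and the mapping to processing elements are assumptions.

    class StateModule:
        # One module per NFA state: it only stores whether its state is
        # active and a list of (predecessor module, symbol predicate) pairs.
        def __init__(self, name, accepting=False):
            self.name = name
            self.accepting = accepting
            self.active = False
            self.incoming = []

    def step(modules, start, symbol):
        # Every module recomputes its activity bit from its predecessors'
        # current bits; each computation is independent, which is what lets
        # the modules run in parallel on separate processing elements.
        nxt = {m: any(src.active and pred(symbol) for src, pred in m.incoming)
               for m in modules}
        for m in modules:
            m.active = nxt[m]
        start.active = True   # start state stays active for substring search

    # Example NFA matching any string that contains the substring "ab".
    s0, s1 = StateModule("s0"), StateModule("s1")
    s2 = StateModule("s2", accepting=True)
    s1.incoming = [(s0, lambda c: c == "a")]
    s2.incoming = [(s1, lambda c: c == "b"), (s2, lambda c: True)]
    modules = [s0, s1, s2]

    s0.active = True
    for ch in "xxabyy":
        step(modules, s0, ch)
    print("matched:", s2.active)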

Relevance:

10.00%

Publisher:

Abstract:

Enhancing sampling and analyzing simulations are central issues in molecular simulation. Recently, we introduced PLUMED, an open-source plug-in that provides some of the most popular molecular dynamics (MD) codes with implementations of a variety of different enhanced sampling algorithms and collective variables (CVs). The rapid changes in this field, in particular new directions in enhanced sampling and dimensionality reduction together with new hardware, require a code that is more flexible and more efficient. We therefore present here PLUMED 2, a complete rewrite of the code in an object-oriented programming language (C++). This new version introduces greater flexibility and greater modularity, which both extend its core capabilities and make it far easier to add new methods and CVs. It also has a simpler interface with the MD engines and provides a single software library containing both tools and core facilities. Ultimately, the new code better serves the ever-growing community of users and contributors in coping with the new challenges arising in the field.

Program summary

Program title: PLUMED 2

Catalogue identifier: AEEE_v2_0

Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEE_v2_0.html

Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland

Licensing provisions: Yes

No. of lines in distributed program, including test data, etc.: 700646

No. of bytes in distributed program, including test data, etc.: 6618136

Distribution format: tar.gz

Programming language: ANSI-C++.

Computer: Any computer capable of running an executable produced by a C++ compiler.

Operating system: Linux operating system, Unix OSs.

Has the code been vectorized or parallelized?: Yes, parallelized using MPI.

RAM: Depends on the number of atoms, the method chosen and the collective variables used.

Classification: 3, 7.7, 23.

Catalogue identifier of previous version: AEEE_v1_0.

Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1961.

External routines: GNU libmatheval, Lapack, Blas, MPI.

(C) 2013 Elsevier B.V. All rights reserved.
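
To make concrete what a collective variable and a bias contribute to a simulation, the sketch below computes a distance CV between two atoms and a harmonic restraint acting on it, returning the bias energy and forces; this is a conceptual illustration in Python, not PLUMED's C++ API, and the atom indices, force constant and target value are arbitrary.

    import numpy as np

    # Not PLUMED's API: a distance CV between two atoms and a harmonic
    # restraint acting on it, returning the bias energy and the extra
    # forces that would be handed back to the MD engine.
    def distance_cv(positions, i, j):
        d = positions[j] - positions[i]
        r = np.linalg.norm(d)
        grad = {i: -d / r, j: d / r}       # derivative of r w.r.t. each atom
        return r, grad

    def harmonic_restraint(cv_value, cv_grad, kappa=100.0, at=0.5):
        energy = 0.5 * kappa * (cv_value - at) ** 2
        du_dcv = kappa * (cv_value - at)
        forces = {atom: -du_dcv * g for atom, g in cv_grad.items()}
        return energy, forces

    positions = np.array([[0.0, 0.0, 0.0], [0.7, 0.0, 0.0]])
    r, grad = distance_cv(positions, 0, 1)
    energy, forces = harmonic_restraint(r, grad)
    print(f"distance = {r:.3f}, bias energy = {energy:.3f}")

In PLUMED itself the equivalent behaviour is declared in the input file (a distance CV with a restraint acting on it), and the host MD code receives the resulting forces through the plug-in interface.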

Relevance:

10.00%

Publisher:

Abstract:

Data processing is an essential part of Acoustic Doppler Profiler (ADP) surveys, which have become the standard tool for assessing flow characteristics at tidal power development sites. In most cases, further processing beyond the capabilities of the manufacturer-provided software tools is required. These additional tasks are often implemented by every user in mathematical toolboxes like MATLAB, Octave or Python. This requires transferring the data from one system to another and thus increases the possibility of errors. The application of dedicated tools for visualisation of flow or geographic data is also often beneficial, and a wide range of such tools is freely available, though again problems arise from the necessity of transferring the data. Furthermore, ADP manufacturers almost exclusively provide direct support for PCs, whereas small computing solutions like tablet computers, often running Android or Linux operating systems, seem better suited for online monitoring or data acquisition in field conditions. While many manufacturers offer support for developers, any such solution is limited to a single device from a single manufacturer. A common data format for all ADP data would allow the development of applications and quicker distribution of new post-processing methodologies across the industry.

Relevance:

10.00%

Publisher:

Abstract:

Low-power processors and accelerators that were originally designed for the embedded systems market are emerging as building blocks for servers. Power capping has been actively explored as a technique to reduce the energy footprint of high-performance processors. The opportunities and limitations of power capping on the new low-power processor and accelerator ecosystem are less understood. This paper presents an efficient power capping and management infrastructure for heterogeneous SoCs based on hybrid ARM/FPGA designs. The infrastructure coordinates dynamic voltage and frequency scaling with task allocation on a customised Linux system for the Xilinx Zynq SoC. We present a compiler-assisted power model to guide voltage and frequency scaling, in conjunction with workload allocation between the ARM cores and the FPGA, under given power caps. The model achieves an estimation bias of less than 5% relative to mean power consumption. In an FFT case study, the proposed power capping schemes achieve on average 97.5% of the performance of the optimal execution and match the optimal execution in 87.5% of the cases, while always meeting power constraints.
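
A minimal sketch of the kind of decision such a power model enables is given below: it picks the highest ARM frequency/voltage operating point whose predicted power stays under the cap, using a generic P = P_static + alpha * V^2 * f estimate plus a fixed term when work is offloaded to the FPGA. This is not the paper's compiler-assisted model; all operating points and constants are invented.

    # Invented operating points (frequency in MHz, voltage in V) and constants.
    OPERATING_POINTS = [(333, 0.85), (444, 0.95), (667, 1.00), (800, 1.10)]

    def predicted_power(freq_mhz, volt, offload_to_fpga,
                        p_static=0.35, alpha=1.0e-3, p_fpga=0.9):
        # Simple estimate: static power + alpha * V^2 * f dynamic power,
        # plus a fixed FPGA term when part of the work is offloaded.
        dynamic = alpha * volt ** 2 * freq_mhz
        return p_static + dynamic + (p_fpga if offload_to_fpga else 0.0)

    def choose_point(power_cap_w, offload_to_fpga):
        # Highest-frequency operating point whose estimate fits the cap.
        feasible = [(f, v) for f, v in OPERATING_POINTS
                    if predicted_power(f, v, offload_to_fpga) <= power_cap_w]
        return max(feasible) if feasible else None

    for cap in (1.0, 1.5, 2.5):
        print(cap, "W ->", choose_point(cap, offload_to_fpga=True))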

Relevance:

10.00%

Publisher:

Abstract:

The research information system VIVO, being a Linked-Data-based system, offers the possibility of reusing data from other sources. In practice, one can run into conversion problems. Data are often available only in tabular form, e.g. as CSV files. Various tools exist for converting such data, but many of them either require specific technical environments (often Linux systems) or are very demanding to use. This article describes a workflow for converting data from GeoNames for VIVO using Google Refine.
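
As a small alternative illustration of the same conversion task (not the Google Refine workflow the article describes), the Python sketch below turns a tiny GeoNames-style CSV extract into simple RDF triples in N-Triples form; the column names, URI patterns and vocabulary choices are assumptions.

    import csv, io

    # Column names, URI patterns and vocabulary choices are assumptions.
    CSV_DATA = "geonameid,name,country_code\n2950159,Berlin,DE\n2988507,Paris,FR\n"

    def row_to_triples(row):
        subject = f"<http://sws.geonames.org/{row['geonameid']}/>"
        yield f'{subject} <http://www.w3.org/2000/01/rdf-schema#label> "{row["name"]}" .'
        yield f'{subject} <http://www.geonames.org/ontology#countryCode> "{row["country_code"]}" .'

    for row in csv.DictReader(io.StringIO(CSV_DATA)):
        for triple in row_to_triples(row):
            print(triple)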

Relevance:

10.00%

Publisher:

Abstract:

In Distributed Computer-Controlled Systems (DCCS), special emphasis must be given to the communication infrastructure, which must provide timely and reliable communication services. CAN networks are usually suitable to support small-scale DCCS. However, they are known to present some reliability problems, which can lead to unreliable behaviour of the supported applications. In this paper, an atomic multicast protocol for CAN networks is proposed. This protocol exploits the synchronous properties of CAN, providing a timely and reliable service to the supported applications. The implementation of this protocol in Ada, on top of the Ada version of Real-Time Linux, is presented and used to demonstrate the advantages and disadvantages of the platform for supporting reliable communications in DCCS.

Relevance:

10.00%

Publisher:

Abstract:

High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of the tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
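
The sketch below captures, in strongly simplified form, the combination described above: each core keeps a deadline-ordered queue, and an idle core steals from the victim currently holding the earliest deadline, so stealing respects EDF priority. It is not the proposed scheduler or its Linux implementation; the two-core setup and task parameters are invented.

    import heapq

    class Core:
        def __init__(self, cid):
            self.cid = cid
            self.ready = []                 # heap of (deadline, task name)

        def push(self, deadline, name):
            heapq.heappush(self.ready, (deadline, name))

        def pick(self, all_cores):
            if self.ready:
                return heapq.heappop(self.ready)
            # Priority-aware steal: take work from the victim that currently
            # holds the earliest deadline, so stealing respects EDF order.
            victims = [c for c in all_cores if c is not self and c.ready]
            if not victims:
                return None
            victim = min(victims, key=lambda c: c.ready[0][0])
            return heapq.heappop(victim.ready)

    cores = [Core(0), Core(1)]
    cores[0].push(12, "A.1"); cores[0].push(5, "B.1"); cores[0].push(9, "B.2")
    for _ in range(2):
        for core in cores:
            task = core.pick(cores)
            print(f"core {core.cid} runs {task if task is not None else 'idle'}")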

Relevance:

10.00%

Publisher:

Abstract:

Consider the problem of scheduling a set of sporadic tasks on a multiprocessor system to meet deadlines using a task-splitting scheduling algorithm. Task-splitting (also called semi-partitioning) scheduling algorithms assign most tasks to just one processor, but a few tasks are assigned to two or more processors, and they are dispatched in a way that ensures that a task never executes on two or more processors simultaneously. One type of task-splitting algorithm, called slot-based task-splitting dispatching, is of particular interest because of its ability to schedule tasks with high processor utilizations. Unfortunately, no slot-based task-splitting algorithm has been implemented in a real operating system so far. In this paper we discuss and propose some modifications to the slot-based task-splitting algorithm driven by implementation concerns, and we report the first implementation of this family of algorithms in a real operating system running Linux kernel version 2.6.34. We have also conducted an extensive range of experiments on a 4-core multicore desktop PC running task sets with utilizations of up to 88%. The results show that the behavior of our implementation is in line with the theoretical framework behind it.
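
As a toy illustration of the dispatching constraint behind slot-based task-splitting (not the implemented dispatcher), the snippet below places a split task's reserves at the head of one processor's time slot and at the tail of the other's, so the two reserves can never overlap in time and the task never runs on both processors at once; the slot length and reserve sizes are arbitrary example values.

    SLOT = 10.0          # time-slot length (ms), same on every processor
    reserve_p1 = 2.5     # split task's reserve at the head of P1's slot
    reserve_p0 = 3.0     # split task's reserve at the tail of P0's slot

    def split_task_windows(slot_index):
        t = slot_index * SLOT
        on_p1 = (t, t + reserve_p1)                  # head of P1's slot
        on_p0 = (t + SLOT - reserve_p0, t + SLOT)    # tail of P0's slot
        # Head and tail of the same synchronized slot cannot overlap as long
        # as the two reserves together do not exceed the slot length.
        assert on_p1[1] <= on_p0[0]
        return on_p1, on_p0

    for k in range(3):
        w1, w0 = split_task_windows(k)
        print(f"slot {k}: split task runs on P1 during {w1}, then on P0 during {w0}")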

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations have to be integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to achieve a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J. Parallel Distrib. Comput. 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at the job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, it appears from these experimental results that BF² is barely dependent on the length of the system time unit, while PD², the only other existing solution for the scheduling of sporadic tasks in discrete-time systems, sees its numbers of preemptions and migrations and the time spent taking scheduling decisions increase linearly as the time resolution of the system is improved.
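
The essence of boundary-fair allocation can be sketched as follows (a great simplification, not the BF² algorithm itself): between two consecutive boundaries that are L time units apart, each task receives an integer number of time units close to its proportional share U_i * L, with the remaining whole units handed to the tasks having the largest fractional remainders; the utilisations and boundary length below are example values.

    import math

    def boundary_allocations(utilisations, L):
        # Proportional shares over an interval of L time units between
        # two consecutive boundaries.
        shares = [u * L for u in utilisations]
        alloc = [math.floor(s) for s in shares]
        # Whole units still owed over this interval, handed to the tasks
        # with the largest fractional remainders.
        leftover = round(L * sum(utilisations)) - sum(alloc)
        by_remainder = sorted(range(len(shares)),
                              key=lambda i: shares[i] - alloc[i], reverse=True)
        for i in by_remainder[:leftover]:
            alloc[i] += 1
        return alloc

    # Three tasks fully utilising two processors; boundaries 5 time units apart.
    print(boundary_allocations([0.5, 0.75, 0.75], L=5))   # -> [2, 4, 4]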

Relevance:

10.00%

Publisher:

Abstract:

Project submitted to obtain the degree of Master in Informatics and Computer Engineering

Relevance:

10.00%

Publisher:

Abstract:

The rapid evolution of devices containing integrated circuits, in particular FPGAs (Field Programmable Gate Arrays) and, more recently, FPGA-based Systems on a Chip (SoCs), together with the evolution of the associated tools, has left a gap between the release of these technologies and the production of teaching materials that help engineers carry out hardware/software co-design with them. With the aim of helping to reduce this gap, this work presents the development of documents (tutorials) targeting two recent technologies: the VIVADO hardware/software development tool and the Zynq-7000 (Z-7010) SoC, both developed by Xilinx. The documents produced are based on a basic design implemented entirely in programmable logic and on the same design implemented using the embedded programmable processor, so that the tool's design flow can be assessed both for a design implemented entirely in hardware and for the same design implemented as a hardware/software structure.

Relevance:

10.00%

Publisher:

Abstract:

This work considers the possibility of incorporating remote services, normally associated with web services and cloud computing, into a local solution that centralises the various services in a single system and lets its users consume and configure them both from the local network and remotely over the Internet. This makes it possible to combine the access-from-anywhere-with-Internet characteristic of clouds with the simplicity of concentrating in a single system several services that are normally offered by different providers, while still giving users control over their configuration. To validate that this concept is viable, practical and functional, two components were implemented: a client that runs on the users' devices and provides the interface for consuming the available services, and a server that hosts and provides those services to the clients. The services include a contact list, instant messaging, chat rooms, file transfer, voice and video calls and conferences, remote folders, synchronised folders, backups, shared folders, VoD (Video on Demand) and AoD (Audio on Demand). The client and the server were developed with the Qt framework, which relies on the C++ programming language and its set of libraries for building cross-platform applications. Communication between clients and the server uses XMPP (Extensible Messaging and Presence Protocol), through the qxmpp library and the ejabberd XMPP server. Since XMPP has hundreds of currently active extensions providing features such as chat rooms, file transfers and even multimedia session establishment, its flexibility also allowed the creation of the custom extensions needed for some of the intended functionality. The ffmpeg framework was also used on the server to support some multimedia features. After implementing the client for Windows and Linux and the server for Linux, a set of functional tests was carried out to check that the features and their mechanisms work correctly. Where performance and resource consumption mattered, performance and load tests were also performed.
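
As a small illustration of the kind of XMPP traffic such a client and server exchange (built here with the Python standard library rather than the qxmpp/ejabberd stack used in the project), the snippet below constructs a chat message stanza carrying a custom payload in its own namespace, which is the standard way XMPP is extended; the JIDs and the extension namespace are invented.

    import xml.etree.ElementTree as ET

    # JIDs and the custom-extension namespace are invented examples.
    msg = ET.Element("message", {"to": "alice@homeserver.local",
                                 "from": "bob@homeserver.local/desktop",
                                 "type": "chat"})
    ET.SubElement(msg, "body").text = "Is the shared folder synced?"

    # Custom payloads (e.g. for folder synchronisation) travel as extra child
    # elements in their own XML namespace, which is how XMPP is extended.
    sync = ET.SubElement(msg, "{urn:example:folder-sync}status")
    sync.set("folder", "projects")
    sync.set("state", "up-to-date")

    print(ET.tostring(msg, encoding="unicode"))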