987 results for software implementation
Abstract:
The main objective of this dissertation is to present the implementation of an ADSL software modem written entirely in Java, using the Ptolemy II framework, named Hermes (from "A Handy Experimental Software Modem System"). A software modem is useful when tests and simulations of communication systems must be run with a large number of modems and when the parameters of those systems must be modified with a high degree of freedom. In addition, a software modem has characteristics that make it easier to add, remove, validate, and analyze signal processing and telecommunications functions and algorithms. Tests and simulations were carried out to analyze the functionality of Hermes, including the use of TraceSpan, a piece of equipment for non-intrusive analysis of DSL networks. From the results obtained together with TraceSpan, the functions of Hermes were successfully validated.
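Hermes is only described at a high level here; as an illustration of the kind of signal-processing function such a software modem implements, the sketch below shows discrete multitone (DMT) modulation, the line code used by ADSL, in Python/NumPy. The tone count, bit loading and QAM mapping are illustrative assumptions, not Hermes internals.

```python
import numpy as np

def dmt_modulate(qam_symbols, n_tones=256):
    """Map one block of QAM symbols onto DMT sub-carriers and produce one
    real-valued time-domain DMT symbol via a Hermitian-symmetric IFFT.
    qam_symbols: complex array of length n_tones - 1 (tone 0 / DC unused)."""
    assert len(qam_symbols) == n_tones - 1
    spectrum = np.zeros(2 * n_tones, dtype=complex)
    spectrum[1:n_tones] = qam_symbols                     # positive-frequency tones
    spectrum[n_tones + 1:] = np.conj(qam_symbols[::-1])   # mirror for a real signal
    return np.fft.ifft(spectrum).real                     # time-domain DMT symbol

# Example: 4-QAM on every tone (purely illustrative bit loading).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(255, 2))
qam = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)
symbol = dmt_modulate(qam)
print(symbol.shape)   # (512,) samples per DMT symbol
```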
Abstract:
This work presents a software implementation of the channel coding used in the ADSL standard. The theory of channel coding is described, as well as the channel coding implemented in the ADSL Software Modem using the Ptolemy II development environment. The implementation of an impulsive noise model is also presented. To ensure that the implementation complies with the ADSL standard, tests using the TraceSpan DSL analyzer are described. The work also presents an example application of the ADSL Software Modem: a case study on the effects of impulsive noise on video transmission, analyzing the impact of some channel coding parameters on error correction.
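ADSL channel coding combines Reed-Solomon coding with interleaving so that an impulsive noise burst is spread over several codewords. The sketch below uses a toy block interleaver in Python, simpler than the interleaver actually specified for ADSL; the depth and codeword length are illustrative values, not the parameters studied in the work.

```python
import numpy as np

def block_interleave(symbols, depth):
    """Write symbols row-by-row into a depth x width matrix, read column-by-column."""
    width = len(symbols) // depth
    return np.asarray(symbols).reshape(depth, width).T.reshape(-1)

def block_deinterleave(symbols, depth):
    width = len(symbols) // depth
    return np.asarray(symbols).reshape(width, depth).T.reshape(-1)

# Toy setup: 4 codewords of 16 symbols, interleaving depth 4 (illustrative values).
depth, codeword_len = 4, 16
data = np.arange(depth * codeword_len)
tx = block_interleave(data, depth)

# An impulsive-noise burst erases 8 consecutive transmitted symbols.
tx_noisy = tx.copy()
tx_noisy[20:28] = -1
rx = block_deinterleave(tx_noisy, depth)

# After de-interleaving, the burst is spread across codewords: each codeword
# sees only a couple of erasures, which Reed-Solomon decoding could correct.
for cw in range(depth):
    errors = np.sum(rx[cw * codeword_len:(cw + 1) * codeword_len] == -1)
    print(f"codeword {cw}: {errors} erased symbols")
```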
Abstract:
The growing demand for capacity has been driving wireless communication standards to support the coexistence of macro and pico cells. The backhaul, the connection between the access network and the core network, is of great interest in this context because of the many technical and financial challenges involved in trying to satisfy growing user traffic. Optical fiber and line-of-sight microwave are the most widely adopted options for macro-cell backhaul. In many situations of practical interest, however, they are not feasible because of the costs and logistics involved. This work evaluates pico-cell backhaul, focusing first on the use of copper. The OPNET simulator was used to evaluate backhaul requirements for mobile networks in specific scenarios, considering quality-of-service guarantees for the various traffic types involved. Assuming LTE and LTE-Advanced traffic demands, the VDSL2 and G.fast technologies are evaluated, and the results show that even with a large demand for high-definition video applications these technologies can accommodate pico-cell backhaul traffic. VDSL2 can provide the rates required for LTE pico-cell scenarios but cannot accommodate typical LTE-Advanced traffic. On the other hand, considering the rates achieved with G.fast, LTE-Advanced pico-cell backhaul traffic can still be delivered with quality-of-service guarantees. This work also proposes a solution for simulating scenarios with heterogeneous access networks over a non-line-of-sight LTE backhaul. OPNET simulation results with the proposed LTE backhaul are also presented to validate the proposed solution as capable of characterizing the traffic of both WiFi and LTE access-network technologies according to service type.
Abstract:
Topographical surfaces can be represented with a good degree of accuracy by means of maps. However, these are not always the best tools for understanding more complex reliefs. In this sense, the main contribution of this work is to specify and implement the architecture of an open-source software system capable of representing TIN (Triangular Irregular Network) based digital terrain models. The system implementation follows the object-oriented and generic programming paradigms, enabling the integration of several open-source tools such as GDAL, OGR, OpenGL, OpenSceneGraph and Qt. Furthermore, the representation core of the system can work with multiple topological data structures from which all the connectivity relations between the vertices, edges and faces of a planar triangulation can be extracted in constant time, which greatly helps the implementation of real-time applications. This is an important capability, for example, when using laser survey data (Lidar, ALS, TLS), allowing the generation of triangular meshes on the order of millions of points.
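The abstract mentions topological data structures that answer vertex/edge/face connectivity queries in constant time; a common example is the half-edge (doubly connected edge list) structure. The Python sketch below is a generic illustration of that idea, not the system's actual representation core.

```python
from dataclasses import dataclass

@dataclass
class HalfEdge:
    origin: int                        # index of the vertex this half-edge leaves
    twin: "HalfEdge | None" = None     # opposite half-edge on the adjacent face
    next: "HalfEdge | None" = None     # next half-edge around the same face
    face: "int | None" = None          # index of the incident triangle

def face_vertices(h: HalfEdge):
    """Vertices of the triangle incident to h, in O(1) per entity."""
    return [h.origin, h.next.origin, h.next.next.origin]

def adjacent_face(h: HalfEdge):
    """Triangle on the other side of the edge, or None on the boundary."""
    return h.twin.face if h.twin else None

# Two triangles (0,1,2) and (0,2,3) sharing edge 0-2 (illustrative mesh).
h02 = HalfEdge(0, face=0); h21 = HalfEdge(2, face=0); h10 = HalfEdge(1, face=0)
h02.next, h21.next, h10.next = h21, h10, h02
h20 = HalfEdge(2, face=1); h03 = HalfEdge(0, face=1); h32 = HalfEdge(3, face=1)
h20.next, h03.next, h32.next = h03, h32, h20
h02.twin, h20.twin = h20, h02

print(face_vertices(h02))   # [0, 2, 1]
print(adjacent_face(h02))   # 1
```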
Abstract:
The present work aims to carry out a selectivity and coordination study of an isolated electrical system with the aid of the PTW (Power Tools for Windows) software. Based on the applicable protection standards, on equipment data and on the time-versus-current curves (Time Current Curve, TCC), protection settings can be defined so as to leave the system selective, coordinated and properly protected. The settings are defined taking into account the so-called thermal curves of the equipment, which consider the rated current and the short-circuit withstand capability of the equipment and cables involved in the installation in question. For that, the tools provided by PTW are used to simulate an industrial electrical circuit, and the results are presented and discussed. This validates PTW as a very helpful tool for carrying out coordination and selectivity studies.
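Time-current coordination is typically expressed through inverse-time overcurrent curves; a widely used family is defined in IEC 60255. The sketch below evaluates the IEC standard-inverse trip time purely as an illustration of how such TCC curves are computed: the curve constants are the standard ones, while the pickup current and time multiplier are made-up example values, not settings from this study.

```python
def iec_standard_inverse_trip_time(current_a, pickup_a, tms):
    """IEC 60255 standard-inverse curve: t = TMS * 0.14 / ((I/Is)**0.02 - 1).
    Valid only above pickup (I > Is); below pickup the relay does not trip."""
    multiple = current_a / pickup_a
    if multiple <= 1.0:
        return float("inf")   # no trip below the pickup current
    return tms * 0.14 / (multiple ** 0.02 - 1.0)

# Hypothetical feeder relay: 400 A pickup, time multiplier 0.2.
for fault_current in (800, 2000, 5000):
    t = iec_standard_inverse_trip_time(fault_current, pickup_a=400, tms=0.2)
    print(f"{fault_current} A -> trip in {t:.2f} s")
```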
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Telecommunications have been in constant evolution during the past decades. Among the technological innovations, the use of digital technologies is particularly relevant. Digital communication systems have proven their efficiency and brought a new element into the signal transmission and reception chain: the digital processor. This device gives new radio equipment the flexibility of a programmable system. Nowadays, the behavior of a communication system can be modified simply by changing its software. This gave rise to a new radio model called Software Defined Radio (SDR). In this model, the task of defining the radio's behavior is moved to software, leaving to hardware only the implementation of the RF front-end. Thus, the radio is no longer static, defined by its circuits; it becomes a dynamic element whose operating characteristics, such as bandwidth, modulation and coding rate, can be modified even at runtime according to the software configuration. This article aims to present the use of the GNU Radio software, an open-source solution for SDR applications, as a tool for the development of configurable digital radios.
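As a minimal illustration of the "behavior defined in software" idea, the sketch below builds a trivial GNU Radio flowgraph in Python whose signal parameters are plain variables that can be changed while the graph runs; the block choice and parameter values are illustrative, not taken from the article.

```python
from gnuradio import gr, analog, blocks

class ToyFlowgraph(gr.top_block):
    """A trivial flowgraph: complex sine source -> throttle -> null sink."""
    def __init__(self, samp_rate=1e6, freq=100e3):
        gr.top_block.__init__(self, "toy_sdr")
        self.src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, freq, 1.0)
        self.throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        self.sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(self.src, self.throttle, self.sink)

    def set_frequency(self, freq):
        # "Radio behavior" changed purely in software, while the graph runs.
        self.src.set_frequency(freq)

if __name__ == "__main__":
    tb = ToyFlowgraph()
    tb.start()
    tb.set_frequency(250e3)   # retune without touching any hardware
    tb.stop(); tb.wait()
```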
Abstract:
Introduction: The implementation of hearing screening programs can be facilitated by reducing operating costs, including the cost of equipment. The Telessaúde (TS) audiometer is a low-cost, software-based, and easy-to-use piece of equipment for conducting audiometric screening. Aim: To evaluate the TS audiometer for conducting audiometric screening. Methods: A prospective randomized study was performed. Sixty subjects, divided into those who did not have (group A, n = 30) and those who had otologic complaints (group B, n = 30), underwent audiometric screening with conventional and TS audiometers in a randomized order. Pure tones at 25 dB HL were presented at frequencies of 500, 1000, 2000, and 4000 Hz. A "fail" result was recorded when the individual failed to respond to at least one of the stimuli. Pure-tone audiometry was also performed on all participants. The concordance of the results of screening with both audiometers was evaluated. The sensitivity, specificity, and positive and negative predictive values of screening with the TS audiometer were calculated. Results: For group A, 100% of the ears tested passed the screening. For group B, "pass" results were obtained in 34.2% (TS) and 38.3% (conventional) of the ears tested. The agreement between procedures (TS vs. conventional) ranged from 93% to 98%. For group B, screening with the TS audiometer showed 95.5% sensitivity, 90.4% specificity, and positive and negative predictive values of 94.9% and 91.5%, respectively. Conclusions: The results of the TS audiometer were similar to those obtained with the conventional audiometer, indicating that the TS audiometer can be used for audiometric screening.
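For reference, these screening metrics come straight from a 2x2 confusion matrix; the sketch below computes them for a hypothetical set of counts (the counts are made up, not the study's data).

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix,
    where 'positive' means the screening flagged a hearing loss."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged among truly impaired ears
        "specificity": tn / (tn + fp),   # passed among truly normal ears
        "ppv": tp / (tp + fp),           # truly impaired among flagged ears
        "npv": tn / (tn + fn),           # truly normal among passed ears
    }

# Hypothetical counts for illustration only.
print(screening_metrics(tp=42, fp=3, fn=2, tn=33))
```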
Abstract:
Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal and provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this goal we need to analyse different logics and choose the one that best meets our needs. In logic, we usually try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm to find such implications. In this work we use a logic rather similar to human reasoning. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion from premises that are not definitely true and belong to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed for efficient defeasible reasoning by Nute (see Chapter 2). These kinds of applications are useful in the legal domain, especially if they offer an implementation of an argumentation framework that provides a formal model of a game. Roughly speaking, if the theory is the set of laws, a key claim is the conclusion that one party wants to prove (and the other wants to defeat), and rules can be asserted dynamically, namely facts put forward by the parties, then we can play an argumentative challenge between two players and decide whether the conclusion is provable or not depending on the strategies adopted by the players. Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but we need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a meta-level containing different meta-evaluators. The first has been explained above, the second is needed to perform the game model, and the last is used to change the game execution and tree derivation strategies.
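As a rough, much-simplified illustration of defeasible derivability (ignoring strict rules, defeaters and most of Nute's proof theory), the Python sketch below checks whether a literal is supported by an applicable rule that is not overridden by an applicable rule for the opposite literal; the rule names and data layout are invented for the example and are not the thesis's framework.

```python
# A literal is a string; "~p" is the negation of "p".
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def applicable(rule, facts):
    """A rule is applicable when all of its antecedents are among the facts."""
    return all(a in facts for a in rule["body"])

def defeasibly_holds(goal, facts, rules, superiority):
    """goal holds if some applicable rule concludes it and every applicable
    rule for the opposite conclusion is beaten by a superior rule for goal."""
    pro = [r for r in rules if r["head"] == goal and applicable(r, facts)]
    con = [r for r in rules if r["head"] == negate(goal) and applicable(r, facts)]
    if goal in facts:
        return True
    if not pro:
        return False
    return all(any((p["name"], c["name"]) in superiority for p in pro) for c in con)

# Classic toy theory: birds fly, penguins don't, and the penguin rule is superior.
rules = [
    {"name": "r1", "head": "flies", "body": ["bird"]},
    {"name": "r2", "head": "~flies", "body": ["penguin"]},
]
superiority = {("r2", "r1")}   # r2 overrides r1
print(defeasibly_holds("flies", {"bird"}, rules, superiority))             # True
print(defeasibly_holds("flies", {"bird", "penguin"}, rules, superiority))  # False
```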
Abstract:
Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and proofs related to the metatheory of programming languages. This last field, which is an obvious application of interactive theorem proving, poses nonetheless a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed, and to difficulties related to the management of notions typical of programming languages, like variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, part I covers: - the results of our effort in providing a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code; - a discussion of the implementation of two tactics, providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advancement over the state of the art. In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours: - a discussion of basic issues we encountered in our formalizations of part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita; - a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; this work adopts, for the encoding of binding, an extension of Masahiko Sato's canonical locally named representation, which we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
Abstract:
A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in the on-board software of future space missions. Our PhD project is placed in the context of that effort. At the outset of our work we gathered the industrial needs relevant to ESA and to the main European space stakeholders, and we consolidated them into a set of technical high-level requirements. The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for fulfilling those high-level requirements. The software reference architecture we set out to build rests on four constituents: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to ensure that the architectural description of the software is statically analyzable; (iii) a programming model, to ensure that the implementation of the design entities conforms with the semantics, the assumptions and the constraints of the computational model; (iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis. The nature, feasibility and fitness of constituents (ii), (iii) and (iv) were already proved by the author in an international project that preceded the commencement of the PhD work. The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), the component model. Our proposed component model is centered on: (i) rigorous separation of concerns, achieved with the support for design views and by careful allocation of concerns to dedicated software entities; (ii) support for the specification and model-based analysis of extra-functional properties; (iii) the inclusion of space-specific concerns.
Abstract:
This research work focuses on a novel multiphase-multilevel AC motor drive system well suited for low-voltage, high-current power applications: specifically, a six-phase asymmetrical induction motor with an open-end stator winding configuration, fed from four standard two-level three-phase voltage source inverters (VSIs). The proposed synchronous-reference-frame control algorithm shares the total DC source power among the four VSIs in each switching cycle with three degrees of freedom. The first degree of freedom concerns the current sharing between the two three-phase stator windings. A modified multilevel space vector pulse width modulation then shares the voltage between the two VSIs of each three-phase stator winding, providing the second and third degrees of freedom and proper multilevel output waveforms. A complete model of the whole AC motor drive, based on a three-phase space vector decomposition approach, was developed in PLECS, a numerical simulation tool working in the MATLAB environment. The proposed synchronous reference control algorithm was implemented in MATLAB together with the modified multilevel space vector pulse width modulator, and the effectiveness of the entire AC motor drive system was tested. Detailed simulation results are given for symmetrical and asymmetrical power sharing conditions. Furthermore, the three degrees of freedom are exploited to investigate fault-tolerant capabilities in post-fault conditions, and a complete set of simulation results is provided when one, two and three VSIs are faulty. A hardware prototype of the quad-inverter was implemented with two passive three-phase open-winding loads using two TMS320F2812 DSP controllers. A McBSP (multi-channel buffered serial port) communication algorithm was developed to control the four VSIs, providing PWM communication and synchronization. An open-loop control scheme based on an inverse three-phase decomposition approach was developed to control the entire quad-inverter configuration and was tested under balanced and unbalanced operating conditions with simplified PWM techniques. Both simulation and experimental results are in good agreement with the theoretical developments.
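The voltage-sharing degree of freedom can be pictured for a single open-end winding: its phase voltage is the difference of the two inverter outputs, so a sharing coefficient splits one reference between them. The sketch below is only a conceptual illustration of that splitting, not the modified space vector modulator developed in the work; the sharing factors and reference amplitude are made-up example values.

```python
import numpy as np

def split_reference(v_ref, k):
    """Split a winding voltage reference between the two VSIs feeding an
    open-end winding: v_winding = v_inv1 - v_inv2, with sharing factor k."""
    v_inv1 = k * v_ref
    v_inv2 = -(1.0 - k) * v_ref
    return v_inv1, v_inv2

# Rotating reference (space vector as a complex number), amplitude 1 pu.
theta = np.linspace(0, 2 * np.pi, 6, endpoint=False)
v_ref = np.exp(1j * theta)

for k in (0.5, 0.7):                        # symmetrical / asymmetrical sharing
    v1, v2 = split_reference(v_ref, k)
    assert np.allclose(v1 - v2, v_ref)      # the winding still sees v_ref
    print(f"k={k}: |v_inv1|={abs(v1[0]):.2f} pu, |v_inv2|={abs(v2[0]):.2f} pu")
```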
Abstract:
The PhD activity described in this document was carried out at the Microsatellite and Microsystem Laboratory of the II Faculty of Engineering, University of Bologna. The main objective is the design and development of a GNSS receiver for the orbit determination of microsatellites in low Earth orbit. The development starts from the electronic design and goes up to the implementation of the navigation algorithms, covering all the aspects involved in this type of application. The use of GPS receivers for orbit determination is a consolidated application used in many space missions, but the deployment of new GNSS systems within a few years, such as the European Galileo, the Chinese COMPASS and the modernized Russian GLONASS, poses new challenges and offers new opportunities to improve orbit determination performance. The evaluation of the improvements coming from the new systems, together with the implementation of a receiver compatible with at least one of them, are the main activities of the PhD. The activities can be divided into three parts: receiver requirements definition and prototype implementation, design and analysis of the GNSS signal tracking algorithms, and design and analysis of the navigation algorithms. The receiver prototype is based on a Xilinx Virtex FPGA and includes a PowerPC processor. The architecture follows the software-defined radio paradigm, so most of the signal processing is performed in software while only what is strictly necessary is done in hardware. The tracking algorithms are implemented as a combination of a Phase Locked Loop and a Frequency Locked Loop for the carrier, and a Delay Locked Loop with variable bandwidth for the code. The navigation algorithm is based on the extended Kalman filter and includes an accurate LEO orbit model.
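The abstract only names the extended Kalman filter; as a generic reminder of its structure (not the thesis's LEO orbit model or measurement model, both of which are placeholders here), a minimal predict/update skeleton looks like this:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended Kalman filter cycle.
    x, P : prior state estimate and covariance
    z    : new measurement (e.g. pseudoranges)
    f, F : state propagation function and its Jacobian (orbit dynamics)
    h, H : measurement function and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Prediction through the (placeholder) dynamics model.
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q

    # Measurement update.
    H_k = H(x_pred)
    y = z - h(x_pred)                          # innovation
    S = H_k @ P_pred @ H_k.T + R               # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```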
Abstract:
Among the scientific objectives addressed by the Radio Science Experiment hosted on board the ESA mission BepiColombo is the retrieval of the rotational state of planet Mercury. In fact, the estimation of the obliquity and of the libration amplitude has proven to be fundamental for constraining the interior composition of Mercury. This is accomplished by the Mercury Orbiter Radio science Experiment (MORE) through a close interaction among different payloads, which makes the experiment particularly challenging. The underlying idea consists in capturing images of the same landmark on the surface of the planet at different epochs in order to observe a displacement of the identified features with respect to a nominal rotation, which allows the rotational parameters to be estimated. Observations must be planned accurately in order to obtain image pairs carrying the highest information content for the subsequent estimation process. This is not a trivial task, especially in light of the several dynamical constraints involved. Another delicate issue is the pattern matching process between image pairs, for which the lowest correlation errors are desired. The research activity was conducted in the frame of the MORE rotation experiment and addressed the design and implementation of an end-to-end simulator of the experiment, with the final objective of establishing an optimal science planning of the observations. The thesis illustrates the implementation of the individual modules forming the simulator along with the simulations performed. The results obtained from the preliminary release of the optimization algorithm are finally presented, although the software is only at a preliminary release and will be improved and refined in the future, also taking into account the developments of the mission.
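Pattern matching between image pairs is commonly done with normalized cross-correlation of a template against a search window; the sketch below is a generic NumPy illustration of that technique, not the simulator's actual matching module.

```python
import numpy as np

def normalized_cross_correlation(search, template):
    """Slide a template over a search window and return the NCC map;
    the location of the maximum gives the estimated feature displacement."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    rows = search.shape[0] - th + 1
    cols = search.shape[1] - tw + 1
    ncc = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = search[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            ncc[r, c] = (p * t).sum() / denom if denom > 0 else 0.0
    return ncc

# Toy example: the "landmark" is shifted by (3, 5) pixels in the second image.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
template = image[20:30, 20:30]                         # landmark in the first image
shifted = np.roll(image, shift=(3, 5), axis=(0, 1))    # second-epoch image
ncc = normalized_cross_correlation(shifted, template)
print(np.unravel_index(ncc.argmax(), ncc.shape))       # ~ (23, 25)
```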
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
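The ABH itself is the dissertation's contribution and its exact design is not given here; as a rough point of reference, the sketch below shows a conventional array-backed binary min-heap whose entries are addressed through a handle map, so an arbitrary timer can be re-keyed or removed without a search. This is an assumption-laden illustration of the general idea the name suggests, not the ABH.

```python
class AddressableMinHeap:
    """Binary min-heap whose entries are addressed by a handle dict,
    allowing O(log n) re-keying of arbitrary items (e.g. timers)."""
    def __init__(self):
        self.heap = []        # list of (key, item)
        self.pos = {}         # item -> index in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] < self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self.heap[child][0] < self.heap[smallest][0]:
                    smallest = child
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def push(self, key, item):
        self.heap.append((key, item))
        self.pos[item] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def pop_min(self):
        self._swap(0, len(self.heap) - 1)
        key, item = self.heap.pop()
        del self.pos[item]
        if self.heap:
            self._sift_down(0)
        return key, item

    def reprogram(self, item, new_key):
        """Change an item's expiration without searching the whole heap."""
        i = self.pos[item]
        old_key = self.heap[i][0]
        self.heap[i] = (new_key, item)
        self._sift_up(i) if new_key < old_key else self._sift_down(i)

# Example: three timers, then one is re-armed to expire earlier.
timers = AddressableMinHeap()
timers.push(150, "t1"); timers.push(80, "t2"); timers.push(200, "t3")
timers.reprogram("t3", 10)
print(timers.pop_min())   # (10, 't3')
```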