992 results for Hardware reconfigurable
Abstract:
The number of Internet services is growing continuously. A person typically has a separate electronic identity for each service they use. Storing authentication credentials securely becomes increasingly difficult as a new set accumulates with every service registration. This Master's thesis examines the problem and its solutions from both a service-oriented and a technical perspective. The business concept of service-oriented identity management and its implementation techniques, such as single sign-on (SSO) and the Security Assertion Markup Language (SAML), are reviewed through high-level examples and by studying the concept and technical details of the solution produced in the Nokia Account project. Finally, the implementation of the first version of the Nokia Account service is analysed against the design principles and requirements of identity management services.
Abstract:
This Master's thesis examines policy-based network access control (NAC) solutions from an architectural perspective. The thesis reviews the NAC solutions of the Trusted Computing Group, Microsoft Corporation, Juniper Networks and Cisco Systems. NAC consists of a set of new and existing technologies that, based on a predefined policy base, help control the network connections of devices attempting to join a protected network. In addition to identifying the user, NAC can restrict network access on the basis of device-specific properties, for example antivirus signatures and operating system updates, and, within certain limits, remediate deficiencies found in these so that access can be granted. NAC is a relatively new concept that lacks a precise definition. As a result, products with incomplete feature sets are sold on today's market under the NAC label. Standardization to guarantee interoperability between NAC components from different vendors is under way, and on this basis solutions can be divided into those following open standards and those following vendor-specific standards. The NAC solutions presented follow the standards either to a limited extent or not at all. None of the reviewed solutions is a complete NAC, but the Juniper Networks solution emerges as the most promising candidate for further development and research at TietoEnator Processing & Networks Oy. One key problem in the NAC concept is the potentially falsified security-check result that the workstation delivers to the network, on which access control is partly based. One solution to this problem, among others, could be the TPM chip already found in today's computers, which guarantees the correctness and integrity of the data.
Abstract:
Imaging in neuroscience, clinical research and pharmaceutical trials often employs the 3D magnetisation-prepared rapid gradient-echo (MPRAGE) sequence to obtain structural T1-weighted images of the human brain with high spatial resolution. Typical research and clinical routine MPRAGE protocols with ~1 mm isotropic resolution require data acquisition times in the range of 5-10 min and often use only a moderate two-fold acceleration factor for parallel imaging. Recent advances in MRI hardware and acquisition methodology promise improved leverage of the MR signal and more benign artefact properties, in particular when employing increased acceleration factors in clinical routine and research. In this study, we examined four variants of a four-fold-accelerated MPRAGE protocol (2D-GRAPPA, CAIPIRINHA, CAIPIRINHA elliptical, and segmented MPRAGE) and compared clinical readings, basic image quality metrics (SNR, CNR), and automated brain tissue segmentation for morphological assessments of brain structures. The results were benchmarked against a widely used two-fold-accelerated 3T ADNI MPRAGE protocol that served as reference in this study. 22 healthy subjects (age = 20-44 yrs.) were imaged with all MPRAGE variants in a single session. An experienced reader rated all images as being of clinically useful image quality. CAIPIRINHA MPRAGE scans were perceived on average to be of identical value for reading as the reference ADNI-2 protocol. SNR and CNR measurements exhibited the theoretically expected performance at the four-fold acceleration. The results of this study demonstrate that the four-fold accelerated protocols introduce systematic biases in the segmentation results of some brain structures compared to the reference ADNI-2 protocol. Furthermore, the results suggest that the increased noise levels in the accelerated protocols play an important role in introducing these biases, at least under the present study conditions.
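As a brief aside on the quantities compared above, the basic image quality metrics and the theoretically expected SNR penalty of parallel imaging can be summarised as follows; these are generic definitions only, and the ROI choices and noise estimation used in the study are not reproduced here.

```latex
% Generic image quality metrics; ROI selection and noise estimation are study-specific.
\[
  \mathrm{SNR} = \frac{S_{\mathrm{tissue}}}{\sigma_{\mathrm{noise}}}, \qquad
  \mathrm{CNR}_{\mathrm{GM/WM}} = \frac{S_{\mathrm{GM}} - S_{\mathrm{WM}}}{\sigma_{\mathrm{noise}}}
\]
% Expected SNR under parallel imaging with acceleration factor R and coil g-factor g
% (R = 4 for the accelerated variants versus R = 2 for the ADNI reference).
\[
  \mathrm{SNR}_{\mathrm{accel}} = \frac{\mathrm{SNR}_{\mathrm{full}}}{g\sqrt{R}}
\]
```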
Abstract:
The objective is to present a recent technique for evaluating arterial obstructions of the lower limbs by magnetic resonance imaging in a single session, using a double dose of paramagnetic contrast medium administered slowly through a power injector. The method is based on software and hardware called Mobitrak, available on Philips high-field MRI scanners, which allows a large vascular territory to be evaluated from high-resolution, segmental and continuous acquisitions. The sequence used is a gradient echo (FFE) that allows three segments to be programmed simultaneously, with a small overlap at the intersections of these segments. This dynamic sequence is acquired in two phases, one before contrast and the other during its slow injection, with subtraction of the signal from adjacent tissues and 3D reconstructions. The method offers several benefits, such as better visualization of the tibiofibular segments and study of the entire aorta and lower limb in a single patient visit, with a low volume of contrast medium.
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share with embedded systems the same tight constraints on, e.g., size, power consumption and price, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
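The simulation environment itself is not reproduced in the abstract, but a minimal SystemC sketch illustrates the cycle-based modelling style it builds on; the module, its ports and the data width below are purely illustrative and are not taken from the TACO library.

```cpp
// Minimal SystemC sketch of a clocked hardware block (illustrative, not a TACO unit).
#include <systemc.h>
#include <iostream>

SC_MODULE(Checksum) {
    sc_in<bool>          clk;      // clock input
    sc_in<sc_uint<32> >  data_in;  // word to accumulate
    sc_out<sc_uint<32> > sum_out;  // running checksum

    sc_uint<32> acc;               // internal accumulator register

    void step() {                  // executed on each rising clock edge
        acc += data_in.read();
        sum_out.write(acc);
    }

    SC_CTOR(Checksum) : acc(0) {
        SC_METHOD(step);
        sensitive << clk.pos();
        dont_initialize();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);          // 10 ns clock period
    sc_signal<sc_uint<32> > din, dout;

    Checksum cs("cs");                       // instantiate and bind the block
    cs.clk(clk);
    cs.data_in(din);
    cs.sum_out(dout);

    din.write(5);
    sc_start(100, SC_NS);                    // simulate for 100 ns
    std::cout << "sum = " << dout.read() << std::endl;
    return 0;
}
```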
Abstract:
Testing of complex software is time consuming. Many automated tools are available for desktop applications, but embedded systems require a custom-made tool. Building a complete test framework is a complicated task. Therefore, the test platform was built on top of an already existing tool, CANoe. CANoe is a tool for CAN bus analysis and node simulation. The functionality of CANoe was extended with a LabVIEW DLL. The LabVIEW software was used to simulate the hardware components of the embedded device. As a result of the study, a platform was created on which tests could be automated. Of the current test plan, 10 percent was automated, and up to 60 percent could be automated with the current functionality.
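As an illustration of the approach described above, a DLL wrapping a simulated hardware component might expose a handful of C-callable functions to the host tool; the function names, signatures and the first-order sensor model below are hypothetical and do not reflect the actual CANoe or LabVIEW interfaces.

```cpp
// Hypothetical sketch of a Windows DLL exposing a simulated hardware component
// to a host test tool. Names and calling conventions are illustrative only.
#include <cstdint>

namespace {
    double g_simulated_voltage = 0.0;   // state of the simulated sensor
}

extern "C" {

// Host tool calls this to drive the simulated sensor input.
__declspec(dllexport) void __cdecl SetSimulatedVoltage(double volts) {
    g_simulated_voltage = volts;
}

// Host tool polls this to read back what the "hardware" would report.
__declspec(dllexport) double __cdecl GetSimulatedVoltage() {
    return g_simulated_voltage;
}

// One simulation step: a simple first-order response toward a target value.
__declspec(dllexport) void __cdecl StepSimulation(double target, double dt) {
    const double tau = 0.5;                               // time constant in seconds
    g_simulated_voltage += (target - g_simulated_voltage) * (dt / tau);
}

} // extern "C"
```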
Abstract:
Internationalization and the ensuing rapid growth have created the need to consolidate the IT systems of many small-to-medium-sized production companies. Enterprise Resource Planning (ERP) systems are a common solution for such companies. Deployment of these ERP systems consists of many steps, one of which is the implementation of the same shared system at all international subsidiaries. From the IT point of view, this is also one of the most important steps in the internationalization strategy of the company. The mechanical process of creating the required connections for the off-shore sites is the easiest and most well-documented step along the way, but the actual value of the system, once operational, is perceived in its operational reliability. The operational reliability of an ERP system is a combination of many factors, ranging from hardware- and connectivity-related issues to administrative tasks and communication between decentralized administrative units and sites. To accurately analyze the operational reliability of such a system, one must take into consideration the full functionality of the system, including not only the mechanical and systematic processes but also the users and their administration. Operational reliability in an international environment relies heavily on hardware and telecommunication adequacy, so it is imperative to have resources dimensioned with regard to planned usage. Still, with poorly maintained communication and administration schemes, no amount of bandwidth or memory will be enough to maintain a productive level of reliability. This thesis analyzes the implementation of a shared ERP system at an international subsidiary of a Finnish production company. The system is Microsoft Dynamics Ax, currently being introduced to a Slovakian facility, a subsidiary of Peikko Finland Oy. The primary task is to create a feasible basis of analysis against which the operational reliability of the system can be evaluated precisely. Building on a solid analysis, the aim is to give recommendations on how future implementations should be managed.
Abstract:
Dixon techniques are part of the methods used to suppress the signal of fat in MRI. They present many advantages compared with other fat suppression techniques, including (1) the robustness of fat signal suppression, (2) the possibility to combine these techniques with all types of sequences (gradient echo, spin echo) and different weightings (T1-, T2-, proton density-, intermediate-weighted sequences), and (3) the availability of images both with and without fat suppression from one single acquisition. These advantages have opened many applications in musculoskeletal imaging. We first review the technical aspects of Dixon techniques, including their advantages and disadvantages. We then illustrate their applications for the imaging of different body parts, as well as for tumors, neuromuscular disorders, and the imaging of metallic hardware.
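As a short technical reminder of why a single Dixon acquisition yields both fat-suppressed and non-fat-suppressed images, the simplest two-point formulation combines the in-phase and opposed-phase echoes as follows; B0 inhomogeneity and T2* decay are ignored in this simplified form, and clinical implementations typically use multi-point variants with field-map correction.

```latex
% Simplified two-point Dixon: water (W) and fat (F) are recovered from the
% in-phase (IP) and opposed-phase (OP) echoes.
\[
  S_{\mathrm{IP}} = W + F, \qquad S_{\mathrm{OP}} = W - F
\]
\[
  W = \frac{S_{\mathrm{IP}} + S_{\mathrm{OP}}}{2}, \qquad
  F = \frac{S_{\mathrm{IP}} - S_{\mathrm{OP}}}{2}
\]
```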
Abstract:
The extensional theory of arrays is one of the most important ones for applications of SAT Modulo Theories (SMT) to hardware and software verification. Here we present a new T-solver for arrays in the context of the DPLL(T) approach to SMT. The main characteristics of our solver are: (i) no translation of writes into reads is needed, (ii) there is no axiom instantiation, and (iii) the T-solver interacts with the Boolean engine by asking to split on equality literals between indices. As far as we know, this is the first accurate description of an array solver integrated in a state-of-the-art SMT solver and, unlike most state-of-the-art solvers, it is not based on a lazy instantiation of the array axioms. Moreover, it is very competitive in practice, especially on problems that require heavy reasoning on array literals.
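For reference, the extensional theory of arrays referred to above is standardly axiomatized by McCarthy's read-over-write axioms together with extensionality; the solver described in the abstract handles these constraints without instantiating the axioms directly.

```latex
% McCarthy's read-over-write axioms plus the extensionality axiom.
\[
  \forall a\,\forall i\,\forall v.\;\; \mathit{read}(\mathit{write}(a,i,v),i) = v
\]
\[
  \forall a\,\forall i\,\forall j\,\forall v.\;\; i \neq j \;\rightarrow\;
  \mathit{read}(\mathit{write}(a,i,v),j) = \mathit{read}(a,j)
\]
\[
  \forall a\,\forall b.\;\; \bigl(\forall i.\ \mathit{read}(a,i) = \mathit{read}(b,i)\bigr)
  \;\rightarrow\; a = b
\]
```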
Abstract:
Objective: To evaluate, by magnetic resonance imaging, changes in the bone marrow of patients undergoing treatment for type I Gaucher's disease. Materials and Methods: Descriptive, cross-sectional study of Gaucher's disease patients submitted to 3 T magnetic resonance imaging of the femurs and lumbar spine. The images were blindly reviewed and the findings were classified according to the semiquantitative bone marrow burden (BMB) scoring system. Results: All of the seven evaluated patients (three men and four women) presented signs of bone marrow infiltration. Osteonecrosis of the femoral head was found in three patients, Erlenmeyer flask deformity in five, and no patient had vertebral body collapse. The mean BMB score was 11, ranging from 9 to 14. Conclusion: Magnetic resonance imaging is currently the method of choice for assessing bone involvement in Gaucher's disease in adults because of its high sensitivity in detecting both focal and diffuse bone marrow changes. The BMB score is a simplified method for semiquantitative analysis that does not depend on advanced sequences or sophisticated hardware, allowing the extent of the disease to be classified and assisting in treatment monitoring.
Abstract:
Automation was introduced many years ago in several diagnostic disciplines such as chemistry, haematology and molecular biology. The first laboratory automation system for clinical bacteriology was released in 2006, and it rapidly proved its value by increasing productivity, allowing a continuous increase in sample volumes despite limited budgets and personnel shortages. Today, two major manufacturers, BD Kiestra and Copan, are commercializing partial or complete laboratory automation systems for bacteriology. The laboratory automation systems are rapidly evolving to provide improved hardware and software solutions to optimize laboratory efficiency. However, the complex parameters of the laboratory and automation systems must be considered to determine the best system for each given laboratory. We address several topics on laboratory automation that may help clinical bacteriologists to understand the particularities and operative modalities of the different systems. We present (a) a comparison of the engineering and technical features of the various elements composing the two different automated systems currently available, (b) the system workflows of partial and complete laboratory automation, which define the basis for laboratory reorganization required to optimize system efficiency, (c) the concept of digital imaging and telebacteriology, (d) the connectivity of laboratory automation to the laboratory information system, (e) the general advantages and disadvantages as well as the expected impacts provided by laboratory automation and (f) the laboratory data required to conduct a workflow assessment to determine the best configuration of an automated system for the laboratory activities and specificities.
Abstract:
This thesis deals with a hardware-accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource-constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both of them can be attained simultaneously by using dedicated hardware. The computational performance target of the REALJava virtual machine is initially set to match the currently available full custom ASIC Java processors. As a secondary goal, all of the components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customizations to the resulting system; for instance, the floating point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and the software domains is encapsulated into modules. This allows the REALJava virtual machine to be easily integrated into any system, simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance-enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading and control in the time domain. The REALJava virtual machine is prototyped using three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption. The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed both to verify the results and to broaden the spectrum of the tests.
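As a purely illustrative sketch of the kind of encapsulation boundary described above (the names and operations are hypothetical and not taken from REALJava), the software side of such a virtual machine could confine all hardware access behind a small interface, so that porting to a new platform only requires re-implementing that interface.

```cpp
// Hypothetical hardware/software communication boundary: the software part of
// the virtual machine talks to the Java co-processor only through this interface,
// so a new platform port re-implements the interface, not the virtual machine.
#include <cstdint>
#include <cstddef>

class CoprocessorLink {
public:
    virtual ~CoprocessorLink() = default;

    // Push a block of bytecode or data to the co-processor core.
    virtual void write(uint32_t address, const uint8_t* data, std::size_t length) = 0;

    // Read results or status back from the co-processor core.
    virtual void read(uint32_t address, uint8_t* data, std::size_t length) = 0;

    // Block until the core signals completion of the current work item.
    virtual void waitForInterrupt() = 0;
};

// A platform port would subclass this, e.g. an FPGA-bus-backed implementation
// or a pure-software stub used for functional simulation.
```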
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment on CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required for the optic links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, so that a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is extensively used, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques needed to be used there too, to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and we are now waiting to see it in action when the LHC starts running in the autumn of 2008.
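A minimal sketch of the zero-suppression idea mentioned above is given below; the channel numbering and word format are hypothetical and do not correspond to the actual CMS RPC link-board encoding.

```cpp
// Zero suppression: only non-empty channels are transmitted, each tagged with its
// channel number, so mostly-empty detector data shrinks dramatically. The word
// format and channel count here are illustrative only.
#include <cstdint>
#include <vector>

struct Hit {
    uint16_t channel;  // which strip/channel fired
    uint8_t  data;     // payload for that channel (e.g. timing bits)
};

std::vector<Hit> zeroSuppress(const std::vector<uint8_t>& channels) {
    std::vector<Hit> hits;
    for (uint16_t ch = 0; ch < channels.size(); ++ch) {
        if (channels[ch] != 0) {               // skip empty channels entirely
            hits.push_back({ch, channels[ch]});
        }
    }
    return hits;                               // typically far shorter than the input
}
```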
Abstract:
In modern-day organizations there is an increasing number of IT devices such as computers, mobile phones and printers. These devices can be located and maintained by using specialized IT management applications. Costs related to a single device accumulate from various sources and are normally categorized as direct costs, such as hardware costs, and indirect costs, such as labor costs. These costs can be saved in a configuration management database and presented to users using web-based development tools such as ASP.NET. The overall cost of an IT device during its lifecycle can be ten times higher than the actual purchase price of the product, and the ability to define and reduce these costs can save organizations a noticeable amount of money. This Master's Thesis introduces the research field of IT management and defines a custom framework model based on Information Technology Infrastructure Library (ITIL) best practices, designed to be implemented as part of an existing IT management application for defining and presenting IT costs.
Abstract:
The aim of this project is to develop a non-destructive system for characterizing vineyard and fruit tree plantations using a laser sensor (LiDAR - Light Detection and Ranging). The information obtained should make it possible to study the crop's response to specific actions (pruning, irrigation, fertilization, etc.) and to apply plant protection treatments adapted to the foliar density of the crop. The system (software and hardware) was set up at a reduced scale through laboratory tests on an ornamental tree, yielding the most suitable LiDAR sensor configuration and the calibration of the whole system. In 2004, trials were carried out in apple, pear, citrus and vineyard plantations, with the objective of testing the system and obtaining crop data. After changes and improvements to the system and the working methodology, new trials were carried out in 2005, but only on Blanquilla pear and Merlot vines. In all the trials, specific strips of vegetation were scanned and then defoliated manually to calculate their Leaf Area Index (LAI). The data obtained with the LiDAR sensor were analysed by applying the methodology developed by Walklate et al. (2002), and several vegetative crop parameters were obtained, which were subsequently correlated with the experimentally measured Leaf Area Index (LAI). The ability of the different computed parameters to predict the Leaf Area Index (LAI) differs for each crop, and more trials and a larger amount of data are needed to obtain a reliable model for estimating LAI from LiDAR sensor readings. Studying the variability of the vegetation by analysing the variability of the Tree Area Index (TAI) along the row made it possible to determine the minimum number of accumulated scans required for a reliable estimation of the Leaf Area Index. Finally, the effect of the mounting height of the LiDAR sensor relative to the vegetation was studied.
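The Walklate et al. (2002) parameters themselves are not reproduced in the abstract; as a simplified illustration of the underlying principle, leaf area can be related to the fraction of laser beams that pass through the canopy via a Beer-Lambert-type gap-fraction relation (the exact parameter definitions used in the project may differ).

```latex
% Gap-fraction relation: N_total beams enter the canopy, N_gap pass through
% unintercepted; k is an extinction coefficient depending on leaf angle distribution.
\[
  T = \frac{N_{\mathrm{gap}}}{N_{\mathrm{total}}}, \qquad
  \mathrm{LAI} \approx -\frac{\ln T}{k}
\]
```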