929 results for Hardware and software
Abstract:
Integrated master's dissertation in Industrial Electronics and Computer Engineering
Abstract:
Resource management in multi-core processors has gained importance with the evolution of applications and architectures, but this management is highly complex. For example, the same parallel application executed multiple times with the same input data on a single multi-core node can show highly variable execution times. Many hardware and software factors affect performance. The way hardware resources (compute and memory) are assigned to the processes or threads, possibly belonging to several competing applications, is fundamental in determining this performance. The gap between allocating resources without knowing the application's real needs and allocating them with a specific goal keeps growing. The best way to perform this allocation is automatically, with minimal programmer intervention. It is important to note that the way an application runs on a given architecture is not necessarily the most suitable one, and this situation can be improved through proper management of the available resources. Appropriate resource management can benefit both the application developer and the computing environment in which the application runs, allowing a larger number of applications to execute with the same amount of resources. Moreover, such resource management would not require changes to the application or to its operating strategy. In order to propose resource-management policies, the behavior of compute-intensive and memory-intensive applications was analyzed. This analysis was carried out by studying placement among cores, the need to use shared memory, the size of the input workload, the distribution of data within the processor, and the granularity of work. Our goal is to identify how these parameters influence execution efficiency, to identify bottlenecks, and to propose possible improvements. A further proposal is to adapt the strategies already used by the scheduler in order to obtain better results.
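One of the placement parameters mentioned above, pinning work to specific cores, can be explored with a small experiment. The sketch below is a minimal illustration and not the thesis's actual methodology: it times a simple compute-bound kernel while restricting the process to different core sets via os.sched_setaffinity (Linux only). The kernel and the core sets are arbitrary choices made for illustration.

```python
import os
import time

def kernel(n=2_000_000):
    # Simple compute-bound loop standing in for a real workload.
    s = 0.0
    for i in range(1, n):
        s += 1.0 / i
    return s

def time_with_affinity(cores):
    # Restrict this process to the given set of cores, then time the kernel.
    os.sched_setaffinity(0, cores)  # Linux-specific affinity call
    start = time.perf_counter()
    kernel()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Compare two arbitrary placements; a real study would also vary
    # shared-memory use, input size, data distribution, and granularity.
    for placement in [{0}, {0, 1}]:
        print(placement, f"{time_with_affinity(placement):.3f} s")
```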
Abstract:
The accelerated invention of new hardware and software modifies, almost daily, our perception of the world and, therefore, cultural production, blurring concept pairs such as art-literature, picture-book, and image-text. Although these pairs have always been objects of theoretical discourse, the discussion takes on a growing urgency now that new technologies expose what had been sheltered in the realm of theory. The very way of understanding reality is affected by the immediacy of these media. The research analyzes the work of different new-media authors who address the problem of representing memory from this contemporary perspective. The research developed in this doctoral thesis focuses on the representation of memory as it is posed in the work of Chris Marker. Of particular interest are the author's latest devices, created within the framework of the so-called new technologies and the new exhibition spaces for cinema. The project proposes an analysis of the memory suggested by these discourses through the themes proper to them: the archive, cultural identities, the spectator's contribution, databases, and the technological treatment of information. The work of Chris Marker was selected because its modes of production and discourse allow a broad discussion of the so-called new technologies and the world they represent in the new hybrid space built between the visual arts, literature, and technology.
Abstract:
Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to the resources, but it does not guarantee any quality of service. Moreover, Grid does not provide performance isolation; one user's job can influence the performance of another user's job. Another problem with Grid is that its users belong to the scientific community and their jobs require specific, customized software environments. Providing the perfect environment to the user is very difficult in Grid because of its dispersed and heterogeneous nature. Cloud computing, in contrast, provides full customization and control, but there is no simple procedure for submitting user jobs as there is in Grid. Grid computing can provide customized resources and performance to the user through virtualization. A virtual machine can join the Grid as an execution node, or it can be submitted as a job with user jobs inside. Where the first method gives quality of service and performance isolation, the second additionally provides customization and administration. In this thesis, a solution is proposed to enable virtual machine reuse, providing performance isolation together with customization and administration: the same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request. The solution describes two scenarios to achieve this goal. In the first scenario, users submit their customized virtual machine as a job, and the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured in the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios. Condor supports virtual machine jobs, so scenario 1 is deployed using the Condor VM universe. The second scenario uses the VMware VIX API to script powering the remote virtual machines on and off. The experimental results show that, because scenario 2 does not need to transfer the virtual machine image, the virtual machine becomes live on the pool much faster. In scenario 1, the virtual machine runs as a Condor job, so it is easy to administer; the only pitfall of scenario 1 is the network traffic.
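The thesis scripts scenario 2 through the VMware VIX API; a comparable effect can be sketched by driving VMware's vmrun command-line tool (a front end to VIX) from Python. The sketch below is only an illustration of that idea: the VM path is a placeholder, and the power-on/power-off pairing around the jobs is assumed rather than taken from the thesis.

```python
import subprocess

# Placeholder path to a preconfigured virtual machine (scenario 2);
# in the thesis this VM already resides on the execution system.
VMX_PATH = "/vms/user-custom/worker.vmx"

def power_on(vmx=VMX_PATH):
    # 'vmrun' is VMware's CLI built on the VIX API.
    # 'nogui' starts the VM headless so it can join the Condor pool.
    subprocess.run(["vmrun", "start", vmx, "nogui"], check=True)

def power_off(vmx=VMX_PATH):
    # Soft shutdown once the user's jobs have finished.
    subprocess.run(["vmrun", "stop", vmx, "soft"], check=True)

if __name__ == "__main__":
    power_on()
    # ... Condor jobs run inside the VM while it is part of the pool ...
    power_off()
```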
Abstract:
Applications delivered over the Internet as a service (Software as a Service, SaaS) and the underlying hardware and software of the data centers (the Cloud) are the two elements of the equation called cloud computing. In this paradigm, three main roles are played: the cloud provider, the cloud user who is in turn a service provider (such as repositories), and the end users of the service. The former benefit from specialization and economies of scale, while the latter benefit from greater elasticity in provisioning. In this regard, DuraSpace has created a pilot called DuraCloud to test the use of cloud storage technologies for the preservation of digital content. The workshop aims to describe the basic concepts of the cloud, with examples of where this kind of technology is being used, and the impact it can have on digital repositories.
Abstract:
The Mechatronics Research Centre (MRC) owns a small-scale robot manipulator called a Mini-Mover 5. This robot arm is a microprocessor-controlled, six-jointed mechanical arm designed to provide an unusual combination of dexterity and low cost. The Mini-Mover 5 is operated by a number of stepper motors and is controlled through a PC parallel port via a discrete logic board. The manipulator also has an impoverished array of sensors. This project requires that a new control board and suitable software be designed to allow the manipulator to be controlled from a PC. The control board will also provide a mechanism for values measured by the sensors to be returned to the PC. The project considers: stepper motor control requirements, sensor technologies, power requirements, USB protocols, USB hardware and software development, and control requirements (e.g. sample rates). This report also reviews the history and background of robots and concentrates on how stepper motors and the parallel port work.
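As a concrete illustration of the stepper-motor control requirement mentioned above, the sketch below generates the classic full-step drive sequence for a four-coil stepper and steps through it at a fixed rate. The coil ordering, pin abstraction, and timing are assumptions for illustration, not details of the Mini-Mover 5's actual board or drivers.

```python
import time

# Full-step sequence for a four-coil stepper motor (one coil energized
# at a time). The ordering is a common convention, assumed here rather
# than taken from the Mini-Mover 5 documentation.
FULL_STEP = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

def write_coils(state):
    # Placeholder for the real output: on the original design this would
    # set parallel-port data lines; on the new board, a USB command.
    print("coils:", state)

def rotate(steps, step_delay_s=0.01, direction=1):
    # Step through the sequence; a negative direction reverses rotation.
    for i in range(steps):
        write_coils(FULL_STEP[(i * direction) % len(FULL_STEP)])
        time.sleep(step_delay_s)  # controls speed; ties into sample-rate limits

if __name__ == "__main__":
    rotate(8)                 # a few steps forward
    rotate(8, direction=-1)   # and back
```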
Abstract:
House File 2196 required the Department of Transportation (DOT) to study the acceptance of electronic payments at its customer service sites and at sites operated by county treasurers. Specifically, the legislation required the following: “The department of transportation shall review the current methods the department employs for the collection of fees and other revenues at sites operated by county treasurers under chapter 321M and at customer service sites operated by the department. In conducting its review, the department, in cooperation with the treasurer of state, shall consider providing an electronic payment option for all of its customers. The department shall report its findings and recommendations by December 31, 2008, to the senate and house standing committees on transportation regarding the advantages and disadvantages of implementing one or more electronic payment systems.” This review focused on estimating the costs of providing an electronic payment option for customers of the DOT driver’s license stations and those of the 81 county treasurers. Customers at these sites engage in three primary financial transactions for which acceptance of electronic payments was studied: paying for a driver’s license (DL), paying for a non-operator identification card (ID), and paying certain civil penalties. Both consumer credit cards and PIN-based debit cards were reviewed as electronic payment options. It was assumed that most transactions would be made using a consumer credit card. Credit card companies charge a fee for each transaction in which their cards are used, and the amount of these fees varies among credit card companies. The estimates for credit card fees used in this study were based on the State Treasurer of Iowa’s current credit card contract, which is due to expire in September 2009. Since credit card companies adjust their fees each year, estimates were based on the 2008 fee schedule. There is also a fee for the use of PIN-based debit cards; the estimates for PIN-based debit card transactions were based on information provided by Wells Fargo Merchant Services for current fees charged by debit card networks. Credit and debit card transactions would be processed through vendor-provided hardware and software. Those costs would be determined through the competitive bidding process, since several vendors provide this function; therefore, they are not reflected in this document.
Abstract:
In recent years, the popularity of wireless (WiFi) networks has grown at a relentless pace: from small devices installed in homes with this technology as a complement to the Internet access routers supplied by various companies, to businesses making small deployments to interconnect their offices. Alongside these scenarios, a worldwide social phenomenon of adoption of this technology has taken place, in the form of what we know as community networks / free networks / social networks. These networks have been made possible by several factors that have made both the equipment and the necessary knowledge affordable to the groups of people carrying out these deployments. Within this framework, in Bages, specifically in Manresa, one of these networks began to be developed. The network's decision to use exclusively open-source hardware and software, together with certain technical aspects of the network, has made it incompatible with some of the existing network-management applications developed by communities such as guifi.net in Osona. For this reason, to guarantee the growth, survival, and long-term success of this network, it is essential to have a management tool suited to the characteristics of GuifiBages. The main objective of this work is to provide the GuifiBages network with the tools needed to manage all the information about the structure of its network, both to ease access for new users without much technical knowledge and to facilitate new deployments, repairs, and modifications of the network in an automated way. As a conclusion of this work, we can state that the advantages provided by technologies such as Plone greatly ease the creation of content-management applications in a web environment. At the same time, the use of newer programming techniques such as AJAX, or resources such as those offered by Google, makes it possible to develop web applications that have nothing to envy traditional software. Finally, we would like to highlight the exclusive use of free software, both in the software packages needed for development and in the operating system and programs of the computers on which the work was carried out, demonstrating that quality systems can be developed without depending on proprietary software.
Abstract:
Pursuant to Iowa Code Section 307.46(2), the following report is submitted on the use of reversions. The Iowa Department of Transportation spent $476,566 of the Fiscal Year 2009 Road Use Tax Fund/Primary Road Fund budget reversion in Fiscal Year 2010 for network hardware and software, server hardware and software, and communications and computer equipment.
Abstract:
Current limitations of coronary magnetic resonance angiography (MRA) include a suboptimal signal-to-noise ratio (SNR), which limits spatial resolution and the ability to visualize distal and branch vessel coronary segments. Improved SNR is expected at higher field strengths, which may provide improved spatial resolution. However, a number of potential adverse effects on image quality have been reported at higher field strengths. The limited availability of high-field systems equipped with cardiac-specific hardware and software has previously precluded successful in vivo human high-field coronary MRA data acquisition. In the present study we investigated the feasibility of human coronary MRA at 3.0 T in vivo. The first results obtained in nine healthy adult subjects are presented.
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control, and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely the industrial automation used in PLC, DCS, SCADA, and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface, and application market, which gives customers more choices thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based, or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated designs consisting of proprietary software and proprietary, mainly microprocessor-based hardware. They enjoy admirable profitability levels on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, since their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels formerly occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition among the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research, and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents, and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take given the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each focusing on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix and Linux, and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung, and others, has created new markets for personal computers, smartphones, and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem considering only linear arithmetic. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in several applications as we illustrate with several examples. Nevertheless, we also present new techniques based on the analysis of unsatisfiable cores that allow one to efficiently prove unsatisfiability too for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken both from the academic and the industrial world.
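To make the setting concrete, the sketch below (not the authors' actual encoding) shows a tiny non-linear satisfiability query together with one common linearization trick: case-splitting a variable over a small finite domain so that every product becomes a linear term. It assumes the z3-solver Python package; the constraint and the domain bound are made up for illustration.

```python
from z3 import Int, Solver, Or, And, sat

x, y = Int("x"), Int("y")

# Non-linear goal: find integers with x*y + y >= 7 and x + y <= 5.
# Instead of handing the solver the product x*y directly, we case-split
# x over a small assumed domain {0..4}; under each case the constraint
# is linear in y, which mirrors the "encode into linear SMT" idea.
s = Solver()
s.add(x >= 0, x <= 4, x + y <= 5)
s.add(Or([And(x == k, k * y + y >= 7) for k in range(5)]))  # k*y is linear

if s.check() == sat:
    m = s.model()
    print("satisfiable:", m[x], m[y])   # one concrete witness
else:
    print("no solution within the assumed domain for x")
```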
Abstract:
Recent reports indicate that of the over 25,000 bridges in Iowa, slightly over 7,000 (29%) are either structurally deficient or functionally obsolete. While many of these bridges may be strengthened or rehabilitated, some simply need to be replaced. Before implementing one of these options, one should consider performing a diagnostic load test on the structure to more accurately assess its load-carrying capacity. Frequently, diagnostic load tests reveal strength and serviceability characteristics that exceed the predicted codified parameters. Codified parameters are usually very conservative in predicting lateral load distribution characteristics and the influence of other structural attributes; as a result, the predicted rating factors are typically conservative. In cases where theoretical calculations show a structural deficiency, it may be very beneficial to apply a "tool" that utilizes a more accurate theoretical model incorporating field-test data. At a minimum, this approach results in more accurate load ratings, and it often results in increased rating factors. Bridge Diagnostics, Inc. (BDI) developed hardware and software specially designed for performing bridge ratings based on data obtained from physical testing. To evaluate the BDI system, the research team performed diagnostic load tests on seven "typical" bridge structures: three steel-girder bridges with concrete decks, two concrete slab bridges, and two steel-girder bridges with timber decks. In addition, a steel-girder bridge with a concrete deck previously tested and modeled by BDI was investigated for model verification purposes. The tests were performed by attaching strain transducers to the bridges at critical locations to measure strains resulting from truck loading positioned at various locations on the bridge. The field test results were used to develop and validate analytical rating models. Based on the experimental and analytical results, it was determined that bridge tests could be conducted relatively easily, that accurate models could be generated with the BDI software, and that the load ratings, in general, were greater than the ratings obtained using the codified LFD Method (according to the AASHTO Standard Specifications for Highway Bridges).
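For context on what a "rating factor" means here, the sketch below computes an LFD-style rating factor RF = (C - A1*D) / (A2*L*(1 + I)). The load factors shown are the common inventory-level values, but the numeric inputs are invented purely for illustration and are not results from the BDI tests described above.

```python
def rating_factor(capacity, dead_load_effect, live_load_effect,
                  impact=0.33, a1=1.3, a2=2.17):
    """LFD-style rating factor: RF = (C - A1*D) / (A2*L*(1 + I)).

    a1 and a2 are the usual inventory-level load factors; field testing can
    justify a smaller live-load effect L (better lateral distribution),
    which raises RF.
    """
    return (capacity - a1 * dead_load_effect) / (a2 * live_load_effect * (1 + impact))

# Invented example numbers (moment in kip-ft) purely to show the trend:
codified = rating_factor(capacity=1200, dead_load_effect=300, live_load_effect=250)
field = rating_factor(capacity=1200, dead_load_effect=300, live_load_effect=200)
print(f"codified RF = {codified:.2f}, field-calibrated RF = {field:.2f}")
```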
Abstract:
The goal of this work was to move structural health monitoring (SHM) one step closer to being ready for mainstream use by the Iowa Department of Transportation (DOT) Office of Bridges and Structures. To meet this goal, the objective of this project was to implement a pilot multi-sensor continuous monitoring system on the Iowa Falls Arch Bridge such that autonomous data analysis, storage, and retrieval can be demonstrated. The challenge with this work was to develop the open channels for communication, coordination, and cooperation of various Iowa DOT offices that could make use of the data. In a way, the end product was to be something akin to a control system that would allow for real-time evaluation of the operational condition of a monitored bridge. Development and finalization of general hardware and software components for a bridge SHM system were investigated and completed. This development and finalization was framed around the demonstration installation on the Iowa Falls Arch Bridge. The hardware system focused on using off-the-shelf sensors that could be read in either “fast” or “slow” modes depending on the desired monitoring metric. As hoped, the installed system operated with very few problems. In terms of communications—in part due to the anticipated installation on the I-74 bridge over the Mississippi River—a hardline digital subscriber line (DSL) internet connection and grid power were used. During operation, this system would transmit data to a central server location where the data would be processed and then archived for future retrieval and use. The pilot monitoring system was developed for general performance evaluation purposes (construction, structural, environmental, etc.) such that it could be easily adapted to the Iowa DOT’s bridges and other monitoring needs. The system was developed allowing easy access to near real-time data in a format usable to Iowa DOT engineers.
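The "fast" versus "slow" sensor modes mentioned above can be pictured with a small acquisition loop like the one below. It is a generic sketch under assumed sample rates and a placeholder read_sensor function, not the configuration actually deployed on the Iowa Falls Arch Bridge.

```python
import random
import time

# Assumed illustrative rates: "fast" channels (e.g., strain under traffic)
# are sampled many times per second, "slow" channels (e.g., temperature) rarely.
FAST_PERIOD_S = 0.02   # 50 Hz
SLOW_PERIOD_S = 60.0   # once per minute

def read_sensor(name):
    # Placeholder for an off-the-shelf sensor read; returns a fake value.
    return random.gauss(0.0, 1.0)

def acquire(duration_s=1.0):
    # Collect fast-mode samples continuously and slow-mode samples on schedule,
    # tagging each record so the server side can archive and process them.
    records, next_slow = [], 0.0
    start = time.monotonic()
    while (now := time.monotonic() - start) < duration_s:
        records.append(("strain", now, read_sensor("strain")))
        if now >= next_slow:
            records.append(("temperature", now, read_sensor("temperature")))
            next_slow += SLOW_PERIOD_S
        time.sleep(FAST_PERIOD_S)
    return records

if __name__ == "__main__":
    data = acquire(duration_s=0.2)
    print(len(data), "records collected; these would be sent to the central server")
```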
Abstract:
Following the success of the first round table in 2001, the Swiss Proteomic Society has organized two additional special events at its last two meetings: a proteomic application exercise in 2002 and a round table in 2003. The main objective of these events is to bring together, around a challenging topic in mass spectrometry, two groups of specialists: those who develop and commercialize mass spectrometry equipment and software, and expert MS users in peptidomics and proteomics. The first round table (Geneva, 2001), entitled "Challenges in Mass Spectrometry", was supported by brief oral presentations that stressed critical questions in the field of MS development or applications (Stöcklin and Binz, Proteomics 2002, 2, 825-827). Topics included (i) direct analysis of complex biological samples; (ii) status and perspectives for MS investigations of noncovalent peptide-ligand interactions; (iii) whether it is more appropriate to have complementary instruments rather than one universal piece of equipment; (iv) standardization and improvement of MS signals for protein identification; (v) what the next generation of equipment would look like; and finally (vi) how to keep hardware and software adapted to MS up to date and accessible to all. For the SPS'02 meeting (Lausanne, 2002), a full-session alternative event, the "Proteomic Application Exercise", was proposed. Two different samples were prepared and sent to the participants: 100 µg of snake venom (a complex mixture of peptides and proteins) and 10-20 µg of an almost pure recombinant polypeptide derived from the shrimp Penaeus vannamei carrying a heterogeneous post-translational modification (PTM). Of the 15 participants that received the samples blind, eight returned results, and most of them were asked to present their results at the congress, emphasizing the strategy, manpower, and instrumentation used (Binz et al., Proteomics 2003, 3, 1562-1566). It appeared that for the snake venom extract the quality of the results was not particularly dependent on the strategy used, as all approaches allowed identification of a certain number of protein families. The genus of the snake was identified in most cases, but the species remained ambiguous. Surprisingly, precise identification of the almost pure recombinant polypeptide turned out to be much more complicated than expected, as only one group reported the full sequence. Finally, the SPS'03 meeting reported here included a round table on the difficult and challenging task of "Quantification by Mass Spectrometry", a discussion sustained by four selected oral presentations on the use of stable isotopes, electrospray ionization versus matrix-assisted laser desorption/ionization approaches to quantify peptides and proteins in biological fluids, the handling of differential two-dimensional liquid chromatography tandem mass spectrometry data resulting from high-throughput experiments, and the quantitative analysis of PTMs. During these three events at SPS meetings, the impressive quality and quantity of exchanges between the developers and providers of mass spectrometry equipment and software, expert users, and the audience were a key element of their success and have definitively paved the way for future round tables and challenging exercises at SPS meetings.