953 results for General-purpose computing
Abstract:
Despite the variety of available Web services registries especially aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Services Description Language (WSDL). Although WSDL is flexible enough to support common types of Web services, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Notably, WSDL 2.0 descriptions have gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web services descriptions along with the traditional WSDL based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or via the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.
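A minimal sketch of how such a registry could be queried programmatically over its SPARQL endpoint, using only the Python standard library; the endpoint path and the WSDL/RDF vocabulary IRI are assumptions for illustration, not details taken from the abstract:

```python
# Query a hypothetical BioSWR SPARQL endpoint over HTTP (standard library only).
# The endpoint URL and the wsdl-rdf vocabulary IRI are assumptions; consult the
# registry's documentation for the actual values.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://inb.bsc.es/BioSWR/sparql"  # hypothetical endpoint path

query = """
PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>
SELECT ?service WHERE { ?service a wsdl:Service } LIMIT 10
"""

params = urllib.parse.urlencode({"query": query})
req = urllib.request.Request(
    ENDPOINT + "?" + params,
    headers={"Accept": "application/sparql-results+json"},
)
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

# Print the IRI of each registered service returned by the query.
for binding in results["results"]["bindings"]:
    print(binding["service"]["value"])
```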
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on size, power consumption and price as embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. As a test case for the environment, a simulation model of a processor for TCP/IP packet validation was designed and tested.
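Since the thesis's environment is built in SystemC (C++), the following Python fragment is only a conceptual sketch of the transport-triggered idea the TACO architecture relies on: the sole instruction type is a data move between function-unit ports, and an operation fires as a side effect of writing a trigger port. The unit and port names are invented for illustration.

```python
# Conceptual illustration (in Python, not SystemC, and not the TACO code) of
# transport-triggered execution: the only instruction is a move between
# function-unit ports, and writing a trigger port starts the operation.
class AddUnit:
    """Function unit with an operand register and a trigger port."""
    def __init__(self):
        self.operand = 0
        self.result = 0

    def write(self, port, value):
        if port == "operand":
            self.operand = value
        elif port == "trigger":
            self.result = self.operand + value  # trigger port fires the add

adder = AddUnit()
# A "program" is just a sequence of moves: (value, destination port).
program = [(2, "operand"), (3, "trigger")]
for value, port in program:
    adder.write(port, value)

print(adder.result)  # 5: the add happened as a side effect of the moves
```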
Abstract:
This study compares the impact of quality management tools on the performance of organisations utilising the ISO 9001:2000 standard as the basis for a quality-management system and those utilising the EFQM model for this purpose. A survey is conducted among 107 experienced and independent quality-management assessors. The study finds that organisations with quality-management systems based on the ISO 9001:2000 standard tend to use general-purpose qualitative tools, and that these have a relatively positive impact on their general performance. In contrast, organisations adopting the EFQM model tend to use more specialised quantitative tools, which produce significant improvements in specific aspects of their performance. The findings of the study will enable organisations to choose the most effective quality-improvement tools for their particular quality strategy.
Abstract:
Objective: To develop procedures to ensure the consistency of printing quality of digital images, by means of hardcopy quantitative analysis based on a standard image. Materials and Methods: Characteristics of mammography DI-ML and general-purpose DI-HL films were studied through the QC-Test, utilizing different processing techniques in a FujiFilm® DryPix 4000 printer. Software was developed for sensitometric evaluation, generating a digital image that includes a gray scale and a bar pattern to evaluate contrast and spatial resolution. Results: Mammography films showed a maximum optical density of 4.11 and general-purpose films, 3.22. The digital image was developed with a 33-step wedge scale and a high-contrast bar pattern (1 to 30 lp/cm) for spatial resolution evaluation. Conclusion: Mammographic films presented higher values for maximum optical density and contrast resolution compared with general-purpose films. The digital processing technique used could only change the image pixel matrix values and did not affect the printing standard. The proposed standard digital image allows greater control of the relationship between pixel values and the optical density obtained in the quality analysis of films and printing systems.
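A minimal sketch of generating such a test image in Python with NumPy, assuming an 8-bit grayscale target; the dimensions are illustrative, and a single-frequency bar band stands in for the 1 to 30 lp/cm sweep described above:

```python
# Build a simple sensitometric test image: a 33-step gray wedge stacked on a
# high-contrast bar pattern. Sizes and pixel values are illustrative; writing
# the array out for printing (e.g. via an imaging library) is omitted.
import numpy as np

H, W = 256, 1024
steps = 33
wedge = np.repeat(np.linspace(0, 255, steps).astype(np.uint8), W // steps)
wedge_band = np.tile(wedge, (H, 1))              # 33-step gray wedge band

period = 8                                       # pixels per line pair
x = np.arange(wedge_band.shape[1])
bars = np.where((x // (period // 2)) % 2 == 0, 0, 255).astype(np.uint8)
bar_band = np.tile(bars, (H, 1))                 # constant-frequency bars

test_image = np.vstack([wedge_band, bar_band])
print(test_image.shape, test_image.dtype)        # (512, 1023) uint8
```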
Abstract:
A physical model for the simulation of x-ray emission spectra from samples irradiated with kilovolt electron beams is proposed. Inner shell ionization by electron impact is described by means of total cross sections evaluated from an optical-data model. A double differential cross section is proposed for bremsstrahlung emission, which reproduces the radiative stopping powers derived from the partial wave calculations of Kissel, Quarles and Pratt [At. Data Nucl. Data Tables 28, 381 (1983)]. These ionization and radiative cross sections have been introduced into a general-purpose Monte Carlo code, which performs simulation of coupled electron and photon transport for arbitrary materials. To improve the efficiency of the simulation, interaction forcing, a variance reduction technique, has been applied for both ionizing collisions and radiative events. The reliability of simulated x-ray spectra is analyzed by comparing simulation results with electron probe measurements.
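Interaction forcing, the variance-reduction technique mentioned above, artificially shortens the mean free path of a rare interaction by a factor F and compensates by assigning each forced event a statistical weight of 1/F, so scored quantities stay unbiased. A minimal sketch under the assumption of exponentially distributed free paths (all numerical values are illustrative, not taken from the code described in the abstract):

```python
# Interaction forcing along one electron step: sample free flights with an
# enhanced cross section (factor F) and give each forced event weight 1/F.
import math
import random

F = 20.0            # forcing factor for the rare (radiative/ionizing) event
mu_rare = 0.01      # true inverse mean free path of the rare interaction (1/cm)
path_length = 1.0   # electron step length in the medium (cm)

def forced_events(rng=random):
    """Sample the positions and weights of forced rare events on one step."""
    events = []
    s = 0.0
    while True:
        # Exponential free flight with the artificially enhanced cross section;
        # 1 - random() avoids log(0).
        s += -math.log(1.0 - rng.random()) / (F * mu_rare)
        if s > path_length:
            return events
        events.append((s, 1.0 / F))   # (position along step, statistical weight)

events = forced_events()
print(len(events), "forced events; weight per event =", 1.0 / F)
```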
Abstract:
We present a general algorithm for the simulation of x-ray spectra emitted from targets of arbitrary composition bombarded with kilovolt electron beams. Electron and photon transport is simulated by means of the general-purpose Monte Carlo code PENELOPE, using the standard, detailed simulation scheme. Bremsstrahlung emission is described by using a recently proposed algorithm, in which the energy of emitted photons is sampled from numerical cross-section tables, while the angular distribution of the photons is represented by an analytical expression with parameters determined by fitting benchmark shape functions obtained from partial-wave calculations. Ionization of K and L shells by electron impact is accounted for by means of ionization cross sections calculated from the distorted-wave Born approximation. The relaxation of the excited atoms following the ionization of an inner shell, which proceeds through emission of characteristic x rays and Auger electrons, is simulated until all vacancies have migrated to M and outer shells. For comparison, measurements of x-ray emission spectra generated by 20 keV electrons impinging normally on multiple bulk targets of pure elements, which span the periodic table, have been performed using an electron microprobe. Simulation results are shown to be in close agreement with these measurements.
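The abstract notes that photon energies are sampled from numerical cross-section tables. A generic sketch of that idea, inverse-transform sampling on a tabulated grid, is shown below; the table values are invented for illustration, and the within-bin interpolation is cruder than what a production code would use:

```python
# Inverse-transform sampling of photon energy from a tabulated differential
# cross section. The grid and values below are made up, not PENELOPE data.
import bisect
import random

energies = [1.0, 5.0, 10.0, 15.0, 20.0]      # photon energy grid (keV)
dcs      = [0.50, 0.25, 0.12, 0.08, 0.05]    # differential cross section (arb.)

# Build the cumulative distribution by trapezoidal integration over the grid.
cdf = [0.0]
for i in range(1, len(energies)):
    cdf.append(cdf[-1] + 0.5 * (dcs[i-1] + dcs[i]) * (energies[i] - energies[i-1]))
total = cdf[-1]

def sample_energy(rng=random):
    u = rng.random() * total
    i = min(bisect.bisect_right(cdf, u) - 1, len(energies) - 2)
    # Linear interpolation inside the bracketing interval (crude but adequate
    # for a sketch).
    frac = (u - cdf[i]) / (cdf[i+1] - cdf[i])
    return energies[i] + frac * (energies[i+1] - energies[i])

print([round(sample_energy(), 2) for _ in range(5)])
```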
Abstract:
In this work, a general-purpose service request model is developed through which the service bus of the Kuntalaistili (citizen account) system of the City of Lahti's Lahti Fenix project can be used to call the system's database tier or other systems integrated via the service bus. The goal of the work was to streamline the development of system-integration services by designing a service request builder that contains no static references to the classes or other features used by any particular service. The work exploited advanced features of the Java language: reflective programming, generic programming, and reading the method stack of the Java virtual machine. Achievement of the goal was measured using McCabe's cyclomatic complexity and the number of lines of code per method. The work was started in December 2008 and completed in February 2009. The result is a working, easy-to-use service call builder with low cyclomatic complexity.
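The thesis itself uses Java reflection, generics, and method-stack inspection; the fragment below is a Python analogue of the central idea, a request dispatcher with no static references to the target service class, with hypothetical service and method names:

```python
# Reflective dispatch: both the service class and the method are resolved at
# run time, so the builder has no compile-time ties to any particular service.
class CitizenAccountService:
    """Hypothetical target service used for illustration."""
    def get_balance(self, account_id):
        return {"account": account_id, "balance": 0}

REGISTRY = {"CitizenAccountService": CitizenAccountService}

def invoke(service_name, method_name, *args, **kwargs):
    service = REGISTRY[service_name]()       # reflective instantiation
    method = getattr(service, method_name)   # reflective method lookup
    return method(*args, **kwargs)

print(invoke("CitizenAccountService", "get_balance", "FI-123"))
```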
Abstract:
Data traffic caused by mobile advertising client software communicating with the network server can be a pain point for application developers considering advertising-funded application distribution, since the cost of the data transfer might scare users away from the applications. For the thesis project, a simulation environment was built to mimic the real client-server solution, measuring the data transfer over varying types of connections under different usage scenarios. To optimise data transfer, a few general-purpose and XML-specific compressors were tried for compressing the XML data, and a few protocol optimisations were implemented. To optimise cost, cache usage was improved and pre-loading was enhanced to use free connections to load the data. The data traffic structure and the various optimisations were analysed, and it was found that cache usage and pre-loading should be enhanced and that the protocol should be changed, with report aggregation and compression using WBXML or gzip.
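A minimal sketch of the kind of payload-size measurement such a study involves, gzip-compressing a small XML report with the Python standard library (WBXML has no standard-library codec, so only gzip is shown; the XML snippet and the resulting sizes are illustrative):

```python
# Compare the size of a raw XML report against its gzip-compressed form.
import gzip

xml_report = (
    "<reports>" +
    "".join(f'<ad id="{i}" impressions="12" clicks="1"/>' for i in range(50)) +
    "</reports>"
).encode("utf-8")

compressed = gzip.compress(xml_report)
print(f"raw: {len(xml_report)} bytes, gzip: {len(compressed)} bytes, "
      f"ratio: {len(compressed) / len(xml_report):.2f}")
```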
Abstract:
In this work, a machine vision software package exploiting model fitting was developed for industrial robot systems. The software, intended for general-purpose use, was given functions for calibrating the machine vision system, managing the models used in model fitting, and transmitting results to industrial robots. The software also had to be easy enough to use that it can be operated after a short training period. The software was applied to a robotized painting system for wooden window frames. Largely thanks to the delivered machine vision system, the painting system was made automatic, adaptive to the products, and capable of recovering from error situations.
Abstract:
The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. In particular, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine. The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze, in a rather straightforward manner, the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method study. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole, instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and to analyze its performance.
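The point that nominal-point efficiency analysis is insufficient can be made concrete with a small numerical sketch: expected output is an integral of a speed-dependent drive-train efficiency over the site's wind-speed distribution, so two drive trains with equal rated efficiency can yield different annual energy. All curve shapes and numbers below are illustrative assumptions, not values from the dissertation:

```python
# Expected electrical output = integral over the Weibull wind pdf of
# (mechanical power at v) * (drive-train efficiency at that load level).
import math

k, c = 2.0, 7.0                               # Weibull shape and scale (m/s)
v_cut_in, v_rated, v_cut_out = 3.0, 12.0, 25.0
P_rated = 2.0e6                               # rated power (W)

def weibull_pdf(v):
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def mech_power(v):
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    return P_rated * min(1.0, (v / v_rated) ** 3)   # cubic up to rated speed

def efficiency(p_frac):
    # Illustrative drive-train efficiency curve: poorer at partial load.
    return 0.60 + 0.34 * p_frac if p_frac > 0 else 0.0

dv = 0.1
p_avg = sum(
    mech_power(v) * efficiency(mech_power(v) / P_rated) * weibull_pdf(v) * dv
    for v in [i * dv for i in range(1, 400)]
)
print(f"expected output {p_avg/1e3:.0f} kW; annual {p_avg * 8760 / 1e6:.0f} MWh")
```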
Abstract:
This Master's thesis is part of a company's internal product development project. The goal of the project was to develop and bring to market a new modular test instrument. At the customer's request, the instrument can incorporate a dielectric strength tester, an insulation resistance meter and a leakage current meter. In addition, other internal relay and measurement cards, such as a multimeter module, can be added to the new test instrument. The thesis reviews the measurement of various electrical quantities and focuses on the design of a prototype multimeter card for the modular test instrument. In accuracy and speed, the prototype had to match the general-purpose system multimeters offered by the leading instrument manufacturers. The outcome of the work is a prototype digital multimeter card. For practical reasons, the measurement results of the prototype card and their analysis are excluded entirely from this written document.
Abstract:
Multiprocessing is a promising solution to meet the requirements of near-future applications. To get the full benefit from parallel processing, a many-core system needs an efficient, on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput, reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption demanded by today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented. Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with similar performance compared to the latest NoC architectures. The thesis concludes that carefully codesigned elements from different network levels enable considerable power savings for many-core systems.
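As a hedged illustration of congestion-aware adaptive routing (a plain 2D-mesh minimal-adaptive sketch, not the hybrid 3D algorithm the thesis proposes), the function below picks, among the productive directions, the output whose downstream router reports the most free buffer slots:

```python
# Congestion-aware minimal adaptive routing on a 2D mesh. A real router would
# additionally apply turn restrictions or escape virtual channels to stay
# deadlock-free; that machinery is omitted here.
def route(cur, dst, free_slots):
    """cur, dst: (x, y) coordinates; free_slots: direction -> free buffers."""
    x, y = cur
    dx, dy = dst[0] - x, dst[1] - y
    candidates = []
    if dx > 0: candidates.append("E")
    if dx < 0: candidates.append("W")
    if dy > 0: candidates.append("N")
    if dy < 0: candidates.append("S")
    if not candidates:
        return "LOCAL"                      # arrived: eject to the local port
    # Adaptivity: prefer the productive direction with the most free slots.
    return max(candidates, key=lambda d: free_slots.get(d, 0))

print(route((1, 1), (3, 2), {"E": 1, "N": 4}))   # -> "N" (less congested)
```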
Abstract:
The general purpose of the thesis was to describe and explain the particularities of inbound marketing methods and their key advantages. Inbound marketing can be narrowed down to a set of marketing strategies and techniques focused on pulling prospects towards a business and its products on the Internet by producing content that is useful and relevant to those prospects. The main inbound marketing methods and channels were identified as blogging, content publishing, search engine optimization and social media. The best way to utilise these methods is to produce great content covering subjects that interest the target group, which is usually a composition of buyers, existing customers and influencers, such as analysts and the media. The study revealed an increase in Lainaaja.fi traffic and referral traffic sources that was confirmed as statistically significant, while the number of backlinks and SERP placement were clearly positively correlated with the campaign but not statistically significant. The number of new registered users, new loan applicants and deposits did not show correlation with the increased content production. The study concludes that the inbound marketing campaign clearly increased website traffic and plausibly helped achieve better search engine results compared with the control period. The implications are clear: inbound marketing is an activity that every business should consider implementing. But just producing content online is not enough; an equal amount of work should be put into turning visitors into customers. Further studies are recommended on using inbound marketing combined with monitoring of landing pages and conversion optimization for incoming visitors.
Abstract:
In this Bachelor's thesis, an OBD2 (On-Board Diagnostics 2) reader for the diagnostic data of a vehicle's emission control system is implemented on a general-purpose microcontroller. The reader supports the SAE J1850 VPW data transfer protocol. The microcontroller is an Atmel Corporation AVR ATmega328. The goal of the work is to observe the practical problems and challenges encountered when a microcontroller is used to implement a data transfer protocol, and to compare the implemented system with commercial OBD2 readers. The work concludes by noting the microcontroller's performance limitations and the operational uncertainties they introduce. It also finds that a microcontroller is suitable for implementing a data transfer protocol when these limitations are taken into account. Compared with commercial readers, the implemented system based on a general-purpose microcontroller is more expensive and offers fewer functions. However, a microcontroller-based system can be modified and extended as needed, making it possible to add features that commercial systems may lack, such as manufacturer-specific protocols and functions not defined in OBD2. One example of such a function is the set of brake-service commands controlling the electronic parking brake, which is becoming common in vehicles.
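As a hedged illustration of what a software VPW decoder must do, namely classify alternating active and passive pulses by their width, the sketch below uses commonly cited nominal J1850 VPW timings (64 us and 128 us symbols, roughly 200 us start of frame) with simplified receive windows; a real ATmega328 implementation would do this in C with timer captures:

```python
# Decode SAE J1850 VPW symbols from (level, width) pulse pairs, assuming
# nominal timings: active 64 us = 1, active 128 us = 0, passive 64 us = 0,
# passive 128 us = 1, active ~200 us = start of frame. The tolerance windows
# below are simplified compared with the standard.
def decode_vpw(pulses):
    """pulses: list of (is_active, width_us) with alternating bus levels."""
    bits = []
    for is_active, width in pulses:
        if is_active and 163 <= width <= 239:
            bits.clear()                 # start of frame: reset the bit buffer
        elif 34 <= width <= 96:          # nominal 64 us symbol
            bits.append(1 if is_active else 0)
        elif 96 < width <= 163:          # nominal 128 us symbol
            bits.append(0 if is_active else 1)
        # widths outside all windows would be framing errors in a real decoder
    return bits

frame = [(True, 200), (True, 64), (False, 128), (True, 128), (False, 64)]
print(decode_vpw(frame))   # -> [1, 1, 0, 0]
```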
Abstract:
Industrial production of pulp and paper is an intensive consumer of energy, natural resources and chemicals, which results in a big carbon footprint for the final product. At present, companies and industries aspire to calculate their gas emissions into the atmosphere in order to subsequently reduce atmospheric contamination. One approach that allows the carbon burden of pulp and paper manufacturing to be reduced is paper recycling. The general purpose of the present paper is to establish methods for quantifying and minimizing the carbon footprint of paper. The first target of this research is to derive a mathematical relationship between virgin fibre requirements and the amount of recycled paper used in the pulp. A further purpose is to establish a model that clarifies the contribution of recycling and transportation to decreasing carbon dioxide emissions. Sensitivity analysis is used to investigate the robustness of the obtained results. The results of the present study show that increasing the recycling rate does not always minimize the carbon footprint. Additionally, we found that transporting waste paper over distances longer than 5800 km makes no sense, because the use of that paper will only increase carbon dioxide emissions; in that case it is better to forgo recycling altogether. Finally, we designed a model for organizing a new supply chain delivering paper products to a customer. The models were implemented as reusable MATLAB frameworks.
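A minimal sketch of the kind of mass-balance relationship the first research target describes: with a repulping yield below one, each tonne of paper with recycled share r still needs (1 - r) tonnes of virgin fibre, while r/yield tonnes of waste paper must be collected and transported. The yield value is an illustrative assumption, and this is a Python sketch rather than the thesis's MATLAB model:

```python
# Simple fibre mass balance for one tonne of finished paper.
def fibre_balance(paper_tonnes, recycled_share, repulping_yield=0.8):
    """Return (virgin fibre needed, waste paper to collect), in tonnes."""
    virgin = paper_tonnes * (1.0 - recycled_share)
    waste_paper_needed = paper_tonnes * recycled_share / repulping_yield
    return virgin, waste_paper_needed

for r in (0.0, 0.3, 0.6, 0.9):
    v, w = fibre_balance(1.0, r)
    print(f"recycled share {r:.0%}: virgin {v:.2f} t, waste paper in {w:.2f} t")
```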