961 results for General-purpose computing on graphics processing units (GPGPU)


Relevance:

100.00%

Publisher:

Abstract:

The starting point for this profitability study was the wish of Yhtyneet Sahat Oy to determine the profitability of a pellet plant at the Kaukas sawmill and the Luumäki further-processing plant under the current market situation. The work is a techno-economic analysis, i.e. a feasibility study. The pelletizing process is technically simple and does not require high-technology equipment. The industry is quite new worldwide. In Finland the pellet market is still small and undeveloped, but it has grown in recent years; the majority of domestic production is exported. The initial production values obtained in the investment calculation process and the definition of the cost structure form the basis for the actual profitability calculations. From these calculations, the most common financial indicators associated with investments were determined, and the most sensitive variables were examined and discussed with the help of a sensitivity analysis.
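The abstract mentions the standard financial indicators of investment appraisal and a sensitivity analysis. As a minimal, hypothetical sketch of that kind of calculation (none of the figures below come from the study; they are invented placeholders):

```python
# Hypothetical illustration of investment metrics: net present value (NPV)
# and a one-way sensitivity sweep over the discount rate.
# All cash flow figures are invented, not values from the study.

def npv(rate, cashflows):
    """NPV of a cash flow series; cashflows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invented example: 1.0 MEUR investment, 0.25 MEUR annual net cash flow, 6 years.
flows = [-1.0] + [0.25] * 6
base = npv(0.08, flows)

# One-way sensitivity: vary the discount rate and observe how NPV reacts.
sensitivity = {r: round(npv(r, flows), 3) for r in (0.05, 0.08, 0.12)}
```

A real feasibility study would sweep several variables (raw material cost, pellet price, volume) in the same way and compare the resulting NPV, IRR and payback figures.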

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines digital signal processor families from three manufacturers. The goal is to study the technical suitability of the processors for a frequency converter product family under development. The first part of the thesis reviews the structure of a frequency converter and describes the most common control methods for squirrel-cage induction motors. The operation of a digital signal processor and its integrated peripherals is also explained. The emphasis of the thesis is on comparing the technical characteristics of the processors. The compared features include the internal structure of the processors, the properties of their instruction sets, interrupt service latency, and the characteristics of the peripherals. Correct operation of the peripherals, especially the analog-to-digital converter, is important for the motor control software. The processor families included in the thesis were scored on the examined characteristics. As a result of the comparison, the technically most suitable processor family and processor type for the intended purpose are presented. However, the thesis cannot give a general ranking of the studied processors.
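The abstract describes scoring processor families on the examined characteristics. A hypothetical sketch of such a weighted scoring comparison (all family names, feature names, scores, and weights below are invented, not taken from the thesis):

```python
# Invented weighted-scoring sketch: each processor family receives points
# per examined feature; weighted totals are then compared.

def total_score(scores, weights):
    """Weighted sum of feature scores; unweighted features count with weight 1."""
    return sum(scores[f] * weights.get(f, 1.0) for f in scores)

families = {
    "FamilyA": {"core": 4, "isa": 3, "interrupt_latency": 5, "peripherals": 4},
    "FamilyB": {"core": 5, "isa": 4, "interrupt_latency": 3, "peripherals": 3},
}
# E.g. the A/D converter behaviour mattered most for the motor control software,
# so the peripheral score could carry a higher weight.
weights = {"peripherals": 2.0}

ranked = sorted(families, key=lambda f: total_score(families[f], weights), reverse=True)
```

Such a ranking is only valid for the weighted purpose at hand, which matches the abstract's caveat that no general ranking of the processors can be given.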

Relevance:

100.00%

Publisher:

Abstract:

As the development of integrated circuit technology continues to follow Moore’s law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, e.g., size, power consumption and price with embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture (TTA). The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation and an extendable library of automatically configured reusable hardware blocks.
Other topics that are covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and compilation of a SystemC model into synthesizable VHDL with Celoxica Agility SystemC Compiler. A simulation model for a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
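The transport-triggered idea behind TACO can be illustrated with a short, language-agnostic sketch: an operation fires as a side effect of moving data to a functional unit's trigger port. The class and port names below are invented for illustration and do not come from the TACO simulation environment:

```python
# Minimal sketch of a transport-triggered functional unit: plain moves only
# store data; a move to the trigger port fires the unit's operation.
# All names here are invented, not TACO's actual interfaces.

class FunctionalUnit:
    def __init__(self, op):
        self.op = op          # the operation this unit implements
        self.operand = None   # operand port register
        self.result = None    # result port register

    def move_operand(self, value):
        """A data move with no side effect: just latch the operand."""
        self.operand = value

    def move_trigger(self, value):
        """A move to the trigger port fires the operation."""
        self.result = self.op(self.operand, value)

alu = FunctionalUnit(lambda a, b: a + b)
alu.move_operand(40)   # transport 40 to the operand port
alu.move_trigger(2)    # transport 2 to the trigger port -> fires the addition
# alu.result is now 42
```

In a TTA, the instruction stream consists only of such transports, which is what simplifies instruction decoding and exposes the parallelism of the buses.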

Relevance:

100.00%

Publisher:

Abstract:

This study compares the impact of quality-management tools on the performance of organisations utilising the ISO 9001:2000 standard as a basis for a quality-management system and those utilising the EFQM model for this purpose. A survey is conducted among 107 experienced and independent quality-management assessors. The study finds that organisations with quality-management systems based on the ISO 9001:2000 standard tend to use general-purpose qualitative tools, and that these do have a relatively positive impact on their general performance. In contrast, organisations adopting the EFQM model tend to use more specialised quantitative tools, which produce significant improvements in specific aspects of their performance. The findings of the study will enable organisations to choose the most effective quality-improvement tools for their particular quality strategy.

Relevance:

100.00%

Publisher:

Abstract:

Objective: To develop procedures to ensure consistency of printing quality of digital images, by means of hardcopy quantitative analysis based on a standard image. Materials and Methods: Characteristics of mammography DI-ML and general-purpose DI-HL films were studied through the QC-Test, utilizing different processing techniques in a FujiFilm®-DryPix4000 printer. Software was developed for sensitometric evaluation, generating a digital image that includes a gray scale and a bar pattern to evaluate contrast and spatial resolution. Results: Mammography films showed a maximum optical density of 4.11 and general-purpose films, 3.22. The digital image was developed with a 33-step wedge scale and a high-contrast bar pattern (1 to 30 lp/cm) for spatial resolution evaluation. Conclusion: Mammographic films presented higher values for maximum optical density and contrast resolution compared with general-purpose films. The digital processing technique utilized could only change the image pixel matrix values and did not affect the printing standard. The proposed standard digital image allows greater control of the relationship between pixel values and the optical density obtained in the analysis of film and printing-system quality.
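Generating the kind of standard test image the abstract describes, a 33-step gray wedge plus a high-contrast bar pattern, can be sketched in a few lines. The sizes, bit depth and layout below are invented assumptions, not the study's actual image specification:

```python
# Hypothetical sketch of a sensitometric test image: a 33-step gray wedge
# row and an alternating black/white bar-pattern row. 8-bit gray levels
# and the pixel widths are invented assumptions.

def step_wedge(steps=33, width_per_step=8, max_value=255):
    """One row of a gray wedge: `steps` equally spaced gray levels."""
    levels = [round(i * max_value / (steps - 1)) for i in range(steps)]
    return [lv for lv in levels for _ in range(width_per_step)]

def bar_pattern(pairs, width_per_bar=2, max_value=255):
    """Alternating black/white bars: `pairs` line pairs for resolution checks."""
    row = []
    for _ in range(pairs):
        row += [0] * width_per_bar + [max_value] * width_per_bar
    return row

wedge_row = step_wedge()           # 33 distinct gray levels from 0 to 255
bars_row = bar_pattern(pairs=30)   # 30 line pairs, cf. the 1-30 lp/cm pattern
```

Measuring the printed optical density of each wedge step against the known pixel value is what allows the pixel-value-to-density relationship to be controlled.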

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing enables on-demand network access to shared resources (e.g., computation, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing refers both to the applications delivered as services over the Internet and to the hardware and system software in the data centers. Software as a service (SaaS) is part of cloud computing; it is one of the cloud service models. SaaS is software deployed as a hosted service and accessed over the Internet. In SaaS, the consumer uses the provider's applications running in the cloud. SaaS separates the possession and ownership of software from its use. The applications can be accessed from any device through a thin-client interface. A typical SaaS application is used with a web browser and priced monthly. In this thesis, the characteristics of cloud computing and SaaS are presented, and a few implementation platforms for SaaS are discussed. Then, four different SaaS implementation cases and one transformation case are deliberated. The pros and cons of SaaS are studied, based on literature references and an analysis of the SaaS implementations and the transformation case. The analysis is done both from the customer's and the service provider's point of view. In addition, the pros and cons of on-premises software are listed. The purpose of this thesis is to find when SaaS should be utilized and when it is better to choose traditional on-premises software. The qualities of SaaS bring many benefits both for the customer and the provider. A customer should utilize SaaS when it provides cost savings, ease, and scalability over on-premises software. SaaS is reasonable when the customer does not need tailoring but only a simple, general-purpose service, and the application supports the customer's core business.
A provider should utilize SaaS when it offers cost savings, scalability, faster development, and a wider customer base over on-premises software. It is wise to choose SaaS when the application is cheap, aimed at a mass market, needs frequent updating, requires high-performance computing or storage of large amounts of data, or there is some other direct value from the cloud infrastructure.

Relevance:

100.00%

Publisher:

Abstract:

Multiprocessing is a promising solution to meet the requirements of near-future applications. To get the full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput, reduces power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption of today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented that enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented.
Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with similar performance compared to the latest NoC architectures. The thesis concludes that carefully co-designed elements from different network levels enable considerable power savings for many-core systems.

Relevance:

100.00%

Publisher:

Abstract:

Biofilms constitute a physical barrier, protecting the encased bacteria from detergents and sanitizers. The objective of this work was to analyze the effectiveness of sodium hypochlorite (NaOCl) against strains of Staphylococcus aureus isolated from the raw milk of cows with subclinical mastitis and from the milking environment (blowers and milk-conducting tubes). The results revealed that, in the presence of NaOCl (150 ppm), the number of adhered cells of the twelve S. aureus strains was significantly reduced. When the same strains were evaluated in the biofilm condition, different results were obtained. It was found that, after a contact period of five minutes with NaOCl (150 ppm), four strains (two from milk, one from the blowers, and one from a conducting rubber tube) were still able to grow. However, with increasing contact time between the bacteria and the NaOCl (150 ppm), no growth was detected for any of the strains. Concerning the efficiency of NaOCl against the total biofilm biomass formed by each S. aureus strain, a decrease was observed when these strains were in contact with 150 ppm NaOCl for a total period of 10 minutes. This study highlights the importance of a correct sanitation protocol for all milk processing units, which can indeed significantly reduce the presence of microorganisms, leading to a decrease in cow mastitis and milk contamination.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this research was to provide deeper insight into the consequences of electronic human resource management (e-HRM) for line managers. The consequences are viewed as used information system (IS) potentials pertaining to the moderate voluntaristic category of consequences. Due to the need to contextualize the research and draw on line managers' personal experiences, a qualitative approach in a case study setting was selected. The empirical part of the research is loosely based on the literature on HRM and e-HRM, and it was conducted in an industrial private-sector company. In this thesis, method triangulation was utilized: nine semi-structured interviews, conducted in a European setting, formed the main method for data collection and analysis. Other complementary data, such as HRM documentation and statistics on e-HRM system usage, were utilized as background information to help put the results into context. e-HRM has been partly taken into use in the case study company. Line managers tend to use e-HRM when a particular task requires it, but they are not familiar with all the features and possibilities that e-HRM has to offer. The advantages of e-HRM are in line with the company's goals; they include, e.g., transparency of data, process consistency, and having an efficient and easy-to-use tool at one's disposal. However, several unintended, even contradictory, and mainly negative outcomes can also be identified, such as over-complicated processes, insecurity in using the tool, and a lack of co-operation with HR professionals. The use of e-HRM and managers' perceptions of e-HRM affect the way in which managers perceive its consequences for their work. Overall, the consequences of e-HRM are divergent, even contradictory. The managers who considered e-HRM mostly beneficial to their work found that it affects their work by providing information and increasing efficiency.
Those managers who mostly perceived challenges in e-HRM did not think that e-HRM had affected their role or their work. Even though the perceptions of e-HRM and its consequences might reflect the strategies, the distribution of work, and the ways of working in HRM in general, and cannot be generalized as such, this research contributes to the field of e-HRM and provides new perspectives on e-HRM both in the case study organization and in the academic field in general.

Relevance:

100.00%

Publisher:

Abstract:

This work was carried out with the objective of elaborating mathematical models to predict the growth and development of purple nutsedge (Cyperus rotundus) based on days or accumulated thermal units (growing degree days). Two independent trials were developed, the first with a decreasing photoperiod (March to July) and the second with an increasing photoperiod (August to November). In each trial, ten assessments of plant growth and development were performed, quantifying total dry matter and the species' phenology. After that, phenology was fitted to first-degree equations, considering individual trials or their grouping. In the same way, total dry matter was fitted to logistic-type models. In all regressions, four temporal-scale possibilities were assessed for the x axis: accumulated days, or growing degree days (GDD) with base temperatures (Tb) of 10, 12 and 15 °C. For both photoperiod conditions, the growth and development of purple nutsedge were adequately fitted by prediction models based on accumulated thermal units, with Tb = 12 °C standing out. Considering GDD calculated with Tb = 12 °C, purple nutsedge phenology may be predicted by y = 0.113x, while species growth may be predicted by y = 37.678/(1 + (x/509.353)^-7.047).
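The GDD accumulation and the two fitted models quoted in the abstract (Tb = 12 °C) can be reproduced directly as a short sketch. The model coefficients come from the abstract; the daily temperatures used in the example are invented:

```python
# GDD accumulation plus the abstract's fitted models for purple nutsedge
# (Tb = 12 degrees C). Coefficients are from the abstract; the example daily
# mean temperatures are invented.

def gdd(daily_mean_temps, tb=12.0):
    """Accumulated thermal units: sum of (T - Tb) over days with T > Tb."""
    return sum(max(t - tb, 0.0) for t in daily_mean_temps)

def phenology(x):
    """Phenological stage vs. accumulated GDD: y = 0.113 * x."""
    return 0.113 * x

def dry_matter(x):
    """Logistic growth model: y = 37.678 / (1 + (x / 509.353) ** -7.047)."""
    return 37.678 / (1 + (x / 509.353) ** -7.047)

units = gdd([20, 25, 11, 18])   # 8 + 13 + 0 + 6 = 27 accumulated GDD
```

Note that at x = 509.353 GDD the logistic model is at exactly half of its asymptote (37.678/2), which is the usual interpretation of that parameter in a logistic fit.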

Relevance:

100.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes, and thereby also the parallelism, explicit. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
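The dataflow model described above, nodes (actors) that communicate only through FIFO queues and may fire once sufficient input tokens are available, can be sketched as a toy. This is an invented, fully dynamic illustration, not the RVC-CAL semantics or the thesis's quasi-static scheduler:

```python
# Toy dataflow sketch: an actor fires when every input queue holds a token,
# consuming one token per input and producing one output token.
from collections import deque

class Actor:
    def __init__(self, fn, inputs, output):
        self.fn, self.inputs, self.output = fn, inputs, output

    def can_fire(self):
        """Firing rule: one token available on every input queue."""
        return all(q for q in self.inputs)

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]  # consume inputs
        self.output.append(self.fn(*tokens))         # produce output

a, b, out = deque([1, 2, 3]), deque([10, 20, 30]), deque()
adder = Actor(lambda x, y: x + y, [a, b], out)
while adder.can_fire():   # a trivial, fully dynamic schedule
    adder.fire()
# out now holds 11, 22, 33
```

In this toy the firing rule is re-evaluated before every firing; quasi-static scheduling, as described above, tries to pre-compute most of those decisions so that only the genuinely data-dependent ones remain at run time.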

Relevance:

100.00%

Publisher:

Abstract:

Internet of Things (IoT) technologies are developing rapidly, and therefore several standards of interconnection protocols and platforms exist. The existence of heterogeneous protocols and platforms has become a critical challenge for IoT system developers. To mitigate this challenge, a few alliances and organizations have taken the initiative to build frameworks that help to integrate application silos. Some of these frameworks focus only on a specific domain, such as home automation. However, the resource constraints in a large proportion of connected devices make it difficult to build an interoperable system using such frameworks. Therefore, a general-purpose, lightweight interoperability framework that can be used for a range of devices is required. To tackle this heterogeneity, this work introduces an embedded, distributed, and lightweight service bus, the Lightweight IoT Service bus Architecture (LISA), which fits inside the network stack of a small real-time operating system for constrained nodes. LISA provides a uniform application programming interface for an IoT system on a range of devices with variable resource constraints. It hides platform and protocol variations underneath it, thus facilitating interoperability in IoT implementations. LISA is inspired by the Network on Terminal Architecture, a service-centric open architecture by Nokia Research Center. Unlike many other interoperability frameworks, LISA is designed specifically for resource-constrained nodes, and it provides the essential features of a service bus for easy service-oriented architecture implementation. The presented architecture utilizes an intermediate computing layer, a Fog layer, between the small nodes and the cloud, thereby facilitating the federation of constrained nodes into subnetworks.
As a result of a modular and distributed design, the part of LISA running in the Fog layer handles the heavy lifting to assist the lightweight portion of LISA inside the resource constrained nodes. Furthermore, LISA introduces a new networking paradigm, Node Centric Networking, to route messages across protocol boundaries to facilitate interoperability. This thesis presents a concept implementation of the architecture and creates a foundation for future extension towards a comprehensive interoperability framework for IoT.
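The service-bus idea at the heart of the abstract, services registered by name and invoked through one uniform API regardless of the node or protocol behind them, can be sketched minimally. None of the names below come from LISA; this is an invented illustration of the pattern:

```python
# Invented toy service bus: handlers register under a service name and are
# invoked through a single uniform call interface, hiding what sits behind
# the registration (a local function here; a remote node in a real bus).

class ServiceBus:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        """Publish a service under a name."""
        self._services[name] = handler

    def call(self, name, payload):
        """Invoke a service by name with a request payload."""
        if name not in self._services:
            raise KeyError(f"no such service: {name}")
        return self._services[name](payload)

bus = ServiceBus()
bus.register("temperature/read", lambda _req: {"celsius": 21.5})
reply = bus.call("temperature/read", {})
```

A real implementation like LISA additionally has to route such calls across protocol boundaries and split the work between the constrained node and the Fog layer, which is exactly what the architecture above addresses.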

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the utilisation of reclaimed rubber, Whole Tyre Reclaim (WTR), produced from non-biodegradable solid pollutant scrap and used tyres. In this study, an attempt has been made to optimize the substitution of virgin rubber with WTR in both natural and synthetic rubber compounds without seriously compromising the important mechanical properties. WTR is used as a potent source of rubber hydrocarbon and carbon black filler. Apart from natural rubber (NR), butadiene rubber (BR), styrene butadiene rubber (SBR), acrylonitrile butadiene rubber (NBR) and chloroprene rubber (CR) were selected for the study, being the most widely used general-purpose and specialty rubbers. The compatibility problem was addressed by functionalisation of WTR with maleic anhydride and by using the coupling agent Si69. The blends were systematically evaluated with respect to various mechanical properties. Thermogravimetric analyses were also carried out to evaluate the thermal stability of the blends. The mechanical properties of the blends were property- and matrix-dependent. The presence of reinforcing carbon black filler and curatives in the reclaimed rubber improved the mechanical properties, with the exception of some of the elastic properties such as heat build-up, resilience and compression set. When WTR was blended with natural rubber and synthetic rubbers, the properties deteriorated as the concentration of the low-molecular-weight, depolymerised WTR was increased above 46 weight percent. When WTR was blended with crystallizing rubbers such as natural rubber and chloroprene rubber, properties like tensile strength and ultimate elongation decreased in the presence of WTR, whereas in blends of WTR with non-crystallizing rubbers the reinforcement effect was more prominent. The effect of functionalisation and of the coupling agent was studied in three matrices having different levels of polarity (NBR, CR and SBR). The grafting of maleic anhydride onto WTR definitely improved the properties of its blends with NBR, CR and SBR, the effect being most prominent in chloroprene rubber. Improvement in the properties of these blends could also be achieved by using the coupling agent Si69, although there is an apparent plasticizing effect at higher loadings of the coupling agent. The optimum concentration of Si69 was 1 phr for improved properties, though the improvements are not as significant as in the case of maleic anhydride grafting. The thermal stability of the blends was increased by using the silane coupling agent.